Science.gov

Sample records for additive factors method

  1. Supplier Selection Using Weighted Utility Additive Method

    NASA Astrophysics Data System (ADS)

    Karande, Prasad; Chakraborty, Shankar

    2015-10-01

    Supplier selection is a multi-criteria decision-making (MCDM) problem which mainly involves evaluating a number of available suppliers according to a set of common criteria for choosing the best one to meet the organizational needs. For any manufacturing or service organization, selecting the right upstream suppliers is a key success factor that will significantly reduce purchasing cost, increase downstream customer satisfaction and improve competitive ability. Past researchers have attempted to solve the supplier selection problem employing different MCDM techniques which involve active participation of the decision makers in the decision-making process. This paper deals with the application of the weighted utility additive (WUTA) method for solving supplier selection problems. The WUTA method, an extension of the utility additive approach, is based on ordinal regression and consists of building a piece-wise linear additive decision model from a preference structure using linear programming (LP). It adopts the preference disaggregation principle and addresses the decision-making activities through operational models which need implicit preferences in the form of a preorder of reference alternatives or a subset of these alternatives present in the process. The preferential preorder provided by the decision maker is used as a restriction of an LP problem, which has its own objective function, the minimization of the sum of the errors associated with the ranking of each alternative. Based on a given reference ranking of alternatives, one or more additive utility functions are derived. Using these utility functions, the weighted utilities for individual criterion values are combined into an overall weighted utility for a given alternative. It is observed that the WUTA method, having a sound mathematical background, can provide an accurate ranking of the candidate suppliers and choose the best one to fulfill the organizational requirements. Two real-time examples are illustrated to prove
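
    The aggregation step described above can be illustrated with a minimal sketch: per-criterion utilities, assumed to have already been obtained from the fitted piece-wise linear utility functions, are weighted and summed into an overall weighted utility on which the suppliers are ranked. The supplier names, criteria, weights and utility values below are hypothetical.

    ```python
    # Minimal sketch of the WUTA aggregation step: per-criterion utilities are
    # weighted and summed into an overall utility for each supplier, then ranked.
    # The suppliers, criteria, weights and utility values are hypothetical.

    weights = {"cost": 0.4, "quality": 0.35, "delivery": 0.25}

    # Utility of each supplier on each criterion, assumed already obtained from
    # the piece-wise linear utility functions fitted by linear programming.
    utilities = {
        "Supplier A": {"cost": 0.80, "quality": 0.60, "delivery": 0.70},
        "Supplier B": {"cost": 0.55, "quality": 0.90, "delivery": 0.65},
        "Supplier C": {"cost": 0.70, "quality": 0.75, "delivery": 0.50},
    }

    def overall_weighted_utility(u):
        """Combine per-criterion utilities into one overall weighted utility."""
        return sum(weights[c] * u[c] for c in weights)

    ranking = sorted(utilities,
                     key=lambda s: overall_weighted_utility(utilities[s]),
                     reverse=True)
    for s in ranking:
        print(f"{s}: {overall_weighted_utility(utilities[s]):.3f}")
    ```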

  2. Extension of the standard addition method by blank addition.

    PubMed

    Steliopoulos, Panagiotis

    2015-01-01

    Standard addition involves adding varying amounts of the analyte to sample portions of fixed mass or fixed volume and submitting those portions to the sample preparation procedure. After measuring the final extract solutions, the observed signals are linearly regressed on the spiked amounts. The original unknown amount is estimated by the opposite of the abscissa intercept of the fitted straight line [1]. A limitation of this method is that only data points with abscissa values equal to and greater than zero are available so that there is no information on whether linearity holds below the spiking level zero. An approach to overcome this limitation is introduced. • Standard addition is combined with blank addition. • Blank addition means that defined mixtures of blank matrix and sample material are subjected to sample preparation to give final extract solutions. • Equations are presented to estimate the original unknown amount and to calculate the 1-2α confidence interval about this estimate using the combined data set.
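
    A minimal sketch of the classical standard-addition estimate that the extension builds on (not of the blank-addition procedure itself): the signal is regressed on the spiked amount and the unknown is taken as the opposite of the abscissa intercept. The spike levels and signals are made-up illustration values.

    ```python
    # Classical standard-addition estimate (not the blank-addition extension):
    # regress observed signal on spiked amount; the unknown amount is the
    # negative of the abscissa (x) intercept of the fitted line.
    # The spike levels and signals below are made-up illustration data.
    import numpy as np

    spiked = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # added analyte amount
    signal = np.array([2.1, 3.9, 6.2, 8.0, 10.1])     # measured response

    slope, intercept = np.polyfit(spiked, signal, 1)
    x_intercept = -intercept / slope
    estimated_amount = -x_intercept                    # = intercept / slope

    print(f"estimated original amount: {estimated_amount:.2f}")
    ```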

  3. Extension of the standard addition method by blank addition

    PubMed Central

    Steliopoulos, Panagiotis

    2015-01-01

    Standard addition involves adding varying amounts of the analyte to sample portions of fixed mass or fixed volume and submitting those portions to the sample preparation procedure. After measuring the final extract solutions, the observed signals are linearly regressed on the spiked amounts. The original unknown amount is estimated by the opposite of the abscissa intercept of the fitted straight line [1]. A limitation of this method is that only data points with abscissa values equal to and greater than zero are available so that there is no information on whether linearity holds below the spiking level zero. An approach to overcome this limitation is introduced. • Standard addition is combined with blank addition. • Blank addition means that defined mixtures of blank matrix and sample material are subjected to sample preparation to give final extract solutions. • Equations are presented to estimate the original unknown amount and to calculate the 1-2α confidence interval about this estimate using the combined data set. PMID:26844210

  4. Bond additivity corrections for quantum chemistry methods

    SciTech Connect

    C. F. Melius; M. D. Allendorf

    1999-04-01

    In the 1980's, the authors developed a bond-additivity correction procedure for quantum chemical calculations called BAC-MP4, which has proven reliable in calculating the thermochemical properties of molecular species, including radicals as well as stable closed-shell species. New Bond Additivity Correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid DFT/MP2 method, BAC-Hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method only depend on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-Hybrid and BAC-MP4. The BAC-Hybrid method should scale well for large molecules. The BAC-Hybrid method uses the differences between the DFT and MP2 as an indicator of the method's accuracy, while the BAC-G2 method uses its internal methods (G1 and G2MP2) to provide an indicator of its accuracy. Indications of the average error as well as worst cases are provided for each of the BAC methods.
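
    The bond-wise additive idea can be sketched schematically as below: an empirical correction term per bond is summed and added to a raw computed enthalpy. The correction values shown are hypothetical placeholders, not the published BAC-MP4/BAC-G2 parameters.

    ```python
    # Schematic illustration of a bond-additivity correction: an empirical,
    # bond-wise additive term is added to a raw ab initio enthalpy.
    # The correction values below are hypothetical placeholders, not the
    # published BAC-MP4/BAC-G2 parameters.

    bond_corrections_kcal = {("C", "H"): -0.5, ("C", "C"): -1.2, ("C", "O"): -2.0}

    def bac_corrected_enthalpy(raw_enthalpy_kcal, bonds):
        """Apply additive bond-wise corrections to a raw computed enthalpy."""
        correction = sum(bond_corrections_kcal[tuple(sorted(b))] for b in bonds)
        return raw_enthalpy_kcal + correction

    # Ethanol sketched as a bond list (C-C, C-O, five C-H; O-H omitted here).
    bonds = [("C", "C"), ("C", "O")] + [("C", "H")] * 5
    print(bac_corrected_enthalpy(-52.0, bonds))
    ```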

  5. [Patch-testing methods: additional specialised or additional series].

    PubMed

    Cleenewerck, M-B

    2009-01-01

    The tests in the European standard battery must occasionally be supplemented by specialised or additional batteries, particularly where the contact allergy is thought to be of occupational origin. These additional batteries cover all allergens associated with various professional activities (hairdressing, baking, dentistry, printing, etc.) and with different classes of materials and chemical products (glue, plastic, rubber...). These additional tests may also include personal items used by patients on a daily basis such as cosmetics, shoes, plants, textiles and so on.

  6. Simulation method for evaluating progressive addition lenses.

    PubMed

    Qin, Linling; Qian, Lin; Yu, Jingchi

    2013-06-20

    Since progressive addition lenses (PALs) are currently state-of-the-art in multifocal correction for presbyopia, it is important to study the methods for evaluating PALs. A nonoptical simulation method used to accurately characterize PALs during the design and optimization process is proposed in this paper. It involves the direct calculation of each surface of the lens according to the lens heights of front and rear surfaces. The validity of this simulation method for the evaluation of PALs is verified by the good agreement with Rotlex method. In particular, the simulation with a "correction action" included into the design process is potentially a useful method with advantages of time-saving, convenience, and accuracy. Based on the eye-plus-lens model, which is established through an accurate ray tracing calculation along the gaze direction, the method can find an excellent application in actually evaluating the wearer performance for optimal design of more comfortable, satisfactory, and personalized PALs. PMID:23842170
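
    As a rough illustration of this kind of surface evaluation (a sketch, not the authors' simulation method), the mean power and cylinder of a sampled surface height map can be estimated paraxially from the Hessian of the height function. The refractive index, grid and analytic test surface below are assumptions.

    ```python
    # Sketch: paraxial estimate of mean power and cylinder maps from a sampled
    # surface height map z(x, y). Principal curvatures are approximated by the
    # eigenvalues of the Hessian of z (small-slope assumption); the refractive
    # index and the analytic test surface are assumptions for illustration.
    import numpy as np

    n = 1.53                                   # assumed lens material index
    x = np.linspace(-0.03, 0.03, 121)          # metres
    y = np.linspace(-0.03, 0.03, 121)
    X, Y = np.meshgrid(x, y, indexing="ij")
    R = 0.12                                   # 120 mm radius of curvature
    Z = (X**2 + Y**2) / (2 * R)                # paraxial spherical test surface

    dx = x[1] - x[0]
    Zx, Zy = np.gradient(Z, dx, dx)
    Zxx, Zxy = np.gradient(Zx, dx, dx)
    _, Zyy = np.gradient(Zy, dx, dx)

    # Principal curvatures ~ eigenvalues of [[Zxx, Zxy], [Zxy, Zyy]]
    H = 0.5 * (Zxx + Zyy)                      # mean curvature (paraxial)
    K = Zxx * Zyy - Zxy**2                     # Gaussian curvature (paraxial)
    disc = np.sqrt(np.maximum(H**2 - K, 0.0))
    k1, k2 = H + disc, H - disc

    mean_power = (n - 1) * (k1 + k2) / 2       # dioptres (1/m)
    cylinder = (n - 1) * np.abs(k1 - k2)

    print(mean_power[60, 60], cylinder[60, 60])  # ~4.4 D and ~0 D at the centre
    ```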

  7. 14 CFR 1203.406 - Additional classification factors.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Additional classification factors. 1203.406... PROGRAM Guides for Original Classification § 1203.406 Additional classification factors. In determining the appropriate classification category, the following additional factors should be considered:...

  8. 14 CFR 1203.406 - Additional classification factors.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Additional classification factors. 1203.406 Section 1203.406 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM Guides for Original Classification § 1203.406 Additional classification factors. In...

  9. 14 CFR 1203.406 - Additional classification factors.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Additional classification factors. 1203.406 Section 1203.406 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM Guides for Original Classification § 1203.406 Additional classification factors. In...

  10. 14 CFR 1203.406 - Additional classification factors.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Additional classification factors. 1203.406 Section 1203.406 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM Guides for Original Classification § 1203.406 Additional classification factors. In...

  11. A kind of optimizing design method of progressive addition lenses

    NASA Astrophysics Data System (ADS)

    Tang, Yunhai; Qian, Lin; Wu, Quanying; Yu, Jingchi; Chen, Hao; Wang, Yuanyuan

    2010-10-01

    Progressive addition lenses are a kind of ophthalmic lens with a freeform surface. The surface curvature of progressive addition lenses varies gradually from a minimum value in the upper, distance-viewing area to a maximum value in the lower, near-viewing area. An optimizing design method for progressive addition lenses is proposed to improve optical quality by modifying the vector heights of the initially designed lens surface. The relationship among mean power, cylinder power and the vector heights of the surface is deduced, and an optimizing factor is obtained. The vector heights of the initially designed surface are used to calculate the plots of mean power and cylinder power based on the principles of differential geometry. The mean power plot is changed by adjusting the optimizing factor. Alternatively, a new mean power plot can be derived by shifting the mean power of one selected region to another and then interpolating and smoothing. A partial differential equation of elliptic type is formulated based on the changed mean power. The equation is solved by an iterative method, yielding the optimized vector heights of the surface. Compared with the original lens, the region near the nasal side of the distance-vision portion in which the astigmatism is less than 0.5 D becomes broader, and the clear regions of the distance-vision and near-vision portions are wider.
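
    A minimal sketch of the final step, under a paraxial assumption: the target mean power fixes the mean curvature, which gives a Poisson-type elliptic equation for the surface height that can be solved iteratively (plain Jacobi here). The grid, target power map and boundary handling are illustrative assumptions, not the authors' scheme.

    ```python
    # Sketch of the elliptic-equation step: in a paraxial approximation the
    # target mean power P(x, y) fixes the mean curvature H = P / (n - 1), giving
    # a Poisson-type equation  laplacian(z) = 2 H  for the surface height z.
    # A plain Jacobi iteration is used; grid size, the target power map and the
    # fixed (zero) boundary values are all illustrative assumptions.
    import numpy as np

    n_idx = 1.53
    N, dx = 101, 6e-4                          # 101 x 101 grid, 0.6 mm spacing
    P_target = np.full((N, N), 4.5)            # target mean power, dioptres
    H_target = P_target / (n_idx - 1.0)

    z = np.zeros((N, N))                       # initial surface, fixed boundary
    rhs = 2.0 * H_target * dx**2

    for _ in range(20000):                     # Jacobi iterations
        z_new = z.copy()
        z_new[1:-1, 1:-1] = 0.25 * (z[2:, 1:-1] + z[:-2, 1:-1] +
                                    z[1:-1, 2:] + z[1:-1, :-2] -
                                    rhs[1:-1, 1:-1])
        diff = np.max(np.abs(z_new - z))
        z = z_new
        if diff < 1e-12:
            break

    # Surface height at the centre (sign set by the chosen orientation convention)
    print(z[N // 2, N // 2])
    ```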

  12. 14 CFR § 1203.406 - Additional classification factors.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 5 2014-01-01 2014-01-01 false Additional classification factors. § 1203.406 Section § 1203.406 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM Guides for Original Classification § 1203.406 Additional classification...

  13. Evaluating Three Different Methods of Determining Addition in Presbyopia

    PubMed Central

    Yazdani, Negareh; Khorasani, Abbas Azimi; Moghadam, Hanieh Mirhajian; Yekta, Abbas Ali; Ostadimoghaddam, Hadi; Shandiz, Javad Heravian

    2016-01-01

    Purpose: To compare three different methods for determining addition in presbyopes. Methods: The study included 81 subjects with presbyopia aged 40-70 years. Reading addition values were measured using three approaches: the amplitude of accommodation (AA), dynamic retinoscopy (DR), and increasing plus lens (IPL). Results: IPL overestimated reading addition relative to the other methods. Mean near additions obtained by AA, DR and IPL were 1.31, 1.68 and 1.77 D, respectively. Our results showed that the IPL method could provide 20/20 vision at near in the majority of presbyopic subjects (63.4%). Conclusion: The results were approximately the same for the three methods and provided comparable final additions; however, mean near additions were higher with increasing plus lens compared with the other two methods. In presbyopic individuals, increasing plus lens is recommended as the least time-consuming method, with a range of ±0.50 diopter at the 40 cm working distance. PMID:27621785

  14. Addition of noise by scatter correction methods in PVI

    SciTech Connect

    Barney, J.S. (Div. of Nuclear Medicine); Harrop, R.; Atkins, M.S. (School of Computing Science)

    1994-08-01

    Effective scatter correction techniques are required to account for errors due to high scatter fraction seen in positron volume imaging (PVI). To be effective, the correction techniques must be accurate and practical, but they also must not add excessively to the statistical noise in the image. The authors have investigated the noise added by three correction methods: a convolution/subtraction method; a method that interpolates the scatter from the events outside the object; and a dual energy window method with and without smoothing of the scatter estimate. The methods were applied to data generated by Monte Carlo simulation to determine their effect on the variance of the corrected projections. The convolution and interpolation methods did not add significantly to the variance. The dual energy window subtraction method without smoothing increased the variance by a factor of more than twelve, but this factor was improved to 1.2 by smoothing the scatter estimate.

  15. Optimal Multicomponent Analysis Using the Generalized Standard Addition Method.

    ERIC Educational Resources Information Center

    Raymond, Margaret; And Others

    1983-01-01

    Describes an experiment on the simultaneous determination of chromium and magnesium by spectophotometry modified to include the Generalized Standard Addition Method computer program, a multivariate calibration method that provides optimal multicomponent analysis in the presence of interference and matrix effects. Provides instructions for…
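
    The multivariate idea behind the Generalized Standard Addition Method can be sketched as follows: standard additions of each component produce response changes at several wavelengths, a sensitivity matrix is fitted by least squares, and the initial concentrations are recovered from the initial response. All numbers are made-up illustration values.

    ```python
    # Sketch of the multivariate idea behind the Generalized Standard Addition
    # Method: additions of each component (rows of dC) produce response changes
    # (rows of dR) at several wavelengths; least squares gives the sensitivity
    # matrix K, which is then used to recover the initial concentrations from
    # the initial response. All numbers are made-up illustration values.
    import numpy as np

    K_true = np.array([[1.2, 0.3],     # component 1 sensitivities (2 wavelengths)
                       [0.4, 0.9]])    # component 2 sensitivities
    c0_true = np.array([0.50, 0.80])   # unknown initial concentrations

    dC = np.array([[0.1, 0.0],         # standard additions of each component
                   [0.0, 0.1],
                   [0.2, 0.1]])
    r0 = c0_true @ K_true              # initial response
    dR = dC @ K_true                   # response changes caused by the additions

    K_est, *_ = np.linalg.lstsq(dC, dR, rcond=None)   # fit sensitivity matrix
    c0_est = r0 @ np.linalg.inv(K_est)                # recover concentrations
    print(c0_est)                                      # ~ [0.50, 0.80]
    ```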

  16. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  17. Additive manufacturing method for SRF components of various geometries

    SciTech Connect

    Rimmer, Robert; Frigola, Pedro E; Murokh, Alex Y

    2015-05-05

    An additive manufacturing method for forming nearly monolithic SRF niobium cavities and end group components of arbitrary shape with features such as optimized wall thickness and integral stiffeners, greatly reducing the cost and technical variability of conventional cavity construction. The additive manufacturing method for forming an SRF cavity, includes atomizing niobium to form a niobium powder, feeding the niobium powder into an electron beam melter under a vacuum, melting the niobium powder under a vacuum in the electron beam melter to form an SRF cavity; and polishing the inside surface of the SRF cavity.

  18. AP-42 ADDITIONS AND REVISIONS - TRANSPORTABILITY FACTORS FOR FUGITIVE DUST

    EPA Science Inventory

    The product is a table of factors, one for each county in the US, reflecting the portion of fugitive dust removed very close to the source via impaction on vegetation and similar mechanisms. Factors were based on land cover in the area (county or grid cell). A draft final product was...

  19. Sugar metabolism, an additional virulence factor in enterobacteria.

    PubMed

    Le Bouguénec, Chantal; Schouler, Catherine

    2011-01-01

    Enterobacteria display a high level of flexibility in their fermentative metabolism. Biotyping assays have thus been developed to discriminate between clinical isolates. Each biotype uses one or more sugars more efficiently than the others. Recent studies show links between sugar metabolism and virulence in enterobacteria. In particular, mechanisms of carbohydrate utilization differ substantially between pathogenic and commensal E. coli strains. We are now starting to gain insight into the importance of this variability in metabolic function. Studies using various animal models of intestinal colonization showed that the presence of the fos and deoK loci, involved in the metabolism of short-chain fructooligosaccharides and deoxyribose, respectively, helps avian and human pathogenic E. coli to outcompete the normal flora and colonize the intestine. Both PTS and non-PTS sugar transporters have been found to modulate virulence of extraintestinal pathogenic E. coli strains. The vpe, GimA, and aec35-37 loci contribute to bacterial virulence in vivo during experimental septicemia and urinary tract infection, meningitis, and colibacillosis, respectively. However, in most cases, the sugars metabolized and the precise role of their utilization in the expression of bacterial virulence are still unknown. The massive development of powerful analytical methods over recent years will allow establishing knowledge of the metabolic basis of bacterial pathogenesis, which appears to be the next challenge in the field of infectious diseases.

  20. Fuzzy Filtering Method for Color Videos Corrupted by Additive Noise

    PubMed Central

    Ponomaryov, Volodymyr I.; Montenegro-Monroy, Hector; Nino-de-Rivera, Luis

    2014-01-01

    A novel method for the denoising of color videos corrupted by additive noise is presented in this paper. The proposed technique consists of three principal filtering steps: spatial, spatiotemporal, and spatial postprocessing. In contrast to other state-of-the-art algorithms, during the first spatial step, the eight gradient values in different directions for pixels located in the vicinity of a central pixel as well as the R, G, and B channel correlation between the analogous pixels in different color bands are taken into account. These gradient values give the information about the level of contamination then the designed fuzzy rules are used to preserve the image features (textures, edges, sharpness, chromatic properties, etc.). In the second step, two neighboring video frames are processed together. Possible local motions between neighboring frames are estimated using block matching procedure in eight directions to perform interframe filtering. In the final step, the edges and smoothed regions in a current frame are distinguished for final postprocessing filtering. Numerous simulation results confirm that this novel 3D fuzzy method performs better than other state-of-the-art techniques in terms of objective criteria (PSNR, MAE, NCD, and SSIM) as well as subjective perception via the human vision system in the different color videos. PMID:24688428
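
    One ingredient of the first spatial step, the eight gradient values around a central pixel of a single colour channel, can be sketched as below; the fuzzy membership functions and rules of the paper are not reproduced, and the tiny test patch is made up.

    ```python
    # Sketch of one ingredient of the spatial step: the eight absolute gradient
    # (difference) values around a central pixel of a single colour channel.
    # The fuzzy rules of the paper are not reproduced; the test patch is made up.
    import numpy as np

    def eight_direction_gradients(channel, i, j):
        """Absolute differences between pixel (i, j) and its 8 neighbours
        (N, NE, E, SE, S, SW, W, NW)."""
        offsets = [(-1, 0), (-1, 1), (0, 1), (1, 1),
                   (1, 0), (1, -1), (0, -1), (-1, -1)]
        centre = float(channel[i, j])
        return np.array([abs(centre - float(channel[i + di, j + dj]))
                         for di, dj in offsets])

    patch = np.array([[10, 12, 11],
                      [13, 50, 12],          # centre pixel likely noisy
                      [11, 12, 10]], dtype=np.uint8)

    grads = eight_direction_gradients(patch, 1, 1)
    print(grads)        # large values in all directions suggest impulse noise
    ```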

  1. Methods for detecting additional genes underlying Alzheimer disease

    SciTech Connect

    Locke, P.A.; Haines, J.L.; Ter-Minassian, M.

    1994-09-01

    Alzheimer's disease (AD) is a complex inherited disorder with proven genetic heterogeneity. To date, genes on chromosomes 21 (APP) and 14 (not yet identified) are associated with early-onset familial AD, while the APOE gene on chromosome 19 is associated with both late-onset familial and sporadic AD and early-onset sporadic AD. Although these genes likely account for the majority of AD, many familial cases cannot be traced to any of these genes. From a set of 127 late-onset multiplex families screened for APOE, 43 (34%) families have at least one affected individual with no APOE-4 allele, suggesting an alternative genetic etiology. Simulation studies indicated that additional loci could be identified through a genomic screen with a 10 cM sieve on a subset of 21 well documented, non-APOE-4 families. Given the uncertainties in the mode of inheritance, reliance on a single analytical method could result in a missed linkage. Therefore, we have developed a strategy of using multiple overlapping yet complementary methods to detect linkage. These include sib-pair analysis and affected-pedigree-member analysis, neither of which makes assumptions about mode of inheritance, and lod score analysis (using two predefined genetic models). In order for a marker to qualify for follow-up, it must fit at least two of three criteria. These are nominal P values of 0.05 or less for the non-parametric methods, and/or a lod score greater than 1.0. Adjacent markers each fulfilling a single criterion also warrant follow-up. To date, we have screened 61 markers on chromosomes 1, 2, 3, 18, 19, 21, and 22. One marker, D2S163, generated a lod score of 1.06 (θ = 0.15) and an APM statistic of 3.68 (P < 0.001). This region is currently being investigated in more detail. Updated results of this region plus additional screening data will be presented.
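
    The follow-up rule described (at least two of three criteria met) can be encoded as a small sketch. Only the D2S163 lod score and APM P value come from the abstract; the sib-pair P value and the second marker are hypothetical.

    ```python
    # Sketch of the marker follow-up rule: a marker qualifies if it meets at
    # least two of three criteria (sib-pair P <= 0.05, affected-pedigree-member
    # P <= 0.05, lod score > 1.0). Sib-pair P values and the second marker
    # below are hypothetical; the D2S163 lod and APM P come from the abstract
    # (P < 0.001 is entered here as 0.001).

    def qualifies(sib_pair_p, apm_p, lod):
        met = [sib_pair_p <= 0.05, apm_p <= 0.05, lod > 1.0]
        return sum(met) >= 2

    markers = {
        "D2S163":  (0.04, 0.001, 1.06),
        "D1S1234": (0.20, 0.30, 0.40),   # hypothetical non-qualifying marker
    }
    for name, (p1, p2, lod) in markers.items():
        print(name, "follow up" if qualifies(p1, p2, lod) else "skip")
    ```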

  2. A new approach to NMR chemical shift additivity parameters using simultaneous linear equation method.

    PubMed

    Shahab, Yosif A; Khalil, Rabah A

    2006-10-01

    A new approach to NMR chemical shift additivity parameters using the simultaneous linear equation method has been introduced. Three general nitrogen-15 NMR chemical shift additivity parameters with physical significance for aliphatic amines in methanol and cyclohexane and their hydrochlorides in methanol have been derived. A characteristic feature of these additivity parameters is that each individual equation can be applied to both open-chain and rigid systems. The factors that influence the (15)N chemical shift of these substances have been determined. A new method for evaluating conformational equilibria at nitrogen in these compounds using the derived additivity parameters has been developed. Conformational analyses of these substances have been worked out. In general, the results indicate that there are four factors affecting the (15)N chemical shift of aliphatic amines: the paramagnetic term (p-character), lone pair-proton interactions, proton-proton interactions, and the symmetry of alkyl substituents and molecular association.
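
    A generic sketch of the simultaneous-linear-equation idea (not the paper's 15N data set): each observed chemical shift is modelled as a base value plus integer counts of substituent increments, and the over-determined system is solved by least squares.

    ```python
    # Generic sketch of deriving additivity parameters from simultaneous linear
    # equations: each observed chemical shift is modelled as a base value plus
    # integer counts of substituent increments, and the over-determined system
    # is solved by least squares. Counts and shifts are hypothetical.
    import numpy as np

    # Columns: [base term, alpha-substituent count, beta-substituent count]
    A = np.array([[1, 1, 0],
                  [1, 2, 0],
                  [1, 2, 1],
                  [1, 3, 0],
                  [1, 3, 2]], dtype=float)
    delta_obs = np.array([26.9, 38.2, 45.1, 47.0, 60.3])   # ppm, illustrative

    params, *_ = np.linalg.lstsq(A, delta_obs, rcond=None)
    base, alpha_inc, beta_inc = params
    print(f"base={base:.2f} ppm, alpha={alpha_inc:.2f}, beta={beta_inc:.2f}")
    ```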

  3. Method for controlling a laser additive process using intrinsic illumination

    NASA Astrophysics Data System (ADS)

    Tait, Robert; Cai, Guoshuang; Azer, Magdi; Chen, Xiaobin; Liu, Yong; Harding, Kevin

    2015-05-01

    One form of additive manufacturing is to use a laser to generate a melt pool from powdered metal that is sprayed from a nozzle. The laser net-shape machining system builds the part a layer at a time by following a predetermined path. However, because the path may need to take many turns, maintaining a constant melt pool may not be easy. A straight section may require one speed and power while a sharp bend would over melt the metal at the same settings. This paper describes a process monitoring method that uses the intrinsic IR radiation from the melt pool along with a process model configured to establish target values for the parameters associated with the manufacture or repair. This model is based upon known properties of the metal being used as well as the properties of the laser beam. An adaptive control technique is then employed to control process parameters of the machining system based upon the real-time weld pool measurement. Since the system uses the heat radiant from the melt pool, other previously deposited metal does not confuse the system as only the melted material is seen by the camera.
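
    The closed-loop idea can be sketched as below: the melt-pool IR measurement is compared with a model-derived target and the laser power is adjusted by a simple proportional-integral rule. The toy plant model, gains, target value and power limits are illustrative assumptions, not the controller described in the paper.

    ```python
    # Sketch of adaptive control of a laser additive process from the intrinsic
    # melt-pool signal: compare the IR measurement with a target and adjust the
    # laser power with a PI rule. Plant model, gains and limits are assumptions.

    target_ir = 1000.0          # target melt-pool intensity (arbitrary units)
    kp, ki = 0.8, 0.1           # assumed PI gains
    power, integral = 500.0, 0.0

    def melt_pool_response(power_w):
        """Toy plant: melt-pool IR signal grows with laser power."""
        return 1.6 * power_w + 50.0

    for step in range(20):
        measured = melt_pool_response(power)
        error = target_ir - measured
        integral += error
        power += kp * error + ki * integral          # adjust process parameter
        power = min(max(power, 0.0), 2000.0)         # clamp to machine limits

    print(f"final power ~ {power:.1f} W, "
          f"melt pool ~ {melt_pool_response(power):.0f}")
    ```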

  4. Time dependent view factor methods

    SciTech Connect

    Kirkpatrick, R.C.

    1998-03-01

    View factors have been used for treating radiation transport between opaque surfaces bounding a transparent medium for several decades. However, in recent years they have been applied to problems involving intense bursts of radiation in enclosed volumes such as in the laser fusion hohlraums. In these problems, several aspects require treatment of time dependence.

  5. Additive Factors Do Not Imply Discrete Processing Stages: A Worked Example Using Models of the Stroop Task

    PubMed Central

    Stafford, Tom; Gurney, Kevin N.

    2011-01-01

    Previously, it has been shown experimentally that the psychophysical law known as Piéron’s Law holds for color intensity and that the size of the effect is additive with that of Stroop condition (Stafford et al., 2011). According to the additive factors method (Donders, 1868–1869/1969; Sternberg, 1998), additivity is assumed to indicate independent and discrete processing stages. We present computational modeling work, using an existing Parallel Distributed Processing model of the Stroop task (Cohen et al., 1990) and a standard model of decision making (Ratcliff, 1978). This demonstrates that additive factors can be successfully accounted for by existing single stage models of the Stroop effect. Consequently, it is not valid to infer either discrete stages or separate loci of effects from additive factors. Further, our modeling work suggests that information binding may be a more important architectural property for producing additive factors than discrete stages. PMID:22102842
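
    The additive-factors logic being tested can be illustrated with a small simulation (a sketch of Sternberg-style additivity, not of the Stroop or diffusion models used in the paper): two factors that shift mean reaction time without interacting yield a near-zero interaction contrast in a 2 x 2 design. All RT parameters are made up.

    ```python
    # Illustration of the additive-factors logic itself (Sternberg), not of the
    # Stroop / diffusion models used in the paper: two factors that shift mean
    # reaction time without interacting give a near-zero interaction contrast
    # in a 2 x 2 design. All RT parameters are made-up illustration values.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_rt(intensity_effect, stroop_effect, n=20000):
        base = 400.0                              # ms, assumed baseline
        noise = rng.normal(0.0, 60.0, n)          # trial-to-trial variability
        return base + intensity_effect + stroop_effect + noise

    conditions = {}                               # 2 x 2 design
    for i_label, i_eff in [("high_intensity", 0.0), ("low_intensity", 30.0)]:
        for s_label, s_eff in [("congruent", 0.0), ("incongruent", 50.0)]:
            conditions[(i_label, s_label)] = simulate_rt(i_eff, s_eff).mean()

    interaction = ((conditions[("low_intensity", "incongruent")]
                    - conditions[("low_intensity", "congruent")])
                   - (conditions[("high_intensity", "incongruent")]
                      - conditions[("high_intensity", "congruent")]))
    print(f"interaction contrast ~ {interaction:.1f} ms (near zero => additive)")
    ```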

  6. Additives and method for controlling clathrate hydrates in fluid systems

    DOEpatents

    Sloan, E.D. Jr.; Christiansen, R.L.; Lederhos, J.P.; Long, J.P.; Panchalingam, V.; Du, Y.; Sum, A.K.W.

    1997-06-17

    Discussed is a process for preventing clathrate hydrate masses from detrimentally impeding the possible flow of a fluid susceptible to clathrate hydrate formation. The process is particularly useful in the natural gas and petroleum production, transportation and processing industry where gas hydrate formation can cause serious problems. Additives preferably contain one or more five-member, six-member and/or seven-member cyclic chemical groupings. Additives include polymers having lactam rings. Additives can also contain polyelectrolytes that are believed to improve conformance of polymer additives through steric hindrance and/or charge repulsion. Also, polymers having an amide on which a C1-C4 group is attached to the nitrogen and/or the carbonyl carbon of the amide may be used alone, or in combination with ring-containing polymers for enhanced effectiveness. Polymers having at least some repeating units representative of polymerizing at least one of an oxazoline, an N-substituted acrylamide and an N-vinyl alkyl amide are preferred.

  7. Additives and method for controlling clathrate hydrates in fluid systems

    DOEpatents

    Sloan, Jr., Earle Dendy; Christiansen, Richard Lee; Lederhos, Joseph P.; Long, Jin Ping; Panchalingam, Vaithilingam; Du, Yahe; Sum, Amadeu Kun Wan

    1997-01-01

    Discussed is a process for preventing clathrate hydrate masses from detrimentally impeding the possible flow of a fluid susceptible to clathrate hydrate formation. The process is particularly useful in the natural gas and petroleum production, transportation and processing industry where gas hydrate formation can cause serious problems. Additives preferably contain one or more five-member, six-member and/or seven-member cyclic chemical groupings. Additives include polymers having lactam rings. Additives can also contain polyelectrolytes that are believed to improve conformance of polymer additives through steric hindrance and/or charge repulsion. Also, polymers having an amide on which a C1-C4 group is attached to the nitrogen and/or the carbonyl carbon of the amide may be used alone, or in combination with ring-containing polymers for enhanced effectiveness. Polymers having at least some repeating units representative of polymerizing at least one of an oxazoline, an N-substituted acrylamide and an N-vinyl alkyl amide are preferred.

  8. System and method for high power diode based additive manufacturing

    DOEpatents

    El-Dasher, Bassem S.; Bayramian, Andrew; Demuth, James A.; Farmer, Joseph C.; Torres, Sharon G.

    2016-04-12

    A system is disclosed for performing an Additive Manufacturing (AM) fabrication process on a powdered material forming a substrate. The system may make use of a diode array for generating an optical signal sufficient to melt a powdered material of the substrate. A mask may be used for preventing a first predetermined portion of the optical signal from reaching the substrate, while allowing a second predetermined portion to reach the substrate. At least one processor may be used for controlling an output of the diode array.

  9. Investigation of an investment casting method combined with additive manufacturing methods for manufacturing lattice structures

    NASA Astrophysics Data System (ADS)

    Kodira, Ganapathy D.

    Cellular metals exhibit combinations of mechanical, thermal and acoustic properties that provide opportunities for various implementations and applications: light weight aerospace and automobile structures, impact and noise absorption, heat dissipation, and heat exchange. Engineered cell topologies enable one to control the mechanical, thermal, and acoustic properties of the gross cell structures. As a possible way to manufacture complex 3D metallic cellular solids for mass production at relatively low cost, the investment casting (IC) method may be used in combination with rapid prototyping (RP) of wax patterns or injection molding. In spite of its potential to mass-produce various 3D cellular metals, the method is known to have significant casting porosity as a consequence of the complex cellular topology, which makes continuous fluid access to the solidification interface difficult. The effects of temperature on the viscosity of the fluids were studied. A comparative cost analysis between AM-IC and additive manufacturing methods is carried out. In order to manufacture 3D cellular metals with various topologies for multi-functional applications, the casting porosity should be resolved. In this study, the relations between casting porosity and the processing conditions of molten metals while interconnecting with complex cellular geometries are investigated. Temperature and pressure conditions of the rapid prototyping -- investment casting (RP-IC) method are reported, and the induced thermal stresses are also studied. The manufactured samples are compared with those made by additive manufacturing methods.

  10. 34 CFR 377.22 - What additional factors does the Secretary consider in making grants?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROJECTS TO INCREASE CLIENT CHOICE PROGRAM How Does the Secretary Make an Award? § 377.22 What additional factors does the Secretary consider in making grants? In addition to the criteria in § 377.21, the... strategies to increase client choice, in order to ensure that a variety of approaches are demonstrated...

  11. Are major behavioral and sociodemographic risk factors for mortality additive or multiplicative in their effects?

    PubMed

    Mehta, Neil; Preston, Samuel

    2016-04-01

    All individuals are subject to multiple risk factors for mortality. In this paper, we consider the nature of interactions between certain major sociodemographic and behavioral risk factors associated with all-cause mortality in the United States. We develop the formal logic pertaining to two forms of interaction between risk factors, additive and multiplicative relations. We then consider the general circumstances in which additive or multiplicative relations might be expected. We argue that expectations about interactions among socio-demographic variables, and their relation to behavioral variables, have been stated in terms of additivity. However, the statistical models typically used to estimate the relation between risk factors and mortality assume that risk factors act multiplicatively. We examine empirically the nature of interactions among five major risk factors associated with all-cause mortality: smoking, obesity, race, sex, and educational attainment. Data were drawn from the cross-sectional NHANES III (1988-1994) and NHANES 1999-2010 surveys, linked to death records through December 31, 2011. Our analytic sample comprised 35,604 respondents and 5369 deaths. We find that obesity is additive with each of the remaining four variables. We speculate that its additivity is a reflection of the fact that obese status is generally achieved later in life. For all pairings of socio-demographic variables, risks are multiplicative. For survival chances, it is much more dangerous to be poorly educated if you are black or if you are male. And it is much riskier to be a male if you are black. These traits, established at birth or during childhood, literally result in deadly combinations. We conclude that the identification of interactions among risk factors can cast valuable light on the nature of the process being studied. It also has public health implications by identifying especially vulnerable groups and by properly identifying the proportion of deaths
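
    The two forms of joint effect can be sketched numerically: risks combined additively on the absolute scale versus multiplicatively on the relative-risk scale. The baseline risk and effect sizes below are hypothetical, not NHANES estimates.

    ```python
    # Sketch of the two forms of joint effect discussed above: combining two
    # risk factors additively on the absolute-risk scale versus
    # multiplicatively on the relative-risk scale. The baseline risk and effect
    # sizes are hypothetical, not estimates from NHANES.

    baseline = 0.010            # mortality risk with neither factor
    excess_a = 0.008            # absolute excess risk from factor A alone
    excess_b = 0.005            # absolute excess risk from factor B alone
    rr_a = (baseline + excess_a) / baseline       # implied relative risks
    rr_b = (baseline + excess_b) / baseline

    additive_joint = baseline + excess_a + excess_b     # risks add
    multiplicative_joint = baseline * rr_a * rr_b       # relative risks multiply

    print(f"additive joint risk:       {additive_joint:.4f}")
    print(f"multiplicative joint risk: {multiplicative_joint:.4f}")
    # The multiplicative model implies a larger joint risk than the additive
    # one whenever both factors raise risk.
    ```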

  12. Synthesizing regression results: a factored likelihood method.

    PubMed

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-06-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported in the regression studies to calculate synthesized standardized slopes. It uses available correlations to estimate missing ones through a series of regressions, allowing us to synthesize correlations among variables as if each included study contained all the same variables. Great accuracy and stability of this method under fixed-effects models were found through Monte Carlo simulation. An example was provided to demonstrate the steps for calculating the synthesized slopes through sweep operators. By rearranging the predictors in the included regression models or omitting a relatively small number of correlations from those models, we can easily apply the factored likelihood method to many situations involving synthesis of linear models. Limitations and other possible methods for synthesizing more complicated models are discussed. Copyright © 2012 John Wiley & Sons, Ltd. PMID:26053653
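
    The core quantity the synthesis works with can be sketched as below: standardized slopes obtained from a correlation matrix via beta = Rxx^-1 rxy. The sweep-operator bookkeeping and the estimation of missing correlations described in the paper are not reproduced; the correlations are made up.

    ```python
    # Sketch of the core quantity: standardized regression slopes computed from
    # a correlation matrix via beta = Rxx^-1 rxy. The sweep-operator steps and
    # the handling of missing correlations are not reproduced; the correlations
    # below are made up.
    import numpy as np

    # Correlations among predictors x1, x2 and between each predictor and y.
    Rxx = np.array([[1.0, 0.3],
                    [0.3, 1.0]])
    rxy = np.array([0.5, 0.4])

    beta = np.linalg.solve(Rxx, rxy)     # standardized regression slopes
    print(beta)                          # slopes for x1 and x2
    ```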

  13. Method for factor analysis of GC/MS data

    DOEpatents

    Van Benthem, Mark H; Kotula, Paul G; Keenan, Michael R

    2012-09-11

    The method of the present invention provides a fast, robust, and automated multivariate statistical analysis of gas chromatography/mass spectroscopy (GC/MS) data sets. The method can involve systematic elimination of undesired, saturated peak masses to yield data that follow a linear, additive model. The cleaned data can then be subjected to a combination of PCA and orthogonal factor rotation followed by refinement with MCR-ALS to yield highly interpretable results.
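
    The PCA step can be sketched on a synthetic GC/MS-style matrix that follows a linear, additive model; the orthogonal factor rotation and MCR-ALS refinement of the patented method are not reproduced.

    ```python
    # Sketch of the PCA step on a GC/MS-style matrix (scans x m/z channels)
    # built from a linear, additive two-component model. The factor rotation
    # and MCR-ALS refinement described in the patent are not reproduced; the
    # synthetic concentration profiles and spectra are made up.
    import numpy as np

    rng = np.random.default_rng(1)
    scans = np.arange(200)
    # Two Gaussian elution profiles (concentration x time)
    C = np.column_stack([np.exp(-0.5 * ((scans - 70) / 12) ** 2),
                         np.exp(-0.5 * ((scans - 120) / 15) ** 2)])
    S = rng.random((2, 50))                      # two made-up mass spectra
    D = C @ S + rng.normal(0.0, 0.01, (200, 50)) # linear additive model + noise

    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    print(np.round(explained[:4], 4))            # two dominant factors expected
    ```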

  14. 34 CFR 648.32 - What additional factors does the Secretary consider?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false What additional factors does the Secretary consider? 648.32 Section 648.32 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION GRADUATE ASSISTANCE IN AREAS OF NATIONAL...

  15. 34 CFR 491.22 - What additional factor does the Secretary consider?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 3 2013-07-01 2013-07-01 false What additional factor does the Secretary consider? 491.22 Section 491.22 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION FOR THE...

  16. 34 CFR 491.22 - What additional factor does the Secretary consider?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 3 2012-07-01 2012-07-01 false What additional factor does the Secretary consider? 491.22 Section 491.22 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION FOR THE...

  17. 34 CFR 491.22 - What additional factor does the Secretary consider?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 3 2014-07-01 2014-07-01 false What additional factor does the Secretary consider? 491.22 Section 491.22 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION FOR THE...

  18. 34 CFR 491.22 - What additional factor does the Secretary consider?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 3 2011-07-01 2011-07-01 false What additional factor does the Secretary consider? 491.22 Section 491.22 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION FOR THE...

  19. 34 CFR 491.22 - What additional factor does the Secretary consider?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false What additional factor does the Secretary consider? 491.22 Section 491.22 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION FOR THE...

  20. 34 CFR 636.22 - What additional factors does the Secretary consider?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false What additional factors does the Secretary consider? 636.22 Section 636.22 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION URBAN COMMUNITY SERVICE PROGRAM How Does...

  1. 34 CFR 636.22 - What additional factors does the Secretary consider?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 3 2014-07-01 2014-07-01 false What additional factors does the Secretary consider? 636.22 Section 636.22 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION URBAN COMMUNITY SERVICE PROGRAM How Does...

  2. 34 CFR 636.22 - What additional factors does the Secretary consider?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 3 2012-07-01 2012-07-01 false What additional factors does the Secretary consider? 636.22 Section 636.22 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION URBAN COMMUNITY SERVICE PROGRAM How Does...

  3. 34 CFR 636.22 - What additional factors does the Secretary consider?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 3 2013-07-01 2013-07-01 false What additional factors does the Secretary consider? 636.22 Section 636.22 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION URBAN COMMUNITY SERVICE PROGRAM How Does...

  4. 34 CFR 636.22 - What additional factors does the Secretary consider?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 3 2011-07-01 2011-07-01 false What additional factors does the Secretary consider? 636.22 Section 636.22 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION URBAN COMMUNITY SERVICE PROGRAM How Does...

  5. 21 CFR 1311.115 - Additional requirements for two-factor authentication.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 9 2013-04-01 2013-04-01 false Additional requirements for two-factor authentication. 1311.115 Section 1311.115 Food and Drugs DRUG ENFORCEMENT ADMINISTRATION, DEPARTMENT OF JUSTICE... criteria of FIPS 140-2 Security Level 1, as incorporated by reference in § 1311.08, for...

  6. 21 CFR 1311.115 - Additional requirements for two-factor authentication.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 9 2012-04-01 2012-04-01 false Additional requirements for two-factor authentication. 1311.115 Section 1311.115 Food and Drugs DRUG ENFORCEMENT ADMINISTRATION, DEPARTMENT OF JUSTICE... criteria of FIPS 140-2 Security Level 1, as incorporated by reference in § 1311.08, for...

  7. 34 CFR 425.22 - What additional factors does the Secretary consider?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false What additional factors does the Secretary consider? 425.22 Section 425.22 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION DEMONSTRATION PROJECTS FOR...

  8. 34 CFR 648.32 - What additional factors does the Secretary consider?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... private institutions of higher education. (Authority: 20 U.S.C. 1135-1135c) ... 34 Education 3 2011-07-01 2011-07-01 false What additional factors does the Secretary consider? 648.32 Section 648.32 Education Regulations of the Offices of the Department of Education...

  9. 34 CFR 472.23 - What additional factor does the Secretary consider?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false What additional factor does the Secretary consider? 472.23 Section 472.23 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION NATIONAL WORKPLACE LITERACY PROGRAM...

  10. 34 CFR 472.23 - What additional factor does the Secretary consider?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 3 2012-07-01 2012-07-01 false What additional factor does the Secretary consider? 472.23 Section 472.23 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION NATIONAL WORKPLACE LITERACY PROGRAM...

  11. 34 CFR 472.23 - What additional factor does the Secretary consider?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 3 2014-07-01 2014-07-01 false What additional factor does the Secretary consider? 472.23 Section 472.23 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION NATIONAL WORKPLACE LITERACY PROGRAM...

  12. 34 CFR 472.23 - What additional factor does the Secretary consider?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 3 2011-07-01 2011-07-01 false What additional factor does the Secretary consider? 472.23 Section 472.23 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION NATIONAL WORKPLACE LITERACY PROGRAM...

  13. 34 CFR 472.23 - What additional factor does the Secretary consider?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 3 2013-07-01 2013-07-01 false What additional factor does the Secretary consider? 472.23 Section 472.23 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION NATIONAL WORKPLACE LITERACY PROGRAM...

  14. Integrating products of Bessel functions with an additional exponential or rational factor

    NASA Astrophysics Data System (ADS)

    Van Deun, Joris; Cools, Ronald

    2008-04-01

    We provide two MATLAB programs to compute integrals of the form ∫_0^∞ x^m e^(-cx) ∏_{i=1}^k J_{ν_i}(a_i x) dx and ∫_0^∞ [x^m/(r+x)] ∏_{i=1}^k J_{ν_i}(a_i x) dx, with J_{ν_i}(x) the Bessel function of the first kind and (real) order ν_i. The parameter m is a real number such that ∑_i ν_i + m > -1 (to assure integrability near zero), r is real and the numbers c and a_i are all strictly positive. The program can deliver accurate error estimates. Program summary: Program title: BESSELINTR, BESSELINTC. Catalogue identifier: AEAH_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAH_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 1601. No. of bytes in distributed program, including test data, etc.: 13 161. Distribution format: tar.gz. Programming language: Matlab (version ⩾6.5), Octave (version ⩾2.1.69). Computer: All supporting Matlab or Octave. Operating system: All supporting Matlab or Octave. RAM: For k Bessel functions our program needs approximately (500+140k) double precision variables. Classification: 4.11. Nature of problem: The problem consists in integrating an arbitrary product of Bessel functions with an additional rational or exponential factor over a semi-infinite interval. Difficulties arise from the irregular oscillatory behaviour and the possible slow decay of the integrand, which prevents truncation at a finite point. Solution method: The interval of integration is split into a finite and infinite part. The integral over the finite part is computed using Gauss-Legendre quadrature. The integrand on the infinite part is approximated using asymptotic expansions and this approximation is integrated exactly with the aid of the upper incomplete gamma function. In the case where a rational factor is present, this factor is first expanded in a Taylor series around infinity. Restrictions: Some (and eventually all
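
    For orientation only, an integral of the first (exponential) type can be checked naively with ordinary quadrature on a truncated interval, which is not the asymptotic-expansion method implemented by BESSELINTR/BESSELINTC; the orders and the values of m, c and a_i are arbitrary choices.

    ```python
    # Naive numerical check of an integral of the exponential type on a
    # truncated interval, using ordinary quadrature rather than the
    # asymptotic-expansion method of the programs. The orders and the values
    # of m, c and a_i are arbitrary illustration choices.
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import jv

    m, c = 0.0, 1.0
    orders, a = [0.0, 1.0], [1.0, 2.0]

    def integrand(x):
        value = x**m * np.exp(-c * x)
        for nu, ai in zip(orders, a):
            value *= jv(nu, ai * x)
        return value

    # The factor e^{-cx} makes truncation at x = 60 safe for c = 1.
    result, abserr = quad(integrand, 0.0, 60.0, limit=400)
    print(result, abserr)
    ```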

  15. Additive Methods for Prediction of Thermochemical Properties. The Laidler Method Revisited. 1. Hydrocarbons

    NASA Astrophysics Data System (ADS)

    Leal, João Paulo

    2006-03-01

    A new parameterization of the Laidler method for estimation of atomization enthalpies and standard enthalpies of formation at 298.15 K for several families of hydrocarbons (alkanes, alkenes, alkynes, polyenes, poly-ynes, alkyl radicals, cycloalkanes, cycloalkenes, benzene derivatives, and polyaromatics) is presented. A total of 200 compounds (164 for liquid phase) are used for the calculation of the parameters. Comparison between the experimental values and those calculated using the group additive scheme led to an average difference of 1.28 kJ·mol⁻¹ for the gas phase enthalpy of formation (excluding the polyaromatic compounds) and of 1.38 kJ·mol⁻¹ for the liquid phase enthalpy of formation. The data base used appears to be essentially error free, but for some compounds (e.g., 2,2,4-trimethylpentane, with the highest deviation among all compounds except the polyaromatic ones) the experimental values might need a reevaluation. An Excel worksheet is provided to simplify the calculation of enthalpies of formation and atomization enthalpies based on the Laidler terms defined in this paper.
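
    The group-additivity bookkeeping can be sketched as below; the term values are hypothetical placeholders, not the Laidler parameters fitted in the paper.

    ```python
    # Sketch of group-additivity bookkeeping of the Laidler type: the enthalpy
    # of formation is estimated as a sum of bond/group terms weighted by their
    # counts in the molecule. The term values are hypothetical placeholders,
    # not the parameters fitted in the paper.

    terms_kj_per_mol = {          # hypothetical contributions
        "C-H (primary)":   -20.0,
        "C-H (secondary)": -19.0,
        "C-C":              -2.0,
    }

    def estimate_enthalpy(group_counts):
        """Sum group contributions weighted by their counts."""
        return sum(terms_kj_per_mol[g] * n for g, n in group_counts.items())

    # n-butane sketched as 6 primary C-H, 4 secondary C-H, 3 C-C bonds.
    print(estimate_enthalpy({"C-H (primary)": 6,
                             "C-H (secondary)": 4,
                             "C-C": 3}))
    ```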

  16. Isolation of an additional member of the fibroblast growth factor receptor family, FGFR-3.

    PubMed Central

    Keegan, K; Johnson, D E; Williams, L T; Hayman, M J

    1991-01-01

    The fibroblast growth factors are a family of polypeptide growth factors involved in a variety of activities including mitogenesis, angiogenesis, and wound healing. Fibroblast growth factor receptors (FGFRs) have previously been identified in chicken, mouse, and human and have been shown to contain an extracellular domain with either two or three immunoglobulin-like domains, a transmembrane domain, and a cytoplasmic tyrosine kinase domain. We have isolated a human cDNA for another tyrosine kinase receptor that is highly homologous to the previously described FGFR. Expression of this receptor cDNA in COS cells directs the expression of a 125-kDa glycoprotein. We demonstrate that this cDNA encodes a biologically active receptor by showing that human acidic and basic fibroblast growth factors activate this receptor as measured by 45Ca2+ efflux assays. These data establish the existence of an additional member of the FGFR family that we have named FGFR-3. Images PMID:1847508

  17. Confirmatory factor analysis of the Penn State Worry Questionnaire: Multiple factors or method effects?

    PubMed

    Brown, Timothy A

    2003-12-01

    The latent structure of the Penn State Worry Questionnaire (PSWQ) was evaluated with confirmatory factor analyses (CFAs) in 1200 outpatients with DSM-IV anxiety and mood disorders. Of particular interest was the comparative fit and interpretability of a two-factor solution (cf. Behaviour Research and Therapy 40 (2002) 313) vs. a one-factor model that specified method effects arising from five reverse-worded items. Consistent with prediction, the superiority of the one-factor model was demonstrated in split-sample CFA replications (ns=600). Multiple-group CFAs indicated that the measurement properties of the PSWQ were invariant in male and female patients. In addition to their direct relevance to the psychometrics of the PSWQ, the results are discussed in regard to methodological considerations for using factor analytic methods in the evaluation of psychological tests.

  18. Methods for Proteomic Analysis of Transcription Factors

    PubMed Central

    Jiang, Daifeng; Jarrett, Harry W.; Haskins, William E.

    2009-01-01

    Investigation of the transcription factor (TF) proteome presents challenges including the large number of low abundance and post-translationally modified proteins involved. Specialized purification and analysis methods have been developed over the last decades which facilitate the study of the TF proteome and these are reviewed here. Generally applicable proteomics methods that have been successfully applied are also discussed. TFs are selectively purified by affinity techniques using the DNA response element (RE) as the basis for highly specific binding, and several agents have been discovered that either enhance binding or diminish non-specific binding. One such affinity method called “trapping” enables purification of TFs bound to nM concentrations and recovery of TF complexes in a highly purified state. The electrophoretic mobility shift assay (EMSA) is the most important assay of TFs because it provides both measures of the affinity and amount of the TF present. Southwestern (SW) blotting and DNA-protein crosslinking (DPC) allow in vitro estimates of DNA-binding-protein mass, while chromatin immunoprecipitation (ChIP) allows confirmation of promoter binding in vivo. Two-dimensional gel electrophoresis methods (2-DE), and 3-DE methods, which combine EMSA with 2-DE, allow further resolution of TFs. The synergy of highly selective purification and analytical strategies has led to an explosion of knowledge about the TF proteome and the proteomes of other DNA- and RNA-binding proteins. PMID:19726046

  19. Bleeding after endoscopic submucosal dissection: Risk factors and preventive methods

    PubMed Central

    Kataoka, Yosuke; Tsuji, Yosuke; Sakaguchi, Yoshiki; Minatsuki, Chihiro; Asada-Hirayama, Itsuko; Niimi, Keiko; Ono, Satoshi; Kodashima, Shinya; Yamamichi, Nobutake; Fujishiro, Mitsuhiro; Koike, Kazuhiko

    2016-01-01

    Endoscopic submucosal dissection (ESD) has become widely accepted as a standard method of treatment for superficial gastrointestinal neoplasms because it enables en bloc resection even for large lesions or fibrotic lesions with minimal invasiveness, and decreases the local recurrence rate. Moreover, specimens resected in an en bloc fashion enable accurate histological assessment. Taking these factors into consideration, ESD seems to be more advantageous than conventional endoscopic mucosal resection (EMR), but the associated risks of perioperative adverse events are higher than in EMR. Bleeding after ESD is the most frequent among these adverse events. Although post-ESD bleeding can be controlled by endoscopic hemostasis in most cases, it may lead to serious conditions including hemorrhagic shock. Even with preventive methods including administration of acid secretion inhibitors and preventive hemostasis, post-ESD bleeding cannot be completely prevented. In addition, high-risk cases for post-ESD bleeding, which include cases with the use of antithrombotic agents or which require large resection, are increasing. Although there have been many reports about associated risk factors and methods of preventing post-ESD bleeding, many issues remain unsolved. Therefore, in this review, we have overviewed risk factors and methods of preventing post-ESD bleeding from previous studies. Endoscopists should have sufficient knowledge of these risk factors and preventive methods when performing ESD. PMID:27468187

  20. Three WRKY transcription factors additively repress abscisic acid and gibberellin signaling in aleurone cells.

    PubMed

    Zhang, Liyuan; Gu, Lingkun; Ringler, Patricia; Smith, Stanley; Rushton, Paul J; Shen, Qingxi J

    2015-07-01

    Members of the WRKY transcription factor superfamily are essential for the regulation of many plant pathways. Functional redundancy due to duplications of WRKY transcription factors, however, complicates genetic analysis by allowing single-mutant plants to maintain wild-type phenotypes. Our analyses indicate that three group I WRKY genes, OsWRKY24, -53, and -70, act in a partially redundant manner. All three showed characteristics of typical WRKY transcription factors: each localized to nuclei and yeast one-hybrid assays indicated that they all bind to W-boxes, including those present in their own promoters. Quantitative real time-PCR (qRT-PCR) analyses indicated that the expression levels of the three WRKY genes varied in the different tissues tested. Particle bombardment-mediated transient expression analyses indicated that all three genes repress the GA and ABA signaling in a dosage-dependent manner. Combination of all three WRKY genes showed additive antagonism of ABA and GA signaling. These results suggest that these WRKY proteins function as negative transcriptional regulators of GA and ABA signaling. However, different combinations of these WRKY genes can lead to varied strengths in suppression of their targets.

  1. Relative Importance and Additive Effects of Maternal and Infant Risk Factors on Childhood Asthma

    PubMed Central

    Rosas-Salazar, Christian; James, Kristina; Escobar, Gabriel; Gebretsadik, Tebeb; Li, Sherian Xu; Carroll, Kecia N.; Walsh, Eileen; Mitchel, Edward; Das, Suman; Kumar, Rajesh; Yu, Chang; Dupont, William D.; Hartert, Tina V.

    2016-01-01

    Background Environmental exposures that occur in utero and during early life may contribute to the development of childhood asthma through alteration of the human microbiome. The objectives of this study were to estimate the cumulative effect and relative importance of environmental exposures on the risk of childhood asthma. Methods We conducted a population-based birth cohort study of mother-child dyads who were born between 1995 and 2003 and were continuously enrolled in the PRIMA (Prevention of RSV: Impact on Morbidity and Asthma) cohort. The individual and cumulative impact of maternal urinary tract infections (UTI) during pregnancy, maternal colonization with group B streptococcus (GBS), mode of delivery, infant antibiotic use, and older siblings at home, on the risk of childhood asthma were estimated using logistic regression. Dose-response effect on childhood asthma risk was assessed for continuous risk factors: number of maternal UTIs during pregnancy, courses of infant antibiotics, and number of older siblings at home. We further assessed and compared the relative importance of these exposures on the asthma risk. In a subgroup of children for whom maternal antibiotic use during pregnancy information was available, the effect of maternal antibiotic use on the risk of childhood asthma was estimated. Results Among 136,098 singleton birth infants, 13.29% developed asthma. In both univariate and adjusted analyses, maternal UTI during pregnancy (odds ratio [OR] 1.2, 95% confidence interval [CI] 1.18, 1.25; adjusted OR [AOR] 1.04, 95%CI 1.02, 1.07 for every additional UTI) and infant antibiotic use (OR 1.21, 95%CI 1.20, 1.22; AOR 1.16, 95%CI 1.15, 1.17 for every additional course) were associated with an increased risk of childhood asthma, while having older siblings at home (OR 0.92, 95%CI 0.91, 0.93; AOR 0.85, 95%CI 0.84, 0.87 for each additional sibling) was associated with a decreased risk of childhood asthma, in a dose-dependent manner. Compared with vaginal

  2. Effect of KCl addition method on the Pt/KL catalyst for the aromatization of hexane

    SciTech Connect

    Dai, Lian-Xin; Sakashita, Haru; Tatsumi, Takashi

    1994-05-01

    The influence of the method for loading platinum precursor and adding KCl, KCl loading content, calcination temperature, KCl addition procedure, various additives, and water washing on the activity and selectivity of Pt/KL catalysts for hexane reforming reaction has been investigated. The catalyst preparation methods involve ion exchange (IE), incipient wetness impregnation (IWI), and coimpregnation with KCl (IWI-KCl). The Pt/KL catalysts prepared by ion exchange with [Pt(NH₃)₄]²⁺

  3. Dirac equation in low dimensions: The factorization method

    NASA Astrophysics Data System (ADS)

    Sánchez-Monroy, J. A.; Quimbay, C. J.

    2014-11-01

    We present a general approach to solve the (1+1)- and (2+1)-dimensional Dirac equations in the presence of static scalar, pseudoscalar and gauge potentials, for the case in which the potentials have the same functional form and thus the factorization method can be applied. We show that the presence of electric potentials in the Dirac equation leads to two Klein-Gordon equations including an energy-dependent potential. We then generalize the factorization method for the case of energy-dependent Hamiltonians. Additionally, the shape invariance is generalized for a specific class of energy-dependent Hamiltonians. We also present a condition for the absence of the Klein paradox (stability of the Dirac sea), showing how Dirac particles in low dimensions can be confined for a wide family of potentials.
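    For readers unfamiliar with the technique being generalized, the following is a minimal reminder of the textbook (energy-independent) factorization scheme with a superpotential W(x); the paper's contribution is extending this structure to energy-dependent Hamiltonians.

```latex
% Textbook factorization sketch (units with \hbar = 2m = 1), shown only as background;
% the paper generalizes this structure to energy-dependent Hamiltonians.
\[
A = \frac{d}{dx} + W(x), \qquad A^{\dagger} = -\frac{d}{dx} + W(x),
\]
\[
H_- = A^{\dagger}A + \epsilon = -\frac{d^{2}}{dx^{2}} + W^{2}(x) - W'(x) + \epsilon, \qquad
H_+ = A A^{\dagger} + \epsilon = -\frac{d^{2}}{dx^{2}} + W^{2}(x) + W'(x) + \epsilon .
\]
% The ground state of H_- satisfies A\psi_0 = 0, i.e.
\[
\psi_0(x) \propto \exp\!\Big(-\!\int^{x} W(s)\,ds\Big), \qquad E_0 = \epsilon,
\]
% and H_- , H_+ are supersymmetric partner Hamiltonians sharing the remaining spectrum.
```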

  4. Factors which Limit the Value of Additional Redundancy in Human Rated Launch Vehicle Systems

    NASA Technical Reports Server (NTRS)

    Anderson, Joel M.; Stott, James E.; Ring, Robert W.; Hatfield, Spencer; Kaltz, Gregory M.

    2008-01-01

    The National Aeronautics and Space Administration (NASA) has embarked on an ambitious program to return humans to the moon and beyond. As NASA moves forward in the development and design of new launch vehicles for future space exploration, it must fully consider the implications that rule-based requirements of redundancy or fault tolerance have on system reliability/risk. These considerations include common cause failure, increased system complexity, combined serial and parallel configurations, and the impact of design features implemented to control premature activation. These factors and others must be considered in trade studies to support design decisions that balance safety, reliability, performance and system complexity to achieve a relatively simple, operable system that provides the safest and most reliable system within the specified performance requirements. This paper describes conditions under which additional functional redundancy can impede improved system reliability. Examples from current NASA programs including the Ares I Upper Stage will be shown.

  5. 40 CFR 80.8 - Sampling methods for gasoline, diesel fuel, fuel additives, and renewable fuels.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of the Federal Register under 5 U.S.C. 552(a) and 1 CFR part 51. To enforce any edition other than... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Sampling methods for gasoline, diesel... Provisions § 80.8 Sampling methods for gasoline, diesel fuel, fuel additives, and renewable fuels....

  6. Parametric and Nonparametric Statistical Methods for Genomic Selection of Traits with Additive and Epistatic Genetic Architectures

    PubMed Central

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2014-01-01

    Parametric and nonparametric methods have been developed for purposes of predicting phenotypes. These methods are based on retrospective analyses of empirical data consisting of genotypic and phenotypic scores. Recent reports have indicated that parametric methods are unable to predict phenotypes of traits with known epistatic genetic architectures. Herein, we review parametric methods including least squares regression, ridge regression, Bayesian ridge regression, least absolute shrinkage and selection operator (LASSO), Bayesian LASSO, best linear unbiased prediction (BLUP), Bayes A, Bayes B, Bayes C, and Bayes Cπ. We also review nonparametric methods including Nadaraya-Watson estimator, reproducing kernel Hilbert space, support vector machine regression, and neural networks. We assess the relative merits of these 14 methods in terms of accuracy and mean squared error (MSE) using simulated genetic architectures consisting of completely additive or two-way epistatic interactions in an F2 population derived from crosses of inbred lines. Each simulated genetic architecture explained either 30% or 70% of the phenotypic variability. The greatest impact on estimates of accuracy and MSE was due to genetic architecture. Parametric methods were unable to predict phenotypic values when the underlying genetic architecture was based entirely on epistasis. Parametric methods were slightly better than nonparametric methods for additive genetic architectures. Distinctions among parametric methods for additive genetic architectures were incremental. Heritability, i.e., proportion of phenotypic variability, had the second greatest impact on estimates of accuracy and MSE. PMID:24727289
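    A toy comparison in the spirit of the review can be sketched as follows; the marker counts, effect sizes, and the use of ridge regression versus an RBF kernel ridge (as a stand-in for an RKHS-type predictor) are illustrative choices, not the authors' simulation design.

```python
# Illustrative only: parametric (ridge) vs. nonparametric (RBF kernel ridge) prediction
# on simulated additive or purely two-way epistatic architectures; all settings are
# arbitrary stand-ins for the designs described in the abstract.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 300, 200
X = rng.integers(0, 3, size=(n, p)).astype(float)        # F2-like genotypes coded 0/1/2

def simulate(architecture):
    if architecture == "additive":
        beta = rng.normal(size=p) * (rng.random(p) < 0.1)    # sparse additive effects
        g = X @ beta
    else:                                                    # two-way epistasis only
        pairs = rng.choice(p, size=(20, 2), replace=False)
        g = sum(rng.normal() * X[:, i] * X[:, j] for i, j in pairs)
    g = (g - g.mean()) / g.std()
    return g + rng.normal(scale=np.sqrt(0.3 / 0.7), size=n)  # genetic variance ~70%

for arch in ("additive", "epistatic"):
    y = simulate(arch)
    for name, est in [("ridge", Ridge(alpha=10.0)),
                      ("kernel ridge (RBF)", KernelRidge(kernel="rbf", alpha=1.0))]:
        r2 = cross_val_score(est, X, y, cv=5, scoring="r2").mean()
        print(f"{arch:9s}  {name:18s}  CV R^2 = {r2:.2f}")
```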

  7. Methods for Estimating Uncertainty in Factor Analytic Solutions

    EPA Science Inventory

    The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...

  8. Testing for Additivity at Select Mixture Groups of Interest Based on Statistical Equivalence Testing Methods

    SciTech Connect

    Stork, LeAnna M.; Gennings, Chris; Carchman, Richard; Carter, Jr., Walter H.; Pounds, Joel G.; Mumtaz, Moiz

    2006-12-01

    Several assumptions, defined and undefined, are used in the toxicity assessment of chemical mixtures. In scientific practice, mixture components in the low-dose region, particularly subthreshold doses, are often assumed to behave additively (i.e., zero interaction) based on heuristic arguments. This assumption has important implications in the practice of risk assessment, but has not been experimentally tested. We have developed methodology to test for additivity in the sense of Berenbaum (Advances in Cancer Research, 1981), based on the statistical equivalence testing literature where the null hypothesis of interaction is rejected for the alternative hypothesis of additivity when data support the claim. The implication of this approach is that conclusions of additivity are made with a false positive rate controlled by the experimenter. The claim of additivity is based on prespecified additivity margins, which are chosen using expert biological judgment such that small deviations from additivity, which are not considered to be biologically important, are not statistically significant. This approach is in contrast to the usual hypothesis-testing framework that assumes additivity in the null hypothesis and rejects when there is significant evidence of interaction. In this scenario, failure to reject may be due to lack of statistical power making the claim of additivity problematic. The proposed method is illustrated in a mixture of five organophosphorus pesticides that were experimentally evaluated alone and at relevant mixing ratios. Motor activity was assessed in adult male rats following acute exposure. Four low-dose mixture groups were evaluated. Evidence of additivity is found in three of the four low-dose mixture groups. The proposed method tests for additivity of the whole mixture and does not take into account subset interactions (e.g., synergistic, antagonistic) that may have occurred and cancelled each other out.
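    The core statistical idea, testing equivalence within prespecified margins rather than testing for interaction, can be illustrated with a generic two one-sided tests (TOST) sketch; the data, the margin delta, and the one-sample layout below are hypothetical and far simpler than the authors' mixture model.

```python
# Generic TOST sketch for "additivity within a margin"; per-animal deviations and the
# additivity margin delta are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
deviation = rng.normal(loc=0.5, scale=4.0, size=12)  # observed minus additivity-predicted response
delta = 3.0                                          # prespecified additivity margin

n = deviation.size
m = deviation.mean()
se = deviation.std(ddof=1) / np.sqrt(n)
p_lower = 1 - stats.t.cdf((m + delta) / se, df=n - 1)   # H0: mean deviation <= -delta
p_upper = stats.t.cdf((m - delta) / se, df=n - 1)       # H0: mean deviation >= +delta
p_tost = max(p_lower, p_upper)       # claim additivity (reject interaction) if p_tost < alpha
print(f"TOST p-value: {p_tost:.3f}")
```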

  9. Dirac equation in low dimensions: The factorization method

    SciTech Connect

    Sánchez-Monroy, J.A.; Quimbay, C.J.

    2014-11-15

    We present a general approach to solve the (1+1) and (2+1)-dimensional Dirac equations in the presence of static scalar, pseudoscalar and gauge potentials, for the case in which the potentials have the same functional form and thus the factorization method can be applied. We show that the presence of electric potentials in the Dirac equation leads to two Klein–Gordon equations including an energy-dependent potential. We then generalize the factorization method for the case of energy-dependent Hamiltonians. Additionally, the shape invariance is generalized for a specific class of energy-dependent Hamiltonians. We also present a condition for the absence of the Klein paradox (stability of the Dirac sea), showing how Dirac particles in low dimensions can be confined for a wide family of potentials. - Highlights: • The low-dimensional Dirac equation in the presence of static potentials is solved. • The factorization method is generalized for energy-dependent Hamiltonians. • The shape invariance is generalized for energy-dependent Hamiltonians. • The stability of the Dirac sea is related to the existence of supersymmetric partner Hamiltonians.

  10. The Hull Method for Selecting the Number of Common Factors

    ERIC Educational Resources Information Center

    Lorenzo-Seva, Urbano; Timmerman, Marieke E.; Kiers, Henk A. L.

    2011-01-01

    A common problem in exploratory factor analysis is how many factors need to be extracted from a particular data set. We propose a new method for selecting the number of major common factors: the Hull method, which aims to find a model with an optimal balance between model fit and number of parameters. We examine the performance of the method in an…

  11. A uniform nonlinearity criterion for rational functions applied to calibration curve and standard addition methods.

    PubMed

    Michałowska-Kaczmarczyk, Anna Maria; Asuero, Agustin G; Martin, Julia; Alonso, Esteban; Jurado, Jose Marcos; Michałowski, Tadeusz

    2014-12-01

    Rational functions of the Padé type are used for the purposes of the calibration curve method (CCM) and the standard addition method (SAM). In this paper, the related functions were applied to results obtained from the analyses of (a) nickel by FAAS, (b) potassium by FAES, and (c) salicylic acid by HPLC-MS/MS. A uniform, integral criterion of nonlinearity of the curves obtained by CCM and SAM is suggested. This uniformity is based on normalization of the approximating functions within a unit area.
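    A minimal sketch of fitting a [1/1] Padé-type rational function to calibration data and computing an area-based nonlinearity index on the unit square is shown below; the data are invented and the index is only one plausible reading of the integral criterion described in the abstract.

```python
# Hypothetical calibration data fitted with a [1/1] Padé-type rational function,
# followed by an area-based nonlinearity index on the unit square; an illustration
# of the normalization idea, not the authors' exact criterion.
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

def pade11(x, b0, b1, c1):
    return (b0 + b1 * x) / (1.0 + c1 * x)

conc = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])        # concentration (arbitrary units)
signal = np.array([0.01, 0.21, 0.40, 0.74, 1.02, 1.25])

popt, _ = curve_fit(pade11, conc, signal, p0=[0.0, 0.2, 0.01])

# Map the fitted curve onto the unit square and integrate its deviation from the
# straight line joining the endpoints; 0 means a perfectly linear calibration.
xa, xb = conc.min(), conc.max()
ya, yb = pade11(xa, *popt), pade11(xb, *popt)
f_norm = lambda u: (pade11(xa + u * (xb - xa), *popt) - ya) / (yb - ya)
nonlinearity, _ = quad(lambda u: abs(f_norm(u) - u), 0.0, 1.0)
print(f"nonlinearity index: {nonlinearity:.4f}")
```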

  12. Additive cytotoxicity of different monoclonal antibody-cobra venom factor conjugates for human neuroblastoma cells.

    PubMed

    Juhl, H; Petrella, E C; Cheung, N K; Bredehorst, R; Vogel, C W

    1997-11-01

    Insufficient numbers of antigen molecules and heterogeneity of antigen expression on tumor cells are major factors limiting the immunotherapeutic potential of the few clinically useful monoclonal antibodies capable of mediating complement cytotoxicity and antibody-dependent cellular cytotoxicity. To overcome this limitation, we converted two non-cytotoxic monoclonal anti-neuroblastoma antibodies, designated 3E7 (IgG2b) and 8H9 (IgG1), and the non-cytotoxic F(ab')2 fragment of the cytotoxic monoclonal anti-GD2 antibody 3F8 (IgG3) into cytotoxic antibody conjugates by covalent attachment of cobra venom factor (CVF), a structural and functional homologue of the activated third component of complement. Competitive binding experiments confirmed the different specificities of the three antibodies. In the presence of human complement, all three antibody-CVF conjugates mediated selective complement-dependent lysis of human neuroblastoma cells. Consistent with the kinetics of the alternative pathway of complement, approximately seven hours of incubation was required to reach maximum cytotoxicity of up to 25% for the 3E7-CVF conjugate, up to 60% for the 8H9-CVF conjugate, and up to 95% for the 3F8 F(ab')2-CVF conjugate. The different extent of maximal cytotoxic activity of the three conjugates was reflected by corresponding differences in the extent of binding of both unconjugated antibodies and the respective conjugates. Any combination of the three antibody-CVF conjugates caused an additive effect in complement-mediated lysis. Using a cocktail of all three conjugates, the extent of complement-mediated killing could be increased up to 100%. These data demonstrate that, by coupling of CVF, the relatively large number of non-cytotoxic monoclonal anti-tumor antibodies of interesting specificity can be used to design cocktails of cytotoxic conjugates and, thereby, to overcome the problem of insufficient and heterogeneous antigen expression on tumor cells for immunotherapy.

  13. Comparison of prosthetic models produced by traditional and additive manufacturing methods

    PubMed Central

    Park, Jin-Young; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Jae-Hong

    2015-01-01

    PURPOSE The purpose of this study was to verify the clinical feasibility of additive manufacturing by comparing the accuracy of four different manufacturing methods for metal coping: the conventional lost wax technique (CLWT); a subtractive method with wax blank milling (WBM); and two additive methods, multi jet modeling (MJM) and micro-stereolithography (Micro-SLA). MATERIALS AND METHODS Thirty study models were created using an acrylic model with the maxillary upper right canine, first premolar, and first molar teeth. Based on the scan files from a non-contact blue light scanner (Identica; Medit Co. Ltd., Seoul, Korea), thirty cores were produced using the WBM, MJM, and Micro-SLA methods, respectively, and another thirty frameworks were produced using the CLWT method. To measure the marginal and internal gap, the silicone replica method was adopted, and the silicone images obtained were evaluated using a digital microscope (KH-7700; Hirox, Tokyo, Japan) at 140X magnification. Analyses were performed using two-way analysis of variance (ANOVA) and the Tukey post hoc test (α=.05). RESULTS The mean marginal gaps and internal gaps showed significant differences according to tooth type (P<.001 and P<.001, respectively) and manufacturing method (P<.037 and P<.001, respectively). In contrast to the WBM and MJM methods, Micro-SLA showed no significant difference from CLWT in mean marginal gap. CONCLUSION The mean values of gaps resulting from the four different manufacturing methods were within a clinically allowable range, and, thus, the clinical use of additive manufacturing methods is acceptable as an alternative to the traditional lost-wax technique and subtractive manufacturing. PMID:26330976
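    The analysis layout (two-way ANOVA of gap measurements by manufacturing method and tooth type) can be reproduced in outline as follows; the gap values are made up and the design is reduced to two replicates per cell for brevity.

```python
# Illustrative layout only: two-way ANOVA of marginal gap by manufacturing method and
# tooth type, with hypothetical gap values (um) and two replicates per cell.
import itertools
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

vals = iter([88, 84, 92, 95, 105, 101, 99, 97,
             76, 80, 81, 85, 93, 90, 87, 89,
             110, 106, 104, 100, 95, 98, 90, 94])
rows = [{"tooth": tooth, "method": method, "gap_um": next(vals)}
        for tooth, method in itertools.product(["canine", "premolar", "molar"],
                                               ["CLWT", "WBM", "MJM", "MicroSLA"])
        for _ in range(2)]
df = pd.DataFrame(rows)

model = smf.ols("gap_um ~ C(method) * C(tooth)", data=df).fit()
print(anova_lm(model, typ=2))          # main effects and the method x tooth interaction
```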

  14. Releasing-addition method for the flame-photometric determination of calcium in thermal waters

    USGS Publications Warehouse

    Rowe, J.J.

    1963-01-01

    Study of the interferences of silica and sulfate in the flame-photometric determination of calcium in thermal waters has led to the development of a method requiring no prior chemical separations. The interference effects of silica, sulfate, potassium, sodium, aluminum, and phosphate are overcome by an addition technique coupled with the use of magnesium as a releasing agent. © 1963.

  15. Comparison of oxytetracycline degradation behavior in pig manure with different antibiotic addition methods.

    PubMed

    Wang, Yan; Chen, Guixiu; Liang, Juanboo; Zou, Yongde; Wen, Xin; Liao, Xindi; Wu, Yinbao

    2015-12-01

    Using manure collected from swine fed a diet containing antibiotics and using antibiotic-free swine manure spiked with antibiotics are the two common methods of studying the degradation behavior of veterinary antibiotics in manure in the environment. However, few studies have directly compared these two antibiotic addition methods. This study used oxytetracycline (OTC) as a model antibiotic to study antibiotic degradation behavior in manure under the above two OTC addition methods. In addition, the role of microorganisms present in the manure on degradation behavior was also examined. The results showed that the degradation half-life of OTC in manure from swine fed OTC (9.04 days) was significantly shorter than that of the manure directly treated with OTC (9.65 days). The concentration of 4-epi-OTC in manure from swine fed OTC peaked earlier than that in manure spiked with OTC, and the degradation rates of 4-epi-OTC and α-apo-OTC in the manure from swine fed OTC were faster, but the peak concentrations were lower, than those in manure spiked with OTC. Data on bacterial diversity and the relative abundance of Bacillus cereus demonstrated that sterilization of the manure before the experiment significantly decreased the OTC degradation rate with both addition methods. The results of the present study demonstrated that the presence of the metabolites (especially 4-epi-OTC) and of microorganisms had a significant influence on OTC degradation.

  16. The Capacity Profile: A Method to Classify Additional Care Needs in Children with Neurodevelopmental Disabilities

    ERIC Educational Resources Information Center

    Meester-Delver, Anke; Beelen, Anita; Hennekam, Raoul; Nollet, Frans; Hadders-Algra, Mijna

    2007-01-01

    The aim of this study was to determine the interrater reliability and stability over time of the Capacity Profile (CAP). The CAP is a standardized method for classifying additional care needs indicated by current impairments in five domains of body functions: physical health, neuromusculoskeletal and movement-related, sensory, mental, and voice…

  17. Analytic method for three-center nuclear attraction integrals: a generalization of the Gegenbauer addition theorem

    SciTech Connect

    Weatherford, C.A.

    1988-01-01

    A completely analytic method for evaluating three-center nuclear-attraction integrals for STOs is presented. The method exploits a separation of the STO into an evenly loaded solid harmonic and a 0s STO. The harmonics are translated to the molecular center of mass in closed finite terms. The 0s STO is translated using the Gegenbauer addition theorem; 1s STOs are translated using a single parametric differentiation of the 0s formula. Explicit formulas for the integrals are presented for arbitrarily located atoms. A numerical example is given to illustrate the method.

  18. 34 CFR 359.32 - What additional factors does the Secretary consider in making a grant under this program?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... EDUCATION DISABILITY AND REHABILITATION RESEARCH: SPECIAL PROJECTS AND DEMONSTRATIONS FOR SPINAL CORD INJURIES How Does the Secretary Make a Grant? § 359.32 What additional factors does the Secretary...

  19. 34 CFR 359.32 - What additional factors does the Secretary consider in making a grant under this program?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... EDUCATION DISABILITY AND REHABILITATION RESEARCH: SPECIAL PROJECTS AND DEMONSTRATIONS FOR SPINAL CORD INJURIES How Does the Secretary Make a Grant? § 359.32 What additional factors does the Secretary...

  20. 34 CFR 359.32 - What additional factors does the Secretary consider in making a grant under this program?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... EDUCATION DISABILITY AND REHABILITATION RESEARCH: SPECIAL PROJECTS AND DEMONSTRATIONS FOR SPINAL CORD INJURIES How Does the Secretary Make a Grant? § 359.32 What additional factors does the Secretary...

  1. 34 CFR 359.32 - What additional factors does the Secretary consider in making a grant under this program?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... EDUCATION DISABILITY AND REHABILITATION RESEARCH: SPECIAL PROJECTS AND DEMONSTRATIONS FOR SPINAL CORD INJURIES How Does the Secretary Make a Grant? § 359.32 What additional factors does the Secretary...

  2. 34 CFR 359.32 - What additional factors does the Secretary consider in making a grant under this program?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... EDUCATION DISABILITY AND REHABILITATION RESEARCH: SPECIAL PROJECTS AND DEMONSTRATIONS FOR SPINAL CORD INJURIES How Does the Secretary Make a Grant? § 359.32 What additional factors does the Secretary...

  3. Aitchbone hanging and ageing period are additive factors influencing pork eating quality.

    PubMed

    Channon, H A; Taverner, M R; D'Souza, D N; Warner, R D

    2014-01-01

    The effects of abattoir, carcase weight (60 or 80 kg HCW), hanging method (Achilles or aitchbone) and ageing period (2 or 7 days post-slaughter) on eating quality attributes of pork were investigated in this 3×2×2×2 factorial study. A total of 144 Large White×Landrace female pigs were slaughtered at one of three abattoirs and sides hung from either the Achilles tendon or the aitchbone. After 24 h chilling, loin (M. longissimus thoracis et lumborum) and topside (M. semimembranosus) muscles were individually vacuum packaged and aged for 2 or 7 days post-slaughter. Consumers (n=852) evaluated eating quality. Neither abattoir nor carcase weight influenced tenderness, flavour or overall liking of pork. Improvements in tenderness, flavour and overall liking were found due to aitchbone hanging (P<0.001) and ageing (P<0.001) for 7 days compared with Achilles-hung carcases and pork aged for 2 days, respectively. This study demonstrated that aitchbone hanging and 7-day ageing can improve eating quality, and that these effects were additive, as the interaction term was not significant. PMID:24013699

  4. Methods of cracking a crude product to produce additional crude products

    DOEpatents

    Mo, Weijian; Roes, Augustinus Wilhelmus Maria; Nair, Vijay

    2009-09-08

    A method for producing a crude product is disclosed. Formation fluid is produced from a subsurface in situ heat treatment process. The formation fluid is separated to produce a liquid stream and a first gas stream. The first gas stream includes olefins. The liquid stream is fractionated to produce one or more crude products. At least one of the crude products has a boiling range distribution from 38 °C to 343 °C as determined by ASTM Method D5307. The crude product having the boiling range distribution from 38 °C to 343 °C is catalytically cracked to produce one or more additional crude products. At least one of the additional crude products is a second gas stream. The second gas stream has a boiling point of at most 38 °C at 0.101 MPa.

  5. I like your GRIN: Design methods for gradient-index progressive addition lenses

    NASA Astrophysics Data System (ADS)

    Fischer, David J.; Moore, Duncan T.

    2002-12-01

    Progressive addition lenses (PALs) are vision correction lenses with a continuous change in power, used to treat the physical condition presbyopia. These lenses are currently fabricated using non-rotationally symmetric surfaces to achieve the focal power transition and aberration control. In this research, we consider the use of Gradient-Index (GRIN) designs for providing both power progression and aberration control. The use of B-Spline curves for GRIN representation is explained. Design methods and simulation results for GRIN PALs are presented. Possible uses for the design methods with other lenses, such as unifocal lenses and axicons, are also discussed.

  6. Well cementing method using an AM/AMPS fluid loss additive blend

    SciTech Connect

    Boncan, V.G.; Gandy, R.

    1986-12-30

    A method is described of cementing a wellbore, comprising the steps of: mixing together a hydraulic cement, water in an amount to produce a pumpable slurry, and a non-retarding fluid loss additive blend. The blend comprises a copolymer of acrylamide and 2-acrylamido-2-methylpropane sulfonate, the sodium salt of naphthalene formaldehyde sulfonate, and polyvinylpyrrolidone polymer; pumping the cement slurry to the desired location in the wellbore; and allowing the cement slurry to harden to a solid mass.

  7. The method of manufacture of nylon dental partially removable prosthesis using additive technologies

    NASA Astrophysics Data System (ADS)

    Kashapov, R. N.; Korobkina, A. I.; Platonov, E. V.; Saleeva, G. T.

    2014-12-01

    The article is devoted to the topic of creating new methods of dental prosthesis fabrication. The aim of this work is to investigate the possibility of using additive technology to create nylon prostheses. As a result of experimental studies, a sample of a nylon partially removable prosthesis was made; the use of 3D printing made it possible to simplify and accelerate the manufacture of high-precision nylon dentures and to reduce its cost.

  8. Hybrid Residual Flexibility/Mass-Additive Method for Structural Dynamic Testing

    NASA Technical Reports Server (NTRS)

    Tinker, M. L.

    2003-01-01

    A large fixture was designed and constructed for modal vibration testing of International Space Station elements. This fixed-base test fixture, which weighs thousands of pounds and is anchored to a massive concrete floor, initially utilized spherical bearings and pendulum mechanisms to simulate Shuttle orbiter boundary constraints for launch of the hardware. Many difficulties were encountered during a checkout test of the common module prototype structure, mainly due to undesirable friction and excessive clearances in the test-article-to-fixture interface bearings. Measured mode shapes and frequencies were not representative of orbiter-constrained modes due to the friction and clearance effects in the bearings. As a result, a major redesign effort for the interface mechanisms was undertaken. The total cost of the fixture design, construction and checkout, and redesign was over $2 million. Because of the problems experienced with fixed-base testing, alternative free-suspension methods were studied, including the residual flexibility and mass-additive approaches. Free-suspension structural dynamics test methods utilize soft elastic bungee cords and overhead frame suspension systems that are less complex and much less expensive than fixed-base systems. The cost of free-suspension fixturing is on the order of tens of thousands of dollars as opposed to millions, for large fixed-base fixturing. In addition, free-suspension test configurations are portable, allowing modal tests to be done at sites without modal test facilities. For example, a mass-additive modal test of the ASTRO-1 Shuttle payload was done at the Kennedy Space Center launch site. In this Technical Memorandum, the mass-additive and residual flexibility test methods are described in detail. A discussion of a hybrid approach that combines the best characteristics of each method follows and is the focus of the study.

  9. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…

  10. Validating a nondestructive optical method for apportioning colored particulate matter into black carbon and additional components

    PubMed Central

    Yan, Beizhan; Kennedy, Daniel; Miller, Rachel L.; Cowin, James P.; Jung, Kyung-hwa; Perzanowski, Matt; Balletta, Marco; Perera, Federica P.; Kinney, Patrick L.; Chillrud, Steven N.

    2011-01-01

    Exposure to black carbon (BC) is associated with a variety of adverse health outcomes. A number of optical methods for estimating BC on Teflon filters have been adopted but most assume all light absorption is due to BC while other sources of colored particulate matter exist. Recently, a four-wavelength-optical reflectance measurement for distinguishing second hand cigarette smoke (SHS) from soot-BC was developed (Brook et al., 2010; Lawless et al., 2004). However, the method has not been validated for either soot-BC or SHS, and little work has been done to look at the methodological issues of the optical reflectance measurements for samples that could have SHS, BC, and other colored particles. We refined this method using a lab-modified integrating sphere with absorption measured continuously from 350 nm to 1000 nm. Furthermore, we characterized the absorption spectrum of additional components of particulate matter (PM) on PM2.5 filters including ammonium sulfate, hematite, goethite, and magnetite. Finally, we validate this method for BC by comparison to other standard methods. Use of synthesized data indicates that it is important to optimize the choice of wavelengths to minimize computational errors as additional components (more than 2) are added to the apportionment model of colored components. We found that substantial errors are introduced when using 4 wavelengths suggested by Lawless et al. to quantify four substances, while an optimized choice of wavelengths can reduce model-derived error from over 10% to less than 2%. For environmental samples, the method was sensitive for estimating airborne levels of BC and SHS, but not mass loadings of iron oxides and sulfate. Duplicate samples collected in NYC show high reproducibility (points consistent with a 1:1 line, R2 = 0.95). BC data measured by this method were consistent with those measured by other optical methods, including Aethalometer and Smoke-stain Reflectometer (SSR); although the SSR loses sensitivity at
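    The apportionment step can be pictured as a small non-negative least-squares unmixing problem; the reference spectra, wavelengths, and measured absorbances below are invented numbers, not the calibration of the integrating-sphere method.

```python
# Sketch of the apportionment idea: treat filter absorbance at several wavelengths as a
# linear mix of reference component spectra and solve by non-negative least squares.
# All spectra and the measurement are made-up values for illustration.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.array([400, 550, 700, 880, 950])    # nm, arbitrary picks
# Rows: hypothetical unit-loading absorbance spectra for soot-BC, SHS, and an iron oxide
components = np.array([
    [1.00, 0.80, 0.65, 0.52, 0.48],   # soot-BC: weak wavelength dependence
    [1.40, 0.60, 0.25, 0.10, 0.07],   # SHS: steep rise toward shorter wavelengths
    [0.90, 0.35, 0.15, 0.08, 0.06],   # hematite-like component
]).T                                   # shape (n_wavelengths, n_components)

measured = np.array([1.9, 1.05, 0.72, 0.52, 0.46])    # hypothetical filter spectrum
loadings, residual = nnls(components, measured)
for name, x in zip(["BC", "SHS", "iron oxide"], loadings):
    print(f"{name:10s} apportioned loading: {x:.2f}")
print(f"fit residual: {residual:.3f}")
```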

  11. Validating a nondestructive optical method for apportioning colored particulate matter into black carbon and additional components

    NASA Astrophysics Data System (ADS)

    Yan, Beizhan; Kennedy, Daniel; Miller, Rachel L.; Cowin, James P.; Jung, Kyung-hwa; Perzanowski, Matt; Balletta, Marco; Perera, Federica P.; Kinney, Patrick L.; Chillrud, Steven N.

    2011-12-01

    Exposure to black carbon (BC) is associated with a variety of adverse health outcomes. A number of optical methods for estimating BC on Teflon filters have been adopted but most assume all light absorption is due to BC while other sources of colored particulate matter exist. Recently, a four-wavelength-optical reflectance measurement for distinguishing second hand cigarette smoke (SHS) from soot-BC was developed (Brook et al., 2010; Lawless et al., 2004). However, the method has not been validated for either soot-BC or SHS, and little work has been done to look at the methodological issues of the optical reflectance measurements for samples that could have SHS, BC, and other colored particles. We refined this method using a lab-modified integrating sphere with absorption measured continuously from 350 nm to 1000 nm. Furthermore, we characterized the absorption spectrum of additional components of particulate matter (PM) on PM 2.5 filters including ammonium sulfate, hematite, goethite, and magnetite. Finally, we validate this method for BC by comparison to other standard methods. Use of synthesized data indicates that it is important to optimize the choice of wavelengths to minimize computational errors as additional components (more than 2) are added to the apportionment model of colored components. We found that substantial errors are introduced when using 4 wavelengths suggested by Lawless et al. to quantify four substances, while an optimized choice of wavelengths can reduce model-derived error from over 10% to less than 2%. For environmental samples, the method was sensitive for estimating airborne levels of BC and SHS, but not mass loadings of iron oxides and sulfate. Duplicate samples collected in NYC show high reproducibility (points consistent with a 1:1 line, R2 = 0.95). BC data measured by this method were consistent with those measured by other optical methods, including Aethalometer and Smoke-stain Reflectometer (SSR); although the SSR loses sensitivity at

  12. Standard addition method for free acid determination in solutions with hydrolyzable ions

    SciTech Connect

    Baumann, E.W.

    1981-01-01

    The free acid content of solutions containing hydrolyzable ions has been determined potentiometrically by a standard addition method. Two increments of acid are added to the sample in a 1 M potassium thiocyanate solution. The sample concentration is calculated by solution of three simultaneous Nernst equations. The method has been demonstrated for solutions containing Al³⁺, Cr³⁺, Fe³⁺, Ni²⁺, Th⁴⁺, or UO₂²⁺ with a metal-to-acid ratio of < 2.5. The method is suitable for determination of 10 µmoles acid in 10 mL total volume. The accuracy is verifiable by reasonable agreement of the Nernst slopes found in the presence and absence of hydrolyzable ions. The relative standard deviation is < 2.5 percent.
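    A numerical sketch of the three-simultaneous-Nernst-equation idea is given below; the potentials, additions, and the neglect of dilution are simplifications for illustration, not the published procedure.

```python
# Minimal sketch of solving three simultaneous Nernst equations for a standard addition
# of acid; volumes and potentials are synthetic, and dilution from the additions is ignored.
import numpy as np
from scipy.optimize import fsolve

V0 = 10.0e-3                         # sample volume, L (10 mL as in the abstract)
added = [0.0, 10e-6, 20e-6]          # mol of acid added before the 1st, 2nd, 3rd reading
E_meas = [-0.177, -0.159, -0.149]    # synthetic electrode potentials, V

def residuals(params):
    E0, slope, log_n0 = params       # formal potential, Nernst slope, log10(mol free acid)
    return [E0 + slope * np.log10((10**log_n0 + dn) / V0) - E
            for dn, E in zip(added, E_meas)]

E0, slope, log_n0 = fsolve(residuals, x0=[0.0, 0.06, -5.0])
print(f"free acid found: {10**log_n0 * 1e6:.1f} umol,  slope: {slope * 1e3:.1f} mV/decade")
```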

  13. Simultaneous kinetic determination of levodopa and carbidopa by H-point standard addition method.

    PubMed

    Safavi, Afsaneh; Tohidi, Maryam

    2007-05-01

    The kinetic H-point standard addition method (HPSAM) was applied to the simultaneous determination of levodopa and carbidopa. The method was based on the difference in the rate of oxidation of these compounds with Cu(II)-neocuproine system and formation of Cu(I)-neocuproine complex at pH 5.5. The absorbance of the Cu(I)-neocuproine complex was monitored at 453 nm. Experimental conditions such as pH, reagent concentrations, ionic strength and temperature were optimized. Simultaneous determination of levodopa and carbidopa was performed in the range of 0.8-4 and 0.2-1.5 microg ml(-1), respectively. The proposed method was applied to the simultaneous determination of levodopa and carbidopa in pharmaceutical samples, and satisfactory results were obtained.
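    The H-point construction itself reduces to intersecting two regression lines of signal (recorded at two reaction times) against added analyte; the synthetic numbers below are chosen only to illustrate the calculation, not taken from the Cu(II)-neocuproine system.

```python
# Bare-bones H-point standard addition calculation with synthetic absorbances.
import numpy as np

added = np.array([0.0, 0.5, 1.0, 1.5, 2.0])            # added levodopa, ug/mL (hypothetical)
A_t1 = np.array([0.210, 0.263, 0.315, 0.368, 0.420])   # signal at reaction time t1
A_t2 = np.array([0.320, 0.428, 0.535, 0.643, 0.750])   # signal at reaction time t2

m1, b1 = np.polyfit(added, A_t1, 1)
m2, b2 = np.polyfit(added, A_t2, 1)
x_H = (b1 - b2) / (m2 - m1)          # abscissa of the intersection (H-point)
print(f"estimated levodopa in the sample: {-x_H:.2f} ug/mL")
```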

  14. [Quantitative determination of morphine in opium powder by addition and correlation method using capillary electrophoresis].

    PubMed

    Sun, Guo-xiang; Miao, Ju-ru; Wang, Yu; Sun, Yu-qing

    2002-01-01

    The morphine in opium powder was quantitatively determined by the addition and correlation method (ACM) using capillary zone electrophoresis, and the average recovery was 100.6%. The relative standard deviation (RSD) of the migration time was not more than 2.4%, the RSD of the relative migration time was not more than 1.1%, and the RSD of the relative area was not more than 0.51%. Meanwhile, a comparison test was performed using the calibration curve method with internal standard correlation. The content of morphine in opium powder determined by ACM was the same as that obtained by the internal-standard-correlated calibration curve method. The study shows that ACM is simple, quick and accurate.

  15. Effect of the chlortetracycline addition method on methane production from the anaerobic digestion of swine wastewater.

    PubMed

    Huang, Lu; Wen, Xin; Wang, Yan; Zou, Yongde; Ma, Baohua; Liao, Xindi; Liang, Juanboo; Wu, Yinbao

    2014-10-01

    Effects of antibiotic residues on methane production in anaerobic digestion are commonly studied using the following two antibiotic addition methods: (1) adding manure from animals that consume a diet containing antibiotics, and (2) adding antibiotic-free animal manure spiked with antibiotics. This study used chlortetracycline (CTC) as a model antibiotic to examine the effects of the antibiotic addition method on methane production in anaerobic digestion under two different swine wastewater concentrations (0.55 and 0.22 mg CTC/g dry manure). The results showed that the CTC degradation rate in the treatment in which manure was directly spiked at 0.55 mg CTC/g (HSPIKE treatment) was lower than in the control and the rest of the treatment groups. Methane production from the HSPIKE treatment was reduced (p<0.05) by 12% during the whole experimental period and 15% during the first 7 days. The treatments had no significant effect on the pH and chemical oxygen demand value of the digesters, and the total nitrogen of the 0.55 mg CTC/kg manure collected from medicated swine was significantly higher than the other values. Therefore, different methane production under different antibiotic addition methods might be explained by the microbial activity and the concentrations of antibiotic intermediate products and metabolites. Because the primary entry route of veterinary antibiotics into an anaerobic digester is by contaminated animal manure, the most appropriate method for studying antibiotic residue effects on methane production may be using manure from animals that are given a particular antibiotic, rather than adding the antibiotic directly to the anaerobic digester.

  16. 21 CFR 1311.115 - Additional requirements for two-factor authentication.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ..., such as a password or response to a challenge question. (2) Something the practitioner is, biometric... modules or one-time-password devices. (c) If one factor is a biometric, the biometric subsystem...

  17. 21 CFR 1311.115 - Additional requirements for two-factor authentication.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., such as a password or response to a challenge question. (2) Something the practitioner is, biometric... modules or one-time-password devices. (c) If one factor is a biometric, the biometric subsystem...

  18. The factorization method and ground state energy bounds

    NASA Astrophysics Data System (ADS)

    Schmutz, M.

    1985-04-01

    We discuss the relationship between the factorization method and the Barnsley bound to the ground state energy. The latter method is extended in such a way that both lower and upper analytic bounds can be obtained.

  19. The Wavelet Element Method. Part 2; Realization and Additional Features in 2D and 3D

    NASA Technical Reports Server (NTRS)

    Canuto, Claudio; Tabacco, Anita; Urban, Karsten

    1998-01-01

    The Wavelet Element Method (WEM) provides a construction of multiresolution systems and biorthogonal wavelets on fairly general domains. These are split into subdomains that are mapped to a single reference hypercube. Tensor products of scaling functions and wavelets defined on the unit interval are used on the reference domain. By introducing appropriate matching conditions across the interelement boundaries, a globally continuous biorthogonal wavelet basis on the general domain is obtained. This construction does not uniquely define the basis functions but rather leaves some freedom for fulfilling additional features. In this paper we detail the general construction principle of the WEM to the 1D, 2D and 3D cases. We address additional features such as symmetry, vanishing moments and minimal support of the wavelet functions in each particular dimension. The construction is illustrated by using biorthogonal spline wavelets on the interval.

  20. Standard addition method for laser ablation ICPMS using a spinning platform.

    PubMed

    Claverie, Fanny; Malherbe, Julien; Bier, Naomi; Molloy, John L; Long, Stephen E

    2013-04-01

    A method has been developed for the fast and easy determination of Pb, Sr, Ba, Ni, Cu, and Zn, which are of geological and environmental interest, in solid samples by laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) using a spinning sample platform. The platform, containing a sample and a standard, is spun during the ablation, allowing the quasi-simultaneous ablation of both materials. The aerosols resulting from the ablation of sample and standard were mixed in the ablation cell allowing quantification of analytes by standard additions. The proportion of standard versus sample of the mixing can be increased by performing the ablation further from the axis of rotation. The ablated masses have been determined using a new strategy based on isotope dilution analysis. This spinning laser ablation method has been applied to the Allende meteorite and four powdered standard reference materials (SRMs) fused in lithium borate glasses: two sediments as well as a soil and a rock material. SRM 612 (Trace Elements in Glass) was also analyzed despite having a matrix slightly different from the glass standard obtained by lithium borate fusion. The deviation from the certified values was found to be less than 15% for most of the mass fractions for all the elements and samples studied, with an average precision of 10%. These results demonstrate the validity of the proposed method for the direct and fast analysis of solid samples of different matrixes by standard additions, using a single standard sample.

  1. Standard addition method for laser ablation ICPMS using a spinning platform.

    PubMed

    Claverie, Fanny; Malherbe, Julien; Bier, Naomi; Molloy, John L; Long, Stephen E

    2013-04-01

    A method has been developed for the fast and easy determination of Pb, Sr, Ba, Ni, Cu, and Zn, which are of geological and environmental interest, in solid samples by laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) using a spinning sample platform. The platform, containing a sample and a standard, is spun during the ablation, allowing the quasi-simultaneous ablation of both materials. The aerosols resulting from the ablation of sample and standard were mixed in the ablation cell allowing quantification of analytes by standard additions. The proportion of standard versus sample of the mixing can be increased by performing the ablation further from the axis of rotation. The ablated masses have been determined using a new strategy based on isotope dilution analysis. This spinning laser ablation method has been applied to the Allende meteorite and four powdered standard reference materials (SRMs) fused in lithium borate glasses: two sediments as well as a soil and a rock material. SRM 612 (Trace Elements in Glass) was also analyzed despite having a matrix slightly different from the glass standard obtained by lithium borate fusion. The deviation from the certified values was found to be less than 15% for most of the mass fractions for all the elements and samples studied, with an average precision of 10%. These results demonstrate the validity of the proposed method for the direct and fast analysis of solid samples of different matrixes by standard additions, using a single standard sample. PMID:23418996

  2. Biological Stability of Drinking Water: Controlling Factors, Methods, and Challenges.

    PubMed

    Prest, Emmanuelle I; Hammes, Frederik; van Loosdrecht, Mark C M; Vrouwenvelder, Johannes S

    2016-01-01

    Biological stability of drinking water refers to the concept of providing consumers with drinking water of the same microbial quality at the tap as produced at the water treatment facility. However, uncontrolled growth of bacteria can occur during distribution in water mains and premise plumbing, and can lead to hygienic (e.g., development of opportunistic pathogens), aesthetic (e.g., deterioration of taste, odor, color) or operational (e.g., fouling or biocorrosion of pipes) problems. Drinking water contains diverse microorganisms competing for limited available nutrients for growth. Bacterial growth and interactions are regulated by factors such as (i) type and concentration of available organic and inorganic nutrients, (ii) type and concentration of residual disinfectant, (iii) presence of predators, such as protozoa and invertebrates, (iv) environmental conditions, such as water temperature, and (v) spatial location of microorganisms (bulk water, sediment, or biofilm). Water treatment and distribution conditions in water mains and premise plumbing affect each of these factors and shape bacterial community characteristics (abundance, composition, viability) in distribution systems. Improved understanding of bacterial interactions in distribution systems and of the impact of environmental conditions is needed for better control of bacterial communities during drinking water production and distribution. This article reviews (i) existing knowledge on biological stability controlling factors and (ii) how these factors are affected by drinking water production and distribution conditions. In addition, (iii) the concept of biological stability is discussed in light of experience with well-established and new analytical methods, enabling high throughput analysis and in-depth characterization of bacterial communities in drinking water. We discuss how knowledge gained from novel techniques will improve design and monitoring of water treatment and distribution systems in order

  3. Biological Stability of Drinking Water: Controlling Factors, Methods, and Challenges

    PubMed Central

    Prest, Emmanuelle I.; Hammes, Frederik; van Loosdrecht, Mark C. M.; Vrouwenvelder, Johannes S.

    2016-01-01

    Biological stability of drinking water refers to the concept of providing consumers with drinking water of the same microbial quality at the tap as produced at the water treatment facility. However, uncontrolled growth of bacteria can occur during distribution in water mains and premise plumbing, and can lead to hygienic (e.g., development of opportunistic pathogens), aesthetic (e.g., deterioration of taste, odor, color) or operational (e.g., fouling or biocorrosion of pipes) problems. Drinking water contains diverse microorganisms competing for limited available nutrients for growth. Bacterial growth and interactions are regulated by factors such as (i) type and concentration of available organic and inorganic nutrients, (ii) type and concentration of residual disinfectant, (iii) presence of predators, such as protozoa and invertebrates, (iv) environmental conditions, such as water temperature, and (v) spatial location of microorganisms (bulk water, sediment, or biofilm). Water treatment and distribution conditions in water mains and premise plumbing affect each of these factors and shape bacterial community characteristics (abundance, composition, viability) in distribution systems. Improved understanding of bacterial interactions in distribution systems and of the impact of environmental conditions is needed for better control of bacterial communities during drinking water production and distribution. This article reviews (i) existing knowledge on biological stability controlling factors and (ii) how these factors are affected by drinking water production and distribution conditions. In addition, (iii) the concept of biological stability is discussed in light of experience with well-established and new analytical methods, enabling high throughput analysis and in-depth characterization of bacterial communities in drinking water. We discuss how knowledge gained from novel techniques will improve design and monitoring of water treatment and distribution systems in order

  4. A simple method for the addition of rotenone in Arabidopsis thaliana leaves.

    PubMed

    Maliandi, María V; Rius, Sebastián P; Busi, María V; Gomez-Casati, Diego F

    2015-01-01

    A simple and reproducible method for the treatment of Arabidopsis thaliana leaves with rotenone is presented. Rosette leaves were incubated with rotenone and Triton X-100 for at least 15 h. Treated leaves showed increased expression of COX19 and BCS1a, 2 genes known to be induced in Arabidopsis cell cultures after rotenone treatment. Moreover, rotenone/Triton X-100 incubated leaves presented an inhibition of oxygen uptake. The simplicity of the procedure shows this methodology is useful for studying the effect of the addition of rotenone to a photosynthetic tissue in situ.

  5. A simple method for the addition of rotenone in Arabidopsis thaliana leaves.

    PubMed

    Maliandi, María V; Rius, Sebastián P; Busi, María V; Gomez-Casati, Diego F

    2015-01-01

    A simple and reproducible method for the treatment of Arabidopsis thaliana leaves with rotenone is presented. Rosette leaves were incubated with rotenone and Triton X-100 for at least 15 h. Treated leaves showed increased expression of COX19 and BCS1a, 2 genes known to be induced in Arabidopsis cell cultures after rotenone treatment. Moreover, rotenone/Triton X-100 incubated leaves presented an inhibition of oxygen uptake. The simplicity of the procedure shows this methodology is useful for studying the effect of the addition of rotenone to a photosynthetic tissue in situ. PMID:26357865

  6. Calcifying nanoparticles (nanobacteria): an additional potential factor for urolithiasis in space flight crews.

    PubMed

    Jones, Jeffrey A; Ciftcioglu, Neva; Schmid, Josef F; Barr, Yael R; Griffith, Donald

    2009-01-01

    Spaceflight-induced microgravity appears to be a risk factor for the development of urinary calculi, resulting in urolithiasis during and after spaceflight. Calcifying nanoparticles, or nanobacteria, multiply more rapidly in simulated microgravity and create external shells of calcium phosphate. The question arises whether calcifying nanoparticles are nidi for calculi and contribute to the development of clinically significant urolithiasis in those who are predisposed to the development of urinary calculi because of intrinsic or extrinsic factors. This case report describes a calculus recovered after flight from an astronaut that, on morphologic and immunochemical analysis (including specific monoclonal antibody staining), demonstrated characteristics of calcifying nanoparticles. PMID:18718644

  7. Gravimetric approach to the standard addition method in instrumental analysis. 1.

    PubMed

    Kelly, W Robert; MacDonald, Bruce S; Guthrie, William F

    2008-08-15

    A mathematical formulation for a gravimetric approach to the univariate standard addition method (SAM) is presented that has general applicability for both liquids and solids. Using gravimetry rather than volumetry reduces the preparation time, increases design flexibility, and makes increased accuracy possible. SAM has most often been used with analytes in aqueous solutions that are aspirated into flames or plasmas and determined by absorption, emission, or mass spectrometric techniques. The formulation presented here shows that the method can also be applied to complex matrixes, such as distillate and residual fuel oils, using techniques such as X-ray fluorescence (XRF) or combustion combined with atomic fluorescence or absorption. These techniques, which can be subject to matrix-induced interferences, could realize the same benefits that have been demonstrated for dilute aqueous solutions.

  8. Evaporation model for beam based additive manufacturing using free surface lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Klassen, Alexander; Scharowsky, Thorsten; Körner, Carolin

    2014-07-01

    Evaporation plays an important role in many technical applications including beam-based additive manufacturing processes, such as selective electron beam or selective laser melting (SEBM/SLM). In this paper, we describe an evaporation model which we employ within the framework of a two-dimensional free surface lattice Boltzmann method. With this method, we solve the hydrodynamics as well as thermodynamics of the molten material taking into account the mass and energy losses due to evaporation and the recoil pressure acting on the melt pool. Validation of the numerical model is performed by measuring maximum melt depths and evaporative losses in samples of pure titanium and Ti-6Al-4V molten by an electron beam. Finally, the model is applied to create processing maps for an SEBM process. The results predict that the penetration depth of the electron beam, which is a function of the acceleration voltage, has a significant influence on evaporation effects.
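    For orientation, evaporation closures commonly used in this class of melt-pool models are of the Hertz-Knudsen/Anisimov form sketched below; the specific coefficients are typical literature values and may differ from the model implemented in the paper.

```latex
% Commonly used closures assumed here for illustration; the paper's exact evaporation
% model may differ in coefficients and in the treatment of recondensation.
\[
\dot m'' = (1-\beta_r)\, p_{\mathrm{sat}}(T)\sqrt{\frac{M}{2\pi R T}},
\qquad
p_{\mathrm{sat}}(T) = p_0 \exp\!\left[\frac{L_v M}{R}\left(\frac{1}{T_b}-\frac{1}{T}\right)\right],
\qquad
p_{\mathrm{rec}} \approx 0.54\, p_{\mathrm{sat}}(T),
\]
% where \beta_r \approx 0.18 is the recondensing vapor fraction, M the molar mass,
% L_v the specific latent heat of vaporization, and T_b the boiling temperature at p_0.
```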

  9. Performance of the Tariff Method: validation of a simple additive algorithm for analysis of verbal autopsies

    PubMed Central

    2011-01-01

    Background Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignment and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science. PMID:21816107
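    The scoring rule itself is simple enough to sketch directly; the tariffs, causes, and symptoms below are made up for illustration, whereas real tariffs are derived from validated verbal-autopsy data.

```python
# Toy version of the Tariff scoring rule with invented causes, symptoms and tariffs.
import numpy as np

causes = ["cause A", "cause B", "cause C"]
symptoms = ["fever", "cough", "injury", "chest pain"]
tariffs = np.array([                 # tariffs[i, j]: association of cause i with symptom j
    [ 4.0,  1.5, -2.0,  0.5],
    [-1.0,  3.0, -1.5,  5.0],
    [-2.0, -2.5,  6.0, -0.5],
])

def assign_cause(responses):
    """responses: 0/1 endorsements of the symptom list for one verbal autopsy."""
    scores = tariffs @ np.asarray(responses, dtype=float)   # sum tariffs over endorsed items
    return causes[int(np.argmax(scores))], scores

cause, scores = assign_cause([1, 1, 0, 1])   # fever + cough + chest pain endorsed
print(cause, dict(zip(causes, scores)))
# Cause-specific mortality fractions follow by tallying predicted causes over many records.
```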

  10. 21 CFR 1311.115 - Additional requirements for two-factor authentication.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) separate from the computer to which the practitioner is gaining access. (b) If one factor is a hard token, it must be separate from the computer to which it is gaining access and must meet at least the criteria of FIPS 140-2 Security Level 1, as incorporated by reference in § 1311.08, for...

  11. Calculation Method of Lateral Strengths and Ductility Factors of Constructions with Shear Walls of Different Ductility

    SciTech Connect

    Yamaguchi, Nobuyoshi; Nakao, Masato; Murakami, Masahide; Miyazawa, Kenji

    2008-07-08

    For seismic design, ductility-related force modification factors are named the R factor in the U.S. Uniform Building Code, the q factor in Eurocode 8, and the Ds factor (the inverse of R) in the Japanese Building Code. These ductility-related force modification factors for each type of shear element appear in those codes. Some constructions use various types of shear walls that have different ductility, especially after retrofit or re-strengthening. In these cases, engineers struggle to decide the force modification factors of the construction. To solve this problem, a new method to calculate the lateral strengths of stories for simple shear wall systems is proposed and named the 'Stiffness-Potential Energy Addition Method' in this paper. This method uses two design lateral strengths for each type of shear wall, one in the damage limit state and one in the safety limit state. The lateral strengths of stories in both limit states are calculated from these two design lateral strengths for each type of shear wall. The calculated strengths have the same quality as values obtained by the strength addition method using many steps of load-deformation data of shear walls. A new method to calculate ductility factors is also proposed in this paper. This method is based on the new method to calculate the lateral strengths of stories, and it can solve the problem of obtaining ductility factors for stories with shear walls of different ductility.

  12. The effect of nutritional additives on anti-infective factors in human milk.

    PubMed

    Quan, R; Yang, C; Rubinstein, S; Lewiston, N J; Stevenson, D K; Kerner, J A

    1994-06-01

    It has become a common practice to supplement human milk with a variety of additives to improve the nutritive content of the feeding for the premature infant. Twenty-two freshly frozen human milk samples were measured for lysozyme activity, total IgA, and specific IgA to Escherichia coli serotypes 01, 04, and 06. One mL aliquots were mixed with the following: 1 mL of Similac, Similac Special Care, Enfamil, Enfamil Premature Formula, and sterile water; 33 mL of Poly-Vi-Sol, 33 mg of Moducal, and 38 mg of breast-milk fortifier, and then reanalyzed. Significant decreases (41% to 74%) in lysozyme activity were seen with the addition of all formulas; breast-milk fortifier reduced activity by 19%, while no differences were seen with Moducal, sterile water, or Poly-Vi-Sol. No differences were seen in total IgA content, but some decreases were seen in specific IgA to E. coli serotypes 04 and 06. E. coli growth was determined after 3 1/2 hours of incubation at 37 degrees C after mixing. All cow-milk formulas enhanced E. coli growth; soy formulas and other additives preserved inhibition of bacterial growth. Nutritional additives can impair anti-infective properties of human milk, and such interplay should be considered in the decision on the feeding regimen of premature infants.

  13. 34 CFR 377.22 - What additional factors does the Secretary consider in making grants?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (Continued) OFFICE OF SPECIAL EDUCATION AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION DEMONSTRATION PROJECTS TO INCREASE CLIENT CHOICE PROGRAM How Does the Secretary Make an Award? § 377.22 What additional... strategies to increase client choice, in order to ensure that a variety of approaches are demonstrated...

  14. 34 CFR 377.22 - What additional factors does the Secretary consider in making grants?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (Continued) OFFICE OF SPECIAL EDUCATION AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION DEMONSTRATION PROJECTS TO INCREASE CLIENT CHOICE PROGRAM How Does the Secretary Make an Award? § 377.22 What additional... strategies to increase client choice, in order to ensure that a variety of approaches are demonstrated...

  15. Method for adding additional isotopes to actinide-only burnup credit

    SciTech Connect

    Lancaster, D.B.; Fuentes, E.; Kang, C.

    1998-01-01

    The Topical Report on Actinide-Only Burnup Credit for Pressurized Water Reactor Spent Nuclear Fuel Packages requires computer code validation to be performed against a benchmark set of chemical assays for isotopic concentration and against a benchmark set of critical experiments for package criticality. Both sets contain all the isotopes included in the methodology. The chemical assays used include the uranium and plutonium isotopes, while the critical experiments were composed of UO2 or MOX rods, covering the isotopes in the actinide-only approach. Since other isotopes are not included in the validation benchmark sets, it would be necessary to justify both the content and worth of any additional isotope for which burnup credit is to be taken (i.e., both the concentration and criticality effect of each particular isotope must be validated). A method is proposed here that can be used for any number of additional isotopes. As does the actinide-only burnup credit methodology, this method makes use of chemical assay data to establish the conservatism in the prediction of each isotope's concentration. Criticality validation is also performed using a benchmark set of UO2 and MOX critical experiments, where the additional isotopes are validated using worth experiments to conservatively account for any uncertainty in their cross sections. The remaining requirements (analysis and modeling parameters, loading criteria generation, and physical implementation and controls) are performed exactly as described in the actinide-only burnup credit methodology. This report provides insight into each particular requirement in the new methodology.

  16. An identification method for enclosed voids restriction in manufacturability design for additive manufacturing structures

    NASA Astrophysics Data System (ADS)

    Liu, Shutian; Li, Quhao; Chen, Wenjiong; Tong, Liyong; Cheng, Gengdong

    2015-06-01

    Additive manufacturing (AM) technologies, such as selective laser sintering (SLS) and fused deposition modeling (FDM), have become powerful tools for the direct manufacturing of complex parts. This breakthrough in manufacturing technology makes the fabrication of new geometrical features and multiple materials possible. Past research on designs and design methods has often focused on how to obtain the desired functional performance of the structures or parts, while the specific manufacturing capabilities and constraints of AM were neglected. However, the inherent constraints of AM processes should be taken into account in the design process. In this paper, enclosed voids, one type of manufacturing constraint of AM, are investigated. In mathematical terms, the enclosed-voids restriction is expressed as the requirement that the solid structure be simply connected. We propose an equivalent description of the simply-connected constraint for avoiding enclosed voids in structures, named the virtual temperature method (VTM). In this method, the voids in the structure are assumed to be filled with a virtual heating material of high heat conductivity, while the solid areas are filled with another virtual material of low heat conductivity. If enclosed voids exist in the structure, the maximum temperature of the structure becomes very high. Based on this method, the simply-connected constraint is equivalent to a maximum temperature constraint, and it can easily be used to formulate the simply-connected constraint in topology optimization. The effectiveness of this description is illustrated by several examples. Based on topology optimization, an example of a 3D cantilever beam is used to illustrate the trade-off between manufacturability and functionality. Moreover, three optimized structures are fabricated by FDM technology to further indicate the necessity of considering the simply-connected constraint in the design phase for AM.
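
    As a rough illustration of the virtual-temperature idea described in this record, the following Python sketch solves a small steady-state heat problem on a voxel grid in which void cells carry a heat source and high conductivity while solid cells have low conductivity; the grid, conductivities, source strength, and test designs are illustrative assumptions, not the authors' formulation or values.

```python
# Hedged sketch: detect an enclosed void via a "virtual temperature" solve on a
# small 2D voxel grid (illustrative assumptions, not the paper's formulation).
import numpy as np

def max_virtual_temperature(solid, k_void=1e3, k_solid=1e-3, q_void=1.0):
    """solid: 2D bool array, True = solid material, False = void.
    Voids get a heat source and high conductivity; the domain boundary is held
    at T = 0. An enclosed void cannot shed its heat, so max(T) becomes large."""
    ny, nx = solid.shape
    k = np.where(solid, k_solid, k_void)     # cell conductivities
    q = np.where(solid, 0.0, q_void)         # heat source only in void cells
    n = nx * ny
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(ny):
        for j in range(nx):
            p = i * nx + j
            b[p] = q[i, j]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < ny and 0 <= jj < nx:
                    # harmonic-mean conductivity at the shared cell face
                    kf = 2.0 * k[i, j] * k[ii, jj] / (k[i, j] + k[ii, jj])
                    A[p, p] += kf
                    A[p, ii * nx + jj] -= kf
                else:
                    A[p, p] += k[i, j]       # ghost cell outside domain, T = 0
    T = np.linalg.solve(A, b)
    return T.max()

closed_design = np.ones((15, 15), dtype=bool)
closed_design[5:10, 5:10] = False            # void fully enclosed by solid
open_design = closed_design.copy()
open_design[7, 10:] = False                  # channel connecting void to boundary
print(max_virtual_temperature(closed_design), max_virtual_temperature(open_design))
```

    The enclosed design returns a maximum temperature orders of magnitude larger than the open design, which is the signal that a maximum-temperature constraint can exploit during topology optimization.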

  17. Molecular cloning and expression of an additional epidermal growth factor receptor-related gene.

    PubMed Central

    Plowman, G D; Whitney, G S; Neubauer, M G; Green, J M; McDonald, V L; Todaro, G J; Shoyab, M

    1990-01-01

    Epidermal growth factor (EGF), transforming growth factor alpha (TGF-alpha), and amphiregulin are structurally and functionally related growth regulatory proteins. These secreted polypeptides all bind to the 170-kDa cell-surface EGF receptor, activating its intrinsic kinase activity. However, amphiregulin exhibits different activities from EGF and TGF-alpha in a number of biological assays. Amphiregulin only partially competes with EGF for binding the EGF receptor, and amphiregulin does not induce anchorage-independent growth of normal rat kidney (NRK) cells in the presence of TGF-beta. Amphiregulin also appears to abrogate the stimulatory effect of TGF-alpha on the growth of several aggressive epithelial carcinomas that overexpress the EGF receptor. These findings suggest that amphiregulin may interact with a separate receptor in certain cell types. Here we report the cloning of another member of the human EGF receptor (HER) family of receptor tyrosine kinases, which we have named "HER3/ERBB3." The cDNA was isolated from a human carcinoma cell line, and its 6-kilobase transcript was identified in various human tissues. We have generated peptide-specific antisera that recognize the 160-kDa HER3 protein when transiently expressed in COS cells. These reagents will allow us to determine whether HER3 binds amphiregulin or other growth regulatory proteins and what role the HER3 protein plays in the regulation of cell growth. PMID:2164210

  18. Study of cadmium, zinc and lead biosorption by orange wastes using the subsequent addition method.

    PubMed

    Pérez-Marín, A B; Ballester, A; González, F; Blázquez, M L; Muñoz, J A; Sáez, J; Zapata, V Meseguer

    2008-11-01

    The biosorption of several metals (Cd2+, Zn2+ and Pb2+) by orange wastes has been investigated in binary systems. Multicomponent sorption isotherms were obtained using an original procedure, similar to that proposed by Pagnanelli et al. [Pagnanelli, F., Petrangeli, M.P., Toro, L., Trifoni, M., Veglio, F., 2001a. Biosorption of metal ions on Arthrobacter sp.: biomass characterization and biosorption modelling. Environ. Sci. Technol. 34, 2773-2778] for monoelement systems, known as the subsequent addition method (SAM). Experimental sorption data were analysed using an extended multicomponent Langmuir equation. The maximum sorption uptake was approximately 0.25 mmol/g for the three binary systems studied. The reliability of the proposed procedure for obtaining the equilibrium data in binary systems was verified by means of a statistical F-test. PMID:18440805
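
    The extended multicomponent Langmuir model mentioned here is commonly written as q_i = q_max,i b_i C_i / (1 + sum_j b_j C_j). The Python sketch below fits that standard two-component form to synthetic equilibrium data with scipy; the concentrations, uptakes, and starting parameters are invented for illustration and are not the paper's data.

```python
# Fit a binary extended Langmuir isotherm to synthetic data (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def extended_langmuir(C, qmax1, b1, qmax2, b2):
    """Binary extended Langmuir: q_i = qmax_i*b_i*C_i / (1 + b1*C1 + b2*C2)."""
    C1, C2 = C
    denom = 1.0 + b1 * C1 + b2 * C2
    return np.concatenate([qmax1 * b1 * C1 / denom, qmax2 * b2 * C2 / denom])

# Hypothetical equilibrium concentrations (mmol/L); not the paper's data.
C1 = np.array([0.1, 0.3, 0.6, 1.0, 1.5])
C2 = np.array([0.2, 0.4, 0.5, 0.9, 1.4])
rng = np.random.default_rng(0)
q_obs = extended_langmuir((C1, C2), 0.25, 3.0, 0.22, 2.0)
q_obs = q_obs * (1 + 0.02 * rng.standard_normal(q_obs.size))   # add 2% noise

popt, _ = curve_fit(extended_langmuir, (C1, C2), q_obs, p0=[0.2, 1.0, 0.2, 1.0])
print(dict(zip(["qmax1 (mmol/g)", "b1", "qmax2 (mmol/g)", "b2"], np.round(popt, 3))))
```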

  19. Method for simultaneous use of a single additive for coal flotation, dewatering and reconstitution

    SciTech Connect

    Wen, Wu-Wey; Gray, M.L.; Champagne, K.J.

    1993-11-09

    A single dose of additive contributes to three consecutive fine coal unit operations, i.e., flotation, dewatering and reconstitution, whereby the fine coal is first combined with water in a predetermined proportion so as to formulate a slurry. The slurry is then mixed with a heavy hydrocarbon-based emulsion in a second predetermined proportion and at a first predetermined mixing speed and for a predetermined period of time. The conditioned slurry is then cleaned by a froth flotation method to form a clean coal froth and then the froth is dewatered by vacuum filtration or a centrifugation process to form reconstituted products that are dried to dust-less clumps prior to combustion.

  20. Method for simultaneous use of a single additive for coal flotation, dewatering, and reconstitution

    DOEpatents

    Wen, Wu-Wey; Gray, McMahan L.; Champagne, Kenneth J.

    1995-01-01

    A single dose of additive contributes to three consecutive fine coal unit operations, i.e., flotation, dewatering and reconstitution, whereby the fine coal is first combined with water in a predetermined proportion so as to formulate a slurry. The slurry is then mixed with a heavy hydrocarbon-based emulsion in a second predetermined proportion and at a first predetermined mixing speed and for a predetermined period of time. The conditioned slurry is then cleaned by a froth flotation method to form a clean coal froth and then the froth is dewatered by vacuum filtration or a centrifugation process to form reconstituted products that are dried to dust-less clumps prior to combustion.

  1. Application of a New Method for Analyzing Images: Two-Dimensional Non-Linear Additive Decomposition

    SciTech Connect

    MA Zaccaria; DM Drudnoy; JE Stasenko

    2006-07-05

    This paper documents the application of a new image processing algorithm, two-dimensional non-linear additive decomposition (NLAD), which is used to identify regions in a digital image whose gray-scale (or color) intensity is different than the surrounding background. Standard image segmentation algorithms exist that allow users to segment images based on gray-scale intensity and/or shape. However, these processing techniques do not adequately account for the image noise and lighting variation that typically occurs across an image. NLAD is designed to separate image noise and background from artifacts thereby providing the ability to consistently evaluate images. The decomposition techniques used in this algorithm are based on the concepts of mathematical morphology. NLAD emulates the human capability of visually separating an image into different levels of resolution components, denoted as "coarse", "fine", and "intermediate". Very little resolution information overlaps any two of the component images. This method can easily determine and/or remove trends and noise from an image. NLAD has several additional advantages over conventional image processing algorithms, including no need for a transformation from one space to another, such as is done with Fourier transforms, and since only finite summations are required, the calculational effort is neither extensive nor complicated.
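
    The record does not give the NLAD algorithm itself, so the following is only a loosely related illustration of morphology-based scale separation with scipy: grey-scale closing and opening estimate a slowly varying "coarse" background, and the residual collects the "fine" artifacts and noise. The image, structuring-element size, and artifact are invented for this example.

```python
# Loosely related illustration only: a generic morphological coarse/fine split,
# not the NLAD algorithm. Image content and filter size are invented.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
x = np.arange(128)
background = 0.5 + 0.3 * np.sin(x / 40.0)             # slowly varying illumination
image = background[None, :] + rng.normal(0.0, 0.02, (128, 128))
image[60:68, 60:68] += 0.8                            # small bright artifact

# Closing then opening with a large window estimates the smooth background
coarse = ndimage.grey_opening(ndimage.grey_closing(image, size=15), size=15)
fine = image - coarse                                 # residual: artifact + noise
print("mean of fine component inside the artifact:", fine[60:68, 60:68].mean())
```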

  2. Tandem sequence of phenol oxidation and intramolecular addition as a method in building heterocycles.

    PubMed

    Ratnikov, Maxim O; Farkas, Linda E; Doyle, Michael P

    2012-11-16

    A tandem phenol oxidation-Michael addition furnishing oxo- and aza-heterocycles has been developed. Dirhodium caprolactamate [Rh(2)(cap)(4)]-catalyzed oxidation by T-HYDRO of phenols with alcohols, ketones, amides, carboxylic acids, and N-Boc protected amines tethered to their 4-position afforded 4-(tert-butylperoxy)cyclohexa-2,5-dienones that undergo Brønsted acid catalyzed intramolecular Michael addition in one pot to produce oxo- and aza-heterocycles in moderate to good yields. The scope of the developed methodology includes the dipeptides Boc-Tyr-Gly-OEt and Boc-Tyr-Phe-Me and provides a pathway for understanding the possible transformations arising from oxidative stress of tyrosine residues. A novel method for selective cleavage of the O-O bond in a hindered internal peroxide using TiCl(4) has been discovered in efforts directed to the construction of cleroindicin F, whose synthesis was completed in 50% yield over just 3 steps from tyrosol using the developed methodology.

  3. [Multi-residue method for determination of veterinary drugs and feed additives in meats by HPLC].

    PubMed

    Chonan, Takao; Fujimoto, Toru; Ueno, Ken-Ichi; Tazawa, Teijiro; Ogawa, Hiroshi

    2007-10-01

    A simple and rapid multi-residue method was developed for the determination of 28 kinds of veterinary drugs and feed additives (drugs) in muscle of cattle, pig and chicken. The drugs were extracted with acetonitrile-water (95:5) in a homogenizer and ultrasonic generator. The extracted solution was poured into an alumina column and the drugs were eluted with acetonitrile-water (90:10). The eluate was washed with n-hexane saturated with acetonitrile and then evaporated. The drugs were separated on an Inertsil ODS-3V column (4.6 mm i.d. x 250 mm) with a gradient system of 0.1% phosphoric acid-acetonitrile as the mobile phase, with monitoring at 280 and 340 nm. The recoveries of 26 kinds of drugs were over 60% from the meats fortified at 0.1 microg/g, and the quantification limits of most drugs were 0.01 microg/g. This proposed method was found to be effective and suitable for the screening of the above drugs in meats.

  4. Simulation of Powder Layer Deposition in Additive Manufacturing Processes Using the Discrete Element Method

    SciTech Connect

    Herbold, E. B.; Walton, O.; Homel, M. A.

    2015-10-26

    This document serves as a final report to a small effort in which several improvements were added to the LLNL code GEODYN-L to develop Discrete Element Method (DEM) algorithms coupled to Lagrangian Finite Element (FE) solvers to investigate powder-bed formation problems for additive manufacturing. The results from these simulations will be assessed for inclusion as the initial conditions for Direct Metal Laser Sintering (DMLS) simulations performed with ALE3D. The algorithms were written and run on parallel computing platforms at LLNL. The total funding level was 3-4 weeks of an FTE split amongst two staff scientists and one post-doc. The DEM simulations emulated, as much as was feasible, the physical process of depositing a new layer of powder over a bed of existing powder. The DEM simulations utilized truncated size distributions spanning realistic size ranges, with a size distribution profile consistent with a realistic sample set. A minimum simulation sample size on the order of 40 particles square by 10 particles deep was utilized in these scoping studies in order to evaluate the potential effects of size segregation variation with distance displaced in front of a screed blade. A reasonable method for evaluating the problem was developed and validated. Several simulations were performed to show the viability of the approach. Future investigations will focus on running various simulations investigating powder particle sizing and screed geometries.

  5. Lactic Acid Fermentation, Urea and Lime Addition: Promising Faecal Sludge Sanitizing Methods for Emergency Sanitation.

    PubMed

    Anderson, Catherine; Malambo, Dennis Hanjalika; Perez, Maria Eliette Gonzalez; Nobela, Happiness Ngwanamoseka; de Pooter, Lobke; Spit, Jan; Hooijmans, Christine Maria; de Vossenberg, Jack van; Greya, Wilson; Thole, Bernard; van Lier, Jules B; Brdjanovic, Damir

    2015-10-29

    In this research, three faecal sludge sanitizing methods-lactic acid fermentation, urea treatment and lime treatment-were studied for application in emergency situations. These methods were investigated by undertaking small scale field trials with pit latrine sludge in Blantyre, Malawi. Hydrated lime was able to reduce the E. coli count in the sludge to below the detectable limit within 1 h applying a pH > 11 (using a dosage from 7% to 17% w/w, depending on faecal sludge alkalinity), urea treatment required about 4 days using 2.5% wet weight urea addition, and lactic acid fermentation needed approximately 1 week after being dosed with 10% wet weight molasses (2 g (glucose/fructose)/kg) and 10% wet weight pre-culture (99.8% pasteurised whole milk and 0.02% fermented milk drink containing Lactobacillus casei Shirota). Based on Malawian prices, the cost of sanitizing 1 m³ of faecal sludge was estimated to be €32 for lactic acid fermentation, €20 for urea treatment and €12 for hydrated lime treatment.

  6. Lactic Acid Fermentation, Urea and Lime Addition: Promising Faecal Sludge Sanitizing Methods for Emergency Sanitation.

    PubMed

    Anderson, Catherine; Malambo, Dennis Hanjalika; Perez, Maria Eliette Gonzalez; Nobela, Happiness Ngwanamoseka; de Pooter, Lobke; Spit, Jan; Hooijmans, Christine Maria; de Vossenberg, Jack van; Greya, Wilson; Thole, Bernard; van Lier, Jules B; Brdjanovic, Damir

    2015-11-01

    In this research, three faecal sludge sanitizing methods-lactic acid fermentation, urea treatment and lime treatment-were studied for application in emergency situations. These methods were investigated by undertaking small scale field trials with pit latrine sludge in Blantyre, Malawi. Hydrated lime was able to reduce the E. coli count in the sludge to below the detectable limit within 1 h applying a pH > 11 (using a dosage from 7% to 17% w/w, depending on faecal sludge alkalinity), urea treatment required about 4 days using 2.5% wet weight urea addition, and lactic acid fermentation needed approximately 1 week after being dosed with 10% wet weight molasses (2 g (glucose/fructose)/kg) and 10% wet weight pre-culture (99.8% pasteurised whole milk and 0.02% fermented milk drink containing Lactobacillus casei Shirota). Based on Malawian prices, the cost of sanitizing 1 m³ of faecal sludge was estimated to be €32 for lactic acid fermentation, €20 for urea treatment and €12 for hydrated lime treatment. PMID:26528995

  7. Multiple Linkage Disequilibrium Mapping Methods to Validate Additive Quantitative Trait Loci in Korean Native Cattle (Hanwoo).

    PubMed

    Li, Yi; Kim, Jong-Joo

    2015-07-01

    The efficiency of genome-wide association analysis (GWAS) depends on the power of detection for quantitative trait loci (QTL) and the precision of QTL mapping. In this study, three different strategies for GWAS were applied to detect QTL for carcass quality traits in the Korean cattle, Hanwoo: a linkage disequilibrium single locus regression method (LDRM), a combined linkage and linkage disequilibrium analysis (LDLA) and a BayesCπ approach. The phenotypes of 486 steers were collected for weaning weight (WWT), yearling weight (YWT), carcass weight (CWT), backfat thickness (BFT), longissimus dorsi muscle area, and marbling score (Marb). The genotypes of the steers and their sires were scored with the Illumina bovine 50K single nucleotide polymorphism (SNP) chips. For the two former GWAS methods, threshold values were set at a false discovery rate <0.01 on a chromosome-wide level, while a cut-off threshold value was set in the latter model such that the top five windows, each of which comprised 10 adjacent SNPs, were chosen with significant variation for the phenotype. Four major additive QTL from these three methods showed high concordance, located at 64.1 to 64.9 Mb on Bos taurus autosome (BTA) 7 for WWT, 24.3 to 25.4 Mb on BTA14 for CWT, 0.5 to 1.5 Mb on BTA6 for BFT and 26.3 to 33.4 Mb on BTA29 for BFT. Several candidate genes (i.e. glutamate receptor, ionotropic, ampa 1 [GRIA1], family with sequence similarity 110, member B [FAM110B], and thymocyte selection-associated high mobility group box [TOX]) may be identified close to these QTL. Our results suggest that the use of different linkage disequilibrium mapping approaches can provide more reliable chromosome regions to further pinpoint DNA markers or causative genes in these regions.
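
    The following is a generic sketch of a single-locus regression scan with Benjamini-Hochberg FDR control, in the spirit of the LDRM strategy named in this record; the genotypes, phenotype, and causal SNP are simulated, and the code is not the authors' implementation.

```python
# Simulated single-locus regression GWAS with FDR control (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_animals, n_snps = 486, 5000
genotypes = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)  # allele counts 0/1/2
phenotype = 0.4 * genotypes[:, 100] + rng.normal(0.0, 1.0, n_animals)   # SNP 100 is causal

# One linear regression per SNP
pvals = np.array([stats.linregress(genotypes[:, j], phenotype).pvalue
                  for j in range(n_snps)])

# Benjamini-Hochberg step-up procedure at FDR = 0.01
order = np.argsort(pvals)
thresholds = 0.01 * np.arange(1, n_snps + 1) / n_snps
passed = pvals[order] <= thresholds
n_sig = passed.nonzero()[0].max() + 1 if passed.any() else 0
print("SNPs declared significant:", np.sort(order[:n_sig]))
```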

  8. Multiple Linkage Disequilibrium Mapping Methods to Validate Additive Quantitative Trait Loci in Korean Native Cattle (Hanwoo).

    PubMed

    Li, Yi; Kim, Jong-Joo

    2015-07-01

    The efficiency of genome-wide association analysis (GWAS) depends on the power of detection for quantitative trait loci (QTL) and the precision of QTL mapping. In this study, three different strategies for GWAS were applied to detect QTL for carcass quality traits in the Korean cattle, Hanwoo: a linkage disequilibrium single locus regression method (LDRM), a combined linkage and linkage disequilibrium analysis (LDLA) and a BayesCπ approach. The phenotypes of 486 steers were collected for weaning weight (WWT), yearling weight (YWT), carcass weight (CWT), backfat thickness (BFT), longissimus dorsi muscle area, and marbling score (Marb). The genotypes of the steers and their sires were scored with the Illumina bovine 50K single nucleotide polymorphism (SNP) chips. For the two former GWAS methods, threshold values were set at a false discovery rate <0.01 on a chromosome-wide level, while a cut-off threshold value was set in the latter model such that the top five windows, each of which comprised 10 adjacent SNPs, were chosen with significant variation for the phenotype. Four major additive QTL from these three methods showed high concordance, located at 64.1 to 64.9 Mb on Bos taurus autosome (BTA) 7 for WWT, 24.3 to 25.4 Mb on BTA14 for CWT, 0.5 to 1.5 Mb on BTA6 for BFT and 26.3 to 33.4 Mb on BTA29 for BFT. Several candidate genes (i.e. glutamate receptor, ionotropic, ampa 1 [GRIA1], family with sequence similarity 110, member B [FAM110B], and thymocyte selection-associated high mobility group box [TOX]) may be identified close to these QTL. Our results suggest that the use of different linkage disequilibrium mapping approaches can provide more reliable chromosome regions to further pinpoint DNA markers or causative genes in these regions. PMID:26104396

  9. Electrical inhibition of lens epithelial cell proliferation: an additional factor in secondary cataract?

    PubMed Central

    Wang, Entong; Reid, Brian; Lois, Noemi; Forrester, John V.; McCaig, Colin D.; Zhao, Min

    2005-01-01

    Cataract is the most common cause of blindness but is at least curable by surgery. Unfortunately, many patients gradually develop the complication of posterior capsule opacification (PCO) or secondary cataract. This arises from stimulated cell growth within the lens capsule and can greatly impair vision. It is not fully understood why residual lens epithelial cell growth occurs after surgery. We propose and show that cataract surgery might remove an important inhibitory factor for lens cell growth, namely electric fields. The lens generates a unique pattern of electric currents constantly flowing out from the equator and entering the anterior and posterior poles. We show here that cutting and removing part of the anterior capsule as in cataract surgery significantly decreases the equatorial outward electric currents. Application of electric fields in culture inhibits proliferation of human lens epithelial cells. This inhibitory effect is likely to be mediated through a cell cycle control mechanism that decreases entry of cells into S phase from G1 phase by decreasing the G1-specific cell cycle protein cyclin E and increasing the cyclin-Cdk complex inhibitor p27kip1. Capsulorrhexis in vivo, which reduced endogenous lens electric fields, significantly increased lens epithelial cell (LEC) growth. This, together with our previous findings that electric fields have significant effects on the direction of lens cell migration, points to a controlling mechanism for the aberrant cell growth in posterior capsule opacification. A novel approach to control growth of lens epithelial cells using electric fields combined with other controlling mechanisms may be more effective in the prevention and treatment of this common complication of cataract surgery. PMID:15764648

  10. Insulin resistance: an additional risk factor in the pathogenesis of cardiovascular disease in type 2 diabetes.

    PubMed

    Patel, Tushar P; Rawal, Komal; Bagchi, Ashim K; Akolkar, Gauri; Bernardes, Nathalia; Dias, Danielle da Silva; Gupta, Sarita; Singal, Pawan K

    2016-01-01

    A sedentary lifestyle and high-calorie dietary habits are prominent causes of metabolic syndrome in the modern world. Obesity plays a central role in the occurrence of conditions such as hyperinsulinemia, hyperglycemia and hyperlipidemia, which lead to insulin resistance and metabolic derangements, including cardiovascular diseases (CVDs) mediated by oxidative stress. The mortality rate due to CVDs is on the rise in developing countries. Insulin resistance (IR) leads to micro- or macroangiopathy, peripheral arterial dysfunction, hampered blood flow, hypertension, as well as cardiomyocyte and endothelial cell dysfunction, thus increasing the risk factors for coronary artery blockage, stroke and heart failure, suggesting that there is a strong association between IR and CVDs. The plausible linkages between these two pathophysiological conditions are altered levels of insulin signaling proteins such as IR-β, IRS-1, PI3K, Akt, Glut4 and PGC-1α that hamper insulin-mediated glucose uptake as well as other functions of insulin in the cardiomyocytes and the endothelial cells of the heart. Reduced AMPK and PFK-2 and elevated levels of NADP(H)-dependent oxidases produced by activated M1 macrophages of the adipose tissue, along with elevated levels of circulating angiotensin, are also causes of CVD in diabetes mellitus. Insulin sensitizers, angiotensin blockers and superoxide scavengers are used as therapeutics in the amelioration of CVD. It is therefore important to unravel the mechanisms of the association between IR and CVDs in order to formulate novel, efficient drugs to treat patients suffering from insulin resistance-mediated cardiovascular diseases. The possible associations between insulin resistance and cardiovascular diseases are reviewed here. PMID:26542377

  11. Lactic Acid Fermentation, Urea and Lime Addition: Promising Faecal Sludge Sanitizing Methods for Emergency Sanitation

    PubMed Central

    Anderson, Catherine; Malambo, Dennis Hanjalika; Gonzalez Perez, Maria Eliette; Nobela, Happiness Ngwanamoseka; de Pooter, Lobke; Spit, Jan; Hooijmans, Christine Maria; van de Vossenberg, Jack; Greya, Wilson; Thole, Bernard; van Lier, Jules B.; Brdjanovic, Damir

    2015-01-01

    In this research, three faecal sludge sanitizing methods—lactic acid fermentation, urea treatment and lime treatment—were studied for application in emergency situations. These methods were investigated by undertaking small scale field trials with pit latrine sludge in Blantyre, Malawi. Hydrated lime was able to reduce the E. coli count in the sludge to below the detectable limit within 1 h applying a pH > 11 (using a dosage from 7% to 17% w/w, depending on faecal sludge alkalinity), urea treatment required about 4 days using 2.5% wet weight urea addition, and lactic acid fermentation needed approximately 1 week after being dosed with 10% wet weight molasses (2 g (glucose/fructose)/kg) and 10% wet weight pre-culture (99.8% pasteurised whole milk and 0.02% fermented milk drink containing Lactobacillus casei Shirota). Based on Malawian prices, the cost of sanitizing 1 m3 of faecal sludge was estimated to be €32 for lactic acid fermentation, €20 for urea treatment and €12 for hydrated lime treatment. PMID:26528995

  12. Rosenberg's Self-Esteem Scale: Two Factors or Method Effects.

    ERIC Educational Resources Information Center

    Tomas, Jose M.; Oliver, Amparo

    1999-01-01

    Results of a study with 640 Spanish high school students suggest the existence of a global self-esteem factor underlying responses to Rosenberg's (M. Rosenberg, 1965) Self-Esteem Scale, although the inclusion of method effects is needed to achieve a good model fit. Method effects are associated with item wording. (SLD)

  13. A Comparison of Imputation Methods for Bayesian Factor Analysis Models

    ERIC Educational Resources Information Center

    Merkle, Edgar C.

    2011-01-01

    Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amenable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…

  14. Effect of olive mill waste addition on the properties of porous fired clay bricks using Taguchi method.

    PubMed

    Sutcu, Mucahit; Ozturk, Savas; Yalamac, Emre; Gencel, Osman

    2016-10-01

    Production of porous clay bricks lightened by adding olive mill waste as a pore-making additive was investigated. Factors influencing the brick manufacturing process were analyzed by an experimental design, the Taguchi method, to find the most favorable conditions for the production of bricks. The optimum process conditions for brick preparation were investigated by studying the effects of mixture ratios (0, 5 and 10 wt%) and firing temperatures (850, 950 and 1050 °C) on the physical, thermal and mechanical properties of the bricks. Apparent density, bulk density, apparent porosity, water absorption, compressive strength, thermal conductivity, microstructure and crystalline phase formation of the fired brick samples were measured. It was found that the use of 10% waste addition reduced the bulk density of the samples to 1.45 g/cm³. As the porosity increased from 30.8 to 47.0%, the compressive strength decreased from 36.9 to 10.26 MPa at a firing temperature of 950 °C. The thermal conductivity of samples fired at the same temperature showed a decrease of 31%, from 0.638 to 0.436 W/mK, which is promising for heat insulation in buildings. Increasing the firing temperature also affected their mechanical and physical properties. This study showed that olive mill waste could be used as a pore maker in brick production. PMID:27343435
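
    For readers unfamiliar with the Taguchi analysis mentioned here, the short Python sketch below computes larger-the-better signal-to-noise ratios for a two-factor, three-level layout (waste content and firing temperature); the strength values are invented for illustration and only loosely echo the ranges reported in the abstract.

```python
# Hedged Taguchi-style analysis: larger-the-better S/N ratios for an illustrative
# two-factor, three-level design. Values are invented, not the paper's data.
import numpy as np

levels_waste = [0, 5, 10]            # wt% olive mill waste
levels_temp = [850, 950, 1050]       # firing temperature, deg C
# strength[i, j]: compressive strength (MPa) at waste level i, temperature j
strength = np.array([[35.0, 36.9, 40.2],
                     [22.0, 24.5, 28.1],
                     [ 9.5, 10.3, 14.8]])

# Larger-the-better S/N = -10*log10(mean(1/y^2)); with one replicate this is 20*log10(y)
sn = 10 * np.log10(strength ** 2)
print("mean S/N per waste level:      ", sn.mean(axis=1))
print("mean S/N per firing temperature:", sn.mean(axis=0))
# The level with the highest mean S/N for each factor is the Taguchi-optimal setting.
```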

  15. Effect of olive mill waste addition on the properties of porous fired clay bricks using Taguchi method.

    PubMed

    Sutcu, Mucahit; Ozturk, Savas; Yalamac, Emre; Gencel, Osman

    2016-10-01

    Production of porous clay bricks lightened by adding olive mill waste as a pore-making additive was investigated. Factors influencing the brick manufacturing process were analyzed by an experimental design, the Taguchi method, to find the most favorable conditions for the production of bricks. The optimum process conditions for brick preparation were investigated by studying the effects of mixture ratios (0, 5 and 10 wt%) and firing temperatures (850, 950 and 1050 °C) on the physical, thermal and mechanical properties of the bricks. Apparent density, bulk density, apparent porosity, water absorption, compressive strength, thermal conductivity, microstructure and crystalline phase formation of the fired brick samples were measured. It was found that the use of 10% waste addition reduced the bulk density of the samples to 1.45 g/cm³. As the porosity increased from 30.8 to 47.0%, the compressive strength decreased from 36.9 to 10.26 MPa at a firing temperature of 950 °C. The thermal conductivity of samples fired at the same temperature showed a decrease of 31%, from 0.638 to 0.436 W/mK, which is promising for heat insulation in buildings. Increasing the firing temperature also affected their mechanical and physical properties. This study showed that olive mill waste could be used as a pore maker in brick production.

  16. A habitat suitability model for Chinese sturgeon determined using the generalized additive method

    NASA Astrophysics Data System (ADS)

    Yi, Yujun; Sun, Jie; Zhang, Shanghong

    2016-03-01

    The Chinese sturgeon is a type of large anadromous fish that migrates between the ocean and rivers. Because of the construction of dams, this sturgeon's migration path has been cut off, and the species is currently on the verge of extinction. Simulating suitable environmental conditions for spawning followed by repairing or rebuilding its spawning grounds are effective ways to protect this species. Various habitat suitability models based on expert knowledge have been used to evaluate the suitability of spawning habitat. In this study, a two-dimensional hydraulic simulation is used to inform a habitat suitability model based on the generalized additive method (GAM). The GAM is fitted to real observation data. The values of water depth and velocity are calculated first via the hydrodynamic model and later applied in the GAM. The final habitat suitability model is validated using the catch per unit effort (CPUEd) data of 1999 and 2003. The model results show that a velocity of 1.06-1.56 m/s and a depth of 13.33-20.33 m are highly suitable ranges for the Chinese sturgeon to spawn. The hydraulic habitat suitability indexes (HHSI) for seven discharges (4000; 9000; 12,000; 16,000; 20,000; 30,000; and 40,000 m³/s) are calculated to evaluate integrated habitat suitability. The results show that the integrated habitat suitability reaches its highest value at a discharge of 16,000 m³/s. This study is the first to apply a GAM to evaluate the suitability of spawning grounds for the Chinese sturgeon. The study provides a reference for the identification of potential spawning grounds in the entire basin.
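
    The following minimal sketch (assuming the pygam package is available) fits a generalized additive model of a habitat-use response to depth and velocity, in the spirit of the GAM described above; the data are synthetic and the spline terms, response, and suitability normalization are illustrative choices, not the authors' model.

```python
# Illustrative GAM-based habitat suitability sketch with synthetic data.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
depth = rng.uniform(5, 30, 300)               # m
velocity = rng.uniform(0.2, 2.5, 300)         # m/s
# Synthetic "observed use" peaking near 17 m depth and 1.3 m/s velocity
response = (np.exp(-((depth - 17) / 5) ** 2) *
            np.exp(-((velocity - 1.3) / 0.4) ** 2) +
            rng.normal(0, 0.05, 300))

X = np.column_stack([depth, velocity])
gam = LinearGAM(s(0) + s(1)).fit(X, response)  # smooth terms for depth and velocity

# Normalize the prediction to [0, 1] as a simple habitat suitability index
pred = gam.predict(X)
hsi = (pred - pred.min()) / (pred.max() - pred.min())
print("most suitable depth/velocity in the sample:", X[np.argmax(hsi)])
```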

  17. First-Grade Methods of Single-Digit Addition with Two or More Addends

    ERIC Educational Resources Information Center

    Guerrero, Shannon M.; Palomaa, Kimberly

    2012-01-01

    In an attempt to further understand connections between children's proficiency and development with single- and multidigit addition, this study investigated the conceptualizations and solution strategies of 26 first-graders presented with several single-digit, multiple addend addition problems. The changes in students' solution strategies over the…

  18. Analysis methods for the determination of anthropogenic additions of P to agricultural soils

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phosphorus additions and measurement in soil is of concern on lands where biosolids have been applied. Colorimetric analysis for plant-available P may be inadequate for the accurate assessment of soil P. Phosphate additions in a regulatory environment need to be accurately assessed as the reported...

  19. Effects of Factor XIII Deficiency on Thromboelastography. Thromboelastography with Calcium and Streptokinase Addition is more Sensitive than Solubility Tests

    PubMed Central

    Martinuzzo, M.; Barrera, L.; Altuna, D.; Baña, F. Tisi; Bieti, J.; Amigo, Q.; D’Adamo, M.; López, M.S.; Oyhamburu, J.; Otaso, J.C.

    2016-01-01

    Background Homozygous or double heterozygous factor XIII (FXIII) deficiency is characterized by soft tissue hematomas, intracranial and delayed spontaneous bleeding. Alterations of thromboelastography (TEG) parameters in these patients have been reported. The aim of the study was to show results of TEG and of TEG lysis (Lys 60) induced by subthreshold concentrations of streptokinase (SK), and to compare them with the clot solubility results in samples from a 1-year-old girl with homozygous or double heterozygous FXIII deficiency. Case A one-year-old girl with a history of bleeding from the umbilical cord. During her first year of life, several hematomas appeared in soft upper limb tissue after punctures for vaccination, as well as a gluteal hematoma. One additional sample from a heterozygous patient and three samples of acquired FXIII deficiency were also evaluated. Materials and Methods Clotting tests, von Willebrand factor (vWF) antigen and activity, and plasma FXIII-A subunit (pFXIII-A) were measured by an immunoturbidimetric assay in a photo-optical coagulometer. Solubility tests were performed with Ca2+-5 M urea and thrombin-2% acetic acid. Basal and post-FXIII concentrate infusion samples were studied. TEG was performed with CaCl2 or CaCl2 + SK (3.2 U/mL) in a Thromboelastograph. Results Prothrombin time (PT), activated partial thromboplastin time (APTT), thrombin time, fibrinogen, factor VIIIc, vWF, and platelet aggregation were normal. Antigenic pFXIII-A subunit was < 2%. TEG, evaluated at diagnosis and post FXIII concentrate infusion (pFXIII-A = 37%), presented a normal reaction time (R) of 8 min, a prolonged k (14 and 11 min, respectively), a low maximum amplitude (MA) (39 and 52 mm, respectively), and slightly increased clot lysis (Lys60) (23% and 30%, respectively). In the sample at diagnosis, clot solubility was abnormal, 50 and 45 min with Ca-urea and thrombin-acetic acid, respectively, but normal (>16 hours) 1 day post-FXIII infusion. Analysis of FXIII deficient and normal

  20. Effects of Factor XIII Deficiency on Thromboelastography. Thromboelastography with Calcium and Streptokinase Addition is more Sensitive than Solubility Tests

    PubMed Central

    Martinuzzo, M.; Barrera, L.; Altuna, D.; Baña, F. Tisi; Bieti, J.; Amigo, Q.; D’Adamo, M.; López, M.S.; Oyhamburu, J.; Otaso, J.C.

    2016-01-01

    Background Homozygous or double heterozygous factor XIII (FXIII) deficiency is characterized by soft tissue hematomas, intracranial and delayed spontaneous bleeding. Alterations of thromboelastography (TEG) parameters in these patients have been reported. The aim of the study was to show results of TEG and of TEG lysis (Lys 60) induced by subthreshold concentrations of streptokinase (SK), and to compare them with the clot solubility results in samples from a 1-year-old girl with homozygous or double heterozygous FXIII deficiency. Case A one-year-old girl with a history of bleeding from the umbilical cord. During her first year of life, several hematomas appeared in soft upper limb tissue after punctures for vaccination, as well as a gluteal hematoma. One additional sample from a heterozygous patient and three samples of acquired FXIII deficiency were also evaluated. Materials and Methods Clotting tests, von Willebrand factor (vWF) antigen and activity, and plasma FXIII-A subunit (pFXIII-A) were measured by an immunoturbidimetric assay in a photo-optical coagulometer. Solubility tests were performed with Ca2+-5 M urea and thrombin-2% acetic acid. Basal and post-FXIII concentrate infusion samples were studied. TEG was performed with CaCl2 or CaCl2 + SK (3.2 U/mL) in a Thromboelastograph. Results Prothrombin time (PT), activated partial thromboplastin time (APTT), thrombin time, fibrinogen, factor VIIIc, vWF, and platelet aggregation were normal. Antigenic pFXIII-A subunit was < 2%. TEG, evaluated at diagnosis and post FXIII concentrate infusion (pFXIII-A = 37%), presented a normal reaction time (R) of 8 min, a prolonged k (14 and 11 min, respectively), a low maximum amplitude (MA) (39 and 52 mm, respectively), and slightly increased clot lysis (Lys60) (23% and 30%, respectively). In the sample at diagnosis, clot solubility was abnormal, 50 and 45 min with Ca-urea and thrombin-acetic acid, respectively, but normal (>16 hours) 1 day post-FXIII infusion. Analysis of FXIII deficient and normal

  1. Bifurcated method and apparatus for floating point addition with decreased latency time

    DOEpatents

    Farmwald, Paul M.

    1987-01-01

    Apparatus for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.

  2. Generalized net analyte signal standard addition as a novel method for simultaneous determination: application in spectrophotometric determination of some pesticides.

    PubMed

    Asadpour-Zeynali, Karim; Saeb, Elhameh; Vallipour, Javad; Bamorowat, Mehdi

    2014-01-01

    Simultaneous spectrophotometric determination of three neonicotinoid insecticides (acetamiprid, imidacloprid, and thiamethoxam) by a novel method, named the generalized net analyte signal standard addition method (GNASSAM), was investigated in some binary and ternary synthetic mixtures. For this purpose, standard addition was performed using a single standard solution consisting of a mixture of the standards of all analytes. Savings in time and in the amount of materials used are among the advantages of this method. All determinations showed appropriate applicability of this method, with less than 5% error. This method may be applied to linearly dependent data in the presence of known interferents. The GNASSAM combines the advantages of both the generalized standard addition method and the net analyte signal; therefore, it may be a proper alternative to some other multivariate methods. PMID:24672886
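
    As a reminder of the building block that GNASSAM extends, the sketch below carries out the classical single-analyte standard addition calculation in Python: the unknown concentration is estimated from the fitted line of signal versus added standard. The data values are invented for illustration.

```python
# Classical single-analyte standard addition (illustrative data).
import numpy as np

added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # added standard, mg/L
signal = np.array([0.21, 0.33, 0.46, 0.57, 0.70])  # instrument response

slope, intercept = np.polyfit(added, signal, 1)
c_unknown = intercept / slope                       # minus the x-intercept
print(f"estimated analyte concentration: {c_unknown:.2f} mg/L")
```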

  3. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    ERIC Educational Resources Information Center

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…
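
    As an illustration of the polynomial regression model used in congruence research, the hedged sketch below fits the usual quadratic surface Z = b0 + b1 X + b2 Y + b3 X^2 + b4 XY + b5 Y^2 by least squares to simulated data in which the outcome peaks when X and Y are congruent; the data and coefficients are illustrative only.

```python
# Quadratic polynomial regression for congruence analysis (simulated data).
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=200)          # e.g., person attribute
Y = rng.normal(size=200)          # e.g., environment attribute
Z = 2 - 0.5 * (X - Y) ** 2 + rng.normal(0, 0.3, 200)   # outcome highest at congruence

D = np.column_stack([np.ones_like(X), X, Y, X**2, X * Y, Y**2])
coef, *_ = np.linalg.lstsq(D, Z, rcond=None)
print("b0..b5:", np.round(coef, 2))   # expect roughly b3 = -0.5, b4 = +1.0, b5 = -0.5
```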

  4. Method for determining formation quality factor from seismic data

    DOEpatents

    Taner, M. Turhan; Treitel, Sven

    2005-08-16

    A method is disclosed for calculating the quality factor Q from a seismic data trace. The method includes calculating a first and a second minimum phase inverse wavelet at a first and a second time interval along the seismic data trace, synthetically dividing the first wavelet by the second wavelet, Fourier transforming the result of the synthetic division, calculating the logarithm of this quotient of Fourier transforms and determining the slope of a best fit line to the logarithm of the quotient.
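
    The patented procedure above ends with fitting a line to the logarithm of a spectral quotient; the following sketch shows the closely related textbook spectral-ratio estimate of Q on synthetic spectra, where the slope of the log spectral ratio equals -pi*dt/Q. The wavelet spectrum, bandwidth, and travel time are assumptions for illustration, not values from the patent.

```python
# Spectral-ratio style Q estimation on synthetic amplitude spectra.
import numpy as np

Q_true, dt_travel = 80.0, 0.4            # quality factor, travel time between windows (s)
f = np.linspace(5, 60, 100)              # usable frequency band (Hz)
A1 = np.exp(-(f / 25.0) ** 2)            # reference amplitude spectrum (arbitrary wavelet)
A2 = A1 * np.exp(-np.pi * f * dt_travel / Q_true)   # attenuated spectrum

log_ratio = np.log(A2 / A1)
slope = np.polyfit(f, log_ratio, 1)[0]   # best-fit line through the log spectral ratio
Q_est = -np.pi * dt_travel / slope
print(f"estimated Q = {Q_est:.1f}")
```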

  5. An Inventory of Methods for the Assessment of Additive Increased Addictiveness of Tobacco Products

    PubMed Central

    van de Nobelen, Suzanne; Kienhuis, Anne S.

    2016-01-01

    Background: Cigarettes and other forms of tobacco contain the addictive drug nicotine. Other components, either naturally occurring in tobacco or additives that are intentionally added during the manufacturing process, may add to the addictiveness of tobacco products. As such, these components can make cigarette smokers more easily and heavily dependent. Efforts to regulate tobacco product dependence are emerging globally. Additives that increase tobacco dependence will be prohibited under the new European Tobacco Product Directive. Objective: This article provides guidelines and recommendations for developing a regulatory strategy for assessment of increase in tobacco dependence due to additives. Relevant scientific literature is summarized and criteria and experimental studies that can define increased dependence of tobacco products are described. Conclusions: Natural tobacco smoke is a very complex matrix of components, therefore analysis of the contribution of an additive or a combination of additives to the level of dependence on this product is challenging. We propose to combine different type of studies analyzing overall tobacco product dependence potential and the functioning of additives in relation to nicotine. By using a combination of techniques, changes associated with nicotine dependence such as behavioral, physiological, and neurochemical alterations can be examined to provide sufficient information. Research needs and knowledge gaps will be discussed and recommendations will be made to translate current knowledge into legislation. As such, this article aids in implementation of the Tobacco Product Directive, as well as help enable regulators and researchers worldwide to develop standards to reduce dependence on tobacco products. Implications: This article provides an overall view on how to assess tobacco product constituents for their potential contribution to use and dependence. It provides guidelines that help enable regulators worldwide to

  6. EVALUATION OF TWO METHODS FOR PREDICTION OF BIOACCUMULATION FACTORS

    EPA Science Inventory

    Two methods for deriving bioaccumulation factors (BAFs) used by the U.S. Environmental Protection Agency (EPA) in development of water quality criteria were evaluated using polychlorinated biphenyls (PCB) data from the Hudson River and Green Bay ecosystems. Greater than 90% of th...

  7. Methods and energy storage devices utilizing electrolytes having surface-smoothing additives

    SciTech Connect

    Xu, Wu; Zhang, Jiguang; Graff, Gordon L; Chen, Xilin; Ding, Fei

    2015-11-12

    Electrodeposition and energy storage devices utilizing an electrolyte having a surface-smoothing additive can result in self-healing, instead of self-amplification, of initial protuberant tips that give rise to roughness and/or dendrite formation on the substrate and anode surface. For electrodeposition of a first metal (M1) on a substrate or anode from one or more cations of M1 in an electrolyte solution, the electrolyte solution is characterized by a surface-smoothing additive containing cations of a second metal (M2), wherein cations of M2 have an effective electrochemical reduction potential in the solution lower than that of the cations of M1.

  8. Estimation of quality factors by energy ratio method

    NASA Astrophysics Data System (ADS)

    Wang, Zong-Jun; Cao, Si-Yuan; Zhang, Hao-Ran; Qu, Ying-Ming; Yuan, Dian; Yang, Jin-Hao; Shao, Guan-Ming

    2015-03-01

    The quality factor Q, which reflects the energy attenuation of seismic waves in subsurface media, is a diagnostic tool for hydrocarbon detection and reservoir characterization. In this paper, we propose a new Q extraction method based on the energy ratio before and after the wavelet attenuation, named the energy-ratio method (ERM). The proposed method uses multipoint signal data in the time domain to estimate the wavelet energy without invoking the source wavelet spectrum, which is necessary in conventional Q extraction methods, and is applicable to any source wavelet spectrum; however, it requires high-precision seismic data. Forward zero-offset VSP modeling suggests that the ERM can be used for reliable Q inversion after nonintrinsic attenuation (geometric dispersion, reflection, and transmission loss) compensation. The application to real zero-offset VSP data shows that the Q values extracted by the ERM and spectral ratio methods are identical, which proves the reliability of the new method.

  9. Methods of Measuring Vapor Pressures of Lubricants With Their Additives Using TGA and/or Microbalances

    NASA Technical Reports Server (NTRS)

    Scialdone, John J.; Miller, Michael K.; Montoya, Alex F.

    1996-01-01

    The life of a space system may be critically dependent on the lubrication of some of its moving parts. The vapor pressure, the quantity of available lubricant, the temperature and the exhaust venting conductance passage are important considerations in the selection and application of a lubricant. In addition, the oil additives employed to provide certain properties of low friction, surface tension, antioxidant and load-bearing characteristics are also very important and need to be known with regard to their amounts and vapor pressures. This paper reports on the measurements and analyses carried out to obtain those parameters for two often-employed lubricants, Apiezon(TM)-C and Krytox(TM) AB. The measurements were made employing an electronic microbalance and a thermogravimetric analyzer (TGA) modified to operate in a vacuum. The results have been compared to other data on these oils when available. The identification of the mass fractions of the additives in the oil and their vapor pressures as a function of temperature was carried out. These may be used to estimate the lubricant life given its quantity and the system vent exhaust conductance. It was found that Apiezon(TM)-C has three main components with different rates of evaporation, while Krytox(TM) did not indicate any measurable additive.

  10. Activity Approach to the Formation of the Method of Addition and Subtraction in Elementary Students

    ERIC Educational Resources Information Center

    Maksimov, L. K.; Maksimova, L. V.

    2013-01-01

    One of the main tasks in teaching mathematics to elementary students is to form calculating methods and techniques. The efforts of teachers and methodologists are aimed at solving this problem. Educational and psychological research is devoted to it. At the same time school teaching experience demonstrates some difficulties in learning methods of…

  11. MAGNETOMETRY, SELF-POTENTIAL, AND SEISMIC - ADDITIONAL GEOPHYSICAL METHODS HAVING POTENTIALLY SIGNIFICANT FUTURE UTILIZATION IN AGRICULTURE

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Geophysical methods can provide important information in agricultural settings, and the use of these techniques is becoming more and more widespread. Magnetometry, self-potential, and seismic are three geophysical methods, all of which have the potential for substantial future use in agriculture, ...

  12. Intermediate boundary conditions for LOD, ADI and approximate factorization methods

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.

    1985-01-01

    A general approach to determining the correct intermediate boundary conditions for dimensional splitting methods is presented. The intermediate solution U* is viewed as a second-order accurate approximation to a modified equation. Deriving the modified equation and using the relationship between this equation and the original equation allows us to determine the correct boundary conditions for U*. This technique is illustrated by applying it to locally one-dimensional (LOD) and alternating direction implicit (ADI) methods for the heat equation in two and three space dimensions. The approximate factorization method is considered in slightly more generality.
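
    As a concrete reference point for the splitting methods discussed above, the sketch below advances the 2D heat equation one step with a locally one-dimensional (LOD) backward-Euler splitting on homogeneous Dirichlet boundaries, using the naive zero intermediate boundary values that the paper's analysis refines; the grid size and time step are arbitrary illustrative choices.

```python
# One LOD backward-Euler step for u_t = u_xx + u_yy with zero Dirichlet boundaries.
import numpy as np

n, dt = 49, 1e-3                               # interior points per direction, time step
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
U = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))   # initial condition

# 1D second-difference operator on interior points (Dirichlet boundaries)
D = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
A = np.eye(n) - dt * D                          # implicit operator for each 1D sweep

U_star = np.linalg.solve(A, U)                  # implicit sweep along the first direction
U_new = np.linalg.solve(A, U_star.T).T          # implicit sweep along the second direction

# For this initial condition the exact solution decays like exp(-2*pi^2*t)
print(U_new.max(), np.exp(-2 * np.pi**2 * dt) * U.max())
```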

  13. Combustion sources of particles: 2. Emission factors and measurement methods.

    PubMed

    Zhang, Junfeng Jim; Morawska, Lidia

    2002-12-01

    Emissions from the combustion of biomass and fossil fuels are a significant source of particulate matter (PM) in ambient outdoor and/or indoor air. It is important to quantify PM emissions from combustion sources for regulatory and control purposes in relation to air quality. In this paper, we review emission factors for several types of important combustion sources: road transport, industrial facilities, small household combustion devices, environmental tobacco smoke, and vegetation burning. We also review current methods for measuring particle physical characteristics (mass and number concentrations) and principles of methodologies for measuring emission factors. The emission factors can be measured on a fuel-mass basis and/or a task basis. Fuel-mass based emission factors (e.g., g/kg of fuel) can be readily used for the development of emission inventories when the amount of fuels consumed are known. Task-based emission factors (g/mile driven, g/MJ generated) are more appropriate when used to conduct comparisons of air pollution potentials of different combustion devices. Finally, we discuss major shortcomings and limitations of current methods for measuring particle emissions and present recommendations for development of future measurement techniques. PMID:12492165
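
    To make the distinction between the two emission-factor bases concrete, the following toy calculation converts a fuel-mass based factor into a task-based factor using a hypothetical fuel-consumption rate; both numbers are invented, not values from the review.

```python
# Hypothetical conversion between fuel-mass based and task-based emission factors.
ef_fuel_based = 0.05        # g PM per kg fuel burned (invented)
fuel_per_mile = 0.08        # kg fuel consumed per mile driven (invented)
ef_task_based = ef_fuel_based * fuel_per_mile
print(f"{ef_task_based:.4f} g PM per mile")
```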

  14. Effect of method of heterogenization of ephedrine and reaction conditions on the enantioselectivity of Michael additions

    SciTech Connect

    Krotov, V.V.; Staroverov, S.M.; Nesterenko, P.N.; Lisichkin, G.V.

    1987-11-10

    A series of heterogeneous catalysts for asymmetric Michael additions was synthesized based on ephedrine chemically bound to the surface of silica. The length of the hydrocarbon chain binding the active center to the support surface affects the sign of rotation of the reaction product from the asymmetric addition of thiophenol to benzylideneacetophenone. Grafting ephedrine to the silica surface via a short hydrocarbon chain results in a change in the configuration of the reaction product. Silanol groups on the silica surface are involved in the transition state, as evidenced by data obtained using silica which has been exhaustively treated with trimethylchlorosilane. The absolute specific rotation of 1,3-diphenyl-3-thiophenylpropan-1-one has been established.

  15. Low edge safety factor operation and passive disruption avoidance in current carrying plasmas by the addition of stellarator rotational transform

    NASA Astrophysics Data System (ADS)

    Pandya, M. D.; ArchMiller, M. C.; Cianciosa, M. R.; Ennis, D. A.; Hanson, J. D.; Hartwell, G. J.; Hebert, J. D.; Herfindal, J. L.; Knowlton, S. F.; Ma, X.; Massidda, S.; Maurer, D. A.; Roberds, N. A.; Traverso, P. J.

    2015-11-01

    Low edge safety factor operation at a value less than two (q(a) = 1/ι̷_tot(a) < 2) is routine on the Compact Toroidal Hybrid device with the addition of sufficient external rotational transform. Presently, the operational space of this current-carrying stellarator extends down to q(a) = 1.2 without significant n = 1 kink mode activity after the initial plasma current rise phase of the discharge. The disruption dynamics of these low edge safety factor plasmas depend upon the fraction of helical field rotational transform from external stellarator coils relative to that generated by the plasma current. We observe that with approximately 10% of the total rotational transform supplied by the stellarator coils, low edge q disruptions are passively suppressed and avoided even though q(a) < 2. When the plasma does disrupt, the instability precursors measured and implicated as the cause are internal tearing modes with poloidal (m) and toroidal (n) helical mode numbers of m/n = 3/2 and 4/3 observed on external magnetic sensors, and m/n = 1/1 activity observed on core soft x-ray emissivity measurements. Even though the edge safety factor passes through and becomes much less than two, external n = 1 kink mode activity does not appear to play a significant role in the disruption phenomenology observed.

  16. The factorization method for the acoustic transmission problem

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Konstantinos A.; Charalambopoulos, Antonios; Kleefeld, Andreas

    2013-11-01

    In this work, the shape reconstruction problem for acoustically penetrable bodies from far-field data corresponding to time-harmonic plane wave incidence is investigated within the framework of the factorization method. Although the latter technique has received considerable attention in inverse scattering problems dealing with impenetrable scatterers, it has not been elaborated for inverse transmission problems, with the only exception being a work by the first two authors and co-workers. We aim to bridge this gap in the field of acoustic scattering; the paper focuses on establishing rigorously the necessary theoretical framework for the application of the factorization method to the inverse acoustic transmission problem. The main outcome of the investigation is the derivation of an explicit formula for the scatterer's characteristic function, which depends solely on the far-field data feeding the inverse scattering scheme. Extended numerical examples in three dimensions are also presented, in which a variety of different surfaces are successfully reconstructed by the factorization method, thus complementing the method's validation from the computational point of view.

  17. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    PubMed

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
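
    The core of the algorithm described above is finding the physical state nearest (in the 2-norm) to a Hermitian, unit-trace candidate matrix. The sketch below implements that step as the standard projection of the eigenvalue vector onto the probability simplex; it follows the general idea of the record but is not claimed to be the paper's exact linear-time routine.

```python
# Project a Hermitian, unit-trace candidate matrix onto the set of density matrices.
import numpy as np

def nearest_density_matrix(mu):
    """mu: Hermitian matrix with trace 1 (possibly with negative eigenvalues)."""
    evals, evecs = np.linalg.eigh(mu)
    # Project the eigenvalue vector onto the probability simplex {p >= 0, sum p = 1}
    lam = np.sort(evals)[::-1]
    csum = np.cumsum(lam)
    rho_idx = np.nonzero(lam + (1 - csum) / np.arange(1, len(lam) + 1) > 0)[0][-1]
    shift = (1 - csum[rho_idx]) / (rho_idx + 1)
    p = np.maximum(evals + shift, 0)
    # Rebuild the density matrix with the clipped eigenvalues
    return (evecs * p) @ evecs.conj().T

# Example: a noisy single-qubit candidate with one negative eigenvalue
mu = np.array([[1.1, 0.2], [0.2, -0.1]])
rho = nearest_density_matrix(mu)
print(np.linalg.eigvalsh(rho), np.trace(rho))   # nonnegative eigenvalues, trace 1
```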

  18. Intercomparison of selected fixed-area areal reduction factor methods

    NASA Astrophysics Data System (ADS)

    Pavlovic, Sandra; Perica, Sanja; St Laurent, Michael; Mejía, Alfonso

    2016-06-01

    The areal reduction factor (ARF) is a concept used in many hydrologic designs to transform a point precipitation frequency estimate of a given duration and frequency to a corresponding areal estimate. Various methods have been proposed in the literature to calculate ARFs. Proposed ARFs could vary significantly, and it is unclear if discrepancies are primarily due to differences in methodologies, the dissimilar datasets used to calculate ARFs, or if they originate from regional uniqueness. Our goal in this study is to analyze differences among ARFs derived from different types of fixed-area ARF methods, which are suitable for use with precipitation frequency estimates. For this intercomparison, all the ARFs were computed using the same, high-quality rainfall-radar merged dataset for a common geographic region. The selected ARF methods represent four commonly used approaches: empirical methods, methods that are based on the spatial correlation structure of rainfall, methods that rely on the scaling properties of rainfall in space and time, and methods that utilize extreme value theory. The state of Oklahoma was selected as the study area, as it has good-quality radar data and a dense network of rain gauges. Results indicate significant uncertainties in the ARF estimates, regardless of the method used. Even when calculated from the same dataset and for the same geographic area, the ARF estimates from the selected methods differ. The differences are more pronounced for the shorter durations and larger areas. Results also indicate some ARF dependence on the average recurrence intervals.
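
    As an illustration of the empirical flavour of fixed-area ARF estimation, the sketch below forms areal and point annual-maximum series from a gridded record and takes the ratio of their means. The gamma-distributed demo data and the specific ratio-of-means definition are assumptions for the example, not the exact procedure of any of the four methods compared in the study.

```python
import numpy as np

def empirical_fixed_area_arf(precip, steps_per_year):
    """Illustrative fixed-area ARF: ratio of the mean areal annual maximum to
    the mean point annual maximum over the same area.

    precip: 2-D array (time, n_cells) of precipitation depths of a fixed
            duration for the area of interest.
    """
    n_years = precip.shape[0] // steps_per_year
    blocks = precip[: n_years * steps_per_year].reshape(n_years, steps_per_year, -1)
    # Areal series: spatial mean at each time step, then annual maxima.
    areal_annual_max = blocks.mean(axis=2).max(axis=1)
    # Point series: annual maxima cell by cell.
    point_annual_max = blocks.max(axis=1)          # (years, cells)
    return areal_annual_max.mean() / point_annual_max.mean()

# Synthetic demo: 20 "years" of hourly depths over 25 grid cells.
rng = np.random.default_rng(0)
demo = rng.gamma(shape=0.3, scale=2.0, size=(20 * 8760, 25))
print(round(empirical_fixed_area_arf(demo, 8760), 3))
```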

  19. Comparison between evaluation methods from sun protection factors.

    PubMed

    Martini, M C

    1986-10-01

    Objective methods for evaluation of Sun Protection Factors (SPF) are numerous. Only the most used methods, both in vitro and in vivo, will be described. The results obtained with different types of spectrophotometric methods (solution, thin layer over quartz slides, or measurement of transmittance and diffusion after coating the stratum corneum with emulsions) show that only the last method, which involves an integrating sphere, is able to give data in good correlation with in vivo sun protection factors. Among in vivo methods, the animal of choice is the albino guinea pig, because of its sensitivity and erythematous reactions similar to those of human skin. Nevertheless, this method is only reliable for product screening, and true SPF values must be determined on humans. Two official methods, the American (FDA) and the German (DIN 67501), are described with their advantages and disadvantages. Finally, a new method, which is a combination of these two methods, is proposed. Twenty people are irradiated by a xenon lamp which emits about 0.60 mW/cm² of UVB and 3.5 mW/cm² of UVA and IR, sufficient to obtain a skin surface temperature of 35 degrees C. The product is applied on the back of volunteers in a quantity of 1 mg/cm². Test zones have a surface of 2.25 cm². Irradiation begins 10 min after application of the product and the exposure times are increased from zone to zone following a geometric progression with a ratio of 1.25. Two standard preparations are used, one with SPF = 4, the other with SPF = 9-10. Erythema is evaluated visually 16 to 24 h after irradiation. Each SPF is determined using the classical ratio of MED with sunscreen to MED without sunscreen, and the geometric mean is calculated to obtain the definitive value of the SPF. PMID:19457219

  20. Method for the addition of vulcanized waste rubber to virgin rubber products

    DOEpatents

    Romine, Robert A.; Snowden-Swan, Lesley J.

    1997-01-01

    The invention is a method of using enzymes from thiophyllic microbes for selectively breaking the sulfur rubber cross-link bonds in vulcanized rubber. The process is halted at the sulfoxide or sulfone step so that a devulcanized layer is reactive with virgin rubber.

  1. Method for the addition of vulcanized waste rubber to virgin rubber products

    DOEpatents

    Romine, R.A.; Snowden-Swan, L.J.

    1997-01-28

    The invention is a method of using enzymes from thiophyllic microbes for selectively breaking the sulfur rubber cross-link bonds in vulcanized rubber. The process is halted at the sulfoxide or sulfone step so that a devulcanized layer is reactive with virgin rubber. 8 figs.

  2. On methods for bounding the overall properties of nonlinear composites: Correction and addition

    NASA Astrophysics Data System (ADS)

    Willis, J. R.

    WILLIS (J. Mech. Phys. Solids 39, 73, 1991) concluded that a new bounding method for nonlinear composites, presented by PONTE CASTAÑEDA (J. Mech. Phys. Solids 39, 45, 1991), was equivalent to an earlier method which employed a nonlinear generalization of the Hashin-Shtrikman variational principle. This conclusion was reached by first showing that the nonlinear Hashin-Shtrikman bound is at least as good as the new bound and then that the new bound is at least as good as the older one. A fallacy in the latter part of this demonstration is exposed by considering a simple one-dimensional counter-example, corresponding to a nonlinear laminate. The conditions for coincidence identified by WILLIS (1991) are incomplete through failure to require explicitly that a stationary point defined by them yields a global minimum. Several cases have been studied previously, for which the two methods do yield the same bound; when they do, Ponte Castañeda's procedure has the potential to give an improvement by the use at an intermediate stage of an improved bound for a linear composite. When the methods yield different bounds, however, that produced by the nonlinear Hashin-Shtrikman procedure is the better.

  3. [ELISA method for the determination of factor VII antigen].

    PubMed

    Jorquera, J I; Aznar, J A; Monteagudo, J; Montoro, J M; Casaña, P; Pascual, I; Bañuls, E; Curats, R; Llopis, F

    1989-12-01

    The low plasma concentration of clotting factor VII makes it difficult to assay its antigenic fraction by the conventional methods of precipitation with specific antisera. Simple and peroxidase-conjugated antisera are currently available from commercial sources, thus allowing one to determine F VII:Ag by enzyme immunoassay. An ELISA method has been developed in this laboratory which provides sensitivity limits of about 0.1% of the plasma concentration of F VII and correlates significantly with its functional activity (r = 0.603, n = 44, p less than 0.001). This technique can be highly helpful in characterising molecular variants of F VII, as well as in detecting acquired deficiencies of this factor.

  4. Method to characterize collective impact of factors on indoor air

    NASA Astrophysics Data System (ADS)

    Szczurek, Andrzej; Maciejewska, Monika; Teuerle, Marek; Wyłomańska, Agnieszka

    2015-02-01

    One of the most important problems in studies of the building environment is a description of how it is influenced by various dynamically changing factors. In this paper we characterized the joint impact of a collection of factors on indoor air quality (IAQ). We assumed that the influence is reflected in the temporal variability of IAQ parameters and may be deduced from it. The proposed method utilizes mean square displacement (MSD) analysis, which was originally developed for studying the dynamics in various systems. Based on the MSD time-dependence descriptor β, we distinguished three types of the collective impact of factors on IAQ: retarding, stabilizing and promoting. We presented how the aggregated factors influence the temperature, relative humidity and CO2 concentration, as these parameters are informative for the condition of indoor air. We found that, during a model day, one, two or even three types of influence may be encountered. The presented method allows us to study the impacts from the perspective of the dynamics of indoor air.
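
    A minimal sketch of the MSD descriptor is given below: it estimates β as the slope of log MSD versus log lag for a single IAQ time series. The lag range, the synthetic random-walk CO2 series, and the reading of β below, near, or above one as retarding, stabilizing or promoting are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def msd_beta(x, max_lag):
    """Estimate the MSD scaling exponent beta for a 1-D time series x,
    assuming MSD(tau) ~ tau**beta over lags 1..max_lag."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])
    beta, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return beta

# Example: a CO2-like random-walk series gives beta close to 1.
rng = np.random.default_rng(1)
co2 = 400.0 + np.cumsum(rng.normal(0.0, 0.5, size=2000))
print(round(msd_beta(co2, 50), 2))
```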

  5. Methods for determining radionuclide retardation factors: status report

    SciTech Connect

    Relyea, J.F.; Serne, R.J.; Rai, D.

    1980-04-01

    This report identifies a number of mechanisms that retard radionuclide migration, and describes the static and dynamic methods that are used to study such retardation phenomena. Both static and dynamic methods are needed for reliable safety assessments of underground nuclear-waste repositories. This report also evaluates the extent to which the two methods may be used to diagnose radionuclide migration through various types of geologic media, among them unconsolidated, crushed, intact, and fractured rocks. Adsorption is one mechanism that can control radionuclide concentrations in solution and therefore impede radionuclide migration. Other mechanisms that control a solution's radionuclide concentration and radionuclide migration are precipitation of hydroxides and oxides, oxidation-reduction reactions, and the formation of minerals that might include the radionuclide as a structural element. The retardation mechanisms mentioned above are controlled by such factors as surface area, cation exchange capacity, solution pH, chemical composition of the rock and of the solution, oxidation-reduction potential, and radionuclide concentration. Rocks and ground waters used in determining retardation factors should represent the expected equilibrium conditions in the geologic system under investigation. Static test methods can be used to rapidly screen the effects of the factors mentioned above. Dynamic (or column) testing is needed to assess the effects of hydrodynamics and the interaction of hydrodynamics with the other important parameters. This paper proposes both a standard method for conducting batch Kd determinations and a standard format for organizing and reporting data. Dynamic testing methods are not presently developed to the point that a standard methodology can be proposed. Normal procedures are outlined for column experimentation, and the data that are needed to analyze a column experiment are identified.
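
    To connect the batch Kd values discussed above to transport calculations, the small sketch below evaluates the classical retardation factor R = 1 + (ρ_b/θ)·Kd for saturated porous media. The formula and the example numbers are standard textbook assumptions, not values or prescriptions taken from the report.

```python
def retardation_factor(kd_ml_per_g, bulk_density_g_per_cm3, porosity):
    """Classical retardation factor for saturated porous media:
    R = 1 + (rho_b / theta) * Kd, with Kd in mL/g (= cm3/g)."""
    return 1.0 + (bulk_density_g_per_cm3 / porosity) * kd_ml_per_g

# Example: Kd = 10 mL/g, bulk density 1.6 g/cm3, porosity 0.3 -> R ~ 54.
print(retardation_factor(10.0, 1.6, 0.3))
```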

  6. Evaluation of SHM System Produced by Additive Manufacturing via Acoustic Emission and Other NDT Methods

    PubMed Central

    Strantza, Maria; Aggelis, Dimitrios G.; de Baere, Dieter; Guillaume, Patrick; van Hemelrijck, Danny

    2015-01-01

    During the last decades, structural health monitoring (SHM) systems have been used in order to detect damage in structures. We have developed a novel structural health monitoring approach, the so-called “effective structural health monitoring” (eSHM) system. The current SHM system is incorporated into a metallic structure by means of additive manufacturing (AM) and has the potential to advance life safety and reduce direct operating costs. It operates based on a network of capillaries that are integrated into an AM structure. The internal pressure of the capillaries is continuously monitored by a pressure sensor. When a crack nucleates and reaches the capillary, the internal pressure changes, signifying the existence of the flaw. The main objective of this paper is to evaluate the crack detection capacity of the eSHM system and crack location accuracy by means of various non-destructive testing (NDT) techniques. During this study, detailed acoustic emission (AE) analysis was applied in AM materials for the first time in order to investigate if phenomena like the Kaiser effect and waveform parameters used in conventional metals can offer valuable insight into the damage accumulation of the AM structure as well. Liquid penetrant inspection, eddy current and radiography were also used in order to confirm the fatigue damage and indicate the damage location on un-notched four-point bending AM metallic specimens with an integrated eSHM system. It is shown that the eSHM system in combination with NDT can provide correct information on the damage condition of additive manufactured metals. PMID:26506349

  7. Evaluation of SHM system produced by additive manufacturing via acoustic emission and other NDT methods.

    PubMed

    Strantza, Maria; Aggelis, Dimitrios G; de Baere, Dieter; Guillaume, Patrick; van Hemelrijck, Danny

    2015-01-01

    During the last decades, structural health monitoring (SHM) systems have been used in order to detect damage in structures. We have developed a novel structural health monitoring approach, the so-called "effective structural health monitoring" (eSHM) system. The current SHM system is incorporated into a metallic structure by means of additive manufacturing (AM) and has the potential to advance life safety and reduce direct operating costs. It operates based on a network of capillaries that are integrated into an AM structure. The internal pressure of the capillaries is continuously monitored by a pressure sensor. When a crack nucleates and reaches the capillary, the internal pressure changes, signifying the existence of the flaw. The main objective of this paper is to evaluate the crack detection capacity of the eSHM system and crack location accuracy by means of various non-destructive testing (NDT) techniques. During this study, detailed acoustic emission (AE) analysis was applied in AM materials for the first time in order to investigate if phenomena like the Kaiser effect and waveform parameters used in conventional metals can offer valuable insight into the damage accumulation of the AM structure as well. Liquid penetrant inspection, eddy current and radiography were also used in order to confirm the fatigue damage and indicate the damage location on un-notched four-point bending AM metallic specimens with an integrated eSHM system. It is shown that the eSHM system in combination with NDT can provide correct information on the damage condition of additive manufactured metals.

  8. A Method to Evaluate Additional Waste Forms to Optimize Performance of the HLW Repository

    SciTech Connect

    D. Gombert; L. Lauerhass

    2006-02-01

    The DOE high-level waste (HLW) disposal system is based on decisions made in the 1970s. The de facto Yucca Mountain WAC for HLW, contained in the Waste Acceptance System Requirements Document (WASRD), and the DOE-EM Waste Acceptance Product Specification for Vitrified High Level Waste Forms (WAPS) tentatively describe waste forms to be interred in the repository, and limit them to borosilicate glass (BSG). It is known that many developed waste forms are as durable as or better than environmental assessment or “EA” glass. Among them are the salt-ceramic and metallic waste forms developed at ANL-W. Also, iron phosphate glasses developed at the University of Missouri show promise in stabilizing the most refractory materials in Hanford HLW. However, for any of this science to contribute, the current Total System Performance Assessment model must be able to evaluate the additional waste form to determine potential impacts on repository performance. The results can then support the technical bases required in the repository license application. A methodology is proposed to use existing analysis models to evaluate potential additional waste forms for disposal without gathering costly material-specific degradation data. The concept is to analyze the potential impacts of waste form chemical makeup on repository performance assuming instantaneous waste matrix dissolution. This assumption obviates the need for material-specific degradation models and is based on the relatively modest fractional contribution DOE HLW makes to the repository radionuclide and hazardous metals inventory. The existing analysis models, with appropriate data modifications, are used to evaluate geochemical interactions and material transport through the repository. This methodology would support early screening of proposed waste forms through simplified evaluation of disposal performance, and would provide preliminary guidance for repository license amendment in the future.

  9. [High Throughput Screening Analysis of Preservatives and Sweeteners in Carbonated Beverages Based on Improved Standard Addition Method].

    PubMed

    Wang, Su-fang; Liu, Yun; Gong, Li-hua; Dong, Chun-hong; Fu, De-xue; Wang, Guo-qing

    2016-02-01

    Simulated water samples of 3 kinds of preservatives and 4 kinds of sweeteners were formulated by using an orthogonal design. Kernel independent component analysis (KICA) was used to process the UV spectra of the simulated water samples and of the beverages spiked with different amounts of the additive standards; the independent components (ICs), i.e. the UV spectral profiles of the additives, and the ICs' coefficient matrices were then used to establish a UV-KICA-SVR prediction model for the simulated preservative and sweetener solutions using support vector regression (SVR) analysis. The standard-added beverage samples were obtained by adding different amount levels of additives to carbonated beverages, and their UV spectra were processed by KICA to obtain the IC information representing the additives and the other sample matrix components; the sample background can be deducted by removing the corresponding IC, and the remaining ICs' coefficient matrices were used to estimate the amounts of the additives in the standard-added beverage samples based on the UV-KICA-SVR model, while the intercept of the linear regression of the predicted amounts against the added amounts in the standard-added samples gives the additive content in the raw beverage sample. By utilizing a chemometric "blind source separation" method to extract the IC information of the tested additives in the beverage and the other sample matrix components, and using SVR regression modeling to improve the traditional standard addition method, a new method was proposed for the screening of preservatives and sweeteners in carbonated beverages. The proposed UV-KICA-SVR method can be used to determine 3 kinds of preservatives and 4 kinds of sweeteners in carbonated beverages, with limits of detection (LOD) in the range of 0.2-1.0 mg · L⁻¹, which are comparable to those of the traditional high performance liquid chromatographic (HPLC) method. PMID:27209754
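
    The final regression step of this improved standard addition approach can be illustrated as below: the amounts predicted by the chemometric model for the standard-added samples are regressed on the spiked amounts, and the intercept estimates the content of the raw beverage. The numbers are placeholders, and the KICA/SVR modelling itself is not reproduced.

```python
import numpy as np

# Illustrative final step: predicted amounts come from the chemometric model;
# the intercept of predicted-versus-added gives the content in the raw sample.
added = np.array([0.0, 2.0, 4.0, 6.0, 8.0])          # mg/L spiked
predicted = np.array([3.1, 5.0, 7.2, 8.9, 11.1])     # mg/L from the model (made up)
slope, intercept = np.polyfit(added, predicted, 1)
print(f"estimated content in raw sample: {intercept:.2f} mg/L")
```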

  10. [High Throughput Screening Analysis of Preservatives and Sweeteners in Carbonated Beverages Based on Improved Standard Addition Method].

    PubMed

    Wang, Su-fang; Liu, Yun; Gong, Li-hua; Dong, Chun-hong; Fu, De-xue; Wang, Guo-qing

    2016-02-01

    Simulated water samples of 3 kinds of preservatives and 4 kinds of sweeteners were formulated by using an orthogonal design. Kernel independent component analysis (KICA) was used to process the UV spectra of the simulated water samples and of the beverages spiked with different amounts of the additive standards; the independent components (ICs), i.e. the UV spectral profiles of the additives, and the ICs' coefficient matrices were then used to establish a UV-KICA-SVR prediction model for the simulated preservative and sweetener solutions using support vector regression (SVR) analysis. The standard-added beverage samples were obtained by adding different amount levels of additives to carbonated beverages, and their UV spectra were processed by KICA to obtain the IC information representing the additives and the other sample matrix components; the sample background can be deducted by removing the corresponding IC, and the remaining ICs' coefficient matrices were used to estimate the amounts of the additives in the standard-added beverage samples based on the UV-KICA-SVR model, while the intercept of the linear regression of the predicted amounts against the added amounts in the standard-added samples gives the additive content in the raw beverage sample. By utilizing a chemometric "blind source separation" method to extract the IC information of the tested additives in the beverage and the other sample matrix components, and using SVR regression modeling to improve the traditional standard addition method, a new method was proposed for the screening of preservatives and sweeteners in carbonated beverages. The proposed UV-KICA-SVR method can be used to determine 3 kinds of preservatives and 4 kinds of sweeteners in carbonated beverages, with limits of detection (LOD) in the range of 0.2-1.0 mg · L⁻¹, which are comparable to those of the traditional high performance liquid chromatographic (HPLC) method.

  11. The Etiology of Presbyopia, Contributing Factors, and Future Correction Methods

    NASA Astrophysics Data System (ADS)

    Hickenbotham, Adam Lyle

    Presbyopia has been a complicated problem for clinicians and researchers for centuries. Defining what constitutes presbyopia and what its primary causes are has long been a struggle for the vision and scientific community. Although presbyopia is a normal aging process of the eye, the continuous and gradual loss of accommodation is often dreaded and feared. If presbyopia were to be considered a disease, its global burden would be enormous as it affects more than a billion people worldwide. In this dissertation, I explore factors associated with presbyopia and develop a model for explaining the onset of presbyopia. In this model, the onset of presbyopia is associated primarily with three factors: depth of focus, focusing ability (accommodation), and habitual reading (or task) distance. If any of these three factors could be altered sufficiently, the onset of presbyopia could be delayed or prevented. Based on this model, I then examine possible optical methods that would be effective in correcting for presbyopia by expanding depth of focus. Two methods that have been shown to be effective at expanding depth of focus include utilizing a small pupil aperture or generating higher order aberrations, particularly spherical aberration. I compare these two optical methods through the use of simulated designs, monitor testing, and visual performance metrics and then apply them in subjects through an adaptive optics system that corrects aberrations through a wavefront aberrometer and deformable mirror. I then summarize my findings and speculate about the future of presbyopia correction.

  12. Investigation in the use of plasma arc welding and alternative feedstock delivery method in additive manufacture

    NASA Astrophysics Data System (ADS)

    Alhuzaim, Abdullah F.

    The work conducted for this thesis was to investigate the use of plasma arc welding (PAW) and steel shot as a means of additive manufacturing. A robotic PAW system and automatic shot feeder were used to manufacture linear walls approximately 100 mm long by 7 mm wide and 20 mm tall. The walls were built, layer-by-layer, on plain carbon steel substrate by adding individual 2.5 mm diameter plain carbon steel shot. Each layer was built, shot-by-shot, using a pulse of arc current to form a molten pool on the deposit into which each shot was deposited and melted. The deposition rate, a measure of productivity, was approximately 50 g/hour. Three walls were built using the same conditions except for the deposit preheat temperature prior to adding each new layer. The deposit preheat temperature was controlled by allowing the deposit to cool after each layer for an amount of time called the inter-layer wait time. The walls were sectioned and grain size and hardness distribution were measured as a function of wall height. The results indicated that, for all specimens, deposit grain size increased and hardness decreased as wall height increased. Furthermore, average grain size decreased and hardness increased as interlayer wait time increased. An analytical heat flow model was developed to study the influence of interlayer wait time on deposit temperature and therefore grain size and hardness. The results of the model indicated that as wall height increased, the rate of deposit heat removal by conduction to the substrate decreased leading to a higher preheat temperature after a fixed interlayer wait time causing grain size to increase as wall height increased. However, the model results also show that as wall height increased, the deposit surface area from which heat energy is lost via convection and radiation increased. The model also demonstrated that the use of a means of forced convection to rapidly remove heat from the deposit could be an effective way to boost

  13. New Laboratory Methods for Characterizing the Immersion Factors for Irradiance

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Zibordi, Giuseppe; D'Alimonte, Davide; van der Linde, Dirk; Brown, James W.

    2003-01-01

    The experimental determination of the immersion factor, I_f(λ), of irradiance collectors is a requirement of any in-water radiometer. The eighth SeaWiFS Intercalibration Round-Robin Experiment (SIRREX-8) showed different implementations, at different laboratories, of the same I_f(λ) measurement protocol. The different implementations make use of different setups, volumes, and water types. Consequently, they exhibit different accuracies and require different execution times for characterizing an irradiance sensor. In view of standardizing the characterization of I_f(λ) values for in-water radiometers, together with an increase in the accuracy of methods and a decrease in the execution time, alternative methods are presented, and assessed versus the traditional method. The proposed new laboratory methods include: a) the continuous method, in which optical measurements taken with discrete water depths are substituted by continuous profiles created by removing the water from the water vessel at a constant flow rate (which significantly reduces the time required for the characterization of a single radiometer); and b) the Compact Portable Advanced Characterization Tank (ComPACT) method, in which the commonly used large tanks are replaced by a small water vessel, thereby allowing the determination of I_f(λ) values with a small water volume, and more importantly, permitting I_f(λ) characterizations with pure water. Intercomparisons between the continuous and the traditional method showed results within the variance of I_f(λ) determinations. The use of the continuous method, however, showed a much shorter realization time. Intercomparisons between the ComPACT and the traditional method showed generally higher I_f(λ) values for the former. This is in agreement with the generalized expectations of a reduction in scattering effects, because of the use of pure water with the ComPACT method versus the use of

  14. A method to approximate the inverse of a part of the additive relationship matrix.

    PubMed

    Faux, P; Gengler, N

    2015-06-01

    Single-step genomic predictions need the inverse of the part of the additive relationship matrix between genotyped animals (A22). Gains in computing time are feasible with an algorithm that sets up the sparsity pattern of A22⁻¹ (SP algorithm) using pedigree searches, when A22⁻¹ is close to sparse. The objective of this study is to present a modification of the SP algorithm (RSP algorithm) and to assess its use in approximating A22⁻¹ when the actual A22⁻¹ is dense. The RSP algorithm sets up a restricted sparsity pattern of A22⁻¹ by limiting the pedigree search to a maximum number of searched branches. We have tested its use on four different simulated genotyped populations, from 10 000 to 75 000 genotyped animals. Accuracy of approximation is tested by replacing the actual A22⁻¹ by its approximation in an equivalent mixed model including only genotyped animals. Results show that limiting the pedigree search to four branches is enough to provide accurate approximations of A22⁻¹, which contain approximately 80% zeros. Computing approximations is not expensive in time but may require a great amount of memory (at maximum, approximately 81 min and approximately 55 GB of RAM for 75 000 genotyped animals using parallel processing on four threads). PMID:25560252

  15. Simultaneous determination of antazoline and naphazoline by the net analyte signal standard addition method and spectrophotometric technique.

    PubMed

    Asadpour-Zeynali, Karim; Ghavami, Raoof; Esfandiari, Roghayeh; Soheili-Azad, Payam

    2010-01-01

    A novel net analyte signal standard addition method (NASSAM) was used for the simultaneous determination of the drugs antazoline and naphazoline. The NASSAM can be applied for the determination of analytes in the presence of known interferents. The proposed method eliminates the calibration and prediction steps of multivariate calibration methods; the determination is carried out in a single step for each analyte. In contrast to the H-point standard addition method, the accuracy of the predictions is independent of the shape of the analyte and interferent spectra. The net analyte signal concept was also used to calculate multivariate analytical figures of merit, such as LOD, selectivity, and sensitivity. The method was successfully applied to the simultaneous determination of antazoline and naphazoline in a commercial eye drop sample.

  16. The crowding factor method applied to parafoveal vision

    PubMed Central

    Ghahghaei, Saeideh; Walker, Laura

    2016-01-01

    Crowding increases with eccentricity and is most readily observed in the periphery. During natural, active vision, however, central vision plays an important role. Measures of critical distance to estimate crowding are difficult in central vision, as these distances are small. Any overlap of flankers with the target may create an overlay masking confound. The crowding factor method avoids this issue by simultaneously modulating target size and flanker distance and using a ratio to compare crowded to uncrowded conditions. This method was developed and applied in the periphery (Petrov & Meleshkevich, 2011b). In this work, we apply the method to characterize crowding in parafoveal vision (<3.5 visual degrees) with spatial uncertainty. We find that eccentricity and hemifield have less impact on crowding than in the periphery, yet radial/tangential asymmetries are clearly preserved. There are considerable idiosyncratic differences observed between participants. The crowding factor method provides a powerful tool for examining crowding in central and peripheral vision, which will be useful in future studies that seek to understand visual processing under natural, active viewing conditions. PMID:27690170

  17. A method for calculating minimum biodiversity offset multipliers accounting for time discounting, additionality and permanence

    PubMed Central

    Laitila, Jussi; Moilanen, Atte; Pouzols, Federico M

    2014-01-01

    Biodiversity offsetting, which means compensation for ecological and environmental damage caused by development activity, has recently been gaining strong political support around the world. One common criticism levelled at offsets is that they exchange certain and almost immediate losses for uncertain future gains. In the case of restoration offsets, gains may be realized after a time delay of decades, and with considerable uncertainty. Here we focus on offset multipliers, which are ratios between damaged and compensated amounts (areas) of biodiversity. Multipliers have the attraction of being an easily understandable way of deciding the amount of offsetting needed. On the other hand, exact values of multipliers are very difficult to compute in practice if at all possible. We introduce a mathematical method for deriving minimum levels for offset multipliers under the assumption that offsetting gains must compensate for the losses (no net loss offsetting). We calculate absolute minimum multipliers that arise from time discounting and delayed emergence of offsetting gains for a one-dimensional measure of biodiversity. Despite the highly simplified model, we show that even the absolute minimum multipliers may easily be quite large, in the order of dozens, and theoretically arbitrarily large, contradicting the relatively low multipliers found in literature and in practice. While our results inform policy makers about realistic minimal offsetting requirements, they also challenge many current policies and show the importance of rigorous models for computing (minimum) offset multipliers. The strength of the presented method is that it requires minimal underlying information. We include a supplementary spreadsheet tool for calculating multipliers to facilitate application. PMID:25821578
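
    The time-discounting argument can be illustrated with a deliberately simplified lower bound, sketched below: if offsetting gains appear only after a delay and are discounted (and possibly uncertain), a no-net-loss offset must be scaled up accordingly. The compound-interest form and the example numbers are assumptions for illustration, not the paper's full model.

```python
def minimum_multiplier(discount_rate, delay_years, success_probability=1.0):
    """Highly simplified lower bound on an offset multiplier: gains arriving
    delay_years after the loss, discounted at discount_rate and realized with
    probability success_probability, must outweigh an immediate certain loss."""
    return (1.0 + discount_rate) ** delay_years / success_probability

# Restoration gains after 50 years, 5% discount rate, 80% chance of success
# already gives a multiplier of roughly 14.
print(round(minimum_multiplier(0.05, 50, 0.8), 1))
```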

  18. 25 CFR 39.1101 - Addition of pre-kindergarten as a weight factor to the Indian School Equalization Formula in...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true Addition of pre-kindergarten as a weight factor to the Indian School Equalization Formula in fiscal year 1982. 39.1101 Section 39.1101 Indians BUREAU OF INDIAN... Programs § 39.1101 Addition of pre-kindergarten as a weight factor to the Indian School...

  19. 25 CFR 39.1101 - Addition of pre-kindergarten as a weight factor to the Indian School Equalization Formula in...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false Addition of pre-kindergarten as a weight factor to the Indian School Equalization Formula in fiscal year 1982. 39.1101 Section 39.1101 Indians BUREAU OF INDIAN... Programs § 39.1101 Addition of pre-kindergarten as a weight factor to the Indian School...

  20. 25 CFR 39.1101 - Addition of pre-kindergarten as a weight factor to the Indian School Equalization Formula in...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false Addition of pre-kindergarten as a weight factor to the Indian School Equalization Formula in fiscal year 1982. 39.1101 Section 39.1101 Indians BUREAU OF INDIAN... Programs § 39.1101 Addition of pre-kindergarten as a weight factor to the Indian School...

  1. 25 CFR 39.1101 - Addition of pre-kindergarten as a weight factor to the Indian School Equalization Formula in...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Addition of pre-kindergarten as a weight factor to the Indian School Equalization Formula in fiscal year 1982. 39.1101 Section 39.1101 Indians BUREAU OF INDIAN... Programs § 39.1101 Addition of pre-kindergarten as a weight factor to the Indian School...

  2. 25 CFR 39.1101 - Addition of pre-kindergarten as a weight factor to the Indian School Equalization Formula in...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false Addition of pre-kindergarten as a weight factor to the Indian School Equalization Formula in fiscal year 1982. 39.1101 Section 39.1101 Indians BUREAU OF INDIAN... Programs § 39.1101 Addition of pre-kindergarten as a weight factor to the Indian School...

  3. The determination of ammonium in Kjeldahl digests using the gas-sensing ammonia electrode. Comparison of the direct method with the known-addition method.

    PubMed

    Nubé, M; Van den Aarsen, C P; Giliams, J P; Hekkens, W T

    1980-01-31

    The efficacy of the ammonia electrode for analysis of the nitrogen content of a large series of Kjeldahl digests was investigated. Using this electrode, two methods for the measurement of ammonium concentrations were compared: the direct method and the known-addition method. When the direct method was used, a marked shift in the electrode potential occurred within a few hours, causing errors of 9-17% in the results. When the ammonium concentrations were calculated from the difference in electrode potential before and after addition of a known amount of an ammonium standard solution (known-addition method), it was possible to carry out reproducible measurements and the shift in the electrode potential did not influence the results. In two series of identical samples the coefficient of variation was 1.45% and 0.80%, respectively.
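
    A sketch of the known-addition calculation is given below, assuming a Nernstian electrode response E = E0 + S·log10(C). The function name, the unit conventions and the example numbers are illustrative; only the general single known-addition formula is standard.

```python
def known_addition(c_std, v_std, v_sample, delta_e_mV, slope_mV_per_decade):
    """Single known-addition calculation for an ion-selective or gas-sensing
    electrode with Nernstian response.

    c_std and the returned concentration share the same units, the two
    volumes share the same units, delta_e_mV is E(after) - E(before), and
    slope_mV_per_decade is the signed electrode slope.
    """
    ratio = 10.0 ** (delta_e_mV / slope_mV_per_decade)
    return c_std * v_std / ((v_sample + v_std) * ratio - v_sample)

# Example: 1 mL of 0.1 M ammonium standard added to 50 mL of digest, a
# potential change of -12.5 mV on an electrode with a -56 mV/decade slope.
print(known_addition(0.1, 1.0, 50.0, -12.5, -56.0))
```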

  4. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    DOEpatents

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
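
    For orientation, the sketch below implements a generic non-negativity-constrained alternating least squares factorization of a data matrix D into C and S, which is the standard constrained ALS that the patent takes as its starting point; the biased-ALS modification claimed by the patent is not reproduced here.

```python
import numpy as np

def constrained_als(D, n_factors, n_iter=200, seed=0):
    """Generic non-negativity-constrained ALS for D ~ C @ S.T (sketch only)."""
    rng = np.random.default_rng(seed)
    S = rng.random((D.shape[1], n_factors))
    for _ in range(n_iter):
        # Solve for C with S fixed, then clip to enforce non-negativity.
        C = np.linalg.lstsq(S, D.T, rcond=None)[0].T
        C = np.clip(C, 0.0, None)
        # Solve for S with C fixed, then clip.
        S = np.linalg.lstsq(C, D, rcond=None)[0].T
        S = np.clip(S, 0.0, None)
    return C, S

# Synthetic two-component mixture data and a quick fit-quality check.
rng = np.random.default_rng(1)
C_true, S_true = rng.random((30, 2)), rng.random((40, 2))
D = C_true @ S_true.T + 0.01 * rng.normal(size=(30, 40))
C_est, S_est = constrained_als(D, 2)
print(np.linalg.norm(D - C_est @ S_est.T) / np.linalg.norm(D))
```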

  5. Routing Corners of Building Structures - by the Method of Vector Addition - Measured with RTN GNSS Surveying Technology

    NASA Astrophysics Data System (ADS)

    Krzyżek, Robert

    2015-12-01

    The paper deals with the problem of surveying buildings in the RTN GNSS mode using modernized indirect methods of measurement. As a result of the classical realtime measurements using indirect methods (intersection of straight lines or a point on a straight line), we obtain a building structure (a building) which is largely deformed. This distortion is due to the inconsistency of the actual dimensions of the building (tie distances) relative to the obtained measurement results. In order to eliminate these discrepancies, and thus to ensure full consistency of the building geometric structure, an innovative solution was applied - the method of vector addition - to modify the linear values (tie distances) of the external face of the building walls. A separate research problem tackled in the article, although not yet fully solved, is the issue of coordinates of corners of a building obtained after the application of the method of vector addition.

  6. Standard addition flow method for potentiometric measurements at low concentration levels: application to the determination of fluoride in food samples.

    PubMed

    Galvis-Sánchez, Andrea C; Santos, João Rodrigo; Rangel, António O S S

    2015-02-01

    A standard addition method was implemented by using a flow manifold able to perform automatically multiple standard additions and in-line sample treatment. This analytical strategy was based on the in-line mixing of sample and standard addition solutions, using a merging zone approach. The flow system aimed to exploit the standard addition method to quantify the target analyte particularly in cases where the analyte concentration in the matrix is below the lower limit of linear response of the detector. The feasibility of the proposed flow configuration was assessed through the potentiometric determination of fluoride in sea salts of different origins and different types of coffee infusions. The limit of quantification of the proposed manifold was 5×10⁻⁶ mol L⁻¹, 10-fold lower than the lower limit of linear response of the potentiometric detector used. A determination rate of 8 samples h⁻¹ was achieved considering an experimental procedure based on three standard additions per sample. The main advantage of the proposed strategy is the simple approach to perform multiple standard additions, which can be implemented with other ion selective electrodes, especially in cases when the primary ion is below the lower limit of linear response of the detector.
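
    A classical off-line evaluation of multiple standard additions for an ion selective electrode is sketched below using a Gran-type linearization; the flow manifold described above automates the mixing, and the volumes, slope and potentials used here are purely illustrative.

```python
import numpy as np

# Gran-type evaluation of multiple standard additions for an ISE (sketch).
V_s = 25.0                      # mL of sample in the cell
C_a = 1.0e-3                    # mol/L fluoride standard
S = -59.2                       # mV/decade electrode slope (assumed Nernstian)
V_add = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # mL of standard added
E = np.array([-20.0, -28.1, -34.1, -38.8, -42.6])  # mV measured (illustrative)

# The Gran function is linear in the added volume; the ratio of intercept to
# slope gives the equivalent volume of standard "already present" in the sample.
G = (V_s + V_add) * 10.0 ** (E / S)
slope, intercept = np.polyfit(V_add, G, 1)
V_eq = intercept / slope
print(f"fluoride in sample: {C_a * V_eq / V_s:.2e} mol/L")
```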

  7. Proton Form Factor Measurements Using Polarization Method: Beyond Born Approximation

    SciTech Connect

    Pentchev, Lubomir

    2008-10-13

    Significant theoretical and experimental efforts have been made over the past 7 years aiming to explain the discrepancy between the proton form factor ratio data obtained at JLab using the polarization method and the previous Rosenbluth measurements. Preliminary results from the first high precision polarization experiment dedicated to study effects beyond the Born approximation will be presented. The ratio of the transferred polarization components and, separately, the longitudinal polarization in ep elastic scattering have been measured at a fixed Q² of 2.5 GeV² over a wide kinematic range. The two quantities impose constraints on the real part of the ep elastic amplitudes.

  8. Sodium Benzoate, a Metabolite of Cinnamon and a Food Additive, Upregulates Ciliary Neurotrophic Factor in Astrocytes and Oligodendrocytes.

    PubMed

    Modi, Khushbu K; Jana, Malabendu; Mondal, Susanta; Pahan, Kalipada

    2015-11-01

    Ciliary neurotrophic factor (CNTF) is a promyelinating trophic factor that plays an important role in multiple sclerosis (MS). However, mechanisms by which CNTF expression could be increased in the brain are poorly understood. Recently we have discovered anti-inflammatory and immunomodulatory activities of sodium benzoate (NaB), a metabolite of cinnamon and a widely-used food additive. Here, we delineate that NaB is also capable of increasing the mRNA and protein expression of CNTF in primary mouse astrocytes and oligodendrocytes and primary human astrocytes. Accordingly, oral administration of NaB and cinnamon led to the upregulation of astroglial and oligodendroglial CNTF in vivo in mouse brain. Induction of experimental allergic encephalomyelitis, an animal model of MS, reduced the level of CNTF in the brain, which was restored by oral administration of cinnamon. While investigating underlying mechanisms, we observed that NaB induced the activation of protein kinase A (PKA) and H-89, an inhibitor of PKA, abrogated NaB-induced expression of CNTF. The activation of cAMP response element binding (CREB) protein by NaB, the recruitment of CREB and CREB-binding protein to the CNTF promoter by NaB and the abrogation of NaB-induced expression of CNTF in astrocytes by siRNA knockdown of CREB suggest that NaB increases the expression of CNTF via the activation of CREB. These results highlight a novel myelinogenic property of NaB and cinnamon, which may be of benefit for MS and other demyelinating disorders.

  9. Additional cytosine inside mitochondrial C-tract D-loop as a progression risk factor in oral precancer cases

    PubMed Central

    Pandey, Rahul; Mehrotra, Divya; Mahdi, Abbas Ali; Sarin, Rajiv; Kowtal, Pradnya

    2014-01-01

    Introduction: Alterations inside the polycytosine tract (C-tract) of mitochondrial DNA (mtDNA) have been described in many different tumor types. The poly-cytosine region is located within the mtDNA D-loop region, which acts as the point of origin of mitochondrial replication. A suggested pathogenesis is that it interferes with the replication process of mtDNA, which in turn affects mitochondrial functioning and generates disease. Methodology: 100 premalignant cases (50 leukoplakia & 50 oral submucous fibrosis) were selected, and the mitochondrial DNA was isolated from the lesion tissues and from the blood samples. The polycytosine tract in mtDNA was sequenced by direct capillary sequencing. Results: 40 patients (25 leukoplakia & 15 oral submucous fibrosis) harbored lesions that displayed one additional cytosine after nucleotide thymidine (7CT6C) at nt position 316 in the C-tract of mtDNA, which was absent in the corresponding mtDNA derived from blood samples. Conclusion: Our results show an additional cytosine in the mtDNA at the polycytosine site in oral precancer cases. It is postulated that any increase/decrease in the number of cytosine residues in the poly-cytosine region may affect the rate of mtDNA replication by impairing the binding of polymerase and other transacting factors. By promoting mitochondrial genomic instability, it may have a central role in the dysregulation of mtDNA functioning, for example alterations in energy metabolism that may promote tumor development. We therefore report and propose that this alteration may represent the early development of oral cancer. Further studies with a large number of samples are needed to confirm the role of such mutations in carcinogenesis. PMID:25737911

  10. Mixed model methods for genomic prediction and variance component estimation of additive and dominance effects using SNP markers.

    PubMed

    Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo

    2014-01-01

    We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005-0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level.
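
    As a companion to the model description, the sketch below builds genomic additive and dominance relationship matrices from 0/1/2-coded SNP genotypes using one common formulation (a VanRaden-type G and a Vitezica-type D); the CE and QM formulations of the paper may differ in coding and scaling details.

```python
import numpy as np

def genomic_relationship_matrices(M):
    """Genomic additive (G) and dominance (D) relationship matrices from a
    genotype matrix M (individuals x SNPs, coded 0/1/2). One common
    formulation, shown only for orientation."""
    p = M.mean(axis=0) / 2.0                     # allele frequencies
    q = 1.0 - p
    # Additive part: centre genotypes by 2p and scale by 2*sum(p*q).
    Z = M - 2.0 * p
    G = Z @ Z.T / (2.0 * np.sum(p * q))
    # Dominance part: code genotypes 0/1/2 as -2p^2, 2pq, -2q^2 per locus.
    W = np.select([M == 0, M == 1, M == 2],
                  [-2.0 * p ** 2, 2.0 * p * q, -2.0 * q ** 2])
    D = W @ W.T / np.sum((2.0 * p * q) ** 2)
    return G, D

# Tiny demo with random genotypes for 5 individuals and 200 SNPs.
rng = np.random.default_rng(2)
M = rng.integers(0, 3, size=(5, 200))
G, D = genomic_relationship_matrices(M)
print(np.round(np.diag(G), 2), np.round(np.diag(D), 2))
```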

  11. Source Distribution Method for Unsteady One-Dimensional Flows With Small Mass, Momentum, and Heat Addition and Small Area Variation

    NASA Technical Reports Server (NTRS)

    Mirels, Harold

    1959-01-01

    A source distribution method is presented for obtaining flow perturbations due to small unsteady area variations, mass, momentum, and heat additions in a basic uniform (or piecewise uniform) one-dimensional flow. First, the perturbations due to an elemental area variation, mass, momentum, and heat addition are found. The general solution is then represented by a spatial and temporal distribution of these elemental (source) solutions. Emphasis is placed on discussing the physical nature of the flow phenomena. The method is illustrated by several examples. These include the determination of perturbations in basic flows consisting of (1) a shock propagating through a nonuniform tube, (2) a constant-velocity piston driving a shock, (3) ideal shock-tube flows, and (4) deflagrations initiated at a closed end. The method is particularly applicable for finding the perturbations due to relatively thin wall boundary layers.

  12. Project-Method Fit: Exploring Factors That Influence Agile Method Use

    ERIC Educational Resources Information Center

    Young, Diana K.

    2013-01-01

    While the productivity and quality implications of agile software development methods (SDMs) have been demonstrated, research concerning the project contexts where their use is most appropriate has yielded less definitive results. Most experts agree that agile SDMs are not suited for all project contexts. Several project and team factors have been…

  13. Nontargeted Screening Method for Illegal Additives Based on Ultrahigh-Performance Liquid Chromatography-High-Resolution Mass Spectrometry.

    PubMed

    Fu, Yanqing; Zhou, Zhihui; Kong, Hongwei; Lu, Xin; Zhao, Xinjie; Chen, Yihui; Chen, Jia; Wu, Zeming; Xu, Zhiliang; Zhao, Chunxia; Xu, Guowang

    2016-09-01

    Identification of illegal additives in complex matrixes is important in the food safety field. In this study a nontargeted screening strategy was developed to find illegal additives based on ultrahigh-performance liquid chromatography-high-resolution mass spectrometry (UHPLC-HRMS). First, an analytical method for possible illegal additives in complex matrixes was established including fast sample pretreatment, accurate UHPLC separation, and HRMS detection. Second, efficient data processing and differential analysis workflow were suggested and applied to find potential risk compounds. Third, structure elucidation of risk compounds was performed by (1) searching online databases [Metlin and the Human Metabolome Database (HMDB)] and an in-house database which was established at the above-defined conditions of UHPLC-HRMS analysis and contains information on retention time, mass spectra (MS), and tandem mass spectra (MS/MS) of 475 illegal additives, (2) analyzing fragment ions, and (3) referring to fragmentation rules. Fish was taken as an example to show the usefulness of the nontargeted screening strategy, and six additives were found in suspected fish samples. Quantitative analysis was further carried out to determine the contents of these compounds. The satisfactory application of this strategy in fish samples means that it can also be used in the screening of illegal additives in other kinds of food samples.
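
    The database-matching step of such a workflow can be illustrated as below: a measured accurate mass is compared against an in-house list of additives within a ppm tolerance. The two entries, their m/z values and the 5 ppm window are assumptions for the example; retention-time and MS/MS matching would follow in the real strategy.

```python
# Illustrative in-house list of additive ions (names and m/z values are
# example entries only, not the 475-compound database of the study).
ADDITIVE_DB = {
    "malachite green": 329.2012,     # [M]+ m/z
    "chloramphenicol": 321.0051,     # [M-H]- m/z
}

def match_mass(measured_mz, tolerance_ppm=5.0):
    """Return database entries whose m/z lies within the ppm tolerance."""
    hits = []
    for name, mz in ADDITIVE_DB.items():
        ppm_error = (measured_mz - mz) / mz * 1e6
        if abs(ppm_error) <= tolerance_ppm:
            hits.append((name, round(ppm_error, 2)))
    return hits

print(match_mass(329.2020))   # -> [('malachite green', 2.43)]
```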

  14. Nontargeted Screening Method for Illegal Additives Based on Ultrahigh-Performance Liquid Chromatography-High-Resolution Mass Spectrometry.

    PubMed

    Fu, Yanqing; Zhou, Zhihui; Kong, Hongwei; Lu, Xin; Zhao, Xinjie; Chen, Yihui; Chen, Jia; Wu, Zeming; Xu, Zhiliang; Zhao, Chunxia; Xu, Guowang

    2016-09-01

    Identification of illegal additives in complex matrixes is important in the food safety field. In this study a nontargeted screening strategy was developed to find illegal additives based on ultrahigh-performance liquid chromatography-high-resolution mass spectrometry (UHPLC-HRMS). First, an analytical method for possible illegal additives in complex matrixes was established including fast sample pretreatment, accurate UHPLC separation, and HRMS detection. Second, efficient data processing and differential analysis workflow were suggested and applied to find potential risk compounds. Third, structure elucidation of risk compounds was performed by (1) searching online databases [Metlin and the Human Metabolome Database (HMDB)] and an in-house database which was established at the above-defined conditions of UHPLC-HRMS analysis and contains information on retention time, mass spectra (MS), and tandem mass spectra (MS/MS) of 475 illegal additives, (2) analyzing fragment ions, and (3) referring to fragmentation rules. Fish was taken as an example to show the usefulness of the nontargeted screening strategy, and six additives were found in suspected fish samples. Quantitative analysis was further carried out to determine the contents of these compounds. The satisfactory application of this strategy in fish samples means that it can also be used in the screening of illegal additives in other kinds of food samples. PMID:27480407

  15. Factors Governing the Accuracy of Subvisible Particle Counting Methods.

    PubMed

    Ríos Quiroz, Anacelia; Finkler, Christof; Huwyler, Joerg; Mahler, Hanns-Christian; Schmidt, Roland; Koulov, Atanas V

    2016-07-01

    A number of new techniques for subvisible particle characterization in biotechnological products have emerged in the last decade. Although the pharmaceutical community is actively using them, the current knowledge about the analytical performance of some of these tools is still inadequate to support their routine use in the development of biopharmaceuticals (especially in the case of submicron methods). With the aim of increasing this knowledge and our understanding of the most prominent techniques for subvisible particle characterization, this study reports the results of a systematic evaluation of their accuracy. Our results showed a marked overcounting effect, especially for samples with low particle concentrations and for particles fragile in nature. Furthermore, we established the relative sample size distribution as the most important contributor to an instrument's performance in accuracy counting. The smaller the representation of a particle size within a solution, the more difficulty the instruments had in providing an accurate count. These findings correlate with a recent study examining the principal factors influencing the precision of the subvisible particle measurements. A more thorough understanding of the capabilities of the different particle characterization methods provided here will help guide the application of these methods and the interpretation of results in subvisible particle characterization studies.

  16. Factors Governing the Accuracy of Subvisible Particle Counting Methods.

    PubMed

    Ríos Quiroz, Anacelia; Finkler, Christof; Huwyler, Joerg; Mahler, Hanns-Christian; Schmidt, Roland; Koulov, Atanas V

    2016-07-01

    A number of new techniques for subvisible particle characterization in biotechnological products have emerged in the last decade. Although the pharmaceutical community is actively using them, the current knowledge about the analytical performance of some of these tools is still inadequate to support their routine use in the development of biopharmaceuticals (especially in the case of submicron methods). With the aim of increasing this knowledge and our understanding of the most prominent techniques for subvisible particle characterization, this study reports the results of a systematic evaluation of their accuracy. Our results showed a marked overcounting effect, especially for samples with low particle concentrations and for particles fragile in nature. Furthermore, we established the relative sample size distribution as the most important contributor to an instrument's performance in accuracy counting. The smaller the representation of a particle size within a solution, the more difficulty the instruments had in providing an accurate count. These findings correlate with a recent study examining the principal factors influencing the precision of the subvisible particle measurements. A more thorough understanding of the capabilities of the different particle characterization methods provided here will help guide the application of these methods and the interpretation of results in subvisible particle characterization studies. PMID:27287519

  17. Hyaluronic acid as an internal phase additive to obtain ofloxacin/PLGA microsphere by double emulsion method.

    PubMed

    Wu, Gang; Chen, Long; Li, Hong; Wang, Ying-jun

    2014-01-01

    Hyaluronic acid (HA) was used as an internal phase additive to improve the loading efficiency of ofloxacin, a hydrophilic drug encapsulated by hydrophobic polylactic-co-glycolic acid (PLGA) materials, through a double emulsion (water-in-oil-in-water) solvent extraction/evaporation method. Results from laser distribution analysis show that polyelectrolyte additives have low impact on the average particle size and distribution of the microspheres. The negatively charged HA increases the drug loading efficiency as well as the amount of HA in microspheres. Burst release can be observed in the groups with the polyelectrolyte additives. The release rate decreases with the amount of HA inside the microspheres in all negatively charged polyelectrolyte-added microsphere groups.

  18. [Denoising and assessing method of additive noise in the ultraviolet spectrum of SO2 in flue gas].

    PubMed

    Zhou, Tao; Sun, Chang-Ku; Liu, Bin; Zhao, Yu-Mei

    2009-11-01

    The problem of denoising the ultraviolet spectrum of SO2 in flue gas, and of assessing the denoising, was studied based on DOAS. The denoising procedure for the additive noise in the spectrum was divided into two parts: reducing the additive noise and enhancing the useful signal. When obtaining the absorption feature of the measured gas, a multi-resolution preprocessing method of the original spectrum was adopted for denoising by DWT (discrete wavelet transform). The signal energy operators at different scales were used to choose the denoising threshold and separate the useful signal from the noise. On the other hand, because there is no sudden change in the spectra of flue gas in a time series, the useful signal component was enhanced according to the time dependence of the signal. The standard absorption cross section was used to build the ideal absorption spectrum at the measured gas temperature and pressure. This ideal spectrum was used as the desired signal instead of the original spectrum in the assessment method to give a modified SNR (signal-to-noise ratio). Proof tests were carried out in two different environments: in the laboratory and on site. In the laboratory, SO2 was measured several times with a system using the method described above. The average deviation was less than 1.5%, while the repeatability was less than 1%. The short-range experimental data were better than the large-range data. On site at a power plant, where the flue gas concentration varied over a large range, the maximum deviation of this method was 2.31% across the 18 groups of comparison data. The experimental results show that the denoising effect for the on-site spectra was better than that for the laboratory spectra. This means that the method can effectively improve the SNR of spectra that are seriously polluted by additive noise. PMID:20101989
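
    A minimal sketch of DWT-based denoising of a spectrum is given below using the PyWavelets package. It substitutes a universal threshold estimated from the finest-scale detail coefficients for the energy-operator threshold selection described in the paper, and the synthetic absorption band is purely illustrative.

```python
import numpy as np
import pywt

def denoise_spectrum(spectrum, wavelet="db4", level=5):
    """DWT denoising of an absorbance spectrum with soft thresholding,
    using a universal threshold (not the paper's energy-based selection)."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients (robust MAD).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(len(spectrum)))
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(spectrum)]

# Synthetic SO2-like absorption band with additive noise.
x = np.linspace(280.0, 320.0, 1024)                      # wavelength, nm
clean = 0.4 * np.exp(-((x - 300.0) / 4.0) ** 2)
noisy = clean + np.random.default_rng(3).normal(0.0, 0.02, x.size)
print(np.std(noisy - clean), np.std(denoise_spectrum(noisy) - clean))
```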

  19. Effect of amine addition on the synthesis of CdSe nanocrystals in liquid paraffin via one-pot method

    NASA Astrophysics Data System (ADS)

    Jia, Jinqian; Tian, Jintao; Tian, Weiguo; Mi, Wen; Liu, Xiaoyun; Dai, Jinhui; Wang, Xin

    2014-02-01

    The effect of n-octylamine (OA) and octadecylamine (ODA) addition on the synthesis of CdSe nanocrystals in liquid paraffin via a one-pot method is investigated through measurements of their ultraviolet-visible absorption and fluorescence emission spectra. Our results show that the in situ added amines can activate the formation reaction of the Cd precursor and, as a result, substantially decrease the initial reaction temperature and accelerate particle growth. By adding OA at a high temperature of 200 °C, a remarkable improvement in particle quality is achieved, giving a relatively narrow size distribution of 33.1 nm and a high photoluminescence quantum yield (PLQY) of 81.9% for the CdSe nanoparticles. OA addition at low temperature also improves nanoparticle quality. With regard to the primary amine ODA, it may be inappropriate for improving the quality of CdSe nanoparticles prepared from liquid paraffin via the one-pot method.

  20. Factorization method and new potentials from the inverted oscillator

    SciTech Connect

    Bermudez, David; Fernández C., David J.

    2013-06-15

    In this article we will apply the first- and second-order supersymmetric quantum mechanics to obtain new exactly-solvable real potentials departing from the inverted oscillator potential. This system has some special properties; in particular, only very specific second-order transformations produce non-singular real potentials. It will be shown that these transformations turn out to be the so-called complex ones. Moreover, we will study the factorization method applied to the inverted oscillator and the algebraic structure of the new Hamiltonians. -- Highlights: •We apply supersymmetric quantum mechanics to the inverted oscillator potential. •The complex second-order transformations allow us to build new non-singular potentials. •The algebraic structure of the initial and final potentials is analyzed. •The initial potential is described by a complex-deformed Heisenberg–Weyl algebra. •The final potentials are described by polynomial Heisenberg algebras.
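
    As a reference point for readers unfamiliar with the factorization method, the block below summarizes the standard first-order factorization (SUSY QM) identities in one common convention (units with ħ = 2m = 1 are assumed); the paper's specific complex second-order transformations for the inverted oscillator are not reproduced here.

```latex
% First-order factorization (schematic; units with \hbar = 2m = 1 assumed).
% For the inverted oscillator, e.g. V_0(x) = -x^2/4 in one common convention,
% the superpotential W is generally complex, which is why only specific
% (complex) transformations yield non-singular partner potentials.
\begin{align}
  H_0 &= -\frac{d^2}{dx^2} + V_0(x), \qquad
         A = \frac{d}{dx} + W(x), \qquad
         A^{\dagger} = -\frac{d}{dx} + W(x), \\
  H_0 &= A^{\dagger}A + \epsilon
         \quad\Longleftrightarrow\quad
         V_0(x) = W^2(x) - W'(x) + \epsilon
         \quad\text{(Riccati equation for } W\text{)}, \\
  H_1 &= A A^{\dagger} + \epsilon, \qquad
         V_1(x) = V_0(x) + 2\,W'(x)
         \quad\text{(new, SUSY-partner potential)}.
\end{align}
```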

  1. Quantifying uncertainty of determination by standard additions and serial dilutions methods taking into account standard uncertainties in both axes.

    PubMed

    Hyk, Wojciech; Stojek, Zbigniew

    2013-06-18

    The analytical expressions for calculating the standard uncertainty of the predictor variable, either extrapolated or interpolated from a calibration line, taking into account uncertainties in both axes, have been derived and successfully verified using Monte Carlo modeling. These expressions are essential additions to the process of analyte quantification realized with either the method of standard additions (SAM) or the method of serial dilutions (MSD). The latter has been proposed as an alternative to the SAM procedure: in the MSD approach, instead of a sequence of standard additions, a sequence of solvent additions to the spiked sample is performed. The comparison of the calculation results based on the derived expressions with their equivalents obtained from Monte Carlo simulation, applied to real experimental data sets, confirmed that these expressions are valid in real analytical practice. The estimation of the standard uncertainty of the analyte concentration, quantified via either SAM or MSD or simply a calibration curve, is of great importance for the construction of the uncertainty budget of an analytical procedure. The correct estimation of the standard uncertainty of the analyte concentration is a key issue in quality assurance in instrumental analysis.
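
    A minimal Monte Carlo sketch of the verification idea is given below: uncertainties are assigned to both the spiked amounts and the signals, and the spread of the extrapolated SAM result is taken as its standard uncertainty. All numbers are invented for illustration, and the closed-form expressions derived in the paper are not reproduced.

```python
# Monte Carlo sketch: propagate standard uncertainties in BOTH axes through the
# standard-addition extrapolation. The spiked amounts, signals, and their
# uncertainties below are made-up numbers chosen only to illustrate the procedure.
import numpy as np

rng = np.random.default_rng(1)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # spiked amount of analyte (e.g. ug)
u_x = np.full_like(x, 0.02)                    # standard uncertainty of each spike
y = np.array([0.52, 1.03, 1.49, 2.01, 2.55])   # instrument response (a.u.)
u_y = np.full_like(y, 0.03)                    # standard uncertainty of each signal

def sam_estimate(xs, ys):
    slope, intercept = np.polyfit(xs, ys, 1)
    return intercept / slope                   # SAM result: opposite of the x-intercept

draws = np.array([
    sam_estimate(rng.normal(x, u_x), rng.normal(y, u_y))
    for _ in range(20000)
])
print(f"analyte amount = {sam_estimate(x, y):.3f}  "
      f"u = {draws.std(ddof=1):.3f}  "
      f"95% interval = [{np.percentile(draws, 2.5):.3f}, {np.percentile(draws, 97.5):.3f}]")
```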

  2. Determination of propranolol enantiomers in plasma and urine by spectrofluorimetry and second-order standard addition method.

    PubMed

    Valderrama, Patrícia; Poppi, Ronei Jesus

    2009-09-28

    The determination of propranolol enantiomers in human plasma and urine by spectrofluorimetry and a second-order standard addition method is described. The methodology is based on chiral recognition of propranolol by formation of an inclusion complex with beta-cyclodextrin, a chiral auxiliary, in the presence of 1-butanol. The adopted strategy combines the use of PARAFAC, for extraction of the pure analyte signal, with the standard addition method, for determinations in the presence of an individual matrix effect caused by the quenching action of the proteins present in plasma and urine. A specific PARAFAC model was built for each sample, in triplicate, and the scores were related to the (R)-propranolol mole fraction using linear regression in the standard addition method. Using a propranolol concentration of 260 ng mL(-1), good results were obtained for determinations in the mole fraction range from 50 to 80% of (R)-propranolol, providing absolute errors between 0.4 and 3.6% for plasma and between 0.9 and 6.0% for urine.

  3. Novel real function based method to construct heterogeneous porous scaffolds and additive manufacturing for use in medical engineering.

    PubMed

    Yang, Nan; Tian, Yanling; Zhang, Dawei

    2015-11-01

    Heterogeneous porous scaffolds have important applications in biomedical engineering, as they can mimic the structures of natural tissues to achieve the corresponding properties. Here, we introduce a new and easy to implement real function based method for constructing complex, heterogeneous porous structures, including hybrid structures, stochastic structures, functionally gradient structures, and multi-scale structures, or their combinations (e.g., hybrid multi-scale structures). Based on micro-CT data, a femur-mimetic structure with gradient morphology was constructed using our method and fabricated using stereolithography. Results showed that our method could generate gradient porosity or gradient specific surfaces and be sufficiently flexible for use with micro-CT data and additive manufacturing (AM) techniques.
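
    As an illustration of the real-function idea (not the authors' particular functions), the sketch below encodes a gradient porous structure with a single implicit function, here a gyroid-type surface whose threshold varies along z, and reports the resulting porosity gradient from the voxel model.

```python
# Hedged illustration only: the paper's specific real functions are not reproduced.
# A gyroid-type implicit surface with a z-dependent threshold is used as a stand-in
# to show how a single real function F(x, y, z) can encode a porous structure with
# gradient porosity; voxels with F < t(z) are treated as material.
import numpy as np

n = 96
x, y, z = np.meshgrid(*[np.linspace(0.0, 2.0 * np.pi * 3, n)] * 3, indexing="ij")

gyroid = np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)
t = -0.9 + 1.8 * (z / z.max())          # threshold ramp -> porosity gradient along z
solid = gyroid < t                      # boolean voxel model (True = material)

# Porosity per z-slice, to verify the gradient before exporting for AM.
porosity = 1.0 - solid.mean(axis=(0, 1))
print("porosity bottom/middle/top:",
      porosity[0].round(2), porosity[n // 2].round(2), porosity[-1].round(2))
```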

  4. Spectrophotometric study of complexation equilibria with H-point standard addition and H-point curve isolation methods.

    PubMed

    Abdollahi, H; Zeinali, S

    2004-01-01

    The use of the H-point curve isolation method (HPCIM) and the H-point standard addition method (HPSAM) for spectrophotometric studies of complex formation equilibria is proposed. One-step complex formation systems, two successive stepwise and mononuclear complex formation systems, and competitive complexation systems are studied successfully by the proposed methods. HPCIM is used for extracting the spectrum of the complex, or the sum of complex species, and HPSAM is used for calculating the equilibrium concentrations of ligand for each sample. The outputs of these procedures are complete concentration profiles of the equilibrium system, spectral profiles of intermediate components, and good estimates of the conditional formation constants. The reliability of the method is evaluated using model data. Spectrophotometric studies of murexide-calcium, dithizone-nickel, methyl thymol blue (MTB)-copper, and the competition of murexide and sulfate ions for complexation with zinc are used as experimental model systems with different complexation stoichiometries and spectral overlapping of the involved components.

  5. Symbolic integration of a product of two spherical Bessel functions with an additional exponential and polynomial factor

    NASA Astrophysics Data System (ADS)

    Gebremariam, B.; Duguet, T.; Bogner, S. K.

    2010-06-01

    We present a Mathematica package that performs the symbolic calculation of integrals of the form ∫_0^∞ e^(-ux) x^n j_ν(x) j_μ(x) dx, where j_ν(x) and j_μ(x) denote spherical Bessel functions of integer orders, with ν⩾0 and μ⩾0. With the real parameter u>0 and the integer n, convergence of the integral requires that n+ν+μ⩾0. The package provides analytical result for the integral in its most simplified form. In cases where direct Mathematica implementations succeed in evaluating these integrals, the novel symbolic method implemented in this work obtains the same result and in general, it takes a fraction of the time required for the direct implementation. We test the accuracy of such analytical expressions by comparing the results with their numerical counterparts. Program summary: Program title: SymbBesselJInteg Catalogue identifier: AEFY_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 275 934 No. of bytes in distributed program, including test data, etc.: 399 705 Distribution format: tar.gz Programming language: Mathematica 7.1 Computer: Any computer running Mathematica 6.0 and later versions. Operating system: Windows XP, Linux/Unix. RAM: 256 Mb Classification: 5. Nature of problem: Integration, both analytical and numerical, of products of two spherical Bessel functions with an exponential and polynomial multiplying factor can be a very complex task depending on the orders of the spherical Bessel functions. The Mathematica package discussed in this paper solves this problem using a novel symbolic approach. Solution method: The problem is first cast into a related limit problem which can be broken into two related subproblems involving exponential and exponential integral functions. Solving the cores of each

  6. Mexican American First-Generation Students' Perceptions of Siblings and Additional Factors Influencing Their College Choice Process

    ERIC Educational Resources Information Center

    Elias McAllister, Dora

    2012-01-01

    The purpose of this study was to understand the factors influencing the college choice process of Mexican American first-generation students who had an older sibling with college experience. While a considerable amount of research exists on factors influencing the college choice process of first-generation college students, and a few studies…

  7. A statistical method for studying correlated rare events and their risk factors

    PubMed Central

    Xue, Xiaonan; Kim, Mimi Y; Wang, Tao; Kuniholm, Mark H; Strickler, Howard D

    2016-01-01

    Longitudinal studies of rare events, such as cervical high-grade lesions or colorectal polyps that can recur, often involve correlated binary data. Risk factors for these events cannot be reliably examined using conventional statistical methods. For example, logistic regression models that incorporate generalized estimating equations often fail to converge or provide inaccurate results when analyzing data of this type. Although exact methods have been reported, they are complex and computationally difficult. The current paper proposes a mathematically straightforward and easy-to-use two-step approach involving (i) an additive model to measure associations between a rare or uncommon correlated binary event and potential risk factors and (ii) a permutation test to estimate the statistical significance of these associations. Simulation studies showed that the proposed method reliably tests and accurately estimates the associations of exposure with correlated binary rare events. This method was then applied to a longitudinal study of human leukocyte antigen (HLA) genotype and risk of cervical high-grade squamous intraepithelial lesions (HSIL) among HIV-infected and HIV-uninfected women. Results showed statistically significant associations of two HLA alleles among HIV-negative but not HIV-positive women, suggesting that immune status may modify the HLA and cervical HSIL association. Overall, the proposed method avoids model nonconvergence problems and provides a computationally simple, accurate, and powerful approach for the analysis of risk factor associations with rare/uncommon correlated binary events. PMID:25854937
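
    A rough sketch of the two-step idea is shown below under simplifying assumptions; it is not the authors' estimator. The association is summarized as an additive effect on per-subject event rates, and significance is assessed by permuting subject-level exposure labels, which preserves the within-subject correlation of the repeated binary outcomes.

```python
# Sketch of a two-step analysis for correlated rare binary events:
# (i) an additive (risk-difference) summary on per-subject event rates,
# (ii) a permutation test that shuffles subject-level exposure labels.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic longitudinal data: 200 subjects x 6 visits of a rare binary event.
n_subj, n_visits = 200, 6
exposure = rng.integers(0, 2, size=n_subj)                    # subject-level risk factor
p_event = np.where(exposure == 1, 0.06, 0.02)                 # true additive effect = 0.04
events = rng.random((n_subj, n_visits)) < p_event[:, None]

subject_rate = events.mean(axis=1)                            # per-subject event rate

def additive_effect(rates, labels):
    return rates[labels == 1].mean() - rates[labels == 0].mean()

obs = additive_effect(subject_rate, exposure)
perm = np.array([additive_effect(subject_rate, rng.permutation(exposure))
                 for _ in range(5000)])
p_value = (np.sum(np.abs(perm) >= abs(obs)) + 1) / (len(perm) + 1)
print(f"estimated risk difference = {obs:.3f}, permutation p-value = {p_value:.4f}")
```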

  8. An upscaling method for cover-management factor and its application in the loess Plateau of China.

    PubMed

    Zhao, Wenwu; Fu, Bojie; Qiu, Yang

    2013-10-09

    The cover-management factor (C-factor) is important for studying soil erosion. In addition, it is important to use sampling plot data to estimate the regional C-factor when assessing erosion and soil conservation. Here, the loess hill and gully region in Ansai County, China, was studied to determine a method for computing the C-factor used in the Universal Soil Loss Equation (USLE) at a regional scale. After upscaling the slope-scale computational equation, the C-factor for Ansai County was calculated using the soil loss ratio, precipitation, and land use/cover type. The multi-year mean C-factor for Ansai County was 0.36. The C-factor values were greater in the eastern region of the county than in the western region, and the lowest values were found near the county's southern border. These spatial differences were consistent with the spatial distribution of the soil loss ratios across areas with different land uses. Additional research is needed to determine the effects of seasonal vegetation growth changes on the C-factor, and the uncertainties of C-factor upscaling at a regional scale.
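
    For readers unfamiliar with the C-factor, the sketch below shows the conventional slope-scale USLE calculation (soil loss ratios weighted by the fraction of rainfall erosivity in each period) and one possible area-weighted aggregation to a regional value; the paper's actual upscaling equation is not reproduced and all numbers are illustrative.

```python
# Illustrative slope-scale calculation only; the paper's regional upscaling
# equation is not reproduced. In the USLE convention the C-factor is the soil
# loss ratio (SLR) of each crop-stage period weighted by the fraction of annual
# rainfall erosivity (EI) falling in that period. All numbers below are made up.
def usle_c_factor(slr, ei_fraction):
    """C = sum_i SLR_i * EI_i / sum_i EI_i for one land-use/cover type."""
    assert len(slr) == len(ei_fraction)
    total_ei = sum(ei_fraction)
    return sum(s * e for s, e in zip(slr, ei_fraction)) / total_ei

# Example: four crop-stage periods of a hypothetical maize plot.
slr = [0.65, 0.40, 0.20, 0.35]          # soil loss ratio per period (bare soil = 1)
ei = [0.10, 0.35, 0.40, 0.15]           # fraction of annual erosivity per period
print(f"plot-scale C-factor = {usle_c_factor(slr, ei):.2f}")

# A regional value could then be approximated as an area-weighted average over
# land-use/cover classes (one possible upscaling, stated here as an assumption):
area_fraction = {"cropland": 0.45, "grassland": 0.35, "woodland": 0.20}
c_by_class = {"cropland": 0.42, "grassland": 0.12, "woodland": 0.05}
regional_c = sum(area_fraction[k] * c_by_class[k] for k in area_fraction)
print(f"area-weighted regional C-factor = {regional_c:.2f}")
```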

  9. Research on design method of the full form ship with minimum thrust deduction factor

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-ji; Miao, Ai-qin; Zhang, Zhu-xin

    2015-04-01

    In the preliminary design stage of full form ships, in order to obtain a hull form with low resistance and maximum propulsion efficiency, an optimization design program for a full form ship with the minimum thrust deduction factor has been developed, which combines potential flow theory and boundary layer theory with optimization techniques. In the optimization process, the Sequential Unconstrained Minimization Technique (SUMT) interior point method of Nonlinear Programming (NLP) was used with the minimum thrust deduction factor as the objective function. An appropriate displacement is a basic constraint condition, and avoidance of boundary layer separation is an additional one. The parameters of the hull form modification function are used as design variables. Finally, a numerical optimization example for the after-body lines of a 50,000 DWT product oil tanker is provided, which indicates that the propulsion efficiency was distinctly improved by this optimal design method.
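
    The SUMT interior point idea can be sketched generically as below: the constrained problem is replaced by a sequence of unconstrained minimizations of the objective plus a logarithmic barrier whose weight is reduced on each pass. The toy objective and constraint merely stand in for the thrust deduction factor and the displacement/separation constraints computed in the paper; SciPy is assumed to be available.

```python
# Generic SUMT (logarithmic-barrier) sketch. The toy objective below stands in
# for the thrust-deduction-factor evaluation, which in the paper comes from
# potential-flow/boundary-layer computations; the constraints are likewise
# represented by a simple inequality g(x) <= 0.
import numpy as np
from scipy.optimize import minimize

def objective(x):                       # placeholder for the thrust deduction factor t(x)
    return (x[0] - 1.0) ** 2 + 0.5 * (x[1] + 0.3) ** 2 + 0.2

def g(x):                               # placeholder constraint, feasible when g(x) <= 0
    return x[0] + x[1] - 1.5

def sumt(x0, mu=1.0, shrink=0.2, iters=8):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        def barrier(xx, m=mu):
            c = g(xx)
            if c >= 0:                  # outside the interior: reject with a huge value
                return 1e12
            return objective(xx) - m * np.log(-c)
        x = minimize(barrier, x, method="Nelder-Mead").x
        mu *= shrink                    # SUMT: re-solve with a weaker barrier each pass
    return x

x_opt = sumt([0.0, 0.0])
print("design variables:", np.round(x_opt, 4), " objective:", round(objective(x_opt), 4))
```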

  10. An automated Monte-Carlo based method for the calculation of cascade summing factors

    NASA Astrophysics Data System (ADS)

    Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.

    2016-10-01

    A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.
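
    To illustrate the underlying physics in the simplest possible case, the toy Monte Carlo below estimates the summing-out correction for gamma-1 of a two-step cascade from assumed peak and total efficiencies; real calculations, as in the paper, additionally require ENSDF decay schemes, X-ray data, and the γ-γ, γ-X, γ-511 and γ-e- coincidence channels.

```python
# Toy Monte Carlo for a two-step gamma cascade with no angular correlation:
# full-energy events of gamma-1 are lost whenever the coincident gamma-2 also
# deposits energy ("summing out"). The efficiencies are invented numbers,
# not a real detector characterisation.
import numpy as np

rng = np.random.default_rng(3)
n_decays = 2_000_000

eps_peak_g1 = 0.040     # full-energy peak efficiency for gamma-1 (assumed)
eps_total_g2 = 0.120    # total efficiency for the coincident gamma-2 (assumed)

g1_full_energy = rng.random(n_decays) < eps_peak_g1
g2_detected = rng.random(n_decays) < eps_total_g2

counts_no_summing = g1_full_energy.sum()                       # what a singles source would give
counts_with_summing = (g1_full_energy & ~g2_detected).sum()    # coincidence removes the event

csf = counts_no_summing / counts_with_summing                  # cascade summing factor for gamma-1
print(f"MC summing factor = {csf:.4f}   analytic 1/(1 - eps_total_g2) = {1/(1-eps_total_g2):.4f}")
```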

  11. Method and apparatus for determining the presence or absence of a pour point depressant additive in hydrocarbon liquids

    SciTech Connect

    Rummel, J.D.

    1986-07-29

    A method is described of determining the presence or absence of a pour point depressant additive in a hydrocarbon liquid derived from petroleum, the liquid containing paraffin wax, comprising the steps of: (a) cooling a sample of the liquid at a predetermined cooling rate from a temperature substantially above the cloud point temperature to a temperature substantially below the cloud point temperature; (b) monitoring the slope of the cooling rate curve and noting the points at which a deflection in the curve begins and ends; (c) determining the time interval between the beginning and ending points of the deflection of the curve, and (d) comparing the determined time interval to a reference time interval, associated with the predetermined cooling rate, so as to establish whether the determined time interval is less than or greater than the reference time interval thereby establishing the presence or absence, respectively, of a pour point depressant additive.

  12. Influence of additives on the increase of the heating value of Bayah's coal with upgrading brown coal (UBC) method

    NASA Astrophysics Data System (ADS)

    Heriyanto, Heri; Widya Ernayati, K.; Umam, Chairul; Margareta, Nita

    2015-12-01

    UBC (upgrading brown coal) is a method of improving the quality of coal by using oil as an additive. Through processing in the oil medium, not only does the calorific value increase, but the product also acquires water-repellent properties and a reduced tendency toward spontaneous combustion. The results showed a decrease in the moisture content of natural Bayah coal of up to 69% and an increase in calorific value of up to 21.2%. The increased calorific value and reduced moisture content are caused by the oil replacing the water molecules and sealing the pores of the coal, and by the carbon atoms in the oil adding to the carbon percentage of the coal. With waste lubricant as the additive, the produced coal reached an even higher calorific value, increasing by up to 23.8%, while the moisture content was reduced by up to 69.45%.

  13. Compact integration factor methods for complex domains and adaptive mesh refinement.

    PubMed

    Liu, Xinfeng; Nie, Qing

    2010-08-10

    Implicit integration factor (IIF) method, a class of efficient semi-implicit temporal scheme, was introduced recently for stiff reaction-diffusion equations. To reduce cost of IIF, compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinate, due to the compact representation for the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF for other curvilinear coordinates through examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has similar computational efficiency and stability properties as the cIIF in Cartesian coordinate. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition for cIIF. Because the second order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply those methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed.

  14. Compact integration factor methods for complex domains and adaptive mesh refinement

    PubMed Central

    Liu, Xinfeng; Nie, Qing

    2010-01-01

    Implicit integration factor (IIF) method, a class of efficient semi-implicit temporal scheme, was introduced recently for stiff reaction-diffusion equations. To reduce cost of IIF, compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinate, due to the compact representation for the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF for other curvilinear coordinates through examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has similar computational efficiency and stability properties as the cIIF in Cartesian coordinate. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition for cIIF. Because the second order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply those methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed. PMID:20543883
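
    A plain (non-compact) second-order IIF step can be sketched as below for a 1-D reaction-diffusion equation on a periodic Cartesian grid: diffusion is treated exactly through the matrix exponential, while the local reaction term is handled implicitly by fixed-point iteration. The compact storage trick of cIIF and the curvilinear/AMR extensions described in the paper are not reproduced; the logistic reaction term and all parameters are illustrative.

```python
# Non-compact second-order IIF sketch for u_t = D u_xx + f(u), periodic 1-D grid.
# Scheme: u_{n+1} = exp(A*dt) * (u_n + dt/2 * f(u_n)) + dt/2 * f(u_{n+1}).
import numpy as np
from scipy.linalg import expm

n, L, D, dt, steps = 128, 1.0, 1e-3, 0.05, 200
dx = L / n

# Periodic 1-D Laplacian and its exponential integration factor exp(A*dt).
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
A[0, -1] = A[-1, 0] = 1.0
A *= D / dx**2
E = expm(A * dt)

f = lambda u: u * (1.0 - u)                     # logistic reaction term (example choice)

x = np.linspace(0.0, L, n, endpoint=False)
u = 0.1 + 0.05 * np.cos(2.0 * np.pi * x)        # initial condition

for _ in range(steps):
    rhs = E @ (u + 0.5 * dt * f(u))             # explicit part, pushed through exp(A*dt)
    u_new = u.copy()
    for _ in range(10):                         # fixed-point solve of the implicit reaction part
        u_new = rhs + 0.5 * dt * f(u_new)
    u = u_new

print("mean / min / max after integration:", u.mean().round(4), u.min().round(4), u.max().round(4))
```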

  15. Effect of Ag nanowire addition into nanoparticle paste on the conductivity of Ag patterns printed by gravure offset method.

    PubMed

    Ok, Ki-Hun; Lee, Chan-Jae; Kwak, Min-Gi; Choi, Duck-Kyun; Kim, Kwang-Seok; Jung, Seung-Boo; Kim, Jong-Woong

    2014-11-01

    This paper focuses on the effect of Ag nanowire addition to a commercial Ag nanopaste and the evaluation of the printability of the mixed paste by the gravure offset printing method. Ag nanowires were synthesized by a modified polyol method, and a small amount was added to a commercial metallic paste based on Ag nanoparticles 50 nm in diameter. Two annealing temperatures were selected for comparison, and electrical conductivity was measured by the four-point probe method. The hybrid mixture could be printed by the gravure offset method to pattern fine lines as narrow as 15 μm with sharp edges and little spreading. The addition of the Ag nanowires was highly effective in enhancing the electrical conductivity of printed lines annealed at a low temperature (150 °C), while the effect was somewhat diluted in the case of high-temperature annealing (200 °C). The experimental results are discussed in terms of the conduction mechanism in the printed conductive circuits, with a schematic description of the electron flows in the printed lines.

  16. Stretching human mesenchymal stromal cells on stiffness-customized collagen type I generates a smooth muscle marker profile without growth factor addition

    PubMed Central

    Rothdiener, Miriam; Hegemann, Miriam; Uynuk-Ool, Tatiana; Walters, Brandan; Papugy, Piruntha; Nguyen, Phong; Claus, Valentin; Seeger, Tanja; Stoeckle, Ulrich; Boehme, Karen A.; Aicher, Wilhelm K.; Stegemann, Jan P.; Hart, Melanie L.; Kurz, Bodo; Klein, Gerd; Rolauffs, Bernd

    2016-01-01

    Matrix elasticity and cyclic stretch have each been investigated for inducing mesenchymal stromal cell (MSC) differentiation towards the smooth muscle cell (SMC) lineage, but not in combination. We hypothesized that combining lineage-specific stiffness with cyclic stretch would result in a significantly increased expression of SMC markers, compared to non-stretched controls. First, we generated dense collagen type I sheets by mechanically compressing collagen hydrogels. Atomic force microscopy revealed a nanoscale stiffness range known to support myogenic differentiation. Further characterization revealed viscoelasticity and stable biomechanical properties under cyclic stretch, with >99% of adherent human MSCs remaining viable. MSCs on collagen sheets demonstrated significantly increased mRNA, but not protein, expression of SMC markers compared with MSCs on culture flasks. However, cyclic stretch of MSCs on collagen sheets significantly increased both mRNA and protein expression of α-smooth muscle actin, transgelin, and calponin versus plastic and non-stretched sheets. Thus, lineage-specific stiffness and cyclic stretch can be applied together to induce MSC differentiation towards SMCs without the addition of recombinant growth factors or other soluble factors. This represents a novel stimulation method for modulating the phenotype of MSCs towards SMCs that could easily be incorporated into currently available methodologies to obtain more targeted control of the MSC phenotype. PMID:27775041

  17. EVIDENCE ON THE SIMPLE STRUCTURE AND FACTOR INVARIANCE ACHIEVED BY FIVE ROTATIONAL METHODS ON FOUR TYPES OF DATA.

    PubMed

    Dielman, T E; Cattell, R B; Wagner, A

    1972-01-01

    Five methods of factor rotation (Maxplane, Oblimax, Promax, Harris-Kaiser, and Varimax) were applied to four types of data: questionnaire, objective test, a physical problem, and a plasmode. In addition, the Maxplane procedure was followed in each case by Rotoplot-assisted visual rotations. The results were compared with respect to simple structure (hyperplane percentages) and factor invariance (congruence coefficients). It was concluded that, in general, the oblique methods were superior to Varimax in terms of simple structure, although not consistently in terms of factor invariance. Among the oblique methods, the Rotoplot-assisted Maxplane usually resulted in the maximum simple structure at the ±.10 hyperplane width, but not consistently at either of the other two arbitrarily chosen widths. The unassisted Maxplane was generally excelled by the less expensive oblique methods both with respect to hyperplane count and factor invariance. The Harris-Kaiser method was generally more satisfactory in terms of the two criteria combined.

  18. In Situ Hybridization Methods for Mouse Whole Mounts and Tissue Sections with and Without Additional β-Galactosidase Staining

    PubMed Central

    Komatsu, Yoshihiro; Kishigami, Satoshi; Mishina, Yuji

    2014-01-01

    In situ hybridization is a powerful method for detecting endogenous mRNA sequences in morphologically preserved samples. We provide in situ hybridization methods, which are specifically optimized for mouse embryonic samples as whole mounts and section tissues. Additionally, β-Galactosidase (β-gal) is a popular reporter for detecting the expression of endogenous or exogenous genes. We reveal that 6-chloro-3-indoxyl-β-D-galactopyranoside (S-gal) is a more sensitive substrate for β-gal activity than 5-bromo-4-chloro-3-indolyl-β-D-galactoside (X-gal). S-gal is advantageous where β-gal activity is limited including early stage mouse embryos. As a result of the increased sensitivity as well as the color compatibility of S-gal, we successfully combined β-gal staining using S-gal with in situ hybridization using DIG-labeled probes in both whole mounts and sections. PMID:24318810

  19. Short-term salivary acetaldehyde increase due to direct exposure to alcoholic beverages as an additional cancer risk factor beyond ethanol metabolism

    PubMed Central

    2011-01-01

    Background An increasing body of evidence now implicates acetaldehyde as a major underlying factor for the carcinogenicity of alcoholic beverages and especially for oesophageal and oral cancer. Acetaldehyde associated with alcohol consumption is regarded as 'carcinogenic to humans' (IARC Group 1), with sufficient evidence available for the oesophagus, head and neck as sites of carcinogenicity. At present, research into the mechanistic aspects of acetaldehyde-related oral cancer has been focused on salivary acetaldehyde that is formed either from ethanol metabolism in the epithelia or from microbial oxidation of ethanol by the oral microflora. This study was conducted to evaluate the role of the acetaldehyde that is found as a component of alcoholic beverages as an additional factor in the aetiology of oral cancer. Methods Salivary acetaldehyde levels were determined in the context of sensory analysis of different alcoholic beverages (beer, cider, wine, sherry, vodka, calvados, grape marc spirit, tequila, cherry spirit), without swallowing, to exclude systemic ethanol metabolism. Results The rinsing of the mouth for 30 seconds with an alcoholic beverage is able to increase salivary acetaldehyde above levels previously judged to be carcinogenic in vitro, with levels up to 1000 μM in cases of beverages with extreme acetaldehyde content. In general, the highest salivary acetaldehyde concentration was found in all cases in the saliva 30 sec after using the beverages (average 353 μM). The average concentration then decreased at the 2-min (156 μM), 5-min (76 μM) and 10-min (40 μM) sampling points. The salivary acetaldehyde concentration depends primarily on the direct ingestion of acetaldehyde contained in the beverages at the 30-sec sampling, while the influence of the metabolic formation from ethanol becomes the major factor at the 2-min sampling point. Conclusions This study offers a plausible mechanism to explain the increased risk for oral cancer associated with

  20. Changes in diet, cardiovascular risk factors and modelled cardiovascular risk following diagnosis of diabetes: 1-year results from the ADDITION-Cambridge trial cohort

    PubMed Central

    Savory, L A; Griffin, S J; Williams, K M; Prevost, A T; Kinmonth, A-L; Wareham, N J; Simmons, R K

    2014-01-01

    Aims To describe change in self-reported diet and plasma vitamin C, and to examine associations between change in diet and cardiovascular disease risk factors and modelled 10-year cardiovascular disease risk in the year following diagnosis of Type 2 diabetes. Methods Eight hundred and sixty-seven individuals with screen-detected diabetes underwent assessment of self-reported diet, plasma vitamin C, cardiovascular disease risk factors and modelled cardiovascular disease risk at baseline and 1 year (n = 736) in the ADDITION-Cambridge trial. Multivariable linear regression was used to quantify the association between change in diet and cardiovascular disease risk at 1 year, adjusting for change in physical activity and cardio-protective medication. Results Participants reported significant reductions in energy, fat and sodium intake, and increases in fruit, vegetable and fibre intake over 1 year. The reduction in energy was equivalent to an average-sized chocolate bar; the increase in fruit was equal to one plum per day. There was a small increase in plasma vitamin C levels. Increases in fruit intake and plasma vitamin C were associated with small reductions in anthropometric and metabolic risk factors. Increased vegetable intake was associated with an increase in BMI and waist circumference. Reductions in fat, energy and sodium intake were associated with reduction in HbA1c, waist circumference and total cholesterol/modelled cardiovascular disease risk, respectively. Conclusions Improvements in dietary behaviour in this screen-detected population were associated with small reductions in cardiovascular disease risk, independently of change in cardio-protective medication and physical activity. Dietary change may have a role to play in the reduction of cardiovascular disease risk following diagnosis of diabetes. PMID:24102972

  1. A Risk Score with Additional Four Independent Factors to Predict the Incidence and Recovery from Metabolic Syndrome: Development and Validation in Large Japanese Cohorts

    PubMed Central

    Obokata, Masaru; Negishi, Kazuaki; Ohyama, Yoshiaki; Okada, Haruka; Imai, Kunihiko; Kurabayashi, Masahiko

    2015-01-01

    Background: Although many risk factors for metabolic syndrome (MetS) have been reported, there is no clinical score that predicts its incidence. The purposes of this study were to create and validate a risk score for predicting both incidence of and recovery from MetS in a large cohort. Methods: Subjects without MetS at enrollment (n = 13,634) were randomly divided into 2 groups and followed to record the incidence of MetS. We also examined recovery from MetS in the remaining 2,743 individuals with prevalent MetS. Results: During a median follow-up of 3.0 years, 878 subjects in the derivation cohort and 757 in the validation cohort developed MetS. Multiple logistic regression analysis identified 12 independent variables from the derivation cohort, and an initial score for subsequent MetS was created, which showed good discrimination in both the derivation (c-statistic 0.82) and validation cohorts (0.83). The predictability of the initial score for recovery from MetS was tested in the 2,743 subjects with prevalent MetS (906 of whom recovered from MetS), where nine variables (including age, sex, γ-glutamyl transpeptidase, uric acid, and five MetS diagnostic criteria constituents) remained significant. The final score was then created using the nine variables. This score significantly predicted both recovery from MetS (c-statistic 0.70, p<0.001, 78% sensitivity and 54% specificity) and incident MetS (c-statistic 0.80), with an incremental discriminative ability over the model derived from the five factors used in the diagnosis of MetS (continuous net reclassification improvement: 0.35, p<0.001; integrated discrimination improvement: 0.01, p<0.001). Conclusions: We identified four additional independent risk factors associated with subsequent MetS, and developed and validated a risk score to predict both incidence of and recovery from MetS. PMID:26230621
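
    Schematically, a risk score of this kind is built from multivariable logistic regression and its discrimination is summarized by the c-statistic. The sketch below does this on synthetic data with invented predictors (it is not the authors' score or cohort) and assumes scikit-learn is available.

```python
# Schematic only: logistic-regression risk score with c-statistic (ROC AUC)
# reported on derivation and validation halves of a synthetic cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000
X = np.column_stack([
    rng.normal(50, 10, n),     # age (made-up predictor)
    rng.normal(25, 4, n),      # BMI
    rng.normal(40, 20, n),     # gamma-GT
    rng.normal(5.5, 1.2, n),   # uric acid
])
logit = -9.0 + 0.03 * X[:, 0] + 0.15 * X[:, 1] + 0.01 * X[:, 2] + 0.2 * X[:, 3]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))    # simulated incident MetS (~10%)

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

for name, Xs, ys in [("derivation", X_dev, y_dev), ("validation", X_val, y_val)]:
    auc = roc_auc_score(ys, model.predict_proba(Xs)[:, 1])
    print(f"{name} c-statistic = {auc:.3f}")
```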

  2. Influences of synthesis methods and modifier addition on the properties of Ni-based catalysts supported on reticulated ceramic foams

    NASA Astrophysics Data System (ADS)

    Nikolić, Vesna; Kamberović, Željko; Anđić, Zoran; Korać, Marija; Sokić, Miroslav; Maksimović, Vesna

    2014-08-01

    A method of synthesizing Ni-based catalysts supported on α-Al2O3-based foams was developed. The foams were impregnated with aqueous solutions of metal chlorides under an air atmosphere using an aerosol route. Separate procedures involved calcination to form oxides and drying to obtain chlorides on the foam surface. The synthesized samples were subsequently reduced with hydrogen. With respect to the Ni/Al2O3 catalysts, the chloride reduction route enabled the formation of a Ni coating without agglomerates or cracks. Further research included catalyst modification by the addition of Pd, Cu, and Fe. The influences of the additives on the degree of reduction and on the low-temperature reduction effectiveness (533 and 633 K) were examined and compared for the catalysts obtained from oxides and chlorides. Greater degrees of reduction were achieved with chlorides, whereas Pd was the most effective modifier among those investigated. The reduction process was nearly complete at 533 K in the sample that contained 0.1wt% Pd. A lower reduction temperature was utilized, and the calcination step was avoided, which may enhance the economical and technological aspects of the developed catalyst production method.

  3. Standard addition method for the determination of pharmaceutical residues in drinking water by SPE-LC-MS/MS.

    PubMed

    Cimetiere, Nicolas; Soutrel, Isabelle; Lemasle, Marguerite; Laplanche, Alain; Crocq, André

    2013-01-01

    The study of the occurrence and fate of pharmaceutical compounds in drinking water or waste water processes has become very popular in recent years. Liquid chromatography with tandem mass spectrometry is a powerful analytical tool often used to determine pharmaceutical residues at trace levels in water. However, many steps may disrupt the analytical procedure and bias the results. A list of 27 environmentally relevant molecules was selected, covering various therapeutic classes (cardiovascular drugs, veterinary and human antibiotics, neuroleptics, non-steroidal anti-inflammatory drugs, hormones, and other miscellaneous pharmaceutical compounds). In this work, a method was developed using ultra performance liquid chromatography coupled to tandem mass spectrometry (UPLC-MS/MS) and solid-phase extraction to determine the concentrations of the 27 targeted pharmaceutical compounds at the nanogram per litre level. The matrix effect was evaluated on water sampled at different treatment stages. Conventional methods with external calibration and internal standard correction were compared with the standard addition method (SAM). Accurate determination of pharmaceutical compounds in drinking water was obtained by the SAM associated with UPLC-MS/MS. The developed method was used to evaluate the occurrence and fate of pharmaceutical compounds in some drinking water treatment plants in the west of France.

  4. Genomic-scale comparison of sequence- and structure-based methods of function prediction: Does structure provide additional insight?

    PubMed Central

    Fetrow, Jacquelyn S.; Siew, Naomi; Di Gennaro, Jeannine A.; Martinez-Yamout, Maria; Dyson, H. Jane; Skolnick, Jeffrey

    2001-01-01

    A function annotation method using the sequence-to-structure-to-function paradigm is applied to the identification of all disulfide oxidoreductases in the Saccharomyces cerevisiae genome. The method identifies 27 sequences as potential disulfide oxidoreductases. All previously known thioredoxins, glutaredoxins, and disulfide isomerases are correctly identified. Three of the 27 predictions are probable false-positives. Three novel predictions, which subsequently have been experimentally validated, are presented. Two additional novel predictions suggest a disulfide oxidoreductase regulatory mechanism for two subunits (OST3 and OST6) of the yeast oligosaccharyltransferase complex. Based on homology, this prediction can be extended to a potential tumor suppressor gene, N33, in humans, whose biochemical function was not previously known. Attempts to obtain a folded, active N33 construct to test the prediction were unsuccessful. The results show that structure prediction coupled with biochemically relevant structural motifs is a powerful method for the function annotation of genome sequences and can provide more detailed, robust predictions than function prediction methods that rely on sequence comparison alone. PMID:11316881

  5. Applicability of a carbamate insecticide multiresidue method for determining additional types of pesticides in fruits and vegetables.

    PubMed

    Krause, R T; August, E M

    1983-03-01

    Several fruits and vegetables were fortified at a low (0.02-0.5 ppm) and at a high (0.1-5 ppm) level with pesticides and with a synergist, and recoveries were determined. Analyses were performed by using 3 steps of a multiresidue method for determining N-methylcarbamates in crops: methanol extraction followed by removal of plant co-extractives by solvent partitioning and chromatography with a charcoal-silanized Celite column. Eleven compounds were determined by using a high performance liquid chromatograph equipped with a reverse phase column and a fluorescence detector. Twelve additional compounds were determined by using a gas-liquid chromatograph equipped with a nonpolar packed column and an electron capture or flame photometric detector. Recoveries of 10 pesticides (azinphos ethyl, azinphos methyl, azinphos methyl oxygen analog, carbaryl, carbofuran, naphthalene acetamide, naphthalene acetic acid methyl ester, napropamide, phosalone, and phosalone oxygen analog) and the synergist piperonyl butoxide, which were determined by high performance liquid chromatography, averaged 100% (range 86-117) at the low fortification level and 102% (range 93-115) at the high fortification level. Quantitative recovery of naphthalene acetamide through the method required that an additional portion of eluting solution be passed through the charcoal column. Recoveries of 7 additional pesticides (dimethoate, malathion, methyl parathion, mevinphos, parathion, phorate oxygen analog, and pronamide), which were determined by gas-liquid chromatography (GLC), averaged 108% (range 100-120) at the low fortification level and 107% (range 99-122) at the high fortification level. DDT, diazinon, dieldrin, phorate, and pirimiphos ethyl, which were determined by GLC, were not quantitatively recovered. PMID:6853408

  6. An Additional Potential Factor for Kidney Stone Formation during Space Flights: Calcifying Nanoparticles (Nanobacteria): A Case Report

    NASA Technical Reports Server (NTRS)

    Jones, Jeffrey A.; Ciftcioglu, Neva; Schmid, Joseph; Griffith, Donald

    2007-01-01

    Spaceflight-induced microgravity appears to be a risk factor for the development of urinary calculi, due to skeletal calcium liberation and other undefined factors, resulting in stone disease in crewmembers during and after spaceflight. Calcifying nanoparticles, or nanobacteria, reproduce at a more rapid rate in simulated microgravity conditions and create external shells of calcium phosphate in the form of apatite. The question arises whether calcifying nanoparticles are niduses for calculi and contribute to the development of clinical stone disease in humans, who possess environmental factors predisposing to the development of urinary calculi and potentially impaired immunological defenses during spaceflight. A case is presented of a urinary calculus passed by an astronaut post-flight, with morphological characteristics of calcifying nanoparticles and staining positive for an antigen unique to calcifying nanoparticles.

  7. Method for interpolating in Bondarenko factor tables and other functions

    SciTech Connect

    Greene, N M

    1982-01-01

    A simple, monotonic interpolation procedure is presented which has several advantages. In addition to its generality, its simplicity makes it particularly easy to use in typical computer applications.
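
    The record does not spell out the procedure itself, so the sketch below uses SciPy's shape-preserving PCHIP interpolator as a hedged stand-in for a simple monotonic interpolation over a made-up Bondarenko self-shielding-factor table.

```python
# Hedged stand-in, not the report's specific scheme: monotonic (shape-preserving)
# interpolation of an illustrative Bondarenko factor table (factor versus
# background cross section at one temperature), done in log(sigma_0).
import numpy as np
from scipy.interpolate import PchipInterpolator

sigma_0 = np.array([1.0, 10.0, 100.0, 1000.0, 1.0e4, 1.0e5])    # barns (illustrative)
f_table = np.array([0.30, 0.45, 0.70, 0.90, 0.98, 1.00])        # Bondarenko factors (illustrative)

# PCHIP preserves monotonicity, so interpolated factors stay between table values.
interp = PchipInterpolator(np.log(sigma_0), f_table)
query = 250.0
print(f"f(sigma_0 = {query} b) ~ {float(interp(np.log(query))):.3f}")
```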

  8. Stability of Q-Factors across Two Data Collection Methods.

    ERIC Educational Resources Information Center

    Daniel, Larry G.

    The purpose of the present study was to determine how two different data collection techniques would affect the Q-factors derived from several factor analytic procedures. Faculty members (N=146) from seven middle schools responded to 61 items taken from an instrument designed to measure aspects of an idealized middle school culture; the instrument…

  9. Employing Lead Thiocyanate Additive to Reduce the Hysteresis and Boost the Fill Factor of Planar Perovskite Solar Cells.

    PubMed

    Ke, Weijun; Xiao, Chuanxiao; Wang, Changlei; Saparov, Bayrammurad; Duan, Hsin-Sheng; Zhao, Dewei; Xiao, Zewen; Schulz, Philip; Harvey, Steven P; Liao, Weiqiang; Meng, Weiwei; Yu, Yue; Cimaroli, Alexander J; Jiang, Chun-Sheng; Zhu, Kai; Al-Jassim, Mowafak; Fang, Guojia; Mitzi, David B; Yan, Yanfa

    2016-07-01

    Lead thiocyanate in the perovskite precursor can increase the grain size of a perovskite thin film and reduce the conductivity of the grain boundaries, leading to perovskite solar cells with reduced hysteresis and enhanced fill factor. A planar perovskite solar cell with grain boundary and interface passivation achieves a steady-state efficiency of 18.42%.

  10. Determination of small-field correction factors for cylindrical ionization chambers using a semiempirical method

    NASA Astrophysics Data System (ADS)

    Park, Kwangwoo; Bak, Jino; Park, Sungho; Choi, Wonhoon; Park, Suk Won

    2016-02-01

    A semiempirical method based on the averaging effect of the sensitive volumes of different air-filled ionization chambers (ICs) was employed to approximate the correction factors for the change in beam quality produced by the difference in size between the reference field and small fields. We measured the output factors using several cylindrical ICs and calculated the correction factors using a mathematical method similar to deconvolution; in the method, we modeled the variable and inhomogeneous energy fluence function within the chamber cavity. The parameters of the modeled function and the correction factors were determined by solving a developed system of equations, on the basis of the measurement data and the geometry of the chambers. Further, Monte Carlo (MC) computations were performed using the Monaco® treatment planning system to validate the proposed method. The determined correction factors k_{Q_msr,Q}^{f_smf,f_ref} were comparable to the values derived from the MC computations performed using Monaco®. For example, for a 6 MV photon beam and a field size of 1 × 1 cm², k_{Q_msr,Q}^{f_smf,f_ref} was calculated to be 1.125 for a PTW 31010 chamber and 1.022 for a PTW 31016 chamber. On the other hand, the k_{Q_msr,Q}^{f_smf,f_ref} values determined from the MC computations were 1.121 and 1.031, respectively; the difference between the proposed method and the MC computation is less than 2%. In addition, we determined the k_{Q_msr,Q}^{f_smf,f_ref} values for PTW 30013, PTW 31010, PTW 31016, IBA FC23-C, and IBA CC13 chambers as well. We devised a method for determining k_{Q_msr,Q}^{f_smf,f_ref} from both the measurement of the output factors and model-based mathematical computation. The proposed method can be useful in cases where MC simulation is not applicable in clinical settings.
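
    The physical effect behind the correction can be illustrated with a much-simplified calculation (it is not the authors' system of equations): for an assumed Gaussian lateral dose profile, the volume-averaging part of the correction is the ratio of the central-axis dose to the dose averaged over the cavity length, and it grows with cavity size. All profile widths and cavity lengths below are illustrative.

```python
# Simplified illustration of volume averaging in a small field: the chamber reads
# the dose averaged over its cavity, not the central-axis dose, so the
# volume-averaging contribution to the correction is D(0) / <D> over the cavity.
import numpy as np

def volume_averaging_factor(field_sigma_mm, cavity_length_mm, n=2001):
    """Central-axis dose divided by the dose averaged over the cavity length."""
    x = np.linspace(-cavity_length_mm / 2.0, cavity_length_mm / 2.0, n)
    profile = np.exp(-0.5 * (x / field_sigma_mm) ** 2)     # assumed lateral dose profile
    return 1.0 / profile.mean()

# Illustrative numbers only: a small field modelled with sigma = 5 mm, compared
# for a longer and a shorter chamber cavity.
for label, length in [("longer cavity (6.5 mm, illustrative)", 6.5),
                      ("shorter cavity (2.9 mm, illustrative)", 2.9)]:
    print(f"{label}: volume-averaging contribution ~ {volume_averaging_factor(5.0, length):.3f}")
```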

  11. Confirmatory factor analysis and factorial invariance analysis of the adolescent self-report Strengths and Difficulties Questionnaire: how important are method effects and minor factors?

    PubMed

    van de Looij-Jansen, Petra M; Goedhart, Arnold W; de Wilde, Erik J; Treffers, Philip D A

    2011-06-01

    OBJECTIVES. This study examined the factor structure of the self-report Strengths and Difficulties Questionnaire, paying special attention to the number of factors and to negative effects of reverse-worded items and minor factors within the subscales on model fit. Furthermore, factorial invariance across gender, age, level of education, and ethnicity was investigated. DESIGN. Data were obtained from the Youth Health Monitor Rotterdam, a community-based health surveillance system. METHODS. The sample consisted of 11,881 pupils aged 11-16 years. In addition to the original five-factor model, a factor model with the number of factors based on parallel analysis and the scree test was investigated. Confirmatory factor analysis for ordered-categorical measures was applied to examine the goodness-of-fit and factorial invariance of the factor models. RESULTS. After allowing reverse-worded items to cross-load on the prosocial behaviour factor and adding error correlations, a good fit to the data was found for the original five-factor model (emotional symptoms, conduct problems, hyperactivity-inattention, peer problems, prosocial behaviour) and a model with four factors (emotional symptoms and peer problems, conduct problems, hyperactivity-inattention, prosocial behaviour). Factorial invariance across gender, age, level of education, and ethnicity was found for the final five- and four-factor models, except for the prosocial factor of the four-factor model, which showed partial invariance across gender. CONCLUSIONS. While support was found for both models, the final five-factor model is theoretically more plausible and gained additional support as the original scales emotional problems and peer problems showed different relations with gender, educational level, and ethnicity. PMID:21545447

  12. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    SciTech Connect

    Hayes, R.B.; Haskell, E.H.; Kenner, G.H.

    1996-01-01

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining the optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation, and linearity of the dose response is also assumed. The optimum spacing and number of points necessary for minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples, the optimal spacing is shown to be a concentration of points near the zero dose value, with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput.
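
    The qualitative conclusion can be checked with a small Monte Carlo comparison of dose-placement designs, as sketched below; it is not the authors' closed-form optimisation, and the dose range, noise level, and number of spectra are arbitrary.

```python
# Monte Carlo comparison of additive-dose designs under a linear dose response and
# fixed Gaussian noise: (a) doses spread evenly vs (b) points concentrated at zero
# dose plus a single high added dose. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(5)
true_dose, slope, sigma, n_spectra, d_max = 0.5, 1.0, 0.05, 12, 10.0   # arbitrary units

designs = {
    "evenly spaced":         np.linspace(0.0, d_max, n_spectra),
    "clustered 9@0 + 3@max": np.array([0.0] * 9 + [d_max] * 3),
}

def estimate(added_doses):
    signal = slope * (true_dose + added_doses) + rng.normal(0.0, sigma, added_doses.size)
    b, a = np.polyfit(added_doses, signal, 1)
    return a / b                                    # estimated accrued dose = -x_intercept

for name, doses in designs.items():
    est = np.array([estimate(doses) for _ in range(20000)])
    print(f"{name:>22}: mean = {est.mean():.3f}, sd = {est.std(ddof=1):.4f}")
```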

  13. The Monte Carlo method as a tool for statistical characterisation of differential and additive phase shifting algorithms

    NASA Astrophysics Data System (ADS)

    Miranda, M.; Dorrío, B. V.; Blanco, J.; Diz-Bugarín, J.; Ribas, F.

    2011-01-01

    Several metrological applications base their measurement principle on the phase sum or difference between two patterns, one original, s(r,φ), and another modified, t(r,φ+Δφ). Additive or differential phase shifting algorithms directly recover the sum 2φ+Δφ or the difference Δφ of the phases without requiring prior calculation of the individual phases. These algorithms can be constructed, for example, from a suitable combination of known phase shifting algorithms. Little has been written on the design, analysis and error compensation of these new two-stage algorithms. Previously we have used computer simulation to study, in a linear approach or with a filtering process in reciprocal space, the response of several families of these algorithms to the main error sources. In this work we present an error analysis that uses Monte Carlo simulation to achieve results in good agreement with those obtained with spatial and temporal methods.
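
    In the same spirit, the sketch below runs a Monte Carlo characterisation of a standard four-step phase-shifting estimator under additive intensity noise and a detuned phase shifter; an additive or differential two-stage algorithm from the paper would simply replace the recover_phase function. All fringe parameters are arbitrary.

```python
# Monte Carlo characterisation sketch for a phase-shifting algorithm: simulate
# noisy fringe frames, recover the phase, and report bias and spread of the error.
import numpy as np

rng = np.random.default_rng(6)

def recover_phase(frames):
    i1, i2, i3, i4 = frames
    return np.arctan2(i4 - i2, i1 - i3)             # classic 4-step estimator

def one_trial(true_phase, a=1.0, b=0.6, noise=0.02, miscal=0.0):
    shifts = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi * (1.0 + miscal)
    frames = [a + b * np.cos(true_phase + s) + rng.normal(0.0, noise) for s in shifts]
    return recover_phase(frames)

true_phase = 1.0
estimates = np.array([one_trial(true_phase) for _ in range(20000)])
errors = np.angle(np.exp(1j * (estimates - true_phase)))        # wrap to (-pi, pi]
print(f"bias = {errors.mean():.5f} rad,  std = {errors.std(ddof=1):.5f} rad")

# Repeat with a 5% phase-shifter miscalibration to see a systematic error source.
estimates_mc = np.array([one_trial(true_phase, miscal=0.05) for _ in range(20000)])
errors_mc = np.angle(np.exp(1j * (estimates_mc - true_phase)))
print(f"with 5% detuning: bias = {errors_mc.mean():.5f} rad,  std = {errors_mc.std(ddof=1):.5f} rad")
```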

  14. Ionic liquids as mobile phase additives for feasible assay of naphazoline in pharmaceutical formulation by HPTLC-UV-densitometric method.

    PubMed

    Marszałł, Michał Piotr; Sroka, Wiktor Dariusz; Balinowska, Aleksandra; Mieszkowski, Dominik; Koba, Marcin; Kaliszan, Roman

    2013-07-01

    A specific and reliable high-performance thin layer chromatography method with densitometry detection has been developed for the determination of naphazoline nitrate in nasal drops. The best separation of the basic analyte, without spot tailing, was achieved by using a mobile phase composed of acetonitrile-water (60:40, v/v), adding 1.5 % (v/v) imidazolium-class ionic liquid and covering the plates with a stationary phase based on RP-18 with F254S (10 × 20 cm). The presented results confirm that imidazolium tetrafluoroborate ionic liquids are efficient suppressors of free silanols, which are considered to be responsible for troublesome and irreproducible chromatographic determinations of basic compounds. The developed chromatographic system was found to be convenient in use and to provide a repeatable assay of naphazoline nitrate in nasal drops, which could not be obtained with the use of standard silanol suppressing mobile phase additives such as triethylamine or dimethyloctylamine.

  15. Undeniable Confirmation of the syn-Addition Mechanism for Metal-Free Diboration by Using the Crystalline Sponge Method.

    PubMed

    Cuenca, Ana B; Zigon, Nicolas; Duplan, Vincent; Hoshino, Manabu; Fujita, Makoto; Fernández, Elena

    2016-03-24

    The stereochemical outcome of the recently developed metal-free 1,2-diboration of aliphatic alkenes has, until now, only been elucidated by indirect means (e.g. derivatization). This is because classical conformational analysis of the resulting 1,2-diboranes is not viable; in the (1)H NMR spectrum the relevant (1)H resonances are broadened by (11)B, and the occurrence of the products as oily compounds precludes X-ray crystallographic analysis. Herein, the crystalline sponge method is used to display the crystal structures of the diboronic esters formed from internal E and Z olefins, evidencing the stereospecific syn addition mechanism of the reaction, which is fully consistent with the prediction from DFT calculations.

  16. Factors affecting the microbial and chemical composition of silage. III. Effect of urea additions on maize silage.

    PubMed

    Mahmoud, S A; Abd-el-Hafez, A; Zaki, M M; Saleh, E A

    1978-01-01

    The effect of urea additions on the microbiological and chemical properties of silage produced from young maize plants (Darawa stage) was studied. Urea treatments, i.e., 0.25%, 0.50%, 0.75%, and 1.00%, stimulated higher densities of the desired microorganisms than the control, while undesired organisms (proteolytic and saccharolytic anaerobes) showed lower counts. Addition of 0.25 to 0.50% of urea resulted in the production of high quality silage with a pleasant smell and high nutritive value, as confirmed by the various microbiological and chemical analyses conducted. Higher levels of urea (0.75 and 1.00%) decreased the quality of the product. PMID:29417

  17. Effects of the method of apatite seed crystals addition on setting reaction of α-tricalcium phosphate based apatite cement.

    PubMed

    Tsuru, Kanji; Ruslin; Maruta, Michito; Matsuya, Shigeki; Ishikawa, Kunio

    2015-10-01

    An appropriate setting time is an important parameter that determines the effectiveness of apatite cement (AC) for clinical application, given the issue of inflammatory responses to crystalline particles if the AC fails to set. To this end, the present study analyzes the effects of the method of apatite seed crystal addition on the setting reaction of α-tricalcium phosphate (α-TCP) based AC. Two ACs, both consisting of α-TCP and calcium-deficient hydroxyapatite (cdHAp), were analyzed in this study. In one AC, cdHAp was added externally to α-TCP; this AC is abbreviated AC(EA). In the other AC, α-TCP was partially hydrolyzed to form cdHAp on the surface of the α-TCP; this AC is referred to as AC(PH). The results indicate a decrease in the setting time of both ACs with the addition of cdHAp. For a given amount of added cdHAp, AC(PH) showed a relatively shorter setting time than AC(EA). In addition, the mechanical strength of the set AC(PH) was higher than that of the set AC(EA). These properties of AC(PH) were attributed to the predominant crystal growth of cdHAp in the vicinity of the α-TCP particle surface. Accordingly, it can be concluded that partial hydrolysis of α-TCP may be a better approach for adding low-crystallinity cdHAp to α-TCP based AC.

  18. A square-wave adsorptive stripping voltammetric method for the determination of Amaranth, a food additive dye.

    PubMed

    Alghamdi, Ahmad H

    2005-01-01

    Square-wave adsorptive stripping voltammetric (AdSV) determination of trace concentrations of the azo coloring agent Amaranth is described. The analytical methodology is based on adsorptive preconcentration of the dye on the hanging mercury drop electrode, followed by initiation of a negative sweep. In a pH 10 carbonate supporting electrolyte, Amaranth gave a well-defined and sensitive AdSV peak at -518 mV. The electroanalytical determination of this azo dye was found to be optimal in carbonate buffer (pH 10) under the following experimental conditions: accumulation time, 120 s; accumulation potential, 0.0 V; scan rate, 600 mV/s; pulse amplitude, 90 mV; and frequency, 50 Hz. Under these optimized conditions the AdSV peak current was proportional to concentration over the range 1 x 10(-8)-1.1 x 10(-7) mol/L (r = 0.999), with a detection limit of 1.7 x 10(-9) mol/L (1.03 ppb). This analytical approach possesses enhanced sensitivity compared with conventional liquid chromatography or spectrophotometry, and it is simple and fast. The precision of the method, expressed as the relative standard deviation, was 0.23%, whereas the accuracy, expressed as the mean recovery, was 104%. Possible interferences by several substances usually present as food additives, such as azo dyes (E110, E102), gelatin, natural and artificial sweeteners, preservatives, and antioxidants, were also investigated. The developed electroanalytical method was applied to the determination of Amaranth in soft drink samples, and the results were compared with those obtained by a reference spectrophotometric method. Statistical analysis (paired t-test) of these data showed that the results of the 2 methods compared favorably.

  19. Developmental Testing of Habitability and Human Factors Tools and Methods During Neemo 15

    NASA Technical Reports Server (NTRS)

    Thaxton, S. S.; Litaker, H. L., Jr.; Holden, K. L.; Adolf, J. A.; Pace, J.; Morency, R. M.

    2011-01-01

    Currently, no established methods exist to collect real-time human factors and habitability data while crewmembers are living aboard the International Space Station (ISS), traveling aboard other space vehicles, or living in remote habitats. Human factors and habitability data regarding space vehicles and habitats are currently acquired at the end of missions during postflight crew debriefs. These debriefs occur weeks or often longer after events have occurred, forcing a significant reliance on incomplete and imperfect human memory. Without a means to collect real-time data, small issues may have a cumulative effect and continue to cause crew frustration and inefficiencies. Without timely and appropriate reporting methodologies, issues may be repeated or lost. TOOL DEVELOPMENT AND EVALUATION: As part of a directed research project (DRP) aiming to develop and validate tools and methods for collecting near real-time human factors and habitability data, a preliminary set of tools and methods was developed. These tools and methods were evaluated during the NASA Extreme Environments Mission Operations (NEEMO) 15 mission in October 2011. Two versions of a software tool were used to collect observational data from NEEMO crewmembers, who also used targeted strategies for collecting observations with video cameras. The space habitability observation reporting tool (SHORT) was created based on a tool previously developed by NASA to capture human factors and habitability issues during spaceflight. SHORT uses a web-based interface that allows users to enter a text description of any observations they wish to report and assign a priority level if changes are needed. In addition to the web-based format, a mobile Apple (iOS) format was implemented, referred to as iSHORT. iSHORT allows users to provide text, audio, photograph, and video data to report observations. iSHORT can be deployed on an iPod Touch, iPhone, or iPad; for NEEMO 15, the app was provided on an iPad2.

  20. c-Fos: an AP-1 transcription factor with an additional cytoplasmic, non-genomic lipid synthesis activation capacity.

    PubMed

    Caputto, Beatriz L; Cardozo Gizzi, Andrés M; Gil, Germán A

    2014-09-01

    The mechanisms that co-ordinately activate lipid synthesis when high rates of membrane biogenesis are needed to support cell growth are largely unknown. c-Fos, a well known AP-1 transcription factor, has emerged as a unique protein with the capacity to associate with specific enzymes of the pathway of synthesis of phospholipids at the endoplasmic reticulum and activate their synthesis to accompany genomic decisions of growth. Herein, we discuss this cytoplasmic, non-genomic effect of c-Fos in the context of other mechanisms that have been proposed to regulate lipid synthesis.

  1. Aerosol based direct-write micro-additive fabrication method for sub-mm 3D metal-dielectric structures

    NASA Astrophysics Data System (ADS)

    Rahman, Taibur; Renaud, Luke; Heo, Deuk; Renn, Michael; Panat, Rahul

    2015-10-01

    The fabrication of 3D metal-dielectric structures at sub-mm length scale is highly important in order to realize low-loss passives and GHz wavelength antennas with applications in wearable and Internet-of-Things (IoT) devices. The inherent 2D nature of lithographic processes severely limits the available manufacturing routes to fabricate 3D structures. Further, the lithographic processes are subtractive and require the use of environmentally harmful chemicals. In this letter, we demonstrate an additive manufacturing method to fabricate 3D metal-dielectric structures at sub-mm length scale. A UV curable dielectric is dispensed from an Aerosol Jet system at 10-100 µm length scale and instantaneously cured to build complex 3D shapes at a length scale  <1 mm. A metal nanoparticle ink is then dispensed over the 3D dielectric using a combination of jetting action and tilted dispense head, also using the Aerosol Jet technique and at a length scale 10-100 µm, followed by the nanoparticle sintering. Simulation studies are carried out to demonstrate the feasibility of using such structures as mm-wave antennas. The manufacturing method described in this letter opens up the possibility of fabricating an entirely new class of custom-shaped 3D structures at a sub-mm length scale with potential applications in 3D antennas and passives.

  2. Development and Validation of HPLC Method for the Simultaneous Determination of Five Food Additives and Caffeine in Soft Drinks.

    PubMed

    Aşçı, Bürge; Dinç Zor, Şule; Aksu Dönmez, Özlem

    2016-01-01

    Box-Behnken design was applied to optimize high performance liquid chromatography (HPLC) conditions for the simultaneous determination of potassium sorbate, sodium benzoate, carmoisine, allura red, ponceau 4R, and caffeine in commercial soft drinks. The experimental variables chosen were pH (6.0-7.0), flow rate (1.0-1.4 mL/min), and mobile phase ratio (85-95% acetate buffer). Resolution values of all peak pairs were used as a response. The stationary phase was an Inertsil OctaDecylSilane (ODS) 3V reverse phase column (250 × 4.6 mm, 5 μm). The detection was performed at 230 nm. Optimal values were found to be pH 6.0, a flow rate of 1.0 mL/min, and a 95% mobile phase ratio; the method was validated by assessing the linearity (r2 > 0.9962), accuracy (recoveries ≥ 95.75%), precision (intraday variation ≤ 1.923%, interday variation ≤ 1.950%), limits of detection (LODs), and limits of quantification (LOQs). LODs and LOQs for analytes were in the range of 0.10-0.19 μg/mL and 0.33-0.63 μg/mL, respectively. The proposed method was applied successfully for the simultaneous determination of the mixtures of five food additives and caffeine in soft drinks. PMID:26989415
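
    The Box-Behnken optimization described above can be illustrated with a short sketch. This is not the authors' code: the factor ranges follow the abstract, the design is the standard three-factor Box-Behnken layout (12 edge midpoints plus centre runs), and the response values are synthetic placeholders used only to show how a quadratic response-surface model would typically be fitted.

    ```python
    # Illustrative sketch only: a 3-factor Box-Behnken design and a quadratic
    # response-surface fit of the kind used to optimize HPLC conditions.
    # Factor ranges follow the abstract; the response values are synthetic.
    import numpy as np

    # Coded Box-Behnken design for 3 factors: 12 edge midpoints + 3 centre runs.
    edges = []
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0, 0, 0]
                run[i], run[j] = a, b
                edges.append(run)
    design = np.array(edges + [[0, 0, 0]] * 3, dtype=float)

    # Map coded levels to real settings: pH, flow rate (mL/min), % acetate buffer.
    lows, highs = np.array([6.0, 1.0, 85.0]), np.array([7.0, 1.4, 95.0])
    real_settings = lows + (design + 1) / 2 * (highs - lows)   # shown for the mapping

    # Synthetic response (e.g. a critical resolution value), for illustration only.
    rng = np.random.default_rng(0)
    x1, x2, x3 = design.T
    y = 2.0 - 0.3 * x1 + 0.2 * x3 - 0.15 * x1 ** 2 + rng.normal(0, 0.02, len(design))

    # Fit a full quadratic model in coded units by least squares.
    X = np.column_stack([np.ones(len(design)), x1, x2, x3,
                         x1 * x2, x1 * x3, x2 * x3, x1 ** 2, x2 ** 2, x3 ** 2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("fitted coefficients:", np.round(coef, 3))
    ```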

  4. LV wall segmentation using the variational level set method (LSM) with additional shape constraint for oedema quantification

    NASA Astrophysics Data System (ADS)

    Kadir, K.; Gao, H.; Payne, A.; Soraghan, J.; Berry, C.

    2012-10-01

    In this paper an automatic algorithm for left ventricle (LV) wall segmentation and oedema quantification from T2-weighted cardiac magnetic resonance (CMR) images is presented. The extent of myocardial oedema delineates the ischaemic area-at-risk (AAR) after myocardial infarction (MI). Since the AAR can be used to estimate the amount of salvageable myocardium post-MI, oedema imaging has potential clinical utility in the management of acute MI patients. This paper presents a new scheme based on the variational level set method (LSM) with an additional shape constraint for the segmentation of T2-weighted CMR images. In our approach, shape information of the myocardial wall is utilized to introduce a shape feature of the myocardial wall into the variational level set formulation. The performance of the method is tested using real CMR images (12 patients) and the results of the automatic system are compared to manual segmentation. The mean perpendicular distances between the automatic and manual LV wall boundaries are in the range of 1-2 mm. Bland-Altman analysis on LV wall area indicates there is no consistent bias as a function of LV wall area, with a mean bias of -121 mm2 between individual investigator one (IV1) and LSM, and -122 mm2 between individual investigator two (IV2) and LSM. Furthermore, the oedema quantification demonstrates good correlation when compared to an expert, with an average error of 9.3% for 69 slices of short axis CMR images from 12 patients.
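
    The abstract does not give the exact energy functional, so the following is only an assumed, generic form of a variational level set segmentation with an added shape term, written here to make the idea concrete (the symbols and weighting are illustrative, not taken from the paper):

    $$ E(\phi) \;=\; \mu \int_{\Omega} g\,\delta(\phi)\,\lvert \nabla \phi \rvert \, d\mathbf{x} \;+\; \nu \int_{\Omega} g\, H(-\phi)\, d\mathbf{x} \;+\; \lambda \int_{\Omega} \big( \phi(\mathbf{x}) - \phi_{s}(\mathbf{x}) \big)^{2} \, d\mathbf{x}, $$

    where \( \phi \) is the level set function, \( H \) and \( \delta \) are the Heaviside and Dirac functions, \( g \) is an edge indicator computed from the CMR image, \( \phi_{s} \) encodes a prior (roughly annular) myocardial wall shape, and \( \lambda \) weights the shape constraint against the data-driven terms. Gradient descent on \( E(\phi) \) evolves the contour toward image edges while penalizing departures from the expected wall shape.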

  5. Development and Validation of HPLC Method for the Simultaneous Determination of Five Food Additives and Caffeine in Soft Drinks

    PubMed Central

    Aşçı, Bürge; Dinç Zor, Şule; Aksu Dönmez, Özlem

    2016-01-01

    Box-Behnken design was applied to optimize high performance liquid chromatography (HPLC) conditions for the simultaneous determination of potassium sorbate, sodium benzoate, carmoisine, allura red, ponceau 4R, and caffeine in commercial soft drinks. The experimental variables chosen were pH (6.0–7.0), flow rate (1.0–1.4 mL/min), and mobile phase ratio (85–95% acetate buffer). Resolution values of all peak pairs were used as a response. The stationary phase was an Inertsil OctaDecylSilane (ODS) 3V reverse phase column (250 × 4.6 mm, 5 μm). The detection was performed at 230 nm. Optimal values were found to be pH 6.0, a flow rate of 1.0 mL/min, and a 95% mobile phase ratio; the method was validated by assessing the linearity (r2 > 0.9962), accuracy (recoveries ≥ 95.75%), precision (intraday variation ≤ 1.923%, interday variation ≤ 1.950%), limits of detection (LODs), and limits of quantification (LOQs). LODs and LOQs for analytes were in the range of 0.10–0.19 μg/mL and 0.33–0.63 μg/mL, respectively. The proposed method was applied successfully for the simultaneous determination of the mixtures of five food additives and caffeine in soft drinks. PMID:26989415

  6. 42 CFR 136.408 - What are other factors, in addition to the minimum standards of character, that may be considered...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false What are other factors, in addition to the minimum standards of character, that may be considered in determining placement of an individual in a position that involves regular contact with or control over Indian children? 136.408 Section 136.408 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT...

  7. 42 CFR 136.408 - What are other factors, in addition to the minimum standards of character, that may be considered...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false What are other factors, in addition to the minimum standards of character, that may be considered in determining placement of an individual in a position that involves regular contact with or control over Indian children? 136.408 Section 136.408 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT...

  8. Ameliorative effects of telmisartan on the inflammatory response and impaired spatial memory in a rat model of Alzheimer's disease incorporating additional cerebrovascular disease factors.

    PubMed

    Shindo, Taro; Takasaki, Kotaro; Uchida, Kanako; Onimura, Rika; Kubota, Kaori; Uchida, Naoki; Irie, Keiichi; Katsurabayashi, Shutaro; Mishima, Kenichi; Nishimura, Ryoji; Fujiwara, Michihiro; Iwasaki, Katsunori

    2012-01-01

    Telmisartan, an angiotensin type 1 receptor blocker, is used in the management of hypertension to control blood pressure. In addition, telmisartan has a partial agonistic effect on peroxisome proliferator activated receptor γ (PPARγ). Recently, the effects of telmisartan on spatial memory or the inflammatory response were monitored in a mouse model of Alzheimer's disease (AD). However, to date, no studies have investigated the ameliorative effects of telmisartan on impaired spatial memory and the inflammatory response in an AD animal model incorporating additional cerebrovascular disease factors. In this study, we examined the effect of telmisartan on spatial memory impairment and the inflammatory response in a rat model of AD incorporating additional cerebrovascular disease factors. Rats were subjected to cerebral ischemia and an intracerebroventricular injection of oligomeric or aggregated amyloid-β (Aβ). Oral administration of telmisartan (0.3, 1, 3 mg/kg/d) seven days after ischemia and Aβ treatment resulted in better performance in the eight arm radial maze task in a dose-dependent manner. Telmisartan also reduced tumor necrosis factor α mRNA expression in the hippocampal region of rats with impaired spatial memory. These effects of telmisartan were antagonized by GW9662, an antagonist of PPARγ. These results suggest that telmisartan has ameliorative effects on the impairment of spatial memory in a rat model of AD incorporating additional cerebrovascular disease factors via its anti-inflammatory effect.

  9. Breeding site selection by coho salmon (Oncorhynchus kisutch) in relation to large wood additions and factors that influence reproductive success

    USGS Publications Warehouse

    Clark, Steven M.; Dunham, Jason B.; McEnroe, Jeffery R.; Lightcap, Scott W.

    2014-01-01

    The fitness of female Pacific salmon (Oncorhynchus spp.) with respect to breeding behavior can be partitioned into at least four fitness components: survival to reproduction, competition for breeding sites, success of egg incubation, and suitability of the local environment near breeding sites for early rearing of juveniles. We evaluated the relative influences of habitat features linked to these fitness components with respect to selection of breeding sites by coho salmon (Oncorhynchus kisutch). We also evaluated associations between breeding site selection and additions of large wood, as the latter were introduced into the study system as a means of restoring habitat conditions to benefit coho salmon. We used a model selection approach to organize specific habitat features into groupings reflecting fitness components and influences of large wood. Results of this work suggest that female coho salmon likely select breeding sites based on a wide range of habitat features linked to all four hypothesized fitness components. More specifically, model parameter estimates indicated that breeding site selection was most strongly influenced by proximity to pool-tail crests and deeper water (mean and maximum depths). Linkages between large wood and breeding site selection were less clear. Overall, our findings suggest that breeding site selection by coho salmon is influenced by a suite of fitness components in addition to the egg incubation environment, which has been the emphasis of much work in the past.

  10. Structural basis for the requirement of additional factors for MLL1 SET domain activity and recognition of epigenetic marks.

    PubMed

    Southall, Stacey M; Wong, Poon-Sheng; Odho, Zain; Roe, S Mark; Wilson, Jon R

    2009-01-30

    The mixed-lineage leukemia protein MLL1 is a transcriptional regulator with an essential role in early development and hematopoiesis. The biological function of MLL1 is mediated by the histone H3K4 methyltransferase activity of the carboxyl-terminal SET domain. We have determined the crystal structure of the MLL1 SET domain in complex with cofactor product AdoHcy and a histone H3 peptide. This structure indicates that, in order to form a well-ordered active site, a highly variable but essential component of the SET domain must be repositioned. To test this idea, we compared the effect of the addition of MLL complex members on methyltransferase activity and show that both RbBP5 and Ash2L but not Wdr5 stimulate activity. Additionally, we have determined the effect of posttranslational modifications on histone H3 residues downstream and upstream from the target lysine and provide a structural explanation for why H3T3 phosphorylation and H3K9 acetylation regulate activity. PMID:19187761

  11. Epidermal growth factor (EGF)-receptor is phosphorylated at threonine-654 in A431 cells following EGF addition

    SciTech Connect

    Whiteley, B.; Glaser, L.

    1986-05-01

    It has been shown that activation of protein kinase C by tumor-promoting phorbol diesters causes phosphorylation of the EGF-receptor at threonine-654 and is believed to thereby regulate the EGF receptor tyrosine kinase and EGF binding activity. In the present studies, 32P-labeled A431 cells were treated with and without 10 nM phorbol 12-myristate 13-acetate (PMA), or with 200 ng/ml EGF. Analysis of 32P-labeled EGF receptor tryptic phosphopeptides by reverse-phase HPLC confirmed the known effects of PMA and revealed that EGF caused phosphorylation at threonine-654 as well as at various tyrosine residues. This effect occurred as early as 1 minute after EGF addition and was maximal after 5 minutes. The magnitude of the response appears to be 50% of that produced by a 15 minute treatment with 10 nM PMA. Direct measurement of diacylglycerol using an E. coli diacylglycerol kinase confirmed that EGF-stimulated phosphoinositide turnover could cause very rapid activation of protein kinase C. These results imply that protein kinase C plays a role in negative modulation of EGF-receptor activity following EGF addition to A431 cells.

  12. Comparison of three methods for wind turbine capacity factor estimation.

    PubMed

    Ditkovich, Y; Kuperman, A

    2014-01-01

    Three approaches to calculating the capacity factor of fixed speed wind turbines are reviewed and compared using a case study. The first "quasiexact" approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. On the other hand, the second "analytic" approach employs a continuous probability distribution function, fitted to the wind data, as well as a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically, thus providing valuable insight into the aspects affecting the capacity factor. Moreover, several other merits of wind turbine performance may be derived based on the analytical approach. The third "approximate" approach, valid in the case of Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by employing the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation. PMID:24587755
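
    The "quasiexact" histogram-based estimate can be sketched in a few lines. The wind-speed histogram and power curve below are hypothetical placeholders (a Rayleigh-like distribution and an idealized 2 MW fixed-speed turbine), not data from the paper's case study:

    ```python
    # Sketch of a histogram-based capacity-factor estimate: combine a discrete
    # wind-speed distribution with a discrete power curve. All values are
    # hypothetical placeholders, not the case-study data.
    import numpy as np

    v = np.arange(0, 26, dtype=float)              # wind-speed bin centres, m/s
    v_mean = 7.0                                   # assumed mean wind speed
    pdf = (np.pi * v / (2 * v_mean**2)) * np.exp(-np.pi * v**2 / (4 * v_mean**2))
    pdf /= pdf.sum()                               # normalise the discrete histogram

    # Idealized power curve for a 2 MW turbine (kW per bin).
    rated_kw, v_in, v_rated, v_out = 2000.0, 3.0, 13.0, 25.0
    power = np.where(v < v_in, 0.0,
             np.where(v < v_rated, rated_kw * ((v - v_in) / (v_rated - v_in)) ** 3,
              np.where(v <= v_out, rated_kw, 0.0)))

    capacity_factor = float(np.sum(pdf * power) / rated_kw)
    print(f"estimated capacity factor: {capacity_factor:.2f}")
    ```

    The analytic variant described above replaces the histogram with a continuous Weibull or Rayleigh density and the discrete power curve with a fitted polynomial, so the same sum becomes an integral.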

  13. Comparison of Three Methods for Wind Turbine Capacity Factor Estimation

    PubMed Central

    Ditkovich, Y.; Kuperman, A.

    2014-01-01

    Three approaches to calculating the capacity factor of fixed speed wind turbines are reviewed and compared using a case study. The first “quasiexact” approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. On the other hand, the second “analytic” approach employs a continuous probability distribution function, fitted to the wind data, as well as a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically, thus providing valuable insight into the aspects affecting the capacity factor. Moreover, several other merits of wind turbine performance may be derived based on the analytical approach. The third “approximate” approach, valid in the case of Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by employing the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation. PMID:24587755

  15. Pathophysiology, risk factors, and screening methods for prediabetes in women with polycystic ovary syndrome.

    PubMed

    Gourgari, Evgenia; Spanakis, Elias; Dobs, Adrian Sandra

    2016-01-01

    Polycystic ovary syndrome (PCOS) is a syndrome associated with insulin resistance (IR), obesity, infertility, and increased cardiometabolic risk. This is a descriptive review of several mechanisms that can explain the IR among women with PCOS, other risk factors for the development of diabetes, and the screening methods used for the detection of glucose intolerance in women with PCOS. A few mechanisms can explain IR in women with PCOS, such as obesity, insulin receptor signaling defects, and inhibition of insulin-mediated glucose uptake in adipocytes. Women with PCOS have additional risk factors for the development of glucose intolerance such as family history of diabetes, use of oral contraceptives, anovulation, and age. The Androgen Society in 2007 and the Endocrine Society in 2013 recommended using an oral glucose tolerance test as a screening tool for abnormal glucose tolerance in all women with PCOS. The approach to detection of glucose intolerance among women with PCOS varies among health care providers. Large prospective studies are still needed for the development of guidelines with strong evidence. When assessing risk of future diabetes in women with PCOS, it is important to take into account the method used for screening as well as other risk factors that these women might have. PMID:27570464

  16. Pathophysiology, risk factors, and screening methods for prediabetes in women with polycystic ovary syndrome

    PubMed Central

    Gourgari, Evgenia; Spanakis, Elias; Dobs, Adrian Sandra

    2016-01-01

    Polycystic ovary syndrome (PCOS) is a syndrome associated with insulin resistance (IR), obesity, infertility, and increased cardiometabolic risk. This is a descriptive review of several mechanisms that can explain the IR among women with PCOS, other risk factors for the development of diabetes, and the screening methods used for the detection of glucose intolerance in women with PCOS. A few mechanisms can explain IR in women with PCOS, such as obesity, insulin receptor signaling defects, and inhibition of insulin-mediated glucose uptake in adipocytes. Women with PCOS have additional risk factors for the development of glucose intolerance such as family history of diabetes, use of oral contraceptives, anovulation, and age. The Androgen Society in 2007 and the Endocrine Society in 2013 recommended using an oral glucose tolerance test as a screening tool for abnormal glucose tolerance in all women with PCOS. The approach to detection of glucose intolerance among women with PCOS varies among health care providers. Large prospective studies are still needed for the development of guidelines with strong evidence. When assessing risk of future diabetes in women with PCOS, it is important to take into account the method used for screening as well as other risk factors that these women might have. PMID:27570464

  17. Concept Discovery in Youtube.com Using Factorization Method

    NASA Astrophysics Data System (ADS)

    Leung, Janice Kwan-Wai; Li, Chun Hung

    Social media are not limited to text but also include multimedia. Dailymotion, YouTube, and MySpace are examples of successful sites which allow users to share videos and interact among themselves. Due to the huge number of videos, categorizing videos with similar contents can help users to search videos more efficiently. Unlike the traditional approach of grouping videos into predefined categories, we propose to facilitate video searching with clustering from comment-based matrix factorization and to improve indexing via the generation of new concept words. Factorized component entropies are introduced for handling the difficult problem of vocabulary construction for concept discovery in social media. Since the categorization is learnt from users' feedback, it can accurately represent user sentiment on the videos. Experiments conducted using empirical data collected from YouTube show the effectiveness of our proposed methodologies.
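
    As a rough illustration of comment-based matrix factorization (not the authors' pipeline), the sketch below builds a tiny video-by-term matrix from invented comment text and factorizes it with non-negative matrix factorization; the comment strings and parameter choices are placeholders:

    ```python
    # Minimal sketch: factorize a video-by-term matrix built from user comments
    # to surface latent "concepts". Comments here are toy data, not YouTube data.
    from sklearn.decomposition import NMF
    from sklearn.feature_extraction.text import TfidfVectorizer

    comments_per_video = [
        "amazing goal football match highlights",
        "guitar cover acoustic song tutorial",
        "football world cup best goals",
        "learn guitar chords beginner lesson",
    ]
    tfidf = TfidfVectorizer(stop_words="english")
    X = tfidf.fit_transform(comments_per_video)     # rows: videos, columns: terms

    model = NMF(n_components=2, init="nndsvda", random_state=0)
    W = model.fit_transform(X)                      # video-to-concept weights
    H = model.components_                           # concept-to-term weights

    terms = tfidf.get_feature_names_out()
    for k, row in enumerate(H):
        top_terms = [terms[i] for i in row.argsort()[::-1][:3]]
        print(f"concept {k}: {top_terms}")
    ```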

  18. A method for the production of rheumatoid factor in rabbits

    PubMed Central

    Biro, C. E.

    1968-01-01

    Rabbits rendered immunologically unresponsive to native human IgG and then injected with a single large dose of heat-aggregated human IgG produce an antibody which resembles rheumatoid factor in all its properties that were tested. It is an exclusively IgM antibody which reacts with both human and rabbit (autologous) aggregated IgG, but not with either protein in the native state. PMID:5303049

  19. An automatic ordering method for incomplete factorization iterative solvers

    SciTech Connect

    Forsyth, P.A.; Tang, W.P. (Dept. of Computer Science); D'Azevedo, E.F.D.

    1991-01-01

    The minimum discarded fill (MDF) ordering strategy for incomplete factorization iterative solvers is developed. MDF ordering is demonstrated for several model non-symmetric problems, as well as a water-flooding simulation which uses an unstructured grid. The model problems show a three- to five-fold decrease in the number of iterations compared to natural orderings. A greater than twofold improvement was observed for the water-flooding simulation. 26 refs., 7 figs., 3 tabs.
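
    The minimum discarded fill ordering itself is not available in standard libraries, but the general point, that the ordering used when building an incomplete factorization changes its quality as a preconditioner, can be illustrated with SciPy's built-in orderings. The model problem and parameters below are assumptions for the sketch, not the paper's test cases:

    ```python
    # Contrast two orderings inside an incomplete LU preconditioner for GMRES.
    # The nonsymmetric model problem and all parameters are illustrative only;
    # SciPy does not implement the MDF ordering discussed in the abstract.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 50                                   # n x n grid
    N = n * n
    main = 4.0 * np.ones(N)
    east, west = -1.3 * np.ones(N - 1), -0.7 * np.ones(N - 1)
    north, south = -1.2 * np.ones(N - n), -0.8 * np.ones(N - n)
    A = sp.diags([main, east, west, north, south], [0, 1, -1, n, -n], format="csc")
    b = np.ones(N)

    for ordering in ("NATURAL", "COLAMD"):
        ilu = spla.spilu(A, drop_tol=1e-3, fill_factor=5, permc_spec=ordering)
        M = spla.LinearOperator(A.shape, matvec=ilu.solve)
        calls = []                            # count preconditioned GMRES iterations
        x, info = spla.gmres(A, b, M=M, callback=lambda res: calls.append(res))
        print(f"{ordering:8s}: converged={info == 0}, iterations={len(calls)}")
    ```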

  20. LV wall segmentation using the variational level set method (LSM) with additional shape constraint for oedema quantification.

    PubMed

    Kadir, K; Gao, H; Payne, A; Soraghan, J; Berry, C

    2012-10-01

    In this paper an automatic algorithm for left ventricle (LV) wall segmentation and oedema quantification from T2-weighted cardiac magnetic resonance (CMR) images is presented. The extent of myocardial oedema delineates the ischaemic area-at-risk (AAR) after myocardial infarction (MI). Since the AAR can be used to estimate the amount of salvageable myocardium post-MI, oedema imaging has potential clinical utility in the management of acute MI patients. This paper presents a new scheme based on the variational level set method (LSM) with an additional shape constraint for the segmentation of T2-weighted CMR images. In our approach, shape information of the myocardial wall is utilized to introduce a shape feature of the myocardial wall into the variational level set formulation. The performance of the method is tested using real CMR images (12 patients) and the results of the automatic system are compared to manual segmentation. The mean perpendicular distances between the automatic and manual LV wall boundaries are in the range of 1-2 mm. Bland-Altman analysis on LV wall area indicates there is no consistent bias as a function of LV wall area, with a mean bias of -121 mm(2) between individual investigator one (IV1) and LSM, and -122 mm(2) between individual investigator two (IV2) and LSM. Furthermore, the oedema quantification demonstrates good correlation when compared to an expert, with an average error of 9.3% for 69 slices of short axis CMR images from 12 patients.

  1. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    ERIC Educational Resources Information Center

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  2. Influence of physico-chemical factors on leaching of chemical additives from aluminium foils used for packaging of food materials.

    PubMed

    Ojha, Priyanka; Ojha, C S; Sharma, V P

    2007-01-01

    In recent years, the use of aluminium foils to wrap foodstuffs and commodities has increased to a great extent. Aluminium was found to leach out from the foil in different simulants, particularly in distilled water and in acidic and alkaline media, at 60 +/- 2 degrees C for 2 hours and 40 +/- 2 degrees C for 24 hours. The migration was found to be above the permissible limit laid down by WHO guidelines, that is, 0.2 mg/L of water. The protocol used for this study was based on the recommendation of the Bureau of Indian Standards regarding the migration of chemical additives from packaging materials used to pack food items. Migration of aluminium was found to be significantly higher in acidic and aqueous media than in alcoholic and saline media. Higher temperature conditions also enhanced the rate of migration of aluminium in acidic and aqueous media. Leaching of aluminium occurred in double distilled water, 3% acetic acid, normal saline and sodium carbonate, but not in 8% ethanol, in which aluminium migration was below the detection limit of the instrument, for the three brands of aluminium foil samples studied.

  3. Chloride ion addition for controlling shapes and properties of silver nanorods capped by polyvinyl alcohol synthesized using polyol method

    NASA Astrophysics Data System (ADS)

    Junaidi, Yunus, Muhammad; Triyana, Kuwat; Harsojo, Suharyadi, Edi

    2016-04-01

    We report our investigation on the effect of chloride ions on controlling the shapes and properties of silver nanorods (AgNRs) synthesized using a polyol method. In this study, we used polyvinyl alcohol (PVA) as a capping agent and sodium chloride (NaCl) as a salt precursor, and performed the synthesis at an oil bath temperature of 140°C. The chloride ions originating from the NaCl serve to control the growth of the silver nanorods. Furthermore, the synthesized silver nanorods were characterized using SEM and XRD. The results showed that, besides being able to control the growth of AgCl atoms, the chloride ions were also able to control the growth of multi-twinned particles into single crystalline silver nanorods of micrometer length. At an appropriate concentration of NaCl, the diameter of the silver nanorods decreased significantly compared to that obtained without chloride ion addition. This technique may be useful since particular applications in the future may require a particular diameter of silver nanorods.

  4. Chloride ion addition for controlling shapes and properties of silver nanorods capped by polyvinyl alcohol synthesized by polyol method

    NASA Astrophysics Data System (ADS)

    Junaidi, Triyana, Kuwat; Harsojo, Suharyadi, Edi

    2016-04-01

    We report our investigation on the effect of chloride ions on controlling the shapes and properties of silver nanorods (AgNRs) synthesized using a polyol method. In this study, we used polyvinyl alcohol (PVA) as a capping agent and sodium chloride (NaCl) as a salt precursor, and performed the synthesis at an oil bath temperature of 140 °C. The chloride ions originating from the NaCl serve to control the growth of the silver nanorods. Furthermore, the synthesized silver nanorods were characterized using UV-VIS, XRD, SEM and TEM. The results showed that, besides being able to control the growth of AgCl atoms, the chloride ions were also able to control the growth of multi-twinned particles into single crystalline silver nanorods of micrometer length. At an appropriate concentration of NaCl, the diameter of the silver nanorods decreased significantly compared to that obtained without chloride ion addition. This technique may be useful since particular applications in the future may require a particular diameter of silver nanorods.

  5. The severity of retinal pathology in homozygous Crb1rd8/rd8 mice is dependent on additional genetic factors.

    PubMed

    Luhmann, Ulrich F O; Carvalho, Livia S; Holthaus, Sophia-Martha Kleine; Cowing, Jill A; Greenaway, Simon; Chu, Colin J; Herrmann, Philipp; Smith, Alexander J; Munro, Peter M G; Potter, Paul; Bainbridge, James W B; Ali, Robin R

    2015-01-01

    Understanding phenotype-genotype correlations in retinal degeneration is a major challenge. Mutations in CRB1 lead to a spectrum of autosomal recessive retinal dystrophies with variable phenotypes suggesting the influence of modifying factors. To establish the contribution of the genetic background to phenotypic variability associated with the Crb1(rd8/rd8) mutation, we compared the retinal pathology of Crb1(rd8/rd8)/J inbred mice with that of two Crb1(rd8/rd8) lines backcrossed with C57BL/6JOlaHsd mice. Topical endoscopic fundal imaging and scanning laser ophthalmoscopy fundus images of all three Crb1(rd8/rd8) lines showed a significant increase in the number of inferior retinal lesions that was strikingly variable between the lines. Optical coherence tomography, semithin, ultrastructural morphology and assessment of inflammatory and vascular marker by immunohistochemistry and quantitative reverse transcriptase-polymerase chain reaction revealed that the lesions were associated with photoreceptor death, Müller and microglia activation and telangiectasia-like vascular remodelling-features that were stable in the inbred, variable in the second, but virtually absent in the third Crb1(rd8/rd8) line, even at 12 months of age. This suggests that the Crb1(rd8/rd8) mutation is necessary, but not sufficient for the development of these degenerative features. By whole-genome SNP analysis of the genotype-phenotype correlation, a candidate region on chromosome 15 was identified. This may carry one or more genetic modifiers for the manifestation of the retinal pathology associated with mutations in Crb1. This study also provides insight into the nature of the retinal vascular lesions that likely represent a clinical correlate for the formation of retinal telangiectasia or Coats-like vasculopathy in patients with CRB1 mutations that are thought to depend on such genetic modifiers.

  6. Evaluation of soybean lines and environmental stratification using the AMMI, GGE biplot, and factor analysis methods.

    PubMed

    Sousa, L B; Hamawaki, O T; Nogueira, A P O; Batista, R O; Oliveira, V M; Hamawaki, R L

    2015-01-01

    In the final phases of new soybean cultivar development, lines are cultivated in several locations across multiple seasons with the intention of identifying and selecting superior genotypes for quantitative traits. In this context, this study aimed to examine the genotype-by-environment interaction for grain yield (kg/ha), and to evaluate the adaptability and stability of early-cycle soybean genotypes using the additive main effects and multiplicative interaction (AMMI) analysis, genotype main effects and genotype x environment interaction (GGE) biplot, and factor analysis methods. Additionally, the efficiency of these methods was compared. The experiments were carried out in five cities in the State of Mato Grosso: Alto Taquari, Lucas do Rio Verde, Sinop, Querência, and Rondonópolis, in the 2011/2012 and 2012/2013 seasons. Twenty-seven early-cycle soybean genotypes were evaluated, consisting of 22 lines developed by the Universidade Federal de Uberlândia (UFU) soybean breeding program, and five controls: UFUS Carajás, MSOY 6101, MSOY 7211, UFUS Guarani, and Riqueza. Significant and complex genotype-by-environment interactions were observed. The AMMI model presented greater efficiency by retaining most of the variation in the first two main components (61.46%), followed by the GGE biplot model (57.90%), and factor analysis (54.12%). Environmental clustering among the methodologies was similar, and was composed of one environmental group from one location but from different seasons. Genotype G5 presented an elevated grain yield, and high adaptability and stability as determined by the AMMI, factor analysis, and GGE biplot methodologies. PMID:26505417
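
    For reference, the AMMI model mentioned above has the standard textbook form (this formulation is general, not a detail taken from the paper):

    $$ Y_{ge} \;=\; \mu + \alpha_{g} + \beta_{e} + \sum_{n=1}^{N} \lambda_{n}\, \gamma_{gn}\, \delta_{en} + \varepsilon_{ge}, $$

    where \( Y_{ge} \) is the mean yield of genotype \( g \) in environment \( e \), \( \mu \) the grand mean, \( \alpha_{g} \) and \( \beta_{e} \) the additive genotype and environment main effects, and the multiplicative terms come from a singular value decomposition of the interaction residuals (\( \lambda_{n} \) the \( n \)-th singular value, \( \gamma_{gn} \) and \( \delta_{en} \) the genotype and environment scores). The percentages quoted in the abstract correspond to the variation retained by the first two such multiplicative components.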

  7. The New Factor Structure of the Korean Version of the Difficulties in Emotion Regulation Scale (K-DERS) Incorporating Method Factor

    ERIC Educational Resources Information Center

    Cho, Yongrae; Hong, Sehee

    2013-01-01

    The factor structure of the Korean version of the Difficulties in Emotion Regulation Scale was examined. Rather than the six-factor model, the five-factor model with a method factor was supported. This result suggests that the AWARENESS and CLARITY factors can be combined into one construct, controlling for the method factor. (Contains 1 figure.)

  8. Physiological basis of tolerance to complete submergence in rice involves genetic factors in addition to the SUB1 gene.

    PubMed

    Singh, Sudhanshu; Mackill, David J; Ismail, Abdelbagi M

    2014-01-01

    1 lines. This suggests the possibility of further improvements in submergence tolerance by incorporating additional traits present in FR13A or other similar landraces. PMID:25281725

  9. New detection methods of growth hormone and growth factors.

    PubMed

    Bidlingmaier, Martin

    2012-01-01

    Human growth hormone (GH), but also GH related growth factors like the insulin-like growth factor-1 (IGF-1) are known to be abused in sports. Although the scientific evidence supporting a distinct effect of GH on performance in healthy trained subjects is limited, it has been repeatedly found with athletes or trainers, and the recent introduction of a first test to detect GH doping has led to a number of positive cases. Currently, there is no test for the detection of IGF-1 introduced worldwide, but confiscation of the drug from sports teams can be taken as indirect evidence for its abuse. The major biochemical difficulty for the detection of GH is that the recombinant form is identical in physicochemical properties to the endogenous GH secreted by the pituitary gland. Furthermore, the very short half-life of GH in circulation inherently shortens the window of opportunity where the drug can be detected. Two strategies have been followed for more than a decade to develop a test to detect the application of recombinant GH: the marker approach, which is based on the elevation of GH-dependent markers above the level seen under physiological conditions evoked by administration of recombinant GH, and the isoform approach, which is based on a change in the pattern of GH isoforms in circulation following the injection of recombinant GH.

  10. Post Processing Methods used to Improve Surface Finish of Products which are Manufactured by Additive Manufacturing Technologies: A Review

    NASA Astrophysics Data System (ADS)

    Kumbhar, N. N.; Mulay, A. V.

    2016-08-01

    The Additive Manufacturing (AM) processes open the possibility to go directly from Computer-Aided Design (CAD) to a physical prototype. These prototypes are used as test models before the design is finalized and, in some cases, as final products. Additive manufacturing has many advantages over the traditional processes used to develop a product, such as allowing early customer involvement in product development and complex shape generation, while also saving time and money. Additive manufacturing also poses some special challenges that are usually worth overcoming, such as poor surface quality, physical properties, and the need for specific raw materials. To improve surface quality, several attempts have been made by controlling various process parameters of additive manufacturing and also by applying different post-processing techniques to components manufactured by additive manufacturing. The main objective of this work is to document an extensive literature review in the general area of post-processing techniques used in additive manufacturing.

  11. WOODSTOVE EMISSION MEASUREMENT METHODS COMPARISON AND EMISSION FACTORS UPDATE

    EPA Science Inventory

    This paper compares various field and laboratory woodstove emission measurement methods. In 1988, the U.S. EPA promulgated performance standards for residential wood heaters (woodstoves). Over the past several years, a number of field studies have been undertaken to determine the a...

  12. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R826238)

    EPA Science Inventory

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard methods that we ...

  13. Phosphazene additives

    DOEpatents

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  14. Social and Demographic Factors Associated with Morbidities in Young Children in Egypt: A Bayesian Geo-Additive Semi-Parametric Multinomial Model

    PubMed Central

    Khatab, Khaled; Adegboye, Oyelola; Mohammed, Taofeeq Ibn

    2016-01-01

    Background Globally, the burden of mortality in children, especially in poor developing countries, is alarming and has precipitated concern and calls for concerted efforts in combating such health problems. Examples of diseases that contribute to this burden of mortality include diarrhoea, cough, fever, and the overlap between these illnesses, causing childhood morbidity and mortality. Methods To gain insight into these health issues, we employed the 2008 Demographic and Health Survey Data of Egypt, which recorded details from 10,872 children under five. This data focused on the demographic and socio-economic characteristics of household members. We applied a Bayesian multinomial model to assess the area-specific spatial effects and risk factors of co-morbidity of fever, diarrhoea and cough for children under the age of five. Results The results showed that children under 20 months of age were more likely to have the three diseases (OR: 6.8; 95% CI: 4.6–10.2) than children between 20 and 40 months (OR: 2.14; 95% CI: 1.38–3.3). In multivariate Bayesian geo-additive models, the children of mothers who were over 20 years of age were more likely to have only cough (OR: 1.2; 95% CI: 0.9–1.5) and only fever (OR: 1.2; 95% CI: 0.91–1.51) compared with their counterparts. Spatial results showed that the North-eastern region of Egypt has a higher incidence than most of other regions. Conclusions This study showed geographic patterns of Egyptian governorates in the combined prevalence of morbidity among Egyptian children. It is obvious that the Nile Delta, Upper Egypt, and south-eastern Egypt have high rates of diseases and are more affected. Therefore, more attention is needed in these areas. PMID:27442018

  15. Slip and Slide Method of Factoring Trinomials with Integer Coefficients over the Integers

    ERIC Educational Resources Information Center

    Donnell, William A.

    2012-01-01

    In intermediate and college algebra courses there are a number of methods for factoring quadratic trinomials with integer coefficients over the integers. Some of these methods have been given names, such as trial and error, reversing FOIL, AC method, middle term splitting method and slip and slide method. The purpose of this article is to discuss…
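
    Since the abstract is truncated, here is a brief worked example of the slip-and-slide technique as it is commonly presented (an assumed typical illustration, not necessarily the article's own example), applied to \( 6x^{2} + 7x + 2 \):

    $$ \begin{aligned} 6x^{2} + 7x + 2 \;&\longrightarrow\; x^{2} + 7x + 12 = (x+3)(x+4) && \text{(slip: } 2 \times 6 = 12\text{)} \\ &\longrightarrow\; \Big(x + \tfrac{3}{6}\Big)\Big(x + \tfrac{4}{6}\Big) = \Big(x + \tfrac{1}{2}\Big)\Big(x + \tfrac{2}{3}\Big) && \text{(slide: divide the constants by } 6\text{)} \\ &\longrightarrow\; (2x+1)(3x+2). && \text{(clear denominators)} \end{aligned} $$

    Expanding \( (2x+1)(3x+2) = 6x^{2} + 7x + 2 \) confirms the factorization.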

  16. Hypothesis Testing Using Factor Score Regression: A Comparison of Four Methods

    ERIC Educational Resources Information Center

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2016-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and…

  17. Advanced glycation end products, physico-chemical and sensory characteristics of cooked lamb loins affected by cooking method and addition of flavour precursors.

    PubMed

    Roldan, Mar; Loebner, Jürgen; Degen, Julia; Henle, Thomas; Antequera, Teresa; Ruiz-Carrascal, Jorge

    2015-02-01

    The influence of the addition of a flavour enhancer solution (FES) (d-glucose, d-ribose, l-cysteine and thiamin) and of sous-vide cooking or roasting on moisture, cooking loss, instrumental colour, sensory characteristics and formation of Maillard reaction (MR) compounds in lamb loins was studied. FES reduced cooking loss and increased water content in sous-vide samples. FES and cooking method showed a marked effect on browning development, both on the meat surface and within. FES led to tougher and chewier texture in sous-vide cooked lamb, and enhanced flavour scores of sous-vide samples more markedly than in roasted ones. FES added meat showed higher contents of furosine; 1,2-dicarbonyl compounds and 5-hydroxymethylfurfural did not reach detectable levels. N-ε-carboxymethyllysine amounts were rather low and not influenced by the studied factors. Cooked meat seems to be a minor dietary source of MR products, regardless the presence of reducing sugars and the cooking method. PMID:25172739

  19. A method for predicting DCT-based denoising efficiency for grayscale images corrupted by AWGN and additive spatially correlated noise

    NASA Astrophysics Data System (ADS)

    Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.

    2015-03-01

    Results of denoising based on discrete cosine transform for a wide class of images corrupted by additive noise are obtained. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. TID2013 image database and some additional images are taken as test images. Conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by PSNR and PSNR-HVS-M metrics. Within hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, denoising efficiency for them. Results of denoising efficiency are fitted for such statistics and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy of prediction of denoising efficiency.
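
    A minimal sketch of a conventional block-DCT hard-thresholding filter of the kind analyzed above is given below. It assumes additive white Gaussian noise with known standard deviation; the non-overlapping 8x8 blocks and the 2.7*sigma threshold are common choices rather than parameters taken from the paper, and the test image is synthetic:

    ```python
    # Block-DCT hard-thresholding denoiser (sketch, not the authors' code).
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_denoise(noisy, sigma, block=8, beta=2.7):
        out = np.zeros_like(noisy, dtype=float)
        h, w = noisy.shape
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                coeffs = dctn(noisy[i:i + block, j:j + block], norm="ortho")
                dc = coeffs[0, 0]                      # keep the DC term
                coeffs[np.abs(coeffs) < beta * sigma] = 0.0
                coeffs[0, 0] = dc
                out[i:i + block, j:j + block] = idctn(coeffs, norm="ortho")
        return out

    # Toy usage on a synthetic 256x256 gradient image corrupted by AWGN.
    rng = np.random.default_rng(1)
    clean = np.tile(np.linspace(0, 255, 256), (256, 1))
    sigma = 15.0
    noisy = clean + rng.normal(0, sigma, clean.shape)
    denoised = dct_denoise(noisy, sigma)
    psnr = lambda a, b: 10 * np.log10(255.0 ** 2 / np.mean((a - b) ** 2))
    print(f"PSNR noisy: {psnr(clean, noisy):.1f} dB, denoised: {psnr(clean, denoised):.1f} dB")
    ```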

  20. Birthweight Related Factors in Northwestern Iran: Using Quantile Regression Method

    PubMed Central

    Fallah, Ramazan; Kazemnejad, Anoshirvan; Zayeri, Farid; Shoghli, Alireza

    2016-01-01

    Introduction: Birthweight is one of the most important predictive indicators of health status in adulthood. Having a balanced birthweight is one of the priorities of the health system in most industrial and developed countries. This indicator is used to assess the growth and health status of infants. The aim of this study was to assess the birthweight of neonates by using quantile regression in Zanjan province. Methods: This analytical descriptive study was carried out using pre-registered (March 2010 - March 2012) data on neonates in urban/rural health centers of Zanjan province, using multiple-stage cluster sampling. Data were analyzed using multiple linear regression and the quantile regression method with SAS 9.2 statistical software. Results: Of 8456 newborn babies, 4146 (49%) were female. The mean age of the mothers was 27.1±5.4 years. The mean birthweight of the neonates was 3104 ± 431 grams. Five hundred and seventy-three (6.8%) of the neonates weighed less than 2500 grams. In all quantiles, gestational age of the neonates (p<0.05) and weight and educational level of the mothers (p<0.05) showed a significant linear relationship with the birthweight of the neonates. However, sex and birth rank of the neonates, mothers' age, place of residence (urban/rural) and career were not significant in all quantiles (p>0.05). Conclusion: This study revealed that the results of multiple linear regression and quantile regression were not identical. We strongly recommend the use of quantile regression when an asymmetric response variable or data with outliers are present. PMID:26925889
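
    The type of model used in the study can be sketched with statsmodels' quantile regression. The data below are simulated (the registry data are not reproduced here), and the variable names and coefficients are placeholders that merely mirror the covariates mentioned in the abstract:

    ```python
    # Quantile regression sketch on simulated birthweight data (placeholders only).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 500
    df = pd.DataFrame({
        "gest_age": rng.normal(39, 1.5, n),          # gestational age, weeks
        "mother_weight": rng.normal(68, 10, n),      # kg
    })
    df["birthweight"] = (-2500 + 140 * df.gest_age + 4 * df.mother_weight
                         + rng.normal(0, 350, n))    # grams, simulated

    for q in (0.1, 0.5, 0.9):
        fit = smf.quantreg("birthweight ~ gest_age + mother_weight", df).fit(q=q)
        print(f"quantile {q}:", fit.params.round(1).to_dict())
    ```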

  1. Paleomagnetic intensity of Aso pyroclastic flows: Additional results with LTD-DHT Shaw method, Thellier method with pTRM-tail check

    NASA Astrophysics Data System (ADS)

    Maruuchi, T.; Shibuya, H.

    2009-12-01

    In order to calibrate the absolute value of the 'relative paleointensity variation curve' drawn from sediment cores, Takai et al. (2002) proposed using pyroclastic flows co-erupted with widespread tephras. The pyroclastic flows provide volcanic rocks carrying a TRM, which allows absolute paleointensity determination, and the tephras provide the correlation with sediment stratigraphy. While 4 out of 6 pyroclastic flows are consistent with the Sint-800 paleointensity variation curve, two flows, Aso-2 and Aso-4, show intensities weaker and stronger than Sint-800 beyond the error, respectively. We revisited the paleointensity study of the Aso pyroclastic flows, adding the LTD-DHT Shaw method, the pTRM-tail check in the Thellier experiments, and the LTD-DHT Shaw method applied to volcanic glasses. We prepared 11 specimens from 3 sites of the Aso-1 welded tuff for LTD-DHT Shaw method experiments, and obtained 6 paleointensities that satisfied a set of strict criteria. They yield an average paleointensity of 21.3±5.8 uT, which is smaller than the 31.0±3.4 uT reported by Takai et al. (2002). For the Aso-2 welded tuff, 11 samples from 3 sites were submitted to Thellier experiments, and 6 passed a set of fairly stringent criteria including the pTRM-tail check, which was not performed by Takai et al. (2002). They give an average paleointensity of 20.2±1.5 uT, which is virtually identical to the 20.2±1.0 uT (27 samples) given by Takai et al. (2002). Although the success rate was not good in the LTD-DHT Shaw method, 2 out of 12 specimens passed the criteria and gave 25.8±3.4 uT, which is consistent with Takai et al. (2002). In addition, we obtained a reliable paleointensity of 23.6 uT from a volcanic glass with the LTD-DHT Shaw method, also consistent with Takai et al. (2002). For the Aso-3 welded tuff, we have so far performed only the LTD-DHT Shaw method on one specimen from one site. It gives a paleointensity of 43.0 uT, which is higher than the 31.8±3.6 uT given by Takai et al. (2002). Eight sites were set for the Aso-4 welded tuff

  2. Methods to assess factors that influence grass seed yield

    NASA Astrophysics Data System (ADS)

    Louhaichi, Mounir

    A greater than 10-fold increase in Canada goose (Branta canadensis ) populations over the past several years has resulted in concerns over grazing impacts on grass seed production in the mid-Willamette Valley, Oregon. This study was designed to develop methods to quantify and statistically analyze goose-grazing impacts on seed yields of tall fescue (Festuca arundinacea Schreb.) and perennial ryegrass (Lolium perenne L.). Yield-mapping-system equipped combines, incorporating global positioning system (GPS) technology, were used to measure and map yields. Image processing of ground-level photography to estimate crop cover and other relevant observations were spatially located via GPS to establish spatial-temporal goose grazing patterns. We sampled each field semi-monthly from mid-winter through spring. Spatially located yield data, soils information, exclosure locations, and grazing patterns were integrated via geographical information system (GIS) technology. To avoid concerns about autocorrelation, a bootstrapping procedure for subsampling spatially contiguous seed yield data was used to organize the data for appropriate use of analysis of variance. The procedure was used to evaluate grazing impacts on seed yield for areas of fields with different soils and with differential timing and intensity of goose grazing activity. We also used a standard paired-plot procedure, involving exclosures and associated plots available for grazing. The combination of spatially explicit photography and yield mapping, integrated with GIS, proved effective in establishing cause-and-effect relationships between goose grazing and seed yield differences. Exclosures were essential for providing nongrazed controls. Both statistical approaches were effective in documenting goose-grazing impacts. Paired-plots were restricted by small size and few numbers and did not capture grazing impacts as effectively as comparison of larger areas to exclosures. Bootstrapping to subsample larger areas of

  3. Evaluation of Parallel Analysis Methods for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.

    2010-01-01

    Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…
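
    A compact sketch of parallel analysis in its PA-PCA form is shown below (a generic implementation under stated assumptions, not the simulation code used in the article): components are retained while the observed eigenvalues of the correlation matrix exceed the chosen criterion (mean or 95th percentile) computed from random data of the same dimensions.

    ```python
    # Parallel analysis with PCA (PA-PCA), generic sketch on synthetic data.
    import numpy as np

    def parallel_analysis(data, n_sims=200, percentile=95, seed=0):
        rng = np.random.default_rng(seed)
        n, p = data.shape
        obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
        rand = np.empty((n_sims, p))
        for s in range(n_sims):
            r = rng.normal(size=(n, p))
            rand[s] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
        threshold = np.percentile(rand, percentile, axis=0)
        return int(np.sum(obs > threshold)), obs, threshold

    # Toy data generated from two underlying factors plus noise.
    rng = np.random.default_rng(3)
    scores = rng.normal(size=(300, 2))
    loadings = rng.normal(size=(2, 8))
    X = scores @ loadings + 0.8 * rng.normal(size=(300, 8))
    k, obs, thr = parallel_analysis(X)
    print("suggested number of components:", k)
    ```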

  4. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
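
    As a rough illustration of the impulse response decay method, the loss factor can be estimated from the decay rate of the band-filtered impulse response envelope using the standard relation eta = DR / (27.3 * f), where DR is the decay rate in dB/s and f is the band center frequency in Hz. The sketch below uses a synthetic single-mode response and numpy/scipy only; it is not the automated procedure of the paper.

```python
import numpy as np
from scipy.signal import hilbert

fs, f0, eta_true = 10000, 500.0, 0.02
t = np.arange(0, 1.0, 1 / fs)
# Synthetic single-mode impulse response: exponential decay set by the loss factor.
h = np.exp(-np.pi * f0 * eta_true * t) * np.sin(2 * np.pi * f0 * t)

# Decay rate (dB/s) from a linear fit to the log envelope.
env_db = 20 * np.log10(np.abs(hilbert(h)) + 1e-12)
mask = (t > 0.02) & (t < 0.5)          # avoid edge effects of the Hilbert envelope
DR = -np.polyfit(t[mask], env_db[mask], 1)[0]

eta_est = DR / (27.3 * f0)
print(eta_est)   # ~0.02
```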

  5. Analysis of Air Toxics From NOAA WP-3 Aircraft Measurements During the TexAQS 2006 Campaign: Comparison With Emission Inventories and Additive Inhalation Risk Factors

    NASA Astrophysics Data System (ADS)

    Del Negro, L. A.; Warneke, C.; de Gouw, J. A.; Atlas, E.; Lueb, R.; Zhu, X.; Pope, L.; Schauffler, S.; Hendershot, R.; Washenfelder, R.; Fried, A.; Richter, D.; Walega, J. G.; Weibring, P.

    2007-12-01

    Benzene and nine other air toxics classified as human carcinogens by the International Agency for Research on Cancer (IARC) were measured from the NOAA WP-3 aircraft during the TexAQS 2006 campaign. In-situ measurements of benzene, measured with a PTR-MS instrument, are used to estimate emission fluxes for comparison with point source emission inventories developed by the Texas Commission on Environmental Quality. Mean and median mixing ratios for benzene, acetaldehyde, formaldehyde, 1,3-butadiene, carbon tetrachloride, chloroform, 1,2-dichloroethane, dibromoethane, dichloromethane, and vinyl chloride, encountered over the city of Houston during the campaign, are combined with inhalation unit risk factor values developed by the California Environmental Protection Agency and the United States Environmental Protection Agency to estimate the additive inhalation risk factor. This additive risk factor represents the risk associated with lifetime (70 year) exposure at the levels measured and should not be used as an absolute indicator of risk to individuals. However, the results are useful for assessments of changing relative risk over time, and for identifying dominant contributions to the overall air toxic risk.
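
    The additive risk calculation described above is a simple sum of species concentrations multiplied by their inhalation unit risk factors. The sketch below uses purely illustrative placeholder numbers, not the campaign's measurements or the agencies' published unit risk values.

```python
# Additive lifetime inhalation cancer risk: sum over species of
# (mean concentration, ug/m3) x (inhalation unit risk, per ug/m3).
# All numbers below are placeholders for illustration only.
measurements_ugm3 = {"benzene": 1.2, "1,3-butadiene": 0.15, "formaldehyde": 2.0}
unit_risk_per_ugm3 = {"benzene": 2.9e-5, "1,3-butadiene": 1.7e-4, "formaldehyde": 6.0e-6}

risk_by_species = {s: measurements_ugm3[s] * unit_risk_per_ugm3[s] for s in measurements_ugm3}
additive_risk = sum(risk_by_species.values())

for s, r in sorted(risk_by_species.items(), key=lambda kv: -kv[1]):
    print(f"{s:15s} {r:.2e}")
print(f"{'additive risk':15s} {additive_risk:.2e}")
```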

  6. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R825173)

    EPA Science Inventory

    Abstract

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard...

  7. A novel ion-pairing chromatographic method for the simultaneous determination of both nicarbazin components in feed additives: chemometric tools for improving the optimization and validation.

    PubMed

    De Zan, María M; Teglia, Carla M; Robles, Juan C; Goicoechea, Héctor C

    2011-07-15

    The development, optimization and validation of an ion-pairing high performance liquid chromatography method for the simultaneous determination of both nicarbazin (NIC) components: 4,4'-dinitrocarbanilide (DNC) and 2-hydroxy-4,6-dimethylpyrimidine (HDP) in bulk materials and feed additives are described. An experimental design was used for the optimization of the chromatographic system. Four variables, including mobile phase composition and oven temperature, were analyzed through a central composite design exploring their contribution to analyte separation. Five responses: peak resolutions, HDP capacity factor, HDP tailing and analysis time, were modelled by using the response surface methodology and were optimized simultaneously by implementing the desirability function. The optimum conditions resulted in a mobile phase consisting of 10.0 mmol L(-1) of 1-heptanesulfonate, 20.0 mmol L(-1) of sodium acetate, pH=3.30 buffer and acetonitrile in a gradient system at a flow rate of 1.00 mL min(-1). The column was an Inertsil ODS-3 (4.6 mm×150 mm, 5 μm particle size) at 40.0°C. Detection was performed at 300 nm by a diode array detector. The validation results of the method indicated a high selectivity and good precision characteristics, with RSD less than 1.0% for both components, both in intra and inter-assay precision studies. Linearity was proved for a range of 32.0-50.0 μg mL(-1) of NIC in sample solution. The recovery, studied at three different fortification levels, varied from 98.0% to 101.4% for HDP and from 99.1% to 100.2% for DNC. The applicability of the method was demonstrated by determining DNC and HDP content in raw materials and commercial formulations used for coccidiosis prevention. Assay results on real samples showed that considerable differences in the molecular ratio DNC:HDP exist among them.
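
    The multi-response optimization step mentioned above combines individual desirabilities into one objective. The snippet below is a generic sketch of Derringer-type desirability functions and their geometric-mean combination, not the exact transformations used in the paper; targets, weights and response values are hypothetical.

```python
import numpy as np

def d_larger_is_better(y, lo, hi, s=1.0):
    """Desirability for responses to maximize (e.g. peak resolution)."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** s

def d_smaller_is_better(y, lo, hi, s=1.0):
    """Desirability for responses to minimize (e.g. analysis time, tailing)."""
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0) ** s

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, dtype=float)
    return float(np.prod(ds) ** (1.0 / len(ds)))

# Hypothetical responses predicted at one candidate chromatographic condition.
d = [
    d_larger_is_better(2.1, lo=1.5, hi=3.0),    # resolution
    d_smaller_is_better(1.3, lo=1.0, hi=2.0),   # tailing factor
    d_smaller_is_better(9.0, lo=6.0, hi=15.0),  # run time (min)
]
print(overall_desirability(d))
```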

  8. An Effective Method to Accurately Calculate the Phase Space Factors for β - β - Decay

    DOE PAGES

    Neacsu, Andrei; Horoi, Mihai

    2016-01-01

    Accurate calculations of the electron phase space factors are necessary for reliable predictions of double-beta decay rates and for the analysis of the associated electron angular and energy distributions. We present an effective method to calculate these phase space factors that takes into account the distorted Coulomb field of the daughter nucleus, yet it allows one to easily calculate the phase space factors with good accuracy relative to the most exact methods available in the recent literature.

  9. Testing for Additivity in Chemical Mixtures Using a Fixed-Ratio Ray Design and Statistical Equivalence Testing Methods

    EPA Science Inventory

    Fixed-ratio ray designs have been used for detecting and characterizing interactions of large numbers of chemicals in combination. Single chemical dose-response data are used to predict an “additivity curve” along an environmentally relevant ray. A “mixture curve” is estimated fr...

  10. [A factor analysis method for contingency table data with unlimited multiple choice questions].

    PubMed

    Toyoda, Hideki; Haiden, Reina; Kubo, Saori; Ikehara, Kazuya; Isobe, Yurie

    2016-02-01

    The purpose of this study is to propose a method of factor analysis for analyzing contingency tables developed from the data of unlimited multiple-choice questions. This method assumes that the element of each cell of the contingency table follows a binomial distribution, and a factor analysis model is applied to the logit of the selection probability. A scree plot and WAIC are used to decide the number of factors, and the standardized residual, i.e. the standardized difference between the sample proportion and the model-implied proportion, is used to select items. The proposed method was applied to real product impression research data on advertised chips and energy drinks. The results of the analysis showed that this method can be used in conjunction with the conventional factor analysis model and that the extracted factors were fully interpretable, suggesting the usefulness of the proposed method in psychological studies using unlimited multiple-choice questions.

  12. Slip and slide method of factoring trinomials with integer coefficients over the integers

    NASA Astrophysics Data System (ADS)

    Donnell, William A.

    2012-06-01

    In intermediate and college algebra courses there are a number of methods for factoring quadratic trinomials with integer coefficients over the integers. Some of these methods have been given names, such as trial and error, reversing FOIL, AC method, middle term splitting method and slip and slide method. The purpose of this article is to discuss the Slip and Slide Method and present a theoretical justification of why it works.
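
    As a concrete illustration of the technique discussed (a sketch based on the usual description of the method, not text from the article), the quadratic 6x^2 + 11x + 4 is first "slipped" to x^2 + 11x + 24 = (x + 3)(x + 8); each constant is then divided by 6 and the fractions reduced, giving (x + 1/2)(x + 4/3); finally the denominators are cleared to obtain (2x + 1)(3x + 4). The small Python routine below mechanizes these steps, assuming a positive leading coefficient and a trinomial that factors over the integers.

```python
from math import gcd

def slip_and_slide(a, b, c):
    """Factor a*x^2 + b*x + c over the integers via the slip and slide method.
    Assumes a > 0 and c != 0. Returns ((p, q), (r, s)) meaning (p*x + q)(r*x + s),
    or None if no integer factorization exists."""
    ac = a * c
    # "Slip": factor x^2 + b*x + a*c by finding m, n with m*n = a*c and m + n = b.
    for m in range(-abs(ac), abs(ac) + 1):
        if m != 0 and ac % m == 0 and m + ac // m == b:
            n = ac // m
            break
    else:
        return None
    # "Slide": rewrite as (x + m/a)(x + n/a), reduce the fractions, clear denominators.
    g1, g2 = gcd(abs(m), a), gcd(abs(n), a)
    return (a // g1, m // g1), (a // g2, n // g2)

print(slip_and_slide(6, 11, 4))   # ((2, 1), (3, 4))  ->  (2x + 1)(3x + 4)
```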

  13. Transmission of alien chromosomes from selfed progenies of a complete set of Allium monosomic additions: the development of a reliable method for the maintenance of a monosomic addition set.

    PubMed

    Shigyo, M; Wako, T; Kojima, A; Yamauchi, N; Tashiro, Y

    2003-12-01

    Selfed progeny of a complete set of Allium fistulosum - Allium cepa monosomic addition lines (2n = 2x + 1 = 17, FF+1A-FF+8A) were produced to examine the transmission rates of respective alien chromosomes. All eight types of the selfed monosomic additions set germinable seeds. The numbers of chromosomes (2n) in the seedlings were 16, 17, or 18. The eight extra chromosomes varied in transmission rate (%) from 9 (FF+2A) to 49 (FF+8A). The complete set of monosomic additions was reproduced successfully by self-pollination. A reliable way to maintain a set of Allium monosomic additions was developed using a combination of two crossing methods, selfing and female transmission. FF+8A produced two seedlings with 18 chromosomes. Cytogenetical analyses, including GISH, showed that the seedlings were disomic addition plants carrying two entire homologous chromosomes from A. cepa in an integral diploid background of A. fistulosum. Flow cytometry analysis showed that a double dose of the alien 8A chromosome caused an increase in the fluorescence intensity values reflecting DNA content, and isozyme analysis showed increased glutamate dehydrogenase activity at the gene locus Gdh-1.

  15. Quantification method analysis of the relationship between occupant injury and environmental factors in traffic accidents.

    PubMed

    Ju, Yong Han; Sohn, So Young

    2011-01-01

    Injury analysis following a vehicle crash is one of the most important research areas. However, most injury analyses have focused on one-dimensional injury variables, such as the AIS (Abbreviated Injury Scale) or the IIS (Injury Impairment Scale), considered one at a time in relation to various traffic accident factors. Such analyses cannot reflect the various injury phenomena that appear simultaneously. In this paper, we apply quantification method II to the NASS (National Automotive Sampling System) CDS (Crashworthiness Data System) to find the relationship between the categorical injury phenomena, such as the injury scale, injury position, and injury type, and the various traffic accident condition factors, such as speed, collision direction, vehicle type, and seat position. Our empirical analysis indicated the importance of safety devices, such as restraint equipment and airbags. In addition, we found that narrow impact, ejection, air bag deployment, and higher speed are associated with severe rather than minor injuries to the thigh, ankle, and leg, in terms of dislocation, abrasion, or laceration. PMID:21094332

  16. A direct method for calculating thermodynamic factors for liquid mixtures using the Permuted Widom test particle insertion method

    NASA Astrophysics Data System (ADS)

    Prasaad Balaji, Sayee; Schnell, Sondre K.; McGarrity, Erin S.; Vlugt, Thijs J. H.

    2013-01-01

    Understanding mass transport in liquids by mutual diffusion is an important topic for many applications in chemical engineering. The reason for this is that diffusion is often the rate limiting step in chemical reactors and separators. In multicomponent liquid mixtures, transport diffusion can be described by both generalized Fick's law and the Maxwell-Stefan theory. The Maxwell-Stefan and Fick approaches in an n-component system are related by the so-called thermodynamic factor [R. Taylor and H.A. Kooijman, Chem. Eng. Commun, 102, 87 (1991)]. As Fick diffusivities can be measured in experiments and Maxwell-Stefan diffusivities can be obtained from molecular simulations/theory, the thermodynamic factors bridge the gap between experiments and molecular simulations/theory. It is therefore desirable to be able to compute thermodynamic factors from molecular simulations. Unfortunately, presently used simulation techniques for computing thermodynamic factors are inefficient and often require numerical differentiation of simulation results. In this work, we propose a modified version of the Widom test-particle method to compute thermodynamic factors from a single simulation. This method is found to be more efficient than the conventional Widom test particle insertion method combined with numerical differentiation of simulation results. The approach is tested for binary systems consisting of Lennard-Jones particles. The thermodynamic factors computed from the simulation and from numerically differentiating the activity coefficients obtained from the conventional Widom test particle insertion method are in excellent agreement.
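
    For a binary mixture the thermodynamic factor is Gamma = 1 + x1*(d ln gamma1 / d x1) at constant temperature and pressure, which is the quantity that links Fick and Maxwell-Stefan diffusivities (D_Fick = Gamma * D_MS). The snippet below is an independent illustration, not the permuted Widom method of the paper: it evaluates Gamma for a simple one-parameter Margules activity-coefficient model both analytically and by numerical differentiation; the parameter value is invented.

```python
A = 0.8          # hypothetical Margules parameter (ln gamma1 = A*x2^2, ln gamma2 = A*x1^2)
x1 = 0.4
x2 = 1.0 - x1

def ln_gamma1(x1):
    return A * (1.0 - x1) ** 2

# Thermodynamic factor Gamma = 1 + x1 * d(ln gamma1)/dx1 at constant T, p.
h = 1e-6
dlngamma_dx1 = (ln_gamma1(x1 + h) - ln_gamma1(x1 - h)) / (2 * h)
gamma_numeric = 1.0 + x1 * dlngamma_dx1

gamma_analytic = 1.0 - 2.0 * A * x1 * x2   # closed form for this model
print(gamma_numeric, gamma_analytic)       # both ~ 0.616
```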

  17. Outpatient Management of Postbiopsy Pneumothorax with Small-Caliber Chest Tubes: Factors Affecting the Need for Prolonged Drainage and Additional Interventions

    SciTech Connect

    Gupta, Sanjay Hicks, Marshall E.; Wallace, Michael J.; Ahrar, Kamran; Madoff, David C.; Murthy, Ravi

    2008-03-15

    The aim of this study was to evaluate the efficacy of outpatient management of postbiopsy pneumothoraces with small-caliber chest tubes and to assess the factors that influence the need for prolonged drainage or additional interventions.We evaluated the medical records of patients who were treated with small-caliber chest tubes attached to Heimlich valves for pneumothoraces resulting from image-guided transthoracic needle biopsy to determine the hospital admission rates, the number of days the catheters were left in place, and the need for further interventions. We also evaluated the patient, lesion, and biopsy technique characteristics to determine their influence on the need for prolonged catheter drainage or additional interventions. Of the 191 patients included in our study, 178 (93.2%) were treated as outpatients. Ten patients (5.2%) were admitted for chest tube-related problems, either for underwater suction (n = 8) or for pain control (n = 2). No further interventions were required in 146 patients (76.4%), with successful removal of the chest tubes the day after the biopsy procedure. Prolonged catheter drainage (mean, 4.3 days) was required in 44 patients (23%). Nineteen patients (9.9%) underwent additional interventions for management of pneumothorax. Presence of emphysema was noted more frequently in patients who required additional interventions or prolonged chest tube drainage than in those who did not (51.1% vs. 24.7%; p = 0.001).We conclude that use of the Heimlich valve allows safe and successful outpatient treatment of most patients requiring chest tube placement for postbiopsy pneumothorax. Additional interventions or prolonged chest tube drainage are needed more frequently in patients with emphysema in the needle path.

  18. The Origins of the SPAR-H Method's Performance Shaping Factor Multipliers

    SciTech Connect

    Ronald L. Boring; Harold S. Blackman

    2007-08-01

    The Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method has proved to be a reliable, easy-to-use method for human reliability analysis. Calculation of human error probability (HEP) rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action oriented tasks, and incorporating performance shaping factor (PSF) multipliers upon those nominal error rates. SPAR-H uses eight PSFs with multipliers typically corresponding to nominal, degraded, and severely degraded human performance for individual PSFs. Additionally, some PSFs feature multipliers to reflect enhanced performance. Although SPAR-H enjoys widespread use among industry and regulators, current source documents on SPAR-H such as NUREG/CR-6883 do not provide a clear account of the origin of these multipliers. The present paper redresses this shortcoming and documents the historic development of the SPAR-H PSF multipliers, from the initial use of nominal error rates, to the selection of the eight PSFs, to the mapping of multipliers to available data sources such as a Technique for Human Error Rate Prediction (THERP). Where error rates were not readily derived from THERP and other sources, expert judgment was used to extrapolate appropriate values. In documenting key background information on the multipliers, this paper provides a much needed cross-reference for human reliability practitioners and researchers of SPAR-H to validate analyses and research findings.
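
    The basic SPAR-H quantification step described above is a multiplication of a nominal error rate by the PSF multipliers, with an adjustment applied when several PSFs are negative so that the HEP cannot exceed 1. The sketch below follows the commonly documented form of that calculation (as presented in source documents such as NUREG/CR-6883); the multiplier values in the example are placeholders, not an actual analysis.

```python
def spar_h_hep(nominal_hep, psf_multipliers):
    """SPAR-H style HEP: nominal rate times the product of PSF multipliers.
    When three or more PSFs are negative (multiplier > 1), the adjustment
    HEP = NHEP*PSFc / (NHEP*(PSFc - 1) + 1) keeps the result below 1."""
    psf_composite = 1.0
    for m in psf_multipliers:
        psf_composite *= m
    negative = sum(1 for m in psf_multipliers if m > 1)
    if negative >= 3:
        return nominal_hep * psf_composite / (nominal_hep * (psf_composite - 1) + 1)
    return min(nominal_hep * psf_composite, 1.0)

# Illustrative diagnosis task (nominal HEP 0.01) with hypothetical PSF ratings:
# available time nominal (1), high stress (2), complex task (2), poor ergonomics (10).
print(spar_h_hep(0.01, [1, 2, 2, 10]))
```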

  19. [Identification of the main risk factors for non infectious diseases: method of classification trees].

    PubMed

    Konstantinova, E D; Varaksin, A N; Zhovner, I V

    2013-01-01

    This paper presents the rationale for applying one of the methods for assessing the multi-factor influence of risk factors on population health: the method of classification trees. The method of classification trees is a hierarchical procedure for constructing a decision rule that divides the population into groups with higher and lower morbidity "in the coordinates of" risk factors. The main advantage of the method is the possibility of finding the complex of risk factors having the greatest impact on the health of the population (in contrast to common methods, which analyze only single-factor effects). The paper presents two possible variants of the application of classification trees: 1) finding the complex of environmental risk factors (RF) that has the maximum impact on the prevalence of non-infectious diseases in preschool children in Yekaterinburg (environmental risk factors being air and drinking water pollution, the presence of a gas stove in the child's flat, etc.). It is shown that, together with socio-economic risk factors, environmental risk factors increase the prevalence of respiratory diseases in preschool children in Yekaterinburg by a factor of 2.5-4 (depending on the list and the number of environmental RF); 2) finding the complex of non-environmental factors that most effectively compensates for the negative effect of environmental pollution on human health. This formulation of the problem reflects the fact that environmental pollution factors are (usually) non-modifiable, while family, behavioral or social factors can be partially or completely eliminated. Implementation of the recommendations presented in the paper can reduce the incidence of circulatory diseases in preschool children in Yekaterinburg by more than a factor of 2.
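
    As a generic illustration of the classification-tree idea (not the study's data or software), the snippet below fits a shallow decision tree to synthetic binary risk-factor data and prints the learned splitting rules; the variable names, prevalences and effect sizes are invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2000
# Hypothetical binary risk factors (1 = exposed).
air_pollution = rng.integers(0, 2, n)
gas_stove = rng.integers(0, 2, n)
low_income = rng.integers(0, 2, n)

# Synthetic morbidity: baseline 10%, raised when several factors co-occur.
p = 0.10 + 0.15 * air_pollution * gas_stove + 0.10 * low_income
sick = rng.random(n) < p

X = np.column_stack([air_pollution, gas_stove, low_income])
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=100).fit(X, sick)
print(export_text(tree, feature_names=["air_pollution", "gas_stove", "low_income"]))
```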

  20. Empirical Assessment of Spatial Prediction Methods for Location Cost Adjustment Factors

    PubMed Central

    Migliaccio, Giovanni C.; Guindani, Michele; D'Incognito, Maria; Zhang, Linlin

    2014-01-01

    In the feasibility stage, the correct prediction of construction costs ensures that budget requirements are met from the start of a project's lifecycle. A very common approach for performing quick order-of-magnitude estimates is based on using Location Cost Adjustment Factors (LCAFs) that compute historically based costs by project location. Nowadays, numerous LCAF datasets are commercially available in North America, but, obviously, they do not include all locations. Hence, LCAFs for un-sampled locations need to be inferred through spatial interpolation or prediction methods. Currently, practitioners tend to select the value for a location using only one variable, namely the nearest linear distance between two sites. However, construction costs could be affected by socio-economic variables, as suggested by macroeconomic theories. Using a commonly used set of LCAFs, the City Cost Indexes (CCI) by RSMeans, and the socio-economic variables included in the ESRI Community Sourcebook, this article provides several contributions to the body of knowledge. First, the accuracy of various spatial prediction methods in estimating LCAF values for un-sampled locations was evaluated and assessed with respect to spatial interpolation methods. Two regression-based prediction models were selected: a global regression analysis and a geographically weighted regression analysis (GWR). Once these models were compared against interpolation methods, the results showed that GWR is the most appropriate way to model CCI as a function of multiple covariates. The outcome of GWR, for each covariate, was studied for all the 48 states in the contiguous US. As a direct consequence of spatial non-stationarity, it was possible to discuss the influence of each single covariate differently from state to state. In addition, the article includes a first attempt to determine if the observed variability in cost index values could be, at least partially, explained by independent socio-economic variables. PMID
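
    Geographically weighted regression can be sketched as a locally weighted least-squares fit at each target location, with weights decaying with distance (here a Gaussian kernel). The snippet below is a bare-bones illustration with made-up coordinates, covariates and bandwidth, not the RSMeans/ESRI analysis of the paper; production work would typically use a dedicated GWR package.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
coords = rng.uniform(0, 100, size=(n, 2))           # site locations (arbitrary units)
income = rng.normal(50, 10, n)                      # hypothetical covariate
density = rng.normal(1000, 200, n)                  # hypothetical covariate
cci = 80 + 0.4 * income + 0.01 * density + rng.normal(0, 2, n)  # synthetic cost index

X = np.column_stack([np.ones(n), income, density])

def gwr_coefficients(target_xy, bandwidth=20.0):
    """Weighted least squares at one location with a Gaussian distance kernel."""
    d = np.linalg.norm(coords - target_xy, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ cci)
    return beta  # local intercept and covariate effects

print(gwr_coefficients(np.array([25.0, 75.0])))
```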

  1. Transcriptional Regulation of Zein Gene Expression in Maize through the Additive and Synergistic Action of opaque2, Prolamine-Box Binding Factor, and O2 Heterodimerizing Proteins

    PubMed Central

    Zhang, Zhiyong; Yang, Jun; Wu, Yongrui

    2015-01-01

    Maize (Zea mays) zeins are some of the most abundant cereal seed storage proteins (SSPs). Their abundance influences kernel hardness but compromises its nutritional quality. Transcription factors regulating the expression of zein and other SSP genes in cereals are endosperm-specific and homologs of maize opaque2 (O2) and prolamine-box binding factor (PBF). This study demonstrates that the ubiquitously expressed transcription factors, O2 heterodimerizing proteins (OHPs), specifically regulate 27-kD γ-zein gene expression (through binding to an O2-like box in its promoter) and interact with PBF. The zein content of double mutants OhpRNAi;o2 and PbfRNAi;o2 and the triple mutant PbfRNAi;OhpRNAi;o2 is reduced by 83, 89, and 90%, respectively, compared with the wild type. The triple mutant developed the smallest zein protein bodies, which were merely one-tenth the wild type’s size. Total protein levels in these mutants were maintained in a relatively constant range through proteome rebalancing. These data show that OHPs, O2, and PBF are master regulators of zein storage protein synthesis, acting in an additive and synergistic mode. The differential expression patterns of OHP and O2 genes may cause the slight differences in the timing of 27-kD γ-zein and 22-kD α-zein accumulation during protein body formation. PMID:25901087

  2. The Role of Laser Additive Manufacturing Methods of Metals in Repair, Refurbishment and Remanufacturing - Enabling Circular Economy

    NASA Astrophysics Data System (ADS)

    Leino, Maija; Pekkarinen, Joonas; Soukka, Risto

    Circular economy is an economic model in which products, components, and materials are kept at their highest utility and value at all times. Repair, refurbishment and remanufacturing processes are procedures aimed at restoring the value of a product during its life cycle. Additive manufacturing (AM) is expected to be an enabling technology in circular-economy-based business models. One AM process that enables repair, refurbishment and remanufacturing is Directed Energy Deposition; Powder Bed Fusion, in turn, enables manufacturing of replacement components on demand. The aim of this study is to identify the current research findings and state of the art of utilizing AM in repair, refurbishment and remanufacturing processes of metallic products. The focus is on identifying the possibilities of AM in promoting circular economy and the expected environmental benefits based on the literature found. Results of the study indicate significant potential in utilizing AM in repair, refurbishment and remanufacturing activities.

  3. Method of reduction of zeroth order intensity in computer generated holograms by use of phase addition technique

    NASA Astrophysics Data System (ADS)

    Wong, D. W. K.; Chen, G.

    2007-02-01

    Diffractive optical elements are commonly used to produce a regular array of spots or an arbitrary pattern from a single coherent source. A challenge in the use of diffractive elements is the zeroth order in the reconstructed image. An analysis of the zeroth order attributed to fabrication limitations is performed via simulation and the sensitivity of the zeroth order intensity to surface relief height is determined. Two methods are proposed to reduce the zeroth order by introducing a rectangular phase aperture to compensate for the zeroth order complex amplitude, and a checkerboard phase plate to decouple the zeroth order intensity from the central zeroth order and redistribute the energy away from the reconstructed image. The second method is found to be favourable in suppressing the zeroth order and a subsequent analysis is performed to determine the tolerance of the technique to fabrication accuracies.

  4. Quantitative EDXS analysis of organic materials using the ζ-factor method.

    PubMed

    Fladischer, Stefanie; Grogger, Werner

    2014-01-01

    In this study we successfully applied the ζ-factor method to perform quantitative X-ray analysis of organic thin films consisting of light elements. With its ability to intrinsically correct for X-ray absorption, this method significantly improved the quality of the quantification as well as the accuracy of the results compared to conventional techniques in particular regarding the quantification of light elements. We describe in detail the process of determining sensitivity factors (ζ-factors) using a single standard specimen and the involved parameter optimization for the estimation of ζ-factors for elements not contained in the standard. The ζ-factor method was then applied to perform quantitative analysis of organic semiconducting materials frequently used in organic electronics. Finally, the results were verified and discussed concerning validity and accuracy. PMID:24012932
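
    In the ζ-factor formalism, the measured characteristic X-ray intensities I_i are converted to mass fractions via C_i = ζ_i·I_i / Σ_j ζ_j·I_j, with an additional absorption-correction loop in the full method. The snippet below shows only this basic normalization step with invented ζ-factors and intensities; it is a sketch, not the authors' implementation.

```python
def zeta_composition(intensities, zeta):
    """Mass fractions from characteristic X-ray intensities using zeta-factors:
    C_i = zeta_i * I_i / sum_j(zeta_j * I_j).  Absorption correction omitted."""
    weighted = {el: zeta[el] * I for el, I in intensities.items()}
    total = sum(weighted.values())
    return {el: w / total for el, w in weighted.items()}

# Hypothetical net counts and zeta-factors (arbitrary but mutually consistent units).
intensities = {"C": 5.2e4, "N": 8.1e3, "O": 1.3e4}
zeta = {"C": 1.0, "N": 1.4, "O": 1.8}
print(zeta_composition(intensities, zeta))
```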

  5. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  6. Optical factors determined by the T-matrix method in turbidity measurement of absolute coagulation rate constants.

    PubMed

    Xu, Shenghua; Liu, Jie; Sun, Zhiwei

    2006-12-01

    Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.

  7. Additive influence of genetic predisposition and conventional risk factors in the incidence of coronary heart disease: a population-based study in Greece

    PubMed Central

    Yiannakouris, Nikos; Katsoulis, Michail; Trichopoulou, Antonia; Ordovas, Jose M; Trichopoulos, Dimitrios

    2014-01-01

    Objectives An additive genetic risk score (GRS) for coronary heart disease (CHD) has previously been associated with incident CHD in the population-based Greek European Prospective Investigation into Cancer and nutrition (EPIC) cohort. In this study, we explore GRS-‘environment’ joint actions on CHD for several conventional cardiovascular risk factors (ConvRFs), including smoking, hypertension, type-2 diabetes mellitus (T2DM), body mass index (BMI), physical activity and adherence to the Mediterranean diet. Design A case–control study. Setting The general Greek population of the EPIC study. Participants and outcome measures 477 patients with medically confirmed incident CHD and 1271 controls participated in this study. We estimated the ORs for CHD by dividing participants at higher or lower GRS and, alternatively, at higher or lower ConvRF, and calculated the relative excess risk due to interaction (RERI) as a measure of deviation from additivity. Results The joint presence of higher GRS and higher risk ConvRF was in all instances associated with an increased risk of CHD, compared with the joint presence of lower GRS and lower risk ConvRF. The OR (95% CI) was 1.7 (1.2 to 2.4) for smoking, 2.7 (1.9 to 3.8) for hypertension, 4.1 (2.8 to 6.1) for T2DM, 1.9 (1.4 to 2.5) for lower physical activity, 2.0 (1.3 to 3.2) for high BMI and 1.5 (1.1 to 2.1) for poor adherence to the Mediterranean diet. In all instances, RERI values were fairly small and not statistically significant, suggesting that the GRS and the ConvRFs do not have effects beyond additivity. Conclusions Genetic predisposition to CHD, operationalised through a multilocus GRS, and ConvRFs have essentially additive effects on CHD risk. PMID:24500614
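
    The deviation-from-additivity measure used above, the relative excess risk due to interaction, is commonly computed as RERI = OR11 - OR10 - OR01 + 1, where OR11 is the odds ratio for joint exposure and OR10, OR01 are the odds ratios for each exposure alone, all relative to the doubly unexposed group. The snippet below applies that formula to purely hypothetical odds ratios; the abstract reports only the joint ORs, so these numbers are illustrative.

```python
def reri(or11, or10, or01):
    """Relative excess risk due to interaction on the additive scale.
    RERI = 0 indicates exactly additive joint effects."""
    return or11 - or10 - or01 + 1.0

# Hypothetical example: higher GRS alone OR=1.4, hypertension alone OR=1.9,
# both together OR=2.7 (values invented for illustration).
print(reri(or11=2.7, or10=1.4, or01=1.9))   # 0.4, i.e. close to additivity
```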

  8. Community shifts of actively growing lake bacteria after N-acetyl-glucosamine addition: improving the BrdU-FACS method

    PubMed Central

    Tada, Yuya; Grossart, Hans-Peter

    2014-01-01

    In aquatic environments, community dynamics of bacteria, especially actively growing bacteria (AGB), are tightly linked with dissolved organic matter (DOM) quantity and quality. We analyzed the community dynamics of DNA-synthesizing and accordingly AGB by linking an improved bromodeoxyuridine immunocytochemistry approach with fluorescence-activated cell sorting (BrdU-FACS). FACS-sorted cells of even oligotrophic ecosystems in winter were characterized by 16S rRNA gene analysis. In incubation experiments, we examined community shifts of AGB in response to the addition of N-acetyl-glucosamine (NAG), one of the most abundant aminosugars in aquatic systems. Our improved BrdU-FACS analysis revealed that AGB winter communities of oligotrophic Lake Stechlin (northeastern Germany) substantially differ from those of total bacteria and consist of Alpha-, Beta-, Gamma-, Deltaproteobacteria, Actinobacteria, Candidatus OP10 and Chloroflexi. AGB populations with different BrdU-fluorescence intensities and cell sizes represented different phylotypes suggesting that single-cell growth potential varies at the taxon level. NAG incubation experiments demonstrated that a variety of widespread taxa related to Alpha-, Beta-, Gammaproteobacteria, Bacteroidetes, Actinobacteria, Firmicutes, Planctomycetes, Spirochaetes, Verrucomicrobia and Chloroflexi actively grow in the presence of NAG. The BrdU-FACS approach enables detailed phylogenetic studies of AGB and, thus, to identify those phylotypes which are potential key players in aquatic DOM cycling. PMID:23985742

  9. Self-assembling peptide amphiphiles and related methods for growth factor delivery

    DOEpatents

    Stupp, Samuel I.; Donners, Jack J. J. M.; Silva, Gabriel A.; Behanna, Heather A.; Anthony, Shawn G.

    2009-06-09

    Amphiphilic peptide compounds comprising one or more epitope sequences for binding interaction with one or more corresponding growth factors, micellar assemblies of such compounds and related methods of use.

  10. Self-assembling peptide amphiphiles and related methods for growth factor delivery

    DOEpatents

    Stupp, Samuel I.; Donners, Jack J. J. M.; Silva, Gabriel A.; Behanna, Heather A.; Anthony, Shawn G.

    2012-03-20

    Amphiphilic peptide compounds comprising one or more epitope sequences for binding interaction with one or more corresponding growth factors, micellar assemblies of such compounds and related methods of use.

  11. Self-assembling peptide amphiphiles and related methods for growth factor delivery

    DOEpatents

    Stupp, Samuel I; Donners, Jack J.J.M.; Silva, Gabriel A; Behanna, Heather A; Anthony, Shawn G

    2013-11-12

    Amphiphilic peptide compounds comprising one or more epitope sequences for binding interaction with one or more corresponding growth factors, micellar assemblies of such compounds and related methods of use.

  12. RECEPTOR MODELING OF AMBIENT PARTICULATE MATTER DATA USING POSITIVE MATRIX FACTORIZATION REVIEW OF EXISTING METHODS

    EPA Science Inventory

    Methods for apportioning sources of ambient particulate matter (PM) using the positive matrix factorization (PMF) algorithm are reviewed. Numerous procedural decisions must be made and algorithmic parameters selected when analyzing PM data with PMF. However, few publications docu...

  13. 48 CFR 514.270-6 - Guidelines for using the weight factors method.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    48 CFR 514.270-6, Federal Acquisition Regulations System, GENERAL: Guidelines for using the weight factors method. ... Multiply the bid for each item (unit price X quantity) by its weight factor. Then, add the subtotals together...
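
    The evaluation arithmetic described in this provision is straightforward: each item's extended bid price is multiplied by its weight factor and the weighted subtotals are summed. The snippet below is a generic illustration with invented items, prices and weights, not text from the regulation.

```python
# Each entry: (unit price in dollars, quantity, weight factor).
# Items and numbers are invented for illustration.
bid_items = {
    "item A": (12.50, 400, 0.5),
    "item B": (7.25, 1000, 0.3),
    "item C": (150.00, 20, 0.2),
}

weighted_total = 0.0
for name, (unit_price, qty, weight) in bid_items.items():
    subtotal = unit_price * qty * weight   # (unit price x quantity) x weight factor
    weighted_total += subtotal
    print(f"{name}: weighted subtotal = {subtotal:,.2f}")

print(f"evaluated bid = {weighted_total:,.2f}")
```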

  14. Additive Manufacturing/Diagnostics via the High Frequency Induction Heating of Metal Powders: The Determination of the Power Transfer Factor for Fine Metallic Spheres

    SciTech Connect

    Rios, Orlando; Radhakrishnan, Balasubramaniam; Caravias, George; Holcomb, Matthew

    2015-03-11

    Grid Logic Inc. is developing a method for sintering and melting fine metallic powders for additive manufacturing using spatially-compact, high-frequency magnetic fields called Micro-Induction Sintering (MIS). One of the challenges in advancing MIS technology for additive manufacturing is in understanding the power transfer to the particles in a powder bed. This knowledge is important to achieving efficient power transfer, control, and selective particle heating during the MIS process needed for commercialization of the technology. The project's work provided a rigorous physics-based model for induction heating of fine spherical particles as a function of frequency and particle size. This simulation improved upon Grid Logic's earlier models and provides guidance that will make the MIS technology more effective. The project model will be incorporated into Grid Logic's power control circuit of the MIS 3D printer product and its diagnostics technology to optimize the sintering process for part quality and energy efficiency.

  15. Spectrophotometric determination of carminic acid in human plasma and fruit juices by second order calibration of the absorbance spectra-pH data matrices coupled with standard addition method.

    PubMed

    Samari, Fayezeh; Hemmateenejad, Bahram; Shamsipur, Mojtaba

    2010-05-14

    A simple analytical method based on the second-order calibration of pH gradient spectrophotometric data was developed for the assay of carminic acid (CA) in human plasma and orange juice over the concentration range of 1.5-14.0 microM. The multi-way data analysis method was coupled with standard addition to account for the significant effects of the plasma and juice matrices on the acid-base behavior and UV-vis absorbance spectra of CA. Thus, the standard addition three-way calibration data of plasma or fruit juice samples were analyzed by parallel factor analysis (PARAFAC), and the concentration-related scores were used to derive a standard addition plot such as the one obtained in the univariate standard addition method. The number of PARAFAC components was obtained utilizing different criteria such as core consistency and residual errors through pf-test implementation. The applicability of the proposed method was evaluated by analysis of human plasma and fruit juices spiked with different levels of standard CA solutions. The results confirmed the success of the proposed method in the analysis of pH gradient spectrophotometric data for the determination of CA. The recoveries were between 86.7% and 106.7%. PMID:20441865
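
    Once the PARAFAC scores for the analyte are extracted, the last step is an ordinary standard addition plot: the scores are regressed on the spiked concentrations and the unknown is estimated from the abscissa intercept. The snippet below sketches only that final univariate step with invented numbers, not the multi-way decomposition itself.

```python
import numpy as np

# Hypothetical spiked CA concentrations (uM) and the corresponding
# PARAFAC concentration scores for the analyte component (invented, roughly linear).
added = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
score = np.array([1.10, 1.92, 2.73, 3.50, 4.31])

slope, intercept = np.polyfit(added, score, 1)
unknown = intercept / slope          # opposite of the abscissa intercept
print(f"estimated concentration in the sample: {unknown:.2f} uM")
```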

  16. Factors affecting the use of modern methods and materials in construction

    NASA Astrophysics Data System (ADS)

    Mesároš, P.; Mandičák, T.

    2015-01-01

    Sustainability of construction attracts much attention in construction industry. One of the factors driving this requirement is application of materials and components through modern methods and technologies. Modern methods of construction can be the way to obtain buildings assisting in minimizing the negative impact of construction industry on the environment. Article defines the factors affecting the use of these modern methods and materials of construction. At the same time it defines modern construction methods and materials that can be considered progressive in the construction process.

  17. An improved UPLC method for the detection of undeclared horse meat addition by using myoglobin as molecular marker.

    PubMed

    Di Giuseppe, Antonella M A; Giarretta, Nicola; Lippert, Martina; Severino, Valeria; Di Maro, Antimo

    2015-02-15

    In 2013, following the scandal of the presence of undeclared horse meat in various processed beef products across Europe, several studies have been undertaken to safeguard consumer health. In this framework, an improved UPLC separation method has been developed to detect the presence of horse myoglobin in raw meat samples. The separation of both horse and beef myoglobins was achieved in only seven minutes. The methodology was improved by preparing mixtures with different composition percentages of horse and beef meat. By using myoglobin as marker, low amounts (0.50mg/0.50g, w/w; ∼0.1%) of horse meat can be detected and quantified in minced raw meat samples with high reproducibility and sensitivity, thus offering a valid alternative to conventional PCR techniques.

  18. Protective netting, an additional method for the integrated control of livestock trypanosomosis in KwaZulu-Natal Province, South Africa.

    PubMed

    Esterhuizen, J; Van den Bossche, P

    2006-12-01

    Studies were conducted in KwaZulu-Natal, South Africa, to evaluate the effectiveness of netting in preventing Glossina austeni and Glossina brevipalpis from entering H-traps. Results indicated that a net of 1.5 m in height was effective in reducing catches of G. austeni by 59.6% and catches of G. brevipalpis by 80.9%. Increasing the net height to 2.5 m, reduced catches by 96.6% and 100% for G. brevipalpis and G. austeni, respectively. Nets of this height also reduced catches of horse flies by 55%. Although the potential use of protective netting has limitations in tsetse-infested areas of rural northern KwaZulu-Natal, it is a low-technology method that can be used as part of integrated disease management strategies.

  19. A Comparison of Method Effects in Two Confirmatory Factor Models for Structurally Different Methods

    ERIC Educational Resources Information Center

    Geiser, Christian; Eid, Michael; West, Stephen G.; Lischetzke, Tanja; Nussbeck, Fridtjof W.

    2012-01-01

    Multimethod data analysis is a complex procedure that is often used to examine the degree to which different measures of the same construct converge in the assessment of this construct. Several authors have called for a greater understanding of the definition and meaning of method effects in different models for multimethod data. In this article,…

  20. The Application of Quasi-Mean-Element-Method to LEO under Additional Perturbation due to Change of Coordinate System

    NASA Astrophysics Data System (ADS)

    Tang, Jing-shi; Liu, Lin

    2010-10-01

    The perturbation caused by the oscillation of the Earth's equatorial plane must be taken into account when working on the motion of a satellite in low Earth orbit (LEO) in the geocentric celestial coordinate system. Since the 1960s, an intermediate orbit coordinate system using the true equator and mean equinox (TEME) has been used. It effectively solves the problem and has been widely used in various applications to this day. However, this traditional reference frame is purely conceptual and has always been a headache when performing the transition between these systems, especially for those who are unfamiliar with celestial frames. As proved in a previous paper, it is possible to avoid the intermediate TEME frame, and conversions between osculating elements and mean elements can be completed in a consistent geocentric celestial coordinate system where only short-period terms are required. In this paper, after including the improved secular and long-period terms, the quasi-mean-element method can predict the orbit analytically, reaching an accuracy of 10^-6 Earth radii. All of this can be done in the same celestial frame. The results suggest that the celestial coordinate system (J2000.0 nowadays) can be used throughout all applications without having to introduce the TEME system as an intermediate frame.

  1. A numerical method for determining the strain rate intensity factor under plane strain conditions

    NASA Astrophysics Data System (ADS)

    Alexandrov, S.; Kuo, C.-Y.; Jeng, Y.-R.

    2016-07-01

    Using the classical model of rigid perfectly plastic solids, the strain rate intensity factor has been previously introduced as the coefficient of the leading singular term in a series expansion of the equivalent strain rate in the vicinity of maximum friction surfaces. Since then, many strain rate intensity factors have been determined by means of analytical and semi-analytical solutions. However, no attempt has been made to develop a numerical method for calculating the strain rate intensity factor. This paper presents such a method for planar flow. The method is based on the theory of characteristics. First, the strain rate intensity factor is derived in characteristic coordinates. Then, a standard numerical slip-line technique is supplemented with a procedure to calculate the strain rate intensity factor. The distribution of the strain rate intensity factor along the friction surface in compression of a layer between two parallel plates is determined. A high accuracy of this numerical solution for the strain rate intensity factor is confirmed by comparison with an analytic solution. It is shown that the distribution of the strain rate intensity factor is in general discontinuous.

  2. A form-factor method for determining the structure of distorted stars

    NASA Technical Reports Server (NTRS)

    Wolfe, R. H., Jr.; Kern, J. W.

    1979-01-01

    The equilibrium equations of a uniformly rotating and tidally distorted star are reduced to the same form as for a spherical star except for the inclusion of two form factors. One factor, expressing the buoyancy effects of centrifugal force, is determined directly from the integrated structure variables. The other factor, expressing the deviation from spherical shape, is shown to be relatively insensitive to errors in the assumed shape, so that accurate solutions are obtained in spite of the use of an a priori shape. The method is employed by adding computations for the factors to an existing spherical model program. Upper Main Sequence models determined by this method compare closely with results from the double approximation method even for critical rotation and tidal distortion.

  3. Franck-Condon Factors for Diatomics: Insights and Analysis Using the Fourier Grid Hamiltonian Method

    ERIC Educational Resources Information Center

    Ghosh, Supriya; Dixit, Mayank Kumar; Bhattacharyya, S. P.; Tembe, B. L.

    2013-01-01

    Franck-Condon factors (FCFs) play a crucial role in determining the intensities of the vibrational bands in electronic transitions. In this article, a relatively simple method to calculate the FCFs is illustrated. An algorithm for the Fourier Grid Hamiltonian (FGH) method for computing the vibrational wave functions and the corresponding energy…
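
    A Franck-Condon factor is the squared overlap of two vibrational wavefunctions, FCF = |<psi'_v' | psi_v>|^2, which is exactly the quantity a grid representation makes easy to evaluate. The sketch below is not the FGH solver itself: it simply takes analytic harmonic-oscillator ground-state wavefunctions for two displaced electronic states (made-up parameters, reduced units) and integrates their overlap on a grid.

```python
import numpy as np

# Grid in reduced units; all parameters are illustrative only.
q = np.linspace(-10.0, 10.0, 2001)
dq = q[1] - q[0]

def ho_ground_state(q, omega, q0):
    """Normalized v=0 harmonic-oscillator wavefunction centered at q0."""
    psi = np.exp(-0.5 * omega * (q - q0) ** 2)
    return psi / np.sqrt(np.sum(psi ** 2) * dq)

psi_lower = ho_ground_state(q, omega=1.0, q0=0.0)   # lower electronic state
psi_upper = ho_ground_state(q, omega=1.1, q0=1.2)   # upper state, displaced minimum

overlap = np.sum(psi_lower * psi_upper) * dq
fcf_00 = overlap ** 2
print(f"0-0 Franck-Condon factor ~ {fcf_00:.3f}")
```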

  4. Observation of the Effectiveness of Drama Method in Helping to Acquire the Addition-Subtraction Skills by Children at Preschool Phase

    ERIC Educational Resources Information Center

    Soydan, Sema; Quadir, Seher Ersoy

    2013-01-01

    The principal aim of this study is to show the effectiveness of a program prepared by the researchers to enable 6-year-old children attending pre-school educational institutions to gain addition-subtraction skills through a drama-based method. The work group in the research comprised 80 children who continued their education in…

  5. Estimating electric field enhancement factors on an aircraft utilizing a small scale model: A method evaluation

    NASA Technical Reports Server (NTRS)

    Easterbrook, Calvin C.; Rudolph, Terence; Easterbrook, Kevin

    1988-01-01

    A method for obtaining field enhancement factors at specific points on an aircraft utilizing a small scale model was evaluated by measuring several canonical shapes. Comparison of the form factors obtained by analytical means with measurements indicate that the experimental method has serious flaws. Errors of 200 to 300 percent were found between analytical values and measured values. As a result of the study, the analytical method is not recommended for calibration of field meters located on aircraft, and should not be relied upon in any application where the local spatial derivatives of the electric field on the model are large over the dimensions of the sensing probe.

  6. New experimental method for lidar overlap factor using a CCD side-scatter technique.

    PubMed

    Wang, Zhenzhu; Tao, Zongming; Liu, Dong; Wu, Decheng; Xie, Chenbo; Wang, Yingjian

    2015-04-15

    In theory, the lidar overlap factor can be derived from the difference between the particle backscatter coefficient retrieved from the lidar elastic signal without overlap correction and the actual particle backscatter coefficient, which can be obtained by other measurement techniques. The side-scatter technique using a CCD camera has been shown to be a powerful tool for detecting the particle backscatter coefficient in the near-ground layer at night. A new experimental approach to determine the overlap factor for vertically pointing lidar is presented in this study, which can be applied to Mie lidars. The effect of the overlap factor on the Mie lidar signal is corrected by an iteration algorithm combining the particle backscatter coefficient retrieved with the CCD side-scatter method and the Fernald method. This method has been successfully applied to Mie lidar measurements during a routine campaign, and the comparison of experimental results under different atmospheric conditions demonstrated that this method works in practice.

  7. Non-negative Matrix Factorization as a Method for Studying Coronal Heating

    NASA Astrophysics Data System (ADS)

    Barnes, Will; Bradshaw, Stephen

    2015-04-01

    Many theoretical efforts have been made to model the response of coronal loops to nanoflare heating, but the theory has long suffered from a lack of direct observations. Nanoflares, originally proposed by Parker (1988), heat the corona through short, impulsive bursts of energy. Because of their short duration and comparatively low amplitude, emission signatures from nanoflare heating events are often difficult to detect. Past algorithms (e.g. Ugarte-Urra and Warren, 2014) for measuring the frequency of transient brightenings in active region cores have provided only a lower bound for such measurements. We present the use of non-negative matrix factorization (NMF) to analyze spectral data in active region cores in order to provide more accurate determinations of nanoflare heating properties. NMF, a matrix deconvolution technique, has a variety of applications, ranging from Raman spectroscopy to face recognition, but, to our knowledge, has not been applied in the field of solar physics. The strength of NMF lies in its ability to estimate sources (heating events) from measurements (observed spectral emission) without any knowledge of the mixing process (Cichocki et al., 2009). We apply our NMF algorithm to forward-modeled emission representative of that produced by nanoflare heating events in an active region core. The heating events are modeled using a state-of-the-art hydrodynamics code (Bradshaw and Cargill, 2013) and the emission and active regions are synthesized using advanced forward modeling and visualization software (Bradshaw and Klimchuk, 2011; Reep et al., 2013). From these active region visualizations, our NMF algorithm is then able to predict the heating event frequency and amplitudes. Improved methods of nanoflare detection will help to answer fundamental questions regarding the frequency of energy release in the solar corona and how the corona responds to such impulsive heating. Additionally, development of reliable, automated nanoflare detection
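
    As a minimal illustration of the decomposition idea (using scikit-learn's general-purpose NMF rather than the authors' own algorithm), the snippet below factorizes a nonnegative synthetic "emission" matrix into nonnegative source and mixing matrices; all data are invented.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Synthetic nonnegative data: 2 hidden "heating event" time profiles mixed
# into 50 observed channels, plus noise (all values invented).
sources = np.abs(rng.standard_normal((2, 200)))
mixing = np.abs(rng.standard_normal((50, 2)))
X = mixing @ sources + 0.01 * np.abs(rng.standard_normal((50, 200)))

model = NMF(n_components=2, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)   # estimated mixing (50 x 2)
H = model.components_        # estimated source profiles (2 x 200)
print(W.shape, H.shape, f"reconstruction error = {model.reconstruction_err_:.3f}")
```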

  8. Analysis of spectral radiative heat transfer using discrete exchange factor method

    NASA Astrophysics Data System (ADS)

    Zhang, Yinqiu; Naraghi, M. H. N.

    1993-09-01

    A solution technique is developed for spectral radiative heat-transfer problems. The formulation is based on the discrete exchange factor (DEF) method and uses Edward's (1976) wide band model to obtain spectral data. The results of the analyses of three cases were found to be in excellent agreement with those of the zonal method and differ by less than 5 percent from those of the discrete-ordinates method.

  9. Cisapride a green analytical reagent for rapid and sensitive determination of bromate in drinking water, bread and flour additives by oxidative coupling spectrophotometric methods.

    PubMed

    Al Okab, Riyad Ahmed

    2013-02-15

    Green analytical methods using Cisapride (CPE) as a green analytical reagent were investigated in this work. Rapid, simple, and sensitive spectrophotometric methods for the determination of bromate in water samples, bread and flour additives were developed. The proposed methods are based on the oxidative coupling between phenoxazine and Cisapride in the presence of bromate to form a red colored product with maximum absorbance at 520 nm. Phenoxazine, Cisapride and their reaction products were found to be environmentally friendly under the optimum experimental conditions. The method obeys Beer's law in the concentration range 0.11-4.00 μg ml(-1), with a molar absorptivity of 1.41 × 10(4) L mol(-1) cm(-1). All variables have been optimized and the presented reaction sequences were applied to the analysis of bromate in water, bread and flour additive samples. The performance of these methods was evaluated in terms of Student's t-test and the variance ratio F-test to establish the significance of the proposed methods relative to the reference method. The combination of pharmaceutical drug reagents at low concentrations creates some unique green chemical analyses.

  11. The Pabst's method: an effective and low-budget tool for the forensic comparison of opaque thermoplastics--part 1: Additional discrimination of black electrical tapes.

    PubMed

    Henning, Siegfried; Schönberger, Torsten; Simmross, Ulrich

    2013-12-10

    For many years now, Pabst's micro-press has been used in German forensic science laboratories as a valuable addition to methods of comparative analysis of plastic trace evidence. However, it is as yet hardly known in laboratories outside of Germany. The principal reproducibility is demonstrated by a homogeneity check of a raw backing material of defined origin. The illustrated results of a proficiency test emphasise the applicability of the Pabst method for forensic comparisons. The discrimination power of the Pabst method was tested by taking 90 black PVC-backings provided by the FBI Laboratory, i.e. those that could not be discriminated by standard methods. In this way further discriminations could be achieved. In the following, the Pabst method is therefore introduced as a straightforward, inexpensive and useful tool.

  12. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    SciTech Connect

    Ronald Laurids Boring

    2010-11-01

    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  13. CFD-based method of determining form factor k for different ship types and different drafts

    NASA Astrophysics Data System (ADS)

    Wang, Jinbao; Yu, Hai; Zhang, Yuefeng; Xiong, Xiaoqing

    2016-07-01

    The value of the form factor k at different drafts is important in predicting the full-scale total resistance and speed for different types of ships. In the ITTC community, most organizations predict the form factor k using a low-speed model test. However, this method is problematic for ships with bulbous bows and transom sterns. In this article, a Computational Fluid Dynamics (CFD)-based method is introduced to obtain k for different types of ships at different drafts, and a comparison is made between the CFD method and the model test. The results show that the CFD method produces reasonable k values. A grid generation method and turbulence model are briefly discussed in the context of obtaining a consistent k using CFD.

  14. CFD-based method of determining form factor k for different ship types and different drafts

    NASA Astrophysics Data System (ADS)

    Wang, Jinbao; Yu, Hai; Zhang, Yuefeng; Xiong, Xiaoqing

    2016-09-01

    The value of the form factor k at different drafts is important in predicting the full-scale total resistance and speed for different types of ships. In the ITTC community, most organizations predict the form factor k using a low-speed model test. However, this method is problematic for ships with bulbous bows and transom sterns. In this article, a Computational Fluid Dynamics (CFD)-based method is introduced to obtain k for different types of ships at different drafts, and a comparison is made between the CFD method and the model test. The results show that the CFD method produces reasonable k values. A grid generation method and turbulence model are briefly discussed in the context of obtaining a consistent k using CFD.

  15. Investigation of M2 factor influence for paraxial computer generated hologram reconstruction using a statistical method

    NASA Astrophysics Data System (ADS)

    Flury, M.; Gérard, P.; Takakura, Y.; Twardworski, P.; Fontaine, J.

    2005-04-01

    In this paper, we study the influence of the M2 quality factor of an incident beam on the reconstruction performance of a computer generated hologram (CGH). We use a statistical method to analyze the evolution of different quality criteria, such as diffraction efficiency, root mean square error, illumination uniformity and correlation coefficient, calculated on the numerical reconstruction as the M2 quality factor increases. The simulation results show that this factor must always be taken into account in the CGH design when the M2 value is greater than 2.
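
    The quality criteria named above can be computed directly from a numerical reconstruction and its target pattern. The short Python sketch below uses invented placeholder arrays and common textbook definitions of the criteria, which may differ in detail from those used in the paper.

      # Hedged sketch: quality criteria for a toy CGH reconstruction versus its
      # target intensity pattern. Arrays and definitions are illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      target = np.zeros((64, 64)); target[24:40, 24:40] = 1.0        # desired spot
      recon = 0.8 * target + 0.02 * rng.random((64, 64))             # simulated output

      signal = recon[target > 0]
      diffraction_efficiency = signal.sum() / recon.sum()
      rmse = np.sqrt(np.mean((recon - target) ** 2))
      uniformity = 1.0 - (signal.max() - signal.min()) / (signal.max() + signal.min())
      correlation = np.corrcoef(recon.ravel(), target.ravel())[0, 1]

      print(diffraction_efficiency, rmse, uniformity, correlation)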

  16. Fat and starch as additive risk factors for milk fat depression in dairy diets containing corn dried distillers grains with solubles.

    PubMed

    Ramirez Ramirez, H A; Castillo Lopez, E; Harvatine, K J; Kononoff, P J

    2015-03-01

    Two experiments were conducted to evaluate the additive effects of starch and fat as risk factors associated with milk fat depression in dairy diets containing corn dried distillers grains with solubles. In experiment 1, 4 multiparous ruminally cannulated Holstein cows, averaging 114±14 d in milk and 662±52 kg of body weight, were randomly assigned to 4 treatments in a 4×4 Latin square to determine the effect of these risk factors on rumen fermentation and milk fatty acid profile. In each 21-d period, cows were assigned to 1 of 4 dietary treatments: a control diet (CON; ether extract 5.2%, starch 19%); CON with added oil (OL; ether extract 6.4%, starch 18%); CON with added starch (STR; ether extract 5.5%, starch 22%); and CON with added oil and starch (COMBO; ether extract 6.5%, starch 23%). After completion of experiment 1, the milk production response was evaluated in a second experiment with a similar approach to diet formulation. Twenty Holstein cows, 12 primiparous and 8 multiparous, averaging 117±17 d in milk and 641±82 kg, were used in replicated 4×4 Latin squares with 21-d periods. Results from experiment 1 showed that ruminal pH was not affected by treatment, averaging 5.87±0.08. The molar proportion of propionate in rumen fluid was greatest on the COMBO diet, followed by OL and STR, and lowest for CON. The concentration of trans-10,cis-12 conjugated linoleic acid in milk fat increased with the COMBO diet. Adding oil, starch, or a combination of both resulted in lower concentrations and yields of fatty acids <16 carbons. Compared with the control, OL and STR resulted in 13% lower concentration, whereas the COMBO diet resulted in a 27% reduction; similarly, yield was reduced by 24% with the OL and STR treatments and 54% with the COMBO diet. In experiment 2, milk yield, milk protein percentage, and milk protein yield were similar across treatments, averaging 26.6±1.01 kg/d, 3.2±0.05%, and 0.84±0.03 kg/d, respectively. Fat-corrected milk was greatest for CON, 26

  17. A new method for testing the scale-factor performance of fiber optical gyroscope

    NASA Astrophysics Data System (ADS)

    Zhao, Zhengxin; Yu, Haicheng; Li, Jing; Li, Chao; Shi, Haiyang; Zhang, Bingxin

    2015-10-01

    The fiber optic gyroscope (FOG) is a kind of solid-state optical gyroscope with good environmental adaptability, which has been widely used in national defense, aviation, aerospace and other civilian areas. In some applications, the FOG will experience environmental conditions such as vacuum, radiation and vibration, and the scale-factor performance is of concern as an important accuracy indicator. However, the scale-factor performance of a FOG under these environmental conditions is difficult to test using conventional methods, as the turntable cannot work under such conditions. Based on the fact that the physical effect produced in a FOG by a sawtooth voltage signal under static conditions is consistent with the physical effect produced by a turntable in uniform rotation, a new method for testing the scale-factor performance of a FOG without a turntable is proposed in this paper. In this method, the test system for the scale-factor performance consists of an external operational amplifier circuit and a FOG in which the modulation signal and the Y-waveguide are disconnected. The external operational amplifier circuit is used to superimpose the externally generated sawtooth voltage signal and the modulation signal of the FOG, and to apply the superimposed signal to the Y-waveguide of the FOG. The test system can produce different equivalent angular velocities by changing the period of the sawtooth signal in the scale-factor performance test. In this paper, the system model of a FOG superimposed with an externally generated sawtooth signal is analyzed, and it is concluded that the equivalent input angular velocity produced by the sawtooth voltage signal has the same effect as the input angular velocity produced by the turntable. The relationship between the equivalent angular velocity and parameters such as the sawtooth period is presented, and the correction method for the equivalent angular velocity is also presented by

  18. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics.

    PubMed

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems. PMID:27176426

  19. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.

  20. Additive method for the prediction of protein-peptide binding affinity. Application to the MHC class I molecule HLA-A*0201.

    PubMed

    Doytchinova, Irini A; Blythe, Martin J; Flower, Darren R

    2002-01-01

    A method has been developed for prediction of binding affinities between proteins and peptides. We exemplify the method through its application to binding predictions of peptides with affinity to major histocompatibility complex class I molecule HLA-A*0201. The method is named "additive" because it is based on the assumption that the binding affinity of a peptide could be presented as a sum of the contributions of the amino acids at each position and the interactions between them. The amino acid contributions and the contributions of the interactions between adjacent side chains and every second side chain were derived using a partial least squares (PLS) statistical methodology using a training set of 420 experimental IC50 values. The predictive power of the method was assessed using rigorous cross-validation and using an independent test set of 89 peptides. The mean value of the residuals between the experimental and predicted pIC50 values was 0.508 for this test set. The additive method was implemented in a program for rapid T-cell epitope search. It is universal and can be applied to any peptide-protein interaction where binding data is known. PMID:12645903
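
    As an illustration of the additive idea, per-position contributions and adjacent-pair interaction terms can be encoded as indicator features and fitted with PLS regression. The sketch below is a minimal reconstruction of that setup, not the authors' code; the peptides, pIC50 values and the choice of two PLS components are placeholders.

      # Hedged sketch of an additive binding-affinity model in the spirit of the
      # abstract: affinity = sum of per-position amino-acid contributions plus
      # adjacent side-chain interaction terms, fitted by partial least squares.
      # Peptide/affinity data below are made-up placeholders, not the paper's set.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
      AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

      def encode(peptide):
          """One-hot position terms plus one-hot terms for adjacent residue pairs."""
          n_aa = len(AMINO_ACIDS)
          pos = np.zeros(len(peptide) * n_aa)
          for i, aa in enumerate(peptide):
              pos[i * n_aa + AA_INDEX[aa]] = 1.0
          pair = np.zeros((len(peptide) - 1) * n_aa * n_aa)
          for i in range(len(peptide) - 1):
              a, b = AA_INDEX[peptide[i]], AA_INDEX[peptide[i + 1]]
              pair[i * n_aa * n_aa + a * n_aa + b] = 1.0
          return np.concatenate([pos, pair])

      # Toy training set: 9-mer peptides with hypothetical pIC50 values.
      peptides = ["ILKEPVHGV", "LLFGYPVYV", "GILGFVFTL", "TLTSCNTSV"]
      pic50 = np.array([6.1, 7.3, 7.8, 5.4])

      X = np.vstack([encode(p) for p in peptides])
      model = PLSRegression(n_components=2).fit(X, pic50)
      print(model.predict(encode("ILKEPVHGV").reshape(1, -1)))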

  1. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $0.023 per pound of aluminum produced is projected for a 200 kA pot.

  2. Factors that determine prevalence of use of contraceptive methods for men.

    PubMed

    Ringheim, K

    1993-01-01

    Globally, men have not shared equally with women the responsibility for fertility regulation. While family planning efforts have been directed almost exclusively toward women, the lack of male involvement may also reflect the limited options available to men. Current methods for men are either coitus-dependent, such as the condom or withdrawal, or permanent, such as vasectomy. The 20-year history of social science research on male contraceptive methods is examined here in terms of the human and method factors related to the acceptability of hypothetical methods and the prevalence of use of existing methods. New male methods, particularly if reversible, may alter men's willingness to accept or share responsibility for the control of fertility. Research opportunities in the areas of gender, decision-making, communication, health education, and service delivery will be enhanced when methods for women and men are comparable.

  3. Effect of PEG additive on anode microstructure and cell performance of anode-supported MT-SOFCs fabricated by phase inversion method

    NASA Astrophysics Data System (ADS)

    Ren, Cong; Liu, Tong; Maturavongsadit, Panita; Luckanagul, Jittima Amie; Chen, Fanglin

    2015-04-01

    Anode-supported micro-tubular solid oxide fuel cells (MT-SOFCs) have been fabricated by a phase inversion method. For the anode support preparation, N-methyl-2-pyrrolidone (NMP), polyethersulfone (PESf) and polyethylene glycol (PEG) were applied as the solvent, polymer binder and additive, respectively. The effects of the molecular weight and amount of PEG additive on the thermodynamics of the casting solutions were characterized by measuring the coagulation value. The viscosity of the casting slurries was also measured, and the influence of the PEG additive on viscosity was studied and discussed. The presence of PEG in the casting slurry can significantly influence the final anode support microstructure. Based on the microstructure results and the measured gas permeation values, two anode supports were selected for cell fabrication. For the cell with the anode support fabricated using slurry with PEG additive, a maximum power density of 704 mW cm-2 is obtained at 750 °C with humidified hydrogen as fuel and ambient air as oxidant; the cell fabricated without any PEG additive shows a peak power density of 331 mW cm-2. The relationship between anode microstructure and cell performance is discussed.

  4. Using Frequent Item Set Mining and Feature Selection Methods to Identify Interacted Risk Factors - The Atrial Fibrillation Case Study.

    PubMed

    Li, Xiang; Liu, Haifeng; Du, Xin; Hu, Gang; Xie, Guotong; Zhang, Ping

    2016-01-01

    Disease risk prediction is highly important for early intervention and treatment, and the identification of predictive risk factors is the key to achieving accurate prediction. In addition to the original independent features in a dataset, some interacted features, such as comorbidities and combination therapies, may have a non-additive influence on the disease outcome and can also be used in risk prediction to improve the prediction performance. However, it is usually difficult to manually identify the possible interacted risk factors due to the combinatorial explosion of features. In this paper, we propose an automatic approach to identify predictive risk factors with interactions using frequent item set mining and feature selection methods. The proposed approach was applied to the real-world case study of predicting ischemic stroke and thromboembolism (TE) for atrial fibrillation (AF) patients on the Chinese atrial fibrillation registry dataset, and the results show that our approach can not only improve the prediction performance but also identify the comorbidities and combination therapies that have potential influences on TE occurrence in AF. PMID:27577446
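
    A minimal sketch of such a pipeline, assuming the mlxtend and scikit-learn libraries and an invented toy dataset, is shown below; the paper's actual mining and selection settings are not reproduced here.

      # Illustrative sketch (not the paper's pipeline): mine frequent co-occurring
      # risk factors with apriori, turn each frequent itemset into a candidate
      # interaction feature, then keep the most predictive ones with a filter method.
      # The toy data and threshold values are assumptions.
      import pandas as pd
      from mlxtend.frequent_patterns import apriori
      from sklearn.feature_selection import SelectKBest, chi2

      # Binary patient-by-factor matrix (1 = factor present) and the outcome.
      X = pd.DataFrame({
          "hypertension":  [1, 1, 0, 1, 0, 1],
          "diabetes":      [1, 0, 0, 1, 1, 1],
          "anticoagulant": [0, 1, 1, 0, 0, 1],
      })
      y = [1, 0, 0, 1, 1, 1]  # e.g. thromboembolism outcome

      itemsets = apriori(X.astype(bool), min_support=0.3, use_colnames=True)
      for items in itemsets["itemsets"]:
          if len(items) > 1:  # only true interactions (comorbidities / combinations)
              name = "+".join(sorted(items))
              X[name] = X[list(items)].all(axis=1).astype(int)

      selector = SelectKBest(chi2, k=3).fit(X, y)
      print([c for c, keep in zip(X.columns, selector.get_support()) if keep])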

  5. Quantum dynamical structure factor of liquid neon via a quasiclassical symmetrized method

    NASA Astrophysics Data System (ADS)

    Monteferrante, Michele; Bonella, Sara; Ciccotti, Giovanni

    2013-02-01

    We apply the phase integration method for quasiclassical quantum time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011), 10.1080/00268976.2011.619506] to compute the dynamic structure factor of liquid neon. So far the method had been tested only on model systems. By comparing our results for neon with experiments and previous calculations, we demonstrate that the scheme is accurate and efficient also for a realistic model of a condensed phase system showing quantum behavior.

  6. Quantum dynamical structure factor of liquid neon via a quasiclassical symmetrized method.

    PubMed

    Monteferrante, Michele; Bonella, Sara; Ciccotti, Giovanni

    2013-02-01

    We apply the phase integration method for quasiclassical quantum time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)] to compute the dynamic structure factor of liquid neon. So far the method had been tested only on model systems. By comparing our results for neon with experiments and previous calculations, we demonstrate that the scheme is accurate and efficient also for a realistic model of a condensed phase system showing quantum behavior.

  7. The effectiveness of the McKenzie method in addition to first-line care for acute low back pain: a randomized controlled trial

    PubMed Central

    2010-01-01

    Background Low back pain is a highly prevalent and disabling condition worldwide. Clinical guidelines for the management of patients with acute low back pain recommend first-line treatment consisting of advice, reassurance and simple analgesics. Exercise is also commonly prescribed to these patients. The primary aim of this study was to evaluate the short-term effect of adding the McKenzie method to the first-line care of patients with acute low back pain. Methods A multi-centre randomized controlled trial with a 3-month follow-up was conducted between September 2005 and June 2008. Patients seeking care for acute non-specific low back pain from primary care medical practices were screened. Eligible participants were assigned to receive a treatment programme based on the McKenzie method and first-line care (advice, reassurance and time-contingent acetaminophen) or first-line care alone, for 3 weeks. Primary outcome measures included pain (0-10 Numeric Rating Scale) over the first seven days, pain at 1 week, pain at 3 weeks and global perceived effect (-5 to 5 scale) at 3 weeks. Treatment effects were estimated using linear mixed models. Results One hundred and forty-eight participants were randomized into study groups, of whom 138 (93%) completed the last follow-up. The addition of the McKenzie method to first-line care produced statistically significant but small reductions in pain when compared to first-line care alone: mean of -0.4 points (95% confidence interval, -0.8 to -0.1) at 1 week, -0.7 points (95% confidence interval, -1.2 to -0.1) at 3 weeks, and -0.3 points (95% confidence interval, -0.5 to -0.0) over the first 7 days. Patients receiving the McKenzie method did not show additional effects on global perceived effect, disability, function or on the risk of persistent symptoms. These patients sought less additional health care than those receiving only first-line care (P = 0.002). Conclusions When added to the currently recommended first-line care of acute

  8. Finite Difference Methods for Option Pricing under Lévy Processes: Wiener-Hopf Factorization Approach

    PubMed Central

    2013-01-01

    In the paper, we consider the problem of pricing options in wide classes of Lévy processes. We propose a general approach to the numerical methods based on a finite difference approximation for the generalized Black-Scholes equation. The goal of the paper is to incorporate the Wiener-Hopf factorization into finite difference methods for pricing options in Lévy models with jumps. The method is applicable for pricing barrier and American options. The pricing problem is reduced to the sequence of linear algebraic systems with a dense Toeplitz matrix; then the Wiener-Hopf factorization method is applied. We give an important probabilistic interpretation based on the infinitely divisible distributions theory to the Laurent operators in the correspondent factorization identity. Notice that our algorithm has the same complexity as the ones which use the explicit-implicit scheme, with a tridiagonal matrix. However, our method is more accurate. We support the advantage of the new method in terms of accuracy and convergence by using numerical experiments. PMID:24489518
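
    The paper's Wiener-Hopf factorization scheme is not reproduced here, but the tridiagonal explicit-implicit baseline it is compared against can be sketched compactly. The following Crank-Nicolson finite-difference solver for the plain Black-Scholes equation (European call, invented parameter values) illustrates that baseline.

      # Not the paper's Wiener-Hopf scheme: a minimal Crank-Nicolson finite-difference
      # solver for the plain Black-Scholes PDE, i.e. the tridiagonal explicit-implicit
      # baseline the abstract compares against. All parameter values are assumptions.
      import numpy as np
      from scipy.linalg import solve_banded

      S_max, K, r, sigma, T = 300.0, 100.0, 0.05, 0.2, 1.0
      M, N = 200, 200                      # space and time steps
      dS, dt = S_max / M, T / N
      S = np.linspace(0.0, S_max, M + 1)
      V = np.maximum(S - K, 0.0)           # European call payoff at maturity

      i = np.arange(1, M)
      a = 0.25 * dt * (sigma**2 * i**2 - r * i)
      b = -0.5 * dt * (sigma**2 * i**2 + r)
      c = 0.25 * dt * (sigma**2 * i**2 + r * i)

      # Banded matrix for the implicit half of Crank-Nicolson.
      ab = np.zeros((3, M - 1))
      ab[0, 1:] = -c[:-1]                  # superdiagonal
      ab[1, :] = 1.0 - b                   # diagonal
      ab[2, :-1] = -a[1:]                  # subdiagonal

      for n in range(N):
          t = T - (n + 1) * dt
          rhs = a * V[:-2] + (1.0 + b) * V[1:-1] + c * V[2:]
          rhs[-1] += c[-1] * (S_max - K * np.exp(-r * (T - t)))  # boundary at S_max
          V[1:-1] = solve_banded((1, 1), ab, rhs)
          V[0], V[-1] = 0.0, S_max - K * np.exp(-r * (T - t))

      print("Call price at S=100:", np.interp(100.0, S, V))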

  9. Human factors analysis and design methods for nuclear waste retrieval systems. Human factors design methodology and integration plan

    SciTech Connect

    Casey, S.M.

    1980-06-01

    The purpose of this document is to provide an overview of the recommended activities and methods to be employed by a team of human factors engineers during the development of a nuclear waste retrieval system. This system, as it is presently conceptualized, is intended to be used for the removal of storage canisters (each canister containing a spent fuel rod assembly) located in an underground salt bed depository. This document, and the others in this series, have been developed for the purpose of implementing human factors engineering principles during the design and construction of the retrieval system facilities and equipment. The methodology presented has been structured around a basic systems development effort involving preliminary development, equipment development, personnel subsystem development, and operational test and evaluation. Within each of these phases, the recommended activities of the human engineering team have been stated, along with descriptions of the human factors engineering design techniques applicable to the specific design issues. Explicit examples of how the techniques might be used in the analysis of human tasks and equipment required in the removal of spent fuel canisters have been provided. Only those techniques having possible relevance to the design of the waste retrieval system have been reviewed. This document is intended to provide the framework for integrating human engineering with the rest of the system development effort. The activities and methodologies reviewed in this document have been discussed in the general order in which they will occur, although the time frame (the total duration of the development program in years and months) in which they should be performed has not been discussed.

  10. The effectiveness of power-generating complexes constructed on the basis of nuclear power plants combined with additional sources of energy determined taking risk factors into account

    NASA Astrophysics Data System (ADS)

    Aminov, R. Z.; Khrustalev, V. A.; Portyankin, A. V.

    2015-02-01

    The effectiveness of combining nuclear power plants equipped with water-cooled water-moderated power-generating reactors (VVER) with other sources of energy within unified power-generating complexes is analyzed. The use of such power-generating complexes makes it possible to achieve the necessary load pickup capability and flexibility in performing the mandatory selective primary and emergency control of load, as well as participation in passing the night minimums of electric load curves, while retaining high values of the capacity utilization factor of the entire power-generating complex at higher levels of steam-turbine efficiency. Versions involving the combined use of nuclear power plants with hydrogen toppings and gas turbine units for generating electricity are considered. Because hydrogen is an unsafe energy carrier whose use introduces additional elements of risk, a procedure for evaluating these risks under different conditions of implementing the fuel-and-hydrogen cycle at nuclear power plants is proposed. A risk accounting technique based on statistical data is considered, including the characteristics of hydrogen and gas pipelines and the occurrence rate of tightness loss in process pipeline equipment. The expected intensities of fires and explosions at nuclear power plants fitted with hydrogen toppings and gas turbine units are calculated. In estimating the damage inflicted by events (fires and explosions) that occurred in nuclear power plant turbine buildings, US statistical data were used. Conservative scenarios of fires and explosions of hydrogen-air mixtures in nuclear power plant turbine buildings are presented. Results from calculations of the ratio of the introduced annual risk to the attained net annual profit for comparable versions are given. This ratio can be used in selecting projects characterized by the most technically attainable and socially acceptable safety.

  11. Linear Ordinary Differential Equations with Constant Coefficients. Revisiting the Impulsive Response Method Using Factorization

    ERIC Educational Resources Information Center

    Camporesi, Roberto

    2011-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of…

  12. The Critical Success Factors Method: Its Application in a Special Library Environment.

    ERIC Educational Resources Information Center

    Borbely, Jack

    1981-01-01

    Discusses the background and theory of the Critical Success Factors (CSF) management method, as well as its application in an information center or other special library environment. CSF is viewed as a management tool that can enhance the viability of the special library within its parent organization. (FM)

  13. Exploring Task- and Student-Related Factors in the Method of Propositional Manipulation (MPM)

    ERIC Educational Resources Information Center

    Leppink, Jimmie; Broers, Nick J.; Imbos, Tjaart; van der Vleuten, Cees P. M.; Berger, Martijn P. F.

    2011-01-01

    The method of propositional manipulation (MPM) aims to help students develop conceptual understanding of statistics by guiding them into self-explaining propositions. To explore task- and student-related factors influencing students' ability to learn from MPM, twenty undergraduate students performed six learning tasks while thinking aloud. The…

  14. Methods and Measures: Confirmatory Factor Analysis and Multidimensional Scaling for Construct Validation of Cognitive Abilities

    ERIC Educational Resources Information Center

    Tucker-Drob, Elliot M.; Salthouse, Timothy A.

    2009-01-01

    Although factor analysis is the most commonly-used method for examining the structure of cognitive variable interrelations, multidimensional scaling (MDS) can provide visual representations highlighting the continuous nature of interrelations among variables. Using data (N = 8,813; ages 17-97 years) aggregated across 38 separate studies, MDS was…

  15. Stored grain pack factors for wheat: comparison of three methods to field measurements

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Storing grain in bulk storage units results in grain packing from overbearing pressure, which increases grain bulk density and storage-unit capacity. This study compared pack factors of hard red winter (HRW) wheat in vertical storage bins using different methods: the existing packing model (WPACKING...

  16. The Effect of Missing Data Handling Methods on Goodness of Fit Indices in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Köse, Alper

    2014-01-01

    The primary objective of this study was to examine the effect of missing data on goodness of fit statistics in confirmatory factor analysis (CFA). For this aim, four missing data handling methods; listwise deletion, full information maximum likelihood, regression imputation and expectation maximization (EM) imputation were examined in terms of…

  17. Understanding the Impact of School Factors on School Counselor Burnout: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Bardhoshi, Gerta; Schweinle, Amy; Duncan, Kelly

    2014-01-01

    This mixed-methods study investigated the relationship between burnout and performing noncounseling duties among a national sample of professional school counselors, while identifying school factors that could attenuate this relationship. Results of regression analyses indicate that performing noncounseling duties significantly predicted burnout…

  18. Analysis of Social Cohesion in Health Data by Factor Analysis Method: The Ghanaian Perspective

    ERIC Educational Resources Information Center

    Saeed, Bashiru I. I.; Xicang, Zhao; Musah, A. A. I.; Abdul-Aziz, A. R.; Yawson, Alfred; Karim, Azumah

    2013-01-01

    We investigated the study of the overall social cohesion of Ghanaians. In this study, we considered the paramount interest of the involvement of Ghanaians in their communities, their views of other people and institutions, and their level of interest in both local and national politics. The factor analysis method was employed for analysis using R…

  19. Kinetic spectrophotometric H-point standard addition method for the simultaneous determination of diloxanide furoate and metronidazole in binary mixtures and biological fluids.

    PubMed

    Issa, Mahmoud Mohamed; Nejem, R'afat Mahmoud; Abu Shanab, Alaa Mohamed; Shaat, Nahed Talab

    2013-10-01

    A simple, reliable, and sensitive kinetic spectrophotometric method has been developed for the simultaneous determination of diloxanide furoate and metronidazole using the H-point standard addition method (HPSAM). The method is based on the difference in the oxidation rates of diloxanide and metronidazole by potassium permanganate in basic medium. A green color is developed and measured at 610 nm. Different experimental parameters were carefully optimized. The limiting logarithmic and initial-rate methods were adopted for the construction of the calibration curve of each individual reaction with potassium permanganate. Under the optimum conditions, Beer's law was obeyed in the range of 1.0-20.0 and 5.0-25.0 μg ml(-1) for diloxanide furoate and metronidazole, respectively. The detection limits were 0.22 μg ml(-1) for diloxanide furoate and 0.83 μg ml(-1) for metronidazole. Correlation coefficients of the regression equations were greater than 0.9970 in all cases. The precision of the method was satisfactory; the maximum relative standard deviation did not exceed 1.06% (n=5). The accuracy, expressed as recovery, was between 99.4% and 101.4%, with relative errors of 0.12 and 0.14 for diloxanide furoate and metronidazole, respectively. The proposed method was successfully applied to the simultaneous determination of both drugs in pharmaceutical dosage forms and human urine samples and compared with an alternative HPLC method.
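
    The numerical core of HPSAM can be illustrated in a few lines: the signals recorded at two reaction times are regressed on the amount of standard added, and the abscissa of the intersection of the two lines (the H point) gives the analyte concentration. The data in the sketch below are invented, not the paper's measurements.

      # Minimal numerical sketch of the H-point standard addition idea (HPSAM).
      import numpy as np

      added = np.array([0.0, 2.0, 4.0, 6.0, 8.0])                 # added standard, ug/mL
      signal_t1 = np.array([0.210, 0.305, 0.401, 0.497, 0.592])   # absorbance at time 1
      signal_t2 = np.array([0.310, 0.455, 0.602, 0.748, 0.893])   # absorbance at time 2

      m1, b1 = np.polyfit(added, signal_t1, 1)
      m2, b2 = np.polyfit(added, signal_t2, 1)

      # Intersection abscissa: m1*x + b1 = m2*x + b2  ->  x = (b2 - b1) / (m1 - m2)
      c_h = (b2 - b1) / (m1 - m2)
      print("Analyte concentration in sample:", -c_h, "ug/mL")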

  20. H-point standard addition method applied to simultaneous kinetic determination of antimony(III) and antimony(V) by adsorptive linear sweep voltammetry.

    PubMed

    Zarei, K; Atabati, M; Karami, M

    2010-07-15

    In this work, the applicability of the H-point standard addition method (HPSAM) to kinetic voltammetry data is verified. For this purpose, a procedure is described for the determination of Sb(III) and Sb(V) by adsorptive linear sweep voltammetry using pyrogallol as a complexing agent. The method is based on the differences between the rates of complexation of pyrogallol with Sb(V) and Sb(III) at pH 1.2. The results show that the H-point standard addition method is suitable for the speciation of antimony. Sb(III) and Sb(V) can be determined in the ranges of 0.003-0.120 and 0.010-0.240 microg mL(-1), respectively. Moreover, the solution is analyzed for any possible effects of foreign ions. The obtained results show that HPSAM in combination with electroanalytical techniques is a powerful method with high sensitivity and selectivity. The procedure is successfully applied to the speciation of antimony in water samples.

  1. Empirical Calibration of the P-Factor for Cepheid Radii Determined Using the IR Baade-Wesselink Method

    NASA Astrophysics Data System (ADS)

    Joner, Michael D.; Laney, C. D.

    2012-05-01

    We have used 41 galactic Cepheids for which parallax or cluster/association distances are available, and for which pulsation parallaxes can be calculated, to calibrate the p-factor to be used in K-band Baade-Wesselink radius calculations. Our sample includes the 10 Cepheids from Benedict et al. (2007), and three additional Cepheids with Hipparcos parallaxes derived from van Leeuwen et al. (2007). Turner and Burke (2002) list cluster distances for 33 Cepheids for which radii have been or (in a few cases) can be calculated. Revised cluster distances from Turner (2010), Turner and Majaess (2008, 2012), and Majaess and Turner (2011, 2012a, 2012b) have been used where possible. Radii have been calculated using the methods described in Laney and Stobie (1995) and converted to K-band absolute magnitudes using the methods described in van Leeuwen et al. (2007), Feast et al. (2008), and Laney and Joner (2009). The resulting pulsation parallaxes have been used to estimate the p-factor for each Cepheid. These new results stand in contradiction to those derived by Storm et al. (2011), but are in good agreement with theoretical predictions by Nardetto et al. (2009) and with interferometric estimates of the p-factor, as summarized in Groenewegen (2007). We acknowledge the Brigham Young University College of Physical and Mathematical Sciences for continued support of research done using the facilities and personnel at the West Mountain Observatory. This support is connected with NSF/AST grant #0618209.

  2. HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD

    SciTech Connect

    Harold S. Blackman; David I. Gertman; Ronald L. Boring

    2008-09-01

    This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks and applying performance shaping factor multipliers to those nominal error rates.
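
    The calculation described above reduces to a nominal error rate multiplied by the applicable PSF multipliers, as in the short sketch below; the nominal rates and multiplier values used here are illustrative assumptions rather than the official SPAR-H tables.

      # Simple worked sketch of a SPAR-H style calculation: HEP = nominal rate
      # times the product of PSF multipliers (capped at 1.0). Values are assumed.
      NOMINAL_HEP = {"diagnosis": 1e-2, "action": 1e-3}

      def spar_h_hep(task_type, psf_multipliers):
          """Basic HEP = nominal rate x product of PSF multipliers, capped at 1.0."""
          hep = NOMINAL_HEP[task_type]
          for m in psf_multipliers:
              hep *= m
          return min(hep, 1.0)

      # Example: an action task under high stress (x2) with poor ergonomics (x10).
      print(spar_h_hep("action", [2, 10]))   # -> 0.02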

  3. Heat Transfer and Friction-Factor Methods Turbulent Flow Inside Pipes 3d Rough

    1994-01-21

    Three-dimensional roughened internally enhanced tubes have been shown to be among the most energy efficient for turbulent, forced convection applications. However, there is only one prediction method presented in the open literature, and it is restricted to three-dimensional sand-grain roughness. Other roughness types are being proposed: hemispherical sectors, truncated cones, and full and truncated pyramids. There are no validated heat-transfer and friction-factor prediction methods for these different roughness shapes that can be used in the transition and fully rough regions. This program calculates the Nusselt number and friction factor values for a broad range of three-dimensional roughness types such as hemispherical sectors, truncated cones, and full and truncated pyramids. Users of this program are heat-exchanger designers, enhanced tubing suppliers, and research organizations or academia who are developing or validating prediction methods.

  4. Effect of uncontrolled factors in a validated liquid chromatography-tandem mass spectrometry method question its use as a reference method for marine toxins: major causes for concern.

    PubMed

    Otero, Paz; Alfonso, Amparo; Alfonso, Carmen; Rodríguez, Paula; Vieytes, Mercedes R; Botana, Luis M

    2011-08-01

    Chromatographic techniques coupled to mass spectrometry are the method of choice to replace the mouse bioassay (MBA) for detecting marine toxins. This paper evaluates the influence of different parameters, such as toxin solvents, the mass spectrometric detection method, mobile-phase solvent brands and equipment, on okadaic acid (OA), dinophysistoxin-1 (DTX-1), and dinophysistoxin-2 (DTX-2) quantification. In addition, the study compares the results obtained when a toxin is quantified against its own calibration curve and against the calibration curves of the other analogues. The experiments were performed by liquid chromatography (LC) and ultraperformance liquid chromatography (UPLC) with tandem mass spectrometry detection (MS/MS). Three acetonitrile brands and two toxin solvents were employed, and three mass spectrometry detection methods were checked. One method, containing the transitions for azaspiracid-1 (AZA-1), azaspiracid-2 (AZA-2), azaspiracid-3 (AZA-3), gymnodimine (GYM), 13-desmethyl spirolide C (SPX-1), pectenotoxin-2 (PTX-2), OA, DTX-1, DTX-2, yessotoxin (YTX), homoYTX, and 45-OH-YTX, was compared on both instruments. This method operated in simultaneous positive and negative ionization mode. The other two methods operated only in negative ionization mode: one contains the transitions to detect DTX-1, OA, DTX-2, YTX, homoYTX, and 45-OH-YTX, and the other only the transitions for the toxins under study, OA, DTX-1, and DTX-2. Depending on the equipment and mobile phase used, the amount of toxin quantified can be overestimated or underestimated by up to 44% for OA, 46% for DTX-1, and 48% for DTX-2. In addition, when a toxin is quantified using the calibration curve of another analogue, the toxin amount obtained is different. The maximum variability was obtained when DTX-2 was quantified using either an OA or a DTX-1 calibration curve. In this case, the overestimation was up to 88% using the OA calibration curve and up to 204% using the DTX-1 calibration curve. In
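
    The cross-calibration effect the authors quantify can be illustrated with a simple linear-calibration example: the same peak area evaluated against the calibration curves of different analogues yields different concentrations whenever the response factors differ. The slopes, intercepts and peak area below are invented numbers, not the paper's data.

      # Small illustration of cross-calibration bias with invented calibration curves.
      curves = {"OA": (3000.0, 150.0), "DTX-1": (1800.0, 120.0), "DTX-2": (4100.0, 140.0)}
      peak_area = 8300.0   # measured area of a DTX-2 peak (assumed)

      for analogue, (slope, intercept) in curves.items():
          conc = (peak_area - intercept) / slope   # peak area = slope*conc + intercept
          print(f"quantified against {analogue}: {conc:.2f} ng/mL")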

  5. Deriving grassland management factors for a carbon accounting method developed by the Intergovernmental Panel on Climate Change.

    PubMed

    Ogle, Stephen M; Conant, Richard T; Paustian, Keith

    2004-04-01

    Grassland management affects soil organic carbon (SOC) storage and can be used to mitigate greenhouse gas emissions. However, for a country to assess emission reductions due to grassland management, there must be an inventory method for estimating the change in SOC storage. The Intergovernmental Panel on Climate Change (IPCC) has developed a simple carbon accounting approach for this purpose, and here we derive new grassland management factors that represent the effect of changing management on carbon storage for this method. Our literature search identified 49 studies dealing with effects of management practices that either degraded or improved conditions relative to nominally managed grasslands. On average, degradation reduced SOC storage to 95% +/- 0.06 and 97% +/- 0.05 of carbon stored under nominal conditions in temperate and tropical regions, respectively. In contrast, improving grasslands with a single management activity enhanced SOC storage by 14% +/- 0.06 and 17% +/- 0.05 in temperate and tropical regions, respectively, and with an additional improvement(s), storage increased by another 11% +/- 0.04. We applied the newly derived factor coefficients to analyze C sequestration potential for managed grasslands in the U.S., and found that over a 20-year period changing management could sequester from 5 to 142 Tg C yr(-1) or 0.1 to 0.9 Mg C ha(-1) yr(-1), depending on the level of change. This analysis provides revised factor coefficients for the IPCC method that can be used to estimate impacts of management; it also provides a methodological framework for countries to derive factor coefficients specific to conditions in their region. PMID:15453401
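
    A back-of-envelope version of the IPCC stock-change calculation the abstract refers to is shown below; the reference SOC stock is an assumed value, and only the temperate "improved" coefficient reported above is used.

      # Back-of-envelope sketch of an IPCC stock-change style calculation: the SOC
      # stock under a given management is the reference stock times a management
      # factor, and the annual change is spread over the default 20-year period.
      soc_reference = 60.0      # t C/ha, assumed reference SOC stock (0-30 cm)
      factor_nominal = 1.00
      factor_improved = 1.14    # improved grassland, temperate (from the abstract)
      years = 20.0              # IPCC default inventory period

      delta_per_ha = soc_reference * (factor_improved - factor_nominal) / years
      print(f"Sequestration rate: {delta_per_ha:.2f} t C/ha/yr")  # ~0.4 t C/ha/yr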

  6. DIAGNOSIS-GUIDED METHOD FOR IDENTIFYING MULTI-MODALITY NEUROIMAGING BIOMARKERS ASSOCIATED WITH GENETIC RISK FACTORS IN ALZHEIMER'S DISEASE.

    PubMed

    Hao, Xiaoke; Yan, Jingwen; Yao, Xiaohui; Risacher, Shannon L; Saykin, Andrew J; Zhang, Daoqiang; Shen, Li

    2016-01-01

    Many recent imaging genetic studies focus on detecting the associations between genetic markers such as single nucleotide polymorphisms (SNPs) and quantitative traits (QTs). Although there exist a large number of generalized multivariate regression analysis methods, few of them have used diagnosis information in subjects to enhance the analysis performance. In addition, few models have investigated the identification of multi-modality phenotypic patterns associated with interesting genotype groups in traditional methods. To reveal disease-relevant imaging genetic associations, we propose a novel diagnosis-guided multi-modality (DGMM) framework to discover multi-modality imaging QTs that are associated with both Alzheimer's disease (AD) and its top genetic risk factor (i.e., APOE SNP rs429358). The strength of our proposed method is that it explicitly models the prior diagnosis information among subjects in the objective function for selecting the disease-relevant and robust multi-modality QTs associated with the SNP. We evaluate our method on two modalities of imaging phenotypes, i.e., those extracted from structural magnetic resonance imaging (MRI) data and fluorodeoxyglucose positron emission tomography (FDG-PET) data in the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The experimental results demonstrate that our proposed method not only achieves better performance under the metrics of root mean squared error and correlation coefficient but also identifies common informative regions of interest (ROIs) across multiple modalities to guide the disease-induced biological interpretation, compared with other reference methods.

  7. Influence of additives on the increase of the heating value of Bayah’s coal with upgrading brown coal (UBC) method

    SciTech Connect

    Heriyanto, Heri; Widya Ernayati, K.; Umam, Chairul; Margareta, Nita

    2015-12-29

    UBC (upgrading brown coal) is a method of improving the quality of coal by using oil as an additive. Through processing in the oil medium, not only does the calorific value increase, but the product coal also becomes water repellent and less prone to spontaneous combustion. The results showed that the water content of natural Bayah coal decreased by up to 69% and the calorific value increased by up to 21.2%. The increase in calorific value and the reduction in water content are caused by oil replacing the water molecules and sealing the pores of the coal, and by carbon atoms from the oil binding to the coal and increasing its carbon percentage. With waste lubricant as the additive, the produced coal showed a calorific value increase of up to 23.8% and a moisture content reduction of up to 69.45%.

  8. Enhancement on wettability and intermetallic compound formation with an addition of Al on Sn-0.7Cu lead-free solder fabricated via powder metallurgy method

    NASA Astrophysics Data System (ADS)

    Adli, Nisrin; Razak, Nurul Razliana Abdul; Saud, Norainiza

    2016-07-01

    Due to the toxicity of lead (Pb), the exploration of alternative lead-free solders is necessary. Nowadays, SnCu alloys are being established as one of the lead-free solder alternatives. In this study, Sn-0.7Cu lead-free solder with additions of 1 wt% and 5 wt% Al was investigated using a powder metallurgy method. The effects of Al addition on the wettability and intermetallic compound (IMC) thickness of the Sn-0.7Cu-Al lead-free solder were assessed. Results showed that Al has a high potential to enhance Sn-0.7Cu lead-free solder owing to its good wetting and the reduction of IMC thickness. The contact angle and IMC thickness of the Sn-0.7Cu-Al lead-free solder decreased by 14.32% and 40%, respectively, as the Al content increased from 1 wt% to 5 wt%.

  9. Iodine speciation in coastal and inland bathing waters and seaweeds extracts using a sequential injection standard addition flow-batch method.

    PubMed

    Santos, Inês C; Mesquita, Raquel B R; Bordalo, Adriano A; Rangel, António O S S

    2015-02-01

    The present work describes the development of a sequential injection standard addition method for iodine speciation in bathing waters and seaweed extracts without prior sample treatment. Iodine speciation was obtained by assessing the iodide and iodate content, the two inorganic forms of iodine in waters. For the determination of iodide, an iodide ion selective electrode (ISE) was used. The indirect determination of iodate was based on the spectrophotometric determination of nitrite (Griess reaction). For the iodate measurement, a mixing chamber was employed (flow-batch approach) to exploit its inherently efficient mixing, which is essential for the indirect determination of iodate. The application of the standard addition method enabled detection limits of 0.14 µM for iodide and 0.02 µM for iodate, together with the direct introduction of the target water samples, coastal and inland bathing waters. The results obtained were in agreement with those obtained by ICP-MS and a colorimetric reference procedure. Recovery tests also confirmed the accuracy of the developed method, which was effectively applied to bathing waters and seaweed extracts.
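
    The standard-addition evaluation underlying the flow-batch method can be sketched as a simple regression whose x-intercept gives the sample concentration. The added-iodate levels and absorbances below are invented, and the 540 nm reading for the Griess product is an assumption.

      # Hedged sketch of the standard-addition evaluation: regress the detector
      # response on the in-line added concentration and read the sample content
      # from the x-intercept. Data values are invented for illustration.
      import numpy as np

      added_iodate = np.array([0.0, 0.1, 0.2, 0.4])        # uM added in the flow system
      absorbance = np.array([0.085, 0.122, 0.158, 0.231])  # Griess product at 540 nm (assumed)

      slope, intercept = np.polyfit(added_iodate, absorbance, 1)
      print("Iodate in sample:", intercept / slope, "uM")  # magnitude of the x-intercept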

  10. An Integration Factor Method for Stochastic and Stiff Reaction-Diffusion Systems

    PubMed Central

    Ta, Catherine; Wang, Dongyong; Nie, Qing

    2015-01-01

    Stochastic effects are often present in the biochemical systems involving reactions and diffusions. When the reactions are stiff, existing numerical methods for stochastic reaction diffusion equations require either very small time steps for any explicit schemes or solving large nonlinear systems at each time step for the implicit schemes. Here we present a class of semi-implicit integration factor methods that treat the diffusion term exactly and reaction implicitly for a system of stochastic reaction-diffusion equations. Our linear stability analysis shows the advantage of such methods for both small and large amplitudes of noise. Direct use of the method to solving several linear and nonlinear stochastic reaction-diffusion equations demonstrates good accuracy, efficiency, and stability properties. This new class of methods, which are easy to implement, will have broader applications in solving stochastic reaction-diffusion equations arising from models in biology and physical sciences. PMID:25983341
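
    To illustrate the integration factor idea in its simplest deterministic form, the sketch below performs first-order implicit integration factor (IIF) steps for a 1-D reaction-diffusion equation with a logistic reaction term: diffusion is propagated exactly with a matrix exponential, and the local reaction is solved implicitly with a few Newton iterations. The grid, parameters and reaction term are assumptions, and the stochastic forcing treated in the paper is omitted.

      # Deterministic toy sketch of a first-order IIF step for u_t = D u_xx + u(1-u).
      import numpy as np
      from scipy.linalg import expm

      N, L, D, dt, steps = 64, 1.0, 0.01, 0.05, 40
      dx = L / N
      # Periodic 1-D Laplacian and its exact propagator exp(A*dt).
      A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
           + np.diag(np.ones(N - 1), -1))
      A[0, -1] = A[-1, 0] = 1.0
      E = expm(D * dt / dx**2 * A)

      u = 0.5 + 0.1 * np.sin(2 * np.pi * np.arange(N) * dx / L)
      for _ in range(steps):
          rhs = E @ u                      # exact diffusion part
          v = rhs.copy()                   # implicit reaction: v - dt*v*(1-v) = rhs
          for _ in range(5):               # pointwise Newton iterations
              g = v - dt * v * (1.0 - v) - rhs
              dg = 1.0 - dt * (1.0 - 2.0 * v)
              v -= g / dg
          u = v
      print("mean u after integration:", u.mean())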

  11. An integration factor method for stochastic and stiff reaction–diffusion systems

    SciTech Connect

    Ta, Catherine; Wang, Dongyong; Nie, Qing

    2015-08-15

    Stochastic effects are often present in the biochemical systems involving reactions and diffusions. When the reactions are stiff, existing numerical methods for stochastic reaction diffusion equations require either very small time steps for any explicit schemes or solving large nonlinear systems at each time step for the implicit schemes. Here we present a class of semi-implicit integration factor methods that treat the diffusion term exactly and reaction implicitly for a system of stochastic reaction–diffusion equations. Our linear stability analysis shows the advantage of such methods for both small and large amplitudes of noise. Direct use of the method to solving several linear and nonlinear stochastic reaction–diffusion equations demonstrates good accuracy, efficiency, and stability properties. This new class of methods, which are easy to implement, will have broader applications in solving stochastic reaction–diffusion equations arising from models in biology and physical sciences.

  12. An integration factor method for stochastic and stiff reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Ta, Catherine; Wang, Dongyong; Nie, Qing

    2015-08-01

    Stochastic effects are often present in the biochemical systems involving reactions and diffusions. When the reactions are stiff, existing numerical methods for stochastic reaction diffusion equations require either very small time steps for any explicit schemes or solving large nonlinear systems at each time step for the implicit schemes. Here we present a class of semi-implicit integration factor methods that treat the diffusion term exactly and reaction implicitly for a system of stochastic reaction-diffusion equations. Our linear stability analysis shows the advantage of such methods for both small and large amplitudes of noise. Direct use of the method to solving several linear and nonlinear stochastic reaction-diffusion equations demonstrates good accuracy, efficiency, and stability properties. This new class of methods, which are easy to implement, will have broader applications in solving stochastic reaction-diffusion equations arising from models in biology and physical sciences.

  13. An efficient method of measuring the 4 mm helmet output factor for the Gamma Knife

    NASA Astrophysics Data System (ADS)

    Ma, Lijun; Li, X. Allen; Yu, Cedric X.

    2000-03-01

    It is essential to have accurate measurements of the 4 mm helmet output factor in the treatment of trigeminal neuralgia patients using the Gamma Knife. Because of the small collimator size and the sharp dose gradient at the beam focus, this measurement is generally tedious and difficult. We have developed an efficient method of measuring the 4 mm helmet output factor using regular radiographic films. The helmet output factor was measured by exposing a single Kodak XV film in the standard Leksell spherical phantom using the 18 mm helmet with 30-40 of its plug collimators replaced by the 4 mm plug collimators. The 4 mm helmet output factor was measured to be 0.876 ± 0.009. This is in excellent agreement with our EGS4 Monte Carlo simulated value of 0.876 ± 0.005. This helmet output factor value also agrees with more tedious TLD, diode and radiochromic film measurements that were each obtained using two separate measurements with the 18 mm helmet and the 4 mm helmet respectively. The 4 mm helmet output factor measured by the diode was 0.884 ± 0.016, and the TLD measurement was 0.890 ± 0.020. The radiochromic film measured value was 0.870 ± 0.018. Because a single-exposure measurement was performed instead of a double-exposure measurement, most of the systematic errors that appeared in the double-exposure measurements due to experimental setup variations were cancelled out. Consequently, the 4 mm helmet output factor is more precisely determined by the single-exposure approach. Therefore, routine measurement and quality assurance of the 4 mm helmet output factor of the Gamma Knife could be efficiently carried out using the proposed single-exposure technique.

  14. Narrow-duct-streaming calculations using generalized configuration factors with the discrete ordinates method

    SciTech Connect

    Brockmann, H.

    1999-05-01

    In calculating neutral particle transport through elongated voids with the discrete ordinates method, the problem of ray effect may occur if standard angular quadrature sets are used. To mitigate this ray effect, the configuration-factor concept developed in the theory of thermal radiation for calculating the radiation exchange among surfaces is applied here. The common configuration-factor concept is extended in such a way that the angular dependence of the radiation emitted from the surfaces can be considered. The method is applied to regular and annular cylinders with r-z geometry and incorporated into a two-dimensional discrete ordinates transport code. Calculations on a narrow-duct-streaming problem show that the ray effect is strongly reduced by this method. The new method gives results equivalent to or even better than a standard discrete ordinates calculation using a biased angular quadrature set with 166 directions at computing times for one inner iteration that are about a factor of 2 less.

  15. Semi-implicit Integration Factor Methods on Sparse Grids for High-Dimensional Systems

    PubMed Central

    Wang, Dongyong; Chen, Weitao; Nie, Qing

    2015-01-01

    Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Though the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling challenges such as those associated with spatial discretization, the stability conditions on time step size due to temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we incorporate the sparse grids with the implicit integration factor method (IIF) that is advantageous in terms of stability conditions for systems containing stiff reactions and diffusions. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications of diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method. PMID:25897178

  16. Comparison of Seven Methods for Boolean Factor Analysis and Their Evaluation by Information Gain.

    PubMed

    Frolov, Alexander A; Húsek, Dušan; Polyakov, Pavel Yu

    2016-03-01

    A common task in large data set analysis is searching for an appropriate data representation in a space of fewer dimensions. One of the most efficient methods to solve this task is factor analysis. In this paper, we compare seven methods for Boolean factor analysis (BFA) in solving the so-called bars problem (BP), which is a BFA benchmark. The performance of the methods is evaluated by means of information gain. Study of the results obtained in solving BPs of different levels of complexity has allowed us to reveal the strengths and weaknesses of these methods. It is shown that the Likelihood maximization Attractor Neural Network with Increasing Activity (LANNIA) is the most efficient BFA method for solving the BP in many cases. The efficacy of the LANNIA method is also shown when applied to real data from the Kyoto Encyclopedia of Genes and Genomes database, which contains full genome sequencing for 1368 organisms, and to the text data set R52 (from Reuters 21578), typically used for label categorization.

  17. Using a fuzzy DEMATEL method for analyzing the factors influencing subcontractors selection

    NASA Astrophysics Data System (ADS)

    Kozik, Renata

    2016-06-01

    Subcontracting is a long-standing practice in the construction industry. This form of project organization, if managed properly, can provide better quality and reductions in project time and costs. Subcontractor selection is a multi-criteria problem and can be determined by many factors. Identifying the importance of each of them, as well as the direction of cause-effect relations between the various types of factors, can improve the management process. Their values can be evaluated on the basis of available expert opinions with the application of a fuzzy multi-stage grading scale. In this paper, the fuzzy DEMATEL method is recommended for analyzing the relationships between the factors affecting subcontractor selection.
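
    For reference, the crisp DEMATEL computation that the fuzzy variant builds on (after aggregating and defuzzifying the expert ratings into a direct-relation matrix) can be written in a few lines; the factor names and direct-influence scores below are invented.

      # Minimal crisp DEMATEL sketch: normalise the direct-relation matrix, compute
      # the total-relation matrix T = N(I - N)^-1, then prominence and net effect.
      import numpy as np

      factors = ["price", "experience", "quality", "schedule"]
      # entry (i, j) = how strongly factor i influences factor j (invented scores)
      D = np.array([
          [0, 3, 2, 1],
          [2, 0, 3, 2],
          [1, 2, 0, 3],
          [1, 1, 2, 0],
      ], dtype=float)

      N = D / max(D.sum(axis=1).max(), D.sum(axis=0).max())   # normalisation
      T = N @ np.linalg.inv(np.eye(len(factors)) - N)          # total-relation matrix

      R, C = T.sum(axis=1), T.sum(axis=0)
      for name, prominence, relation in zip(factors, R + C, R - C):
          role = "cause" if relation > 0 else "effect"
          print(f"{name}: prominence={prominence:.2f}, net effect={relation:+.2f} ({role})")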

  18. Effects of buffer additives and thermal processing methods on the solubility of shrimp (Penaeus monodon) proteins and the immunoreactivity of its major allergen.

    PubMed

    Lasekan, Adeseye O; Nayak, Balunkeswar

    2016-06-01

    This study examines the potential of two buffer additives (Tween 20 and DTT) to improve the solubility of proteins from shrimp subjected to different heat treatments, and the allergenicity of tropomyosin in the resulting extracts. The concentration of soluble proteins extracted by all the buffers from processed shrimp was significantly reduced compared with untreated samples. The concentration of total soluble proteins from heat-treated shrimp increased significantly when phosphate buffer containing both surfactant and reducing agent was used as the extraction buffer. However, the concentrations of heat-stable proteins in the buffers were mostly similar. The electrophoretic profile of extracted proteins showed that tropomyosin is very stable under the different heat treatment methods used in this study, except for high-pressure steaming, where the intensity of the tropomyosin band was reduced. Competitive inhibition ELISA showed that high-pressure steaming reduced the allergenicity of tropomyosin compared with the other heat treatment methods.

  19. Determination of Slope Safety Factor with Analytical Solution and Searching Critical Slip Surface with Genetic-Traversal Random Method

    PubMed Central

    2014-01-01

    In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one of the search methods for the critical slip surface is the Genetic Algorithm (GA), while the method used to calculate the slope safety factor is Fellenius' slices method. However, GA needs to be validated with more numerical tests, while Fellenius' slices method is just an approximate method, like the finite element method. This paper proposes a new approach to determining the minimum slope safety factor: the safety factor is determined with an analytical solution, and the critical slip surface is found with a Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' slices method. The Genetic-Traversal Random Method uses random picking to realize mutation. A computer-automated search program is developed for the Genetic-Traversal Random Method. After comparison with other methods, such as the slope/w software, the results indicate that the Genetic-Traversal Random Search Method can give a very low safety factor, about half of that given by the other methods. However, the minimum safety factor obtained with the Genetic-Traversal Random Search Method is very close to the lower-bound solutions of the slope safety factor given by the Ansys software. PMID:24782679
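
    For reference, a standard formula from the slope-stability literature (not quoted from this paper): the Fellenius (ordinary) method of slices estimates the safety factor of a trial circular slip surface, neglecting pore pressure, as

```latex
F_s \;=\; \frac{\sum_{i=1}^{n} \left( c_i\,\Delta l_i + W_i \cos\alpha_i \tan\phi_i \right)}
               {\sum_{i=1}^{n} W_i \sin\alpha_i },
```

    where, for slice i, c_i is the cohesion, phi_i the friction angle, Delta l_i the base length, W_i the weight, and alpha_i the inclination of the slice base; the critical slip surface is the trial circle that minimizes F_s.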

  20. Effect of additive gases and injection methods on chemical dry etching of silicon nitride, silicon oxynitride, and silicon oxide layers in F{sub 2} remote plasmas

    SciTech Connect

    Yun, Y. B.; Park, S. M.; Kim, D. J.; Lee, N.-E.; Kim, K. S.; Bae, G. H.

    2007-07-15

    The authors investigated the effects of various additive gases and different injection methods on the chemical dry etching of silicon nitride, silicon oxynitride, and silicon oxide layers in F{sub 2} remote plasmas. N{sub 2} and N{sub 2}+O{sub 2} gases in the F{sub 2}/Ar/N{sub 2} and F{sub 2}/Ar/N{sub 2}/O{sub 2} remote plasmas effectively increased the etch rate of the layers. The addition of direct-injected NO gas increased the etch rates most significantly. NO radicals generated by the addition of N{sub 2} and N{sub 2}+O{sub 2} or direct-injected NO molecules contributed to the effective removal of nitrogen and oxygen in the silicon nitride and oxide layers, by forming N{sub 2}O and NO{sub 2} by-products, respectively, and thereby enhancing SiF{sub 4} formation. As a result of the effective removal of the oxygen, nitrogen, and silicon atoms in the layers, the chemical dry etch rates were enhanced significantly. The process regime for the etch rate enhancement of the layers was extended at elevated temperature.

  1. Replace-approximation method for ambiguous solutions in factor analysis of ultrasonic hepatic perfusion

    NASA Astrophysics Data System (ADS)

    Zhang, Ji; Ding, Mingyue; Yuchi, Ming; Hou, Wenguang; Ye, Huashan; Qiu, Wu

    2010-03-01

    Factor analysis is an efficient technique for the analysis of dynamic structures in medical image sequences and has recently been used in contrast-enhanced ultrasound (CEUS) of hepatic perfusion. Time-intensity curves (TICs) extracted by factor analysis can provide much more diagnostic information for radiologists and improve the diagnostic rate of focal liver lesions (FLLs). However, one of the major drawbacks of factor analysis of dynamic structures (FADS) is the nonuniqueness of the result when only the non-negativity criterion is used. In this paper, we propose a new replace-approximation method based on apex-seeking for ambiguous FADS solutions. Due to the partial overlap of different structures, factor curves are assumed to be approximately replaceable by curves existing in the medical image sequences. Therefore, how to find optimal curves is the key point of the technique. No matter how many structures are assumed, our method always starts to seek apexes from a one-dimensional space onto which the original high-dimensional data are mapped. By finding two stable apexes in the one-dimensional space, the method can ascertain the third one. The process can be continued until all structures are found. This technique was tested on two phantoms of blood perfusion and compared to two variants of the apex-seeking method. The results showed that the technique outperformed the two variants in comparisons of region-of-interest measurements from the phantom data. It can be applied to the estimation of TICs derived from CEUS images and the separation of different physiological regions in hepatic perfusion.

  2. Lichenoid Reactions in Association with Tumor Necrosis Factor Alpha Inhibitors: A Review of the Literature and Addition of a Fourth Lichenoid Reaction.

    PubMed

    McCarty, Morgan; Basile, Amy; Bair, Brooke; Fivenson, David

    2015-06-01

    In this manuscript, the authors present the clinical case of a patient treated with adalimumab for Behcet's disease who developed lichen planopilaris. A variety of mucocutaneous lichenoid eruptions have recently been described in association with tumor necrosis factor alpha inhibitors. The authors briefly discuss the clinical and pathological presentation of lichen planopilaris as well as a potential pathogenesis of the cutaneous adverse effects seen as a result of tumor necrosis factor alpha inhibitor therapy. They review all case reports of lichen planopilaris occurring on tumor necrosis factor alpha inhibitors and suggest its classification as a fourth recognized pattern on this therapy.

  3. A Method for the Study of Human Factors in Aircraft Operations

    NASA Technical Reports Server (NTRS)

    Barnhart, W.; Billings, C.; Cooper, G.; Gilstrap, R.; Lauber, J.; Orlady, H.; Puskas, B.; Stephens, W.

    1975-01-01

    A method for the study of human factors in the aviation environment is described. A conceptual framework is provided within which pilot and other human errors in aircraft operations may be studied with the intent of finding out how, and why, they occurred. An information processing model of human behavior serves as the basis for the acquisition and interpretation of information relating to occurrences which involve human error. A systematic method of collecting such data is presented and discussed. The classification of the data is outlined.

  4. Linear ordinary differential equations with constant coefficients. Revisiting the impulsive response method using factorization

    NASA Astrophysics Data System (ADS)

    Camporesi, Roberto

    2011-06-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary: we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as the other more advanced approaches: the Laplace transform, linear systems, the general theory of linear equations with variable coefficients, and the variation of constants method. The approach presented here can be used in a first course on differential equations for science and engineering majors.
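
    As a brief worked illustration of the factorization idea (our example, not taken from the paper):

```latex
% Illustrative example: y'' - 3y' + 2y = f(t). Factor the operator:
%   D^2 - 3D + 2 = (D - 1)(D - 2).
% The impulsive response h solves the homogeneous equation with h(0) = 0, h'(0) = 1:
h(t) = e^{2t} - e^{t},
% and a particular solution follows by convolution with the forcing term:
y_p(t) = \int_0^t h(t-s)\, f(s)\, \mathrm{d}s
       = \int_0^t \left( e^{2(t-s)} - e^{t-s} \right) f(s)\, \mathrm{d}s .
```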

  5. Factorization and reduction methods for optimal control of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Powers, R. K.

    1985-01-01

    A Chandrasekhar-type factorization method is applied to the linear-quadratic optimal control problem for distributed parameter systems. An aeroelastic control problem is used as a model example to demonstrate that if computationally efficient algorithms, such as those of Chandrasekhar-type, are combined with the special structure often available to a particular problem, then an abstract approximation theory developed for distributed parameter control theory becomes a viable method of solution. A numerical scheme based on averaging approximations is applied to hereditary control problems. Numerical examples are given.

  6. The unit cost factors and calculation methods for decommissioning - Cost estimation of nuclear research facilities

    SciTech Connect

    Kwan-Seong Jeong; Dong-Gyu Lee; Chong-Hun Jung; Kune-Woo Lee

    2007-07-01

    Available in abstract form only. Full text of publication follows: The uncertainties of decommissioning costs are high due to several conditions. Decommissioning cost estimation depends on the complexity of the nuclear installations and on their site-specific physical and radiological inventories. Therefore, the decommissioning costs of nuclear research facilities must be estimated in accordance with the detailed sub-tasks and resources of each decommissioning activity. By selecting the classified activities and resources, costs are calculated item by item, and the total costs of all decommissioning activities are then reorganized to match their usage and objectives. The decommissioning cost of nuclear research facilities is calculated by applying a unit cost factor method, which is based on a classification of decommissioning works fitted to the features and specifications of the decommissioning objects and on the establishment of composition factors. Decommissioning costs of nuclear research facilities are composed of labor costs, equipment costs, and materials costs. Of these three categorical costs, the calculation of labor costs is very important because decommissioning activities mainly depend on the labor force. Labor costs in decommissioning activities are calculated on the basis of the working time consumed on the decommissioning objects and works. The working times are derived from unit cost factors and work difficulty factors. Finally, labor costs are computed by using these factors as parameters of the calculation. The accuracy of the resulting decommissioning cost estimates is much higher when compared against real decommissioning works. (authors)
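
    A minimal sketch of the labor-cost roll-up described above (working time from unit cost factors and difficulty factors, labor cost from working time and rates); the task list, factors, and rates are hypothetical placeholders, not actual facility data.

```python
# Hedged sketch of the unit-cost-factor labor calculation described above:
# working time = quantity x unit time factor x work difficulty factor,
# labor cost = working time x labor rate. All numbers are hypothetical.
tasks = [
    # (name, quantity, unit_time_h_per_unit, difficulty_factor, rate_per_h)
    ("dismantle piping", 120.0, 0.8, 1.4, 55.0),
    ("decontaminate surfaces", 300.0, 0.3, 1.2, 48.0),
]

labor_cost = 0.0
for name, qty, unit_time, difficulty, rate in tasks:
    working_time = qty * unit_time * difficulty    # hours for this task
    labor_cost += working_time * rate
print(f"total labor cost: {labor_cost:,.0f}")
```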

  7. A screening method based on UV-Visible spectroscopy and multivariate analysis to assess addition of filler juices and water to pomegranate juices.

    PubMed

    Boggia, Raffaella; Casolino, Maria Chiara; Hysenaj, Vilma; Oliveri, Paolo; Zunin, Paola

    2013-10-15

    Consumer demand for pomegranate juice has grown considerably in recent years because of its potential health benefits. Since it is an expensive functional food, the addition of cheaper fruit juices (e.g., grape and apple juices), simple dilution with water, or the subtraction of polyphenols are deceptively used. At present, time-consuming analyses are used to control the quality of this product. Furthermore, these analyses are expensive and require well-trained analysts. Thus, the purpose of this study was to propose a high-speed and easy-to-use shortcut. Based on UV-VIS spectroscopy and chemometrics, a screening method is proposed to quickly screen for some common fillers of pomegranate juice that can decrease the antiradical scavenging capacity of pure products. The analytical method was applied to laboratory-prepared juices, to commercial juices, and to representative experimental mixtures with different levels of water and filler juices. The outcomes were evaluated by means of multivariate exploratory analysis. The results indicate that the proposed strategy can be a useful screening tool to assess the addition of filler juices and water to pomegranate juices. PMID:23692760
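
    A hedged sketch of the kind of multivariate exploratory analysis described, here using PCA on a matrix of spectra; the data shapes, values, and class labels are assumptions for illustration only.

```python
# Hedged sketch: exploratory analysis of UV-Vis spectra by PCA, in the spirit
# of the multivariate screening described above. X is assumed to hold one
# spectrum per row (samples x wavelengths); data and labels are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.random((40, 300))                  # placeholder spectra (40 samples)
labels = np.array(["pure"] * 20 + ["adulterated"] * 20)

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
# Plotting scores[:, 0] vs. scores[:, 1], colored by label, shows whether
# diluted or filler-juice samples separate from pure pomegranate juices.
```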

  8. Sensitive and cost-effective LC-MS/MS method for quantitation of CVT-6883 in human urine using sodium dodecylbenzenesulfonate additive to eliminate adsorptive losses.

    PubMed

    Chen, Chungwen; Bajpai, Lakshmikant; Mollova, Nevena; Leung, Kwan

    2009-04-01

    CVT-6883, a novel selective A(2B) adenosine receptor antagonist currently under clinical development, is highly lipophilic and exhibits high affinity for non-specific binding to container surfaces, resulting in very low recovery in urine assays. Our study showed the use of sodium dodecylbenzenesulfonate (SDBS), a low-cost additive, eliminated non-specific binding problems in the analysis of CVT-6883 in human urine without compromising sensitivity. A new sensitive and selective LC-MS/MS method for quantitation of CVT-6883 in the range of 0.200-80.0ng/mL using SDBS additive was therefore developed and validated for the analysis of human urine samples. The recoveries during sample collection, handling and extraction for the analyte and internal standard (d(5)-CVT-6883) were higher than 87%. CVT-6883 was found stable under the following conditions: in extract - at ambient temperature for 3 days, under refrigeration (5 degrees C) for 6 days; in human urine (containing 4mM SDBS) - after three freeze/thaw cycles, at ambient temperature for 26h, under refrigeration (5 degrees C) for 94h, and in a freezer set to -20 degrees C for at least 2 months. The results demonstrated that the validated method is sufficiently sensitive, specific, and cost-effective for the analysis of CVT-6883 in human urine and will provide a powerful tool to support the clinical programs for CVT-6883.

  9. Food additives and preschool children.

    PubMed

    Martyn, Danika M; McNulty, Breige A; Nugent, Anne P; Gibney, Michael J

    2013-02-01

    Food additives have been used throughout history to perform specific functions in foods. A comprehensive framework of legislation is in place within Europe to control the use of additives in the food supply and ensure they pose no risk to human health. Further to this, exposure assessments are regularly carried out to monitor population intakes and verify that intakes are not above acceptable levels (acceptable daily intakes). Young children may have a higher dietary exposure to chemicals than adults due to a combination of rapid growth rates and distinct food intake patterns. For this reason, exposure assessments are particularly important in this age group. The paper will review the use of additives and exposure assessment methods and examine factors that affect dietary exposure by young children. Among the most widely investigated unfavourable health effects associated with food additive intake in preschool-aged children are suggested adverse behavioural effects. Research that has examined this relationship has reported a variety of responses, with many noting an increase in hyperactivity as reported by parents but not when assessed using objective examiners. This review has examined the experimental approaches used in such studies and suggests that efforts are needed to standardise objective methods of measuring behaviour in preschool children. Further to this, a more holistic approach to examining food additive intakes by preschool children is advisable, where overall exposure is considered rather than focusing solely on behavioural effects and possibly examining intakes of food additives other than food colours.

  10. Through-electromigration: a new method of investigating pore connectivity and obtaining formation factors.

    PubMed

    Löfgren, Martin; Neretnieks, Ivars

    2006-10-10

    The retardation of radionuclides and other contaminants in fractured crystalline rock is strongly associated with the diffusive properties of the rock matrix. At present, the scientific community is divided concerning the question of long-range pore connectivity in intrusive igneous rock. This paper presents a fast new method, called the through-electromigration method, of obtaining formation factors and investigating pore connectivity. The method involves the migration of an ionic tracer through a rock sample with an electrical potential gradient as the main driving force. The method is analogous to the through-diffusion method but the experimental time is reduced by orders of magnitude. This enables investigations of pore connectivity, as measurements can be made on longer samples. In a preliminary investigation, the new method is compared to the traditional through-diffusion method as well as to rock resistivity methods. The diffusive properties of nine granitic rock samples from Laxemar in Sweden, ranging from 15 to 121 mm in length, have been investigated and the results are compared.
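
    For context, the formation factor discussed here is commonly defined in this literature as the ratio of the effective diffusivity in the rock matrix to the diffusivity in free pore water; assuming negligible surface conduction, it can equivalently be obtained from conductivity (resistivity) measurements:

```latex
F_f \;=\; \frac{D_e}{D_w} \;\approx\; \frac{\sigma_{\mathrm{rock}}}{\sigma_{w}},
```

    where D_e is the effective diffusivity of a non-sorbing tracer through the rock, D_w its diffusivity in free water, sigma_rock the electrical conductivity of the water-saturated rock, and sigma_w that of the pore water.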

  11. High levels of acute phase proteins and soluble 70 kDa heat shock proteins are independent and additive risk factors for mortality in colorectal cancer

    PubMed Central

    Kocsis, Judit; Mészáros, Tamás; Madaras, Balázs; Tóth, Éva Katalin; Kamondi, Szilárd; Gál, Péter; Varga, Lilian; Prohászka, Zoltán

    2010-01-01

    Recently, we reported that high soluble Hsp70 (sHsp70) level was a significant predictor of mortality during an almost 3-year-long follow-up period in patients with colorectal cancer. This association was the strongest in the group of <70-year-old female patients as well as in those who were in a less advanced stage of the disease at baseline. According to these observations, measurement of the serum level of sHsp70 is a useful, stage-independent prognostic marker in colorectal cancer, especially in patients without distant metastasis. Since many literature data indicated that measurement of C-reactive protein (CRP) and other acute phase proteins (APPs) may also be suitable for predicting the mortality of patients with colorectal cancer, it seemed reasonable to study whether the effects of sHsp70 and other APPs are related or independent. In order to answer this question, we measured the concentrations of CRP as well as of other complement-related APPs (C1 inhibitor, C3, and C9) along with that of the MASP-2 complement component in the sera of 175 patients with colorectal cancer and known levels of sHsp70, which have been used in our previous study. High (above median) levels of CRP, C1 esterase inhibitor (C1-INH), and sHsp70 were found to be independently associated with poor patient survival, whereas no such association was observed with the other proteins tested. According to the adjusted Cox proportional hazards analysis, the additive effect of high sHsp70, CRP, and C1-INH levels on the survival of patients exceeded that of high sHsp70 alone, with a hazard ratio (HR) of 2.83 (1.13–70.9). In some subgroups of patients, such as in females [HR 4.80 (1.07–21.60)] or in ≤70-year-old patients [HR 11.53 (2.78–47.70)], even greater differences were obtained. These findings indicate that the clinical mortality–prediction value of combined measurements of sHsp70, CRP, and C1-INH with inexpensive methods can be very high, especially in specific subgroups of patients.

  12. Methods for detrending success metrics to account for inflationary and deflationary factors*

    NASA Astrophysics Data System (ADS)

    Petersen, A. M.; Penner, O.; Stanley, H. E.

    2011-01-01

    Time-dependent economic, technological, and social factors can artificially inflate or deflate quantitative measures for career success. Here we develop and test a statistical method for normalizing career success metrics across time dependent factors. In particular, this method addresses the long standing question: how do we compare the career achievements of professional athletes from different historical eras? Developing an objective approach will be of particular importance over the next decade as major league baseball (MLB) players from the "steroids era" become eligible for Hall of Fame induction. Some experts are calling for asterisks (*) to be placed next to the career statistics of athletes found guilty of using performance enhancing drugs (PED). Here we address this issue, as well as the general problem of comparing statistics from distinct eras, by detrending the seasonal statistics of professional baseball players. We detrend player statistics by normalizing achievements to seasonal averages, which accounts for changes in relative player ability resulting from a range of factors. Our methods are general, and can be extended to various arenas of competition where time-dependent factors play a key role. For five statistical categories, we compare the probability density function (pdf) of detrended career statistics to the pdf of raw career statistics calculated for all player careers in the 90-year period 1920-2009. We find that the functional form of these pdfs is stationary under detrending. This stationarity implies that the statistical regularity observed in the right-skewed distributions for longevity and success in professional sports arises from both the wide range of intrinsic talent among athletes and the underlying nature of competition. We fit the pdfs for career success by the Gamma distribution in order to calculate objective benchmarks based on extreme statistics which can be used for the identification of extraordinary careers.
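
    A hedged sketch of the normalization idea described above (dividing each season's statistic by that season's league-wide average); the column names and the rescaling baseline are assumptions for illustration only.

```python
# Hedged sketch: detrend a seasonal statistic by its league-wide seasonal
# average, in the spirit of the method described above. The column names
# ("season", "player", "home_runs") and the baseline are assumptions.
import pandas as pd

df = pd.DataFrame({
    "season":    [1998, 1998, 2009, 2009],
    "player":    ["A", "B", "A", "C"],
    "home_runs": [45, 20, 30, 25],
})

season_mean = df.groupby("season")["home_runs"].transform("mean")
baseline = df["home_runs"].mean()                   # overall (all-era) average
df["home_runs_detrended"] = df["home_runs"] / season_mean * baseline
```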

  13. A novel edge-preserving nonnegative matrix factorization method for spectral unmixing

    NASA Astrophysics Data System (ADS)

    Bao, Wenxing; Ma, Ruishi

    2015-12-01

    Spectral unmixing is one of the key techniques for identifying and classifying materials in hyperspectral image processing. A novel robust spectral unmixing method based on nonnegative matrix factorization (NMF) is presented in this paper. An edge-preserving function is used as the hypersurface cost function for the nonnegative matrix factorization. To minimize the hypersurface cost function, we construct updating rules for the end-member signature matrix and the abundance fractions, respectively; the two are updated alternately. For evaluation purposes, both synthetic and real data are used. The synthetic data are based on end-members from the USGS digital spectral library, and the AVIRIS Cuprite dataset is used as the real data. The spectral angle distance (SAD) and the abundance angle distance (AAD) are used to assess the performance of the proposed method. The experimental results show that this method obtains better results and higher accuracy for spectral unmixing than existing methods.
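
    For background, the sketch below implements only the plain multiplicative-update NMF (Lee-Seung, Euclidean cost) that edge-preserving variants such as the one above modify; it is not the authors' algorithm, and the data matrix is a random placeholder.

```python
# Hedged sketch: baseline multiplicative-update NMF (Euclidean cost). X ~ W @ H
# with W >= 0 (per-pixel abundances) and H >= 0 (end-member spectra); the data
# are random placeholders rather than real hyperspectral pixels.
import numpy as np

def nmf(X, r, n_iter=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # Lee-Seung update for H
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # Lee-Seung update for W
    return W, H

X = np.random.default_rng(1).random((50, 200))   # e.g. 50 pixels x 200 bands
W, H = nmf(X, r=3)
```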

  14. Innovative self-calibration method for accelerometer scale factor of the missile-borne RINS with fiber optic gyro.

    PubMed

    Zhang, Qian; Wang, Lei; Liu, Zengjun; Zhang, Yiming

    2016-09-19

    The calibration of an inertial measurement unit (IMU) is a key technique for improving the precision of a missile's inertial navigation system (INS), especially the calibration of the accelerometer scale factor. The traditional calibration method is generally based on a high-accuracy turntable; however, this is expensive, and the calibration results are not suited to the actual operating environment. In the wake of developments in multi-axis rotational INS (RINS) with optical inertial sensors, self-calibration is utilized as an effective way to calibrate the IMU on a missile, and the calibration results are more accurate in practical application. However, the introduction of multi-axis RINS causes additional calibration errors, including non-orthogonality errors from mechanical processing and non-horizontal errors of the operating environment, which means that the multi-axis gimbals cannot be regarded as a high-accuracy turntable. For this application on missiles, in this paper, after analyzing the relationship between the calibration error of the accelerometer scale factor and the non-orthogonality and non-horizontal angles, an innovative calibration procedure using the signals of the fiber optic gyro and a photoelectric encoder is proposed. The laboratory and vehicle experiment results validate the theory and prove that the proposed method relaxes the orthogonality requirement on the rotation axes and eliminates the strict application conditions of the system. PMID:27661867

  15. Fully automated standard addition method for the quantification of 29 polar pesticide metabolites in different water bodies using LC-MS/MS.

    PubMed

    Kowal, Sebastian; Balsaa, Peter; Werres, Friedrich; Schmidt, Torsten C

    2013-07-01

    A reliable quantification by LC-ESI-MS/MS as the most suitable analytical method for polar substances in the aquatic environment is usually hampered by matrix effects from co-eluting compounds, which are unavoidably present in environmental samples. The standard addition method (SAM) is the most appropriate method to compensate matrix effects. However, when performed manually, this method is too labour- and time-intensive for routine analysis. In the present work, a fully automated SAM using a multi-purpose sample manager "Open Architecture UPLC®-MS/MS" (ultra-performance liquid chromatography tandem mass spectrometry) was developed for the sensitive and reliable determination of 29 polar pesticide metabolites in environmental samples. A four-point SAM was conducted parallel to direct-injection UPLC-ESI-MS/MS determination that was followed by a work flow to calculate the analyte concentrations including monitoring of required quality criteria. Several parameters regarding the SAM, chromatography and mass spectrometry conditions were optimised in order to obtain a fast as well as reliable analytical method. The matrix effects were examined by comparison of the SAM with an external calibration method. The accuracy of the SAM was investigated by recovery tests in samples of different catchment areas. The method detection limit was estimated to be between 1 and 10 ng/L for all metabolites by direct injection of a 10-μL sample. The relative standard deviation values were between 2 and 10% at the end of calibration range (30 ng/L). About 200 samples from different water bodies were examined with this method in the Rhine and Ruhr region of North Rhine-Westphalia (Germany). Approximately 94% of the analysed samples contained measurable amounts of metabolites. For most metabolites, low concentrations ≤0.10 μg/L were determined. Only for three metabolites were the concentrations in ground water significantly higher (up to 20 μg/L). In none of the examined drinking
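
    A hedged sketch of the concentration calculation behind a four-point standard addition: the measured signals are regressed on the spiked concentrations, and the unknown is recovered from the line's intercept and slope; the numbers below are illustrative only.

```python
# Hedged sketch of the four-point standard addition calculation: regress the
# signal on the spiked concentration; the unknown concentration equals the
# magnitude of the x-intercept, i.e. intercept / slope. Numbers are made up.
import numpy as np

spiked = np.array([0.0, 10.0, 20.0, 30.0])            # added analyte, ng/L
signal = np.array([1520.0, 3080.0, 4570.0, 6110.0])   # instrument response

slope, intercept = np.polyfit(spiked, signal, 1)
c0 = intercept / slope          # estimated concentration in the original sample
print(f"estimated concentration: {c0:.1f} ng/L")
```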

  16. Analysis and Modeling of Threatening Factors of Workforce’s Health in Large-Scale Workplaces: Comparison of Four-Fitting Methods to select optimum technique

    PubMed Central

    Mohammadfam, Iraj; Soltanzadeh, Ahmad; Moghimbeigi, Abbas; Savareh, Behrouz Alizadeh

    2016-01-01

    Introduction Workforce is one of the pillars of development in any country. Therefore, the workforce’s health is very important, and analyzing its threatening factors is one of the fundamental steps for health planning. This study was the first part of a comprehensive study aimed at comparing the fitting methods to analyze and model the factors threatening health in occupational injuries. Methods In this study, 980 human occupational injuries in 10 Iranian large-scale workplaces within 10 years (2005–2014) were analyzed and modeled based on the four fitting methods: linear regression, regression analysis, generalized linear model, and artificial neural networks (ANN) using IBM SPSS Modeler 14.2. Results Accident Severity Rate (ASR) of occupational injuries was 557.47 ± 397.87. The results showed that the mean of age and work experience of injured workers were 27.82 ± 5.23 and 4.39 ± 3.65 years, respectively. Analysis of health-threatening factors showed that some factors, including age, quality of provided H&S training, number of workers, hazard identification (HAZID), and periodic risk assessment, and periodic H&S training were important factors that affected ASR. In addition, the results of comparison of the four fitting methods showed that the correlation coefficient of ANN (R = 0.968) and the relative error (R.E) of ANN (R.E = 0.063) were the highest and lowest, respectively, among other fitting methods. Conclusion The findings of the present study indicated that, despite the suitability and effectiveness of all fitting methods in analyzing severity of occupational injuries, ANN is the best fitting method for modeling of the threatening factors of a workforce’s health. Furthermore, all fitting methods, especially ANN, should be considered more in analyzing and modeling of occupational injuries and health-threatening factors as well as planning to provide and improve the workforce’s health. PMID:27053999

  17. Multiresidue method for the simultaneous determination of veterinary medicinal products, feed additives and illegal dyes in eggs using liquid chromatography-tandem mass spectrometry.

    PubMed

    Piatkowska, Marta; Jedziniak, Piotr; Zmudzki, Jan

    2016-04-15

    A multiclass method was developed for the simultaneous determination of 120 analytes in fresh eggs. The method covers the analytes from the groups of tetracyclines (6), fluoroquinolones (11), sulphonamides (17), nitroimidazoles (9), amphenicols (2), cephalosporins (7), penicillins (8), macrolides (8), benzimidazoles (20), coccidiostats (14), insecticides (3), dyes (12) and others (3). Samples were extracted using 0.1% formic acid in acetonitrile:water (8:2) with the addition of EDTA and cleaned using solid phase extraction with Hybrid SPE cartridges. The chromatographic separation was achieved on C8 column using mobile phase consisting of (A) methanol:acetonitrile (8:2) - (B) 0.1% formic acid in a gradient mode. Validation results according to the Commission Decision 2002/657/EC are as follows: linearity (r⩾0.99), recovery (75-108%), repeatability (CV 1.60-15.9%), reproducibility (CV 2.60-15%), decision limit (CCα 2.25-1156 μg/kg) and detection capability (CCβ 2.04-1316 μg/kg). The presented method was used for analysis of 150 real eggs samples taken from monitoring control program. PMID:26616990

  19. An acceleration of the characteristics by a space-angle two-level method using surface discontinuity factors

    SciTech Connect

    Grassi, G.

    2006-07-01

    We present a non-linear space-angle two-level acceleration scheme for the method of the characteristics (MOC). To the fine level on which the MOC transport calculation is performed, we associate a more coarsely discretized phase space in which a low-order problem is solved as an acceleration step. Cross sections on the coarse level are obtained by a flux-volume homogenisation technique, which entails the non-linearity of the acceleration. Discontinuity factors per surface are introduced as additional degrees of freedom on the coarse level in order to ensure the equivalence of the heterogeneous and the homogenised problem. After each fine transport iteration, a low-order transport problem is iteratively solved on the homogenised grid. The solution of this problem is then used to correct the angular moments of the flux resulting from the previous free transport sweep. Numerical tests for a given benchmark have been performed. Results are discussed. (authors)

  20. Using ethnographic methods to carry out human factors research in software engineering.

    PubMed

    Karn, J S; Cowling, A J

    2006-08-01

    This article describes how ethnographic methods were used to observe and analyze student teams working on software engineering (SE) projects. The aim of this research was to uncover the effects of the interplay of different personality types, as measured by a test based on the Myers-Briggs Type Indicator (MBTI), on the workings of an SE team. Using ethnographic methods allowed the researchers to record the effects of personality type on behavior toward teammates and how this related to the amount of disruption and positive ideas brought forward from each member. Also examined in detail were issues that were either dogged by disruption or that did not have sufficient discussion devoted to them, and the impact that these issues had on the outcomes of the project. Initial findings indicate that ethnographic methods are a valuable weapon to have in one's arsenal when carrying out research into the human factors of SE. PMID:17186760

  1. Cooperative Voltage Control Method by Power Factor Control of PV Systems and LRT

    NASA Astrophysics Data System (ADS)

    Kawasaki, Shoji; Kanemoto, Noriaki; Taoka, Hisao; Matsuki, Junya; Hayashi, Yasuhiro

    Recently, the number of grid interconnections of renewable energy sources (RES), such as photovoltaic (PV) generation and wind power generation, has been increasing drastically, and there is a danger that the precipitous output variations of RESs will change the voltages in a distribution system. In this study, the authors propose a voltage control method for the distribution system based on the power factor control of plural PV systems, in cooperation with the load ratio control transformer (LRT), which has a slow control response and is installed beforehand in the distribution system. In the proposed method, the slow voltage variations are controlled by the LRT, and the steep voltage variations uncontrollable by the LRT are controlled by the plural PV systems; as a result, all the node voltages can be kept within the proper limits. In order to verify the validity of the proposed method, numerical calculations are carried out using an analytical model of a distribution system with interconnected PV systems.

  2. Classification of ECG signals using LDA with factor analysis method as feature reduction technique.

    PubMed

    Kaur, Manpreet; Arora, A S

    2012-11-01

    The analysis of the ECG signal, especially the QRS complex as the most characteristic wave in the ECG, is a widely accepted approach to studying and classifying cardiac dysfunctions. In this paper, wavelet coefficients calculated for the QRS complex are first taken as features. Next, factor analysis procedures without rotation and with orthogonal rotation (varimax, equimax and quartimax) are used for feature reduction. The procedure uses the 'Principal Component Method' to estimate component loadings. Further, classification has been done with an LDA classifier. The MIT-BIH arrhythmia database is used, and five types of beats (normal, PVC, paced, LBBB and RBBB) are considered for analysis. Accuracy, sensitivity and positive predictivity are the performance parameters used for comparing the feature reduction techniques. Results demonstrate that the equimax rotation method yields the maximum average accuracy, 99.056%, for unknown data sets among the methods used.
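
    A hedged sketch of the pipeline structure described above (factor-analysis feature reduction followed by an LDA classifier); random placeholder features stand in for the wavelet coefficients of MIT-BIH QRS complexes, and the paper's orthogonal rotation step is omitted here.

```python
# Hedged sketch of the pipeline described above: factor-analysis feature
# reduction followed by an LDA classifier. Random features stand in for the
# wavelet coefficients of MIT-BIH QRS complexes; the paper's orthogonal
# rotation step (varimax/equimax/quartimax) is omitted here.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import FactorAnalysis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))      # placeholder wavelet features
y = rng.integers(0, 5, size=500)        # 5 beat classes (normal, PVC, ...)

clf = make_pipeline(FactorAnalysis(n_components=10),
                    LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, y, cv=5).mean())
```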

  4. An evaluation of some factors affecting the detection of blood group antibodies by automated methods.

    PubMed

    Kolberg, J; Nordhagen, R

    1975-01-01

    Some factors affecting the sensitivity of the automated methods for blood group antibody detection have been evaluated. The experiments revealed differences between various albumin preparations that influenced sensitivity. In the BMC method, one lot of albumin permitted no significant antibody detection. In the LISP technique, a plateau of maximum Polybrene activity was found. The beginning of this plateau depended on both the albumin preparation and the Polybrene lot. In the BMC method, methyl cellulose gave optimal sensitivity within a concentration range of 0.3 to 0.5 per cent. The stability of test cells stored in ACD at 4 C was studied. All test cells could be used safely for up to two weeks. Cells from different donors showed variable reactivity after three weeks. PMID:1101466

  5. Method for trapping affinity chromatography of transcription factors using aldehyde-hydrazide coupling to agarose.

    PubMed

    Jia, Yinshan; Jarrett, Harry W

    2015-08-01

    The use of a method of coupling DNA was investigated for trapping and purifying transcription factors. Using the GFP-C/EBP (CAAT/enhancer binding protein) fusion protein as a model, trapping gives higher purity and comparable yield to conventional affinity chromatography. The chemistry used is mild and was shown to have no detrimental effect on GFP fluorescence or GFP-C/EBP DNA binding. The method involves introducing a ribose nucleotide to the 3' end of a DNA sequence. Reaction with mM NaIO4 (sodium metaperiodate) produces a dialdehyde of ribose that couples to hydrazide-agarose. The DNA is combined at nM concentration with a nuclear extract or other protein mixture, and DNA-protein complexes form. The complex is then coupled to hydrazide-agarose for trapping the DNA-protein complex and the protein eluted by increasing NaCl concentration. Using a different oligonucleotide with the proximal E-box sequence from the human telomerase promoter, USF-2 transcription factor was purified by trapping, again with higher purity than results from conventional affinity chromatography and similar yield. Other transcription factors binding E-boxes, including E2A, c-Myc, and Myo-D, were also purified, but myogenin and NFκB were not. Therefore, this approach proved to be valuable for both affinity chromatography and the trapping approach. PMID:25935261

  6. Method for trapping affinity chromatography of transcription factors using aldehyde-hydrazide coupling to agarose

    PubMed Central

    Jia, Yinshan; Jarrett, Harry W.

    2015-01-01

    The use of a method of coupling DNA is investigated for trapping and purifying transcription factors. Using the GFP-C/EBP fusion protein as a model, trapping gives higher purity and comparable yield to conventional affinity chromatography. The chemistry utilized is mild and was shown to have no detrimental effect on GFP fluorescence or GFP-C/EBP DNA-binding. The method involves introducing a ribose nucleotide to the 3′ end of a DNA sequence. Reaction with mM NaIO4 (sodium metaperiodate) produces a dialdehyde of ribose which couples to hydrazide-agarose. The DNA is combined at nM concentration with a nuclear extract or other protein mixture and DNA-protein complexes form. The complex is then coupled to hydrazide-agarose for trapping the DNA-protein complex and the protein eluted by increasing NaCl concentration. Using a different oligonucleotide with the proximal E-box sequence from the human telomerase promoter, USF-2 transcription factor was purified by trapping, again with higher purity than results from conventional affinity chromatography and similar yield. Other transcription factors binding E-boxes, including E2A, c-Myc, and Myo-D, were also purified, but myogenin and NFκB were not. Therefore, this approach proved valuable for both affinity chromatography and the trapping approach. PMID:25935261

  7. Structure/function analysis of human factor XII using recombinant deletion mutants. Evidence for an additional region involved in the binding to negatively charged surfaces.

    PubMed

    Citarella, F; Ravon, D M; Pascucci, B; Felici, A; Fantoni, A; Hack, C E

    1996-05-15

    The binding site of human factor XII (FXII) for negatively charged surfaces has been proposed to be localized in the N-terminal region of factor XII. We have generated two recombinant factor XII proteins that lack this region: one protein consisting of the second growth-factor-like domain, the kringle domain, the proline-rich region and the catalytic domain of FXII (rFXII-U-like), and another consisting of only 16 amino acids of the proline-rich region of the heavy-chain region and the catalytic domain (rFXII-lpc). Each recombinant truncated protein, as well as recombinant full-length FXII (rFXII), were produced in HepG2 cells and purified by immunoaffinity chromatography. The capability of these recombinant proteins to bind to negatively charged surfaces and to initiate contact activation was studied. Radiolabeled rFXII-U-like and, to a lesser extent, rFXII-lpc bound to glass in a concentration-dependent manner, yet with lower efficiency than rFXII. The binding of the recombinant proteins was inhibited by a 100-fold molar excess of non-labeled native factor XII. On native polyacrylamide gel electrophoresis, both truncated proteins appeared to bind also to dextran sulfate, a soluble negatively charged compound. Glass-bound rFXII-U-like was able to activate prekallikrein in FXII-deficient plasma (assessed by measuring the generation of kallikrein-C1-inhibitor complexes), but less efficiently than rFXII. rFXII-U-like and rFXII-lpc exhibited coagulant activity, but this activity was significantly lower than that of rFXII. These data confirm that the N-terminal part of the heavy-chain region of factor XII contains a binding site for negatively charged activating surfaces, and indicate that other sequences, possibly located on the second epidermal-growth-factor-like domain and/or the kringle domain, contribute to the binding of factor XII to these surfaces.

  8. The Dependability of General-Factor Loadings: The Effects of Factor-Extraction Methods, Test Battery Composition, Test Battery Size, and Their Interactions

    ERIC Educational Resources Information Center

    Floyd, Randy G.; Shands, Elizabeth I.; Rafael, Fawziya A.; Bergeron, Renee; McGrew, Kevin S.

    2009-01-01

    To understand the extent to which the general-factor loadings of tests are inherent in their characteristics or due to the sampling of tests, the number of tests in the correlation matrix, and the factor-extraction methods used to obtain them, test scores from a large sample of young adults were inserted into independent and overlapping batteries…

  9. Toward Reflective Judgment in Exploratory Factor Analysis Decisions: Determining the Extraction Method and Number of Factors To Retain.

    ERIC Educational Resources Information Center

    Knight, Jennifer L.

    This paper considers some decisions that must be made by the researcher conducting an exploratory factor analysis. The primary purpose is to aid the researcher in making informed decisions during the factor analysis instead of relying on defaults in statistical programs or traditions of previous researchers. Three decision areas are addressed.…

  10. Determine the Galaxy Bias Factors on Large Scales Using the Bispectrum Method

    NASA Astrophysics Data System (ADS)

    Guo, H.; Jing, Y. P.

    2009-09-01

    We study whether the bias factors of galaxies can be unbiasedly recovered from their power spectra and bispectra. We use a set of numerical N-body simulations and construct large mock galaxy catalogs based upon the semi-analytical model of Croton et al. We measure the reduced bispectra for galaxies of different luminosity, and determine the linear and first nonlinear bias factors from their bispectra. We find that on large scales down to the wavenumber k = 0.1 h Mpc^-1, the bias factors b_1 and b_2 are nearly constant, and b_1 obtained with the bispectrum method agrees very well with the expected value. The nonlinear bias factor b_2 is negative, except for the most luminous galaxies with M_r < -23, which have a positive b_2. The behavior of b_2 of galaxies is consistent with the b_2 mass dependence of their host halos. We show that it is essential to have an accurate estimation of the dark matter bispectrum in order to have an unbiased measurement of b_1 and b_2. We also test the analytical approach of incorporating the halo occupation distribution to model the galaxy power spectrum and bispectrum. The halo model predictions do not fit the simulation results well at the precision required by current cosmological studies.
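
    For context, a standard local-bias relation from the large-scale-structure literature (stated here for convenience, not quoted from this paper) links the reduced bispectra of galaxies and matter to the two bias factors:

```latex
Q_g(k_1, k_2, k_3) \;=\; \frac{Q_m(k_1, k_2, k_3)}{b_1} \;+\; \frac{b_2}{b_1^{2}},
```

    so fitting the measured Q_g against an accurate Q_m over many triangle configurations yields b_1 and b_2 simultaneously, which is why an accurate dark matter bispectrum is essential.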

  11. Microencapsulation of human insulin DEAE-dextran complex and the complex in liposomes by the emulsion non-solvent addition method.

    PubMed

    Manosroi, A; Manosroi, J

    1997-01-01

    Human insulin-DEAE (diethyl amino ethyl) dextran complex and human insulin DEAE-dextran complex in liposomes were encapsulated in cellulose acetate butyrate (CAB) microcapsules by the emulsion non-solvent addition method. The ratio of core-to-coat used was 1:1. The average diameters of the complex microcapsules and the complex liposome microcapsules were 239.5 +/- 77.5 and 182.9 +/- 52.2 microns respectively. In vitro dissolution studies of both types of microcapsules in simulated intestinal fluid at pH 7.2 showed a sustained release of the complex and the complex liposome microcapsules with t50 = 1.5 h and 4 h respectively. This study can be applied to the further development of oral formulations of human insulin liposomes for diabetic treatment.

  12. Antecedents of Charter School Success in New York State: Charter School Management Agencies and Additional Factors That Affect English/Language Arts Test Scores in Elementary Charter Schools

    ERIC Educational Resources Information Center

    Schwarz, Jennifer

    2013-01-01

    Charter schools frequently receive public as well as federal attention, and there is a growing body of research becoming available examining charter schools. With all this research there is still a need for further studies which deal specifically with antecedents of charter school success. This study examined factors contributing toward the…

  13. Operator Splitting Implicit Integration Factor Methods for Stiff Reaction-Diffusion-Advection Systems

    PubMed Central

    Zhao, Su; Ovadia, Jeremy; Liu, Xinfeng; Zhang, Yong-Tao; Nie, Qing

    2011-01-01

    For reaction-diffusion-advection equations, the stiffness from the reaction and diffusion terms often requires very restricted time step size, while the nonlinear advection term may lead to a sharp gradient in localized spatial regions. It is challenging to design numerical methods that can efficiently handle both difficulties. For reaction-diffusion systems with both stiff reaction and diffusion terms, implicit integration factor (IIF) method and its higher dimensional analog compact IIF (cIIF) serve as an efficient class of time-stepping methods, and their second order version is linearly unconditionally stable. For nonlinear hyperbolic equations, weighted essentially non-oscillatory (WENO) methods are a class of schemes with a uniformly high-order of accuracy in smooth regions of the solution, which can also resolve the sharp gradient in an accurate and essentially non-oscillatory fashion. In this paper, we couple IIF/cIIF with WENO methods using the operator splitting approach to solve reaction-diffusion-advection equations. In particular, we apply the IIF/cIIF method to the stiff reaction and diffusion terms and the WENO method to the advection term in two different splitting sequences. Calculation of local truncation error and direct numerical simulations for both splitting approaches show the second order accuracy of the splitting method, and linear stability analysis and direct comparison with other approaches reveals excellent efficiency and stability properties. Applications of the splitting approach to two biological systems demonstrate that the overall method is accurate and efficient, and the splitting sequence consisting of two reaction-diffusion steps is more desirable than the one consisting of two advection steps, because CWC exhibits better accuracy and stability. PMID:21666863
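
    A hedged sketch of the Strang-type splitting structure described above (advection half-step, reaction-diffusion full step, advection half-step); the sub-step solvers below are deliberately simple stand-ins, not the paper's WENO and IIF/cIIF schemes.

```python
# Hedged sketch of one Strang operator-splitting step for u_t = A(u) + B(u),
# mirroring the sequence described above (e.g. A = advection handled by WENO,
# B = stiff reaction-diffusion handled by IIF/cIIF). The sub-step solvers are
# simple stand-ins, not the paper's schemes.
import numpy as np

def advection_substep(u, dt, c=1.0, h=0.1):
    # first-order upwind step for u_t + c u_x = 0 on a periodic grid
    return u - c * dt / h * (u - np.roll(u, 1))

def reaction_substep(u, dt, k=50.0):
    # implicit (backward Euler) step for the stiff linear decay u_t = -k u
    return u / (1.0 + dt * k)

def strang_step(u, dt):
    u = advection_substep(u, 0.5 * dt)
    u = reaction_substep(u, dt)
    u = advection_substep(u, 0.5 * dt)
    return u

u = 1.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, 32, endpoint=False))
for _ in range(10):
    u = strang_step(u, dt=0.01)
```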

  14. Steel Rack Connections: Identification of Most Influential Factors and a Comparison of Stiffness Design Methods.

    PubMed

    Shah, S N R; Sulong, N H Ramli; Shariati, Mahdi; Jumaat, M Z

    2015-01-01

    Steel pallet rack (SPR) beam-to-column connections (BCCs) are largely responsible to avoid the sway failure of frames in the down-aisle direction. The overall geometry of beam end connectors commercially used in SPR BCCs is different and does not allow a generalized analytic approach for all types of beam end connectors; however, identifying the effects of the configuration, profile and sizes of the connection components could be the suitable approach for the practical design engineers in order to predict the generalized behavior of any SPR BCC. This paper describes the experimental behavior of SPR BCCs tested using a double cantilever test set-up. Eight sets of specimens were identified based on the variation in column thickness, beam depth and number of tabs in the beam end connector in order to investigate the most influential factors affecting the connection performance. Four tests were repeatedly performed for each set to bring uniformity to the results taking the total number of tests to thirty-two. The moment-rotation (M-θ) behavior, load-strain relationship, major failure modes and the influence of selected parameters on connection performance were investigated. A comparative study to calculate the connection stiffness was carried out using the initial stiffness method, the slope to half-ultimate moment method and the equal area method. In order to find out the more appropriate method, the mean stiffness of all the tested connections and the variance in values of mean stiffness according to all three methods were calculated. The calculation of connection stiffness by means of the initial stiffness method is considered to overestimate the values when compared to the other two methods. The equal area method provided more consistent values of stiffness and lowest variance in the data set as compared to the other two methods. PMID:26452047

  15. Steel Rack Connections: Identification of Most Influential Factors and a Comparison of Stiffness Design Methods

    PubMed Central

    Shah, S. N. R.; Sulong, N. H. Ramli; Shariati, Mahdi; Jumaat, M. Z.

    2015-01-01

    Steel pallet rack (SPR) beam-to-column connections (BCCs) are largely responsible to avoid the sway failure of frames in the down-aisle direction. The overall geometry of beam end connectors commercially used in SPR BCCs is different and does not allow a generalized analytic approach for all types of beam end connectors; however, identifying the effects of the configuration, profile and sizes of the connection components could be the suitable approach for the practical design engineers in order to predict the generalized behavior of any SPR BCC. This paper describes the experimental behavior of SPR BCCs tested using a double cantilever test set-up. Eight sets of specimens were identified based on the variation in column thickness, beam depth and number of tabs in the beam end connector in order to investigate the most influential factors affecting the connection performance. Four tests were repeatedly performed for each set to bring uniformity to the results taking the total number of tests to thirty-two. The moment-rotation (M-θ) behavior, load-strain relationship, major failure modes and the influence of selected parameters on connection performance were investigated. A comparative study to calculate the connection stiffness was carried out using the initial stiffness method, the slope to half-ultimate moment method and the equal area method. In order to find out the more appropriate method, the mean stiffness of all the tested connections and the variance in values of mean stiffness according to all three methods were calculated. The calculation of connection stiffness by means of the initial stiffness method is considered to overestimate the values when compared to the other two methods. The equal area method provided more consistent values of stiffness and lowest variance in the data set as compared to the other two methods. PMID:26452047

  16. A validated LC-MS/MS determination method for the illegal food additive rhodamine B: Applications of a pharmacokinetic study in rats.

    PubMed

    Cheng, Yung-Yi; Tsai, Tung-Hu

    2016-06-01

    Rhodamine B is an illegal and potentially carcinogenic food dye. The aim of this study was to develop a convenient, rapid, and sensitive UHPLC-MS/MS method for pharmacokinetic studies in rats. Rat plasma samples were deproteinized with acetonitrile and separated by UHPLC on a reverse-phase C18e column (100mm×2.1mm, 2μm) using a mobile phase consisting of methanol-5mM ammonium acetate (90:10, v/v). Detection was performed using a triple quadrupole tandem mass spectrometer in the selected reaction monitoring mode at [M](+) ion m/z 443.39→399.28 for rhodamine B and [M+H](+) ion m/z 253.17→238.02 for 5-methoxyflavone as the internal standard. This method was specific and produced linear results over a concentration range of 0.5-100ng/mL, with a lower limit of quantitation of 0.5ng/mL. All validation parameters, including the inter-day, intra-day, matrix effect, recovery, and stability in rat plasma, were acceptable according to the biological method validation guidelines developed by the FDA (2001). This method was successfully applied to a pharmacokinetic study in rats; oral administration of 1mg/kg of rhodamine B yielded a time to maximum concentration (Tmax) of 1.3±0.4h and an elimination half-life of 8.8±1.4h, with a clearance of 229.7±19.4mL/h/kg. These pharmacokinetic results provide a constructive contribution to our understanding of the absorption mechanism of rhodamine B and support additional food safety evaluations. PMID:27131149

  17. Nonnegative matrix factorization: a blind spectra separation method for in vivo fluorescent optical imaging.

    PubMed

    Montcuquet, Anne-Sophie; Hervé, Lionel; Navarro, Fabrice; Dinten, Jean-Marc; Mars, Jérôme I

    2010-01-01

    Fluorescence imaging in diffusive media is an emerging imaging modality for medical applications that uses injected fluorescent markers that bind to specific targets, e.g., carcinoma. The region of interest is illuminated with near-IR light, and the back-emitted fluorescence is analyzed to localize the fluorescence sources. When investigating a thick medium, the fluorescence signal decreases with the light travel distance, and any disturbing signal, such as the intrinsic fluorescence of biological tissues (called autofluorescence), becomes a limiting factor. Several specific markers may also be injected simultaneously to bind to different molecules, and one may want to isolate each specific fluorescent signal from the others. To remove the unwanted fluorescence contributions or separate different specific markers, a spectroscopic approach is explored. Nonnegative matrix factorization (NMF) is the blind positive source separation method we chose. We ran an original regularized NMF algorithm that we developed on experimental data, and we successfully obtained separated in vivo fluorescence spectra.
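
    A hedged sketch of the general workflow (blindly separating measured mixed spectra into nonnegative source spectra and weights using off-the-shelf NMF); this is not the authors' regularized algorithm, and the spectra below are synthetic placeholders.

```python
# Hedged sketch: blind separation of mixed fluorescence spectra with standard
# NMF (scikit-learn), illustrating the workflow described above; not the
# authors' regularized algorithm, and the spectra are synthetic placeholders.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
wl = np.linspace(650, 900, 256)                             # wavelengths, nm
marker = np.exp(-0.5 * ((wl - 720) / 15.0) ** 2)            # specific marker
autofluo = np.exp(-0.5 * ((wl - 780) / 60.0) ** 2)          # autofluorescence
S = np.vstack([marker, autofluo])                # true source spectra (2 x 256)
A = rng.random((100, 2))                         # mixing weights per measurement
X = A @ S + 0.01 * rng.random((100, 256))        # observed mixed spectra

model = NMF(n_components=2, init="nndsvda", max_iter=500)
W = model.fit_transform(X)        # estimated weights        (100 x 2)
H = model.components_             # estimated source spectra (2 x 256)
```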

  18. Evaluation of factors affecting prescribing behaviors in the Iran pharmaceutical market by econometric methods.

    PubMed

    Tahmasebi, Nima; Kebriaeezadeh, Abbas

    2015-01-01

    Prescribing behavior of physicians is affected by many factors. The present study is aimed at discovering the simultaneous effects of the evaluated factors (including price, promotion and demographic characteristics of physicians) and at quantifying these effects. In order to estimate these effects, fluvoxamine (an antidepressant drug) was selected and the model was estimated using the panel data method in econometrics. We found that insurance and advertisement, respectively, are the most effective factors in increasing the frequency of prescribing, whilst a negative correlation was observed between price and the frequency of prescribing a drug. Also, the brand type is more sensitive to the negative effect of price than the generic type. Furthermore, demand for a prescription drug is related to physician demographics (age and sex). According to the results of this study, pharmaceutical companies should pay more attention to the demographic characteristics of physicians (age and sex) and to their advertisement and pricing strategies. PMID:25901174
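    A minimal sketch of a fixed-effects panel regression in the spirit of the study's panel-data approach. The variable names, data frame and dummy-variable estimator are illustrative assumptions, not the study's dataset or specification.

```python
# Hedged sketch: fixed-effects panel regression of prescription counts on price
# and promotion, using physician dummies (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_physicians, n_months = 50, 12
df = pd.DataFrame({
    "physician": np.repeat(np.arange(n_physicians), n_months),
    "month": np.tile(np.arange(n_months), n_physicians),
    "price": rng.normal(10, 2, n_physicians * n_months),
    "promotion": rng.normal(5, 1, n_physicians * n_months),
})
effect = rng.normal(0, 1, n_physicians)                        # unobserved physician effect
df["rx_count"] = (20 - 0.8 * df["price"] + 1.5 * df["promotion"]
                  + effect[df["physician"]] + rng.normal(0, 1, len(df)))

# Fixed effects via physician dummies (equivalent to the within estimator)
fit = smf.ols("rx_count ~ price + promotion + C(physician)", data=df).fit()
print(fit.params[["price", "promotion"]])
```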

  19. Array-representation Integration Factor Method for High-dimensional Systems

    PubMed Central

    Wang, Dongyong; Zhang, Lei; Nie, Qing

    2013-01-01

    High order spatial derivatives and stiff reactions often introduce severe temporal stability constraints on the time step in numerical methods. The implicit integration factor (IIF) method, which treats diffusion exactly and reaction implicitly, provides excellent stability properties with good efficiency by decoupling the treatment of reactions and diffusions. One major challenge for IIF is the storage and calculation of the potentially dense exponentials of the sparse discretization matrices resulting from the linear differential operators. Motivated by a compact representation of IIF (cIIF) for Laplacian operators in two and three dimensions, we introduce an array-representation technique for efficient handling of exponential matrices from a general linear differential operator that may include cross-derivatives and non-constant diffusion coefficients. In this approach, exponentials are only needed for matrices of small size that depend only on the order of derivatives and the number of discretization points, independent of the size of the spatial dimensions. This method is particularly advantageous for high dimensional systems, and it can be easily incorporated with IIF to preserve its excellent stability. Implementation and direct simulations of the array-representation compact IIF (AcIIF) on systems such as Fokker-Planck equations in three and four dimensions and chemical master equations, in addition to reaction-diffusion equations, show the efficiency, accuracy, and robustness of the new method. Such array-representation-based methods may have broad applications for simulating other complex systems involving high-dimensional data. PMID:24415797
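    The "small matrix exponential" idea can be illustrated for 2-D diffusion: the exact diffusion step only needs the exponential of the N × N one-dimensional discretization matrix, never of the full N² × N² operator. The sketch below shows just that step; the reaction treatment and the general AcIIF scheme of the paper are not reproduced, and the boundary conditions and grid are assumptions.

```python
# Hedged sketch: the compact-exponential step behind (c)IIF for 2-D diffusion.
import numpy as np
from scipy.linalg import expm

N, L, D, dt = 64, 1.0, 0.01, 0.01
h = L / (N + 1)

# 1-D second-difference matrix with homogeneous Dirichlet boundaries
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2

E = expm(D * dt * A)          # small N x N exponential, computed once

# Initial condition on the N x N grid
x = np.linspace(h, L - h, N)
U = np.outer(np.sin(np.pi * x), np.sin(2 * np.pi * x))

# One exact diffusion step for U_t = D(U_xx + U_yy):  U <- E U E^T
U = E @ U @ E.T
print(U.shape)
```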

  20. A structured elicitation method to identify key direct risk factors for the management of natural resources.

    PubMed

    Smith, Michael; Wallace, Ken; Lewis, Loretta; Wagner, Christian

    2015-11-01

    The high level of uncertainty inherent in natural resource management requires planners to apply comprehensive risk analyses, often in situations where there are few resources. In this paper, we demonstrate a broadly applicable, novel and structured elicitation approach to identify important direct risk factors. This new approach combines expert calibration and fuzzy based mathematics to capture and aggregate subjective expert estimates of the likelihood that a set of direct risk factors will cause management failure. A specific case study is used to demonstrate the approach; however, the described methods are widely applicable in risk analysis. For the case study, the management target was to retain all species that characterise a set of natural biological elements. The analysis was bounded by the spatial distribution of the biological elements under consideration and a 20-year time frame. Fourteen biological elements were expected to be at risk. Eleven important direct risk factors were identified that related to surrounding land use practices, climate change, problem species (e.g., feral predators), fire and hydrological change. In terms of their overall influence, the two most important risk factors were salinisation and a lack of water which together pose a considerable threat to the survival of nine biological elements. The described approach successfully overcame two concerns arising from previous risk analysis work: (1) the lack of an intuitive, yet comprehensive scoring method enabling the detection and clarification of expert agreement and associated levels of uncertainty; and (2) the ease with which results can be interpreted and communicated while preserving a rich level of detail essential for informed decision making.

  1. A structured elicitation method to identify key direct risk factors for the management of natural resources.

    PubMed

    Smith, Michael; Wallace, Ken; Lewis, Loretta; Wagner, Christian

    2015-11-01

    The high level of uncertainty inherent in natural resource management requires planners to apply comprehensive risk analyses, often in situations where there are few resources. In this paper, we demonstrate a broadly applicable, novel and structured elicitation approach to identify important direct risk factors. This new approach combines expert calibration and fuzzy based mathematics to capture and aggregate subjective expert estimates of the likelihood that a set of direct risk factors will cause management failure. A specific case study is used to demonstrate the approach; however, the described methods are widely applicable in risk analysis. For the case study, the management target was to retain all species that characterise a set of natural biological elements. The analysis was bounded by the spatial distribution of the biological elements under consideration and a 20-year time frame. Fourteen biological elements were expected to be at risk. Eleven important direct risk factors were identified that related to surrounding land use practices, climate change, problem species (e.g., feral predators), fire and hydrological change. In terms of their overall influence, the two most important risk factors were salinisation and a lack of water which together pose a considerable threat to the survival of nine biological elements. The described approach successfully overcame two concerns arising from previous risk analysis work: (1) the lack of an intuitive, yet comprehensive scoring method enabling the detection and clarification of expert agreement and associated levels of uncertainty; and (2) the ease with which results can be interpreted and communicated while preserving a rich level of detail essential for informed decision making. PMID:27441228

  2. Additivity of Factor Effects in Reading Tasks Is Still a Challenge for Computational Models: Reply to Ziegler, Perry, and Zorzi (2009)

    ERIC Educational Resources Information Center

    Besner, Derek; O'Malley, Shannon

    2009-01-01

    J. C. Ziegler, C. Perry, and M. Zorzi (2009) have claimed that their connectionist dual process model (CDP+) can simulate the data reported by S. O'Malley and D. Besner. Most centrally, they have claimed that the model simulates additive effects of stimulus quality and word frequency on the time to read aloud when words and nonwords are randomly…

  3. Additive influence of genetic predisposition and conventional risk factors in the incidence of coronary heart disease: a population-based study in Greece

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An additive genetic risk score (GRS) for coronary heart disease (CHD) has previously been associated with incident CHD in the population-based Greek European Prospective Investigation into Cancer and nutrition (EPIC) cohort. In this study, we explore GRS-‘environment’ joint actions on CHD for severa...

  4. Impact of the Choice of Normalization Method on Molecular Cancer Class Discovery Using Nonnegative Matrix Factorization

    PubMed Central

    Yang, Haixuan; Seoighe, Cathal

    2016-01-01

    Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. Owing to the nonnegativity constraint, NMF provides a decomposition of the data matrix into two nonnegative matrices that have been used for clustering analysis. However, the decomposition is not unique. This allows different clustering results to be obtained, resulting in different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms in the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery, under a wide range of normalization choices. After extensive evaluations, we observed that the maximum norm showed the best performance, although the maximum norm has not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm. PMID:27741311
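    To make the normalization idea concrete, the sketch below rescales each column of W to unit maximum norm with a compensating rescaling of H, so the product W·H is unchanged. The exact normalization convention evaluated in the paper may differ; this is an illustrative assumption.

```python
# Hedged sketch: max-norm normalization of an NMF factorization (W, H) that
# leaves the product W @ H unchanged.
import numpy as np

def max_norm_normalize(W, H):
    scale = W.max(axis=0)               # per-component maximum
    scale[scale == 0] = 1.0             # guard against empty components
    return W / scale, H * scale[:, None]

rng = np.random.default_rng(0)
W, H = rng.random((30, 3)), rng.random((3, 100))
W2, H2 = max_norm_normalize(W, H)
assert np.allclose(W @ H, W2 @ H2)
print(W2.max(axis=0))                   # each column of W2 now peaks at 1
```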

  5. Pathogen inactivation and removal methods for plasma-derived clotting factor concentrates.

    PubMed

    Klamroth, Robert; Gröner, Albrecht; Simon, Toby L

    2014-05-01

    Pathogen safety is crucial for plasma-derived clotting factor concentrates used in the treatment of bleeding disorders. Plasma, the starting material for these products, is collected by plasmapheresis (source plasma) or derived from whole blood donations (recovered plasma). The primary measures regarding pathogen safety are selection of healthy donors donating in centers with appropriate epidemiologic data for the main blood-transmissible viruses, screening donations for the absence of relevant infectious blood-borne viruses, and release of plasma pools for further processing only if they are nonreactive for serologic markers and nucleic acids for these viruses. Despite this testing, pathogen inactivation and/or removal during the manufacturing process of plasma-derived clotting factor concentrates is required to ensure prevention of transmission of infectious agents. Historically, hepatitis viruses and human immunodeficiency virus have posed the greatest threat to patients receiving plasma-derived therapy for treatment of hemophilia or von Willebrand disease. Over the past 30 years, dedicated virus inactivation and removal steps have been integrated into factor concentrate production processes, essentially eliminating transmission of these viruses. Manufacturing steps used in the purification of factor concentrates have also proved to be successful in reducing potential prion infectivity. In this review, current techniques for inactivation and removal of pathogens from factor concentrates are discussed. Ideally, production processes should involve a combination of complementary steps for pathogen inactivation and/or removal to ensure product safety. Finally, potential batch-to-batch contamination is avoided by stringent cleaning and sanitization methods as part of the manufacturing process.

  6. Influence of an Additive-Free Particle Spreading Method on Interactions between Charged Colloidal Particles at an Oil/Water Interface.

    PubMed

    Gao, Peng; Yi, Zonglin; Xing, Xiaochen; Ngai, To; Jin, Fan

    2016-05-17

    The assembly and manipulation of charged colloidal particles at oil/water interfaces represent active areas of fundamental and applied research. Previously, we have shown that colloidal particles can spontaneously generate unstable residual charges at the particle/oil interface when spreading solvent is used to disperse them at an oil/water interface. These residual charges in turn affect the long-ranged electrostatic repulsive forces and packing of particles at the interface. To further uncover the influence arising from the spreading solvents on interfacial particle interactions, in the present study we utilize pure buoyancy to drive the particles onto an oil/water interface and compare the differences between such a spontaneously adsorbed particle monolayer and a spread monolayer based on solvent spreading techniques. Our results show that the solvent-free method could also lead particles to spread well at the interface, but it does not result in violent sliding of particles along the interface. More importantly, this additive-free spreading method can avoid the formation of unstable residual charges at the particle/oil interface. These findings agree well with our previous hypothesis; namely, those unstable residual charges are triboelectric charges that arise from the violent rubbing of particles on oil at the interface. Therefore, if the spreading solvents could be avoided, then we would be able to get rid of the formation of residual charges at interfaces. This finding will provide insight for precisely controlling the interactions among colloidal particles trapped at fluid/fluid interfaces.

  7. Influence of an Additive-Free Particle Spreading Method on Interactions between Charged Colloidal Particles at an Oil/Water Interface.

    PubMed

    Gao, Peng; Yi, Zonglin; Xing, Xiaochen; Ngai, To; Jin, Fan

    2016-05-17

    The assembly and manipulation of charged colloidal particles at oil/water interfaces represent active areas of fundamental and applied research. Previously, we have shown that colloidal particles can spontaneously generate unstable residual charges at the particle/oil interface when spreading solvent is used to disperse them at an oil/water interface. These residual charges in turn affect the long-ranged electrostatic repulsive forces and packing of particles at the interface. To further uncover the influence arising from the spreading solvents on interfacial particle interactions, in the present study we utilize pure buoyancy to drive the particles onto an oil/water interface and compare the differences between such a spontaneously adsorbed particle monolayer and a spread monolayer based on solvent spreading techniques. Our results show that the solvent-free method could also lead particles to spread well at the interface, but it does not result in violent sliding of particles along the interface. More importantly, this additive-free spreading method can avoid the formation of unstable residual charges at the particle/oil interface. These findings agree well with our previous hypothesis; namely, those unstable residual charges are triboelectric charges that arise from the violent rubbing of particles on oil at the interface. Therefore, if the spreading solvents could be avoided, then we would be able to get rid of the formation of residual charges at interfaces. This finding will provide insight for precisely controlling the interactions among colloidal particles trapped at fluid/fluid interfaces. PMID:27108987

  8. Factors influencing base flow in the Swiss Midlands - Can results from different base flow separation methods help to identify these factors?

    NASA Astrophysics Data System (ADS)

    Meyer, Raphael; Schädler, Bruno; Viviroli, Daniel; Weingartner, Rolf

    2010-05-01

    is generally accepted in the literature, secondly in land cover, and, especially for the Swiss Midlands, in aquifer area and aquifer volumes. In this contribution the results of the different methods are presented and conclusions as to control factors are drawn from the results. The data base for river flow analysis in the low flow range is ideal in Switzerland. There are long time series, a dense gauge network and a comprehensive knowledge about uncertainty of the runoff measurements during low flow. This allows, in addition to the obtained process understanding, a well-founded comparison between the methods applied, which is going to be presented as well. Demuth, S. (1993) Untersuchungen zum Niedrigwasser in West-Europa (European low flow study). Freiburger Schriften zur Hydrologie, Band 1, Freiburg, Germany. Institute of Hydrology (1980) Low Flows Studies Report, 3 volumes. Institute of Hydrology, Wallingford, UK. Kille, K. (1970) Das Verfahren MoMNQ, ein Beitrag zur Berechnung der mittleren langjährigen Grundwasserneubildung mit Hilfe der monatlichen Niedrigwasserabflüsse. Zeitschrift der deutschen Geologischen Gesellschaft, Sonderheft Hydrogeologie Hydrogeochemie, 89-95. Wittenberg, H. (1999) Baseflow recession and recharge as nonlinear storage processes. Hydrol. Process., 13, 715-726.

  9. g-FACTOR Measurements of Picosecond States:. Opportunities and Limitations of the Recoil-In Method

    NASA Astrophysics Data System (ADS)

    Stone, N. J.; Stone, J. R.; Bingham, C. R.; Fischer, C. Froese; Jönsson, P.

    2008-08-01

    This paper reports a new a-priori approach to the calibration of attenuations observed in Recoil-in-Vacuum angular distribution experiments which should allow extraction of g-factors for states of picosecond (ps) lifetime in many nuclei, of both odd-A and even-A without the need for extensive experimentally based calibration. The methods used and results for Ge and Mo isotopes are discussed, with outline applications to both on-line beam/target Coulomb excitation and fission fragment experiments.

  10. A systematic review of mixed methods research on human factors and ergonomics in health care.

    PubMed

    Carayon, Pascale; Kianfar, Sarah; Li, Yaqiong; Xie, Anping; Alyousef, Bashar; Wooldridge, Abigail

    2015-11-01

    This systematic literature review provides information on the use of mixed methods research in human factors and ergonomics (HFE) research in health care. Using the PRISMA methodology, we searched four databases (PubMed, PsycInfo, Web of Science, and Engineering Village) for studies that met the following inclusion criteria: (1) field study in health care, (2) mixing of qualitative and quantitative data, (3) HFE issues, and (4) empirical evidence. Using an iterative and collaborative process supported by a structured data collection form, the six authors identified a total of 58 studies that primarily address HFE issues in health information technology (e.g., usability) and in the work of healthcare workers. About two-thirds of the mixed methods studies used the convergent parallel study design where quantitative and qualitative data were collected simultaneously. A variety of methods were used for collecting data, including interview, survey and observation. The most frequent combination involved interview for qualitative data and survey for quantitative data. The use of mixed methods in healthcare HFE research has increased over time. However, increasing attention should be paid to the formal literature on mixed methods research to enhance the depth and breadth of this research.

  11. A Systematic Review of Mixed Methods Research on Human Factors and Ergonomics in Health Care

    PubMed Central

    Carayon, Pascale; Kianfar, Sarah; Li, Yaqiong; Xie, Anping; Alyousef, Bashar; Wooldridge, Abigail

    2016-01-01

    This systematic literature review provides information on the use of mixed methods research in human factors and ergonomics (HFE) research in health care. Using the PRISMA methodology, we searched four databases (PubMed, PsycInfo, Web of Science, and Engineering Village) for studies that met the following inclusion criteria: (1) field study in health care, (2) mixing of qualitative and quantitative data, (3) HFE issues, and (4) empirical evidence. Using an iterative and collaborative process supported by a structured data collection form, the six authors identified a total of 58 studies that primarily address HFE issues in health information technology (e.g., usability) and in the work of healthcare workers. About two-thirds of the mixed methods studies used the convergent parallel study design where quantitative and qualitative data were collected simultaneously. A variety of methods were used for collecting data, including interview, survey and observation. The most frequent combination involved interview for qualitative data and survey for quantitative data. The use of mixed methods in healthcare HFE research has increased over time. However, increasing attention should be paid to the formal literature on mixed methods research to enhance the depth and breadth of this research. PMID:26154228

  12. Automated Robust Image Segmentation: Level Set Method Using Nonnegative Matrix Factorization with Application to Brain MRI.

    PubMed

    Dera, Dimah; Bouaynaya, Nidhal; Fathallah-Shaykh, Hassan M

    2016-07-01

    We address the problem of fully automated region discovery and robust image segmentation by devising a new deformable model based on the level set method (LSM) and the probabilistic nonnegative matrix factorization (NMF). We describe the use of NMF to calculate the number of distinct regions in the image and to derive the local distribution of the regions, which is incorporated into the energy functional of the LSM. The results demonstrate that our NMF-LSM method is superior to other approaches when applied to synthetic binary and gray-scale images and to clinical magnetic resonance images (MRI) of the human brain with and without a malignant brain tumor, glioblastoma multiforme. In particular, the NMF-LSM method is fully automated, highly accurate, less sensitive to the initial selection of the contour(s) or initial conditions, more robust to noise and model parameters, and able to detect distinct regions as small as desired. These advantages stem from the fact that the proposed method relies on histogram information instead of intensity values and does not introduce nuisance model parameters. These properties provide a general approach for automated robust region discovery and segmentation in heterogeneous images. Compared with the retrospective radiological diagnoses of two patients with non-enhancing grade 2 and 3 oligodendroglioma, the NMF-LSM method detects earlier progression times and appears suitable for monitoring tumor response. The NMF-LSM method fills an important need for automated segmentation of clinical MRI. PMID:27417984

  13. Using Chebyshev polynomials and approximate inverse triangular factorizations for preconditioning the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Kaporin, I. E.

    2012-02-01

    In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
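    The paper's factorized approximate-inverse and Chebyshev polynomial preconditioners are not available off the shelf; the sketch below only illustrates the general pattern of supplying a preconditioner to the conjugate gradient solver in SciPy, using an incomplete-LU stand-in on a small SPD test matrix.

```python
# Hedged sketch: preconditioned conjugate gradients in SciPy (ILU stand-in
# preconditioner; not the paper's approximate-inverse or Chebyshev scheme).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
# Sparse SPD test matrix: 1-D Laplacian
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A)                                  # stand-in preconditioner
M = spla.LinearOperator((n, n), matvec=ilu.solve)    # applies an approximate inverse of A

x, info = spla.cg(A, b, M=M)
print("converged" if info == 0 else f"info={info}", np.linalg.norm(A @ x - b))
```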

  14. Nonnegative matrix factorization: a blind sources separation method to extract content of fluorophores mixture media

    NASA Astrophysics Data System (ADS)

    Zhou, Kenneth J.; Chen, Jun

    2014-03-01

    The fluorophores of malignant human breast cells change their compositions that may be exposed in the fluorescence spectroscopy and blind source separation method. The content of the fluorophores mixture media such as tryptophan, collagen, elastin, NADH, and flavin were varied according to the cancer development. The native fluorescence spectra of these key fluorophores mixture media excited by the selective excitation wavelengths of 300 nm and 340 nm were analyzed using a blind source separation method: Nonnegative Matrix Factorization (NMF). The results show that the contribution from tryptophan, NADH and flavin to the fluorescence spectra of the mixture media is proportional to the content of each fluorophore. These data present a possibility that native fluorescence spectra decomposed by NMF can be used as potential native biomarkers for cancer detection evaluation of the cancer.

  15. Normalized impact factor (NIF): an adjusted method for calculating the citation rate of biomedical journals.

    PubMed

    Owlia, P; Vasei, M; Goliaei, B; Nassiri, I

    2011-04-01

    Interest in the journal impact factor (JIF) in scientific communities has grown over the last decades. JIFs are used to evaluate the quality of journals and of the papers published therein. The JIF is a discipline-specific measure, and comparisons between JIFs of different disciplines are inadequate unless a normalization process is performed. In this study, the normalized impact factor (NIF) was introduced as a relatively simple method enabling JIFs to be used when evaluating the quality of journals and research works in different disciplines. The NIF index was established based on the multiplication of the JIF by a constant factor. The constants were calculated for all 54 disciplines of the biomedical field for the years 2005, 2006, 2007, 2008 and 2009. Also, rankings of 393 journals in different biomedical disciplines according to the NIF and the JIF were compared to illustrate how the NIF index can be used for the evaluation of publications in different disciplines. The findings show that the use of the NIF enhances equality in assessing the quality of research works produced by researchers who work in different disciplines.
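    A toy example of the multiply-by-a-constant idea. How the paper actually derives its per-discipline constants is not reproduced here; the choice below (scaling each discipline so that its mean JIF maps to 1.0) and the journal names are illustrative assumptions.

```python
# Hedged sketch: per-discipline normalization of impact factors (illustrative
# constants and data; not the paper's calculation).
jif = {
    "J. Cardiology A": ("cardiology", 6.2),
    "J. Cardiology B": ("cardiology", 3.1),
    "J. Parasitology A": ("parasitology", 2.0),
    "J. Parasitology B": ("parasitology", 1.0),
}

# Per-discipline constant = 1 / mean(JIF of that discipline)
by_disc = {}
for _, (disc, f) in jif.items():
    by_disc.setdefault(disc, []).append(f)
const = {d: 1.0 / (sum(v) / len(v)) for d, v in by_disc.items()}

nif = {j: f * const[d] for j, (d, f) in jif.items()}
for j, v in sorted(nif.items(), key=lambda kv: -kv[1]):
    print(f"{j:18s} NIF = {v:.2f}")
```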

  16. Spectroscopically Enhanced Method and System for Multi-Factor Biometric Authentication

    NASA Astrophysics Data System (ADS)

    Pishva, Davar

    This paper proposes a spectroscopic method and system for preventing spoofing of biometric authentication. One of its focuses is to enhance biometric authentication with a spectroscopic method in a multifactor manner, such that a person's unique ‘spectral signatures’ or ‘spectral factors’ are recorded and compared in addition to a non-spectroscopic biometric signature to reduce the likelihood of an imposter being authenticated. By using the ‘spectral factors’ extracted from reflectance spectra of real fingers and employing cluster analysis, it shows how the authentic fingerprint image presented by a real finger can be distinguished from an authentic fingerprint image embossed on an artificial finger, or molded on a fingertip cover worn by an imposter. This paper also shows how to augment two widely used biometric systems (fingerprint and iris recognition devices) with spectral biometrics capabilities in a practical manner, without creating much overhead or inconveniencing their users.

  17. Understanding Factors that Shape Gender Attitudes in Early Adolescence Globally: A Mixed-Methods Systematic Review

    PubMed Central

    Gibbs, Susannah; Blum, Robert Wm; Moreau, Caroline; Chandra-Mouli, Venkatraman; Herbert, Ann; Amin, Avni

    2016-01-01

    Background Early adolescence (ages 10–14) is a period of increased expectations for boys and girls to adhere to socially constructed and often stereotypical norms that perpetuate gender inequalities. The endorsement of such gender norms is closely linked to poor adolescent sexual and reproductive and other health-related outcomes yet little is known about the factors that influence young adolescents’ personal gender attitudes. Objectives To explore factors that shape gender attitudes in early adolescence across different cultural settings globally. Methods A mixed-methods systematic review was conducted of the peer-reviewed literature in 12 databases from 1984–2014. Four reviewers screened the titles and abstracts of articles and reviewed full text articles in duplicate. Data extraction and quality assessments were conducted using standardized templates by study design. Thematic analysis was used to synthesize quantitative and qualitative data organized by the social-ecological framework (individual, interpersonal and community/societal-level factors influencing gender attitudes). Results Eighty-two studies (46 quantitative, 31 qualitative, 5 mixed-methods) spanning 29 countries were included. Ninety percent of studies were from North America or Western Europe. The review findings indicate that young adolescents, across cultural settings, commonly express stereotypical or inequitable gender attitudes, and such attitudes appear to vary by individual sociodemographic characteristics (sex, race/ethnicity and immigration, social class, and age). Findings highlight that interpersonal influences (family and peers) are central influences on young adolescents’ construction of gender attitudes, and these gender socialization processes differ for boys and girls. The role of community factors (e.g. media) is less clear though there is some evidence that schools may reinforce stereotypical gender attitudes among young adolescents. Conclusions The findings from this

  18. Peat decomposition - shaping factors, significance in environmental studies and methods of determination; a literature review

    NASA Astrophysics Data System (ADS)

    Drzymulska, Danuta

    2016-03-01

    This paper presents a review of literature data on the degree of peat decomposition, an important parameter that yields information on environmental conditions during the peat-forming process, i.e., the humidity of the mire surface. A decrease in the rate of peat decomposition indicates a rise of the ground water table. In the case of bogs, which receive exclusively atmospheric (meteoric) water, data on changes in the wetness of past mire surfaces can even be treated as data on past climates. Different factors shaping the process of peat decomposition are also discussed, such as the humidity of the substratum and climatic conditions, as well as the chemical composition of peat-forming plants. Methods for the determination of the degree of peat decomposition are also outlined, maintaining the division into field and laboratory analyses. Among the latter are methods based on physical and chemical features of peat and microscopic methods. Comparisons of results obtained by different methods can occasionally be difficult, which may be ascribed to the differing experience of researchers or the chemically undefined nature of many analyses of humification.

  19. On Reducing the Effect of Covariate Factors in Gait Recognition: A Classifier Ensemble Method.

    PubMed

    Guan, Yu; Li, Chang-Tsun; Roli, Fabio

    2015-07-01

    Robust human gait recognition is challenging because of the presence of covariate factors such as carrying condition, clothing, walking surface, etc. In this paper, we model the effect of covariates as an unknown partial feature corruption problem. Since the locations of corruptions may differ for different query gaits, relevant features may become irrelevant when the walking condition changes. In this case, it is difficult to train one fixed classifier that is robust to a large number of different covariates. To tackle this problem, we propose a classifier ensemble method based on the random subspace method (RSM) and majority voting (MV). Its theoretical basis suggests it is insensitive to the locations of corrupted features, and thus can generalize well to a large number of covariates. We also extend this method by proposing two strategies, i.e., local enhancing (LE) and hybrid decision-level fusion (HDF), to suppress the ratio of false votes to true votes (before MV). The performance of our approach is competitive against the most challenging covariates like clothing, walking surface, and elapsed time. We evaluate our method on the USF dataset and OU-ISIR-B dataset, and it has much higher performance than other state-of-the-art algorithms.
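    A minimal sketch of a random-subspace ensemble combined by majority voting, in the spirit of the RSM+MV classifier described above. The base classifier, subspace size and synthetic data are assumptions; the paper's gait features and the LE/HDF extensions are not reproduced.

```python
# Hedged sketch: random subspace ensemble with majority voting (synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=100, n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_models, subspace_dim = 25, 30
votes = np.zeros((len(X_te), 2), dtype=int)

for _ in range(n_models):
    feats = rng.choice(X.shape[1], size=subspace_dim, replace=False)   # random subspace
    clf = KNeighborsClassifier().fit(X_tr[:, feats], y_tr)
    pred = clf.predict(X_te[:, feats])
    votes[np.arange(len(X_te)), pred] += 1                             # accumulate votes

y_hat = votes.argmax(axis=1)                                           # majority vote
print("accuracy:", (y_hat == y_te).mean())
```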

  20. Opportunistic virus DNA levels after pediatric stem cell transplantation: serostatus matching, anti-thymocyte globulin, and total body irradiation are additive risk factors.

    PubMed

    Kullberg-Lindh, C; Mellgren, K; Friman, V; Fasth, A; Ascher, H; Nilsson, S; Lindh, M

    2011-04-01

    Viral opportunistic infections remain a threat to survival after stem cell transplantation (SCT). We retrospectively investigated infections caused by cytomegalovirus (CMV), Epstein-Barr virus (EBV), human herpesvirus type 6 (HHV6), or adenovirus (AdV) during the first 6-12 months after pediatric SCT. Serum samples from 47 consecutive patients were analyzed by quantitative real-time polymerase chain reaction assay. DNAemia at any time point occurred for CMV in 47%, for EBV in 45%, for HHV6 in 28%, and for AdV in 28%. Three patients (6.3%) died of CMV-, EBV-, or AdV-related complications 4, 9, and 24 weeks after SCT, respectively, representing 21% of total mortality. These 3 cases were clearly distinguishable by DNAemia increasing to high levels. Serum positivity for CMV immunoglobulin G in either recipient or donor at the time of SCT, total body irradiation, and anti-thymocyte globulin conditioning were independent risk factors for high CMV or EBV DNA levels. We conclude that DNAemia levels help to distinguish significant viral infections, and that surveillance and prophylactic measures should be focused on patients with risk factors in whom viral complications rapidly can become fatal.

  1. A novel method to produce solid lipid nanoparticles using n-butanol as an additional co-surfactant according to the o/w microemulsion quenching technique.

    PubMed

    Mojahedian, Mohammad M; Daneshamouz, Saeid; Samani, Soliman Mohammadi; Zargaran, Arman

    2013-09-01

    Solid Lipid Nanoparticles (SLN) and Nanostructured Lipid Carriers (NLC) are novel medicinal carriers for controlled drug release and drug targeting in different routes of administration such as parenteral, oral, ophthalmic and topical. These carriers have some benefits such as increased drug stability, high drug payload, the incorporation of lipophilic and hydrophilic drugs, and no biotoxicity. Therefore, due to the cost-efficient, scalable, and reproducible preparation of SLN/NLC and the avoidance of organic solvents, the warm microemulsion quenching method was selected from among several preparation methods for development in this research. To prepare the warm O/W microemulsion, lipids (distearin, stearic acid, beeswax, triolein alone or in combination with others) were melted at a temperature of 65°C. After that, different ratios of Tween60 (10-22.5%) and glyceryl monostearate (surfactant and co-surfactant) and water were added, and the combination was stirred. Then, 1-butanol (co-surfactant) was added dropwise until a clear microemulsion was formed and titration continued to achieve cloudiness (to obtain the microemulsion zone). The warm o/w microemulsions were added dropwise into 4°C water (1:5 volume ratio) while being stirred at 400 or 600 rpm. Lipid nanosuspensions were created upon the addition of the warm o/w microemulsion to the cold water. The SLN were obtained over a range of concentrations of co-surfactants and lipids and observed for microemulsion stability (clearness). For selected preparations, characterization also involved determination of mean particle size, polydispersity and shape. According to the aim of this study, the optimum formulations requiring the minimum amounts of 1-butanol (1.2%) and lower temperatures for creation were selected. Mono-disperse lipid nanoparticles were prepared in the size range 77 ± 1 nm to 124 ± 21 nm according to a laser diffraction particle size analyzer and transmission electron

  2. Determination of Unknown Concentrations of Sodium Acetate Using the Method of Standard Addition and Proton NMR: An Experiment for the Undergraduate Analytical Chemistry Laboratory

    ERIC Educational Resources Information Center

    Rajabzadeh, Massy

    2012-01-01

    In this experiment, students learn how to find the unknown concentration of sodium acetate using both the graphical treatment of standard addition and the standard addition equation. In the graphical treatment of standard addition, the peak area of the methyl peak in each of the sodium acetate standard solutions is found by integration using…
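    A minimal sketch of the graphical treatment described above: regress the measured signal (e.g., the integrated methyl peak area) on the spiked concentration and recover the unknown from the x-intercept of the fitted line. The numbers are illustrative, not the experiment's data.

```python
# Hedged sketch: graphical treatment of standard addition (illustrative data).
import numpy as np

spiked = np.array([0.0, 5.0, 10.0, 15.0, 20.0])     # added standard, mM
signal = np.array([4.1, 6.0, 8.2, 10.1, 11.9])      # integrated peak area, a.u.

slope, intercept = np.polyfit(spiked, signal, 1)
x_intercept = -intercept / slope
unknown_conc = -x_intercept                          # = intercept / slope
print(f"estimated unknown concentration: {unknown_conc:.2f} mM")
```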

  3. Effects of extraction methods and factors on leaching of metals from recycled concrete aggregates.

    PubMed

    Bestgen, Janile O; Cetin, Bora; Tanyu, Burak F

    2016-07-01

    Leaching of metals (calcium (Ca), chromium (Cr), copper (Cu), iron (Fe), and zinc (Zn)) from recycled concrete aggregates (RCAs) was investigated with four different leachate extraction methods (batch water leach tests (WLTs), toxicity leaching procedure tests (TCLP), synthetic precipitation leaching procedure tests (SPLP), and pH-dependent leach tests). WLTs were also used to perform a parametric study to evaluate factors including (i) effects of reaction time, (ii) atmosphere, (iii) liquid-to-solid (L/S) ratio, and (iv) particle size of RCA. The results from WLTs showed that reaction time and exposure to the atmosphere had an impact on the leaching behavior of metals. An increase in L/S ratio decreased the effluent pH and all metal concentrations. Particle size of the RCA had an impact on some metals but not all. Comparison of the leached concentrations of metals from select RCA samples with the WLT method to leached concentrations from the TCLP and SPLP methods revealed significant differences. For the same RCA samples, the highest metal concentrations were obtained with the TCLP method, followed by the WLT and SPLP methods. However, the concentrations of all four (Cr, Cu, Fe, and Zn) metals were below the regulatory limits determined by EPA MCLs in all tests, with few exceptions. pH-dependent batch water leach tests revealed that the leaching pattern for Ca is more cationic, whereas the other metals showed more amphoteric behavior. The results obtained from the pH-dependent tests were evaluated with geochemical modeling (MINTEQA2) to estimate the governing leaching mechanisms for different metals. The results indicated that the releases of the elements were solubility-controlled, except for Cr. PMID:26996910

  4. Effects of extraction methods and factors on leaching of metals from recycled concrete aggregates.

    PubMed

    Bestgen, Janile O; Cetin, Bora; Tanyu, Burak F

    2016-07-01

    Leaching of metals (calcium (Ca), chromium (Cr), copper (Cu), iron (Fe), and zinc (Zn)) from recycled concrete aggregates (RCAs) was investigated with four different leachate extraction methods (batch water leach tests (WLTs), toxicity leaching procedure tests (TCLP), synthetic precipitation leaching procedure tests (SPLP), and pH-dependent leach tests). WLTs were also used to perform a parametric study to evaluate factors including (i) effects of reaction time, (ii) atmosphere, (iii) liquid-to-solid (L/S) ratio, and (iv) particle size of RCA. The results from WLTs showed that reaction time and exposure to the atmosphere had an impact on the leaching behavior of metals. An increase in L/S ratio decreased the effluent pH and all metal concentrations. Particle size of the RCA had an impact on some metals but not all. Comparison of the leached concentrations of metals from select RCA samples with the WLT method to leached concentrations from the TCLP and SPLP methods revealed significant differences. For the same RCA samples, the highest metal concentrations were obtained with the TCLP method, followed by the WLT and SPLP methods. However, the concentrations of all four (Cr, Cu, Fe, and Zn) metals were below the regulatory limits determined by EPA MCLs in all tests, with few exceptions. pH-dependent batch water leach tests revealed that the leaching pattern for Ca is more cationic, whereas the other metals showed more amphoteric behavior. The results obtained from the pH-dependent tests were evaluated with geochemical modeling (MINTEQA2) to estimate the governing leaching mechanisms for different metals. The results indicated that the releases of the elements were solubility-controlled, except for Cr.

  5. Global self-esteem and method effects: competing factor structures, longitudinal invariance and response styles in adolescents

    PubMed Central

    Urbán, Róbert; Szigeti, Réka; Kökönyei, Gyöngyi; Demetrovics, Zsolt

    2013-01-01

    The Rosenberg Self-Esteem Scale (RSES) is a widely used measure for assessing self-esteem, but its factor structure is debated. Our goals were to compare 10 alternative models for RSES; and to quantify and predict the method effects. This sample involves two waves (N=2513 ninth-grade and 2370 tenth-grade students) from five waves of a school-based longitudinal study. RSES was administered in each wave. The global self-esteem factor with two latent method factors yielded the best fit to the data. The global factor explained large amount of the common variance (61% and 46%); however, a relatively large proportion of the common variance was attributed to the negative method factor (34 % and 41%), and a small proportion of the common variance was explained by the positive method factor (5% and 13%). We conceptualized the method effect as a response style, and found that being a girl and having higher number of depressive symptoms were associated with both low self-esteem and negative response style measured by the negative method factor. Our study supported the one global self-esteem construct and quantified the method effects in adolescents. PMID:24061931

  6. Global self-esteem and method effects: competing factor structures, longitudinal invariance, and response styles in adolescents.

    PubMed

    Urbán, Róbert; Szigeti, Réka; Kökönyei, Gyöngyi; Demetrovics, Zsolt

    2014-06-01

    The Rosenberg Self-Esteem Scale (RSES) is a widely used measure for assessing self-esteem, but its factor structure is debated. Our goals were to compare 10 alternative models for the RSES and to quantify and predict the method effects. This sample involves two waves (N =2,513 9th-grade and 2,370 10th-grade students) from five waves of a school-based longitudinal study. The RSES was administered in each wave. The global self-esteem factor with two latent method factors yielded the best fit to the data. The global factor explained a large amount of the common variance (61% and 46%); however, a relatively large proportion of the common variance was attributed to the negative method factor (34 % and 41%), and a small proportion of the common variance was explained by the positive method factor (5% and 13%). We conceptualized the method effect as a response style and found that being a girl and having a higher number of depressive symptoms were associated with both low self-esteem and negative response style, as measured by the negative method factor. Our study supported the one global self-esteem construct and quantified the method effects in adolescents.

  7. The development of decision limits for the GH-2000 detection methodology using additional insulin-like growth factor-I and amino-terminal pro-peptide of type III collagen assays.

    PubMed

    Holt, Richard I G; Böhning, Walailuck; Guha, Nishan; Bartlett, Christiaan; Cowan, David A; Giraud, Sylvain; Bassett, E Eryl; Sönksen, Peter H; Böhning, Dankmar

    2015-09-01

    The GH-2000 and GH-2004 projects have developed a method for detecting GH misuse based on measuring insulin-like growth factor-I (IGF-I) and the amino-terminal pro-peptide of type III collagen (P-III-NP). The objectives were to analyze more samples from elite athletes to improve the reliability of the decision limit estimates, to evaluate whether the existing decision limits needed revision, and to validate further non-radioisotopic assays for these markers. The study included 998 male and 931 female elite athletes. Blood samples were collected according to World Anti-Doping Agency (WADA) guidelines at various sporting events including the 2011 International Association of Athletics Federations (IAAF) World Athletics Championships in Daegu, South Korea. IGF-I was measured by the Immunotech A15729 IGF-I IRMA, the Immunodiagnostic Systems iSYS IGF-I assay and a recently developed mass spectrometry (LC-MS/MS) method. P-III-NP was measured by the Cisbio RIA-gnost P-III-P, Orion UniQ™ PIIINP RIA and Siemens ADVIA Centaur P-III-NP assays. The GH-2000 score decision limits were developed using existing statistical techniques. Decision limits were determined using a specificity of 99.99% and an allowance for uncertainty because of the finite sample size. The revised Immunotech IGF-I - Orion P-III-NP assay combination decision limit did not change significantly following the addition of the new samples. The new decision limits are applied to currently available non-radioisotopic assays to measure IGF-I and P-III-NP in elite athletes, which should allow wider flexibility to implement the GH-2000 marker test for GH misuse while providing some resilience against manufacturer withdrawal or change of assays.

  8. The development of decision limits for the GH-2000 detection methodology using additional insulin-like growth factor-I and amino-terminal pro-peptide of type III collagen assays.

    PubMed

    Holt, Richard I G; Böhning, Walailuck; Guha, Nishan; Bartlett, Christiaan; Cowan, David A; Giraud, Sylvain; Bassett, E Eryl; Sönksen, Peter H; Böhning, Dankmar

    2015-09-01

    The GH-2000 and GH-2004 projects have developed a method for detecting GH misuse based on measuring insulin-like growth factor-I (IGF-I) and the amino-terminal pro-peptide of type III collagen (P-III-NP). The objectives were to analyze more samples from elite athletes to improve the reliability of the decision limit estimates, to evaluate whether the existing decision limits needed revision, and to validate further non-radioisotopic assays for these markers. The study included 998 male and 931 female elite athletes. Blood samples were collected according to World Anti-Doping Agency (WADA) guidelines at various sporting events including the 2011 International Association of Athletics Federations (IAAF) World Athletics Championships in Daegu, South Korea. IGF-I was measured by the Immunotech A15729 IGF-I IRMA, the Immunodiagnostic Systems iSYS IGF-I assay and a recently developed mass spectrometry (LC-MS/MS) method. P-III-NP was measured by the Cisbio RIA-gnost P-III-P, Orion UniQ™ PIIINP RIA and Siemens ADVIA Centaur P-III-NP assays. The GH-2000 score decision limits were developed using existing statistical techniques. Decision limits were determined using a specificity of 99.99% and an allowance for uncertainty because of the finite sample size. The revised Immunotech IGF-I - Orion P-III-NP assay combination decision limit did not change significantly following the addition of the new samples. The new decision limits are applied to currently available non-radioisotopic assays to measure IGF-I and P-III-NP in elite athletes, which should allow wider flexibility to implement the GH-2000 marker test for GH misuse while providing some resilience against manufacturer withdrawal or change of assays. PMID:25645199

  9. The three-isotope method for equilibrium isotope fractionation factor determination: Unfounded optimism

    NASA Astrophysics Data System (ADS)

    Cao, X.; Hayles, J. A.; Bao, H.

    2015-12-01

    The equilibrium isotope fractionation factor α is a fundamental parameter in stable isotope geochemistry. Although equilibrium α can be determined by theoretical calculation or by measurement of natural samples, direct laboratory experiments are ultimately required to verify those results. The attainment of a true exchange equilibrium in experiments is often difficult, but three methods have been devised and used to ensure that an equilibrium α has been obtained in an isotope exchange experiment. These are the two-directional method, partial-exchange method, and three-isotope method. Of these, the three-isotope method is thought to be the most rigorous. Using water-water exchange as a basic unit, we have developed a set of complex exchange models to study when and why the three-isotope method may work well or not. We found that the method cannot promise to lead to an equilibrium α before the kinetic complexity of the specific exchange experiment is known. An equilibrium point in δ17O-δ18O space can be reached only when all of the isotope exchange pathways are fully reversible, i.e. there is no mass loss at any instant, and the forward and backward reactions share the same pathway. If the exchange pathways are not fully reversible, steady state may be reached, but a steady state α can be very different from the equilibrium α. Our results validated the earlier warning that the trajectory for three-isotope evolution in δ17O-δ18O space may be a distinctly curved line or contain more than one straight line due to the non-fully reversible isotope exchange reactions. The three-isotope method for equilibrium α determination is not as rigorous or as promising as it may seem. Instead, the trajectory of three-isotope evolution provides detailed insights into the kinetics of isotope exchange between compounds. If multiple components exist in the exchange system, the δ17O-δ18O evolving trajectory would be more complex.

  10. Analysing factors related to slipping, stumbling, and falling accidents at work: Application of data mining methods to Finnish occupational accidents and diseases statistics database.

    PubMed

    Nenonen, Noora

    2013-03-01

    The utilisation of data mining methods has become common in many fields. In occupational accident analysis, however, these methods are still rarely exploited. This study applies methods of data mining (decision tree and association rules) to the Finnish national occupational accidents and diseases statistics database to analyse factors related to slipping, stumbling, and falling (SSF) accidents at work from 2006 to 2007. SSF accidents at work constitute a large proportion (22%) of all accidents at work in Finland. In addition, they are more likely to result in longer periods of incapacity for work than other workplace accidents. The most important factor influencing whether or not an accident at work is related to SSF is the specific physical activity of movement. In addition, the risk of SSF accidents at work seems to depend on the occupation and the age of the worker. The results were in line with previous research. Hence the application of data mining methods was considered successful. The results did not reveal anything unexpected though. Nevertheless, because of the capability to illustrate a large dataset and relationships between variables easily, data mining methods were seen as a useful supplementary method in analysing occupational accident data.

  11. Solving the Big Data (BD) Problem in Advanced Manufacturing (Subcategory for work done at Georgia Tech. Study Process and Design Factors for Additive Manufacturing Improvement)

    SciTech Connect

    Clark, Brett W.; Diaz, Kimberly A.; Ochiobi, Chinaza Darlene; Paynabar, Kamran

    2015-09-01

    3D printing, originally known as additive manufacturing, is a process of making three-dimensional solid objects from a CAD file. This groundbreaking technology is widely used for industrial and biomedical purposes such as building objects, tools, body parts and cosmetics. An important benefit of 3D printing is the cost reduction and manufacturing flexibility; complex parts are built at a fraction of the price. However, layer-by-layer printing of complex shapes adds error due to surface roughness. Any such error results in poor quality products with inaccurate dimensions. The main purpose of this research is to measure the amount of printing error for parts with different geometric shapes and to analyze it in order to find optimal printing settings that minimize the error. We use a Design of Experiments framework, and focus on studying parts with cone and ellipsoid shapes. We found that the orientation and the shape of the geometric parts have a significant effect on the printing error. From our analysis, we also determined the optimal orientation that gives the least printing error.

  12. An optimization method for importance factors and beam weights based on genetic algorithms for radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Wu, Xingen; Zhu, Yunping

    2001-04-01

    We propose a new method for selecting importance factors (for regions of interest like organs at risk) used to plan conformal radiotherapy. Importance factors, also known as weighting factors or penalty factors, are essential in determining the relative importance of multiple objectives or the penalty ratios of constraints incorporated into cost functions, especially in dealing with dose optimization in radiotherapy treatment planning. Researchers usually choose importance factors on the basis of a trial-and-error process to reach a balance between all the objectives. In this study, we used a genetic algorithm and adopted a real-number encoding method to represent both beam weights and importance factors in each chromosome. The algorithm starts by optimizing the beam weights for a fixed number of iterations then modifying the importance factors for another fixed number of iterations. During the first phase, the genetic operators, such as crossover and mutation, are carried out only on beam weights, and importance factors for each chromosome are not changed or `frozen'. In the second phase, the situation is reversed: the beam weights are `frozen' and the importance factors are changed after crossover and mutation. Through alternation of these two phases, both beam weights and importance factors are adjusted according to a fitness function that describes the conformity of dose distribution in planning target volume and dose-tolerance constraints in organs at risk. Those chromosomes with better fitness are passed into the next generation, showing that they have a better combination of beam weights and importance factors. Although the ranges of the importance factors should be set in advance by using this algorithm, it is much more convenient than selecting specific numbers for importance factors. Three clinical examples are presented and compared with manual plans to verify this method. Three-dimensional standard displays and dose-volume histograms are shown to

  13. Relaxation and approximate factorization methods for the unsteady full potential equation

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Ide, H.; Gorski, J.

    1984-01-01

    The unsteady form of the full potential equation is solved in conservation form, using implicit methods based on approximate factorization and relaxation schemes. A local time linearization for density is introduced to enable solution to the equation in terms of phi, the velocity potential. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity, to treat hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi obtained from requirements of density continuity. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. Results are presented for flows over airfoils, cylinders, and spheres. Comparisons are made with available Euler and full potential results.

  14. Strain energy release rate determination of stress intensity factors by finite element methods

    NASA Technical Reports Server (NTRS)

    Walsh, R. M., Jr.; Pipes, R. B.

    1985-01-01

    The stiffness derivative finite element technique is used to determine the Mode I stress intensity factors for three-crack configurations. The geometries examined include the double edge notch, single edge notch, and the center crack. The results indicate that when the specified guidelines of the Stiffness Derivative Method are used, a high degree of accuracy can be achieved with an optimized, relatively coarse finite element mesh composed of standard, four-node, plane strain, quadrilateral elements. The numerically generated solutions, when compared with analytical ones, yield results within 0.001 percent of each other for the double edge crack, 0.858 percent for the single edge crack, and 2.021 percent for the center crack.

  15. Standardization based on human factors for 3D display: performance characteristics and measurement methods

    NASA Astrophysics Data System (ADS)

    Uehara, Shin-ichi; Ujike, Hiroyasu; Hamagishi, Goro; Taira, Kazuki; Koike, Takafumi; Kato, Chiaki; Nomura, Toshio; Horikoshi, Tsutomu; Mashitani, Ken; Yuuki, Akimasa; Izumi, Kuniaki; Hisatake, Yuzo; Watanabe, Naoko; Umezu, Naoaki; Nakano, Yoshihiko

    2010-02-01

    We are engaged in international standardization activities for 3D displays. We consider that, for sound development of the 3D display market, the standards should be based not only on the mechanism of 3D displays but also on human factors for stereopsis. However, we think that there is no common understanding of what a 3D display should be, and that this situation makes developing the standards difficult. In this paper, to understand the mechanism and human factors, we focus on a double image, which occurs under some conditions on an autostereoscopic display. Although the double image is generally considered an unwanted effect, we consider that whether the double image is unwanted or not depends on the situation and that there are some allowable double images. We tried to classify the double images into unwanted and allowable ones in terms of the display mechanism and visual ergonomics for stereopsis. The issues associated with the double image are closely related to the performance characteristics of the autostereoscopic display. We also propose performance characteristics and measurement and analysis methods to represent interocular crosstalk and motion parallax.

  16. Enhanced power factor of higher manganese silicide via melt spin synthesis method

    SciTech Connect

    Shi, Xiaoya; Li, Qiang; Shi, Xun; Chen, Lidong; Li, Yulong; He, Ying

    2014-12-28

    We report on the thermoelectric properties of the higher manganese silicide MnSi₁.₇₅ synthesized by means of a one-step non-equilibrium method. The ultrahigh cooling rate generated from the melt-spin technique is found to be effective in reducing second phases, which are inevitable during the traditional solid state diffusion processes. Aside from being detrimental to thermoelectric properties, second phases skew the revealing of the intrinsic properties of this class of materials, for example, the optimal level of carrier concentration. With this melt-spin sample, we are able to formulate a simple model based on a single parabolic band that can well describe the carrier concentration dependence of the Seebeck coefficient and power factor of the data reported in the literature. An optimal carrier concentration around 5 × 10²⁰ cm⁻³ at 300 K is predicted according to this model. The phase-pure melt-spin sample shows the largest power factor at high temperature, resulting in the highest zT value among the three samples in this paper.
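
    For context, the textbook single-parabolic-band model with acoustic phonon scattering relates the Seebeck coefficient and the carrier concentration to the reduced Fermi level through Fermi integrals; the expressions below state that generic model, not the specific fit parameters used in the paper.

```latex
% Single parabolic band with acoustic phonon scattering:
S = \pm \frac{k_B}{e} \left( \frac{2 F_1(\eta)}{F_0(\eta)} - \eta \right),
\qquad
n = 4\pi \left( \frac{2 m^{*} k_B T}{h^{2}} \right)^{3/2} F_{1/2}(\eta),
\qquad
F_j(\eta) = \int_0^{\infty} \frac{x^{j}}{1 + e^{\,x - \eta}}\, dx
```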

  17. Enhanced power factor of higher manganese silicide via melt spin synthesis method

    DOE PAGES

    Shi, Xiaoya; Shi, Xun; Li, Yulong; He, Ying; Chen, Lidong; Li, Qiang

    2014-12-30

    We report on the thermoelectric properties of the Higher Manganese Silicide MnSi₁.₇₅ (HMS) synthesized by means of a one-step non-equilibrium method. The ultrahigh cooling rate generated from the melt-spin technique is found to be effective in reducing second phases, which are inevitable during the traditional solid state diffusion processes. Aside from being detrimental to thermoelectric properties, second phases skew the revealing of the intrinsic properties of this class of materials, for example, the optimal level of carrier concentration. With this melt-spin sample, we are able to formulate a simple model based on a single parabolic band that can well describe the carrier concentration dependence of the Seebeck coefficient and power factor of the data reported in the literature. An optimal carrier concentration around 5 × 10²⁰ cm⁻³ at 300 K is predicted according to this model. The phase-pure melt-spin sample shows the largest power factor at high temperature, resulting in the highest zT value among the three samples in this paper; the maximum value is superior to those reported in the literature.

  18. Enhanced power factor of higher manganese silicide via melt spin synthesis method

    SciTech Connect

    Shi, Xiaoya; Shi, Xun; Li, Yulong; He, Ying; Chen, Lidong; Li, Qiang

    2014-12-30

    We report on the thermoelectric properties of the Higher Manganese Silicide MnSi₁.₇₅ (HMS) synthesized by means of a one-step non-equilibrium method. The ultrahigh cooling rate generated from the melt-spin technique is found to be effective in reducing second phases, which are inevitable during the traditional solid state diffusion processes. Aside from being detrimental to thermoelectric properties, second phases skew the revealing of the intrinsic properties of this class of materials, for example, the optimal level of carrier concentration. With this melt-spin sample, we are able to formulate a simple model based on a single parabolic band that can well describe the carrier concentration dependence of the Seebeck coefficient and power factor of the data reported in the literature. An optimal carrier concentration around 5 × 10²⁰ cm⁻³ at 300 K is predicted according to this model. The phase-pure melt-spin sample shows the largest power factor at high temperature, resulting in the highest zT value among the three samples in this paper; the maximum value is superior to those reported in the literature.

  19. Unraveling the Relationship between Motor Symptoms, Affective States and Contextual Factors in Parkinson’s Disease: A Feasibility Study of the Experience Sampling Method

    PubMed Central

    Kuijf, Mark L.; Van Oostenbrugge, Robert J.; van Os, Jim; Leentjens, Albert F. G.

    2016-01-01

    Background In Parkinson's disease (PD), the complex relationship between motor symptoms, affective states, and contextual factors remains to be elucidated. The Experience Sampling Method (ESM) provides a novel approach to this issue. Using a mobile device with a special purpose application (app), motor symptoms, affective states and contextual factors are assessed repeatedly at random moments in the flow of daily life, yielding an intensive time series of symptoms and experience. The aim of this study was to assess the feasibility of this method. Method We studied the feasibility of a five-day period of ESM in PD and its ability to objectify diurnal fluctuations in motor symptom severity and their relation with affect and contextual factors in five PD patients with motor fluctuations. Results Participants achieved a high compliance, with 84% of assessment moments completed without disturbance of daily activities. The utility of the device was rated 8 on a 10-point scale. We were able to capture extensive diurnal fluctuations that were not revealed by routine clinical assessment. In addition, we were able to detect clinically relevant associations between motor symptoms, emotional fluctuations and contextual factors at an intra-individual level. Conclusions ESM represents a viable and novel approach to elucidate relationships between motor symptoms, affective states and contextual factors at the level of individual subjects. ESM holds promise for clinical practice and scientific research. PMID:26962853

  20. Investigation of factors affecting the heater wire method of calibrating fine wire thermocouples

    NASA Technical Reports Server (NTRS)

    Keshock, E. G.

    1972-01-01

    An analytical investigation was made of a transient method of calibrating fine wire thermocouples. The system consisted of a 10 mil diameter standard thermocouple (Pt, Pt-13% Rh) and an 0.8 mil diameter chromel-alumel thermocouple attached to a 20 mil diameter electrically heated platinum wire. The calibration procedure consisted of electrically heating the wire to approximately 2500 F within about a seven-second period in an environment approximating atmospheric conditions at 120,000 feet. Rapid periodic readout of the standard and fine wire thermocouple signals permitted a comparison of the two temperature indications. An analysis was performed which indicated that the temperature distortion at the heater wire produced by the thermocouple junctions appears to be of negligible magnitude. Consequently, the calibration technique appears to be basically sound, although several practical changes which appear desirable are presented and discussed. Additional investigation is warranted to evaluate radiation effects and transient response characteristics.

  1. Predicting tree species presence and basal area in Utah: A comparison of stochastic gradient boosting, generalized additive models, and tree-based methods

    USGS Publications Warehouse

    Moisen, G.G.; Freeman, E.A.; Blackard, J.A.; Frescino, T.S.; Zimmermann, N.E.; Edwards, T.C.

    2006-01-01

    Many efforts are underway to produce broad-scale forest attribute maps by modelling forest class and structure variables collected in forest inventories as functions of satellite-based and biophysical information. Typically, variants of classification and regression trees implemented in Rulequest's See5 and Cubist (for binary and continuous responses, respectively) are the tools of choice in many of these applications. These tools are widely used in large remote sensing applications, but are not easily interpretable, do not have ties with survey estimation methods, and use proprietary unpublished algorithms. Consequently, three alternative modelling techniques were compared for mapping presence and basal area of 13 species located in the mountain ranges of Utah, USA. The modelling techniques compared included the widely used See5/Cubist, generalized additive models (GAMs), and stochastic gradient boosting (SGB). Model performance was evaluated using independent test data sets. Evaluation criteria for mapping species presence included specificity, sensitivity, Kappa, and area under the curve (AUC). Evaluation criteria for the continuous basal area variables included correlation and relative mean squared error. For predicting species presence (setting thresholds to maximize Kappa), SGB had higher values for the majority of the species for specificity and Kappa, while GAMs had higher values for the majority of the species for sensitivity. In evaluating resultant AUC values, GAM and/or SGB models had significantly better results than the See5 models where significant differences could be detected between models. For nine out of 13 species, basal area prediction results for all modelling techniques were poor (correlations less than 0.5 and relative mean squared errors greater than 0.8), but SGB provided the most stable predictions in these instances. SGB and Cubist performed equally well for modelling basal area for three species with moderate prediction success
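
    As a small illustration of the kind of presence/absence evaluation described here, the sketch below fits a boosted classifier on synthetic data and scores it with AUC and a Kappa-maximizing threshold. It uses scikit-learn's GradientBoostingClassifier as a generic stand-in for stochastic gradient boosting and does not reproduce the authors' data or models.

```python
# Illustrative presence/absence evaluation on synthetic data (not the study's data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# subsample < 1 gives the "stochastic" variant of gradient boosting.
model = GradientBoostingClassifier(subsample=0.5, random_state=0)
model.fit(X_tr, y_tr)

prob = model.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, prob), 3))

# Choose the probability threshold that maximizes Kappa, as in the record.
thresholds = np.linspace(0.05, 0.95, 19)
kappas = [cohen_kappa_score(y_te, (prob >= t).astype(int)) for t in thresholds]
print("max Kappa:", round(max(kappas), 3), "at threshold", thresholds[int(np.argmax(kappas))])
```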

  2. Inductively coupled plasma spectrometry: Noise characteristics of aerosols, application of generalized standard additions method, and Mach disk as an emission source

    SciTech Connect

    Shen, Luan

    1995-10-06

    This dissertation is focused on three problem areas in the performance of inductively coupled plasma (ICP) source. The noise characteristics of aerosols produced by ICP nebulizers are investigated. A laser beam is scattered by aerosol and detected by a photomultiplier tube and the noise amplitude spectrum of the scattered radiation is measured by a spectrum analyzer. Discrete frequency noise in the aerosol generated by a Meinhard nebulizer or a direct injection nebulizer is primarily caused by pulsation in the liquid flow from the pump. A Scott-type spray chamber suppresses white noise, while a conical, straight-pass spray chamber enhances white noise, relative to the noise seen from the primary aerosol. Simultaneous correction for both spectral interferences and matrix effects in ICP atomic emission spectrometry (AES) can be accomplished by using the generalized standard additions method (GSAM). Results obtained with the application of the GSAM to the Perkin-Elmer Optima 3000 ICP atomic emission spectrometer are presented. The echelle-based polychromator with segmented-array charge-coupled device detectors enables the direct, visual examination of the overlapping lines Cd (1) 228.802 nm and As (1) 228.812 nm. The slit translation capability allows a large number of data points to be sampled, therefore, the advantage of noise averaging is gained. An ICP is extracted into a small quartz vacuum chamber through a sampling orifice in a water-cooled copper plate. Optical emission from the Mach disk region is measured with a new type of echelle spectrometer equipped with two segmented-array charge-coupled-device detectors, with an effort to improve the detection limits for simultaneous multielement analysis by ICP-AES.
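
    To illustrate the generalized standard additions idea in its simplest form (synthetic sensitivities and concentrations only, nothing instrument-specific), multichannel responses can be regressed on the spiked amounts of several analytes and the original concentrations recovered from the fitted intercepts and sensitivities:

```python
# Toy generalized standard additions calculation with made-up numbers.
import numpy as np

rng = np.random.default_rng(0)
n_additions, n_analytes, n_channels = 8, 2, 3
K = np.array([[1.0, 0.2],          # assumed sensitivity of each channel to each analyte
              [0.1, 0.8],
              [0.4, 0.5]])
c0 = np.array([2.0, 5.0])          # "unknown" original concentrations

added = rng.uniform(0, 10, (n_additions, n_analytes))                    # spiked amounts
R = (added + c0) @ K.T + rng.normal(0, 0.01, (n_additions, n_channels))  # responses

# Least-squares fit R = added @ K_hat.T + b (column of ones for the intercepts).
A = np.hstack([added, np.ones((n_additions, 1))])
coef, *_ = np.linalg.lstsq(A, R, rcond=None)
K_hat, b = coef[:n_analytes].T, coef[n_analytes]     # fitted sensitivities and intercepts
c0_hat = np.linalg.lstsq(K_hat, b, rcond=None)[0]    # solve K_hat @ c0 = b
print(c0_hat)                                        # should be close to [2.0, 5.0]
```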

  3. Factors Influencing Adherence to Antiretroviral Treatment in Nepal: A Mixed-Methods Study

    PubMed Central

    Wasti, Sharada P.; Simkhada, Padam; Randall, Julian; Freeman, Jennifer V.; van Teijlingen, Edwin

    2012-01-01

    Background Antiretroviral therapy (ART) is a lifesaver for individual patients treated for Human Immunodeficiency Virus (HIV) and Acquired Immune Deficiency Syndrome (AIDS). Maintaining optimal adherence to antiretroviral drugs is essential for HIV infection management. This study aimed to understand the factors influencing adherence amongst ART-prescribed patients and care providers in Nepal. Methods A cross-sectional mixed-methods study surveying 330 ART-prescribed patients and 34 in-depth interviews with three different types of stakeholders: patients, care providers, and key people at policy level. Adherence was assessed through survey self-reporting and during the interviews. A multivariate logistic regression model was used to identify factors associated with adherence, supplemented with a thematic analysis of the interview transcripts. Results A total of 282 (85.5%) respondents reported complete adherence, i.e. no missed doses in the four weeks prior to interview. Major factors influencing adherence were: non-disclosure of HIV status (OR = 17.99, p = 0.014); alcohol use (OR = 12.89, p < 0.001), being female (OR = 6.91, p = 0.001), being illiterate (OR = 4.58, p = 0.015), side-effects (OR = 6.04, p = 0.025), ART started ≤24 months (OR = 3.18, p = 0.009), travel time to hospital >1 hour (OR = 2.84, p = 0.035). Similarly, lack of knowledge and negative perception towards ART medications also significantly affected non-adherence. Transport costs (for repeat prescription), followed by pills running out, not wanting others to notice, side-effects, and being busy were the most common reasons for non-adherence. The interviews also revealed religious or ritual obstacles, stigma and discrimination, ART-associated costs, transport problems, lack of support, and side-effects as contributing to non-adherence. Conclusion Improving adherence requires a supportive environment; accessible treatment; clear

  4. Factors associated with recurrence of clubfoot treated by the Ponseti method

    PubMed Central

    Azarpira, Mohammad Reza; Emami, Mohammad Jafar; Vosoughi, Amir Reza; Rahbari, Keivan

    2016-01-01

    AIM To assess the effect of several associated factors on the recurrence of clubfoot after successful correction by the Ponseti method. METHODS A total of 115 children with 196 clubfoot deformities, treated by the Ponseti method, were evaluated. Demographic data, family history of clubfoot in first-degree relatives, maternal educational level and brace compliance were obtained. Based on their medical files, the characteristics of the patients at the time of presentation such as age, possible associated neuromuscular disease or special syndrome, severity of the deformity according to the Dimeglio grade and Pirani score, residual deformity after previous Ponseti method and number of casts needed for the correction were recorded. RESULTS There were 83 boys (72.2%) and 32 girls (27.8%) with a male to female ratio of 2.6. The mean age at the initiation of treatment was 5.4 d (range: 1 to 60 d). The average number of casts applied to achieve complete correction of all clubfoot deformities was 4.2. Follow-up range was 11 to 60 mo. In total, 39 feet had recurrence with a minimum Dimeglio grade of 1 or Pirani score of 0.5 at the follow-up visit. More recurrence was observed in non-idiopathic clubfoot deformities (P = 0.001), non-compliance to wear braces (P < 0.001), low educational level of mother (P = 0.033), increased number of casts (P < 0.001), and more follow-up periods (P < 0.001). No increase in the possibility of recurrence was observed when the previous unsuccessful casting was further treated using the Ponseti method (P = 0.091). Also, no significant correlation was found for variables of age (P = 0.763), Dimeglio grade (P = 0.875), and Pirani score (P = 0.624) obtained at the beginning of the serial casting. CONCLUSION Using the Ponseti method, non-idiopathic clubfoot, non-compliance to wear braces, low educational level of mother, increased number of casts and more follow-up periods were more strongly associated with an increased recurrence rate after correction of clubfoot

  5. Electrophilic addition of astatine

    SciTech Connect

    Norseev, Yu.V.; Vasaros, L.; Nhan, D.D.; Huan, N.K.

    1988-03-01

    It has been shown for the first time that astatine is capable of undergoing addition reactions to unsaturated hydrocarbons. A new compound of astatine, viz., ethylene astatohydrin, has been obtained, and its retention numbers on squalane, Apiezon, and tricresyl phosphate have been found. The influence of various factors on the formation of ethylene astatohydrin has been studied. It has been concluded on the basis of the results obtained that the univalent cation of astatine in an acidic medium is protonated hypoastatous acid.

  6. Bayesian methods for uncertainty factor application for derivation of reference values.

    PubMed

    Simon, Ted W; Zhu, Yiliang; Dourson, Michael L; Beck, Nancy B

    2016-10-01

    In 2014, the National Research Council (NRC) published Review of EPA's Integrated Risk Information System (IRIS) Process that considers methods EPA uses for developing toxicity criteria for non-carcinogens. These criteria are the Reference Dose (RfD) for oral exposure and Reference Concentration (RfC) for inhalation exposure. The NRC Review suggested using Bayesian methods for application of uncertainty factors (UFs) to adjust the point of departure dose or concentration to a level considered to be without adverse effects for the human population. The NRC foresaw Bayesian methods would be potentially useful for combining toxicity data from disparate sources: high-throughput assays, animal testing, and observational epidemiology. UFs represent five distinct areas for which both adjustment and consideration of uncertainty may be needed. NRC suggested UFs could be represented as Bayesian prior distributions, illustrated the use of a log-normal distribution to represent the composite UF, and combined this distribution with a log-normal distribution representing uncertainty in the point of departure (POD) to reflect the overall uncertainty. Here, we explore these suggestions and present a refinement of the methodology suggested by NRC that considers each individual UF as a distribution. From an examination of 24 evaluations from EPA's IRIS program, when individual UFs were represented using this approach, the geometric mean fold change in the value of the RfD or RfC increased from 3 to over 30, depending on the number of individual UFs used and the sophistication of the assessment. We present example calculations and recommendations for implementing the refined NRC methodology. PMID:27211295
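
    A minimal Monte Carlo sketch of the refined idea, treating each uncertainty factor as its own log-normal distribution and dividing a log-normal point of departure by their product, might look as follows. Every distribution parameter here is an illustrative assumption, not a value taken from IRIS or from the paper.

```python
# Illustrative Monte Carlo combination of a POD with individual uncertainty factors.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Point of departure (mg/kg-day) with assumed log-normal uncertainty.
pod = rng.lognormal(mean=np.log(10.0), sigma=0.4, size=n)

# Each uncertainty factor as its own log-normal distribution (illustrative
# geometric means of ~3 and modest spread).
uf_names = ["interspecies", "intraspecies", "subchronic_to_chronic"]
uf_product = np.prod([rng.lognormal(np.log(3.0), 0.5, size=n) for _ in uf_names], axis=0)

rfd_dist = pod / uf_product
print("median RfD:", np.median(rfd_dist))
print("5th percentile RfD:", np.percentile(rfd_dist, 5))
```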

  7. Bayesian methods for uncertainty factor application for derivation of reference values.

    PubMed

    Simon, Ted W; Zhu, Yiliang; Dourson, Michael L; Beck, Nancy B

    2016-10-01

    In 2014, the National Research Council (NRC) published Review of EPA's Integrated Risk Information System (IRIS) Process that considers methods EPA uses for developing toxicity criteria for non-carcinogens. These criteria are the Reference Dose (RfD) for oral exposure and Reference Concentration (RfC) for inhalation exposure. The NRC Review suggested using Bayesian methods for application of uncertainty factors (UFs) to adjust the point of departure dose or concentration to a level considered to be without adverse effects for the human population. The NRC foresaw Bayesian methods would be potentially useful for combining toxicity data from disparate sources: high-throughput assays, animal testing, and observational epidemiology. UFs represent five distinct areas for which both adjustment and consideration of uncertainty may be needed. NRC suggested UFs could be represented as Bayesian prior distributions, illustrated the use of a log-normal distribution to represent the composite UF, and combined this distribution with a log-normal distribution representing uncertainty in the point of departure (POD) to reflect the overall uncertainty. Here, we explore these suggestions and present a refinement of the methodology suggested by NRC that considers each individual UF as a distribution. From an examination of 24 evaluations from EPA's IRIS program, when individual UFs were represented using this approach, the geometric mean fold change in the value of the RfD or RfC increased from 3 to over 30, depending on the number of individual UFs used and the sophistication of the assessment. We present example calculations and recommendations for implementing the refined NRC methodology.

  8. A method for obtaining first integrals and integrating factors of autonomous systems and application to Euler-Poisson equations

    NASA Astrophysics Data System (ADS)

    Hu, Yanxia; Yang, Xiaozhong

    2006-08-01

    A method for obtaining first integrals and integrating factors of n-th order autonomous systems is proposed. The search for first integrals and integrating factors can be reduced to the search for a class of invariant manifolds of the systems. Finally, the proposed method is applied to the Euler-Poisson equations (gyroscope system), and the fourth first integral of the system in the general Kovalevskaya case can be obtained.
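
    For readers unfamiliar with the terminology: for an autonomous system a first integral is a function that stays constant along trajectories, and for a planar system written as a differential one-form an integrating factor makes that form exact. In symbols (the general definitions, not the paper's invariant-manifold construction):

```latex
% First integral of an autonomous system and integrating factor of a planar form.
\dot{x} = f(x), \qquad
I \text{ is a first integral} \iff \frac{dI}{dt} = \nabla I \cdot f \equiv 0;
\qquad
M\,dx + N\,dy = 0, \qquad
\mu \text{ is an integrating factor} \iff
\frac{\partial (\mu M)}{\partial y} = \frac{\partial (\mu N)}{\partial x}
```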

  9. Non-valvular atrial fibrillation patients with none or one additional risk factor of the CHA2DS2-VASc score. A comprehensive net clinical benefit analysis for warfarin, aspirin, or no therapy.

    PubMed

    Lip, Gregory Y H; Skjøth, Flemming; Nielsen, Peter B; Larsen, Torben Bjerregaard

    2015-10-01

    Oral anticoagulation (OAC) to prevent stroke has to be balanced against the potential harm of serious bleeding, especially intracranial haemorrhage (ICH). We determined the net clinical benefit (NCB) balancing effectiveness and safety of no antithrombotic therapy, aspirin and warfarin in AF patients with none or one stroke risk factor. Using Danish registries, we determined NCB using various definitions intrinsic to our cohort (Danish weights at 1- and 5-year follow-up), with risk weights which were derived from the hazard ratio (HR) of death following an event, relative to the HR of death after ischaemic stroke. When aspirin was compared to no treatment, NCB was neutral or negative for both risk strata. For warfarin vs no treatment, NCB using Danish weights was neutral where no risk factors were present, using five-year follow-up. For one stroke risk factor, NCB was positive for warfarin vs no treatment, for one-year and five-year follow-up. For warfarin vs aspirin use in patients with no risk factors, NCB was positive with one-year follow-up, but neutral with five-year follow-up. With one risk factor, NCB was generally positive for warfarin vs aspirin. In conclusion, we show a positive overall advantage (i.e. positive NCB) of effective stroke prevention with OAC, compared to no therapy or aspirin with one additional stroke risk factor, using Danish weights. 'Low risk' AF patients with no additional stroke risk factors (i.e. CHA2DS2-VASc 0 in males, 1 in females) do not derive any advantage (neutral or negative NCB) with aspirin, nor with warfarin therapy in the long run.

  10. A Randomized Study of the Effects of Additional Fruit and Nuts Consumption on Hepatic Fat Content, Cardiovascular Risk Factors and Basal Metabolic Rate

    PubMed Central

    Romu, Thobias; Dahlqvist-Leinhard, Olof; Borga, Magnus; Leandersson, Per; Nystrom, Fredrik H.

    2016-01-01

    Background Fruit has long been advocated as a healthy source of many nutrients; however, the high content of sugars in fruit might be a concern. Objectives To study effects of an increased fruit intake compared with a similar amount of extra calories from nuts in humans. Methods Thirty healthy non-obese participants were randomized to either supplement the diet with fruits or nuts, each at +7 kcal/kg bodyweight/day for two months. Major endpoints were change of hepatic fat content (HFC, by magnetic resonance imaging, MRI), basal metabolic rate (BMR, with indirect calorimetry) and cardiovascular risk markers. Results Weight gain was numerically similar in both groups although only statistically significant in the group randomized to nuts (fruit: from 22.15±1.61 kg/m2 to 22.30±1.7 kg/m2, p = 0.24; nuts: from 22.54±2.26 kg/m2 to 22.73±2.28 kg/m2, p = 0.045). On the other hand, BMR increased in the nut group only (p = 0.028). Only the nut group reported a net increase of calories (from 2519±721 kcal/day to 2763±595 kcal/day, p = 0.035) according to 3-day food registrations. Despite an almost three-fold reported increase in fructose intake in the fruit group (from 9.1±6.0 gram/day to 25.6±9.6 gram/day, p<0.0001; nuts: from 12.4±5.7 gram/day to 6.5±5.3 gram/day, p = 0.007) there was no change of HFC. The numerical increase in fasting insulin was statistically significant only in the fruit group (from 7.73±3.1 pmol/l to 8.81±2.9 pmol/l, p = 0.018; nuts: from 7.29±2.9 pmol/l to 8.62±3.0 pmol/l, p = 0.14). Levels of vitamin C increased in both groups while the α-tocopherol/cholesterol ratio increased only in the fruit group. Conclusions Although BMR increased in the nut group only, this was not linked with differences in weight gain between groups, which potentially could be explained by the lack of a reported net caloric increase in the fruit group. In healthy non-obese individuals, an increased fruit intake seems safe from a cardiovascular risk perspective, including

  11. Determining of Factors Influencing the Success and Failure of Hospital Information System and Their Evaluation Methods: A Systematic Review

    PubMed Central

    Sadoughi, Farahnaz; Kimiafar, Khalil; Ahmadi, Maryam; Shakeri, Mohammad Taghi

    2013-01-01

    Background: Nowadays, using new information technology (IT) has provided remarkable opportunities to decrease medical errors, support health care specialists, and increase the efficiency and even the quality of patient’s care and safety. Objectives: The purpose of this study was the identification of Hospital Information System (HIS) success and failure factors and the evaluation methods of these factors. This research emphasizes the need for a comprehensive evaluation of HISs which considers a wide range of success and failure factors in these systems. Materials and Methods: We searched for relevant English language studies based on keywords in title and abstract, using PubMed, Ovid Medline (by applying MeSH terms), Scopus, ScienceDirect and Embase (earliest entry to March 17, 2012). Studies which considered success models and success or failure factors, or studied the evaluation models of HISs and the related ones, were chosen. Since the studies used in this systematic review were heterogeneous, the combination of extracted data was carried out by using the narrative synthesis method. Results: We found 16 articles which required detailed analysis. Finally, the suggested framework includes 12 main factors (functional, organizational, behavioral, cultural, management, technical, strategy, economy, education, legal, ethical and political factors), 67 sub factors, and 33 suggested methods for the evaluation of these sub factors. Conclusions: The results of the present research indicate that the emphasis of HIS evaluation moves from technical subjects to human and organizational subjects, and from objective to subjective issues. Therefore, this issue entails more familiarity with more qualitative evaluation methods. In most of the reviewed studies, the main focus has been laid on the necessity of using multi-method approaches and combining methods to obtain more comprehensive and useful results. PMID:24693386

  12. Additively Manufactured 3D Porous Ti-6Al-4V Constructs Mimic Trabecular Bone Structure and Regulate Osteoblast Proliferation, Differentiation and Local Factor Production in a Porosity and Surface Roughness Dependent Manner

    PubMed Central

    Cheng, Alice; Humayun, Aiza; Cohen, David J.; Boyan, Barbara D.; Schwartz, Zvi

    2014-01-01

    Additive manufacturing by laser sintering is able to produce high resolution metal constructs for orthopaedic and dental implants. In this study, we used a human trabecular bone template to design and manufacture Ti-6Al-4V constructs with varying porosity via laser sintering. Characterization of constructs revealed interconnected porosities ranging from 15–70% with compressive moduli of 2063–2954 MPa. These constructs with macro porosity were further surface-treated to create a desirable multi-scale micro-/nano-roughness, which has been shown to enhance the osseointegration process. Osteoblasts (MG63 cells) exhibited high viability when grown on the constructs. Proliferation (DNA) and alkaline phosphatase specific activity (ALP), an early differentiation marker, decreased as porosity increased, while osteocalcin (OCN), a late differentiation marker, as well as osteoprotegerin (OPG), vascular endothelial growth factor (VEGF) and bone morphogenetic proteins 2 and 4 (BMP2, BMP4) increased with increasing porosity. 3D constructs with the highest porosity and surface modification supported the greatest osteoblast differentiation and local factor production. These results indicate that additively manufactured 3D porous constructs mimicking human trabecular bone and produced with additional surface treatment can be customized for increased osteoblast response. Increased factors for osteoblast maturation and differentiation on high porosity constructs suggest the enhanced performance of these surfaces for increasing osseointegration in vivo. PMID:25287305

  13. Extracellular nonmitogenic angiogenesis factor and method of isolation thereof from wound fluid

    DOEpatents

    Banda, Michael J.; Werb, Zena; Knighton, David R.; Hunt, Thomas K.

    1985-01-01

    A nonmitogenic angiogenesis factor is isolated from wound fluid by dialysis to include materials in the molecular size range of 2,000 to 14,000, lyophilization, and chromatography. The nonmitogenic angiogenesis factor is identified by activity by corneal implant assay and by cell migration assay. The angiogenesis factor is also characterized by inactivity by mitogenesis assay.

  14. Extracellular nonmitogenic angiogenesis factor and method of isolation thereof from wound fluid

    DOEpatents

    Banda, M.J.; Werb, Z.; Knighton, D.R.; Hunt, T.K.

    1985-03-05

    A nonmitogenic angiogenesis factor is isolated from wound fluid by dialysis to include materials in the molecular size range of 2,000 to 14,000, lyophilization, and chromatography. The nonmitogenic angiogenesis factor is identified by activity by corneal implant assay and by cell migration assay. The angiogenesis factor is also characterized by inactivity by mitogenesis assay. 3 figs.

  15. Smog control fuel additives

    SciTech Connect

    Lundby, W.

    1993-06-29

    A method is described for controlling, reducing, or eliminating ozone and related smog resulting from photochemical reactions between ozone and automotive or industrial gases, comprising the addition of iodine or compounds of iodine to hydrocarbon-base fuels prior to or during combustion in an amount of about 1 part iodine per 240 to 10,000,000 parts fuel, by weight, to be accomplished by: (a) the addition of these inhibitors during or after the refining or manufacturing process of liquid fuels; (b) the production of these inhibitors for addition into fuel tanks, such as automotive or industrial tanks; or (c) the addition of these inhibitors into combustion chambers of equipment utilizing solid fuels for the purpose of reducing ozone.

  16. Ultrasonic broadband characterization of a viscous liquid: methods and perturbation factors.

    PubMed

    Ghodhbani, Nacef; Marechal, Pierre; Duflo, Hugues

    2015-02-01

    The perturbation factors involved in ultrasonic broadband characterization of viscous fluids are analyzed. Precisely, the normal incidence error and the thermal sensitivity of the properties have been identified as dominant parameters. Thus, the sensitivity of the ultrasonic parameters of attenuation and phase velocity were measured at room temperature in the MHz frequency range for two reference silicone oils, namely 47V50 and 47V350 (Rhodorsil). Several methods of characterization were carried out: time of flight, cross-correlation and spectral method. These ultrasonic parameters are measured at room temperature. For this family of silicone oil, the dispersion of the attenuation spectrum is modeled by a power law. The velocity dispersion is modeled by two dispersion models: the quasi-local and the temporal causal. The impact of the experimental reproducibility of the phase velocity and acoustic attenuation was measured in the MHz frequency range, using a set of ultrasonic transducers with different center frequencies. These measurements are used to identify the dispersion of the ultrasonic parameters as a function of the frequency. A first experimental and descriptive approach is developed to assess the reproducibility of the normal incidence between the acoustic beam and the viscoelastic material. Thus, the relative error on the measurements of velocity and attenuation are directly related to the angular deviation of the ultrasonic wave, as well as the sampling and signal-to-noise ratio. A second experimental and phenomenological approach deals with the effect of a temperature change, typical of a polymerization reaction. As a result, the sensitivity of the phase velocity of silicone oil 47V50 was evaluated around -2 m s⁻¹ K⁻¹. PMID:25238692
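
    As a toy illustration of the cross-correlation approach to time-of-flight estimation mentioned above (synthetic pulse, made-up sampling rate and delay, no temperature or incidence effects):

```python
# Time-of-flight estimation by cross-correlation on a synthetic ultrasonic pulse.
import numpy as np

fs = 100e6                                    # assumed sampling frequency, Hz
t = np.arange(0, 20e-6, 1 / fs)
pulse = np.exp(-((t - 2e-6) / 0.3e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)

true_delay = 3.7e-6                           # assumed delay, seconds
delayed = np.interp(t - true_delay, t, pulse, left=0.0, right=0.0)

xcorr = np.correlate(delayed, pulse, mode="full")
lag = np.argmax(xcorr) - (len(pulse) - 1)     # lag (in samples) at the correlation peak
print("estimated delay: %.2f us" % (lag / fs * 1e6))
```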

  17. Near-field imaging of obstacles with the factorization method: fluid-solid interaction

    NASA Astrophysics Data System (ADS)

    Yin, Tao; Hu, Guanghui; Xu, Liwei; Zhang, Bo

    2016-01-01

    Consider a time-harmonic acoustic point source incident on a bounded isotropic linearly elastic body immersed in a homogeneous compressible inviscid fluid. This paper is concerned with the inverse fluid-solid interaction problem of recovering the elastic body from near-field data generated by infinitely many incident point source waves at a fixed energy. The incident point sources and the receivers for recording scattered signals are both located on a non-spherical closed surface, on which an outgoing-to-incoming operator is appropriately defined. We provide a theoretical justification of the factorization method for precisely characterizing the scatterer by utilizing the spectrum of the near-field operator. This generalizes the imaging scheme developed in (Hu et al 2014 Inverse Problems 30 095005) to the case when near-field data are measured on non-spherical surfaces. Numerical examples in 2D are demonstrated to show the validity and accuracy of the inversion algorithm, even if limited aperture data are available on one or several line segments.
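
    For orientation only, the classical factorization method characterizes the scatterer through a Picard-type series test on the spectral data of a suitably combined data operator; the schematic form is sketched below, while the precise operator factorization and test function for the near-field fluid-solid setting are constructed in the paper itself.

```latex
% Schematic Picard criterion of the factorization method: (\lambda_j, \psi_j) is
% the eigensystem of the positive operator F_# built from the data operator F,
% and \phi_z is the test function attached to the sampling point z.
F_{\#} = \lvert \operatorname{Re} F \rvert + \lvert \operatorname{Im} F \rvert,
\qquad
z \in D \;\Longleftrightarrow\;
\sum_{j} \frac{\lvert \langle \phi_z , \psi_j \rangle \rvert^{2}}{\lambda_j} < \infty
```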

  18. Factors affecting the clinical use of non-invasive prenatal testing: a mixed methods systematic review.

    PubMed

    Skirton, Heather; Patch, Christine

    2013-06-01

    Non-invasive prenatal testing has been in clinical use for a decade; however, there is evidence that this technology will be more widely applied within the next few years. Guidance is therefore required to ensure that the procedure is offered in a way that is evidence based and ethically and clinically acceptable. We conducted a systematic review of the current relevant literature to ascertain the factors that should be considered when offering non-invasive prenatal testing in a clinical setting. We undertook a systematic search of relevant databases, journals and reference lists, and from an initial list of 298 potential papers, identified 11 that were directly relevant to the study. Original data were extracted and presented in a table, and the content of all papers was analysed and presented in narrative form. Four main themes emerged: perceived attributes of the test, regulation and ethical issues, non-invasive prenatal testing in practice and economic considerations. However, there was a basic difference in the approach of actual or potential service users, who were very positive about the benefits of the technology, compared with other research participants, who were concerned with the potential moral and ethical outcomes of using this testing method. Recommendations for the appropriate use of non-invasive prenatal testing are made.

  19. Growth factor delivery methods in the management of sports injuries: the state of play.

    PubMed

    Creaney, L; Hamilton, B

    2008-05-01

    In recent years there have been rapid developments in the use of growth factors for accelerated healing of injury. Growth factors have been used in maxillo-facial and plastic surgery with success and the technology is now being developed for orthopaedics and sports medicine applications. Growth factors mediate the biological processes necessary for repair of soft tissues such as muscle, tendon and ligament following acute traumatic or overuse injury, and animal studies have demonstrated clear benefits in terms of accelerated healing. There are various ways of delivering higher doses of growth factors to injured tissue, but each has in common a reliance on release of growth factors from blood platelets. Platelets contain growth factors in their alpha-granules (insulin-like growth factor-1, basic fibroblast growth factor, platelet-derived growth factor, epidermal growth factor, vascular endothelial growth factor, transforming growth factor-beta(1)) and these are released upon injection at the site of an injury. Three commonly utilised techniques are known as platelet-rich plasma, autologous blood injections and autologous conditioned serum. Each of these techniques has been studied clinically in humans to a very limited degree so far, but results are promising in terms of earlier return to play following muscle and particularly tendon injury. The use of growth factors in sports medicine is restricted under the terms of the World Anti-Doping Agency (WADA) anti-doping code, particularly because of concerns regarding the insulin-like growth factor-1 content of such preparations, and the potential for abuse as performance-enhancing agents. The basic science and clinical trials related to the technology are reviewed, and the use of such agents in relation to the WADA code is discussed. PMID:17984193

  20. 41 CFR 301-70.101 - What factors must we consider in determining which method of transportation results in the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management (2010-07-01): What factors must we consider in determining which method of transportation results in the greatest advantage to the Government? In selecting...

  1. Enrollment Forecasting with Double Exponential Smoothing: Two Methods for Objective Weight Factor Selection. AIR Forum 1980 Paper.

    ERIC Educational Resources Information Center

    Gardner, Don E.

    The merits of double exponential smoothing are discussed relative to other types of pattern-based enrollment forecasting methods. The difficulties associated with selecting an appropriate weight factor are discussed, and their potential effects on prediction results are illustrated. Two methods for objectively selecting the "best" weight factor…
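
    For readers unfamiliar with the technique: Brown's double exponential smoothing uses a single weight factor, and one simple "objective" selection rule is a grid search over that factor against one-step-ahead forecast error. The sketch below uses an invented enrollment series and is not drawn from the paper.

```python
# Brown's double exponential smoothing with a grid-searched weight factor
# (invented enrollment numbers).
import numpy as np

enrol = np.array([4200, 4350, 4500, 4700, 4850, 5050, 5200, 5400], dtype=float)

def brown_forecasts(x, alpha):
    """One-step-ahead forecasts from Brown's double exponential smoothing."""
    s1 = s2 = x[0]
    preds = [x[0]]
    for value in x[:-1]:
        s1 = alpha * value + (1 - alpha) * s1       # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2          # second smoothing
        level = 2 * s1 - s2
        trend = alpha / (1 - alpha) * (s1 - s2)
        preds.append(level + trend)                 # forecast for the next period
    return np.array(preds)

alphas = np.linspace(0.05, 0.95, 19)
sse = [np.sum((enrol - brown_forecasts(enrol, a)) ** 2) for a in alphas]
print("selected weight factor:", round(alphas[int(np.argmin(sse))], 2))
```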

  2. [Additional administration of dutasteride in patients with benign prostatic hyperplasia who did not respond sufficiently to α1-adrenoceptor antagonist : investigation of clinical factors affecting the therapeutic effect of dutasteride].

    PubMed

    Masuda, Mitsunobu; Murai, Tetsuo; Osada, Yutaka; Kawai, Masaki; Kasuga, Jun; Yokomizo, Yumiko; Kuroda, Shinnosuke; Nakamura, Mami; Noguchi, Go

    2014-02-01

    We performed additional administration of dutasteride in patients who did not respond sufficiently to α1-adrenoceptor antagonist treatment for lower urinary tract symptoms (LUTS) associated with benign prostatic hyperplasia (BPH) (LUTS/BPH). Among 76 registered patients, efficacy was analyzed in 58 patients. International Prostate Symptom Score (IPSS), subscores for voiding and storage symptoms and quality of life (QOL) on the IPSS, and Overactive Bladder Symptom Score (OABSS) were all significantly improved from the third month of administration compared to the time of initiating additional administration of dutasteride. Additional administration of dutasteride also significantly reduced prostate volume, and residual urine with the exception of the sixth month after administration. Age at initiation of administration and voiding symptom subscore on the IPSS were clinical factors affecting the therapeutic effects of dutasteride. The rate of improvement with treatment decreased with increasing age at initiation of dutasteride administration, and increased as voiding symptom subscore on the IPSS increased. Therefore, additional administration of dutasteride appears useful for cases of LUTS/BPH in which a sufficient response is not achieved with α1-adrenoceptor antagonist treatment. Because patients who have severe voiding symptoms or begin dutasteride at an early age may be expected to respond particularly well to dutasteride in terms of clinical efficacy, they were considered to be suitable targets for additional administration. PMID:24755815

  3. Factors Related to Furniture Anchoring: A Method for Reducing Harm During Earthquakes.

    PubMed

    Haraoka, Tomoko; Hayasaka, Shinya; Murata, Chiyoe; Yamaoka, Taiji; Ojima, Toshiyuki

    2012-12-27

    Objective:  Fatalities and injuries during an earthquake can be reduced by taking preemptive measures beforehand, and furniture anchoring is an important safety measure for all residents. This study sought to clarify the factors associated with furniture anchoring within the home. Methods:  A self-administered mail survey was completed from July to August 2010 by 3500 men and women between the ages of 20 and 69 years who were chosen at random from an official government resident registry of 2 cities in Japan. Results:  Of the 1729 valid responses, 37.1% reported furniture anchoring. An association with furniture anchoring was observed for having viewed earthquake intensity maps or damage predictions (odds ratio [OR] 1.92, 95% CI 1.54-2.39), expressing concern about a future earthquake (OR 2.07, 95% CI 1.36-3.15), feelings of urgency (OR 1.90, 95% CI 1.47-2.45), accuracy of the government disaster preparedness information (OR 1.68, 95% CI 1.17-2.42), knowledge of the meaning of emergency earthquake warnings (OR 1.67, 95% CI 1.12-2.48), and participation in voluntary disaster preparedness activities (OR 1.40, 95% CI 1.12-1.75). Conclusions:  Furniture anchoring was found to be associated with risk awareness, risk perception, disaster preparedness information provided by government to residents, knowledge of earthquakes, participation in voluntary disaster preparedness activities, nonwooden structures, and marital status. An increase in furniture anchoring is important and can be achieved through education and training in daily life.

  4. Application of the new Section XI, A-3000 method for stress intensity factor calculation to thick-walled pressure vessels

    SciTech Connect

    Kendall, D.P.

    1996-12-01

    The ASME Boiler and Pressure Vessel Code, Section XI, Appendix A, Article A-3000 has been recently revised to include a more accurate method for calculating stress intensity factors. It is based on fitting the distribution of the stress normal to the plane of the crack in the uncracked body, over the depth of the crack, with a cubic equation. The coefficients of this equation are used with correction factors given in the code to calculate the stress intensity factors at the deepest point of the crack and near the free surface. Correction factors are given for a range of values of relative crack depth and crack shape. In a pressurized thick-walled cylinder the stresses of interest are the tangential stresses due to internal pressure as given by the Lame Equations, plus the effect of the pressure in the crack. This paper shows that the Lame stresses, as a function of distance from the inner surface, can be accurately fitted with a simple set of cubic equations over the full wall thickness for a wide range of diameter ratios. The coefficients of these equations, combined with the correction factors, are used to calculate stress intensity factors for a range of diameter ratios and at both the deepest point of the crack and near the free surface. The results are compared with stress intensity factors calculated using the linearized stress method proposed by Kendall and Perez. The effect of the plastic zone correction given in the new method is reported. The stress intensity factors due to autofrettage residual stresses calculated by the new method are also reported.
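
    Schematically, the A-3000 approach fits the through-wall stress normal to the crack plane with a cubic polynomial in the normalized depth and combines the fitted coefficients with tabulated influence coefficients; the form below is a paraphrase of that structure, with the influence coefficients G_i and the flaw shape parameter Q taken from the code tables rather than reproduced here.

```latex
% Cubic fit of the crack-plane stress over the crack depth a and the resulting
% stress intensity factor (G_i and Q from the code tables; A_p accounts for
% pressure acting on the crack face where applicable).
\sigma(x) = A_0 + A_1\left(\frac{x}{a}\right) + A_2\left(\frac{x}{a}\right)^{2}
          + A_3\left(\frac{x}{a}\right)^{3},
\qquad
K_I = \left[ (A_0 + A_p) G_0 + A_1 G_1 + A_2 G_2 + A_3 G_3 \right]
      \sqrt{\frac{\pi a}{Q}}
```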

  5. Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering

    NASA Technical Reports Server (NTRS)

    Bolton, Matthew L.; Bass, Ellen J.

    2009-01-01

    Both the human factors engineering (HFE) and formal methods communities are concerned with finding and eliminating problems with safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to use model checking with HFE practices to perform formal verification of a human-interactive system. Despite the use of a seemingly simple target system, a patient controlled analgesia pump, the initial model proved to be difficult for the model checker to verify in a reasonable amount of time. This resulted in a number of model revisions that affected the HFE architectural, representativeness, and understandability goals of the effort. If formal methods are to meet the needs of the HFE community, additional modeling tools and technological developments are necessary.

  6. Assessment of Flatbed Scanner Method for Quality Assurance Testing of Air Content and Spacing Factor in Concrete

    NASA Astrophysics Data System (ADS)

    Nezami, Sona

    The flatbed scanner method for air void analysis of concrete is investigated through a comparison study with the standard ASTM C457 manual and Rapid Air 457 test methods. Air void parameters including air content and spacing factor are determined by image analysis of a large population of scanned samples through contrast enhancement and threshold determination procedures. It is shown that the flatbed scanner method gives results comparable to the manual and Rapid Air 457 methods. Furthermore, a comparison of the air void chord length distributions obtained from the two methods, flatbed scanner and Rapid Air 457, has been carried out in this research. The effect of using different settings in the scanning process of the scanner method is also investigated. Moreover, a threshold study has been performed showing that the flatbed scanner method can be employed in combination with the manual and Rapid Air 457 methods as a time- and cost-saving strategy.

  7. A simple and efficient method to reduce nontemplated nucleotide addition at the 3' terminus of RNAs transcribed by T7 RNA polymerase.

    PubMed Central

    Kao, C; Zheng, M; Rüdisser, S

    1999-01-01

    DNA templates modified with C2'-methoxyls at the last two nucleotides of the 5' termini dramatically reduced nontemplated nucleotide addition by the T7 RNA polymerase from both single- and double-stranded DNA templates. This strategy was used to generate several different transcripts. Two of the transcripts were demonstrated by nuclear magnetic resonance spectroscopy to be unaffected in their sequence. Transcripts produced from the modified templates can be purified with greater ease and should be useful in a number of applications. PMID:10496227

  8. Factor analysis of perceptual and cognitive abilities tested by different methods.

    PubMed

    Kinney, J A; Luria, S M

    1980-02-01

    A battery of visual, perceptual, and cognitive tests, believed to be important for operation of visual sonar displays, was administered to 100 sonar technicians. The measures varied from standard paper-and-pencil tests to computer-administered perceptual tasks. The results of 33 different measures on these men were compiled and subjected to a factor analysis. The factors extracted represent cohesive and reasonable groups which cut, to some extent, across testing techniques. However, all of the paper-and-pencil, perceptual-cognitive tests had high loadings on the same common factor, a result with general implications for occupational testing.

  9. Application of the incomplete Cholesky factorization preconditioned Krylov subspace method to the vector finite element method for 3-D electromagnetic scattering problems

    NASA Astrophysics Data System (ADS)

    Li, Liang; Huang, Ting-Zhu; Jing, Yan-Fei; Zhang, Yong

    2010-02-01

    The incomplete Cholesky (IC) factorization preconditioning technique is applied to the Krylov subspace methods for solving large systems of linear equations resulted from the use of edge-based finite element method (FEM). The construction of the preconditioner is based on the fact that the coefficient matrix is represented in an upper triangular compressed sparse row (CSR) form. An efficient implementation of the IC factorization is described in detail for complex symmetric matrices. With some ordering schemes our IC algorithm can greatly reduce the memory requirement as well as the iteration numbers. Numerical tests on harmonic analysis for plane wave scattering from a metallic plate and a metallic sphere coated by a lossy dielectric layer show the efficiency of this method.
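
    A minimal preconditioned-Krylov sketch in the same spirit is shown below. SciPy's incomplete LU (spilu) is used as a convenient stand-in for an incomplete Cholesky factorization, and a small real symmetric positive definite test matrix replaces the complex symmetric edge-element matrix discussed in the record.

```python
# Conjugate gradient with an incomplete-factorization preconditioner (spilu used
# as a stand-in for incomplete Cholesky; 1D Laplacian as a simple SPD test matrix).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)          # incomplete factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)           # preconditioner M ~ A^{-1}

x, info = spla.cg(A, b, M=M)
print("converged" if info == 0 else "cg returned info = %d" % info)
print("residual norm:", np.linalg.norm(b - A @ x))
```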

  10. [Diagnosis of liver diseases by classification of laboratory signal factor pattern findings with the Mahalanobis-Taguchi Adjoint method].

    PubMed

    Nakajima, Hisato; Yano, Kouya; Uetake, Shinichirou; Takagi, Ichiro

    2012-02-01

    There are many autoimmune liver diseases in which diagnosis is difficult so that overlap is accepted, and this negatively affects treatment. The initial diagnosis is therefore important for later treatment and convalescence. We distinguished autoimmune cholangitis, autoimmune hepatitis and primary biliary cirrhosis by the Mahalanobis-Taguchi Adjoint (MTA) method in the Mahalanobis-Taguchi system and analyzed the pattern of factor effects by the MTA method. As a result, the characteristic factor effect pattern of each disease was classified, enabling the qualitative evaluation of cases including overlapping cases which were difficult to diagnose.

  11. Irreversible simulated tempering algorithm with skew detailed balance conditions: a learning method of weight factors in simulated tempering

    NASA Astrophysics Data System (ADS)

    Sakai, Yuji; Hukushima, Koji

    2016-09-01

    Recent numerical studies concerning the simulated tempering algorithm without the detailed balance condition are reviewed, and an irreversible simulated tempering algorithm based on the skew detailed balance condition is described. A method to estimate weight factors in simulated tempering by sequentially implementing the irreversible simulated tempering algorithm is studied in comparison with the conventional simulated tempering algorithm satisfying the detailed balance condition. It is found that the total number of Monte Carlo steps for estimating the weight factors is successfully reduced by applying the proposed method to a two-dimensional ferromagnetic Ising model.

  12. Multi-method Assessment of Psychopathy in Relation to Factors of Internalizing and Externalizing from the Personality Assessment Inventory: The Impact of Method Variance and Suppressor Effects

    PubMed Central

    Blonigen, Daniel M.; Patrick, Christopher J.; Douglas, Kevin S.; Poythress, Norman G.; Skeem, Jennifer L.; Lilienfeld, Scott O.; Edens, John F.; Krueger, Robert F.

    2010-01-01

    Research to date has revealed divergent relations across factors of psychopathy measures with criteria of internalizing (INT; anxiety, depression) and externalizing (EXT; antisocial behavior, substance use). However, failure to account for method variance and suppressor effects has obscured the consistency of these findings across distinct measures of psychopathy. Using a large correctional sample, the current study employed a multi-method approach to psychopathy assessment (self-report, interview/file review) to explore convergent and discriminant relations between factors of psychopathy measures and latent criteria of INT and EXT derived from the Personality Assessment Inventory (PAI; L. Morey, 2007). Consistent with prediction, scores on the affective-interpersonal factor of psychopathy were negatively associated with INT and negligibly related to EXT, whereas scores on the social deviance factor exhibited positive associations (moderate and large, respectively) with both INT and EXT. Notably, associations were highly comparable across the psychopathy measures when accounting for method variance (in the case of EXT) and when assessing for suppressor effects (in the case of INT). Findings are discussed in terms of implications for clinical assessment and evaluation of the validity of interpretations drawn from scores on psychopathy measures. PMID:20230156

  13. The test and treatment methods of benign paroxysmal positional vertigo and an addition to the management of vertigo due to the superior vestibular canal (BPPV-SC).

    PubMed

    Rahko, T

    2002-10-01

    A review of the tests and treatment manoeuvres for benign paroxysmal positional vertigo of the posterior, horizontal and superior vestibular canals is presented. Additionally, a new way to test and treat positional vertigo of the superior vestibular canal is presented. In a prospective study, 57 out of 305 patients' visits are reported. They had residual symptoms and dizziness after the test and the treatment of benign paroxysmal positional vertigo of the horizontal canal (BPPV-HC) and posterior canal (PC). They were tested with a new test and treated with a new manoeuvre for superior canal benign paroxysmal positional vertigo (BPPV-SC). Results for vertigo in 53 patients were good; motion sickness and acrophobia disappeared. Reactive neck tension to BPPV was relieved. Older people were numerous among patients and their quality of life (QOL) improved.

  14. Grating array systems having a plurality of gratings operative in a coherently additive mode and methods for making such grating array systems

    DOEpatents

    Kessler, Terrance J.; Bunkenburg, Joachim; Huang, Hu

    2007-02-13

    A plurality of gratings (G1, G2) are arranged together with a wavefront sensor, actuators, and feedback system to align the gratings in such a manner, that they operate like a single, large, monolithic grating. Sub-wavelength-scale movements in the mechanical mounting, due to environmental influences, are monitored by an interferometer (28), and compensated by precision actuators (16, 18, 20) that maintain the coherently additive mode. The actuators define the grating plane, and are positioned in response to the wavefronts from the gratings and a reference flat, thus producing the interferogram that contains the alignment information. Movement of the actuators is also in response to a diffraction-limited spot on the CCD (36) to which light diffracted from the gratings is focused. The actuator geometry is implemented to take advantage of the compensating nature of the degrees of freedom between gratings, reducing the number of necessary control variables.

  15. Joint Cross-Range Scaling and 3D Geometry Reconstruction of ISAR Targets Based on Factorization Method.

    PubMed

    Lei Liu; Feng Zhou; Xue-Ru Bai; Ming-Liang Tao; Zi-Jing Zhang

    2016-04-01

    Traditionally, the factorization method is applied to reconstruct the 3D geometry of a target from its sequential inverse synthetic aperture radar images. However, this method requires performing cross-range scaling on all the sub-images and thus carries a large computational burden. To tackle this problem, this paper proposes a novel method for joint cross-range scaling and 3D geometry reconstruction of steadily moving targets. In this method, we model the equivalent rotational angular velocity (RAV) as a linear polynomial in time and set its coefficients randomly to perform sub-image cross-range scaling. Then, we generate the initial trajectory matrix of the scattering centers and solve the 3D geometry and projection vectors by the factorization method with relaxed constraints. After that, the coefficients of the polynomial are estimated from the projection vectors to obtain the RAV. Finally, the trajectory matrix is re-scaled using the estimated rotational angle, and the accurate 3D geometry is reconstructed. The two major steps, i.e., the cross-range scaling and the factorization, are performed repeatedly to achieve precise 3D geometry reconstruction. Simulation results demonstrate the effectiveness and robustness of the proposed method.
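
    The factorization step at the core of this approach can be illustrated, in heavily simplified form, by the classic rank-3 factorization of a trajectory (measurement) matrix. The sketch below uses synthetic data and omits the cross-range scaling and metric-upgrade constraints of the actual method, so it recovers the geometry only up to a 3 x 3 linear ambiguity.

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic 3D scattering centers (3 x N) and projection rows for F "images"
      # (each image contributes two rows of the trajectory matrix).
      N, F = 12, 8
      S_true = rng.normal(size=(3, N))
      M_true = rng.normal(size=(2 * F, 3))
      W = M_true @ S_true                       # ideal trajectory matrix (2F x N)
      W += 0.01 * rng.normal(size=W.shape)      # measurement noise

      # Rank-3 factorization via SVD: W is approximately M_hat @ S_hat.
      U, s, Vt = np.linalg.svd(W, full_matrices=False)
      M_hat = U[:, :3] * np.sqrt(s[:3])             # projection vectors (up to ambiguity)
      S_hat = np.sqrt(s[:3])[:, None] * Vt[:3, :]   # 3D geometry (up to ambiguity)

      rel_err = np.linalg.norm(W - M_hat @ S_hat) / np.linalg.norm(W)
      print("relative rank-3 reconstruction error:", round(rel_err, 4))

    In the method described in the abstract, constraints on the projection vectors and the estimated RAV polynomial are then used to re-scale the trajectory matrix and remove this ambiguity.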

  16. Factors that influence utilisation of HIV/AIDS prevention methods among university students residing at a selected university campus.

    PubMed

    Ndabarora, Eléazar; Mchunu, Gugu

    2014-01-01

    Various studies have reported that university students, who are mostly young people, rarely use existing HIV/AIDS preventive methods. Although studies have shown that young university students have a high degree of knowledge about HIV/AIDS and HIV modes of transmission, they are still not utilising the existing HIV prevention methods and still engage in risky sexual practices favourable to HIV transmission. Some variables, such as awareness of existing HIV/AIDS prevention methods, have been associated with utilisation of such methods. The study aimed to explore factors that influence use of existing HIV/AIDS prevention methods among university students residing on a selected campus, using the Health Belief Model (HBM) as a theoretical framework. A quantitative research approach and an exploratory-descriptive design were used to describe perceived factors that influence utilisation of HIV/AIDS prevention methods by university students. A total of 335 students completed online and manual questionnaires. Study findings showed that utilisation of HIV/AIDS prevention methods was mainly determined by awareness of the existing university-based HIV/AIDS prevention strategies. The most utilised prevention methods were voluntary counselling and testing services and free condoms. Scores for perceived susceptibility and perceived threat of HIV/AIDS were also found to correlate with the HIV risk index score, as well as with condom self-efficacy and condom use. Most HBM variables were not predictors of utilisation of HIV/AIDS prevention methods among students. Interventions aiming to improve the utilisation of HIV/AIDS prevention methods among students at the selected university should focus on removing identified barriers, promoting HIV/AIDS prevention services and providing appropriate resources to implement such programmes. PMID:25444096

  18. A Fresh Look at Linear Ordinary Differential Equations with Constant Coefficients. Revisiting the Impulsive Response Method Using Factorization

    ERIC Educational Resources Information Center

    Camporesi, Roberto

    2016-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order, based on the factorization of the differential operator. The approach is elementary: we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as…
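
    As a minimal worked example of the idea (a standard second-order illustration, not reproduced from the article), factoring the operator reduces the problem to a chain of first-order equations, and the impulsive response is obtained by a single convolution:

      $$ p(D) = D^2 + aD + b = (D - \lambda_1)(D - \lambda_2), \qquad \lambda_1 \neq \lambda_2 . $$

      Solving $(D - \lambda_1) z = f$ and then $(D - \lambda_2) y = z$ yields the particular solution

      $$ y(t) = \int_{t_0}^{t} g(t - s)\, f(s)\, ds, \qquad
         g(t) = \frac{e^{\lambda_1 t} - e^{\lambda_2 t}}{\lambda_1 - \lambda_2}, $$

      where the impulsive response $g$, the convolution of the two first-order kernels $e^{\lambda_1 t}$ and $e^{\lambda_2 t}$, satisfies the homogeneous equation with $g(0) = 0$ and $g'(0) = 1$.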

  19. A new method for both harmonic voltage and harmonic current suppression and power factor correction in industrial power systems

    SciTech Connect

    Cheng, H.; Sasaki, Hiroshi; Yorino, Naoto

    1995-12-31

    This paper proposes a new method for designing a group of single tuned filters that simultaneously suppresses harmonic current injection, reduces harmonic voltage distortion and corrects power factor. The proposed method pursues three objectives: (1) reduction of harmonic voltage distortion at the source terminals to an acceptable level, (2) suppression of harmonic current injection at the source terminals to an acceptable level, and (3) improvement of power factor at the source terminals. To determine the size of the capacitors in a group of single tuned filters, three new nonlinear programming (NLP) formulations are introduced. The first suppresses harmonic current injection to within an acceptable level. The second minimizes the fundamental reactive power output while reducing harmonic voltage distortion to an acceptable level. The third determines an optimal assignment of reactive power output based on the results of harmonic voltage reduction and power factor correction. The new method has been demonstrated for designing a group of single tuned filters, and its validity has been confirmed through numerical simulation of a 35 kV industrial power system. The proposed method efficiently provides an optimal coordination within a group of single tuned filters with respect to suppressing harmonic current injection, reducing harmonic voltage distortion and improving power factor.
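
    For orientation only (the paper's NLP formulations are not reproduced here, and all plant data below are invented), the sketch shows the elementary calculations behind a single tuned filter branch: sizing the capacitor for a target displacement power factor and choosing the series reactor so the branch is tuned near the 5th harmonic.

      import math

      # Hypothetical plant data.
      P_kw = 2000.0          # fundamental active power, kW
      pf_initial = 0.78      # present displacement power factor
      pf_target = 0.95       # desired power factor at the source terminals
      V_ll = 35e3            # line-to-line voltage, V
      f1 = 50.0              # fundamental frequency, Hz
      h_tuned = 4.8          # tuning order (slightly below the 5th harmonic)

      # Capacitor sizing from the kvar balance: Qc = P * (tan(phi1) - tan(phi2)).
      phi1, phi2 = math.acos(pf_initial), math.acos(pf_target)
      Qc_kvar = P_kw * (math.tan(phi1) - math.tan(phi2))

      # Wye-equivalent per-phase capacitance and the reactor that tunes the branch.
      # (The series reactor slightly raises the delivered fundamental kvar; ignored here.)
      w1 = 2 * math.pi * f1
      C = Qc_kvar * 1e3 / (w1 * V_ll ** 2)
      L = 1.0 / (h_tuned ** 2 * w1 ** 2 * C)

      def branch_impedance(h, R=0.5):
          """Magnitude of the filter branch impedance at harmonic order h."""
          return abs(complex(R, h * w1 * L - 1.0 / (h * w1 * C)))

      print(f"compensation: {Qc_kvar:.0f} kvar, C = {C * 1e6:.2f} uF, L = {L * 1e3:.1f} mH")
      for h in (1, 5, 7, 11):
          print(f"|Z| at h = {h}: {branch_impedance(h):.1f} ohm")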

  20. Linear Confirmatory Factor Models To Evaluate Multitrait-Multimethod Matrices: The Effects of Number of Indicators and Correlation among Methods.

    ERIC Educational Resources Information Center

    Tomas, Jose M.; Hontangas, Pedro M.; Oliver, Amparo

    2000-01-01

    Assessed two models for confirmatory factor analysis of multitrait-multimethod data through Monte Carlo simulation. The correlated traits-correlated methods (CTCM) and the correlated traits-correlated uniqueness (CTCU) models were compared. Results suggest that CTCU is a good alternative to CTCM in the typical multitrait-multimethod matrix, but…
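
    To make the two model specifications concrete (a generic textbook parameterization, not necessarily the one used in the simulation), the contrast can be written as follows for an observed trait-method indicator $x_{tm}$:

      $$ \text{CTCM:}\quad x_{tm} = \lambda_{tm} T_t + \gamma_{tm} M_m + e_{tm},
         \qquad \operatorname{Cov}(e_{tm}, e_{t'm'}) = 0, $$

      with correlated trait factors $T_t$ and correlated method factors $M_m$;

      $$ \text{CTCU:}\quad x_{tm} = \lambda_{tm} T_t + e_{tm},
         \qquad \operatorname{Cov}(e_{tm}, e_{t'm}) \ \text{free for } t \neq t', $$

      with no method factors: method variance is absorbed by the correlated uniquenesses of indicators sharing the same method, while uniquenesses remain uncorrelated across methods.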