NASA Technical Reports Server (NTRS)
Banyukevich, A.; Ziolkovski, K.
1975-01-01
A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.
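The Bulirsch-Stoer extrapolation method mentioned above can be sketched as a modified-midpoint pass followed by Richardson extrapolation in powers of h²; the following is a generic illustration of that idea, not the paper's exact hybrid scheme (the substep sequence is an assumption):

```python
def modified_midpoint(f, t0, y0, H, n):
    """One modified-midpoint pass over [t0, t0+H] using n substeps."""
    h = H / n
    z_prev, z = y0, y0 + h * f(t0, y0)
    for i in range(1, n):
        z_prev, z = z, z_prev + 2.0 * h * f(t0 + i * h, z)
    # Gragg's smoothing step suppresses the leading oscillating error term
    return 0.5 * (z_prev + z + h * f(t0 + H, z))

def bulirsch_stoer_step(f, t0, y0, H, substeps=(2, 4, 6, 8)):
    """Richardson-extrapolate midpoint results; the error is a series in h**2."""
    hs = [H / n for n in substeps]
    T = [[modified_midpoint(f, t0, y0, H, n)] for n in substeps]
    for k in range(1, len(substeps)):
        for j in range(1, k + 1):
            ratio = (hs[k - j] / hs[k]) ** 2
            T[k].append(T[k][j - 1] + (T[k][j - 1] - T[k - 1][j - 1]) / (ratio - 1.0))
    return T[-1][-1]  # highest-order extrapolated value
```

For y' = y over a unit step this reproduces e to several digits even though each midpoint pass alone is only second-order accurate.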
Information Placement and Retrieval Through NHIN (InfoPRN)
2011-05-01
and FISMA. Dr. Steffensen will contact NMDS (Captain Bensic), JTF CapMed, and perhaps Kaiser Permanente to ascertain their interest in...activities, project stakeholders, critical project deliverables and milestones were defined and agreed upon. Therefore, the Integrated Master
Cai, Lu; Chen, Lei; Johnson, David; Gao, Yong; Mandal, Prashant; Fang, Min; Tu, Zhiying; Huang, Yingping
2014-01-01
The objective of this study is to provide information on metabolic changes occurring in Chinese sturgeon (an ecologically important endangered fish) subjected to repeated cycles of fatigue and recovery and the effect on swimming capability. Fatigue-recovery cycles likely occur when fish are moving through the fishways of large dams and the results of this investigation are important for fishway design and conservation of wild Chinese sturgeon populations. A series of four stepped velocity tests were carried out successively in a Steffensen-type swimming respirometer and the effects of repeated fatigue-recovery on swimming capability and metabolism were measured. Significant results include: (1) critical swimming speed (Ucrit) decreased from 4.34 bl/s to 2.98 bl/s; (2) active oxygen consumption (i.e. the difference between total oxygen consumption and routine oxygen consumption) decreased from 1175 mgO2/kg to 341 mgO2/kg and was the primary reason for the decrease in Ucrit; (3) excess post-exercise oxygen consumption decreased from 36 mgO2/kg to 22 mgO2/kg; (4) with repeated step tests, white muscle (anaerobic metabolism) began contributing to propulsion at lower swimming speeds. Therefore, Chinese sturgeon conserve energy by swimming efficiently and have high fatigue recovery capability. These results contribute to our understanding of the physiology of the Chinese sturgeon and support the conservation efforts of wild populations of this important species. PMID:24714585
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Step one. 14.503-1 Section... AND CONTRACT TYPES SEALED BIDDING Two-Step Sealed Bidding 14.503-1 Step one. (a) Requests for... use the two step method. (3) The requirements of the technical proposal. (4) The evaluation criteria...
Novel Intersection Type Recognition for Autonomous Vehicles Using a Multi-Layer Laser Scanner.
An, Jhonghyun; Choi, Baehoon; Sim, Kwee-Bo; Kim, Euntai
2016-07-20
There are several types of intersections such as merge-roads, diverge-roads, plus-shape intersections and two types of T-shape junctions in urban roads. When an autonomous vehicle encounters new intersections, it is crucial to recognize the types of intersections for safe navigation. In this paper, a novel intersection type recognition method is proposed for an autonomous vehicle using a multi-layer laser scanner. The proposed method consists of two steps: (1) static local coordinate occupancy grid map (SLOGM) building and (2) intersection classification. In the first step, the SLOGM is built relative to the local coordinate using the dynamic binary Bayes filter. In the second step, the SLOGM is used as an attribute for the classification. The proposed method is applied to a real-world environment and its validity is demonstrated through experimentation.
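The per-cell update of a dynamic binary Bayes filter, as used for the SLOGM above, is commonly implemented in log-odds form; a minimal sketch under that assumption (the inverse-sensor-model probabilities are illustrative, not values from the paper):

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

class OccupancyCell:
    """One grid cell of an occupancy map updated by a binary Bayes filter."""
    def __init__(self, p_prior=0.5):
        self.l0 = logit(p_prior)   # prior log-odds
        self.l = self.l0           # current log-odds
    def update(self, p_meas):
        # add the inverse-sensor-model log-odds, subtract the prior
        self.l += logit(p_meas) - self.l0
    def probability(self):
        return 1.0 - 1.0 / (1.0 + math.exp(self.l))

cell = OccupancyCell()
for _ in range(3):
    cell.update(0.7)   # three consecutive "occupied" returns
```

Repeated consistent measurements drive the cell's occupancy probability toward certainty, which is what makes the resulting grid usable as a classification attribute.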
Two-step Raman spectroscopy method for tumor diagnosis
NASA Astrophysics Data System (ADS)
Zakharov, V. P.; Bratchenko, I. A.; Kozlov, S. V.; Moryatov, A. A.; Myakinin, O. O.; Artemyev, D. N.
2014-05-01
A two-step Raman spectroscopy phase method was proposed for differential diagnosis of malignant tumors in skin and lung tissue. The first step detects malignant tumor in healthy tissue; the second identifies the specific cancer type. The proposed phase method analyzes spectral intensity alterations in the 1300-1340 and 1640-1680 cm-1 Raman bands relative to the intensity of the 1450 cm-1 band in the first step, and relative differences between RS intensities for the tumor area and the healthy skin closely adjacent to the lesion in the second step. More than 40 ex vivo samples of lung tissue and more than 50 in vivo skin tumors were tested. Linear Discriminant Analysis, Quadratic Discriminant Analysis and Support Vector Machine were used for tumor type classification on the phase planes. The two-step phase method is shown to reach 88.9% sensitivity and 87.8% specificity for malignant melanoma diagnosis (skin cancer); 100% sensitivity and 81.5% specificity for adenocarcinoma diagnosis (lung cancer); and 90.9% sensitivity and 77.8% specificity for squamous cell carcinoma diagnosis (lung cancer).
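The first-step features described above are band-intensity ratios; a minimal sketch of computing them from a spectrum (the exact windowing around the 1450 cm⁻¹ reference band is a guess, not taken from the paper):

```python
import numpy as np

def band_ratio_features(wavenumbers, intensities):
    """Mean intensity in the 1300-1340 and 1640-1680 cm^-1 bands, each relative
    to the band around 1450 cm^-1 (the +/-10 cm^-1 reference window is assumed)."""
    wn = np.asarray(wavenumbers, float)
    I = np.asarray(intensities, float)
    def band_mean(lo, hi):
        mask = (wn >= lo) & (wn <= hi)
        return float(I[mask].mean())
    ref = band_mean(1440.0, 1460.0)
    return band_mean(1300.0, 1340.0) / ref, band_mean(1640.0, 1680.0) / ref
```

The two ratios place each measured spectrum as a point on a phase plane, where a classifier such as LDA or an SVM can separate tumor types.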
Zhou, Wei; Shan, Jinjun; Meng, Minxin
2018-08-17
Fructus Gardeniae-Fructus Forsythiae is an herb pair used extensively to treat inflammation and fever, but few systematic identification studies of its bioactive components have been reported. Herein, unknown analogues were rapidly identified in the first-step screening from representative compounds of different structure types (geniposide as iridoid type, crocetin as crocetin type, jasminoside B as monocyclic monoterpene type, oleanolic acid as saponin type, 3-caffeoylquinic acid as organic acid type, forsythoside A as phenylethanoid type, phillyrin as lignan type and quercetin 3-rutinoside as flavonoid type) by UPLC-Q-Tof/MS combined with mass defect filtering (MDF), and further confirmed with reference standards and the published literature. Similarly, in the second step, other unknown components were rapidly discovered by MDF from the compounds identified in the first step. Using the two-step screening method, a total of 58 components were characterized in Fructus Gardeniae-Fructus Forsythiae (FG-FF) decoction. In rat blood, 36 compounds from the extract and 16 metabolites were unambiguously or tentatively identified. We also found that the principal metabolites were glucuronide conjugates, with the glucuronide conjugates of caffeic acid, quercetin and kaempferol confirmed by reference standards as caffeic acid 3-glucuronide, quercetin 3-glucuronide and kaempferol 3-glucuronide, respectively. Additionally, most of them bound more strongly to human serum albumin than their respective prototypes, as predicted by molecular docking and simulation, indicating that they had lower blood clearance in vivo and possibly contribute more to pharmacological effects. This study developed a novel two-step screening method for comprehensively screening components in herbal medicine by UPLC-Q-Tof/MS with MDF. Copyright © 2018 Elsevier B.V. All rights reserved.
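Mass defect filtering retains peaks whose fractional mass lies close to that of a template compound within a nominal-mass window; a minimal sketch of the idea (the window sizes and m/z values below are illustrative assumptions, not the study's settings):

```python
def mass_defect(mz):
    """Fractional part of the measured m/z."""
    return mz - int(mz)

def mdf_filter(peaks, template_mz, mz_window=50.0, defect_window_mda=25.0):
    """Keep peaks within mz_window of the template nominal mass whose mass
    defect differs by at most defect_window_mda millidaltons."""
    t_def = mass_defect(template_mz)
    return [mz for mz in peaks
            if abs(mz - template_mz) <= mz_window
            and abs(mass_defect(mz) - t_def) * 1000.0 <= defect_window_mda]
```

Because structural analogues share a similar mass defect, the filter discards chemically unrelated peaks while keeping candidate analogues of the template compound for the next screening step.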
NASA Astrophysics Data System (ADS)
Franco, J. M.; Rández, L.
The construction of new two-step hybrid (TSH) methods of explicit type with symmetric nodes and weights for the numerical integration of orbital and oscillatory second-order initial value problems (IVPs) is analyzed. These methods attain algebraic order eight with a computational cost of six or eight function evaluations per step (one of the lowest costs we know of in the literature) and they are optimal among the TSH methods in the sense that they reach a certain order of accuracy with minimal cost per step. The new TSH schemes also have high dispersion and dissipation orders (greater than 8) in order to be adapted to the solution of IVPs with oscillatory solutions. The numerical experiments carried out with several orbital and oscillatory problems show that the new eighth-order explicit TSH methods are more efficient than other standard TSH or Numerov-type methods proposed in the scientific literature.
Calcite phase determination of CaCO3 nanoparticles synthesized by one step drying method
NASA Astrophysics Data System (ADS)
Sulimai, N. H.; Rani, Rozina Abdul; Khusaimi, Z.; Abdullah, S.; Salifairus, M. J.; Alrokayan, Salman; Khan, Haseeb; Rusop, M.
2018-05-01
Calcium Carbonate (CaCO3) is a type of carbonic salt. It exist naturally as white odourless solid and may also be synthesized by chemical reactions. This work studies one-step precipitation of CaCO3 that was prepared by novel method of one-step precipitation method. The method was then proceeded by different types of drying. The first type is by normal drying in oven whereas the second type is with the presence of hydrothermal influence. From the results, precipitated CaCO3 dried by normal drying method produces CaCO3 with two polymorphs present; calcite and vaterite. Normal drying at 500°C has no vaterite phase left. Drying by hydrothermal precipitated CaCO3 has Nitrogen (N) left on the surfaces of the precipitated CaCO3. This work successfully identified calcite phase in the precipitated CaCO3.
On some Aitken-like acceleration of the Schwarz method
NASA Astrophysics Data System (ADS)
Garbey, M.; Tromeur-Dervout, D.
2002-12-01
In this paper we present a family of domain decomposition methods based on Aitken-like acceleration of the Schwarz method, seen as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or to multigrid for more general linear and nonlinear elliptic problems. However, the salient feature of our method is its high tolerance to slow networks in the context of distributed parallel computing, which makes it attractive, generally speaking, for computer architectures whose performance is limited by memory bandwidth rather than by the flop performance of the CPU. This is nowadays the case for most parallel computers using RISC processor architectures. We illustrate this highly desirable property of our algorithm with large-scale computing experiments.
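The kernel of the Steffensen-Schwarz variant is Steffensen's method: accelerating a linearly convergent fixed-point iteration x ← g(x) with Aitken's Δ² formula. A minimal scalar sketch (the paper applies the same idea to the Schwarz interface iteration, not to a scalar):

```python
def steffensen(g, x0, tol=1e-12, max_iter=50):
    """Steffensen acceleration of the fixed-point iteration x <- g(x)."""
    x = x0
    for _ in range(max_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x
        if denom == 0.0:          # already converged to machine precision
            return x2
        x_new = x - (x1 - x) ** 2 / denom   # Aitken's delta-squared step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Each Steffensen step costs two evaluations of g but converges quadratically near the fixed point, turning the slow linear Schwarz sweep into a fast outer iteration.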
March, Melissa I; Modest, Anna M; Ralston, Steven J; Hacker, Michele R; Gupta, Munish; Brown, Florence M
2016-01-01
To compare characteristics and outcomes of women diagnosed with gestational diabetes mellitus (GDM) by the newer one-step glucose tolerance test and those diagnosed with the traditional two-step method. This was a retrospective cohort study of women with GDM who delivered in 2010-2011. Data are reported as proportion or median (interquartile range) and were compared using a Chi-square, Fisher's exact or Wilcoxon rank sum test based on data type. Of 235 women with GDM, 55.7% were diagnosed using the two-step method and 44.3% with the one-step method. The groups had similar demographics and GDM risk factors. The two-step method group was diagnosed with GDM one week later [27.0 (24.0-29.0) weeks versus 26.0 (24.0-28.0) weeks; p = 0.13]. The groups had similar median weight gain per week before diagnosis. After diagnosis, women in the one-step method group had significantly higher median weight gain per week [0.67 pounds/week (0.31-1.0) versus 0.56 pounds/week (0.15-0.89); p = 0.047]. In the one-step method group more women had suspected macrosomia (11.7% versus 5.3%, p = 0.07) and more neonates had a birth weight >4000 g (13.6% versus 7.5%, p = 0.13); however, these differences were not statistically significant. Other pregnancy and neonatal complications were similar. Women diagnosed with the one-step method gained more weight per week after GDM diagnosis and had a non-statistically significant increased risk for suspected macrosomia. Our data suggest the one-step method identifies women with at least equally high risk as the two-step method.
A Two-Step Model for Assessing Relative Interest in E-Books Compared to Print
ERIC Educational Resources Information Center
Knowlton, Steven A.
2016-01-01
Librarians often wish to know whether readers in a particular discipline favor e-books or print books. Because print circulation and e-book usage statistics are not directly comparable, it can be hard to determine the relative interest of readers in the two types of books. This study demonstrates a two-step method by which librarians can assess…
A hybrid-perturbation-Galerkin technique which combines multiple expansions
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1989-01-01
A two-step hybrid perturbation-Galerkin method for the solution of a variety of differential equation type problems is found to give better results when multiple perturbation expansions are employed. The method assumes that there is a parameter in the problem formulation and that a perturbation method can be used to construct one or more expansions in this parameter. In step one, regular and/or singular perturbation methods are used to determine the perturbation coefficient functions. The results of step one are in the form of one or more expansions, each expressed as a sum of perturbation coefficient functions multiplied by a priori known gauge functions. In step two, the classical Bubnov-Galerkin method uses the perturbation coefficient functions computed in step one to determine a set of amplitudes which replace and improve upon the gauge functions. The hybrid method has the potential of overcoming some of the drawbacks of the perturbation and Galerkin methods as applied separately, while combining some of their better features. The proposed method is applied, with two perturbation expansions in each case, to a variety of model ordinary differential equation problems including: a family of linear two-point boundary-value problems, a nonlinear two-point boundary-value problem, a quantum mechanical eigenvalue problem and a nonlinear free oscillation problem. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the applicability of the hybrid method to broader problem areas is discussed.
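Step two above is a classical Bubnov-Galerkin projection. A minimal sketch for the model problem -u'' = 1 on (0,1) with u(0) = u(1) = 0, using a hypothetical one-term polynomial basis in place of the perturbation coefficient functions (the exact coefficient for this basis is 1/2, since u = x(1-x)/2 solves the problem exactly):

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoid rule on an arbitrary grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def galerkin_coefficients(basis, dbasis, rhs, npts=2001):
    """Bubnov-Galerkin for -u'' = rhs on (0,1), u(0)=u(1)=0:
    solve K a = F with K_ij = int phi_i' phi_j' dx, F_i = int rhs phi_i dx."""
    x = np.linspace(0.0, 1.0, npts)
    K = np.array([[trapezoid(di(x) * dj(x), x) for dj in dbasis] for di in dbasis])
    F = np.array([trapezoid(rhs(x) * fi(x), x) for fi in basis])
    return np.linalg.solve(K, F)

# one-term basis phi_1 = x(1-x); in the hybrid method this role is played by
# the perturbation coefficient functions computed in step one
a = galerkin_coefficients([lambda x: x * (1.0 - x)],
                          [lambda x: 1.0 - 2.0 * x],
                          lambda x: np.ones_like(x))
```

In the hybrid method the basis functions are not guessed polynomials but the perturbation coefficient functions, so the Galerkin amplitudes refine the gauge functions of the expansion.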
Cross-linked polyvinyl alcohol films as alkaline battery separators
NASA Technical Reports Server (NTRS)
Sheibley, D. W.; Manzo, M. A.; Gonzalez-Sanabria, O. D.
1983-01-01
Cross-linking methods have been investigated to determine their effect on the performance of polyvinyl alcohol (PVA) films as alkaline battery separators. The following types of cross-linked PVA films are discussed: (1) PVA-dialdehyde blends post-treated with an acid or acid periodate solution (two-step method) and (2) PVA-dialdehyde blends cross-linked during film formation (drying) by using a reagent with both aldehyde and acid functionality (one-step method). Laboratory samples of each cross-linked type of film were prepared and evaluated in standard separator screening tests. Then pilot-plant batches of films were prepared and compared to measure differences due to the cross-linking method. The pilot-plant materials were then tested in nickel oxide-zinc cells to compare the two methods with respect to performance characteristics and cycle life. Cell test results are compared with those from tests with Celgard.
Cross-linked polyvinyl alcohol films as alkaline battery separators
NASA Technical Reports Server (NTRS)
Sheibley, D. W.; Manzo, M. A.; Gonzalez-Sanabria, O. D.
1982-01-01
Cross-linking methods were investigated to determine their effect on the performance of polyvinyl alcohol (PVA) films as alkaline battery separators. The following types of cross-linked PVA films are discussed: (1) PVA-dialdehyde blends post-treated with an acid or acid periodate solution (two-step method) and (2) PVA-dialdehyde blends cross-linked during film formation (drying) by using a reagent with both aldehyde and acid functionality (one-step method). Laboratory samples of each cross-linked type of film were prepared and evaluated in standard separator screening tests. The pilot-plant batches of films were prepared and compared to measure differences due to the cross-linking method. The pilot-plant materials were then tested in nickel oxide - zinc cells to compare the two methods with respect to performance characteristics and cycle life. Cell test results are compared with those from tests with Celgard.
Evaluation of the occlusal contact of crowns fabricated with the bite impression method.
Makino, Sachi; Okada, Daizo; Shin, Chiharu; Ogura, Reiko; Ikeda, Masaomi; Miura, Hiroyuki
2013-09-30
In prosthodontic treatment, reconstruction of a proper occlusal contact relationship is as important as reconstruction of a proper interproximal relationship and marginal fit. Unfortunately, occlusal relationships are sometimes lost in the process of occlusal adjustment of crowns. The purpose of this study was to compare the occlusal contacts of single crowns fabricated by two different types of impression techniques. Nine subjects, whose molars required treatment with a crown restoration, were enrolled in this study. Full cast crowns were fabricated using two types of impression techniques: the conventional impression method (CIM) and the bite impression method (BIM). The occlusal contacts of the crowns were precisely evaluated at the following stages: after occlusal adjustment on the articulator (Step 0), before occlusal adjustment in the mouth (Step 1), after occlusal adjustment at the intercuspal position (Step 2), and after occlusal adjustment during lateral and protrusive excursions (Step 3). The number of occlusal contacts on the functional cusps of crowns fabricated with BIM was significantly greater than that with CIM after occlusal adjustment. For this reason, the crowns fabricated with BIM might have a more functionally desirable occlusal surface compared to the crowns fabricated with CIM.
3D road marking reconstruction from street-level calibrated stereo pairs
NASA Astrophysics Data System (ADS)
Soheilian, Bahman; Paparoditis, Nicolas; Boldo, Didier
This paper presents an automatic approach to road marking reconstruction using stereo pairs acquired by a mobile mapping system in a dense urban area. Two types of road markings were studied: zebra crossings (crosswalks) and dashed lines. These two types of road markings consist of strips having known shape and size. These geometric specifications are used to constrain the recognition of strips. In both cases (i.e. zebra crossings and dashed lines), the reconstruction method consists of three main steps. The first step extracts edge points from the left and right images of a stereo pair and computes 3D linked edges using a matching process. The second step comprises a filtering process that uses the known geometric specifications of road marking objects. The goal is to preserve linked edges that can plausibly belong to road markings and to filter others out. The final step uses the remaining linked edges to fit a theoretical model to the data. The method developed has been used for processing a large number of images. Road markings are successfully and precisely reconstructed in dense urban areas under real traffic conditions.
Barker, Daniel; D'Este, Catherine; Campbell, Michael J; McElduff, Patrick
2017-03-09
Stepped wedge cluster randomised trials frequently involve a relatively small number of clusters. The most common frameworks used to analyse data from these types of trials are generalised estimating equations and generalised linear mixed models. A topic of much research into these methods has been their application to cluster randomised trial data and, in particular, the number of clusters required to make reasonable inferences about the intervention effect. However, for stepped wedge trials, which have been claimed by many researchers to have a statistical power advantage over the parallel cluster randomised trial, the minimum number of clusters required has not been investigated. We conducted a simulation study where we considered the most commonly used methods suggested in the literature to analyse cross-sectional stepped wedge cluster randomised trial data. We compared the per cent bias, the type I error rate and power of these methods in a stepped wedge trial setting with a binary outcome, where there are few clusters available and when the appropriate adjustment for a time trend is made, which by design may be confounding the intervention effect. We found that the generalised linear mixed modelling approach is the most consistent when few clusters are available. We also found that none of the common analysis methods for stepped wedge trials were both unbiased and maintained a 5% type I error rate when there were only three clusters. Of the commonly used analysis approaches, we recommend the generalised linear mixed model for small stepped wedge trials with binary outcomes. We also suggest that in a stepped wedge design with three steps, at least two clusters be randomised at each step, to ensure that the intervention effect estimator maintains the nominal 5% significance level and is also reasonably unbiased.
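The design the simulations above rely on can be sketched as a cluster-by-period intervention indicator matrix. A minimal illustration of a cross-sectional stepped wedge layout with, per the recommendation above, a fixed number of clusters randomised at each step (a standard layout, not a specific trial's):

```python
import numpy as np

def stepped_wedge_design(n_clusters, n_steps, clusters_per_step):
    """0/1 intervention indicator for each cluster x period: all clusters start
    in control and cross over in groups, one group per step."""
    n_periods = n_steps + 1                  # baseline period plus one per step
    X = np.zeros((n_clusters, n_periods), dtype=int)
    for c in range(n_clusters):
        switch_period = c // clusters_per_step + 1
        X[c, switch_period:] = 1             # once switched, stay switched
    return X
```

Because the crossover time is staggered, calendar time is partially confounded with the intervention indicator, which is why the analysis model must adjust for a period effect.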
Ciccia, Rossella
2017-01-01
Typologies have represented an important tool for the development of comparative social policy research and continue to be widely used in spite of growing criticism of their ability to capture the complexity of welfare states and their internal heterogeneity. In particular, debates have focused on the presence of hybrid cases and the existence of distinct cross-national patterns of variation across areas of social policy. There is growing awareness around these issues, but empirical research often still relies on methodologies aimed at classifying countries into a limited number of unambiguous types. This article proposes a two-step approach based on fuzzy-set ideal type analysis for the systematic analysis of hybrids at the level of both policies (step 1) and policy configurations or combinations of policies (step 2). This approach is demonstrated using the case of childcare policies in European economies. In the first step, parental leave policies are analysed using three methods (direct, indirect, and combinatory) to identify and describe specific hybrid forms at the level of policy analysis. In the second step, the analysis moves on to investigate the relationship between parental leave and childcare services. The analysis clearly shows that many countries display characteristics normally associated with different types (hybrids and sub-types). Therefore, this two-step approach demonstrates that disaggregated and aggregated analyses are equally important to account for hybrid welfare forms and to make sense of the tensions and incongruences within and between policies.
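In fuzzy-set ideal type analysis, a case's membership in each ideal type is the minimum of its memberships in the relevant sets, with 1 - x as negation. A minimal sketch with two hypothetical policy sets (the set names and scores are illustrative, not the article's calibrations):

```python
def ideal_type_memberships(scores):
    """Membership of one case in each of the four ideal types spanned by two
    fuzzy sets; min is fuzzy AND, (1 - x) is fuzzy NOT."""
    leave = scores["generous_leave"]
    services = scores["extensive_services"]
    return {
        "comprehensive":   min(leave, services),
        "leave_centred":   min(leave, 1.0 - services),
        "service_centred": min(1.0 - leave, services),
        "residual":        min(1.0 - leave, 1.0 - services),
    }

m = ideal_type_memberships({"generous_leave": 0.8, "extensive_services": 0.3})
```

A case with membership above 0.5 in exactly one type is a clear member of that type; hybrids are cases whose strongest membership stays at or below 0.5, which is what the disaggregated first step is designed to surface.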
Zhang, Ze-Wei; Wang, Hui; Qin, Qing-Hua
2015-01-01
A meshless numerical scheme combining the operator splitting method (OSM), the radial basis function (RBF) interpolation, and the method of fundamental solutions (MFS) is developed for solving transient nonlinear bioheat problems in two-dimensional (2D) skin tissues. In the numerical scheme, the nonlinearity caused by linear and exponential relationships of temperature-dependent blood perfusion rate (TDBPR) is taken into consideration. In the analysis, the OSM is used first to separate the Laplacian operator and the nonlinear source term, and then the second-order time-stepping schemes are employed for approximating two splitting operators to convert the original governing equation into a linear nonhomogeneous Helmholtz-type governing equation (NHGE) at each time step. Subsequently, the RBF interpolation and the MFS involving the fundamental solution of the Laplace equation are respectively employed to obtain approximated particular and homogeneous solutions of the nonhomogeneous Helmholtz-type governing equation. Finally, the full fields consisting of the particular and homogeneous solutions are enforced to fit the NHGE at interpolation points and the boundary conditions at boundary collocations for determining unknowns at each time step. The proposed method is verified by comparison of other methods. Furthermore, the sensitivity of the coefficients in the cases of a linear and an exponential relationship of TDBPR is investigated to reveal their bioheat effect on the skin tissue. PMID:25603180
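One ingredient of the scheme above, RBF interpolation of the source term for building a particular solution, can be sketched in one dimension; the multiquadric basis and shape parameter below are illustrative choices, not necessarily the paper's:

```python
import numpy as np

def rbf_interpolate(centers, values, query, c=0.5):
    """Multiquadric RBF interpolation: solve A w = values at the centers,
    then evaluate sum_j w_j * phi(|x - x_j|) at the query points."""
    centers = np.asarray(centers, float)
    query = np.asarray(query, float)
    phi = lambda r: np.sqrt(r ** 2 + c ** 2)
    A = phi(np.abs(centers[:, None] - centers[None, :]))   # interpolation matrix
    w = np.linalg.solve(A, np.asarray(values, float))
    return phi(np.abs(query[:, None] - centers[None, :])) @ w
```

By construction the interpolant reproduces the data at the centers; in the full scheme the analytically known particular solutions of the RBFs then carry this fit through the Helmholtz-type operator.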
Zhang, Ze-Wei; Wang, Hui; Qin, Qing-Hua
2015-01-16
A meshless numerical scheme combining the operator splitting method (OSM), the radial basis function (RBF) interpolation, and the method of fundamental solutions (MFS) is developed for solving transient nonlinear bioheat problems in two-dimensional (2D) skin tissues. In the numerical scheme, the nonlinearity caused by linear and exponential relationships of temperature-dependent blood perfusion rate (TDBPR) is taken into consideration. In the analysis, the OSM is used first to separate the Laplacian operator and the nonlinear source term, and then the second-order time-stepping schemes are employed for approximating two splitting operators to convert the original governing equation into a linear nonhomogeneous Helmholtz-type governing equation (NHGE) at each time step. Subsequently, the RBF interpolation and the MFS involving the fundamental solution of the Laplace equation are respectively employed to obtain approximated particular and homogeneous solutions of the nonhomogeneous Helmholtz-type governing equation. Finally, the full fields consisting of the particular and homogeneous solutions are enforced to fit the NHGE at interpolation points and the boundary conditions at boundary collocations for determining unknowns at each time step. The proposed method is verified by comparison of other methods. Furthermore, the sensitivity of the coefficients in the cases of a linear and an exponential relationship of TDBPR is investigated to reveal their bioheat effect on the skin tissue.
Comparison of two stand-alone CADe systems at multiple operating points
NASA Astrophysics Data System (ADS)
Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas
2015-03-01
Computer-aided detection (CADe) systems are typically designed to work at a given operating point: The device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, which necessitates that a comparison of two CADe systems involve multiple comparisons. To control the Type I error, multiple-comparison correction is needed to keep the family-wise error rate (FWER) below a given alpha-level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods, in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods both in terms of the FWER and power.
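The "step-up" procedure compared above is plausibly of the Hochberg type; a minimal sketch of that correction, without the correlation adjustment the study estimates (the p-values in the usage example are invented):

```python
def hochberg_step_up(pvalues, alpha=0.05):
    """Hochberg step-up: reject the hypotheses with the k smallest p-values,
    where k is the largest rank with p_(k) <= alpha / (m - k + 1)."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])   # ascending p-values
    k = 0
    for rank in range(m, 0, -1):                         # step up from largest p
        if pvalues[order[rank - 1]] <= alpha / (m - rank + 1):
            k = rank
            break
    reject = [False] * m
    for rank in range(k):
        reject[order[rank]] = True
    return reject
```

Compared with Bonferroni (which tests every p-value against alpha/m), the step-up procedure uses progressively looser thresholds and so rejects at least as often, which is the power advantage the study quantifies.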
Salamon, S; Ritar, A J
1982-01-01
Five factorial experiments were conducted to examine the effects of concentration of tris(hydroxymethyl)aminomethane (Tris), type and concentration of sugar in the diluent, and rate and method of dilution on the survival of goat spermatozoa after freezing by the pellet method. Spermatozoa tolerated a relatively wide range in concentration of Tris, but cell survival depended on the type of sugar included in the Tris diluent. Glucose and fructose were more suitable components than lactose or raffinose. Survival of spermatozoa after thawing was better for three- to fivefold prefreezing dilution. There was an interaction between the method of semen dilution (one-step, two-step), holding time at 5 degrees C, and glycerol concentration. The best result was obtained after one-step dilution at 30 degrees C (Tris 375 mM-glucose 41.625 mM-citric acid 124 mM), 1.5 h holding at 5 degrees C, and with 4% (v/v) glycerol concentration in the diluted semen.
Accuracy of Multiple Pour Cast from Various Elastomer Impression Methods
Saad Toman, Majed; Ali Al-Shahrani, Abdullah; Ali Al-Qarni, Abdullah
2016-01-01
An accurate duplicate cast obtained from a single impression reduces the professional's clinical time, patient inconvenience, and extra material cost. A stainless steel working cast model assembly consisting of two abutments and one pontic area was fabricated. Two sets of six custom aluminum trays each were fabricated, one with a five mm spacer and one with a two mm spacer. The impression methods evaluated during the study were addition silicone putty reline (two step), heavy-light body (one step), monophase (one step), and polyether (one step). Type IV gypsum casts were poured at intervals of one hour, 12 hours, 24 hours, and 48 hours. The resultant casts were measured with a traveling microscope for comparative dimensional accuracy. The data obtained were subjected to an Analysis of Variance test at significance level <0.05. The dies obtained from the two-step putty reline impression technique showed a percentage variation in height of −0.36 to −0.97%, while the diameter increased by 0.40–0.90%. The corresponding values for the one-step heavy-light body, addition silicone monophase, and polyether impression dies were −0.73 to −1.21%, −1.34%, and −1.46% for the height and 0.50–0.80%, 1.20%, and −1.30% for the width, respectively. PMID:28096815
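The reported variations are signed percentage deviations of each die dimension from the master model. As a worked check of the arithmetic (the 10 mm reference height is hypothetical, chosen only to reproduce a −0.36% figure):

```python
def percent_variation(measured, reference):
    """Signed percentage deviation of a die dimension from the master model."""
    return (measured - reference) / reference * 100.0

# a die measuring 9.964 mm against a 10.000 mm master dimension
dev = percent_variation(9.964, 10.0)
```

A negative value means the die is undersized relative to the master (as for all the height figures above), a positive value oversized (as for the widths).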
Two-step deposition of Al-doped ZnO on p-GaN to form ohmic contacts.
Su, Xi; Zhang, Guozhen; Wang, Xiao; Chen, Chao; Wu, Hao; Liu, Chang
2017-12-01
Al-doped ZnO (AZO) thin films were deposited directly on p-GaN substrates by a two-step deposition combining polymer-assisted deposition (PAD) and atomic layer deposition (ALD). Ohmic contacts of the AZO on p-GaN were formed. The lowest sheet resistance of the two-step prepared AZO films reached 145 Ω/sq, and the specific contact resistance was reduced to 1.47 × 10⁻² Ω·cm². Transmittance of the AZO films remained above 80% in the visible region. The combination of PAD and ALD can be used to prepare p-type ohmic contacts for optoelectronics.
Linking pedestrian flow characteristics with stepping locomotion
NASA Astrophysics Data System (ADS)
Wang, Jiayue; Boltes, Maik; Seyfried, Armin; Zhang, Jun; Ziemer, Verena; Weng, Wenguo
2018-06-01
While the properties of human traffic flow are described by speed, density and flow, the locomotion of pedestrians is based on steps. To relate characteristics of the human locomotor system to properties of human traffic flow, this paper connects gait characteristics such as step length, step frequency, swaying amplitude and synchronization with speed and density, and thus builds a foundation for advanced pedestrian models. To this end, an observational and experimental study of the single-file movement of pedestrians at different densities is conducted. Methods to measure step length, step frequency, swaying amplitude and step synchronization from trajectories of the head are proposed. Mathematical models for the relations between step length or frequency and speed are evaluated. How step length and step duration are influenced by factors such as body height and density is investigated, and it is shown that the effect of body height on step length and step duration changes with density. Furthermore, two different types of in-phase step synchronization between successive pedestrians are observed, and the influence of step synchronization on step length is examined.
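One simple way to recover step frequency from a head trajectory, in the spirit of the measurement methods above, is to count lateral-sway extrema. The sketch below assumes each lateral excursion corresponds to one step; the smoothing window and function names are illustrative, not the paper's algorithm:

```python
import numpy as np

def step_frequency(sway, fps):
    """Estimate step frequency (steps/s) from the lateral sway of a head
    trajectory, taking each lateral extremum (leftmost/rightmost excursion)
    as one step, i.e. one full sway cycle per stride."""
    sway = np.asarray(sway, dtype=float)
    sway = sway - sway.mean()                 # remove constant lateral offset
    k = max(int(fps * 0.1), 1)                # ~0.1 s moving average window
    smooth = np.convolve(sway, np.ones(k) / k, mode="same")
    d = np.diff(smooth)
    extrema = np.sum(np.sign(d[:-1]) != np.sign(d[1:]))  # slope sign changes
    duration = len(sway) / fps
    return extrema / duration
```

On a synthetic 1 Hz sinusoidal sway (one sway cycle per stride), this returns approximately 2 steps per second.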
Preparation and characterization of silica xerogels as carriers for drugs.
Czarnobaj, K
2008-11-01
The aim of the present study was to use the sol-gel method to synthesize different forms of xerogel matrices for drugs and to investigate how the synthesis conditions and the solubility of the drugs influence the drug-release profile and the structure of the matrices. Silica xerogels doped with drugs were prepared by the sol-gel method from a hydrolyzed tetraethoxysilane (TEOS) solution containing one of two model compounds: diclofenac diethylamine (DD), a water-soluble drug, or ibuprofen (IB), a water-insoluble drug. Two procedures were used for the synthesis of the sol-gel derived materials, in order to obtain samples with different microstructures: a one-step procedure (the sol-gel reaction was carried out under acidic or basic conditions) and a two-step procedure (hydrolysis of TEOS was first carried out under acidic conditions, followed by condensation of silanol groups under basic conditions). In vitro release studies revealed a similar two-phase release profile for both drugs: an initial diffusion-controlled release followed by a slower release rate. In all cases studied, the amount of DD released was higher, and the release time shorter, than for IB from the same type of matrix. The amount of drug released from two-step prepared xerogels was always lower than that from one-step base-catalyzed xerogels. One-step acid-catalyzed xerogels proved unsuitable as carriers for the examined drugs.
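The "initial diffusion-controlled release" phase is often quantified with the Higuchi square-root-of-time model, Q(t) = k·√t. A minimal least-squares fit sketch (the Higuchi model is an illustrative choice here, not necessarily the authors' analysis):

```python
import numpy as np

def fit_higuchi(t, Q):
    """Closed-form least-squares fit of the Higuchi diffusion model
    Q(t) = k * sqrt(t): regress released amount Q on sqrt(t) through
    the origin and return the release-rate constant k."""
    x = np.sqrt(np.asarray(t, dtype=float))
    Q = np.asarray(Q, dtype=float)
    return (x @ Q) / (x @ x)   # slope of Q vs sqrt(t) through the origin
```

A good fit of k over the early time points, followed by systematic deviation, is one way to mark where the initial diffusion-controlled phase ends.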
Adaptive θ-methods for pricing American options
NASA Astrophysics Data System (ADS)
Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran
2008-12-01
We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step, thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
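The linearly implicit θ-stepping can be sketched as follows. Note the American constraint is enforced here by a simple projection onto the payoff after each step, a common textbook variant; the paper's own scheme instead adds a small continuous term to avoid such non-smooth handling, and all parameter values below are illustrative:

```python
import numpy as np

def american_put_theta(K=100.0, r=0.05, sigma=0.2, T=1.0,
                       S_max=300.0, M=150, N=200, theta=0.5):
    """Linearly implicit theta-method for the Black-Scholes PDE, with the
    American early-exercise constraint imposed by projection each time step."""
    S = np.linspace(0.0, S_max, M + 1)
    h = S[1] - S[0]
    dt = T / N
    payoff = np.maximum(K - S, 0.0)
    u = payoff.copy()                       # value at expiry

    # interior spatial operator (advection-diffusion-reaction form)
    i = np.arange(1, M)
    a = 0.5 * sigma**2 * S[i]**2 / h**2
    b = r * S[i] / (2 * h)
    lower, diag, upper = a - b, -2 * a - r, a + b

    A = np.zeros((M - 1, M - 1))            # dense for clarity; tridiagonal in practice
    np.fill_diagonal(A, diag)
    A[np.arange(M - 2), np.arange(1, M - 1)] = upper[:-1]
    A[np.arange(1, M - 1), np.arange(M - 2)] = lower[1:]

    I = np.eye(M - 1)
    lhs = I - theta * dt * A
    rhs_mat = I + (1 - theta) * dt * A

    for _ in range(N):
        rhs = rhs_mat @ u[1:M]
        rhs[0] += dt * lower[0] * K         # boundary u(0, t) = K for an American put
        u[1:M] = np.linalg.solve(lhs, rhs)
        u[0], u[M] = K, 0.0
        u = np.maximum(u, payoff)           # early-exercise projection
    return S, u
```

With θ = 1/2 this is a projected Crank-Nicolson scheme; θ = 1 gives projected implicit Euler.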
Han, Yaohui; Mou, Lan; Xu, Gengchi; Yang, Yiqiang; Ge, Zhenlin
2015-03-01
To construct a three-dimensional finite element model comparing the one-step and two-step methods of torque control of the anterior teeth during space closure. DICOM image data including the maxilla and upper teeth were obtained through cone-beam CT. A three-dimensional model was set up, and the maxilla, upper teeth and periodontium were separated using Mimics software. The models were instantiated using Pro/Engineer software, and Abaqus finite element analysis software was used to simulate sliding mechanics by loading a 1.47 N force on traction hooks of different heights (2, 4, 6, 8, 10, 12 and 14 mm) in order to compare the initial displacement when retracting six maxillary anterior teeth (one-step method) versus four maxillary anterior teeth (two-step method). When moving the anterior teeth bodily, the initial displacements of the central incisors were 29.26 × 10⁻⁶ mm in the two-step method and 15.75 × 10⁻⁶ mm in the one-step method; for the lateral incisors they were 46.76 × 10⁻⁶ mm and 23.18 × 10⁻⁶ mm, respectively. Under the same amount of light force, the initial displacement of the anterior teeth in the two-step method was double that in the one-step method. The root and crown of the canine could not achieve the same amount of displacement in the one-step method. The two-step method produced more initial displacement than the one-step method, and was therefore better suited to achieving torque control of the anterior teeth during space closure.
Community detection enhancement using non-negative matrix factorization with graph regularization
NASA Astrophysics Data System (ADS)
Liu, Xiao; Wei, Yi-Ming; Wang, Jian; Wang, Wen-Jun; He, Dong-Xiao; Song, Zhan-Jie
2016-06-01
Community detection is a meaningful task in the analysis of complex networks and has received considerable attention in various domains. Many methods for community detection have been proposed. A particularly attractive kind is the two-step method, which first preprocesses the network and then identifies its communities. However, not all types of methods achieve satisfactory results with such a preprocessing strategy; non-negative matrix factorization (NMF) methods are one example. In this paper, rather than using the two-step approach as most works do, we propose a graph-regularization-based model, NMFGR, to improve NMF-based methods for community detection. In NMFGR, we introduce a similarity metric that contains both global and local information of the network to reflect the relationships between pairs of nodes, so as to improve the accuracy of community detection. Experimental results on both artificial and real-world networks demonstrate the superior performance of NMFGR over several competing methods.
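The graph-regularization idea can be sketched with GNMF-style multiplicative updates in the manner of Cai et al.'s graph-regularized NMF; this is a generic illustration, not the NMFGR model itself, and all names and defaults below are assumptions:

```python
import numpy as np

def gnmf(A, S, k, lam=0.1, iters=300, seed=0):
    """Graph-regularized NMF sketch: factorize the adjacency matrix A ~ W @ H
    while penalizing lam * tr(H L H^T), where L = D - S is the Laplacian of a
    node-similarity matrix S. Uses multiplicative updates that keep W, H >= 0."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    D = np.diag(S.sum(axis=1))           # degree matrix of the similarity graph
    eps = 1e-9                           # avoid division by zero
    for _ in range(iters):
        W *= (A @ H.T) / (W @ (H @ H.T) + eps)
        # the Laplacian penalty splits as H @ D (denominator) and H @ S (numerator)
        H *= (W.T @ A + lam * H @ S) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H

# community assignment: node j goes to community argmax_i H[i, j]
```

On a toy network of two disjoint cliques (with S taken as the adjacency itself), the argmax of H's columns recovers the two communities.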
Investigating a hybrid perturbation-Galerkin technique using computer algebra
NASA Technical Reports Server (NTRS)
Andersen, Carl M.; Geer, James F.
1988-01-01
A two-step hybrid perturbation-Galerkin method is presented for the solution of a variety of differential-equation-type problems which involve a scalar parameter. The resulting (approximate) solution has the form of a sum where each term consists of the product of two functions. The first is a function of the independent field variable(s) x, and the second is a function of the parameter lambda. In step one, the functions of x are determined by forming a perturbation expansion in lambda. In step two, the functions of lambda are determined through the use of the classical Bubnov-Galerkin method. The resulting hybrid method has the potential of overcoming some of the drawbacks of the perturbation and Bubnov-Galerkin methods applied separately, while combining some of the good features of each. In particular, the results can be useful well beyond the radius of convergence associated with the perturbation expansion. The hybrid method is applied with the aid of computer algebra to a simple two-point boundary value problem where the radius of convergence is finite, and to a quantum eigenvalue problem where the radius of convergence is zero. For both problems the hybrid method apparently converges for an infinite range of the parameter lambda. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the applicability of the hybrid method to broader problem areas is discussed.
User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm.
Bourobou, Serge Thomas Mickala; Yoo, Younghwan
2015-05-21
This paper discusses the possibility of recognizing and predicting user activities in an IoT (Internet of Things) based smart environment. Activity recognition is usually done in two steps: activity pattern clustering and activity type decision. Although many related works have been proposed, their performance was limited because they focused on only one of the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify varied and complex user activities, we use a relevant and efficient unsupervised learning method, the K-pattern clustering algorithm. In the second step, the smart environment is trained to recognize and predict user activities inside a personal space using an artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home.
A hybrid fault diagnosis approach based on mixed-domain state features for rotating machinery.
Xue, Xiaoming; Zhou, Jianzhong
2017-01-01
To further improve diagnosis accuracy and efficiency, a hybrid fault diagnosis approach based on mixed-domain state features, which systematically blends statistical analysis and artificial intelligence techniques, is proposed in this work for rolling element bearings. To simplify the fault diagnosis problem, the proposed method is executed in three steps: preliminary fault detection, fault type recognition and fault degree identification. In the first step, a preliminary judgment about the health status of the equipment is made by a statistical analysis method based on permutation entropy. If a fault exists, two subsequent processes based on artificial intelligence are performed to recognize the fault type and then identify the fault degree. For these two steps, mixed-domain state features containing time-domain, frequency-domain and multi-scale features are extracted to represent the fault characteristics under different working conditions. As a powerful time-frequency analysis method, the fast EEMD method is employed to obtain the multi-scale features. Furthermore, because of information redundancy and the submergence of the original feature space, a novel manifold learning method (modified LGPCA) is introduced to obtain low-dimensional representations of the high-dimensional feature space. Finally, two cases with 12 working conditions each are used to evaluate the performance of the proposed method, with vibration signals measured from an experimental rolling element bearing test bench. The analysis results show the effectiveness and superiority of the proposed method, whose diagnostic approach is well suited to practical application. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
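The permutation-entropy statistic used in the preliminary detection step is straightforward to compute from ordinal patterns of the vibration signal (Bandt-Pompe style); the order and delay defaults below are illustrative:

```python
import math
import numpy as np

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Permutation entropy of a 1-D signal: the Shannon entropy of the
    distribution of ordinal patterns of length `order`, optionally scaled
    to [0, 1] by log2(order!). Low values indicate regular signals, values
    near 1 indicate noise-like irregularity."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(window))      # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), dtype=float) / n
    H = -np.sum(probs * np.log2(probs))
    if normalize:
        H /= math.log2(math.factorial(order))
    return H
```

A monotone ramp yields entropy 0 (a single ordinal pattern), while white noise yields a value near 1; a healthy-vs-faulty threshold would sit between such extremes.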
NASA Astrophysics Data System (ADS)
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
2016-08-01
Numerical solutions of the hydrodynamical model of semiconductor devices are presented in one and two space dimensions. The model describes charge transport in semiconductor devices. Mathematically, the model can be written as a convection-diffusion type system with a right-hand side describing relaxation effects and interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the conservation element and solution element (CE/SE) method for the hyperbolic step and a semi-implicit scheme for the relaxation step. The numerical results of the suggested scheme are compared with a splitting scheme based on the Nessyahu-Tadmor (NT) central scheme for the convection step and the same semi-implicit scheme for the relaxation step. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage on the one-dimensional hydrodynamic model are explored to further validate the generic applicability of the CE/SE method for the model equations. A two-dimensional simulation of a MESFET device is also performed with the CE/SE method, producing results in good agreement with those obtained by the NT central scheme.
Resonant frequency calculations using a hybrid perturbation-Galerkin technique
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1991-01-01
A two-step hybrid perturbation-Galerkin technique is applied to the problem of determining the resonant frequencies of one- or several-degree-of-freedom nonlinear systems involving a parameter. In step one, the Lindstedt-Poincare method is used to determine perturbation solutions which are formally valid about one or more special values of the parameter (e.g., for large or small values of the parameter). In step two, a subset of the perturbation coordinate functions determined in step one is used in a Galerkin-type approximation. The technique is illustrated for several one-degree-of-freedom systems, including the Duffing and van der Pol oscillators, as well as for the compound pendulum. For all of the examples considered, it is shown that the frequencies obtained by the hybrid technique using only a few terms from the perturbation solutions are significantly more accurate than the perturbation results on which they are based, and they compare very well with frequencies obtained by purely numerical methods.
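For the Duffing oscillator x'' + x + εx³ = 0 with amplitude a, the first-order Lindstedt-Poincare expansion gives ω ≈ 1 + 3εa²/8, the kind of perturbation input that step two refines. A sketch comparing that estimate against a directly computed frequency (basic RK4 integration; step sizes and tolerances are illustrative, and this is not the authors' computer-algebra procedure):

```python
import numpy as np

def lp_omega(eps, a):
    """First-order Lindstedt-Poincare frequency for the Duffing oscillator."""
    return 1.0 + 3.0 * eps * a**2 / 8.0

def duffing_frequency(eps, a, dt=1e-3, t_max=100.0):
    """Numerical angular frequency of x'' + x + eps*x**3 = 0 started at
    (x, v) = (a, 0): integrate with RK4 until the velocity first crosses
    zero from below, which marks half a period."""
    def f(y):
        x, v = y
        return np.array([v, -x - eps * x**3])
    y = np.array([a, 0.0])
    t = 0.0
    while t < t_max:
        k1 = f(y)
        k2 = f(y + 0.5 * dt * k1)
        k3 = f(y + 0.5 * dt * k2)
        k4 = f(y + dt * k3)
        y_next = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        if y[1] < 0.0 <= y_next[1]:              # upward zero crossing of v
            frac = -y[1] / (y_next[1] - y[1])    # linear interpolation in time
            return np.pi / (t + frac * dt)       # half period -> omega
        y, t = y_next, t + dt
    raise RuntimeError("no half period found within t_max")
```

For ε = 0.1 and a = 1, the perturbation estimate and the numerical frequency agree to about three decimal places, which is the accuracy the hybrid technique then improves upon for larger ε.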
NASA Astrophysics Data System (ADS)
Amyay, Omar
A method defined in terms of synthesis and verification steps is presented. The specification of the services and protocols of communication within a multilayered architecture of the Open Systems Interconnection (OSI) type is an essential issue in the design of computer networks. The aim is to obtain an operational specification of the service-protocol couple of a given layer. The planned synthesis and verification steps constitute a specification trajectory, based on the progressive integration of the 'initial data' constraints and on verification of the specification produced by each synthesis step against validity constraints that characterize an admissible solution. Two types of trajectories are proposed, according to the style of the initial specification of the service-protocol couple: an operational type from the service-supplier viewpoint, and a knowledge-property-oriented type from the service viewpoint. Synthesis and verification activities were developed and formalized in terms of labeled transition systems, temporal logic and epistemic logic. The originality of the second specification trajectory and of the use of epistemic logic is shown. An 'artificial intelligence' approach enables a conceptual model to be defined for a knowledge-based system implementing the proposed method. It is structured in three levels of representation: knowledge of the domain, the reasoning characterizing synthesis and verification activities, and the planning of the steps of a specification trajectory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wunschel, David S.; Kreuzer-Martin, Helen W.; Antolick, Kathryn C.
2009-12-01
This report describes method development and preliminary evaluation for analyzing castor samples for signatures of ricin purification. Ricin purification from source castor seeds is essentially a protein purification problem using common biochemical methods. Indications of protein purification will likely manifest themselves as removal of the non-protein fractions of the seed. The two major non-protein types of biochemical constituents in the seed are the castor oil and various carbohydrates. The oil comprises roughly half the seed weight, while the carbohydrate component comprises roughly half of the remaining "mash" left after oil and hull removal. Different castor oil and carbohydrate components can serve as indicators of specific toxin processing steps. Ricinoleic acid is a relatively unique fatty acid in nature and is the most abundant component of castor oil. The loss of ricinoleic acid indicates a step to remove oil from the seeds. The relative amounts of carbohydrates and carbohydrate-like compounds detected in the sample, including arabinose, xylose, myo-inositol, fucose, rhamnose, glucosamine and mannose, can also indicate specific processing steps. For instance, the differential loss of arabinose relative to mannose and N-acetyl glucosamine indicates enrichment for the protein fraction of the seed by protein precipitation. The methods developed in this project center on fatty acid and carbohydrate extraction from castor samples followed by derivatization to permit analysis by gas chromatography-mass spectrometry (GC-MS). The method descriptions herein include the source and preparation of the castor materials used for method evaluation, the equipment and procedures required for chemical derivatization, and the instrument parameters used in the analysis. Two derivatization methods are described for the analysis of carbohydrates and one for the analysis of fatty acids.
Two types of GC-MS analysis are included in the method development: one employing a quadrupole MS system for compound identification, and one employing an isotope ratio MS for measuring the stable isotope ratio of deuterium to hydrogen (D/H) in fatty acids. Finally, the method for analyzing the compound abundance data is included. This study indicates that removal of ricinoleic acid is a conserved consequence of each processing step tested. Furthermore, the stable isotope D/H ratio of ricinoleic acid distinguished between two of the three castor seed sources. Concentrations of arabinose, xylose, mannose, glucosamine and myo-inositol differentiated crude or acetone-extracted samples from samples produced by protein precipitation. Taken together, these data illustrate the ability to distinguish between processes used to purify a ricin sample, as well as potentially the source seeds.
Ultra high resolution cation analysis of NGRIP deep ice via cryo-cell UV-laser-ablation ICPMS
NASA Astrophysics Data System (ADS)
Della Lunga, Damiano; Muller, Wolfgang; Olander Rasmussen, Sune; Svensson, Anders
2014-05-01
During glacial periods, Earth experienced abrupt climate change events that led to rapid natural warming/cooling over only a few years (Steffensen et al., 2008). In order to investigate these rapid climate events, especially in old thinned ice, the highest possible spatial (and thus time) resolution analysis of climate proxies is required. A recently developed methodology at Royal Holloway University of London (Müller et al., 2011), which permits in situ chemical analysis of frozen ice with spatial resolution down to 0.1 mm (100 μm) using cryo-cell UV-laser ablation inductively-coupled-plasma mass spectrometry (UV-LA-ICPMS), has been optimized and utilized for analysis of (major) elements indicative of dust and/or sea salt (e.g. Fe, Al, Ca, Mg, Na), while maintaining detection limits in the low ppb range. NGRIP samples of Greenland Stadial GS22 (~86 ka, depth of ~2690 m), representing a minor δ18O shift (of about ± 4) within the stadial phase of D-O event 22, have been selected and analysed. With single storm-event resolution, seasonal, annual and multiannual periodicities of elements have been identified and will be presented with particular focus on the phasing of the climate proxies. Corresponding results also include an optimized UV-LA-ICPMS methodology, particularly with reference to depth profiling, assessment of sample-surface contamination, and standardization. Finally, the location and distribution of soluble and insoluble micro-inclusions in deep ice have also been assessed with respect to the partitioning of elements between grain boundaries and grain interiors. Results show that impurities tend to be concentrated along grain boundaries in clear (winter) ice, whereas in cloudy bands ('dirtier' ice) they are distributed equally between boundaries and interiors. References: Müller, W., Shelley, J.M.G., Rasmussen, S.O., 2011. Direct chemical analysis of frozen ice cores by UV-laser ablation ICPMS. J. Anal. At. Spectrom. 26, 2391-2395.
Steffensen, J.P., Andersen, K.K., Bigler, M., Clausen, H.B., Dahl-Jensen, D., Fischer, H., Goto-Azuma, K., Hansson, M., Johnsen, S.J., Jouzel, J., Masson-Delmotte, V., Popp, T., Rasmussen, S.O., Rothlisberger, R., Ruth, U., Stauffer, B., Siggaard-Andersen, M.L., Sveinbjornsdottir, A.E., Svensson, A., White, J.W.C., 2008. High-resolution Greenland Ice Core data show abrupt climate change happens in few years. Science 321, 680-684.
Automatic 3D kidney segmentation based on shape constrained GC-OAAM
NASA Astrophysics Data System (ADS)
Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua
2011-03-01
The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo 3D segmentation method is employed for kidney initialization in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm is proposed for the AAM optimization, which synergistically combines the AAM and LW method. Multi-object strategy is applied to help the object initialization. The 3D model constraints are applied to the initialization result. Second, the object shape information generated from the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.
Demir, Aydeniz; Köleli, Nurcan
2013-01-01
A two-step method for the remediation of three different types of lead (Pb)-contaminated soil was evaluated. The first step was soil washing with ethylenediaminetetraacetic acid (EDTA) to remove Pb from the soils. The washing experiments were performed with 0.05 M Na2EDTA at a 1:10 soil-to-liquid ratio, and Pb removal efficiencies ranged from 50 to 70%. After the soil washing, the Pb2+ ions in the washing solution were reduced electrochemically in a fixed-bed reactor; Pb removal efficiencies at a potential of -2.0 V ranged from 57 to 76%. The overall results indicate that this two-step method is an environmentally friendly and effective technology for remediating Pb-contaminated soils, as well as for treating Pb-contaminated wastewater, owing to the transformation of toxic Pb2+ ions into a non-hazardous metallic form (Pb(0)).
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data with latent cluster-level random effects, which are ignored in the conventional Cox model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs and type I error rates and acceptable coverage rates, regardless of the true random-effects distribution, and avoid the serious variance underestimation of conventional Cox-based standard errors. However, the two-step bootstrap method overestimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
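The cluster-bootstrap resampling logic can be sketched on a toy statistic (a plain mean rather than a Cox coefficient, so no survival library is needed; names and simulation settings below are illustrative):

```python
import numpy as np

def cluster_bootstrap_se(values, cluster_ids, stat=np.mean, B=500, seed=0):
    """Cluster-bootstrap SE of a statistic: resample whole clusters with
    replacement, recompute the statistic on the pooled resample, and take
    the standard deviation across replicates."""
    rng = np.random.default_rng(seed)
    clusters = [values[cluster_ids == c] for c in np.unique(cluster_ids)]
    G = len(clusters)
    stats = []
    for _ in range(B):
        draw = rng.integers(0, G, size=G)         # sample G cluster indices
        sample = np.concatenate([clusters[g] for g in draw])
        stats.append(stat(sample))
    return np.std(stats, ddof=1)
```

When the data carry strong cluster-level random effects, the cluster-bootstrap SE of the overall mean is several times larger than the naive i.i.d. SE, which is exactly the underestimation the abstract warns about for conventional Cox-based SEs.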
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Step two. 14.503-2 Section... AND CONTRACT TYPES SEALED BIDDING Two-Step Sealed Bidding 14.503-2 Step two. (a) Sealed bidding... submitting acceptable technical proposals in step one; (2) Include the provision prescribed in 14.201-6(t...
NASA Astrophysics Data System (ADS)
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
In this article, one- and two-dimensional hydrodynamical models of semiconductor devices are numerically investigated. The models treat the propagation of electrons in a semiconductor device as the flow of a charged compressible fluid, and they play an important role in predicting the behavior of electron flow in semiconductor devices. Mathematically, the governing equations form a convection-diffusion type system with a right-hand side describing relaxation effects and interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the kinetic flux-vector splitting (KFVS) method for the hyperbolic step, and a semi-implicit Runge-Kutta method for the relaxation step. The KFVS method is based on the direct splitting of the macroscopic flux functions of the system at the cell interfaces. Second-order accuracy is achieved by using MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. Several case studies are considered. For validation, the results of the current scheme are compared with those obtained from a splitting scheme based on the NT central scheme. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage are analyzed. The accuracy, efficiency and simplicity of the proposed KFVS scheme validate its generic applicability to the given model equations. A two-dimensional simulation of a MESFET device is also performed with the KFVS method, producing results in good agreement with those obtained by the NT central scheme.
Yu, Yan; Jiang, Shenglin; Zhou, Wenli; Miao, Xiangshui; Zeng, Yike; Zhang, Guangzu; Liu, Sisi
2013-01-01
Functional layers of few-layer two-dimensional (2-D) thin flakes on flexible polymers for stretchable applications have attracted much interest. However, most fabrication methods are "indirect" processes that require transfer steps, and previously reported "transfer-free" methods are suitable only for graphene and not for other few-layer 2-D thin flakes. Here, a friction-based, room-temperature rubbing method is proposed for fabricating different types of few-layer 2-D thin flakes (graphene, hexagonal boron nitride (h-BN), molybdenum disulphide (MoS2), and tungsten disulphide (WS2)) on flexible polymer substrates. Commercial 2-D raw materials (graphite, h-BN, MoS2, and WS2) containing thousands of atomic layers were used. After several minutes of rubbing at room temperature, the different types of few-layer 2-D thin flakes were fabricated directly on the flexible polymer substrates without any transfer step. These few-layer 2-D thin flakes adhere strongly to the flexible polymer substrates, which is beneficial for future applications. PMID:24045289
NASA Astrophysics Data System (ADS)
Chen, Ying; Huang, Jinfang; Yeap, Zhao Qin; Zhang, Xue; Wu, Shuisheng; Ng, Chiew Hoong; Yam, Mun Fei
2018-06-01
Anoectochilus roxburghii (Wall.) Lindl. (Orchidaceae) is a precious traditional Chinese medicinal herb that has long been used to treat various illnesses. However, unethical sellers have adulterated wild A. roxburghii with tissue-cultured and cultivated material, so there is an urgent need for an effective authentication method to differentiate between these types of A. roxburghii. In this research, an infrared spectroscopic tri-step identification approach, comprising Fourier transform infrared spectroscopy (FT-IR), second-derivative infrared spectra (SD-IR) and two-dimensional correlation infrared spectra (2D-IR), was used to develop a simple and rapid method to discriminate between wild, cultivated and tissue-cultured A. roxburghii plants. All three types of A. roxburghii were successfully identified and discriminated by this tri-step method. In addition, all samples of wild, cultivated and tissue-cultured A. roxburghii were analysed with the Soft Independent Modelling of Class Analogy (SIMCA) pattern recognition technique to verify the experimental results. The three types of A. roxburghii were discriminated clearly, with a recognition rate of 100% for all three types and a rejection rate of more than 60%; 70% of the validation samples were also identified correctly by the SIMCA model. The SIMCA model was further validated by comparison against 70 standard herbs. These results demonstrate that the macroscopic IR fingerprint method and the classification analysis can discriminate not only between A. roxburghii samples and the standard herbs, but also between the three different types of A. roxburghii plant, in a direct, rapid and holistic manner.
Separation negatives from Kodak film types SO-368 and SO-242
NASA Technical Reports Server (NTRS)
Weinstein, M. S.
1972-01-01
Two master resolution friskets were produced on Kodak film types SO-368 and SO-242. These target masters consisted of 21 density steps with three-bar resolution targets at five modulation levels within each step. The target masters were contact printed onto Kodak separation negative film, type 4131, using both a contact printing frame and enlarger as one method of exposure, and a Miller-Holzwarth contact printer as the other exposing device. Red, green, and blue Wratten filters were used to filter the exposing source. Tray processing was done with DK-50 developer diluted 1:2 at a temperature of 70 F. The resolution values were read for the SO-368 and SO-242 target masters, and the red, green, and blue separation negatives.
Signatures of two-step impurity mediated vortex lattice melting in Bose-Einstein condensate
NASA Astrophysics Data System (ADS)
Dey, Bishwajyoti
2017-04-01
We study impurity-mediated vortex lattice melting in a rotating two-dimensional Bose-Einstein condensate (BEC). Impurities are introduced through one of two protocols: either the vortex lattice is produced in the presence of an impurity potential, or the vortex lattice is first created without random pinning and the impurity potential is then cranked up. These two protocols correspond to the two standard protocols for creating a vortex lattice in a type-II superconductor: zero-field cooling and field cooling, respectively. The time-splitting Crank-Nicolson method is used to numerically simulate the vortex lattice dynamics. It is shown that the vortex lattice follows a two-step melting via loss of positional and orientational order. This melting process in the BEC closely mimics the recently observed two-step melting of vortex matter in the weakly pinned type-II superconductor Co-intercalated NbSe2. Also, using numerical perturbation analysis, we compare the states obtained under the two protocols and show that the vortex lattice states are metastable and more disordered when impurities are introduced after the formation of an ordered vortex lattice. The author would like to thank SERB, Govt. of India and BCUD-SPPU for financial support through research grants.
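The paper's time-splitting Crank-Nicolson scheme targets the Gross-Pitaevskii equation, which is too heavy to reproduce here; as a much smaller illustration of the Crank-Nicolson time step itself, the sketch below applies it to the 1-D diffusion equation u_t = u_xx with a tridiagonal (Thomas) solve. Grid sizes are arbitrary choices for the example.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a: sub-diagonal (a[0] unused),
    b: main diagonal, c: super-diagonal (c[-1] unused)."""
    n = len(d)
    c_ = [0.0] * n
    d_ = [0.0] * n
    c_[0] = c[0] / b[0]
    d_[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * c_[i - 1]
        c_[i] = c[i] / m if i < n - 1 else 0.0
        d_[i] = (d[i] - a[i] * d_[i - 1]) / m
    x = [0.0] * n
    x[-1] = d_[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d_[i] - c_[i] * x[i + 1]
    return x

def crank_nicolson_heat(u, dt, dx, steps):
    """Crank-Nicolson for u_t = u_xx on interior points, u = 0 at both ends."""
    n = len(u)
    r = dt / dx ** 2
    a = [-r / 2] * n  # sub-diagonal of (I - r/2 * L)
    b = [1 + r] * n   # main diagonal
    c = [-r / 2] * n  # super-diagonal
    for _ in range(steps):
        rhs = []
        for j in range(n):
            left = u[j - 1] if j > 0 else 0.0
            right = u[j + 1] if j < n - 1 else 0.0
            rhs.append((r / 2) * left + (1 - r) * u[j] + (r / 2) * right)
        u = thomas(a, b, c, rhs)
    return u

# A sine mode decays as exp(-pi**2 * t); check the scheme against that.
dx, dt = 0.05, 0.001
u0 = [math.sin(math.pi * j * dx) for j in range(1, 20)]
u = crank_nicolson_heat(u0, dt, dx, steps=10)
decay = math.exp(-math.pi ** 2 * 0.01)
err = max(abs(a_ - b_ * decay) for a_, b_ in zip(u, u0))
print(err < 1e-2)  # → True
```

The scheme is second-order in both time and space, which is why the coarse grid above still tracks the analytic decay closely.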
Zhang, Qingyang
2018-05-16
Differential co-expression analysis, as a complement to differential expression analysis, offers significant insights into the changes in molecular mechanism between phenotypes. A prevailing approach to detecting differentially co-expressed genes is to compare Pearson's correlation coefficients in two phenotypes. However, due to the limitations of Pearson's correlation measure, this approach lacks the power to detect nonlinear changes in gene co-expression, which are common in gene regulatory networks. In this work, a new nonparametric procedure is proposed to search for differentially co-expressed gene pairs in different phenotypes from large-scale data. Our computational pipeline consists of two main steps, a screening step and a testing step. The screening step reduces the search space by filtering out all the independent gene pairs using the distance correlation measure. In the testing step, we compare the gene co-expression patterns in different phenotypes by a recently developed edge-count test. Both steps are distribution-free and target nonlinear relations. We illustrate the promise of the new approach by analyzing the Cancer Genome Atlas data and the METABRIC data for breast cancer subtypes. Compared with some existing methods, the new method is more powerful in detecting nonlinear types of differential co-expression. The distance correlation screening greatly improves computational efficiency, facilitating application to large data sets.
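The screening step relies on distance correlation, which, unlike Pearson's r, vanishes only under independence and therefore catches nonlinear dependence. A minimal pure-Python version of the sample statistic (the paper's full pipeline and the edge-count test are not reproduced):

```python
def distance_correlation(x, y):
    """Sample distance correlation of two 1-D samples (V-statistic form)."""
    n = len(x)

    def centered(v):
        d = [[abs(v[i] - v[j]) for j in range(n)] for i in range(n)]
        row = [sum(r) / n for r in d]          # row means (= column means)
        grand = sum(row) / n
        return [[d[i][j] - row[i] - row[j] + grand for j in range(n)]
                for i in range(n)]

    A, B = centered(x), centered(y)
    dcov2 = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / n ** 2
    dvarx = sum(a * a for r in A for a in r) / n ** 2
    dvary = sum(b * b for r in B for b in r) / n ** 2
    return (dcov2 / (dvarx * dvary) ** 0.5) ** 0.5

# y = x**2 is uncorrelated with x in Pearson's sense (symmetric design),
# yet clearly dependent -- distance correlation picks this up.
x = [-3, -2, -1, 0, 1, 2, 3]
y = [v * v for v in x]
print(distance_correlation(x, y))  # well above zero
```

A screening rule would keep only pairs whose distance correlation exceeds a threshold in at least one phenotype.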
Yang, James J; Williams, L Keoki; Buu, Anne
2017-08-24
A multivariate genome-wide association test is proposed for analyzing data on multivariate quantitative phenotypes collected from related subjects. The proposed method is a two-step approach. The first step models the association between the genotype and marginal phenotype using a linear mixed model. The second step uses the correlation between residuals of the linear mixed model to estimate the null distribution of the Fisher combination test statistic. The simulation results show that the proposed method controls the type I error rate and is more powerful than the marginal tests across different population structures (admixed or non-admixed) and relatedness (related or independent). The statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that applying the multivariate association test may facilitate identification of the pleiotropic genes contributing to the risk for alcohol dependence commonly expressed by four correlated phenotypes. This study proposes a multivariate method for identifying pleiotropic genes while adjusting for cryptic relatedness and population structure between subjects. The two-step approach is not only powerful but also computationally efficient even when the number of subjects and the number of phenotypes are both very large.
User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm
Bourobou, Serge Thomas Mickala; Yoo, Younghwan
2015-01-01
This paper discusses the possibility of recognizing and predicting user activities in an IoT (Internet of Things) based smart environment. Activity recognition is usually done in two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, their performance was limited because they focused on only one of the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify highly varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the smart environment is trained to recognize and predict user activities inside the user's personal space by utilizing an artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home. PMID:26007738
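The paper's K-pattern algorithm is not reproduced here; as a generic stand-in for the clustering step, a minimal k-means over invented activity feature vectors (e.g. duration and hour of day) shows the grouping idea:

```python
def kmeans(points, k, iters=20):
    """Plain k-means with deterministic init from the first k points."""
    centers = [list(p) for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: sum((a - b) ** 2
                    for a, b in zip(p, centers[c])))
            groups[i].append(p)
        for i, g in enumerate(groups):
            if g:  # keep the old center if a cluster empties out
                centers[i] = [sum(col) / len(g) for col in zip(*g)]
    return centers, groups

# Two invented activity "patterns": short morning events vs long evening ones.
data = [(0.1, 1), (0.2, 1), (0.0, 2), (5.0, 9), (5.2, 8), (4.9, 9)]
centers, groups = kmeans(data, 2)
print(sorted(len(g) for g in groups))  # → [3, 3]
```

The resulting cluster labels would then feed the temporal-ANN decision stage described in the abstract.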
Capote, F Priego; Jiménez, J Ruiz; de Castro, M D Luque
2007-08-01
An analytical method for the sequential detection, identification and quantitation of extra virgin olive oil adulteration with four edible vegetable oils--sunflower, corn, peanut and coconut oils--is proposed. The only data required for this method are the results obtained from an analysis of the lipid fraction by gas chromatography-mass spectrometry. A total of 566 samples (pure oils and samples of adulterated olive oil) were used to develop the chemometric models, which were designed to accomplish, step by step, the three aims of the method: to detect whether an olive oil sample is adulterated, to identify the type of adulterant used in the fraud, and to determine how much adulterant is in the sample. Qualitative analysis was carried out via two chemometric approaches--soft independent modelling of class analogy (SIMCA) and K nearest neighbours (KNN)--and both exhibited prediction abilities higher than 91% for adulterant detection and 88% for identification of the type of adulterant. Quantitative analysis was based on partial least squares regression (PLSR), which yielded R2 values of >0.90 for calibration and validation sets and thus made it possible to determine adulteration with excellent precision according to the Shenk criteria.
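Of the two qualitative classifiers, KNN is the simpler to sketch: an unknown sample is assigned the majority class of its k nearest training samples in feature space. The features and values below are invented for illustration; the real method works on GC-MS lipid-fraction profiles.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); majority vote of k nearest
    neighbours by squared Euclidean distance."""
    ranked = sorted(train, key=lambda item: sum(
        (a - b) ** 2 for a, b in zip(item[0], query)))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Invented 2-D features, e.g. two fatty-acid ratios per oil sample.
train = [((0.10, 0.90), "olive"), ((0.20, 0.80), "olive"),
         ((0.15, 0.85), "olive"), ((0.90, 0.10), "sunflower"),
         ((0.80, 0.20), "sunflower"), ((0.85, 0.15), "sunflower")]
print(knn_predict(train, (0.12, 0.88)))  # → olive
```

SIMCA differs in spirit: it fits a separate principal-component model per class and accepts or rejects a sample against each model, which is what makes the rejection rate a meaningful figure.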
Scanning tunneling microscope study of GaAs(001) surfaces grown by migration enhanced epitaxy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J.; Gallagher, M.C.; Willis, R.F.
We report an investigation of the morphology of p-type GaAs(001) surfaces using scanning tunneling microscopy (STM). The substrates were prepared using two methods: migration enhanced epitaxy (MEE) and standard molecular-beam epitaxy (MBE). The STM measurements were performed ex situ using As decapping. Analysis indicates that the overall step density of the MEE samples decreases as the growth temperature is increased. Nominally flat samples grown at 300 °C exhibited step densities of 10.5 steps/1000 Å along [110], dropping to 2.5 steps at 580 °C. MEE samples exhibited a lower step density than MBE samples. However, as-grown surfaces exhibited a larger distribution of step heights. Annealing the samples reduced the step height distribution, exposing fewer atomic layers. Samples grown by MEE at 580 °C and annealed for 2 min displayed the lowest step density and the narrowest step height distribution. All samples displayed an anisotropic step density. We found a ratio of A-type to B-type steps of between 2 and 3, which directly reflects the difference in the incorporation energy at steps. The aspect ratio increased slightly with growth temperature. We found a similar aspect ratio on samples grown by MBE. This indicates that anisotropic growth during MEE, like MBE, is dominated by incorporation kinetics. MEE samples grown at 580 °C and capped immediately following growth exhibited a number of "holes" in the surface. The holes could be eliminated by annealing the surface prior to quenching. 20 refs., 3 figs., 1 tab.
Testing the Stability of 2-D Recursive QP, NSHP and General Digital Filters of Second Order
NASA Astrophysics Data System (ADS)
Rathinam, Ananthanarayanan; Ramesh, Rengaswamy; Reddy, P. Subbarami; Ramaswami, Ramaswamy
Several methods for testing the stability of first-quadrant quarter-plane two-dimensional (2-D) recursive digital filters were suggested in the 1970s and '80s. Although Jury's row and column algorithms and the row- and column-concatenation stability tests are considered highly efficient mapping methods, they still fall short of accuracy, since they need an infinite number of steps to decide the exact stability of a filter and the computational time required is enormous. In this paper, we present a procedurally simple algebraic method requiring only two steps when applied to a second-order 2-D quarter-plane filter. We extend the same method to second-order non-symmetric half-plane (NSHP) filters. Examples are given for both these types of filters as well as for some lower-order general recursive 2-D digital filters. We applied our method to barely stable or barely unstable filter examples available in the literature and obtained the same decisions, showing that our method is sufficiently accurate.
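The authors' two-step algebraic test is not reproduced here. As a numerical baseline, stability of a second-order quarter-plane filter 1/B(z1, z2) can be checked with the classical conditions (Huang's theorem): B(z1, 1) must have all its z1-zeros strictly inside the unit circle, and for every z1 on the unit circle the cleared polynomial in z2 must have its zeros strictly inside the unit circle. Because the filter is second order, the zeros come straight from the quadratic formula; the unit-circle sweep below is a sampled (hence approximate) version of the exact condition.

```python
import cmath

def roots_inside(c2, c1, c0, tol=1e-12):
    """True if all zeros of c2*z**2 + c1*z + c0 lie strictly inside |z| < 1.
    A vanishing leading coefficient puts a zero at infinity -> fail."""
    if abs(c2) < tol:
        if abs(c1) < tol:
            return abs(c0) > tol  # nonzero constant: no zeros at all
        return False
    disc = cmath.sqrt(c1 * c1 - 4 * c2 * c0)
    return all(abs(r) < 1 for r in ((-c1 + disc) / (2 * c2),
                                    (-c1 - disc) / (2 * c2)))

def stable_quarter_plane(b, samples=720):
    """b[m][n]: coefficient of z1**-m * z2**-n in B(z1, z2), b[0][0] = 1."""
    # Condition 1: B(z1, 1) has no zeros with |z1| >= 1.
    e = [sum(b[m]) for m in range(3)]
    if not roots_inside(e[0], e[1], e[2]):
        return False
    # Condition 2: for z1 on the unit circle, no z2-zeros with |z2| >= 1.
    for k in range(samples):
        z1 = cmath.exp(2j * cmath.pi * k / samples)
        d = [sum(b[m][n] * z1 ** -m for m in range(3)) for n in range(3)]
        if not roots_inside(d[0], d[1], d[2]):
            return False
    return True

# Separable example (1 - 0.5/z1)(1 - 0.5/z2): both poles at 0.5 -> stable.
b_stable = [[1.0, -0.5, 0.0], [-0.5, 0.25, 0.0], [0.0, 0.0, 0.0]]
print(stable_quarter_plane(b_stable))  # → True
```

Moving the z2 pole to 1.5, i.e. the factor (1 - 1.5/z2), makes the same check return False, as expected.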
Takashiri, Masayuki; Asai, Yuki; Yamauchi, Kazuki
2016-08-19
We investigated the effects of homogeneous electron beam (EB) irradiation and thermal annealing treatments on the structural, optical, and transport properties of bismuth telluride thin films. Bismuth telluride thin films were prepared by an RF magnetron sputtering method at room temperature. After deposition, the films were treated with homogeneous EB irradiation, thermal annealing, or a combination of both treatments (two-step treatment). We employed Williamson-Hall analysis to separate the strain contribution from the crystallite domain contribution in the x-ray diffraction data of the films. We found that strain was induced in the thin films by EB irradiation and was relieved by thermal annealing. The crystal orientation along the c-axis was significantly enhanced by the two-step treatment. Scanning electron microscopy indicated melting and aggregation of nano-sized grains on the film surface after the two-step treatment. Optical analysis indicated that the interband transition of all the thin films was possibly of the indirect type, and that the thermal annealing and two-step treatments increased the band gap of the films due to relaxation of the strain. Thermoelectric performance was significantly improved by the two-step treatment. The power factor reached 17.2 μW cm⁻¹ K⁻², approximately 10 times higher than that of the as-deposited thin films. We conclude that improving the crystal orientation and relaxing the strain resulted in enhanced thermoelectric performance.
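Williamson-Hall analysis separates the two broadening sources with a straight-line fit: β·cosθ = Kλ/D + 4ε·sinθ, so the slope gives the microstrain ε and the intercept the crystallite size D. A sketch on synthetic peak data (the shape factor K, wavelength, and peak list are illustrative choices, not values from the paper):

```python
import math

def williamson_hall(two_thetas_deg, betas_rad, wavelength=0.15406, K=0.9):
    """Least-squares fit of beta*cos(theta) against 4*sin(theta).
    Returns (crystallite_size, strain); wavelength in nm (Cu K-alpha)."""
    xs, ys = [], []
    for tt, beta in zip(two_thetas_deg, betas_rad):
        theta = math.radians(tt / 2.0)
        xs.append(4.0 * math.sin(theta))
        ys.append(beta * math.cos(theta))
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return K * wavelength / intercept, slope

# Synthetic peak widths generated from D = 30 nm, strain = 0.002:
D_true, eps_true = 30.0, 0.002
two_thetas = [20.0, 30.0, 40.0, 50.0, 60.0]
betas = [(0.9 * 0.15406 / D_true + 4 * eps_true * math.sin(math.radians(tt / 2)))
         / math.cos(math.radians(tt / 2)) for tt in two_thetas]
D, eps = williamson_hall(two_thetas, betas)
print(round(D, 1), round(eps, 4))  # → 30.0 0.002
```

On real diffraction data the points scatter about the line; the sign and size of the fitted slope are what indicate strain induced or relieved by a treatment.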
Rothrock, Michael J.; Hiett, Kelli L.; Gamble, John; Caudill, Andrew C.; Cicconi-Hogan, Kellie M.; Caporaso, J. Gregory
2014-01-01
The efficacy of DNA extraction protocols can be highly dependent upon both the type of sample being investigated and the types of downstream analyses performed. Considering that the use of new bacterial community analysis techniques (e.g., microbiomics, metagenomics) is becoming more prevalent in the agricultural and environmental sciences and many environmental samples within these disciplines can be physiochemically and microbiologically unique (e.g., fecal and litter/bedding samples from the poultry production spectrum), appropriate and effective DNA extraction methods need to be carefully chosen. Therefore, a novel semi-automated hybrid DNA extraction method was developed specifically for use with environmental poultry production samples. This method is a combination of the two major types of DNA extraction: mechanical and enzymatic. An intense two-step mechanical homogenization (using bead-beating specifically formulated for environmental samples) was added to the beginning of the “gold standard” enzymatic DNA extraction method for fecal samples to enhance the removal of bacteria and DNA from the sample matrix and improve the recovery of Gram-positive bacterial community members. Once the enzymatic extraction portion of the hybrid method was initiated, the remaining purification process was automated using a robotic workstation to increase sample throughput and decrease sample processing error. In comparison to the strictly mechanical and strictly enzymatic DNA extraction methods, this novel hybrid method provided the best overall combined performance when considering quantitative (using 16S rRNA qPCR) and qualitative (using microbiomics) estimates of the total bacterial communities when processing poultry feces and litter samples. PMID:25548939
Twostep-by-twostep PIRK-type PC methods with continuous output formulas
NASA Astrophysics Data System (ADS)
Cong, Nguyen Huu; Xuan, Le Ngoc
2008-11-01
This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this case, the integration can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) give a faster integration process. Fixed-stepsize applications of these TBTPIRKC methods to a few widely used test problems reveal that the new PC methods are much more efficient than the well-known parallel-iterated RK methods (PIRK methods), the parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods) and the sequential explicit RK codes DOPRI5 and DOP853 available from the literature.
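The PC iteration behind PIRK-type methods can be sketched for a single step: predict the stage values, then correct them by fixed-point iteration of the implicit collocation RK stage equations. The stage function evaluations within each sweep are mutually independent, which is the source of parallelism. Below is a minimal serial sketch with the 2-stage Gauss corrector and a trivial predictor; the continuous output formulas and the twostep-by-twostep mechanism of the paper are not reproduced.

```python
import math

# 2-stage Gauss-Legendre collocation corrector (classical order 4)
S3 = math.sqrt(3.0)
A = [[0.25, 0.25 - S3 / 6.0], [0.25 + S3 / 6.0, 0.25]]
B = [0.5, 0.5]

def pirk_step(f, t, y, h, sweeps=5):
    """One predictor-corrector step: trivial predictor Y_i = y, then
    `sweeps` fixed-point corrections of the implicit stage equations."""
    c = [sum(row) for row in A]
    Y = [y, y]  # predictor
    for _ in range(sweeps):
        F = [f(t + c[i] * h, Y[i]) for i in range(2)]  # parallelizable sweep
        Y = [y + h * sum(A[i][j] * F[j] for j in range(2)) for i in range(2)]
    F = [f(t + c[i] * h, Y[i]) for i in range(2)]
    return y + h * sum(B[i] * F[i] for i in range(2))

# y' = -y, y(0) = 1: integrate to t = 1 and compare with exp(-1).
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = pirk_step(lambda tt, u: -u, t, y, h)
    t += h
print(abs(y - math.exp(-1.0)) < 1e-6)  # → True
```

For nonstiff problems and modest step sizes, a handful of sweeps suffices to recover the corrector's order, which is the economic argument for the whole PIRK family.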
NASA Astrophysics Data System (ADS)
Katin, Viktor; Kosygin, Vladimir; Akhtiamov, Midkhat
2017-10-01
This paper substantiates a method of mathematical experiment planning for selecting the most efficient types of burner for tubular refinery furnaces of vertical-cylindrical design. An experimental plan based on a 4×4 Latin square is considered in detail for studying the impact of three factors, each varied over four levels. On the basis of the experimental research, practical recommendations are developed on the use of optimal burners for two-step fuel combustion.
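A 4×4 Latin square lets one factor vary over rows, one over columns, and the third over the square's symbols, with each symbol appearing exactly once per row and per column. The cyclic construction is a standard way to generate such a plan (the burner labels below are invented for illustration):

```python
def latin_square(n):
    """Cyclic n x n Latin square: cell (i, j) gets symbol (i + j) mod n."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

square = latin_square(4)
burners = ["A", "B", "C", "D"]  # hypothetical burner types (third factor)
for row in square:
    print(" ".join(burners[s] for s in row))
```

With rows and columns assigned to the other two factors, the 16 cells cover all factor-level combinations needed for the main-effect analysis, instead of the 4³ = 64 runs of a full factorial.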
Suzuki, Yasuhiro; Kagawa, Naoko; Fujino, Toru; Sumiya, Tsuyoshi; Andoh, Taichi; Ishikawa, Kumiko; Kimura, Rie; Kemmochi, Kiyokazu; Ohta, Tsutomu; Tanaka, Shigeo
2005-01-01
There is an increasing demand for easy, high-throughput (HTP) methods for protein engineering to support advances in the development of structural biology, bioinformatics and drug design. Here, we describe an N- and C-terminal cloning method utilizing Gateway cloning technology that we have adopted for chimeric and mutant genes production as well as domain shuffling. This method involves only three steps: PCR, in vitro recombination and transformation. All three processes consist of simple handling, mixing and incubation steps. We have characterized this novel HTP method on 96 targets with >90% success. Here, we also discuss an N- and C-terminal cloning method for domain shuffling and a combination of mutation and chimeragenesis with two types of plasmid vectors. PMID:16009811
Jagtap, Pratik; Goslinga, Jill; Kooren, Joel A; McGowan, Thomas; Wroblewski, Matthew S; Seymour, Sean L; Griffin, Timothy J
2013-04-01
Large databases (>10^6 sequences) used in metaproteomic and proteogenomic studies present challenges in matching peptide sequences to MS/MS data using database-search programs. Most notably, strict filtering to avoid false-positive matches leads to more false negatives, thus constraining the number of peptide matches. To address this challenge, we developed a two-step method wherein matches derived from a primary search against a large database were used to create a smaller subset database. The second search was performed against a target-decoy version of this subset database merged with a host database. High confidence peptide sequence matches were then used to infer protein identities. Applying our two-step method for both metaproteomic and proteogenomic analysis resulted in twice the number of high confidence peptide sequence matches in each case, as compared to the conventional one-step method. The two-step method captured almost all of the same peptides matched by the one-step method, with a majority of the additional matches being false negatives from the one-step method. Furthermore, the two-step method improved results regardless of the database search program used. Our results show that our two-step method maximizes the peptide matching sensitivity for applications requiring large databases, especially valuable for proteogenomics and metaproteomics studies. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
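The essence of the two-step strategy is easy to sketch: proteins hit in the primary search define a small subset database, which is merged with a host database and with decoys (here simple sequence reversal, a common decoy construction) for the second, stricter search. All sequences and identifiers below are invented; this is not the authors' pipeline.

```python
def build_second_step_db(full_db, primary_hits, host_db):
    """full_db/host_db: dicts of {protein_id: sequence}.
    Returns subset + host targets merged with reversed-sequence decoys."""
    subset = {pid: seq for pid, seq in full_db.items() if pid in primary_hits}
    subset.update(host_db)
    decoys = {"DECOY_" + pid: seq[::-1] for pid, seq in subset.items()}
    return {**subset, **decoys}

full_db = {"P%05d" % i: "MKT" * (i % 7 + 1) for i in range(1000)}  # toy large DB
host_db = {"HOST_1": "MSSEQ"}
hits = {"P00003", "P00042"}
db2 = build_second_step_db(full_db, hits, host_db)
print(len(db2))  # → 6: two subset targets + one host target + three decoys
```

Because the second search sees a database orders of magnitude smaller, the same score threshold yields far fewer false negatives, which is the gain the abstract reports.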
Visibility Equalizer Cutaway Visualization of Mesoscopic Biological Models.
Le Muzic, M; Mindek, P; Sorger, J; Autin, L; Goodsell, D; Viola, I
2016-06-01
In scientific illustrations and visualization, cutaway views are often employed as an effective technique for occlusion management in densely packed scenes. We propose a novel method for authoring cutaway illustrations of mesoscopic biological models. In contrast to existing cutaway algorithms, we take advantage of the specific nature of the biological models, which consist of thousands of instances of a comparably small number of different types. Our method constitutes a two-stage process. In the first stage, clipping objects are placed in the scene, creating a cutaway visualization of the model. During this process, a hierarchical list of stacked bars informs the user about the instance visibility distribution of each individual molecular type in the scene. In the second stage, the visibility of each molecular type is fine-tuned through these bars, which at this point act as interactive visibility equalizers. An evaluation of our technique with domain experts confirmed that our equalizer-based approach to visibility specification was valuable and effective for both scientific and educational purposes.
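The per-type visibility bars reduce to simple bookkeeping over instances: count how many instances of each molecular type survive clipping, and cull the visible instances of a type down to a user-chosen fraction, which is what an equalizer slider effectively does. The data layout below is invented for illustration.

```python
import random

def visibility_histogram(instances):
    """instances: list of (type_name, is_visible).
    Returns {type: (visible_count, total_count)}."""
    hist = {}
    for t, vis in instances:
        shown, total = hist.get(t, (0, 0))
        hist[t] = (shown + int(vis), total + 1)
    return hist

def equalize(instances, type_name, fraction, rng=None):
    """Hide a random subset so only `fraction` of the visible instances
    of one type remain -- the effect of dragging that type's bar down."""
    rng = rng or random.Random(0)
    idx = [i for i, (t, vis) in enumerate(instances) if t == type_name and vis]
    for i in rng.sample(idx, len(idx) - int(fraction * len(idx))):
        instances[i] = (type_name, False)
    return instances

scene = [("RNA", True)] * 10 + [("capsid", True)] * 40
scene = equalize(scene, "capsid", 0.25)
print(visibility_histogram(scene))  # → {'RNA': (10, 10), 'capsid': (10, 40)}
```

In the actual technique the culling is not random but driven by the clipping geometry; the histogram is what the stacked bars display.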
Twisk, J W R; Hoogendijk, E O; Zwijsen, S A; de Boer, M R
2016-04-01
Within epidemiology, a stepped wedge trial design (i.e., a one-way crossover trial in which several arms start the intervention at different time points) is increasingly popular as an alternative to a classical cluster randomized controlled trial. Despite this increasing popularity, there is huge variation in the methods used to analyze data from a stepped wedge trial design. Four linear mixed models were used to analyze data from a stepped wedge trial design on two example data sets. The four methods were chosen because they have been frequently used in practice. Method 1 compares all the intervention measurements with the control measurements. Method 2 treats the intervention variable as a time-independent categorical variable, comparing the different arms with each other. In method 3, the intervention variable is a time-dependent categorical variable comparing groups with different numbers of intervention measurements, whereas in method 4, the changes in the outcome variable between subsequent measurements are analyzed. In the first example data set, methods 1 and 3 showed a strong positive intervention effect, which disappeared after adjusting for time. Method 2 showed an inverse intervention effect, whereas method 4 did not show a significant effect at all. In the second example data set, the results were the opposite. Both methods 2 and 4 showed significant intervention effects, whereas the other two methods did not. For method 4, the intervention effect attenuated after adjustment for time. Different methods to analyze data from a stepped wedge trial design reveal different aspects of a possible intervention effect. The choice of a method partly depends on the type of the intervention and the possible time-dependent effect of the intervention. Furthermore, it is advised to combine the results of the different methods to obtain an interpretable overall result. Copyright © 2016 Elsevier Inc. All rights reserved.
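The contrast between method 1 with and without time adjustment can be made concrete on a toy stepped wedge data set: with a secular time trend, the raw intervention-control comparison is biased upward, while a within-period (time-adjusted) estimator recovers the true effect. The data below are deterministic and invented (true effect 2.0, linear time trend, no noise or cluster effects), so this only illustrates the confounding mechanism, not the paper's mixed models.

```python
def raw_difference(rows):
    """rows: (cluster, period, treated, outcome). Method-1-style contrast."""
    treat = [y for _, _, x, y in rows if x]
    ctrl = [y for _, _, x, y in rows if not x]
    return sum(treat) / len(treat) - sum(ctrl) / len(ctrl)

def within_period_effect(rows):
    """Slope of outcome on treatment after centering both within each period
    (a fixed-effects-for-time estimator)."""
    periods = {t for _, t, _, _ in rows}
    num = den = 0.0
    for t in periods:
        cell = [(x, y) for _, tt, x, y in rows if tt == t]
        xbar = sum(x for x, _ in cell) / len(cell)
        ybar = sum(y for _, y in cell) / len(cell)
        num += sum((x - xbar) * (y - ybar) for x, y in cell)
        den += sum((x - xbar) ** 2 for x, _ in cell)
    return num / den

# 3 clusters, 4 periods; cluster c adopts the intervention after period c.
rows = [(c, t, t > c, 1.0 * t + 2.0 * (t > c))
        for c in range(3) for t in range(4)]
print(round(raw_difference(rows), 2),
      round(within_period_effect(rows), 2))  # → 3.67 2.0
```

Because intervention periods are systematically later than control periods in a stepped wedge, any time trend leaks into the unadjusted contrast, which is exactly the behavior the abstract reports for methods 1 and 3.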
Study of a two-stage photobase generator for photolithography in microelectronics.
Turro, Nicholas J; Li, Yongjun; Jockusch, Steffen; Hagiwara, Yuji; Okazaki, Masahiro; Mesch, Ryan A; Schuster, David I; Willson, C Grant
2013-03-01
The investigation of the photochemistry of a two-stage photobase generator (PBG) is described. Absorption of a photon by a latent PBG (1) (first step) produces a PBG (2). Irradiation of 2 in the presence of water produces a base (second step). This two-photon sequence (1 + hν → 2 + hν → base) is an important component in the design of photoresists for pitch division technology, a method that doubles the resolution of projection photolithography for the production of microelectronic chips. In the present system, the excitation of 1 results in a Norrish type II intramolecular hydrogen abstraction to generate a 1,4-biradical that undergoes cleavage to form 2 and acetophenone (Φ ∼ 0.04). In the second step, excitation of 2 causes cleavage of the oxime ester (Φ = 0.56) followed by base generation after reaction with water.
[Effect of two-step sintering method on properties of zirconia ceramic].
Huang, Hui; Wei, Bin; Zhang, Fu-Qiang; Sun, Jing; Gao, Lian
2008-04-01
To study the influence of the two-step sintering method on the sintering behavior, mechanical properties and microstructure of zirconia ceramic. Nano-sized zirconia powder was compacted and divided into two groups, one for the one-step sintering method and the other for the two-step sintering method, and all samples were sintered at different temperatures. The relative density, three-point bending strength, Vickers (HV) hardness, fracture toughness and microstructure of the sintered blocks were investigated. The two-step sintering method influenced the sintering behavior and mechanical properties of the zirconia ceramic. The maximal relative density was 98.49% for the 900 °C/1,450 °C sintering schedule. Mechanical properties differed significantly between one-step and two-step sintering: the bending strength and fracture toughness declined while the hardness increased with two-step sintering. The bending strength, HV hardness and fracture toughness reached maximum values of 1,059.08 ± 75.24 MPa, 1,377.00 ± 16.37 MPa and 5.92 ± 0.37 MPa·m^(1/2), respectively, at the 900 °C/1,450 °C schedule. Microscopy revealed that the porosity and the shapes of the grains were correlated with the strength of the zirconia ceramic. Although the two-step sintering method influences the properties of zirconia, it remains a promising esthetic all-ceramic dental material.
On the Development of Multi-Step Inverse FEM with Shell Model
NASA Astrophysics Data System (ADS)
Huang, Y.; Du, R.
2005-08-01
The inverse or one-step finite element approach is increasingly used in the sheet metal stamping industry to predict strain distribution and the initial blank shape in the preliminary design stage. Based on the existing theory, there are two types of method: one based on the principle of virtual work and the other based on the principle of extreme work. Much research has been conducted to improve the accuracy of simulation results. For example, based on the virtual work principle, Batoz et al. developed a new method using triangular DKT shell elements, in which the bending and unbending effects are considered. Based on the principle of extreme work, Majlessi et al. proposed the multi-step inverse approach with membrane elements and applied it to an axisymmetric part, and Lee et al. presented an axisymmetric shell element model to solve a similar problem. In this paper, a new multi-step inverse method is introduced with no limitation on the workpiece shape. It is a shell element model based on the virtual work principle. The new method is validated by comparison with the commercial software system (PAMSTAMP®). The comparison results indicate that the accuracy is good.
Introduction to Remote Sensing Image Registration
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline
2017-01-01
For many applications, accurate and fast image registration of large amounts of multi-source data is the first necessary step before subsequent processing and integration. Image registration is defined by several steps, and each step can be approached by various methods, all of which present advantages and drawbacks depending on the type of data, the type of application, the a priori information known about the data, and the accuracy that is required. This paper first presents a general overview of remote sensing image registration and then goes over a few specific methods and their applications.
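One classic approach to the matching step is correlation-based: slide one signal over the other and keep the offset that maximizes the cross-correlation. A 1-D sketch on invented data (real registration works in 2-D, often with normalization and subpixel refinement):

```python
def estimate_shift(ref, moved, max_shift):
    """Integer shift of `moved` relative to `ref` maximizing cross-correlation."""
    def score(s):
        pairs = [(ref[i], moved[i + s]) for i in range(len(ref))
                 if 0 <= i + s < len(moved)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_shift, max_shift + 1), key=score)

# A synthetic 1-D "scan line" with a bump, shifted by 3 samples.
ref = [0] * 20
ref[8:11] = [1, 3, 1]
moved = [0] * 20
moved[11:14] = [1, 3, 1]
print(estimate_shift(ref, moved, 5))  # → 3
```

Feature-based alternatives (matching extracted control points instead of raw intensities) trade this exhaustive search for robustness to radiometric differences between sensors.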
Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N
2017-01-25
This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves by considering the different physical meanings of the kinetic data derived from TG and DSC by P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including isoconversional method, combined kinetic analysis, and master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.
A transient response analysis of the space shuttle vehicle during liftoff
NASA Technical Reports Server (NTRS)
Brunty, J. A.
1990-01-01
A transient response method is proposed for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and the mobile launch platform. This allows the equations of motion of the two structures to be solved separately, with the unknown coefficients determined at the end of each step. These coefficients are obtained by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different types of component mode synthesis (CMS) methods: the Lanczos method and the Craig and Bampton CMS method. By varying the cutoff frequency in the Craig and Bampton method, it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both the Lanczos CMS method and the Craig and Bampton CMS method give similar results. A substantial amount of computer time is saved using the Lanczos CMS method instead of the Craig and Bampton method. However, when computing a large number of Lanczos vectors, input/output time increased, raising the overall computer time. The application of several liftoff release mechanisms that can be adapted to the proposed method is discussed.
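The coupling idea can be sketched in generic notation (the symbols below are illustrative, not the paper's): over each time step the interface force is approximated by a truncated power series with unknown coefficients,

```latex
f(t) \approx \sum_{k=0}^{n} c_k \, t^k , \qquad 0 \le t \le \Delta t .
```

Each structure's equation of motion is then integrated separately with this assumed force, giving interface responses x_1(t; c_0, ..., c_n) and x_2(t; c_0, ..., c_n), and the coefficients are determined by enforcing the interface compatibility condition x_1 = x_2 at collocation times within the step.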
Picard-Meyer, Evelyne; Peytavin de Garam, Carine; Schereffer, Jean Luc; Marchal, Clotilde; Robardet, Emmanuelle; Cliquet, Florence
2015-01-01
This study evaluates the performance of five two-step SYBR Green RT-qPCR kits and five one-step SYBR Green qRT-PCR kits using real-time PCR assays. Two real-time thermocyclers with different throughput capacities were used. The performance evaluation criteria included the generation of the standard curve, reaction efficiency, analytical sensitivity, intra- and interassay repeatability, the costs and practicability of the kits, and thermocycling times. We found that the optimised one-step PCR assays had a higher detection sensitivity than the optimised two-step assays regardless of the machine used, while no difference was detected in reaction efficiency, R² values, or intra- and inter-assay reproducibility between the two methods. The limit of detection at the 95% confidence level ranged from 15 to 981 copies/µL for the one-step kits and from 41 to 171 copies/µL for the two-step kits. Of the ten kits tested, the most efficient was the Quantitect SYBR Green qRT-PCR kit, with a limit of detection at the 95% confidence level of 20 and 22 copies/µL on the Rotor gene Q MDx and MX3005P thermocyclers, respectively. The study demonstrated the pivotal influence of the thermocycler on PCR performance for the detection of rabies RNA, as well as that of the master mixes. PMID:25785274
A survey on the geographic scope of textual documents
NASA Astrophysics Data System (ADS)
Monteiro, Bruno R.; Davis, Clodoveu A.; Fonseca, Fred
2016-11-01
Recognizing references to places in text is needed in many applications, such as search engines, location-based social media, and document classification. In this paper we present a survey of methods and techniques for the recognition and identification of places referenced in texts. We discuss concepts and terminology, and propose a classification of the solutions given in the literature. We introduce a definition of the Geographic Scope Resolution (GSR) problem, dividing it into three steps: geoparsing, reference resolution, and grounding references. Solutions to the first two steps are organized according to the method used, and solutions to the third step are organized according to the type of output produced. We found that it is difficult to compare existing solutions directly to one another, because they often create their own benchmarking data, targeted to their own problem.
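The geoparsing step can be illustrated with a toy gazetteer matcher (the place entries and coordinates below are invented for the example and are not drawn from the survey):

```python
# Toy sketch of the geoparsing step (step 1 of GSR): recognize place-name
# candidates in text by matching token n-grams against a gazetteer.
# The gazetteer entries and coordinates here are illustrative only.
GAZETTEER = {
    "new york": (40.71, -74.01),
    "paris": (48.86, 2.35),
    "belo horizonte": (-19.92, -43.94),
}

def geoparse(text, max_ngram=2):
    """Return (place, (lat, lon)) pairs found in the text."""
    tokens = text.lower().replace(",", " ").replace(".", " ").split()
    found = []
    i = 0
    while i < len(tokens):
        # Prefer the longest n-gram match starting at position i.
        for n in range(max_ngram, 0, -1):
            candidate = " ".join(tokens[i:i + n])
            if candidate in GAZETTEER:
                found.append((candidate, GAZETTEER[candidate]))
                i += n
                break
        else:
            i += 1
    return found

print(geoparse("She flew from New York to Paris."))
# → [('new york', (40.71, -74.01)), ('paris', (48.86, 2.35))]
```

A real geoparser would also handle ambiguity (reference resolution) and assign a final scope (grounding), the second and third GSR steps.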
NASA Astrophysics Data System (ADS)
Medvedeva, Maria F.; Doubrovski, Valery A.
2017-03-01
The resolution of the acousto-optical method for blood typing was estimated experimentally using two types of reagents: monoclonal antibodies and standard hemagglutinating sera. A distinctive feature of this work is the application of digital photo image processing by pixel analysis, previously proposed by the authors. The influence on the resolution of the acousto-optical method of the reagent concentrations, of the blood sample to be tested, and of the duration of the ultrasonic action on the biological object was investigated. The experimental conditions that maximize the resolution of the acousto-optical method were found, creating the prerequisites for reliable blood typing. The present paper is a further step in the development of the acousto-optical method for determining human blood groups.
NASA Astrophysics Data System (ADS)
Liu, Changying; Wu, Xinyuan
2017-07-01
In this paper we explore arbitrarily high-order Lagrange collocation-type time-stepping schemes for effectively solving high-dimensional nonlinear Klein-Gordon equations with different boundary conditions. We begin with one-dimensional periodic boundary problems and first formulate an abstract ordinary differential equation (ODE) on a suitable infinite-dimensional function space based on operator spectral theory. We then introduce an operator-variation-of-constants formula which is essential for the derivation of our arbitrarily high-order Lagrange collocation-type time-stepping schemes for the nonlinear abstract ODE. The nonlinear stability and convergence are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix under suitable smoothness assumptions. With regard to two-dimensional Dirichlet or Neumann boundary problems, our new time-stepping schemes coupled with the discrete Fast Sine/Cosine Transform can be applied to simulate two-dimensional nonlinear Klein-Gordon equations effectively. All essential features of the methodology are present in the one-dimensional and two-dimensional cases, although the schemes to be analysed lend themselves equally well to the higher-dimensional case. The numerical simulation is implemented and the numerical results clearly demonstrate the advantage and effectiveness of our new schemes in comparison with existing numerical methods for solving nonlinear Klein-Gordon equations in the literature.
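The operator-variation-of-constants formula referred to above can be sketched in the Gautschi-type form commonly used for abstract wave equations (a generic statement of the standard formula, in our notation rather than the paper's): for u''(t) + 𝒜u(t) = f(u(t)) with Ω = 𝒜^{1/2},

```latex
u(t) = \cos(t\Omega)\, u(0) + \Omega^{-1}\sin(t\Omega)\, u'(0)
     + \int_{0}^{t} \Omega^{-1}\sin\bigl((t-s)\Omega\bigr)\, f\bigl(u(s)\bigr)\, ds .
```

Collocation-type schemes then arise by approximating f(u(s)) inside the integral with a Lagrange interpolant at the collocation nodes.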
NASA Astrophysics Data System (ADS)
Bera, Amrita Mandal; Wargulski, Dan Ralf; Unold, Thomas
2018-04-01
Hybrid organometal perovskites have emerged as promising solar cell materials and have exhibited solar cell efficiencies of more than 20%. Thin films of methylammonium lead iodide (CH3NH3PbI3) perovskite were synthesized by two different methods (one-step and two-step), and their morphological properties were studied by scanning electron microscopy and optical microscope imaging. The morphology of the perovskite layer is one of the most important parameters affecting solar cell efficiency. The film morphology revealed that the two-step method provides better surface coverage than the one-step method; however, the grain sizes were smaller for the two-step method. Films prepared by the two-step method on different substrates revealed that the grain size also depends on the substrate: an increase of the grain size was found from glass substrate to FTO with a TiO2 blocking layer to FTO, without any change in the surface coverage area. The present study shows that improved film quality can be obtained with the two-step method by optimizing the synthesis process.
Dynamical Chaos in the Wisdom-Holman Integrator: Origins and Solutions
NASA Technical Reports Server (NTRS)
Rauch, Kevin P.; Holman, Matthew
1999-01-01
We examine the nonlinear stability of the Wisdom-Holman (WH) symplectic mapping applied to the integration of perturbed, highly eccentric (e ~ 0.9) two-body orbits. We find that the method is unstable and introduces artificial chaos into the computed trajectories for this class of problems, unless the step size chosen is small enough that periapse is always resolved, in which case the method is generically stable. This 'radial orbit instability' persists even for weakly perturbed systems. Using the Stark problem as a fiducial test case, we investigate the dynamical origin of this instability and argue that the numerical chaos results from the overlap of step-size resonances; interestingly, for the Stark problem many of these resonances appear to be absolutely stable. We similarly examine the robustness of several alternative integration methods: a time-regularized version of the WH mapping suggested by Mikkola; the potential-splitting (PS) method of Duncan, Levison, and Lee; and two original methods incorporating approximations based on Stark motion instead of Keplerian motion. The two-fixed-point problem and a related, more general problem are used to conduct a comparative test of the various methods for several types of motion. Among the algorithms tested, the time-transformed WH mapping is clearly the most efficient and stable method of integrating eccentric, nearly Keplerian orbits in the absence of close encounters. For test particles subject to both high eccentricities and very close encounters, we find an enhanced version of the PS method, incorporating time regularization, force-center switching, and an improved kernel function, to be both economical and highly versatile. We conclude that Stark-based methods are of marginal utility in N-body type integrations. Additional implications for the symplectic integration of N-body systems are discussed.
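The periapse-resolution issue can be illustrated with a minimal symplectic integrator. The sketch below is a plain kick-drift-kick leapfrog on an unperturbed e = 0.9 Kepler orbit (GM, the initial state, and the step size are invented for the example); it is not the Wisdom-Holman mapping itself, which drifts along exact Kepler arcs, but it shows the same need to resolve the fast periapse passage:

```python
import math

# Hedged sketch: a symplectic leapfrog (kick-drift-kick) on an unperturbed
# two-body orbit with e = 0.9, illustrating the need to resolve periapse.
# This is NOT the Wisdom-Holman mapping; it is a minimal stand-in.
GM = 1.0

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

def leapfrog(x, y, vx, vy, dt, steps):
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # kick
        x += dt * vx; y += dt * vy                 # drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # kick
    return x, y, vx, vy

def energy(x, y, vx, vy):
    return 0.5 * (vx * vx + vy * vy) - GM / math.hypot(x, y)

# Start at apoapsis of an a = 1, e = 0.9 orbit (r_min = 0.1 at periapse).
a, e = 1.0, 0.9
x0, y0 = a * (1 + e), 0.0
vx0, vy0 = 0.0, math.sqrt(GM * (1 - e) / (a * (1 + e)))
E0 = energy(x0, y0, vx0, vy0)

# dt small enough that the brief periapse passage spans many steps.
xf, yf, vxf, vyf = leapfrog(x0, y0, vx0, vy0, dt=1e-3, steps=6300)
print(abs((energy(xf, yf, vxf, vyf) - E0) / E0))  # stays small when resolved
```

Increasing dt until the periapse passage is no longer resolved degrades the energy behavior sharply, which is the qualitative effect the abstract describes for the WH mapping on eccentric orbits.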
Music Retrieval Based on the Relation between Color Association and Lyrics
NASA Astrophysics Data System (ADS)
Nakamur, Tetsuaki; Utsumi, Akira; Sakamoto, Maki
Various methods for music retrieval have been proposed. Recently, many researchers have been developing methods based on the relationship between music and feelings. In our previous psychological study, we found a significant correlation between colors evoked by songs and colors evoked by lyrics alone, and showed that a music retrieval system using lyrics could be developed. In this paper, we focus on the relationship among music, lyrics, and colors, and propose a music retrieval method that uses colors as queries and analyzes lyrics. This method estimates the colors evoked by songs by analyzing the lyrics of the songs. In the first step of our method, words associated with colors are extracted from the lyrics. We considered two ways of extracting words associated with colors. In the first, the words are extracted based on the results of a psychological experiment. In the second, in addition to the words extracted based on the results of the psychological experiment, words are extracted from corpora using Latent Semantic Analysis. In the second step, the colors evoked by the extracted words are compounded, and the compounded colors are regarded as those evoked by the song. In the last step, the colors given as queries are compared with the colors estimated from the lyrics, and a list of songs is presented based on the similarities. We evaluated the two methods described above and found that the method based on the psychological experiment and corpora performed better than the method based on the psychological experiment alone. As a result, we showed that the method using colors as queries and analyzing lyrics is effective for music retrieval.
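The color-compounding idea can be sketched in miniature (the word-color table below is a hypothetical stand-in for the authors' psychological-experiment data, and Euclidean RGB distance is one simple similarity choice):

```python
# Illustrative sketch of the color-based retrieval idea: estimate a song's
# color as the average of colors associated with words in its lyrics, then
# rank songs by similarity to a query color. The word-color table is
# hypothetical, standing in for the authors' psychological-experiment data.
WORD_COLOR = {                     # word -> (R, G, B), 0-255
    "sea":  (30, 90, 200),
    "sky":  (120, 180, 240),
    "fire": (230, 60, 20),
    "rose": (220, 40, 90),
}

def song_color(lyrics):
    """Average ("compound") the colors of color-associated words found."""
    hits = [WORD_COLOR[w] for w in lyrics.lower().split() if w in WORD_COLOR]
    if not hits:
        return None
    n = len(hits)
    return tuple(sum(c[i] for c in hits) / n for i in range(3))

def rank_songs(query_color, songs):
    """Sort songs by Euclidean distance between query and estimated color."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(query_color, c)) ** 0.5
    scored = [(title, song_color(lyr)) for title, lyr in songs.items()]
    scored = [(t, c) for t, c in scored if c is not None]
    return sorted(scored, key=lambda tc: dist(tc[1]))

songs = {"blue song": "the sea under the sky", "red song": "fire and rose"}
print([t for t, _ in rank_songs((0, 0, 255), songs)])
# → ['blue song', 'red song']
```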
Comparing Multi-Step IMAC and Multi-Step TiO2 Methods for Phosphopeptide Enrichment
Yue, Xiaoshan; Schunter, Alissa; Hummon, Amanda B.
2016-01-01
Phosphopeptide enrichment from complicated peptide mixtures is an essential step in mass spectrometry-based phosphoproteomic studies to reduce sample complexity and ionization suppression effects. Typical methods for enriching phosphopeptides include immobilized metal affinity chromatography (IMAC) or titanium dioxide (TiO2) beads, which have selective affinity for and can interact with phosphopeptides. In this study, the IMAC enrichment method was compared with the TiO2 enrichment method, using a multi-step enrichment strategy from whole cell lysate, to evaluate their abilities to enrich different types of phosphopeptides. The peptide-to-bead ratios were optimized for both IMAC and TiO2 beads. Both IMAC and TiO2 enrichments were performed for three rounds to enable the maximum extraction of phosphopeptides from the whole cell lysates. The phosphopeptides unique to IMAC enrichment, unique to TiO2 enrichment, and identified by both IMAC and TiO2 enrichment were analyzed for their characteristics. IMAC and TiO2 enriched similar amounts of phosphopeptides with comparable enrichment efficiency. However, phosphopeptides unique to IMAC enrichment showed a higher percentage of multi-phosphopeptides, as well as a higher percentage of longer, basic, and hydrophilic phosphopeptides. Also, the IMAC and TiO2 procedures clearly enriched phosphopeptides with different motifs. Finally, further enrichment with two rounds of TiO2 from the supernatant after IMAC enrichment, or with two rounds of IMAC from the supernatant after TiO2 enrichment, does not fully recover the phosphopeptides not identified by the corresponding multi-step enrichment. PMID:26237447
Vail, III, William B.
1993-01-01
Methods of operation of an apparatus having at least two pairs of voltage measurement electrodes vertically disposed in a cased well to measure the resistivity of adjacent geological formations from inside the cased well. For stationary measurements with the apparatus at a fixed vertical depth within the cased well, the invention discloses methods of operation that include a measurement step and subsequent first and second compensation steps, resulting in improved accuracy of measurement. First and second order errors of measurement are identified, and the measurement step and two compensation steps provide means to substantially eliminate their influence on the results. A multiple-frequency apparatus adapted to movement within the well is described which simultaneously provides the measurement and the two compensation steps.
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis of such systems is proposed, exploiting advantages of and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated with the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only to estimate the sensitivities of the (potentially) sensitive parameters that were not screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio of the total number of parameters to the number of sensitive parameters. PMID:26161544
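The two-step screen-then-estimate strategy can be sketched generically (this is an illustrative skeleton, not the authors' stochastic implementation; `model`, `bound`, and the threshold are invented placeholders, with the cheap bound standing in for the paper's Fisher-Information-based bound):

```python
# Hedged sketch of the two-step screening strategy: step 1 screens out
# parameters whose cheap sensitivity bound falls below a threshold;
# step 2 runs expensive finite differences only on the survivors.
def two_step_sensitivity(model, theta, bound, threshold, h=1e-6):
    # Step 1: screen using the cheap bound (FIM-based in the paper).
    candidates = [i for i in range(len(theta)) if bound(i) >= threshold]
    # Step 2: central finite differences for surviving parameters only.
    sens = {}
    for i in candidates:
        up = list(theta); up[i] += h
        dn = list(theta); dn[i] -= h
        sens[i] = (model(up) - model(dn)) / (2 * h)
    return sens

# Toy model: output depends strongly on theta[0], weakly on theta[2],
# and not at all on theta[1]; the "bound" values are pretend estimates.
model = lambda th: 10.0 * th[0] + 0.001 * th[2]
theta = [1.0, 1.0, 1.0]
bound = lambda i: {0: 10.0, 1: 0.0, 2: 0.001}[i]
print(two_step_sensitivity(model, theta, bound, threshold=1e-2))
```

Only parameter 0 survives the screen, so only one finite-difference pair is evaluated; the speedup scales roughly with (total parameters)/(sensitive parameters), as the abstract states.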
Multistep integration formulas for the numerical integration of the satellite problem
NASA Technical Reports Server (NTRS)
Lundberg, J. B.; Tapley, B. D.
1981-01-01
The use of two Class 2 (fixed-mesh, fixed-order, multistep) integration packages of the PECE type for the numerical integration of the second-order, nonlinear, ordinary differential equation of the satellite orbit problem is examined. These two methods are referred to as the general and the second sum formulations. The derivation of the basic equations that characterize each formulation and the role of the basic equations in the PECE algorithm are discussed. Possible starting procedures are examined which may be used to supply the initial set of values required by the fixed-mesh multistep integrators. The results of the general and second sum integrators are compared to the results of various fixed-step and variable-step integrators.
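A minimal PECE multistep sketch, assuming a two-step Adams-Bashforth predictor with an Adams-Moulton corrector on a scalar first-order ODE (a generic illustration of the Predict-Evaluate-Correct-Evaluate pattern, not the paper's Class 2 second-order formulations):

```python
import math

# Hedged sketch of a fixed-mesh PECE step: AB2 predictor, AM2
# (trapezoidal-family) corrector, in Predict-Evaluate-Correct-Evaluate
# order. A one-step RK2 start supplies the back value the multistep
# formula needs, illustrating the "starting procedure" issue.
def pece_ab2_am2(f, t0, y0, h, steps):
    t, y = t0, y0
    f_prev = f(t, y)
    # Starting procedure: one RK2 (midpoint) step.
    y = y + h * f(t + h / 2, y + h / 2 * f_prev)
    t += h
    for _ in range(steps - 1):
        f_curr = f(t, y)                                  # (final Evaluate)
        y_pred = y + h * (1.5 * f_curr - 0.5 * f_prev)    # Predict (AB2)
        f_pred = f(t + h, y_pred)                         # Evaluate
        y = y + h / 2 * (f_curr + f_pred)                 # Correct (AM2)
        f_prev = f_curr
        t += h
    return t, y

# Test problem y' = -y, y(0) = 1; exact solution exp(-t).
t, y = pece_ab2_am2(lambda t, y: -y, 0.0, 1.0, h=0.01, steps=100)
print(y, math.exp(-1.0))  # close agreement at t = 1
```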
Gauss Seidel-type methods for energy states of a multi-component Bose Einstein condensate
NASA Astrophysics Data System (ADS)
Chang, Shu-Ming; Lin, Wen-Wei; Shieh, Shih-Feng
2005-01-01
In this paper, we propose two iterative methods, a Jacobi-type iteration (JI) and a Gauss-Seidel-type iteration (GSI), for the computation of energy states of the time-independent vector Gross-Pitaevskii equation (VGPE), which describes a multi-component Bose-Einstein condensate (BEC). A discretization of the VGPE leads to a nonlinear algebraic eigenvalue problem (NAEP). We prove that the GSI method converges locally and linearly to a solution of the NAEP if and only if the associated minimized energy functional problem has a strictly local minimum. The GSI method can thus be used to compute ground states and positive bound states, as well as the corresponding energies, of a multi-component BEC. Numerical experience shows that GSI converges much faster than JI, converging globally within 10-20 steps.
Cai, Yao; Hu, Huasi; Lu, Shuangying; Jia, Qinggang
2018-05-01
To minimize the size and weight of a vehicle-mounted accelerator-driven D-T neutron source and protect workers from unnecessary irradiation after equipment shutdown, a method was developed to optimize the radiation shielding material for fast neutrons, aiming at compactness, light weight, and low activation. The method employed a genetic algorithm combined with the MCNP and ORIGEN codes. A series of composite shielding material samples was obtained by the method step by step. The volume and weight needed to build a shield (assumed to be a coaxial tapered cylinder) were used to compare the performance of the materials visually and conveniently. The results showed that the optimized materials have excellent performance in comparison with conventional materials. The "MCNP6-ACT" method and the "rigorous two steps" (R2S) method were used to verify the activation grade of the shield irradiated by D-T neutrons. The types of radionuclides, the energy spectrum of the corresponding decay gamma source, and the variation in decay gamma dose rate were also computed.
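The optimization loop can be sketched as a generic genetic algorithm (illustrative only: the `dose` and `weight` functions below are cheap analytic stand-ins for the MCNP/ORIGEN transport and activation calculations the study actually used):

```python
import random

# Generic GA sketch of the shielding optimization loop described above.
# x = [x0, x1] are hypothetical mixing fractions of two shield components.
def dose(x):
    # Hypothetical attenuation model standing in for a transport code.
    return 100.0 * (0.5 ** (10 * x[0])) * (0.7 ** (10 * x[1]))

def weight(x):
    return 5.0 * x[0] + 2.0 * x[1]

def fitness(x, dose_limit=1.0):
    # Minimize weight, with a heavy penalty when the dose limit is exceeded.
    return -(weight(x) + (1000.0 if dose(x) > dose_limit else 0.0))

def ga(pop_size=30, gens=60, seed=2):
    rng = random.Random(seed)
    pop = [[rng.random(), rng.random()] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # crossover
            i = rng.randrange(2)                             # mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.05)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
print(best, dose(best), weight(best))  # feasible, low-weight composition
```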
NASA Astrophysics Data System (ADS)
Eissa, N. A.; Sheta, N. H.; Ahmed, M. A.
1992-04-01
Coal has recently been discovered in the Maghara mine in Northern Sinai, Egypt. Coal samples were collected from different depths and measured by XRD, XRF, and MS in order to characterize this type of coal. It was found that the iron-bearing minerals are mainly pyrite and different sulphates, depending on the depth of the sample. The second part covers the application of desulphurization techniques to Egyptian coal, namely flotation (one-step and two-step), chemical [(HCl + HNO3) and Fe2(SO4)3], and bacterial methods (Chromatium and Chlorobium species). The efficiency of each technique was calculated. A comparative discussion of each desulphurization method is given, from which the bacterial method proved to be the most efficient.
Clustering of Variables for Mixed Data
NASA Astrophysics Data System (ADS)
Saracco, J.; Chavent, M.
2016-05-01
This chapter presents clustering of variables, the aim of which is to group together strongly related variables. The proposed approach works on a mixed data set, i.e. a data set containing both numerical and categorical variables. Two clustering-of-variables algorithms are described: a hierarchical clustering and a k-means-type clustering. A brief description of the PCAmix method (a principal component analysis for mixed data) is provided, since the calculation of the synthetic variables summarizing the obtained clusters of variables is based on this multivariate method. Finally, the R packages ClustOfVar and PCAmixdata are illustrated on real mixed data. The PCAmix and ClustOfVar approaches are first used for dimension reduction (step 1) before applying a standard clustering method in step 2 to obtain groups of individuals.
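A toy sketch of clustering of variables for the numerical case (the ClustOfVar/PCAmix machinery for mixed numerical-categorical data is substantially richer; the dissimilarity 1 - r^2 used here is one common choice):

```python
import math, random

# Toy sketch: variables are agglomerated hierarchically using the
# dissimilarity 1 - r^2, so strongly correlated variables merge first.
# Numerical variables only; mixed data needs PCAmix-style machinery.
def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def cluster_variables(data, k):
    """Single-linkage agglomeration on d(u, v) = 1 - corr(u, v)^2."""
    clusters = [[name] for name in data]
    def dist(c1, c2):
        return min(1 - corr(data[u], data[v]) ** 2 for u in c1 for v in c2)
    while len(clusters) > k:
        pairs = [(dist(clusters[i], clusters[j]), i, j)
                 for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        _, i, j = min(pairs)
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

rng = random.Random(0)
a = [rng.gauss(0, 1) for _ in range(200)]
data = {"a": a,
        "b": [v + rng.gauss(0, 0.1) for v in a],   # strongly tied to "a"
        "c": [rng.gauss(0, 1) for _ in range(200)]}
print(sorted(sorted(c) for c in cluster_variables(data, 2)))
# → [['a', 'b'], ['c']]
```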
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamaguchi, Nobuyoshi; Nakao, Masato; Murakami, Masahide
2008-07-08
For seismic design, ductility-related force modification factors are named the R factor in the Uniform Building Code of the U.S., the q factor in Eurocode 8, and the Ds factor (inverse of R) in the Japanese Building Code. These ductility-related force modification factors for each type of shear element appear in those codes. Some constructions use various types of shear walls that have different ductility, especially for retrofit or re-strengthening. In these cases, engineers struggle to decide the force modification factors of such constructions. To solve this problem, a new method to calculate the lateral strengths of stories for simple shear wall systems is proposed and named the 'Stiffness-Potential Energy Addition Method' in this paper. This method uses two design lateral strengths for each type of shear wall, one in the damage limit state and one in the safety limit state. The lateral strengths of stories in both limit states are calculated from these two design lateral strengths for each type of shear wall. The calculated strengths have the same quality as values obtained by the strength addition method using many steps of load-deformation data of shear walls. A new method to calculate ductility factors is also proposed in this paper. It is based on the new method to calculate the lateral strengths of stories and can solve the problem of obtaining ductility factors of stories with shear walls of different ductility.
Tenebrio beetles use magnetic inclination compass
NASA Astrophysics Data System (ADS)
Vácha, Martin; Drštková, Dana; Půžová, Tereza
2008-08-01
Animals that guide the direction of their locomotion or their migration routes by the lines of the geomagnetic field use either polarity or inclination compasses to determine the field polarity (the north or south direction). Distinguishing between the two compass types is a guideline for estimating the molecular principle of reception and has been achieved for a number of animal groups, with the exception of insects. A standard diagnostic method to distinguish the compass type is based on reversing the vertical component of the geomagnetic field, which leads to opposite reactions in animals with the two different compass types. In the present study, adults of the mealworm beetle Tenebrio molitor were tested by means of a two-step laboratory test of magnetoreception. Beetles initially trained to memorize the magnetic position of a light source preferred this same direction during the subsequent test, using geomagnetic cues only. In the following step, the vertical component was reversed between the training and the test. The beetles significantly turned their preferred direction by 180°. Our results provide the previously unknown finding that insects, represented here by the species T. molitor, use the inclination compass, in contrast to the previously studied arthropod, the spiny lobster.
Rational reduction of periodic propagators for off-period observations.
Blanton, Wyndham B; Logan, John W; Pines, Alexander
2004-02-01
Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.
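The reduction can be sketched abstractly (a generic illustration of the rational-fraction idea, not the paper's NMR-specific algorithm with its pattern-matching and time-shifting optimizations): for a Hamiltonian of period T that is piecewise constant over q slices, the propagator at t = m·T + j·(T/q) factors as P_j·F^m, so observations spaced dt = (p/q)·T each cost one cached multiplication:

```python
# Hedged sketch: P_j is the partial-period propagator over the first j
# slices and F the one-period propagator; caching P_0..P_{q-1} and the
# incrementally updated powers F^m replaces long chains of slice products
# with a single matrix multiply per observation.

def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

IDENT = [[1, 0], [0, 1]]

def brute_force(k, p, q, slices):
    """Direct method: multiply out all k*p slice propagators."""
    u = IDENT
    for s in range(k * p):
        u = matmul(slices[s % q], u)
    return u

def cached(num_obs, p, q, slices):
    """One cached multiply per observation using U(t) = P_j · F^m."""
    P = [IDENT]
    for s in range(q):                      # partial-period propagators
        P.append(matmul(slices[s], P[-1]))
    F, Fm, m_done, out = P[q], IDENT, 0, []
    for k in range(num_obs):
        m, j = divmod(k * p, q)
        while m_done < m:                   # advance F^m incrementally
            Fm, m_done = matmul(F, Fm), m_done + 1
        out.append(matmul(P[j], Fm))
    return out

# Non-commuting toy slice propagators: q = 3 slices per period, p = 2,
# i.e. the observation step is 2/3 of the Hamiltonian period.
slices = [[[0, 1], [1, 0]], [[1, 1], [0, 1]], [[2, 0], [0, 1]]]
results = cached(8, 2, 3, slices)
print(all(results[k] == brute_force(k, 2, 3, slices) for k in range(8)))
# → True
```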
NASA Astrophysics Data System (ADS)
Chen, Y.; Ho, C.; Chang, L.
2011-12-01
In recent decades, climate change caused by global warming has increased the occurrence frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely known tools; they project possible weather conditions under the CO2 emission scenarios defined by the IPCC. Because the study area of GCMs is the entire earth, GCM grid sizes are much larger than the basin scale. To bridge this gap, a statistical downscaling technique can transform regional-scale weather factors into basin-scale precipitation. Statistical downscaling techniques can be divided into three categories: transfer functions, weather generators, and weather typing. The first two categories describe the relationships between weather factors and precipitation based on deterministic algorithms (such as linear or nonlinear regression and ANNs) and stochastic approaches (such as Markov chain theory and statistical distributions), respectively. In weather typing, the method clusters weather factors, which are high-dimensional and continuous variables, into weather types, which are a limited number of discrete states. In this study, the proposed downscaling model integrates weather typing, using the K-means clustering algorithm, and a weather generator, using kernel density estimation. The study area is the Shihmen basin in northern Taiwan. The research process contains two steps: a calibration step and a synthesis step. Three sub-steps make up the calibration step. First, weather factors (such as pressure, humidity, and wind speed) obtained from NCEP and precipitation observed at rainfall stations were collected for downscaling. Second, K-means clustering grouped the weather factors into four weather types. Third, the Markov chain transition matrices and the conditional probability density function (PDF) of precipitation, approximated by kernel density estimation, are calculated for each weather type. In the synthesis step, 100 patterns of synthetic data are generated. First, the weather type of the n-th day is determined from the K-means clustering results; the associated transition matrix and PDF of that weather type are also determined for use in the next sub-step. Second, the precipitation condition, dry or wet, is synthesized based on the transition matrix. If the synthesized condition is dry, the precipitation is zero; otherwise, the quantity is determined in the third sub-step. Third, the quantity of synthesized precipitation is drawn as a random variable from the PDF defined above. Synthesis efficiency is evaluated by comparing the monthly mean curves and monthly standard deviation curves of the historical precipitation data with those of the 100 patterns of synthetic data.
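The synthesis sub-steps can be sketched in miniature (a toy stand-in, not the calibrated Shihmen-basin model: occurrence is drawn here from a per-type wet probability rather than a full Markov transition matrix, and KDE sampling is approximated by resampling a historical wet amount plus Gaussian jitter):

```python
import random

# Hedged sketch of the synthesis step: per weather type, sample dry/wet
# occurrence, then draw wet-day amounts by kernel-density-style sampling
# (historical wet amount + Gaussian noise, clipped at zero).
def fit_wet_probs(types, precip):
    """P(wet | weather type), estimated from historical data."""
    wet, tot = {}, {}
    for w, p in zip(types, precip):
        tot[w] = tot.get(w, 0) + 1
        wet[w] = wet.get(w, 0) + (1 if p > 0 else 0)
    return {w: wet[w] / tot[w] for w in tot}

def synthesize(types, precip, future_types, bandwidth=0.5, seed=1):
    rng = random.Random(seed)
    p_wet = fit_wet_probs(types, precip)
    wet_amounts = {}
    for w, p in zip(types, precip):
        if p > 0:
            wet_amounts.setdefault(w, []).append(p)
    out = []
    for w in future_types:
        if rng.random() < p_wet.get(w, 0) and wet_amounts.get(w):
            # KDE-style sampling: resample, jitter, clip at zero.
            amt = rng.choice(wet_amounts[w]) + rng.gauss(0, bandwidth)
            out.append(max(amt, 0.0))
        else:
            out.append(0.0)
    return out

hist_types = [0, 0, 1, 1, 0, 1, 0, 1]
hist_precip = [0.0, 2.0, 5.0, 0.0, 1.5, 7.0, 0.0, 6.0]
print(synthesize(hist_types, hist_precip, [0, 1, 1, 0, 1]))
```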
NASA Astrophysics Data System (ADS)
Petruš, Ondrej; Oriňak, Andrej; Oriňaková, Renáta; Orságová Králová, Zuzana; Múdra, Erika; Kupková, Miriam; Kovaľ, Karol
2017-11-01
Two types of metallised nanocavities (single and hybrid) were fabricated by colloidal lithography followed by electrochemical deposition of a Ni layer and subsequently an Ag layer. The introductory Ni deposition step initiates more homogeneous decoration of the nanocavities with Ag nanoparticles; silver nanocavity decoration was thus performed with a lower nucleation rate and increased Ag nanoparticle homogeneity. By this two-step Ni and Ag deposition through polystyrene nanospheres (100, 300, 500, 700, 900 nm), various Ag surfaces were obtained. Ni layer formation in the first deposition step enabled more precise control of the Ag film deposition and thus of the final Ag surface morphology. The prepared substrates were tested as active surfaces in SERS applications. The best SERS signal enhancement was observed for 500 nm Ag nanocavities with a normalised Ni layer thickness of ∼0.5. The enhancement factor was established at 1.078 × 10¹⁰; time stability was determined over 13 weeks; the charge distribution at the nanocavity Ag surfaces as well as the reflection spectra were calculated by the FDTD method. The newly prepared nanocavity surface can be applied in SERS analysis.
Tsujimoto, Akimasa; Barkmeier, Wayne W; Hosoya, Yumiko; Nojiri, Kie; Nagura, Yuko; Takamizawa, Toshiki; Latta, Mark A; Miyazaki, Masashi
2017-10-01
To comparatively evaluate universal adhesives and two-step self-etch adhesives for enamel bond fatigue durability in self-etch mode. Three universal adhesives (Clearfil Universal Bond; G-Premio Bond; Scotchbond Universal Adhesive) and three two-step self-etch adhesives (Clearfil SE Bond; Clearfil SE Bond 2; OptiBond XTR) were used. The initial shear bond strength and shear fatigue strength of each adhesive to enamel in self-etch mode were determined. The initial shear bond strengths of the universal adhesives to enamel in self-etch mode were significantly lower than those of the two-step self-etch adhesives, and within each adhesive category the initial shear bond strengths were not influenced by the type of adhesive. The shear fatigue strengths of the universal adhesives to enamel in self-etch mode were significantly lower than those of Clearfil SE Bond and Clearfil SE Bond 2, but similar to that of OptiBond XTR. Unlike the two-step self-etch adhesives, the initial shear bond strength and shear fatigue strength of the universal adhesives to enamel in self-etch mode were not influenced by the type of adhesive. This laboratory study showed that the enamel bond fatigue durability of the universal adhesives was lower than that of Clearfil SE Bond and Clearfil SE Bond 2, similar to that of OptiBond XTR, and, unlike that of the two-step self-etch adhesives, was not influenced by the type of adhesive.
Sharma, Manuj; Petersen, Irene; Nazareth, Irwin; Coton, Sonia J
2016-01-01
Background Research into diabetes mellitus (DM) often requires a reproducible method for identifying and distinguishing individuals with type 1 DM (T1DM) and type 2 DM (T2DM). Objectives To develop a method to identify individuals with T1DM and T2DM using UK primary care electronic health records. Methods Using data from The Health Improvement Network primary care database, we developed a two-step algorithm. The first algorithm step identified individuals with potential T1DM or T2DM based on diagnostic records, treatment, and clinical test results. We excluded individuals with records for rarer DM subtypes only. For individuals to be considered diabetic, they needed to have at least two records indicative of DM; one of which was required to be a diagnostic record. We then classified individuals with T1DM and T2DM using the second algorithm step. A combination of diagnostic codes, medication prescribed, age at diagnosis, and whether the case was incident or prevalent were used in this process. We internally validated this classification algorithm through comparison against an independent clinical examination of The Health Improvement Network electronic health records for a random sample of 500 DM individuals. Results Out of 9,161,866 individuals aged 0–99 years from 2000 to 2014, we classified 37,693 individuals with T1DM and 418,433 with T2DM, while 1,792 individuals remained unclassified. A small proportion were classified with some uncertainty (1,155 [3.1%] of all individuals with T1DM and 6,139 [1.5%] with T2DM) due to unclear health records. During validation, manual assignment of DM type based on clinical assessment of the entire electronic record and algorithmic assignment led to equivalent classification in all instances. Conclusion The majority of individuals with T1DM and T2DM can be readily identified from UK primary care electronic health records. Our approach can be adapted for use in other health care settings. PMID:27785102
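A minimal sketch of the two-step classification logic is given below; the record structure, code lists, medication rule, and age cutoff are hypothetical illustrations, not the published algorithm's actual criteria.

```python
# Hypothetical code lists standing in for the real diagnostic code sets.
T1DM_CODES = {"t1dm_dx"}
T2DM_CODES = {"t2dm_dx"}
RARE_DM_CODES = {"mody_dx", "gestational_dx"}

def step1_is_diabetic(person):
    """Step 1: require at least two DM records, one of them diagnostic,
    and exclude individuals with records for rarer DM subtypes only."""
    dm = [r for r in person["records"]
          if r["kind"] in ("diagnosis", "treatment", "test")]
    diagnostic = [r for r in dm if r["kind"] == "diagnosis"]
    only_rare = (not diagnostic or
                 all(r["code"] in RARE_DM_CODES for r in diagnostic))
    return len(dm) >= 2 and not only_rare

def step2_classify(person):
    """Step 2: combine diagnostic codes, medication, and age at diagnosis."""
    codes = {r["code"] for r in person["records"]}
    if codes & T1DM_CODES and not codes & T2DM_CODES:
        return "T1DM"
    if codes & T2DM_CODES and not codes & T1DM_CODES:
        return "T2DM"
    # Ambiguous coding: fall back on insulin use and early age at diagnosis
    # (hypothetical rule and cutoff, for illustration only).
    if person.get("insulin_from_diagnosis") and person.get("age_at_diagnosis", 99) < 35:
        return "T1DM"
    return "unclassified"

def classify(person):
    if not step1_is_diabetic(person):
        return "not_diabetic"
    return step2_classify(person)

person = {"records": [{"kind": "diagnosis", "code": "t1dm_dx"},
                      {"kind": "treatment", "code": "insulin_rx"}],
          "age_at_diagnosis": 12}
label = classify(person)
```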
Homojunction silicon solar cells doping by ion implantation
NASA Astrophysics Data System (ADS)
Milési, Frédéric; Coig, Marianne; Lerat, Jean-François; Desrues, Thibaut; Le Perchec, Jérôme; Lanterne, Adeline; Lachal, Laurent; Mazen, Frédéric
2017-10-01
Production costs and energy efficiency are the main priorities for the photovoltaic (PV) industry (COP21 conclusions). To lower costs and increase efficiency, we propose to reduce the number of processing steps involved in the manufacture of N-type Passivated Emitter Rear Totally Diffused (PERT) silicon solar cells. Replacing the conventional thermal diffusion doping steps by ion implantation followed by thermal annealing reduces the number of steps from 7 to 3 while maintaining similar efficiency. This alternative approach was investigated in the present work. Beamline and plasma immersion ion implantation (BLII and PIII) methods were used to introduce n-type (phosphorus) and p-type (boron) dopants into the Si substrate. With higher throughput and lower costs, PIII is a better candidate for the photovoltaic industry than BLII. However, the optimization of the plasma conditions is demanding and more complex than for the beamline approach. Subsequent annealing was performed on selected samples to activate the dopants on both sides of the solar cell. Two annealing methods were investigated: soak and spike thermal annealing. The best performing solar cells, showing a PV efficiency of about 20%, were obtained using spike annealing with adapted ion implantation conditions.
Two dimensional fully nonlinear numerical wave tank based on the BEM
NASA Astrophysics Data System (ADS)
Sun, Zhe; Pang, Yongjie; Li, Hongwei
2012-12-01
The development of a two dimensional numerical wave tank (NWT) with a rocker or piston type wavemaker based on the high order boundary element method (BEM) and the mixed Eulerian-Lagrangian (MEL) approach is examined. The Cauchy principal value (CPV) integral is calculated by a special Gauss type quadrature and a change of variable. In addition, the explicit truncated Taylor expansion formula is employed in the time-stepping process. A modified double-node method is adopted to tackle the corner problem, and a damping zone technique is used to absorb the propagating free surface wave at the end of the tank. A variety of waves are generated by the NWT, for example monochromatic, solitary, and irregular waves. The results confirm that the NWT model is efficient and stable.
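CPV integrals like the one mentioned above are commonly regularized by singularity subtraction before a standard Gauss quadrature is applied; the sketch below illustrates that generic trick on the interval [-1, 1], not the paper's specific quadrature scheme.

```python
import math
import numpy as np

def pv_integral(f, c, n=32):
    """Cauchy principal value of PV int_{-1}^{1} f(x)/(x - c) dx for |c| < 1.
    Singularity subtraction: (f(x) - f(c))/(x - c) is regular, so it can be
    handled by ordinary Gauss-Legendre quadrature, while the extracted
    singular part integrates analytically to a logarithm."""
    x, w = np.polynomial.legendre.leggauss(n)   # Gauss-Legendre nodes/weights
    fc = f(c)
    with np.errstate(divide="ignore", invalid="ignore"):
        # Nodes coinciding with c (generically none) are simply skipped.
        g = np.where(np.isclose(x, c), 0.0, (f(x) - fc) / (x - c))
    return float(np.sum(w * g) + fc * math.log((1.0 - c) / (1.0 + c)))
```

For f(x) = x with c = 0 the regularized integrand is identically 1, so the principal value is exactly 2; for f = 1 only the logarithmic term survives.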
Park, Jae-Min; Jang, Se Jin; Lee, Sang-Ick; Lee, Won-Jun
2018-03-14
We designed cyclosilazane-type silicon precursors and proposed a three-step plasma-enhanced atomic layer deposition (PEALD) process to prepare silicon nitride films with high quality and excellent step coverage. The cyclosilazane-type precursor, 1,3-di-isopropylamino-2,4-dimethylcyclosilazane (CSN-2), has a closed ring structure for good thermal stability and high reactivity. CSN-2 showed thermal stability up to 450 °C and a sufficient vapor pressure of 4 Torr at 60 °C. The energy for the chemisorption of CSN-2 on the undercoordinated silicon nitride surface, as calculated by the density functional theory method, was -7.38 eV. The PEALD process window was between 200 and 500 °C, with a growth rate of 0.43 Å/cycle. The best film quality was obtained at 500 °C, with hydrogen impurity of ∼7 atom %, oxygen impurity less than 2 atom %, low wet etching rate, and excellent step coverage of ∼95%. At 300 °C and lower temperatures, the wet etching rate was high, especially at the lower sidewall of the trench pattern. We introduced the three-step PEALD process to improve the film quality and the step coverage on the lower sidewall. The sequence of the three-step PEALD process consists of the CSN-2 feeding step, the NH3/N2 plasma step, and the N2 plasma step. The H radicals in the NH3/N2 plasma efficiently remove the ligands from the precursor, and the N2 plasma after the NH3 plasma removes the surface hydrogen atoms to activate the adsorption of the precursor. The films deposited at 300 °C using the novel precursor and the three-step PEALD process showed a significantly improved step coverage of ∼95% and an excellent wet etching resistance at the lower sidewall, with a wet etching rate only twice as high as that of the blanket film prepared by low-pressure chemical vapor deposition.
fMRI capture of auditory hallucinations: Validation of the two-steps method.
Leroy, Arnaud; Foucher, Jack R; Pins, Delphine; Delmaire, Christine; Thomas, Pierre; Roser, Mathilde M; Lefebvre, Stéphanie; Amad, Ali; Fovet, Thomas; Jaafari, Nemat; Jardri, Renaud
2017-10-01
Our purpose was to validate a reliable method to capture brain activity concomitant with hallucinatory events, which constitute frequent and disabling experiences in schizophrenia. Capturing hallucinations using functional magnetic resonance imaging (fMRI) remains very challenging. We previously developed a method based on a two-steps strategy including (1) multivariate data-driven analysis of per-hallucinatory fMRI recording and (2) selection of the components of interest based on a post-fMRI interview. However, two tests still need to be conducted to rule out critical pitfalls of conventional fMRI capture methods before this two-steps strategy can be adopted in hallucination research: replication of these findings on an independent sample and assessment of the reliability of the hallucination-related patterns at the subject level. To do so, we recruited a sample of 45 schizophrenia patients suffering from frequent hallucinations, 20 schizophrenia patients without hallucinations and 20 matched healthy volunteers; all participants underwent four different experiments. The main findings are (1) high accuracy in reporting unexpected sensory stimuli in an MRI setting; (2) good detection concordance between hypothesis-driven and data-driven analysis methods (as used in the two-steps strategy) when controlled unexpected sensory stimuli are presented; (3) good agreement of the two-steps method with the online button-press approach to capture hallucinatory events; (4) high spatial consistency of hallucinatory-related networks detected using the two-steps method on two independent samples. By validating the two-steps method, we advance toward the possible transfer of such technology to new image-based therapies for hallucinations. Hum Brain Mapp 38:4966-4979, 2017. © 2017 Wiley Periodicals, Inc.
Isolation and characterization of antimicrobial food components.
Papetti, Adele
2012-04-01
There is an evident growing interest nowadays in natural antimicrobial compounds isolated from food matrices. Depending on the type of matrix, different isolation and purification steps are needed, and because these active compounds belong to different chemical classes, different chromatographic and electrophoretic methods coupled with various detectors (most commonly the diode array detector and the mass spectrometer) must also be employed. This review covers recent advances in the fundamental understanding of sample preparation methods as well as of the analytical tools useful for the complete characterization of bioactive food compounds. The most commonly used methods for the extraction of natural antimicrobial compounds are conventional liquid-liquid or solid-liquid extraction and modern techniques such as pressurized liquid extraction, microwave-assisted extraction, ultrasound-assisted extraction, solid-phase micro-extraction, supercritical fluid extraction, and matrix solid-phase dispersion. The complete characterization of the compounds is achieved using both monodimensional chromatographic processes (LC, nano-LC, GC, and CE coupled with different types of detectors) and, more recently, comprehensive two-dimensional systems (LC×LC and GC×GC). Copyright © 2011 Elsevier Ltd. All rights reserved.
A modular modulation method for achieving increases in metabolite production.
Acerenza, Luis; Monzon, Pablo; Ortega, Fernando
2015-01-01
Increasing the production of overproducing strains represents a great challenge. Here, we develop a modular modulation method to determine the key steps to manipulate genetically in order to increase metabolite production. The method consists of three steps: (i) modularization of the metabolic network into two modules connected by linking metabolites, (ii) change in the activity of the modules using auxiliary rates producing or consuming the linking metabolites in appropriate proportions and (iii) determination of the key modules and steps to increase production. The mathematical formulation of the method in matrix form shows that it may be applied to metabolic networks of any structure and size, with reactions showing any kind of rate law. The results are valid for any type of conservation relationship in the metabolite concentrations or interaction between modules. The activity of the modules may, in principle, be changed by any large factor. The method may be applied recursively or combined with other methods devised to perform fine searches in smaller regions. In practice, it is implemented by integrating into the producer strain heterologous reactions or synthetic pathways producing or consuming the linking metabolites. The new procedure may contribute to making metabolic engineering a more systematic practice. © 2015 American Institute of Chemical Engineers.
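The modularization idea in step (i) and the auxiliary rates in step (ii) can be illustrated on a toy network; the three-reaction pathway, module split, and flux values below are invented for the example and are not from the paper.

```python
import numpy as np

# Toy linear pathway:  X -> A -> B -> P  (X and P external).
# Internal metabolites (rows): A, B; reactions (columns): v1, v2, v3.
N = np.array([[1, -1,  0],    # A: produced by v1, consumed by v2
              [0,  1, -1]])   # B: produced by v2, consumed by v3

# Modularization: module 1 = {v1, v2}, module 2 = {v3}; B is the linking
# metabolite. A reference steady state satisfies N v = 0.
v = np.array([2.0, 2.0, 2.0])
assert np.allclose(N @ v, 0.0)

# An auxiliary rate consuming the linking metabolite B mimics increased
# module-2 activity: to stay balanced, the network must over-produce B by
# exactly `aux`, which the up-modulated module 1 supplies.
aux = 1.0
v_mod = np.array([3.0, 3.0, 2.0])            # module-1 fluxes scaled up
assert np.allclose(N @ v_mod, [0.0, aux])    # net B production equals aux
```

Comparing the production flux before and after such modulations is what identifies the key module in step (iii).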
Roch, Samuel; Brinker, Alexander
2017-04-18
The rising evidence of microplastic pollution impacts on aquatic organisms in both marine and freshwater ecosystems highlights a pressing need for adequate and comparable detection methods. Available tissue digestion protocols are time-consuming (>10 h) and/or require several procedural steps, during which materials can be lost and contaminants introduced. This novel approach comprises an accelerated digestion step using sodium hydroxide and nitric acid in combination to digest all organic material within 1 h plus an additional separation step using sodium iodide which can be used to reduce mineral residues in samples where necessary. This method yielded a microplastic recovery rate of ≥95%, and all tested polymer types were recovered with only minor changes in weight, size, and color with the exception of polyamide. The method was also shown to be effective on field samples from two benthic freshwater fish species, revealing a microplastic burden comparable to that indicated in the literature. As a consequence, the present method saves time, minimizes the loss of material and the risk of contamination, and facilitates the identification of plastic particles and fibers, thus providing an efficient method to detect and quantify microplastics in the gastrointestinal tract of fishes.
Image design and replication for image-plane disk-type multiplex holograms
NASA Astrophysics Data System (ADS)
Chen, Chih-Hung; Cheng, Yih-Shyang
2017-09-01
The fabrication methods and parameter design for both real-image generation and virtual-image display in image-plane disk-type multiplex holography are introduced in this paper. A theoretical model of a disk-type hologram is also presented and is then used in our two-step holographic processes, including the production of a non-image-plane master hologram and optical replication using a single-beam copying system for the production of duplicated holograms. Experimental results are also presented to verify the possibility of mass production using the one-shot holographic display technology described in this study.
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Yankov, Artem; Collins, Benjamin; Klein, Markus; ...
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
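The stochastic sampling idea behind the XSUSA-style approach can be sketched with a toy model: perturb the input cross sections according to their covariance, rerun the model, and take the spread of the outputs as the propagated uncertainty. The one-line "core simulator", nominal values, and (diagonal) uncertainties below are hypothetical stand-ins.

```python
import random
import statistics

# Stand-in "core simulator": a toy infinite-medium multiplication factor.
def k_inf(nu_sigma_f, sigma_a):
    return nu_sigma_f / sigma_a

NU_SIGMA_F = 0.07     # nominal nu*Sigma_f (hypothetical)
SIGMA_A = 0.05        # nominal Sigma_a (hypothetical)
REL_STD = 0.02        # 2% relative uncertainty, uncorrelated (hypothetical)

# Stochastic sampling: perturb inputs, rerun, and collect the outputs.
rng = random.Random(0)
samples = [
    k_inf(NU_SIGMA_F * (1.0 + rng.gauss(0.0, REL_STD)),
          SIGMA_A * (1.0 + rng.gauss(0.0, REL_STD)))
    for _ in range(2000)
]
mean_k = statistics.fmean(samples)
std_k = statistics.stdev(samples)   # propagated uncertainty in k
```

With 2% uncertainty on each input, first-order error propagation for the ratio predicts a relative spread of roughly sqrt(2) x 2%, which the sampled standard deviation reproduces.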
Rapid detection of the CYP2A6*12 hybrid allele by Pyrosequencing technology.
Koontz, Deborah A; Huckins, Jacqueline J; Spencer, Antonina; Gallagher, Margaret L
2009-08-24
Identification of CYP2A6 alleles associated with reduced enzyme activity is important in the study of inter-individual differences in drug metabolism. CYP2A6*12 is a hybrid allele that results from unequal crossover between CYP2A6 and CYP2A7 genes. The 5' regulatory region and exons 1-2 are derived from CYP2A7, and exons 3-9 are derived from CYP2A6. Conventional methods for detection of CYP2A6*12 consist of two-step PCR protocols that are laborious and unsuitable for high-throughput genotyping. We developed a rapid and accurate method to detect the CYP2A6*12 allele by Pyrosequencing technology. A single set of PCR primers was designed to specifically amplify both the CYP2A6*1 wild-type allele and the CYP2A6*12 hybrid allele. An internal Pyrosequencing primer was used to generate allele-specific sequence information, which detected homozygous wild-type, heterozygous hybrid, and homozygous hybrid alleles. We first validated the assay on 104 DNA samples that were also genotyped by conventional two-step PCR and by cycle sequencing. CYP2A6*12 allele frequencies were then determined using the Pyrosequencing assay on 181 multi-ethnic DNA samples from subjects of African American, European Caucasian, Pacific Rim, and Hispanic descent. Finally, we streamlined the Pyrosequencing assay by integrating liquid handling robotics into the workflow. Pyrosequencing results demonstrated 100% concordance with conventional two-step PCR and cycle sequencing methods. Allele frequency data showed slightly higher prevalence of the CYP2A6*12 allele in European Caucasians and Hispanics. This Pyrosequencing assay proved to be a simple, rapid, and accurate alternative to conventional methods, which can be easily adapted to the needs of higher-throughput studies.
NASA Technical Reports Server (NTRS)
Chan, Daniel C.; Darian, Armen; Sindir, Munir
1992-01-01
We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one has the luxury of taking a large temporal integration step at the expense of higher memory requirements and a larger operation count per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low; however, the restriction on the size of the time marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them to compute a two-dimensional driven cavity flow at Reynolds numbers of 100 and 1000. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change in the dependent variables over two consecutive time steps had fallen below 10^-5.
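The convergence criterion used here (L2-norm of the change between consecutive time steps falling below 10^-5) can be sketched as follows, with a trivial damped-relaxation iteration standing in for the flow solvers themselves.

```python
import math

def l2_norm_of_change(u_new, u_old):
    """L2-norm of the change in the unknowns over one time step."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u_new, u_old)))

def march_to_steady_state(u, step, tol=1e-5, max_steps=10_000):
    """March in pseudo-time until the per-step change drops below tol."""
    for n in range(1, max_steps + 1):
        u_new = step(u)
        if l2_norm_of_change(u_new, u) < tol:
            return u_new, n
        u = u_new
    raise RuntimeError("did not converge")

# Stand-in "time step": relax each unknown halfway toward the value 1.0.
relax = lambda u: [x + 0.5 * (1.0 - x) for x in u]
u_final, n_steps = march_to_steady_state([0.0] * 4, relax)
```

For this stand-in, the per-step change halves every iteration, so the loop terminates after a couple of dozen steps with the unknowns very close to the fixed point.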
NASA Astrophysics Data System (ADS)
Le Hardy, D.; Favennec, Y.; Rousseau, B.
2016-08-01
The 2D radiative transfer equation coupled with specular reflection boundary conditions is solved using finite element schemes. Both Discontinuous Galerkin and Streamline-Upwind Petrov-Galerkin variational formulations are fully developed. These two schemes are validated step-by-step for all involved operators (transport, scattering, reflection) using analytical formulations. Numerical comparisons of the two schemes, in terms of convergence rate, reveal that the quadratic SUPG scheme proves efficient for solving such problems. This comparison constitutes the main focus of the paper. Moreover, the solution process is accelerated using block SOR-type iterative methods, for which the optimal parameter is determined in a very cheap way.
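The SOR-type acceleration can be illustrated with plain pointwise SOR on a small symmetric positive-definite system; the paper's block variant and its cheap optimal-parameter determination are not reproduced here, and the relaxation parameter below is simply chosen by hand.

```python
import numpy as np

def sor_solve(A, b, omega, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b: each sweep updates the
    unknowns in place, blending the Gauss-Seidel update with the old
    value through the relaxation parameter omega (0 < omega < 2)."""
    n = len(b)
    x = np.zeros(n)
    for it in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            return x, it
    return x, max_iter

# Test system: 1D Laplacian (tridiagonal -1, 2, -1), right-hand side of ones.
A = np.diag([2.0] * 5) + np.diag([-1.0] * 4, 1) + np.diag([-1.0] * 4, -1)
b = np.ones(5)
x, iters = sor_solve(A, b, omega=1.3)
```

Over-relaxation (omega > 1) converges in markedly fewer sweeps than Gauss-Seidel (omega = 1) on this system; scanning omega experimentally is one crude way to locate the optimum.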
Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew
2005-05-03
A new class of surface-modified particles and a multi-step Michael-type addition surface modification process for their preparation are provided. The multi-step Michael-type addition surface modification process involves two or more reactions to compatibilize particles with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through reactive organic linking groups. Specifically, these reactive groups are activated carbon-carbon pi bonds and carbon and non-carbon nucleophiles that react via Michael or Michael-type additions.
Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.
1995-01-01
The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical-time subiterations are subject to time-step limitations in practice that are removed by pseudo-time subiterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.
Assessing Student Behaviors and Motivation for Actively Learning Biology
NASA Astrophysics Data System (ADS)
Moore, Michael Edward
Vision and Change states that one of the major changes in the way we design biology courses should be a switch from teacher-centered to student-centered learning, and identifies active learning as a recommended method. Studies show performance benefits for students taking courses that use active learning. What is unknown is why active learning is such an effective instructional tool, and what the limits of this instructional method's ability to influence performance are. This dissertation builds a case in three steps for why active learning is an effective instructional tool. In step one, I assessed the influence of different types of active learning (clickers, group activities, and whole-class discussions) on student engagement behavior in one semester of two different introductory biology courses and found that active learning positively influenced student engagement behavior significantly more than lecture. In step two, I examined over four semesters whether student engagement behavior was a predictor of performance and found that participation (engagement behavior) in the online (video watching) and in-class (clicker participation) course activities I measured was a significant predictor of performance. In step three, I assessed whether certain active learning activities satisfied the psychological needs that lead to students' intrinsic motivation to participate in those activities, compared over two semesters and across two different institutions of higher learning. Findings from this last step show that students' perceptions of autonomy, competency, and relatedness in doing various types of active learning are significantly higher than for lecture and consistent across two institutions of higher learning. Lastly, I tie everything together, discuss implications of the research, and address future directions for research on biology student motivation and behavior.
"2sDR": Process Development of a Sustainable Way to Recycle Steel Mill Dusts in the 21st Century
NASA Astrophysics Data System (ADS)
Rösler, Gernot; Pichler, Christoph; Antrekowitsch, Jürgen; Wegscheider, Stefan
2014-09-01
Significant amounts of electric arc furnace dust originating from steel production are recycled every year by the Waelz process, despite the fact that this type of process has several disadvantages. One alternative is the recovery of very high-quality ZnO as well as iron and even chromium in the two-step dust recycling ("2sDR") process, which was invented for the treatment of special wastes and the recovery of heavy metal-containing residues. The big advantage of this process is that various types of residues, especially dusts, can be treated in an oxidizing first step for cleaning, with a subsequent reducing step for the metal recovery. After the treatment, three different fractions (dust, slag, and an iron alloy) can be used without any limitations. This study focuses on the development of the process along with some thermodynamic considerations. Moreover, a final overview of mass balances from an experiment performed in a 100-kg top blowing rotary converter, together with further developments, is provided.
NASA Astrophysics Data System (ADS)
Zhao, Pengzhi
The magnetic method is a common geophysical technique used to explore for kimberlites. The analysis and interpretation of measured magnetic data provides information on the magnetic and geometric properties of potential kimberlite pipes. A crucial parameter in kimberlite magnetic interpretation is the remanent magnetization, which dominates the classification of kimberlite. However, the measured magnetic data is the total field, affected by both the remanent magnetization and the susceptibility. The presence of remanent magnetization can pose severe challenges to the quantitative interpretation of magnetic data by skewing or laterally shifting magnetic anomalies relative to the subsurface source (Haney and Li, 2002). Therefore, identification of remanence effects and determination of remanent magnetization are important in magnetic data interpretation. This project presents a new method to determine the magnetic and geometric properties of kimberlite pipes in the presence of strong remanent magnetization. The method consists of two steps. The first step estimates the total magnetization and the geometric properties of the magnetic anomaly. The second step separates the remanent magnetization from the total magnetization. In the first step, a joint parametric inversion of total-field magnetic data and its analytic signal (derived from the survey data by a Fourier transform method) is used. The algorithm of the joint inversion is based on the Gauss-Newton method, and it is more stable and more accurate than the separate inversion method. It has been tested with synthetic data and applied to interpret field data from Lac de Gras, Northwest Territories, Canada. The results of the synthetic examples and the field data applications show that the joint inversion recovers the total magnetization and geometric properties of the magnetic anomaly with a good data fit and stable convergence.
In the second step, the remanent magnetization is separated from the total magnetization by using a determined susceptibility. The susceptibility value is estimated from frequency domain electromagnetic data. The inversion is performed with EM1DFM, a code developed at the University of British Columbia that constructs one of four types of 1D model from any type of geophysical frequency domain loop-loop data, using one of four variations of the inversion algorithm. The results show that the susceptibility of the magnetic body is recovered even if the depth and thickness are not well estimated. This two-step process provides a new way to determine the magnetic and geometric properties of kimberlite pipes in the presence of strong remanent magnetization. The joint inversion of the total-field magnetic data and its analytic signal obtains the total magnetization and geometric properties, and the frequency domain EM method provides the susceptibility. As a result, the remanent magnetization can be separated from the total magnetization accurately.
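Once the susceptibility is known, the second-step separation reduces to vector arithmetic: the induced magnetization is the susceptibility times the ambient geomagnetic field, and the remanent part is the total minus the induced. All numerical values below are hypothetical, not results from the project.

```python
import numpy as np

def remanent_magnetization(M_total, chi, H_earth):
    """Separate the remanent part: M_rem = M_total - chi * H_earth.
    All vectors in A/m (SI); chi is the dimensionless susceptibility."""
    return np.asarray(M_total) - chi * np.asarray(H_earth)

H_earth = np.array([0.0, 15.0, 35.0])   # ambient field (hypothetical)
M_total = np.array([1.0, 2.0, 5.0])     # from the step-1 joint inversion
chi = 0.05                              # from the EM1DFM-style inversion
M_rem = remanent_magnetization(M_total, chi, H_earth)
```

The direction and magnitude of `M_rem` relative to the induced part is what carries the classification information discussed above.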
Sun, Wanjie; Larsen, Michael D; Lachin, John M
2014-04-15
In longitudinal studies, a quantitative outcome (such as blood pressure) may be altered during follow-up by the administration of a non-randomized, non-trial intervention (such as anti-hypertensive medication) that may seriously bias the study results. Current methods mainly address this issue for cross-sectional studies. For longitudinal data, the current methods are either restricted to a specific longitudinal data structure or are valid only under special circumstances. We propose two new methods for estimation of covariate effects on the underlying (untreated) general longitudinal outcomes: a single imputation method employing a modified expectation-maximization (EM)-type algorithm and a multiple imputation (MI) method utilizing a modified Monte Carlo EM-MI algorithm. Each method can be implemented as one-step, two-step, and full-iteration algorithms. They combine the advantages of the current statistical methods while reducing their restrictive assumptions and generalizing them to realistic scenarios. The proposed methods replace intractable numerical integration of a multi-dimensionally censored MVN posterior distribution with a simplified, sufficiently accurate approximation. It is particularly attractive when outcomes reach a plateau after intervention due to various reasons. Methods are studied via simulation and applied to data from the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications study of treatment for type 1 diabetes. Methods proved to be robust to high dimensions, large amounts of censored data, low within-subject correlation, and when subjects receive non-trial intervention to treat the underlying condition only (with high Y), or for treatment in the majority of subjects (with high Y) in combination with prevention for a small fraction of subjects (with normal Y). Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Adachi, Kazunari; Suzuki, Kohei; Shibamata, Yuki
2018-06-01
We previously developed a 100 W piezoelectric transformer comprising two identical bolt-clamped Langevin-type transducers (BLTs) and a stepped horn whose cross-sectional area ratio determines the specified step-up voltage transformation ratio. Unlike conventional piezoelectric transformers, this transformer is driven at a frequency quite near its mechanical resonance, and thus can be mechanically held firmly at its clearly identified vibratory node without mechanical energy loss. However, it has been revealed that the high-power operation of the transformer often becomes very unstable owing to the “jumping and dropping” phenomena first found by Takahashi and Hirose [Jpn. J. Appl. Phys. 31, 3055 (1992)]. To avoid this instability, we have investigated the peculiar phenomena, and found that they can be attributed to a heavily distorted electric field inside the piezoelectric ceramic disks of the BLT on the primary side of the transformer being driven by a low-impedance voltage source near the mechanical resonance. The resultant concentration of the electric field leads to the local reversal of piezoelectric polarization in every half period of the vibration, viz., the instability. Consequently, we have developed a scheme for the steady high-power operation of this type of piezoelectric transformer and examined its validity experimentally. The method has eventually improved the linearity and power transfer efficiency of the transformer significantly.
Hydrothermal synthesis of hierarchical CoO/SnO2 nanostructures for ethanol gas sensor.
Wang, Qingji; Kou, Xueying; Liu, Chang; Zhao, Lianjing; Lin, Tingting; Liu, Fangmeng; Yang, Xueli; Lin, Jun; Lu, Geyu
2018-03-01
In this work, an ethanol gas sensor with high performance was successfully fabricated from a hierarchical CoO/SnO2 heterojunction prepared by a two-step hydrothermal method. The response of the CoO/SnO2 sensor reaches 145 at 250 °C when exposed to 100 ppm ethanol gas, much higher than that (13.5) of the bare SnO2 sensor. These good sensing performances are mainly attributed to the formation of the CoO/SnO2 heterojunction, which produces a large variation of resistance between air and ethanol gas. Thus, the combination of n-type SnO2 and p-type CoO provides an effective strategy for designing new ethanol gas sensors. The unique nanostructure also plays an important role in detecting ethanol, owing to its contribution in facilitating the transport of ethanol gas molecules. In addition, we provide a general two-step strategy for designing heterojunctions based on SnO2 nanostructures. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Caesar, Jennifer; Tamm, Alexandra; Ruckteschler, Nina; Lena Leifke, Anna; Weber, Bettina
2018-03-01
Chlorophyll concentrations of biological soil crust (biocrust) samples are commonly determined to quantify the relevance of photosynthetically active organisms within these surface soil communities. Whereas chlorophyll extraction methods for freshwater algae and leaf tissues of vascular plants are well established, there is still some uncertainty regarding the optimal extraction method for biocrusts, where organism composition is highly variable and samples comprise major amounts of soil. In this study we analyzed the efficiency of two different chlorophyll extraction solvents, the effect of grinding the soil samples prior to the extraction procedure, and the impact of shaking as an intermediate step during extraction. The analyses were conducted on four different types of biocrusts. Our results show that for all biocrust types chlorophyll contents obtained with ethanol were significantly lower than those obtained using dimethyl sulfoxide (DMSO) as a solvent. Grinding of biocrust samples prior to analysis caused a highly significant decrease in chlorophyll content for green algal lichen- and cyanolichen-dominated biocrusts, and a tendency towards lower values for moss- and algae-dominated biocrusts. Shaking of the samples after each extraction step had a significant positive effect on the chlorophyll content of green algal lichen- and cyanolichen-dominated biocrusts. Based on our results we confirm a DMSO-based chlorophyll extraction method without grinding pretreatment and suggest the addition of an intermediate shaking step for complete chlorophyll extraction (see Supplement S6 for detailed manual). Determination of a universal chlorophyll extraction method for biocrusts is essential for the inter-comparability of publications conducted across all continents.
NASA Astrophysics Data System (ADS)
Rasmussen, Sune O.
2014-05-01
Due to their outstanding resolution and well-constrained chronologies, Greenland ice core records have long been used as a master record of past climatic changes during the last interglacial-glacial cycle in the North Atlantic region. As part of the INTIMATE (INtegration of Ice-core, MArine and TErrestrial records) project, protocols have been proposed to ensure consistent and robust correlation between different records of past climate. A key element of these protocols has been the formal definition of numbered Greenland Stadials (GS) and Greenland Interstadials (GI) within the past glacial period as the Greenland expressions of the characteristic Dansgaard-Oeschger events that represent cold and warm phases of the North Atlantic region, respectively. Using a recent synchronization of the NGRIP, GRIP, and GISP2 ice cores that allows the parallel analysis of all three records on a common time scale, we here present an extension of the GS/GI stratigraphic template to the entire glacial period. This is based on a combination of isotope ratios (δ18O, reflecting mainly local temperature) and calcium concentrations (reflecting mainly atmospheric dust loading). In addition to the well-known sequence of Dansgaard-Oeschger events that were first defined and numbered in the ice core records more than two decades ago, a number of short-lived climatic oscillations have been identified in the three synchronized records. Some of these events have been observed in other studies, but we here propose a consistent scheme for discriminating and naming all the significant climatic events of the last glacial period that are represented in the Greenland ice cores. This is a key step aimed at promoting unambiguous comparison and correlation between different proxy records, as well as a more secure basis for investigating the dynamics and fundamental causes of these climatic perturbations. The work presented is under review for publication in Quaternary Science Reviews. Author team: S.O. 
Rasmussen, M. Bigler, S.P.E. Blockley, T. Blunier, S.L. Buchardt, H.B. Clausen, I. Cvijanovic, D. Dahl-Jensen, S.J. Johnsen, H. Fischer, V. Gkinis, M. Guillevic, W.Z. Hoek, J.J. Lowe, J. Pedro, T. Popp, I.K. Seierstad, J.P. Steffensen, A.M. Svensson, P. Vallelonga, B.M. Vinther, M.J.C. Walker, J.J. Wheatley, and M. Winstrup.
Obtaining correct compile results by absorbing mismatches between data types representations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni
Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
Obtaining correct compile results by absorbing mismatches between data types representations
Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio
2017-03-21
Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
Obtaining correct compile results by absorbing mismatches between data types representations
Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio
2017-11-21
Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
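The patent abstracts above describe AST-to-AST conversion via a table mapping data-representation types between two languages, with a special error node that stores the offending token for later unparsing. A minimal sketch of that mechanism; the node layout, type names, and table entries are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:                      # node of an abstract syntax tree
    kind: str                    # e.g. a data-representation type
    children: List["Node"] = field(default_factory=list)
    token: Optional[str] = None  # original token, for error nodes

# Conversion table: data-representation types in the first
# language mapped to types in the second language (hypothetical).
CONVERSION_TABLE = {
    "Int32": "int",
    "Float64": "double",
    "Utf8String": "String",
}

def convert(node: Node) -> Node:
    """Convert a first-language AST into a second-language AST.

    When a type has no table entry (a 'compilation error' in the
    abstract's terms), a special error node is generated and the
    offending token is stored in it, so that unparsing can later
    emit the original first-language source unchanged.
    """
    target_kind = CONVERSION_TABLE.get(node.kind)
    if target_kind is None:
        return Node(kind="Error", token=node.kind)
    return Node(kind=target_kind,
                children=[convert(c) for c in node.children])

def unparse(node: Node) -> str:
    # Error nodes output the stored token verbatim.
    if node.kind == "Error":
        return node.token
    return node.kind + ("(" + ", ".join(unparse(c) for c in node.children) + ")"
                        if node.children else "")
```

The error node thus absorbs the mismatch: the second-language tree stays well-formed, and the unmapped fragment round-trips as first-language source.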
Nondestructive mechanical characterization of developing biological tissues using inflation testing.
Oomen, P J A; van Kelle, M A J; Oomens, C W J; Bouten, C V C; Loerakker, S
2017-10-01
One of the hallmarks of biological soft tissues is their capacity to grow and remodel in response to changes in their environment. Although it is well-accepted that these processes occur at least partly to maintain a mechanical homeostasis, it remains unclear which mechanical constituent(s) determine(s) mechanical homeostasis. In the current study a nondestructive mechanical test and a two-step inverse analysis method were developed and validated to nondestructively estimate the mechanical properties of biological tissue during tissue culture. Nondestructive mechanical testing was achieved by performing an inflation test on tissues that were cultured inside a bioreactor, while the tissue displacement and thickness were nondestructively measured using ultrasound. The material parameters were estimated by an inverse finite element scheme, which was preceded by an analytical estimation step to rapidly obtain an initial estimate that already approximated the final solution. The efficiency and accuracy of the two-step inverse method was demonstrated on virtual experiments of several material types with known parameters. PDMS samples were used to demonstrate the method's feasibility, where it was shown that the proposed method yielded similar results to tensile testing. Finally, the method was applied to estimate the material properties of tissue-engineered constructs. Via this method, the evolution of mechanical properties during tissue growth and remodeling can now be monitored in a well-controlled system. The outcomes can be used to determine various mechanical constituents and to assess their contribution to mechanical homeostasis. Copyright © 2017 Elsevier Ltd. All rights reserved.
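The two-step inverse analysis above pairs a fast analytical first estimate with iterative refinement. As a toy illustration of that structure (not the authors' inverse finite-element scheme): fit the single parameter of a hypothetical model y = exp(a·x), taking a closed-form initial guess from one data point and then refining it by Gauss-Newton least squares.

```python
import math

def two_step_fit(xs, ys, iters=50):
    """Two-step parameter estimation: an analytical first guess
    followed by Gauss-Newton refinement of the 1-parameter model
    y = exp(a * x) (a stand-in for the inverse FE problem)."""
    # Step 1: analytical estimate from a single data point.
    a = math.log(ys[-1]) / xs[-1]
    # Step 2: Gauss-Newton iterations on the least-squares residual.
    for _ in range(iters):
        J = [x * math.exp(a * x) for x in xs]           # d(model)/da
        r = [math.exp(a * x) - y for x, y in zip(xs, ys)]
        num = sum(j * ri for j, ri in zip(J, r))
        den = sum(j * j for j in J)
        a -= num / den                                   # Newton update
    return a
```

Because the analytical step already lands near the solution, the iterative step only has to polish it, which mirrors the paper's rationale for preceding the inverse FE scheme with an analytical estimate.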
Trong Bui, Duong; Nguyen, Nhan Duc; Jeong, Gu-Min
2018-06-25
Human activity recognition and pedestrian dead reckoning are interesting fields because of their important applications in daily-life healthcare. Currently, these fields face many challenges, one of which is the lack of a robust algorithm with high performance. This paper proposes a new method to implement a robust step detection and adaptive distance estimation algorithm based on the classification of five daily wrist activities during walking at various speeds using a smart band. The key idea is that the non-parametric adaptive distance estimator is performed after two activity classifiers and a robust step detector. In this study, two classifiers perform two phases of recognizing five wrist activities during walking. Then, a robust step detection algorithm, which integrates an adaptive threshold with a peak and valley correction algorithm, is applied to the classified activities to detect the walking steps. In addition, misclassified activities are fed back to the previous layer. Finally, three adaptive distance estimators, which are based on a non-parametric model of the average walking speed, calculate the length of each stride. The experimental results show that the average classification accuracy is about 99%, and the accuracy of the step detection is 98.7%. The error of the estimated distance is 2.2-4.2% depending on the type of wrist activity.
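The step detector described above combines an adaptive threshold with peak-and-valley correction. A simplified stand-in (not the paper's classifier pipeline; the threshold here is a crude global mean + k·std rather than a truly adaptive one) illustrating the peak/valley alternation rule on synthetic acceleration data:

```python
import math

def detect_steps(acc, k=0.5):
    """Count steps with a thresholded peak detector.

    A sample is a step candidate if it is a strict local maximum
    and exceeds mean + k * std of the signal.  A new peak is only
    accepted after a valley (local minimum below the mirrored
    threshold), mimicking the peak-and-valley correction step that
    suppresses spurious double peaks.
    """
    n = len(acc)
    mean = sum(acc) / n
    std = (sum((a - mean) ** 2 for a in acc) / n) ** 0.5
    hi, lo = mean + k * std, mean - k * std
    steps, need_valley = 0, False
    for i in range(1, n - 1):
        if acc[i] > acc[i-1] and acc[i] > acc[i+1] and acc[i] > hi and not need_valley:
            steps += 1
            need_valley = True          # wait for a valley before the next peak
        elif acc[i] < acc[i-1] and acc[i] < acc[i+1] and acc[i] < lo:
            need_valley = False
    return steps

# Synthetic vertical acceleration: 10 strides as a clean sinusoid.
signal = [math.sin(2 * math.pi * 10 * i / 1000) for i in range(1000)]
```

On real wrist data the threshold would be recomputed over a sliding window per activity class, which is where the classification stage feeds in.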
NASA Astrophysics Data System (ADS)
Pan, Liang; Xu, Kun; Li, Qibing; Li, Jiequan
2016-12-01
For computational fluid dynamics (CFD), the generalized Riemann problem (GRP) solver and the second-order gas-kinetic scheme (GKS) provide a time-accurate flux function starting from discontinuous piecewise-linear flow distributions around a cell interface. With the adoption of the time derivative of the flux function, a two-stage Lax-Wendroff-type (L-W for short) time stepping method has recently been proposed in the design of a fourth-order time-accurate method for inviscid flow [21]. In this paper, based on the same time-stepping method and the second-order GKS flux function [42], a fourth-order gas-kinetic scheme is constructed for the Euler and Navier-Stokes (NS) equations. In comparison with the formal one-stage time-stepping third-order gas-kinetic solver [24], the current fourth-order method not only reduces the complexity of the flux function, but also improves the accuracy of the scheme. In terms of the computational cost, a two-dimensional third-order GKS flux function takes about six times the computational time of a second-order GKS flux function, while a fifth-order WENO reconstruction may take more than ten times the computational cost of a second-order GKS flux function. Therefore, it is fully legitimate to develop a two-stage fourth-order time-accurate method (two reconstructions) instead of the standard four-stage fourth-order Runge-Kutta method (four reconstructions). Most importantly, the robustness of the fourth-order GKS is as good as that of the second-order one. In current CFD research, it is still a difficult problem to extend a higher-order Euler solver to the NS equations due to the change of the governing equations from hyperbolic to parabolic type and the initial interface discontinuity; this problem is particularly acute for hypersonic viscous and heat-conducting flow. The GKS is based on the kinetic equation with hyperbolic transport and a relaxation source term.
The time-dependent GKS flux function provides a dynamic process of evolution from kinetic-scale particle free transport to hydrodynamic-scale wave propagation, which supplies the physics for the non-equilibrium numerical shock structure construction through to the near-equilibrium NS solution. As a result, with the implementation of the fifth-order WENO initial reconstruction, in the smooth region the current two-stage GKS provides an accuracy of O((Δx)⁵, (Δt)⁴) for the Euler equations, and O((Δx)⁵, τ²Δt) for the NS equations, where τ is the time between particle collisions. Many numerical tests, including difficult ones for Navier-Stokes solvers, have been used to validate the current method. Accurate numerical solutions are obtained from the high-Reynolds-number boundary layer to hypersonic viscous heat-conducting flow. Following the two-stage time-stepping framework, the third-order GKS flux function can be used as well to construct a fifth-order method with the use of both first-order and second-order time derivatives of the flux function. The use of a time-accurate flux function may have great advantages in the development of higher-order CFD methods.
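The two-stage fourth-order idea above (two flux evaluations with time derivatives instead of four RK4 stages) can be sketched for a scalar ODE u' = f(u), where the flux time derivative follows from the chain rule, f_t = f'(u)·f(u). This is a minimal scalar illustration of the two-stage construction, not the authors' gas-kinetic scheme; the update coefficients shown are the standard two-stage fourth-order ones.

```python
import math

def two_stage_fourth_order(u, f, fp, dt, steps):
    """Two-stage fourth-order time stepping for u' = f(u).

    Uses the time derivative of the flux, f_t = f'(u) * f(u), in:
        u*      = u + dt/2 * f(u) + dt^2/8 * f_t(u)
        u_{n+1} = u + dt   * f(u) + dt^2/6 * (f_t(u) + 2 f_t(u*))
    Two flux evaluations per step replace the four stages of
    classical RK4 (the 'two reconstructions' noted in the text).
    """
    ft = lambda v: fp(v) * f(v)   # chain rule: d/dt f(u) = f'(u) u'
    for _ in range(steps):
        us = u + 0.5 * dt * f(u) + dt * dt / 8.0 * ft(u)
        u = u + dt * f(u) + dt * dt / 6.0 * (ft(u) + 2.0 * ft(us))
    return u

# u' = u, u(0) = 1: each step reproduces the degree-4 Taylor
# polynomial of exp(dt), so the global error is fourth order.
approx = two_stage_fourth_order(1.0, lambda v: v, lambda v: 1.0, 0.1, 10)
```

For a PDE scheme, "f" becomes the spatial flux balance from a reconstruction, which is why halving the number of stages halves the number of costly reconstructions.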
Hosseini, Elham; Janghorbani, Mohsen; Aminorroaya, Ashraf
2018-06-01
To study the incidence, risk factors, and pregnancy outcomes associated with gestational diabetes mellitus (GDM) diagnosed with one-step and two-step screening approaches, 1000 pregnant women who were eligible and consented to participate underwent fasting plasma glucose testing at the first prenatal visit (6-14 weeks). The women free from GDM or overt diabetes were screened at 24-28 weeks using the 50-g glucose challenge test (GCT) followed by the 100-g, 3-h oral glucose tolerance test (OGTT) (two-step method). Regardless of the GCT result, all women underwent a 75-g, 2-h OGTT within a one-week interval (one-step method). GDM incidence using the one-step and two-step methods was 9.3% (95% CI: 7.4-11.2) and 4.2% (95% CI: 2.9-5.5), respectively. GDM significantly increased the risk of macrosomia, gestational hypertension, preeclampsia, and cesarean section, and older age and a family history of diabetes significantly increased the risk of developing GDM in both approaches. In the two-step method, higher pre-pregnancy body mass index and lower physical activity during pregnancy, along with previous cesarean section, also significantly increased the risk of developing GDM. Despite a higher incidence of GDM using the one-step approach, more risk factors for GDM, and a stronger effect of GDM on adverse pregnancy outcomes, were found when using the two-step approach. Longer follow-up of women with and without GDM may change the results of both approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
Using Ab-Initio Calculations to Appraise STM-Based Step- and Kink-Formation Energies
NASA Astrophysics Data System (ADS)
Feibelman, Peter J.
2001-03-01
Ab-initio total energies can and should be used to test the typically model-dependent results of interpreting STM morphologies. The benefits of such tests are illustrated here by ab-initio energies of step- and kink-formation on Pb and Pt(111) which show that the STM-based values of the kink energies must be revised. On Pt(111), the computed kink-energies for (100)- and (111)-microfacet steps are about 0.25 and 0.18 eV. These results imply a specific ratio of formation energies for the two step types, namely 1.14, in excellent agreement with experiment. If kink-formation actually cost the same energy on the two step types, an inference drawn from scanning probe observations of step wandering,(M. Giesen et al., Surf. Sci. 366, 229(1996).) this ratio ought to be 1. In the case of Pb(111), though computed energies to form (100)- and (111)-microfacet steps agree with measurement, the ab-initio kink-formation energies for the two step types, 41 and 60 meV, are 40-50% below experimental values drawn from STM images.(K. Arenhold et al., Surf. Sci. 424, 271(1999).) The discrepancy results from interpreting the images with a step-stiffness vs. kink-energy relation appropriate to (100) but not (111) surfaces. Good agreement is found when proper account of the trigonal symmetry of Pb(111) is taken in reinterpreting the step-stiffness data.
NASA Astrophysics Data System (ADS)
Doha, E. H.; Bhrawy, A. H.; Abdelkawy, M. A.; Van Gorder, Robert A.
2014-03-01
A Jacobi-Gauss-Lobatto collocation (J-GL-C) method, used in combination with the implicit Runge-Kutta method of fourth order, is proposed as a numerical algorithm for the approximation of solutions to nonlinear Schrödinger equations (NLSE) with initial-boundary data in 1+1 dimensions. Our procedure is implemented in two successive steps. In the first one, the J-GL-C is employed for approximating the functional dependence on the spatial variable, using (N-1) nodes of the Jacobi-Gauss-Lobatto interpolation which depends upon two general Jacobi parameters. The resulting equations together with the two-point boundary conditions induce a system of 2(N-1) first-order ordinary differential equations (ODEs) in time. In the second step, the implicit Runge-Kutta method of fourth order is applied to solve this temporal system. The proposed J-GL-C method, used in combination with the implicit Runge-Kutta method of fourth order, is employed to obtain highly accurate numerical approximations to four types of NLSE, including the attractive and repulsive NLSE and a Gross-Pitaevskii equation with space-periodic potential. The numerical results obtained by this algorithm have been compared with various exact solutions in order to demonstrate the accuracy and efficiency of the proposed method. Indeed, for relatively few nodes used, the absolute error in our numerical solutions is sufficiently small.
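The two successive steps above (spatial collocation reducing the PDE to an ODE system, then time integration) can be illustrated in miniature. This sketch swaps the paper's J-GL-C collocation for simple second-order central differences and the implicit RK4 for the classical explicit RK4, purely to show the method-of-lines structure on the heat equation u_t = u_xx; it is not the authors' algorithm.

```python
import math

def rk4_step(u, rhs, dt):
    """One classical fourth-order Runge-Kutta step for u' = rhs(u)."""
    k1 = rhs(u)
    k2 = rhs([ui + 0.5 * dt * ki for ui, ki in zip(u, k1)])
    k3 = rhs([ui + 0.5 * dt * ki for ui, ki in zip(u, k2)])
    k4 = rhs([ui + dt * ki for ui, ki in zip(u, k3)])
    return [ui + dt / 6.0 * (a + 2*b + 2*c + d)
            for ui, a, b, c, d in zip(u, k1, k2, k3, k4)]

def solve_heat(n=64, t_end=0.5, dt=0.002):
    """Step 1: discretize u_t = u_xx on a periodic grid (a stand-in
    for collocation); Step 2: integrate the resulting ODE system in
    time with RK4."""
    dx = 2 * math.pi / n
    xs = [i * dx for i in range(n)]
    u = [math.sin(x) for x in xs]           # initial condition

    def laplacian(v):                       # second-order central differences
        return [(v[(i-1) % n] - 2*v[i] + v[(i+1) % n]) / dx**2
                for i in range(n)]

    for _ in range(round(t_end / dt)):
        u = rk4_step(u, laplacian, dt)
    return xs, u
```

An implicit time integrator, as used in the paper, would lift the explicit stability restriction dt ≲ dx²/2 that this sketch must respect.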
Liu, Yu; Li, Ji-Jia; Zu, Peng; Liu, Hong-Xu; Yu, Zhan-Wu; Ren, Yi
2017-12-07
To introduce a two-step method for creating a gastric tube during laparoscopic-thoracoscopic Ivor-Lewis esophagectomy and assess its clinical application. One hundred and twenty-two patients with middle or lower esophageal cancer who underwent laparoscopic-thoracoscopic Ivor-Lewis esophagectomy at Liaoning Cancer Hospital and Institute from March 2014 to March 2016 were included in this study and divided into two groups based on the procedure used for creating a gastric tube. One group used the two-step method for creating a gastric tube, and the other group used the conventional method. The two groups were compared regarding operating time, surgical complications, and the number of stapler cartridges used. The mean operating time was significantly shorter in the two-step method group than in the conventional method group [238 (179-293) min vs 272 (189-347) min, P < 0.01]. No postoperative death occurred in either group. There was no significant difference in the rate of complications [14 (21.9%) vs 13 (22.4%), P = 0.55], while the mean number of stapler cartridges used was lower in the two-step method group [5 (4-6) vs 5.2 (5-6), P = 0.007]. The two-step method for creating a gastric tube during laparoscopic-thoracoscopic Ivor-Lewis esophagectomy has the advantages of simple operation, minimal damage to the tubular stomach, and reduced use of stapler cartridges.
Electric fields preceding cloud-to-ground lightning flashes
NASA Astrophysics Data System (ADS)
Beasley, W.; Uman, M. A.; Rustan, P. L., Jr.
1982-06-01
A detailed analysis is presented of the electric-field variations preceding the first return strokes of 80 cloud-to-ground lightning flashes in nine different storms observed at the NASA Kennedy Space Center during the summers of 1976 and 1977. It is suggested that the electric-field variations can best be characterized as having two sections: preliminary variations and stepped leader. The stepped-leader change begins during a transition period of a few milliseconds marked by characteristic bipolar pulses; the duration of stepped leaders lies most frequently in the 6-20 millisecond range. It is also suggested that there is only one type of stepped leader, not two types (alpha and beta) often referred to in the literature.
Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction
NASA Astrophysics Data System (ADS)
Rizal Isnanto, R.
2015-06-01
The human iris has a highly distinctive pattern that can be used for biometric recognition. To identify texture in an image, texture analysis methods can be used. One such method is the wavelet transform, which extracts image features based on energy. The wavelet transforms used are Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five mentioned wavelets was performed, and a comparative analysis was conducted from which conclusions were drawn. Several steps were carried out. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The features obtained are the subband energy values. The next step is recognition using normalized Euclidean distance. The comparative analysis is based on the recognition rate percentage, with two samples stored in the database as reference images. After finding the recognition rate, further tests were conducted using energy compaction for all five wavelet types above. As a result, the highest recognition rate is achieved using Haar; moreover, for coefficient cutting with C(i) < 0.1, the Haar wavelet has the highest percentage, so the retention rate of significant coefficients retained for Haar is lower than for the other wavelet types (db5, coif3, sym4, and bior2.4).
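The energy-based wavelet features above can be sketched with a hand-rolled orthonormal Haar DWT (library-free; a minimal stand-in for the paper's full segmentation-and-matching pipeline):

```python
import math

def haar_dwt(signal, levels):
    """Multi-level orthonormal Haar DWT.  Returns the final
    approximation plus the detail subbands, finest first."""
    approx, details = list(signal), []
    for _ in range(levels):
        a = [(approx[2*i] + approx[2*i+1]) / math.sqrt(2)
             for i in range(len(approx) // 2)]
        d = [(approx[2*i] - approx[2*i+1]) / math.sqrt(2)
             for i in range(len(approx) // 2)]
        details.append(d)
        approx = a
    return approx, details

def energy_features(signal, levels=3):
    """Per-subband energies: the feature vector used for matching.
    Because the Haar basis is orthonormal, the subband energies sum
    to the signal energy (Parseval), which is what makes energy
    compaction a meaningful criterion for discarding coefficients."""
    approx, details = haar_dwt(signal, levels)
    feats = [sum(c * c for c in approx)]
    feats += [sum(c * c for c in d) for d in details]
    return feats

def euclidean(f1, f2):
    """Distance between feature vectors, as used for recognition."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))
```

A wavelet with strong energy compaction packs most of the signal energy into few coefficients, so cutting coefficients below C(i) < 0.1 removes little energy, consistent with the Haar result reported above.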
Luewan, Suchaya; Bootchaingam, Phenphan; Tongsong, Theera
2018-01-01
To compare the prevalence and pregnancy outcomes of GDM between those screened by the "one-step" (75-g GTT) and "two-step" (100-g GTT) methods, a prospective study was conducted on singleton pregnancies at low or average risk of GDM. All were screened between 24 and 28 weeks, using the one-step or two-step method based on patients' preference. The primary outcome was the prevalence of GDM, and secondary outcomes included birthweight, gestational age, and rates of preterm birth, small/large-for-gestational-age, low Apgar scores, cesarean section, and pregnancy-induced hypertension. A total of 648 women were screened: 278 in the one-step group and 370 in the two-step group. The prevalence of GDM was significantly higher in the one-step group: 32.0% versus 10.3%. Baseline characteristics and pregnancy outcomes in both groups were comparable. However, mean birthweight was significantly higher among pregnancies with GDM diagnosed by the two-step approach (3204 ± 555 versus 3009 ± 666 g; p = 0.022). Likewise, the rate of large-for-date infants tended to be higher in the two-step group, but the difference was not significant. The one-step approach is associated with a very high prevalence of GDM in the Thai population, without clear evidence of better outcomes. Thus, this approach may not be appropriate for screening in a busy antenatal care clinic like our setting or other centers in developing countries.
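The diagnostic logic behind the two screening approaches discussed in the two abstracts above can be written out explicitly. The cut-offs below are the commonly used IADPSG (one-step) and GCT/Carpenter-Coustan (two-step) thresholds in mg/dL; they are stated here as assumptions, since neither abstract specifies its criteria.

```python
def one_step_gdm(fasting, h1, h2):
    """One-step 75-g OGTT (IADPSG-style cut-offs, assumed here:
    fasting >= 92, 1-h >= 180, 2-h >= 153 mg/dL; a single abnormal
    value is diagnostic)."""
    return fasting >= 92 or h1 >= 180 or h2 >= 153

def two_step_gdm(gct_1h, fasting, h1, h2, h3):
    """Two-step screen: 50-g GCT first (screen positive assumed at
    >= 140 mg/dL), then a 100-g 3-h OGTT requiring two or more
    abnormal values (Carpenter-Coustan cut-offs, assumed:
    95 / 180 / 155 / 140 mg/dL)."""
    if gct_1h < 140:
        return False           # screen negative: no OGTT performed
    abnormal = sum([fasting >= 95, h1 >= 180, h2 >= 155, h3 >= 140])
    return abnormal >= 2
```

A single mildly elevated value is diagnostic under the one-step rule but not under the two-step rule, which is one mechanism behind the much higher one-step prevalence both studies report.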
An efficient mode-splitting method for a curvilinear nearshore circulation model
Shi, Fengyan; Kirby, James T.; Hanes, Daniel M.
2007-01-01
A mode-splitting method is applied to the quasi-3D nearshore circulation equations in generalized curvilinear coordinates. The gravity wave mode and the vorticity wave mode of the equations are derived using the two-step projection method. Using an implicit algorithm for the gravity mode and an explicit algorithm for the vorticity mode, we combine the two modes to derive a mixed difference–differential equation with respect to surface elevation. McKee et al.'s [McKee, S., Wall, D.P., and Wilson, S.K., 1996. An alternating direction implicit scheme for parabolic equations with mixed derivative and convective terms. J. Comput. Phys., 126, 64–76.] ADI scheme is then used to solve the parabolic-type equation in dealing with the mixed derivative and convective terms from the curvilinear coordinate transformation. Good convergence rates are found in two typical cases which represent respectively the motions dominated by the gravity mode and the vorticity mode. Time step limitations imposed by the vorticity convective Courant number in vorticity-mode-dominant cases are discussed. Model efficiency and accuracy are verified in model application to tidal current simulations in San Francisco Bight.
NASA Astrophysics Data System (ADS)
van Rossum, Anne C.; Lin, Hai Xiang; Dubbeldam, Johan; van der Herik, H. Jaap
2018-04-01
In machine vision, typical heuristic methods to extract parameterized objects from raw data points are the Hough transform and RANSAC. Bayesian models carry the promise of optimally extracting such parameterized objects given a correct definition of the model and of the type of noise at hand. One category of solvers for Bayesian models are Markov chain Monte Carlo (MCMC) methods. Naive implementations of MCMC methods suffer from slow convergence in machine vision due to the complexity of the parameter space. To address this, blocked Gibbs and split-merge samplers have been developed that assign multiple data points to clusters at once. In this paper we introduce a new split-merge sampler, the triadic split-merge sampler, that performs steps between two and three randomly chosen clusters. This has two advantages. First, it reduces the asymmetry between the split and merge steps. Second, it is able to propose a new cluster composed of data points from two different clusters. Both advantages speed up convergence, which we demonstrate on a line extraction problem. We show that the triadic split-merge sampler outperforms the conventional split-merge sampler. Although this new MCMC sampler is demonstrated in a machine vision context, its application extends to the very general domain of statistical inference.
Smoke regions extraction based on two steps segmentation and motion detection in early fire
NASA Astrophysics Data System (ADS)
Jian, Wenlin; Wu, Kaizhi; Yu, Zirong; Chen, Lijuan
2018-03-01
Aiming at the problems of video-based smoke detection in early fire, this paper proposes a method to extract suspected smoke regions by combining two-step segmentation and motion characteristics. Early smoldering smoke appears as gray or gray-white regions. In the first stage, regions of interest (ROIs) containing smoke are obtained using a two-step segmentation method. Then, suspected smoke regions are detected by combining the two-step segmentation with motion detection. Finally, morphological processing is used to extract the smoke regions. The Otsu algorithm is used as the segmentation method, and the ViBe algorithm is used to detect the motion of smoke. The proposed method was tested on 6 test videos containing smoke. The experimental results show the effectiveness of the proposed method, as confirmed by visual observation.
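The first-stage segmentation above relies on the Otsu algorithm. A minimal pure-Python sketch of Otsu's method (the second segmentation pass within the bright class and the ViBe motion model are not reproduced here):

```python
def otsu_threshold(pixels, bins=256):
    """Otsu's method: choose the gray level that maximizes the
    between-class variance of the background/foreground split.
    `pixels` is a flat list of integer gray levels in [0, bins)."""
    hist = [0] * bins
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(bins):
        w0 += hist[t]                 # background weight (levels <= t)
        if w0 == 0:
            continue
        w1 = total - w0               # foreground weight
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                # background mean
        m1 = (total_sum - sum0) / w1  # foreground mean
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

For gray or gray-white smoke against a darker background, the maximal between-class variance lands the threshold in the gap between the two intensity clusters, which is what makes the ROI extraction work.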
Inverse imaging of the breast with a material classification technique.
Manry, C W; Broschat, S L
1998-03-01
In recent publications [Chew et al., IEEE Trans. Biomed. Eng. BME-9, 218-225 (1990); Borup et al., Ultrason. Imaging 14, 69-85 (1992)] the inverse imaging problem has been solved by means of a two-step iterative method. In this paper, a third step is introduced for ultrasound imaging of the breast. In this step, which is based on statistical pattern recognition, classification of tissue types and a priori knowledge of the anatomy of the breast are integrated into the iterative method. Use of this material classification technique results in more rapid convergence to the inverse solution--approximately 40% fewer iterations are required--as well as greater accuracy. In addition, tumors are detected early in the reconstruction process. Results for reconstructions of a simple two-dimensional model of the human breast are presented. These reconstructions are extremely accurate when system noise and variations in tissue parameters are not too great. However, for the algorithm used, degradation of the reconstructions and divergence from the correct solution occur when system noise and variations in parameters exceed threshold values. Even in this case, however, tumors are still identified within a few iterations.
van Houte, Bart PP; Binsl, Thomas W; Hettling, Hannes; Pirovano, Walter; Heringa, Jaap
2009-01-01
Background: Array comparative genomic hybridization (aCGH) is a popular technique for detection of genomic copy number imbalances, which play a critical role in the onset of various types of cancer. In the analysis of aCGH data, normalization is deemed a critical pre-processing step. In general, aCGH normalization approaches are similar to those used for gene expression data, although the two data types differ inherently. A particular problem with aCGH data is that imbalanced copy numbers lead to improper normalization using conventional methods. Results: In this study we present a novel method, called CGHnormaliter, which addresses this issue by means of an iterative normalization procedure. First, provisory balanced copy numbers are identified and subsequently used for normalization. These two steps are then iterated to refine the normalization. We tested our method on three well-studied tumor-related aCGH datasets with experimentally confirmed copy numbers. Results were compared to a conventional normalization approach and two more recent state-of-the-art aCGH normalization strategies. Our findings show that, compared to these three methods, CGHnormaliter yields higher specificity and precision in terms of identifying the 'true' copy numbers. Conclusion: We demonstrate that the normalization of aCGH data can be significantly enhanced using an iterative procedure that effectively eliminates the effect of imbalanced copy numbers. This also leads to a more reliable assessment of aberrations. An R package containing the implementation of CGHnormaliter is available at . PMID:19709427
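The iterate-on-balanced-clones idea above can be shown on toy log2 ratios. This is a simplified sketch of the principle, not the CGHnormaliter package's implementation; the window width and stopping rule are illustrative choices.

```python
def iterative_normalize(log_ratios, window=0.5, iters=10):
    """Iteratively normalize aCGH log2 ratios.

    Step 1: provisionally call clones 'balanced' if they lie within
    `window` of the current centre (initialized at the median).
    Step 2: recentre on the mean of the balanced subset only, so
    gained or lost regions no longer drag the normalization factor.
    The two steps are iterated until the centre stabilizes.
    """
    values = sorted(log_ratios)
    centre = values[len(values) // 2]          # median as starting centre
    for _ in range(iters):
        balanced = [v for v in log_ratios if abs(v - centre) < window]
        if not balanced:
            break
        new_centre = sum(balanced) / len(balanced)
        if abs(new_centre - centre) < 1e-12:
            break
        centre = new_centre
    return [v - centre for v in log_ratios]
```

A plain global-mean normalization of the same data would be pulled upward by the gained clones, leaving the balanced clones with a spurious negative offset; restricting the centring to the provisory balanced subset avoids exactly that.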
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.
2014-01-15
Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value.
We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.
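The RKL1 recursion described above is concrete enough to sketch for the 1D heat equation u_t = κ u_xx. The stage coefficients follow the published first-order scheme (μ_j = (2j−1)/j, ν_j = (1−j)/j, μ̃_j = μ_j · 2/(s²+s), superstep limit Δt ≤ Δt_expl (s²+s)/2); the grid, boundary handling, and function names are illustrative assumptions.

```python
import numpy as np

def rkl1_superstep(u, dt, dx, kappa, s):
    """One RKL1 super-time-step for u_t = kappa * u_xx with fixed end values.

    Recursion: Y_0 = u; Y_1 = Y_0 + mu~_1 dt L(Y_0);
    Y_j = mu_j Y_{j-1} + nu_j Y_{j-2} + mu~_j dt L(Y_{j-1}), j = 2..s,
    where mu_j = (2j-1)/j, nu_j = (1-j)/j, mu~_j = mu_j * 2/(s^2+s).
    """
    def L(v):  # second-order discrete diffusion operator, Dirichlet ends
        out = np.zeros_like(v)
        out[1:-1] = kappa * (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
        return out

    y_prev2 = u
    y_prev1 = u + (2.0 / (s * s + s)) * dt * L(u)  # Y_1 (mu~_1 = 2/(s^2+s))
    for j in range(2, s + 1):
        mu = (2.0 * j - 1.0) / j
        nu = (1.0 - j) / j
        y = mu * y_prev1 + nu * y_prev2 + mu * (2.0 / (s * s + s)) * dt * L(y_prev1)
        y_prev2, y_prev1 = y_prev1, y
    return y_prev1
```

With s stages the superstep may be up to (s²+s)/2 explicit steps long, so covering a fixed time interval costs roughly a factor 1/s of the explicit scheme's work, which is the speedup the abstract refers to.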
NASA Technical Reports Server (NTRS)
Liu, A. F.
1974-01-01
A systematic approach for applying methods for fracture control in the structural components of space vehicles consists of four major steps. The first step is to define the primary load-carrying structural elements and the type of load, environment, and design stress levels acting upon them. The second step is to identify the potential fracture-critical parts by means of a selection logic flow diagram. The third step is to evaluate the safe-life and fail-safe capabilities of the specified part. The last step in the sequence is to apply the control procedures that will prevent damage to the fracture-critical parts. The fracture control methods discussed include fatigue design and analysis methods, methods for preventing crack-like defects, fracture mechanics analysis methods, and nondestructive evaluation methods. An example problem is presented for evaluation of the safe-crack-growth capability of the space shuttle crew compartment skin structure.
Comparison of Penalty Functions for Sparse Canonical Correlation Analysis
Chalise, Prabhakar; Fridley, Brooke L.
2011-01-01
Canonical correlation analysis (CCA) is a widely used multivariate method for assessing the association between two sets of variables. However, when the number of variables far exceeds the number of subjects, such as in the case of large-scale genomic studies, the traditional CCA method is not appropriate. In addition, when the variables are highly correlated the sample covariance matrices become unstable or undefined. To overcome these two issues, sparse canonical correlation analysis (SCCA) for multiple data sets has been proposed using a Lasso type of penalty. However, these methods do not have direct control over the sparsity of the solution. An additional step that uses the Bayesian Information Criterion (BIC) has also been suggested to further filter out unimportant features. In this paper, a comparison of four penalty functions (Lasso, Elastic-net, SCAD and Hard-threshold) for SCCA with and without the BIC filtering step has been carried out using both real and simulated genotypic and mRNA expression data. This study indicates that the SCAD penalty with the BIC filter would be a preferable penalty function for application of SCCA to genomic data. PMID:21984855
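Each of the four penalties corresponds to a thresholding operator applied to the canonical weights during the SCCA iterations. The operators below are the standard textbook forms (soft, elastic-net, hard, and SCAD thresholding with the usual a = 3.7), not code from the paper:

```python
import numpy as np

def soft(z, lam):
    """Lasso (soft) thresholding: shrink toward zero by lam."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def enet(z, lam1, lam2):
    """Elastic-net thresholding: soft-threshold, then extra shrinkage."""
    return soft(z, lam1) / (1.0 + lam2)

def hard(z, lam):
    """Hard thresholding: keep large coefficients unchanged, zero the rest."""
    return np.where(np.abs(z) > lam, z, 0.0)

def scad(z, lam, a=3.7):
    """SCAD thresholding (Fan & Li): soft near zero, identity for large z."""
    z = np.asarray(z, dtype=float)
    return np.where(np.abs(z) <= 2.0 * lam,
                    soft(z, lam),
                    np.where(np.abs(z) <= a * lam,
                             ((a - 1.0) * z - np.sign(z) * a * lam) / (a - 2.0),
                             z))
```

SCAD behaves like the Lasso for small coefficients but leaves large coefficients unbiased, which is consistent with its preferable behavior reported above.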
Li, Siwei; Ding, Wentao; Zhang, Xueli; Jiang, Huifeng; Bi, Changhao
2016-01-01
Saccharomyces cerevisiae has already been used for heterologous production of fuel chemicals and valuable natural products. The establishment of complicated heterologous biosynthetic pathways in S. cerevisiae has become a research focus of Synthetic Biology and Metabolic Engineering. Thus, simple and efficient techniques for genomic integration of large numbers of transcription units are urgently needed. An efficient DNA assembly and chromosomal integration method was created by combining homologous recombination (HR) in S. cerevisiae with the Golden Gate DNA assembly method, designated the modularized two-step (M2S) technique. Two major assembly steps are performed consecutively to integrate multiple transcription units simultaneously. In Step 1, a modularized scaffold containing a head-to-head promoter module and a pair of terminators was assembled with two genes; thus, two transcription units were assembled into one scaffold in a single Golden Gate reaction. In Step 2, the two transcription units were mixed with modules of selective markers and integration sites and transformed into S. cerevisiae for assembly and integration. In both steps, universal primers were designed for identification of correct clones. Establishment of a functional β-carotene biosynthetic pathway in S. cerevisiae within 5 days demonstrated the high efficiency of this method, and integration of a 10-transcription-unit pathway illustrated its capacity. Modular design of transcription units and integration elements simplified the assembly and integration procedure and eliminated the frequent designing and synthesis of DNA fragments required by previous methods. Also, by assembling most parts in vitro in Step 1, the number of DNA cassettes for homologous integration in Step 2 was significantly reduced. Thus, high assembly efficiency, high integration capacity, and low error rate were achieved.
Method of controlling a variable geometry type turbocharger
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirabayashi, Y.
1988-08-23
This patent describes a method of controlling the supercharging pressure of a variable geometry type turbocharger having a bypass, comprising the following steps which are carried out successively: receiving signals from an engine speed sensor and from an engine knocking sensor; receiving a signal from a throttle valve sensor; judging whether or not an engine is being accelerated, and proceeding to step below if the engine is being accelerated and to step below if the engine is not being accelerated, i.e., if the engine is in a constant speed operation; determining a first correction value and proceeding to step below; judging whether or not the engine is knocking, and proceeding to step (d) if knocking is occurring and to step (f) below if no knocking is occurring; determining a second correction value and proceeding to step; receiving signals from the engine speed sensor and from an airflow meter which measures the quantity of airflow to be supplied to the engine; calculating an airflow rate per engine revolution; determining a duty value according to the calculated airflow rate; transmitting the corrected duty value to control means for controlling the geometry of the variable geometry type turbocharger and the opening of bypass of the turbocharger, thereby controlling the supercharging pressure of the turbocharger.
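The claimed control sequence amounts to a small decision procedure. A deliberately simplified sketch follows; the correction magnitudes, the duty map, and all names are hypothetical, and the patent's exact branching between acceleration and knock handling is condensed:

```python
def turbo_duty(engine_rpm, airflow_gps, accelerating, knocking, duty_map):
    """Compute a corrected duty value for the variable-geometry actuator.

    duty_map: callable mapping airflow-per-revolution to a base duty value.
    """
    correction = 0.0
    if accelerating:
        correction += 0.10   # first correction value (acceleration)
    if knocking:
        correction -= 0.20   # second correction value (knock protection)
    airflow_per_rev = airflow_gps * 60.0 / max(engine_rpm, 1.0)
    duty = duty_map(airflow_per_rev)
    return min(1.0, max(0.0, duty + correction))
```

The duty value is clamped to [0, 1] before being sent to the actuator, mirroring the final "transmitting the corrected duty value to control means" step.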
Comparison of Two Methods of RNA Extraction from Formalin-Fixed Paraffin-Embedded Tissue Specimens
Gouveia, Gisele Rodrigues; Ferreira, Suzete Cleusa; Ferreira, Jerenice Esdras; Siqueira, Sheila Aparecida Coelho; Pereira, Juliana
2014-01-01
The present study aimed to compare two different methods of extracting RNA from formalin-fixed paraffin-embedded (FFPE) specimens of diffuse large B-cell lymphoma (DLBCL). We further aimed to identify possible influences of variables—such as tissue size, duration of paraffin block storage, fixative type, primers used for cDNA synthesis, and endogenous genes tested—on the success of amplification from the samples. Both tested protocols used the same commercial kit for RNA extraction (the RecoverAll Total Nucleic Acid Isolation Optimized for FFPE Samples from Ambion). However, the second protocol included an additional step of washing with saline buffer just after sample rehydration. Following each protocol, we compared the RNA amount and purity and the amplification success as evaluated by standard PCR and real-time PCR. The results revealed that the extra washing step added to the RNA extraction process resulted in significantly improved RNA quantity and quality and improved success of amplification from paraffin-embedded specimens. PMID:25105117
Nevo, Daniel; Zucker, David M; Tamimi, Rulla M; Wang, Molin
2016-12-30
A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps: clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd.
An extended affinity propagation clustering method based on different data density types.
Zhao, XiuLi; Xu, WeiXiang
2015-01-01
The affinity propagation (AP) algorithm, as a novel clustering method, does not require users to specify initial cluster centers in advance; it treats all data points equally as potential exemplars (cluster centers) and forms clusters solely from the similarities among the data points. In many cases, however, a data set contains areas of different density, meaning that the data are not homogeneously distributed. In such situations the AP algorithm cannot group the data points into ideal clusters. In this paper, we propose an extended AP clustering algorithm to deal with this problem. There are two steps in our method: first, the data set is partitioned into several data density types according to the nearest-neighbor distances of the data points; then the AP clustering method is used to group the data points into clusters within each data density type. Two experiments were carried out to evaluate the performance of our algorithm: one uses an artificial data set and the other a real seismic data set. The experimental results show that our algorithm obtains groups more accurately than OPTICS and the original AP clustering algorithm.
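The first step of the extended method, partitioning the data set into density types from nearest-neighbour distances, might look like the following sketch; the quantile-based split rule is an assumption, as the paper's exact partition criterion may differ. Each resulting subset would then be clustered separately by standard AP:

```python
import numpy as np

def density_partition(X, n_types=2):
    """Assign each point a density type from its nearest-neighbour distance."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(d, np.inf)
    nn = d.min(axis=1)                     # nearest-neighbour distance per point
    edges = np.quantile(nn, np.linspace(0.0, 1.0, n_types + 1))
    labels = np.searchsorted(edges, nn, side="right") - 1
    return np.clip(labels, 0, n_types - 1)
```

Points in dense regions have small nearest-neighbour distances and land in the low-index types; sparse regions land in the high-index types.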
Numerical study on flow over stepped spillway using Lagrangian method
NASA Astrophysics Data System (ADS)
Wang, Junmin; Fu, Lei; Xu, Haibo; Jin, Yeechung
2018-02-01
Flow over stepped spillways has been studied for centuries; owing to its instability and cavity characteristics, this type of spillway flow has always been difficult to simulate. Most early studies of flow over stepped spillways were based on experiments, while in recent decades numerical studies have attracted most researchers' attention due to their simplicity and efficiency. In this study, a new Lagrangian-based particle method is introduced to reproduce flow over a stepped spillway; the inherent advantages of this particle-based method yield convincing free-surface and velocity profiles compared with previous experimental data. The capability of the new method is demonstrated, and it is anticipated to become an alternative to traditional mesh-based methods in environmental engineering applications such as the simulation of flow over stepped spillways.
Seismic data interpolation and denoising by learning a tensor tight frame
NASA Astrophysics Data System (ADS)
Liu, Lina; Plonka, Gerlind; Ma, Jianwei
2017-10-01
Seismic data interpolation and denoising play a key role in seismic data processing. These problems can be understood as sparse inverse problems, where the desired data are assumed to be sparsely representable within a suitable dictionary. In this paper, we present a new method based on a data-driven tight frame (DDTF) of Kronecker type (KronTF) that avoids the vectorization step and considers the multidimensional structure of data in a tensor-product way. It takes advantage of the structure contained in all different modes (dimensions) simultaneously. In order to overcome the limitations of a usual tensor-product approach we also incorporate data-driven directionality. The complete method is formulated as a sparsity-promoting minimization problem. It includes two main steps. In the first step, a hard thresholding algorithm is used to update the frame coefficients of the data in the dictionary; in the second step, an iterative alternating method is used to update the tight frame (dictionary) in each different mode. The dictionary that is learned in this way contains the principal components in each mode. Furthermore, we apply the proposed KronTF to seismic interpolation and denoising. Examples with synthetic and real seismic data show that the proposed method achieves better results than the traditional projection onto convex sets method based on the Fourier transform and the previous vectorized DDTF methods. In particular, the simple structure of the new frame construction makes it essentially more efficient.
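The two alternating steps (hard-threshold the coefficients, then update the dictionary) can be sketched for a single mode with an orthonormal dictionary; the Procrustes-style SVD update and all names here are a simplified assumption, not the KronTF algorithm itself:

```python
import numpy as np

def learn_tight_frame(Y, lam, n_iter=10, seed=0):
    """Alternate (1) hard thresholding of frame coefficients and (2) an
    orthonormal dictionary update D = U V^T from the SVD of Y C^T
    (the Procrustes solution minimizing ||Y - D C||_F over orthogonal D)."""
    n = Y.shape[0]
    D = np.linalg.qr(np.random.default_rng(seed).standard_normal((n, n)))[0]
    C = D.T @ Y
    for _ in range(n_iter):
        C = D.T @ Y
        C[np.abs(C) < lam] = 0.0              # step 1: hard thresholding
        U, _, Vt = np.linalg.svd(Y @ C.T)     # step 2: dictionary update
        D = U @ Vt
    return D, C
```

In the Kronecker setting this update would be run per mode of the data tensor rather than on a single flattened matrix.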
NASA Astrophysics Data System (ADS)
Pandey, Praveen K.; Sharma, Kriti; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.
2003-11-01
CdTe quantum dots embedded in glass matrix are grown using two-step annealing method. The results for the optical transmission characterization are analysed and compared with the results obtained from CdTe quantum dots grown using conventional single-step annealing method. A theoretical model for the absorption spectra is used to quantitatively estimate the size dispersion in the two cases. In the present work, it is established that the quantum dots grown using two-step annealing method have stronger quantum confinement, reduced size dispersion and higher volume ratio as compared to the single-step annealed samples.
A two-step method for rapid characterization of electroosmotic flows in capillary electrophoresis.
Zhang, Wenjing; He, Muyi; Yuan, Tao; Xu, Wei
2017-12-01
The measurement of electroosmotic flow (EOF) is important in a capillary electrophoresis (CE) experiment in terms of performance optimization and stability improvement. Although several methods exist, there are demanding needs to accurately characterize ultra-low electroosmotic flow rates (EOF rates), such as in coated capillaries used in protein separations. In this work, a new method, called the two-step method, was developed to accurately and rapidly measure EOF rates in a capillary, especially the ultra-low EOF rates in coated capillaries. In this two-step method, the EOF rates were calculated by measuring the migration time difference of a neutral marker in two consecutive experiments, in which a pressure-driven flow was introduced to accelerate the migration and the DC voltage was reversed to switch the EOF direction. Uncoated capillaries were first characterized by both this two-step method and a conventional method to confirm the validity of the new method. Then the new method was applied in the study of coated capillaries. Results show that this new method is not only faster but also more accurate. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
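Under the scheme described, the neutral marker's apparent velocity in the two runs is v_p + v_EOF and v_p − v_EOF (the pressure-driven component plus or minus the EOF after the voltage reversal), so the EOF rate follows directly from the two migration times. The function and variable names are illustrative:

```python
def eof_velocity(capillary_length_m, t_with_s, t_against_s):
    """EOF velocity from two migration times of a neutral marker:
    run 1 has pressure-driven flow and EOF co-directional, run 2
    (reversed DC voltage) has them opposed; halving the velocity
    difference cancels the pressure-driven component."""
    v_with = capillary_length_m / t_with_s        # v_p + v_eof
    v_against = capillary_length_m / t_against_s  # v_p - v_eof
    return 0.5 * (v_with - v_against)
```

Because the pressure-driven term cancels exactly, even an EOF much slower than the pressure-driven flow is measurable, which is what makes the scheme suitable for coated capillaries.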
Read Two Impress: An Intervention for Disfluent Readers
ERIC Educational Resources Information Center
Young, Chase; Rasinski, Timothy; Mohr, Kathleen A. J.
2016-01-01
The authors describe a research-based method to increase students' reading fluency. The method is called Read Two Impress, which is derived from the Neurological Impress Method and the method of repeated readings. The authors provide step-by-step procedures to effectively implement the reading fluency intervention. Previous research indicates that…
Saito, Maiko; Kurosawa, Yae; Okuyama, Tsuneo
2012-02-01
Antibody purification using proteins A and G has been a standard method for research and industrial processes. The conventional method, however, includes a three-step process, including buffer exchange, before chromatography. In addition, proteins A and G require low-pH elution, which causes antibody aggregation and loss of antibody activity. This report proposes a two-step method using hydroxyapatite chromatography and membrane filtration, without proteins A and G. This novel method shortens the running time per cycle to one-third that of the conventional method. Using our two-step method, 90.2% of the monoclonal antibodies purified were recovered in the elution fraction, the purity achieved was >90%, and most of the antigen-specific activity was retained. This report suggests that the two-step method using hydroxyapatite chromatography and membrane filtration should be considered as an alternative to purification using proteins A and G.
NASA Astrophysics Data System (ADS)
Singh, R. A.; Satyanarayana, N.; Kustandi, T. S.; Sinha, S. K.
2011-01-01
Micro/nano-electro-mechanical-systems (MEMS/NEMS) are miniaturized devices built at micro/nanoscales. At these scales, the surface/interfacial forces are extremely strong and they adversely affect the smooth operation and the useful operating lifetimes of such devices. When these forces manifest in severe forms, they lead to material removal and thereby reduce the wear durability of the devices. In this paper, we present a simple, yet robust, two-step surface modification method to significantly enhance the tribological performance of MEMS/NEMS materials. The two-step method involves oxygen plasma treatment of polymeric films and the application of a nanolubricant, namely perfluoropolyether. We apply the two-step method to the two most important MEMS/NEMS structural materials, namely silicon and SU8 polymer. On applying surface modification to these materials, their initial coefficient of friction reduces by ~4-7 times and the steady-state coefficient of friction reduces by ~2.5-3.5 times. Simultaneously, the wear durability of both the materials increases by >1000 times. The two-step method is time effective as each step takes approximately 1 min. It is also cost effective as the oxygen plasma treatment is a part of the MEMS/NEMS fabrication process. The two-step method can be readily and easily integrated into MEMS/NEMS fabrication processes. It is anticipated that this method will work for any kind of structural material from which MEMS/NEMS are or can be made.
Refined BCF-type boundary conditions for mesoscale surface step dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Renjie; Ackerman, David M.; Evans, James W.
Deposition on a vicinal surface with alternating rough and smooth steps is described by a solid-on-solid model with anisotropic interactions. Kinetic Monte Carlo (KMC) simulations of the model reveal step pairing in the absence of any additional step attachment barriers. We explore the description of this behavior within an analytic Burton-Cabrera-Frank (BCF)-type step dynamics treatment. Without attachment barriers, conventional kinetic coefficients for the rough and smooth steps are identical, as are the predicted step velocities for a vicinal surface with equal terrace widths. However, we determine refined kinetic coefficients from a two-dimensional discrete deposition-diffusion equation formalism which accounts for step structure. These coefficients are generally higher for rough steps than for smooth steps, reflecting a higher propensity for capture of diffusing terrace adatoms due to a higher kink density. Such refined coefficients also depend on the local environment of the step and can even become negative (corresponding to net detachment despite an excess adatom density) for a smooth step in close proximity to a rough step. Incorporation of these refined kinetic coefficients into a BCF-type step dynamics treatment recovers quantitatively the mesoscale step-pairing behavior observed in the KMC simulations.
Sharma, Manuj; Petersen, Irene; Nazareth, Irwin; Coton, Sonia J
2016-01-01
Research into diabetes mellitus (DM) often requires a reproducible method for identifying and distinguishing individuals with type 1 DM (T1DM) and type 2 DM (T2DM). Our objective was to develop a method to identify individuals with T1DM and T2DM using UK primary care electronic health records. Using data from The Health Improvement Network primary care database, we developed a two-step algorithm. The first algorithm step identified individuals with potential T1DM or T2DM based on diagnostic records, treatment, and clinical test results. We excluded individuals with records for rarer DM subtypes only. For individuals to be considered diabetic, they needed to have at least two records indicative of DM, one of which was required to be a diagnostic record. We then classified individuals with T1DM and T2DM using the second algorithm step. A combination of diagnostic codes, medication prescribed, age at diagnosis, and whether the case was incident or prevalent was used in this process. We internally validated this classification algorithm through comparison against an independent clinical examination of The Health Improvement Network electronic health records for a random sample of 500 DM individuals. Out of 9,161,866 individuals aged 0-99 years from 2000 to 2014, we classified 37,693 individuals with T1DM and 418,433 with T2DM, while 1,792 individuals remained unclassified. A small proportion were classified with some uncertainty (1,155 [3.1%] of all individuals with T1DM and 6,139 [1.5%] with T2DM) due to unclear health records. During validation, manual assignment of DM type based on clinical assessment of the entire electronic record and algorithmic assignment led to equivalent classification in all instances. The majority of individuals with T1DM and T2DM can be readily identified from UK primary care electronic health records. Our approach can be adapted for use in other health care settings.
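The two algorithm steps can be summarized as a toy rule-based classifier. The record schema, code strings, age threshold, and fallback rule below are hypothetical simplifications of the published algorithm, for illustration only:

```python
def classify_dm(records):
    """Step 1: require >= 2 DM-indicative records, one being a diagnosis.
    Step 2: assign T1DM/T2DM from diagnosis codes, else from treatment
    and age at first diagnosis (hypothetical fallback rule)."""
    dm = [r for r in records if r.get("dm_related")]
    diagnoses = [r for r in dm if r["kind"] == "diagnosis"]
    if len(dm) < 2 or not diagnoses:
        return None                                  # not considered diabetic
    if any("T1" in r.get("code", "") for r in diagnoses):
        return "T1DM"
    if any("T2" in r.get("code", "") for r in diagnoses):
        return "T2DM"
    first_age = min(r["age"] for r in diagnoses)
    scripts = [r for r in dm if r["kind"] == "prescription"]
    insulin_only = bool(scripts) and all(r.get("drug") == "insulin" for r in scripts)
    return "T1DM" if insulin_only and first_age < 35 else "T2DM"
```

Records that fail step 1 return None, mirroring the small unclassified group reported in the abstract.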
Compound image segmentation of published biomedical figures.
Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit
2018-04-01
Images convey essential information in biomedical publications. As such, there is a growing interest within the bio-curation and bio-database communities to store images from publications as evidence for biomedical processes and for experimental results. However, many of the images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into constituent panels is an essential first step toward utilizing them. In this article, we develop a new compound image segmentation system, FigSplit, which is based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available for use at: https://www.eecis.udel.edu/~compbio/FigSplit. The code is available upon request. shatkay@udel.edu. Supplementary data are available online at Bioinformatics.
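FigSplit itself is built on Connected Component Analysis; purely to illustrate the panel-segmentation idea, here is a much cruder alternative that cuts a compound figure at fully-white gutters along one axis (threshold values and names are assumptions):

```python
import numpy as np

def split_at_gutters(img, axis=0, white_level=0.95, blank_frac=0.99):
    """Return (start, end) index ranges of panels along `axis`, where a
    row/column counts as a gutter if nearly all its pixels are white.
    `img` holds grayscale intensities in [0, 1]."""
    is_gutter = (img > white_level).mean(axis=1 - axis) >= blank_frac
    panels, start = [], None
    for i, g in enumerate(is_gutter):
        if not g and start is None:
            start = i                      # panel begins
        elif g and start is not None:
            panels.append((start, i))      # panel ends at a gutter
            start = None
    if start is not None:
        panels.append((start, len(is_gutter)))
    return panels
```

Running the split once per axis, recursively, would handle simple grid layouts; irregular layouts are exactly where a connected-component approach like FigSplit's is needed.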
Chokeshai-u-saha, Kaj; Buranapraditkun, Supranee; Jacquet, Alain; Nguyen, Catherine; Ruxrungtham, Kiat
2012-09-01
To study the role of human naïve B cells in antigen presentation and stimulation of naïve CD4+ T cells, a suitable method to reproducibly isolate sufficient naïve B cells is required. To improve the purity of isolated naïve B cells obtained from a conventional one-step magnetic bead method, we added a rosetting step to enrich total B cell isolates from human whole blood samples prior to negative cell sorting by magnetic beads. The acquired naïve B cells were analyzed for phenotypes and for their role in Staphylococcal enterotoxin B (SEB) presentation to naïve CD4+ T cells. The mean (SD) naïve B cell (CD19+/CD27-) purity obtained from this two-step method compared with the one-step method was 97% (1.0) versus 90% (1.2), respectively. This two-step method can be used with a sample of whole blood as small as 10 ml. The isolated naïve B cells were phenotypically at a resting state and were able to prime naïve CD4+ T cell activation by Staphylococcal enterotoxin B (SEB) presentation. This two-step, non-flow-cytometry-based approach improved the purity of isolated naïve B cells compared with the conventional one-step magnetic bead method. It also worked well with a small blood volume. In addition, this study showed that the isolated naïve B cells can present the superantigen SEB to activate naïve CD4+ T cells. These methods may thus be useful for further in vitro characterization of human naïve B cells and their roles as antigen-presenting cells in various diseases.
1990-08-01
the guidance in this report. 1-4. Scope This guidance covers selection of projects suitable for a One-Step or Two-Step approach, development of design...conducted, focus on resolving proposal deficiencies; prices are not "negotiated" in the common use of the term. A Request for Proposal (RFP) states project ...carefully examines experience and past performance in the design of similar projects and building types. Quality of
Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean
2014-01-01
MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners to predict the distribution of a species from a set of records and environmental predictors. However, the species occurrence datasets used to train the model are often geographically biased because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensus guideline for accounting for it. Here, we compared the performance of five bias correction methods on three datasets of species occurrence: one "virtual" dataset derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling bias corresponding to potential types of empirical bias. We applied the five correction methods to the biased samples and compared the outputs of distribution models against unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of the methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, simple systematic sampling of records consistently ranked among the best performers across the range of conditions tested, whereas the other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research toward a step-by-step guideline for accounting for sampling bias. In the meantime, systematic sampling seems to be the most efficient correction method and should be advised in most cases.
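The "systematic sampling of records" that performed best can be approximated by spatial thinning on a regular grid, keeping one record per occupied cell. This is a generic sketch, not the paper's exact procedure; the grid resolution and record format are illustrative assumptions.

```python
import random

def systematic_sample(records, cell_size=1.0, seed=0):
    """Keep one occurrence record per spatial grid cell.

    records: list of (lon, lat) tuples; cell_size in the same units
    (e.g. decimal degrees).
    """
    rng = random.Random(seed)
    cells = {}
    for lon, lat in records:
        key = (int(lon // cell_size), int(lat // cell_size))
        cells.setdefault(key, []).append((lon, lat))
    # one randomly chosen record per occupied cell
    return [rng.choice(group) for group in cells.values()]

occ = [(0.1, 0.2), (0.3, 0.4), (0.2, 0.9), (5.5, 5.5)]  # 3 records share cell (0, 0)
thinned = systematic_sample(occ, cell_size=1.0)
print(len(thinned))  # 2 occupied cells -> 2 records
```

Thinning this way evens out sampling effort before the records are passed to MAXENT for model training.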
Ouyang, Hui; Li, Junmao; Wu, Bei; Zhang, Xiaoyong; Li, Yan; Yang, Shilin; He, Mingzhen; Feng, Yulin
2017-06-16
The chlorogenic acids are the major bioactive constituents of the whole plant of Ainsliaea fragrans Champ. (Xingxiang Tuerfeng). These compounds are usually present as isomers, so an efficient approach is needed for the rapid discovery and identification of chlorogenic acid isomers through their fragmentation pathways and rules. In this study, the collision-induced dissociation tandem mass spectrometry (CID-MS/MS) fragmentation routes of chlorogenic acids were systematically investigated by UHPLC-QTOF-MS/MS in the negative ion mode using eight chlorogenic acid standards. Diagnostic product ions for the rapid discovery and classification of chlorogenic acid isomers were determined according to their MS/MS fragmentation patterns and intensity analysis. Based on these findings, a novel two-step data mining strategy was established. The first key step screens for the different kinds of substitution and the skeleton of the quinic acid using characteristic product ions and neutral losses. The second key step screens and classifies the different types of chlorogenic acids using their diagnostic product ions. The strategy was applied to the rapid investigation, classification, and identification of chlorogenic acids, and compounds with the same carbon skeletons were effectively identified from a complex extract of Ainsliaea fragrans Champ. In total, 88 constituents, covering 14 chlorogenic acid types, were rapidly discovered and identified; in particular, 12 types of chlorogenic acids, including p-CoQC, FQA, BQC, CQA-Glu, CFQA, p-Co-CQC, di-p-CoQC, BCQA, di-CQA-Glu, PCQA, tri-QCA, and P-di-CQA, were discovered in Ainsliaea fragrans Champ. for the first time. In conclusion, the UHPLC-QTOF-MS/MS method, together with the systematic two-step data mining strategy, was established as a feasible, effective, and rational technique for analyzing chlorogenic acids.
Additionally, this study laid a foundation for the study of the active substances and quality control of Ainsliaea fragrans Champ. Copyright © 2017 Elsevier B.V. All rights reserved.
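The second screening step, matching diagnostic product ions against each spectrum, can be sketched generically. The m/z values and the single class rule below are illustrative stand-ins (191.056 and 179.035 are commonly cited markers for caffeoylquinic acids in negative-ion mode), not the paper's actual diagnostic table.

```python
def has_ion(peaks, target_mz, tol=0.01):
    """True if any observed fragment m/z matches the target within tolerance."""
    return any(abs(mz - target_mz) < tol for mz, _ in peaks)

def classify(peaks, diagnostic):
    """Return the chlorogenic-acid classes whose diagnostic ions are all present.

    diagnostic: {class_name: [m/z, ...]} for negative-ion-mode fragments.
    """
    return [name for name, ions in diagnostic.items()
            if all(has_ion(peaks, mz) for mz in ions)]

# Illustrative rule: quinic acid [M-H]- plus a caffeic acid fragment.
rules = {"caffeoylquinic acid": [191.056, 179.035]}
spectrum = [(353.088, 100.0), (191.056, 80.0), (179.035, 35.0)]  # (m/z, intensity)
print(classify(spectrum, rules))  # ['caffeoylquinic acid']
```

In practice the rule table would hold one entry per chlorogenic acid type, with the first screening step (neutral losses, skeleton ions) applied before this classification.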
Rashed-Ul Islam, S M; Jahan, Munira; Tabassum, Shahina
2015-01-01
Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infections. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) method vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples, consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals, were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected in 81% of samples by the one-step PCR method, with a median HBV DNA viral load (VL) of 7.50 × 10³ IU/ml. In contrast, 72% of samples were detected by the two-step PCR system, with a median HBV DNA of 3.71 × 10³ IU/ml. The one-step method showed strong linear correlation with the two-step PCR method (r = 0.89; p < 0.0001). Both methods showed good agreement on a Bland-Altman plot, with a mean difference of 0.61 log10 IU/ml and limits of agreement of -1.82 to 3.03 log10 IU/ml. The intra-assay and inter-assay coefficients of variation (CV%) of plasma samples (4-7 log10 IU/ml) for the one-step PCR method ranged from 0.33 to 0.59 and from 0.28 to 0.48, respectively, demonstrating a high level of concordance between the two methods. Moreover, elimination of the DNA extraction step in the one-step PCR kit allowed time-efficient and significant labor and cost savings for the quantification of HBV DNA in a resource-limited setting. Rashed-Ul Islam SM, Jahan M, Tabassum S. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting. Euroasian J Hepato-Gastroenterol 2015;5(1):11-15.
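The Bland-Altman statistics quoted above (mean difference and 95% limits of agreement) can be computed in a few lines; the paired log10 viral loads below are hypothetical values for illustration only.

```python
import math

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between paired measurements.

    a, b: paired log10 viral loads from the two assays.
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))  # sample SD
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

# Hypothetical paired log10 IU/ml values for illustration.
one_step = [4.1, 5.0, 6.2, 3.9, 7.1]
two_step = [3.8, 4.6, 5.9, 3.7, 6.5]
mean_diff, (lo, hi) = bland_altman(one_step, two_step)
print(round(mean_diff, 2))  # 0.36
```

A mean difference near zero with narrow limits of agreement indicates the two assays can be used interchangeably.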
Magnetically suspended stepping motors for clean room and vacuum environments
NASA Technical Reports Server (NTRS)
Higuchi, Toshiro
1994-01-01
To answer the growing need for super-clean or contact-free actuators for use in clean rooms, vacuum chambers, and space, innovative actuators that combine the functions of stepping motors and magnetic bearings in one body were developed. The rotor of the magnetically suspended stepping motor is suspended like a magnetic bearing and rotated and positioned like a stepping motor. An important trait of the motor is that it is not a simple mixture or combination of a stepping motor and a conventional magnetic bearing, but an amalgam of the two. Owing to optimized design and feedback control, a toothed stator and rotor are all that is needed structurally for stable suspension. More than ten types of motors, such as a linear type, a high-accuracy rotary type, a two-dimensional type, and a high-vacuum type, were built and tested. This paper describes the structure and design of these motors and their performance in such applications as a precise-positioning rotary table, a linear conveyor system, and a theta-zeta positioner for clean room and high vacuum use.
Lucas, J.N.; Straume, T.; Bogen, K.T.
1998-03-24
A method is provided for detecting nucleic acid sequence aberrations using two immobilization steps. According to the method, a nucleic acid sequence aberration is detected by detecting nucleic acid sequences having both a first nucleic acid sequence type (e.g., from a first chromosome) and a second nucleic acid sequence type (e.g., from a second chromosome), the presence of the first and the second nucleic acid sequence type on the same nucleic acid sequence indicating the presence of a nucleic acid sequence aberration. In the method, immobilization of a first hybridization probe is used to isolate a first set of nucleic acids in the sample which contain the first nucleic acid sequence type. Immobilization of a second hybridization probe is then used to isolate a second set of nucleic acids from within the first set of nucleic acids which contain the second nucleic acid sequence type. The second set of nucleic acids are then detected, their presence indicating the presence of a nucleic acid sequence aberration. 14 figs.
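Logically, the two sequential immobilization steps act as two containment filters over the sample: molecules carrying probe target A are isolated first, then re-selected for probe target B. A toy sketch of that selection logic (the sequences, probe strings, and function name are invented for illustration):

```python
def two_step_select(sample, probe_a, probe_b):
    """Model two sequential immobilization steps as containment filters.

    sample: iterable of sequence strings; probe_a/probe_b: subsequences whose
    joint presence on one molecule flags an aberration (e.g. a translocation).
    """
    first = [s for s in sample if probe_a in s]   # step 1: immobilize on probe A
    return [s for s in first if probe_b in s]     # step 2: re-select on probe B

# Toy sequences: 'AAA' stands in for chromosome-1 material, 'GGG' for chromosome-2.
pool = ["AAATTT", "GGGCCC", "AAAGGG", "TTTCCC"]
print(two_step_select(pool, "AAA", "GGG"))  # ['AAAGGG']
```

Only the molecule carrying both sequence types survives both filters, which is exactly the aberration signal the patent's detection step looks for.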
Kinematic Structural Modelling in Bayesian Networks
NASA Astrophysics Data System (ADS)
Schaaf, Alexander; de la Varga, Miguel; Florian Wellmann, J.
2017-04-01
We commonly capture our knowledge about the spatial distribution of distinct geological lithologies in the form of 3-D geological models. Several methods exist to create these models, each with its own strengths and limitations. We present here an approach to combine the functionalities of two modelling approaches - implicit interpolation and kinematic modelling methods - into one framework, while explicitly considering parameter uncertainties and thus model uncertainty. In recent work, we proposed an approach to implement implicit modelling algorithms into Bayesian networks. This was done to address the issues of input data uncertainty and the integration of geological information from varying sources in the form of geological likelihood functions. However, one general shortcoming of implicit methods is that they usually do not take any physical constraints into consideration, which can result in unrealistic model outcomes and artifacts. Kinematic structural modelling, on the other hand, intends to reconstruct the history of a geological system based on physically driven kinematic events. This type of modelling incorporates simplified physical laws into the model, at the cost of a substantial increase in the number of uncertain parameters. In the work presented here, we show an integration of these two different modelling methodologies, taking advantage of the strengths of both. First, we treat the two types of models separately, capturing the information contained in the kinematic models and their specific parameters in the form of likelihood functions, in order to use them in the implicit modelling scheme. We then go further and combine the two modelling approaches into one single Bayesian network. This enables the direct flow of information between the parameters of the kinematic modelling step and the implicit modelling step, and links the exclusive input data and likelihoods of the two different modelling algorithms into one probabilistic inference framework.
In addition, we use the capabilities of Noddy to analyze the topology of structural models to demonstrate how topological information, such as the connectivity of two layers across an unconformity, can be used as a likelihood function. In an application to a synthetic case study, we show that our approach leads to a successful combination of the two different modelling concepts. Specifically, we show that we derive ensemble realizations of implicit models that now incorporate the knowledge of the kinematic aspects, representing an important step forward in the integration of knowledge and a corresponding estimation of uncertainties in structural geological models.
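As a highly simplified, hypothetical stand-in for the Bayesian network described above, the sketch below samples a single uncertain parameter whose posterior multiplies a prior with an "implicit" and a "kinematic" likelihood term. All distributions and numbers are invented; the point is only how two likelihood sources combine in one inference.

```python
import math
import random

def metropolis(logpost, x0, steps=2000, scale=0.5, seed=1):
    """Random-walk Metropolis sampler over one uncertain model parameter."""
    rng = random.Random(seed)
    x, trace = x0, []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        # accept with probability min(1, posterior ratio)
        if math.exp(min(0.0, logpost(prop) - logpost(x))) > rng.random():
            x = prop
        trace.append(x)
    return trace

def logpost(x):
    """Toy posterior for a fault-dip-like parameter (all Gaussian, all invented)."""
    prior = -0.5 * ((x - 60.0) / 10.0) ** 2      # prior belief: ~60 degrees
    implicit = -0.5 * ((x - 55.0) / 5.0) ** 2    # "implicit model" likelihood
    kinematic = -0.5 * ((x - 50.0) / 5.0) ** 2   # "kinematic model" likelihood
    return prior + implicit + kinematic

trace = metropolis(logpost, 60.0)
print(round(sum(trace[500:]) / len(trace[500:]), 1))  # posterior mean near 53
```

The analytic posterior mean here is about 53.3, pulled between the two likelihood sources; the ensemble of samples plays the role of the model realizations discussed in the abstract.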
Ishizaki, Azusa; Ishii, Keizo; Kanematsu, Nobuyuki; Kanai, Tatsuaki; Yonai, Shunsuke; Kase, Yuki; Takei, Yuka; Komori, Masataka
2009-06-01
Passive irradiation methods deliver an extra dose to normal tissues upstream of the target tumor, while in dynamic irradiation methods, interplay effects between dynamic beam delivery and respiration-induced target motion distort the dose distributions. To solve the problems of these two irradiation methods, the authors have developed a new method that laterally modulates the spread-out Bragg peak (SOBP) width. By reducing scanning in the depth direction, they expect to reduce the interplay effects. They have examined this new irradiation method experimentally. In this system, they used a cone-type filter consisting of 400 cones in a 20-by-20 grid. Five kinds of cones with different SOBP widths were arranged on the frame two-dimensionally to realize lateral SOBP modulation. To reduce the number of steps in the cones, they used a wheel-type filter to make minipeaks. The scanning intensity was modulated for each SOBP width with a pair of scanning magnets. In this experiment, a stepwise dose distribution and a spherical dose distribution 60 mm in diameter were formed. The nonflatness of the stepwise dose distribution was 5.7% and that of the spherical dose distribution was 3.8%. A 2 mm misalignment of the cone-type filter resulted in a nonflatness of more than 5%. Lateral SOBP modulation with a cone-type filter and a scanned carbon ion beam successfully formed a conformal dose distribution with a nonflatness of 3.8% for the spherical case. The cone-type filter had to be set to within 1 mm accuracy to maintain nonflatness within 5%. This method will be useful for treating targets that move during breathing and targets in proximity to critical organs.
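The abstract reports nonflatness percentages without defining the metric. One common flatness definition, used here purely as an assumption, is (max - min)/(max + min) over the dose profile:

```python
def nonflatness(doses):
    """Percent nonflatness of a dose profile: (max - min) / (max + min) * 100.

    This is a common flatness metric; the paper's exact formula is not given
    in the abstract, so treat this definition as an assumption.
    """
    hi, lo = max(doses), min(doses)
    return 100.0 * (hi - lo) / (hi + lo)

profile = [96.0, 100.0, 98.5, 97.0, 99.0]  # relative dose samples across the field
print(round(nonflatness(profile), 2))  # 2.04
```

By this measure, the reported 3.8% for the spherical distribution corresponds to a roughly 7.5% peak-to-valley spread relative to the mean dose level.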
Effect of handpiece maintenance method on bond strength.
Roberts, Howard W; Vandewalle, Kraig S; Charlton, David G; Leonard, Daniel L
2005-01-01
This study evaluated the effect of dental handpiece lubricant on the shear bond strength of three bonding agents to dentin. A lubrication-free handpiece (one that does not require the user to lubricate it) and a handpiece requiring routine lubrication were used in the study. In addition, two different handpiece lubrication methods (automated versus manual application) were investigated. One hundred and eighty extracted human teeth were ground to expose flat dentin surfaces, which were then finished with wet silicon carbide paper. The teeth were randomly divided into 18 groups (n=10). The dentin surface of each specimen was exposed for 30 seconds to water spray from either a lubrication-free handpiece or a lubricated handpiece. Prior to exposure, various lubrication regimens were used on the handpieces that required lubrication. The dentin surfaces were then treated with a total-etch two-step, a self-etch two-step, or a self-etch one-step bonding agent. Resin composite cylinders were bonded to dentin, and the specimens were then thermocycled and tested to failure in shear at seven days. Mean bond strength data were analyzed using Dunnett's multiple comparison test at a 0.05 level of significance. Results indicated that, within each bonding agent, there were no significant differences in bond strength between the control group and the treatment groups, regardless of the type of handpiece or the use of routine lubrication.
Automatic initial and final segmentation in cleft palate speech of Mandarin speakers
He, Ling; Liu, Yin; Yin, Heng; Zhang, Junpeng; Zhang, Jing; Zhang, Jiang
2017-01-01
The speech unit segmentation is an important pre-processing step in the analysis of cleft palate speech. In Mandarin, one syllable is composed of two parts: initial and final. In cleft palate speech, resonance disorders occur at the finals and the voiced initials, while articulation disorders occur at the unvoiced initials. Thus, the initials and finals are the minimum speech units that reflect the characteristics of cleft palate speech disorders. In this work, an automatic initial/final segmentation method is proposed as an important preprocessing step in cleft palate speech signal processing. The tested cleft palate speech utterances were collected from the Cleft Palate Speech Treatment Center in the Hospital of Stomatology, Sichuan University, which treats the largest number of cleft palate patients in China. The cleft palate speech data include 824 speech segments, and the control samples contain 228 speech segments. First, syllables are extracted from the speech utterances. The proposed syllable extraction method avoids a training stage and achieves good performance for both voiced and unvoiced speech. The syllables are then classified as having "quasi-unvoiced" or "quasi-voiced" initials, and respective initial/final segmentation methods are proposed for these two types of syllables. Moreover, a two-step segmentation method is proposed: the rough locations of syllable and initial/final boundaries are refined in the second segmentation step, in order to improve the robustness of the segmentation accuracy. The experiments show that the initial/final segmentation accuracies for syllables with quasi-unvoiced initials are higher than for those with quasi-voiced initials. For the cleft palate speech, the mean time error is 4.4 ms for syllables with quasi-unvoiced initials and 25.7 ms for syllables with quasi-voiced initials, and the correct segmentation accuracy P30 for all syllables is 91.69%. For the control samples, P30 for all syllables is 91.24%.
PMID:28926572
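The quasi-unvoiced/quasi-voiced split is typically driven by signal features such as the zero-crossing rate (unvoiced speech is noise-like and crosses zero often). The classifier and threshold below are illustrative, not the paper's method:

```python
import math
import random

def zcr(frame):
    """Zero-crossing rate of one frame (sign changes per sample pair)."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def is_quasi_unvoiced(frame, zcr_thresh=0.25):
    """Crude classifier: unvoiced initials are noise-like, hence high ZCR."""
    return zcr(frame) > zcr_thresh

fs = 8000
voiced = [math.sin(2 * math.pi * 150 * n / fs) for n in range(400)]  # 150 Hz tone
rng = random.Random(0)
noisy = [rng.uniform(-1.0, 1.0) for _ in range(400)]                 # noise-like
print(is_quasi_unvoiced(voiced), is_quasi_unvoiced(noisy))  # False True
```

A real front end would combine ZCR with short-time energy and smooth the decision over frames before locating the initial/final boundary.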
NASA Astrophysics Data System (ADS)
Aizimu, Tuerxun; Adachi, Makoto; Nakano, Kazuya; Ohnishi, Takashi; Nakaguchi, Toshiya; Takahashi, Nozomi; Nakada, Taka-aki; Oda, Shigeto; Haneishi, Hideaki
2018-02-01
Near-infrared spectroscopy (NIRS) is a noninvasive method for monitoring tissue oxygen saturation (StO2). Many commercial NIRS devices are available, but their precision is relatively poor because they use a reflectance model, with which it is difficult to account for blood volume and the other unchanging components of the tissue. The webbing of the hand is thin and therefore suitable for measuring spectral transmittance. In this paper, we present a method for measuring StO2 of the hand webbing from transmissive continuous-wave near-infrared spectroscopy (CW-NIRS) data. The method is based on the modified Beer-Lambert law (MBL) and consists of two steps. In the first step, we apply pressure upstream of the measurement point to perturb the concentrations of deoxy- and oxy-hemoglobin while leaving the other components unchanged, and measure the spectral signals. From the measured data, the spectral absorbance due to the components other than hemoglobin is calculated. In the second step, a spectral measurement is performed at an arbitrary time instance and the spectral absorbance obtained in step 1 is subtracted from the measured absorbance. StO2 is then estimated from the remaining data. The method was evaluated with an arterial occlusion test (AOT) and a venous occlusion test (VOT). In the evaluation experiment, we confirmed that reasonable values of StO2 were obtained by the proposed method.
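The estimation in step 2 reduces to solving a small linear system: background-subtracted absorbance equals extinction coefficients times hemoglobin concentrations. The two-wavelength sketch below uses invented extinction coefficients and absorbances; path length is absorbed into the units.

```python
def sto2_from_absorbance(A, eps):
    """Solve A = eps @ c for c = (c_HbO2, c_Hb); StO2 = c_HbO2 / (c_HbO2 + c_Hb).

    A: absorbances at two wavelengths, with the non-hemoglobin background
    already subtracted (step 2 of the method); eps: 2x2 extinction matrix.
    """
    (a, b), (c, d) = eps
    det = a * d - b * c
    c_oxy = (d * A[0] - b * A[1]) / det     # Cramer's rule for the 2x2 system
    c_deoxy = (-c * A[0] + a * A[1]) / det
    return c_oxy / (c_oxy + c_deoxy)

eps = [[1.0, 3.0],   # wavelength 1: [eps_HbO2, eps_Hb] (illustrative values)
       [2.5, 0.8]]   # wavelength 2
true_c = (0.7, 0.3)  # 70% saturation, used to synthesize the "measurement"
A = [eps[0][0] * true_c[0] + eps[0][1] * true_c[1],
     eps[1][0] * true_c[0] + eps[1][1] * true_c[1]]
print(round(sto2_from_absorbance(A, eps), 2))  # 0.7
```

A real CW-NIRS device would use tabulated extinction spectra over many wavelengths and solve the overdetermined system by least squares.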
Bimetallic iron and cobalt incorporated MFI/MCM-41 composite and its catalytic properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Baoshan, E-mail: bsli@mail.buct.edu.cn; Xu, Junqing; Li, Xiao
2012-05-15
Graphical abstract: The formation of the FeCo-MFI/MCM-41 composite is based on two steps: the first step synthesizes MFI-type proto-zeolite units under hydrothermal conditions; the second step assembles these zeolite fragments, together with new silica and a heteroatom source, on CTAB surfactant micelles to form the mesoporous product with hexagonal structure. Highlights: A bimetallic iron- and cobalt-incorporated MFI/MCM-41 composite was prepared using a templating method. The FeCo-MFI/MCM-41 composite simultaneously possessed both meso- and micro-porous structures. Iron and cobalt ions were incorporated into the silica framework with tetrahedral coordination. Abstract: The MFI/MCM-41 composite material with bimetallic Fe and Co incorporation was prepared using a templating method via a two-step hydrothermal crystallization procedure. The obtained products were characterized by a series of techniques including powder X-ray diffraction, N2 sorption, transmission electron microscopy, scanning electron microscopy, H2 temperature-programmed reduction, thermal analyses, and X-ray absorption fine structure spectroscopy at the Fe and Co K-edges. The catalytic properties of the products were investigated by residual oil hydrocracking reactions. Characterization results showed that the FeCo-MFI/MCM-41 composite simultaneously possessed two kinds of stable meso- and micro-porous structures. Iron and cobalt ions were incorporated into the silicon framework, which was confirmed by H2 temperature-programmed reduction and X-ray absorption fine structure spectroscopy. This composite presented excellent activity in hydrocracking of residual oil, superior to that of the pure silicalite-1/MCM-41 materials.
Spectral embedding finds meaningful (relevant) structure in image and microarray data
Higgs, Brandon W; Weller, Jennifer; Solka, Jeffrey L
2006-01-01
Background: Accurate methods for extracting meaningful patterns in high dimensional data have become increasingly important with the recent generation of data types containing measurements across thousands of variables. Principal components analysis (PCA) is a linear dimensionality reduction (DR) method that is unsupervised in that it relies only on the data; projections are calculated in Euclidean or a similar linear space and do not use tuning parameters for optimizing the fit to the data. However, relationships within sets of nonlinear data types, such as biological networks or images, are frequently mis-rendered into a low dimensional space by linear methods. Nonlinear methods, in contrast, attempt to model important aspects of the underlying data structure, often requiring parameters to be fitted to the data type of interest. In many cases, the optimal parameter values vary when different classification algorithms are applied to the same rendered subspace, making the results of such methods highly dependent upon the type of classifier implemented. Results: We present the results of applying the spectral method of Lafon, a nonlinear DR method based on the weighted graph Laplacian that minimizes the requirements for such parameter optimization, to two biological data types. We demonstrate that it is successful in determining implicit ordering of brain slice image data and in classifying separate species in microarray data, as compared to two conventional linear methods and three nonlinear methods (one of which is an alternative spectral method). This spectral implementation is shown to provide more meaningful information, by preserving important relationships, than the methods of DR presented for comparison. Tuning parameter fitting is simple and general, rather than data-type- or experiment-specific, for the two datasets analyzed here.
Minimizing tuning parameter optimization in the DR step for each subsequent classification method enables valid cross-experiment comparisons. Conclusion: Results from the spectral method presented here exhibit the desirable properties of preserving meaningful nonlinear relationships in lower dimensional space and requiring minimal parameter fitting, providing a useful algorithm for purposes of visualization and classification across diverse datasets, a common challenge in systems biology. PMID:16483359
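A compact sketch of DR via the normalized graph Laplacian, in the spirit of (but much simpler than) Lafon's diffusion-map construction: a Gaussian kernel builds the weighted graph, and the leading nontrivial Laplacian eigenvectors give the embedding. The kernel width sigma is the single tuning parameter; the toy clustered data are invented.

```python
import numpy as np

def spectral_embed(X, sigma=1.0, dim=2):
    """Embed rows of X via eigenvectors of the symmetric normalized Laplacian.

    A Gaussian kernel builds the weighted graph; sigma is the one tuning
    parameter (a simplified stand-in for the diffusion-map construction).
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    W = np.exp(-d2 / (2.0 * sigma ** 2))                 # affinity (weighted graph)
    d = W.sum(1)
    Dm = np.diag(d ** -0.5)
    L = np.eye(len(X)) - Dm @ W @ Dm                     # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)                       # ascending eigenvalues
    return vecs[:, 1:dim + 1]                            # skip the trivial mode

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 3)),           # cluster A
               rng.normal(2.0, 0.1, (10, 3))])          # cluster B
Y = spectral_embed(X)
print(Y.shape)  # (20, 2)
```

The first embedding coordinate (the Fiedler vector) takes opposite signs on the two clusters, so a downstream classifier needs essentially no further tuning.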
A two-step method for developing a control rod program for boiling water reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taner, M.S.; Levine, S.H.; Hsiao, M.Y.
1992-01-01
This paper reports on a two-step method established for the generation of a long-term control rod program for boiling water reactors (BWRs). The new method assumes a time-variant target power distribution during core depletion. BWR control rod programming is divided into two steps. In step 1, a sequence of optimal, exposure-dependent Haling power distribution profiles is generated, utilizing the spectral shift concept. In step 2, a set of exposure-dependent control rod patterns is developed by using the Haling profiles generated in step 1 as a target. The new method is implemented in a computer program named OCTOPUS. The optimization procedure of OCTOPUS is based on the method of approximation programming, in which the SIMULATE-E code is used to determine the nucleonics characteristics of the reactor core state. In a test, the new method achieved a gain in cycle length over a time-invariant target Haling power distribution case because of a moderate application of spectral shift. No thermal limits of the core were violated. The gain in cycle length could be increased further by broadening the extent of the spectral shift.
Experimental design methodologies in the optimization of chiral CE or CEC separations: an overview.
Dejaegher, Bieke; Mangelings, Debby; Vander Heyden, Yvan
2013-01-01
In this chapter, an overview of experimental designs to develop chiral capillary electrophoresis (CE) and capillary electrochromatographic (CEC) methods is presented. Method development is generally divided into technique selection, method optimization, and method validation. In the method optimization part, often two phases can be distinguished, i.e., a screening and an optimization phase. In method validation, the method is evaluated on its fit for purpose. A validation item, also applying experimental designs, is robustness testing. In the screening phase and in robustness testing, screening designs are applied. During the optimization phase, response surface designs are used. The different design types and their application steps are discussed in this chapter and illustrated by examples of chiral CE and CEC methods.
NASA Astrophysics Data System (ADS)
Bissadi, Golnaz
Hybrid membranes represent a promising alternative to the limitations of organic and inorganic materials for high-productivity, high-selectivity gas separation membranes. In this study, the previously developed concept of emulsion-polymerized mixed matrix (EPMM) membranes was further advanced by investigating the effects of surfactant and compatibilizer on inorganic loading in poly(2,6-dimethyl-1,4-phenylene oxide) (PPO)-based EPMM membranes, in which the inorganic part of the membranes originated from tetraethylorthosilicate (TEOS). The polymerization of TEOS, which consists of hydrolysis of TEOS and condensation of the hydrolyzed TEOS, was carried out as (i) one- and (ii) two-step processes. In the one-step process, the hydrolysis and condensation take place in the same weak-acid environment provided by the aqueous solution of aluminum hydroxonitrate and sodium carbonate. In the two-step process, the hydrolysis takes place in a strong-acid environment (hydrochloric acid solution), whereas the condensation takes place in a weak-base environment obtained by adding an excess of ammonium hydroxide solution to the acidic solution of the hydrolyzed TEOS. For both one- and two-step processes, the emulsion polymerization of TEOS was carried out in two types of emulsions made of (i) pure trichloroethylene (TCE) solvent, and (ii) 10 w/v% solution of PPO in TCE, using different combinations of the compatibilizer (ethanol) and the surfactant (n-octanol). The experiments with pure TCE, referred to as the gravimetric powder method (GPM), allowed assessment of the effect of different experimental parameters on the conversion of TEOS. The GPM tests also provided a guide for the synthesis of casting emulsions containing PPO, from which the EPMM membranes were prepared using a spin-coating technique.
The synthesized EPMM membranes were characterized using 29Si nuclear magnetic resonance (29Si NMR), differential scanning calorimetry (DSC), inductively coupled plasma mass spectrometry (ICP-MS), and gas permeation measurements carried out in a constant-pressure (CP) system. The 29Si NMR analysis verified polymerization of TEOS in the emulsions made of pure TCE and of the PPO solution in TCE. The conversions of TEOS in the two-step process in the two types of emulsions were very close to each other. In the case of the one-step process, the conversions in the TCE emulsion were significantly greater than those in the emulsion of the PPO solution in TCE. Consequently, the conversions of TEOS in the EPMM membranes made in the two-step process were greater than those in the EPMM membranes made in the one-step process. The latter ranged between 10 and 20%, while the highest conversion in the two-step process was 74%, obtained in the presence of pure compatibilizer with no surfactant. Despite greater conversions and hence greater inorganic loadings, the EPMM membranes prepared in the two-step process had glass transition temperatures (Tg) only slightly greater than the reference PPO membranes. In contrast, despite relatively low inorganic loadings, the EPMM membranes prepared in the one-step process had Tgs markedly greater than PPO, and showed the expected trend of an increase in Tg with inorganic loading. These results indicate that in the one-step process the polymerized TEOS was well integrated with the PPO chains and the interactions between the two phases led to high Tgs. This was not the case for the EPMM membranes prepared in the two-step process, suggesting possible phase separation between the polymerized TEOS and the organic phase; the latter was confirmed by detecting no selectivity in the EPMM membranes prepared by the two-step process.
In contrast, the EPMM membranes prepared in the one-step process in the presence of the compatibilizer and no surfactant showed a 50% greater O2 permeability coefficient and a slightly greater O2/N2 permeability ratio compared to the reference PPO membranes.
Uemura, Kazuhiro; Yamasaki, Yukari; Onishi, Fumiaki; Kita, Hidetoshi; Ebihara, Masahiro
2010-11-01
A preliminary study of isopropanol (IPA) adsorption/desorption isotherms on a jungle-gym-type porous coordination polymer, [Zn(2)(bdc)(2)(dabco)](n) (1, H(2)bdc = 1,4-benzenedicarboxylic acid, dabco = 1,4-diazabicyclo[2.2.2]octane), showed unambiguous two-step profiles via a highly shrunk intermediate framework. The results of adsorption measurements on 1, using probe gas molecules of alcohol (MeOH and EtOH) for the size effect and Me(2)CO for the influence of hydrogen bonding, show that the alcohol adsorption isotherms are gradual two-step profiles, whereas the Me(2)CO isotherm is a typical type-I isotherm, indicating that two-step adsorption/desorption involves hydrogen bonds. To further clarify these characteristic adsorption/desorption behaviors, isomorphous jungle-gym-type porous coordination polymers with nitroterephthalate (bdc-NO(2)), bromoterephthalate (bdc-Br), and 2,5-dichloroterephthalate (bdc-Cl(2)) as substituted dicarboxylate ligands, {[Zn(2)(bdc-NO(2))(2)(dabco)]·solvents}(n) (2 ⊃ solvents), {[Zn(2)(bdc-Br)(2)(dabco)]·solvents}(n) (3 ⊃ solvents), and {[Zn(2)(bdc-Cl(2))(2)(dabco)]·solvents}(n) (4 ⊃ solvents), were synthesized and characterized by single-crystal X-ray analyses. Thermal gravimetry, X-ray powder diffraction, and N(2) adsorption measurements at 77 K reveal that [Zn(2)(bdc-NO(2))(2)(dabco)](n) (2), [Zn(2)(bdc-Br)(2)(dabco)](n) (3), and [Zn(2)(bdc-Cl(2))(2)(dabco)](n) (4) maintain their frameworks without guest molecules, with Brunauer-Emmett-Teller (BET) surface areas of 1568 (2), 1292 (3), and 1216 (4) m(2) g(-1). Among the MeOH, EtOH, IPA, and Me(2)CO adsorption/desorption results on 2-4, only MeOH adsorption on 2 shows an obvious two-step profile. Considering the substituent effects and adsorbate sizes, the hydrogen bonds that trigger two-step adsorption are formed between adsorbates and the carboxylate groups at the corners of the pores, inducing wide pores to become narrow pores.
Interestingly, such a two-step MeOH adsorption on 2 depends on the temperature, attributed to the small free-energy difference (ΔF(host)) between the two guest-free forms, wide and narrow pores.
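For context, the "typical type-I isotherm" observed for Me(2)CO corresponds to Langmuir-type adsorption. A brief sketch with made-up data (all values hypothetical) shows how fitting the Langmuir model can distinguish a type-I profile from a two-step one, which would leave large residuals:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, q_max, K):
    """Type-I (Langmuir) isotherm: uptake rises monotonically to a plateau."""
    return q_max * K * p / (1.0 + K * p)

# Hypothetical relative-pressure / uptake data resembling a type-I profile
p = np.linspace(0.01, 1.0, 20)
q_obs = langmuir(p, 5.0, 12.0) + np.random.default_rng(1).normal(0, 0.02, p.size)

(q_max_fit, K_fit), _ = curve_fit(langmuir, p, q_obs, p0=[1.0, 1.0])
resid = np.sqrt(np.mean((langmuir(p, q_max_fit, K_fit) - q_obs) ** 2))
print(q_max_fit, K_fit, resid)  # small residual -> consistent with type-I
```

A genuinely two-step isotherm (a plateau followed by a second uptake) cannot be captured by this monotonic saturating form, so the residual flags the profile shape.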
Assawamakin, Anunchai; Prueksaaroon, Supakit; Kulawonganunchai, Supasak; Shaw, Philip James; Varavithya, Vara; Ruangrajitpakorn, Taneth; Tongsima, Sissades
2013-01-01
Identification of suitable biomarkers for accurate prediction of phenotypic outcomes is a goal for personalized medicine. However, current machine learning approaches are either too complex or perform poorly. Here, a novel two-step machine-learning framework is presented to address this need. First, a Naïve Bayes estimator is used to rank features, the top-ranked of which are most likely to contain the most informative features for prediction of the underlying biological classes. The top-ranked features are then used in a Hidden Naïve Bayes classifier to construct a classification prediction model from these filtered attributes. To obtain the minimum set of the most informative biomarkers, the bottom-ranked features are successively removed from the Naïve Bayes-filtered feature list one at a time, and the classification accuracy of the Hidden Naïve Bayes classifier is checked for each pruned feature set. The performance of the proposed two-step Bayes classification framework was tested on different types of -omics datasets including gene expression microarray, single nucleotide polymorphism (SNP) microarray, and surface-enhanced laser desorption/ionization time-of-flight (SELDI-TOF) proteomic data. The proposed framework equaled and, in some cases, outperformed other classification methods in terms of prediction accuracy, minimum number of classification markers, and computational time.
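A minimal sketch of the two-step idea, with scikit-learn's GaussianNB standing in for both the Naïve Bayes ranker and the Hidden Naïve Bayes classifier (which has no common Python implementation), on synthetic "-omics-like" data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic data: many features, few informative, as in -omics settings
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           n_redundant=0, random_state=0)

# Step 1: rank each feature by how well a Naive Bayes estimator
# separates the classes using that feature alone.
scores = [cross_val_score(GaussianNB(), X[:, [j]], y, cv=5).mean()
          for j in range(X.shape[1])]
ranked = np.argsort(scores)[::-1]  # best feature first

# Step 2: backward elimination over the ranked list, keeping the
# smallest feature set whose classifier accuracy does not degrade.
best_k, best_acc = None, -1.0
for k in range(len(ranked), 0, -1):
    acc = cross_val_score(GaussianNB(), X[:, ranked[:k]], y, cv=5).mean()
    if acc >= best_acc:            # prefer fewer markers at equal accuracy
        best_k, best_acc = k, acc
print(best_k, round(best_acc, 3))
```

The `>=` comparison is what drives the marker set toward the minimum size, mirroring the paper's goal of the fewest informative biomarkers.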
A Novel Two-Step Method for Screening Shade Tolerant Mutant Plants via Dwarfism
Li, Wei; Katin-Grazzini, Lorenzo; Krishnan, Sanalkumar; Thammina, Chandra; El-Tanbouly, Rania; Yer, Huseyin; Merewitz, Emily; Guillard, Karl; Inguagiato, John; McAvoy, Richard J.; Liu, Zongrang; Li, Yi
2016-01-01
When subjected to shade, plants undergo rapid shoot elongation, which often makes them more prone to disease and mechanical damage. Shade-tolerant plants can be difficult to breed; however, they offer a substantial benefit over other varieties in low-light areas. Although perennial ryegrass (Lolium perenne L.) is a popular turfgrass species because of its good appearance and fast establishment, it normally does not perform well under shade. It has been reported that, in turfgrass, induced dwarfism can enhance shade tolerance. Here we describe a two-step procedure for isolating shade-tolerant mutants of perennial ryegrass by first screening for dominant dwarf mutants, and then screening dwarf plants for shade tolerance. The two-step screening process can be done efficiently in limited space at early seedling stages, enabling quick isolation of shade-tolerant mutants and thus facilitating development of new shade-tolerant turfgrass cultivars. Using this method, we isolated 136 dwarf mutants from 300,000 mutagenized seeds, 65 of which were shade tolerant (0.022%). When screening directly for shade tolerance, we recovered only four mutants from a population of 150,000 mutagenized seeds (0.003%). One shade-tolerant mutant, shadow-1, was characterized in detail. In addition to dwarfism, shadow-1 and its sexual progeny displayed high degrees of tolerance to both natural and artificial shade. We showed that endogenous gibberellin (GA) content in shadow-1 was higher than in wild-type controls, and that shadow-1 was also partially GA insensitive. Our simple and effective two-step screening method should be applicable to breeding shade-tolerant cultivars of turfgrasses, ground covers, and other economically important crops that can be grown under canopies of existing vegetation to increase productivity per unit area of land. PMID:27752260
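The quoted recovery rates follow directly from the raw counts; a quick arithmetic check also gives the enrichment factor of the two-step screen over direct screening:

```python
# Recovery rates quoted in the abstract, recomputed from the raw counts
two_step = 65 / 300_000   # shade-tolerant mutants found via the dwarf pre-screen
direct = 4 / 150_000      # shade-tolerant mutants from direct shade screening
print(f"{two_step:.3%} vs {direct:.3%}; enrichment = {two_step / direct:.3f}x")
```

The dwarf pre-screen thus recovers shade-tolerant mutants at roughly eight times the rate of direct screening.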
NASA Astrophysics Data System (ADS)
Frasch, Jonathan Lemoine
Determining the electrical permittivity and magnetic permeability of materials is an important task in electromagnetics research. The method using reflection and transmission scattering parameters to determine these constants has been widely employed for many years, ever since the work of Nicolson, Ross, and Weir in the 1970s. For general materials that are homogeneous, linear, and isotropic, the method they developed (the NRW method) works very well and provides an analytical solution. For materials which possess a metal backing or are applied as a coating to a metal surface, it can be difficult or even impossible to obtain a transmission measurement, especially when the coating is thin. In such a circumstance, it is common to resort to a method which uses two reflection-type measurements. There are several such methods for free-space measurements, using multiple angles or polarizations for example. For waveguide measurements, obtaining two independent sources of information from which to extract two complex parameters can be a challenge. This dissertation covers three different topics. Two of these involve different techniques to characterize conductor-backed materials, and the third proposes a method for designing synthetic validation standards for use with standard NRW measurements. All three of these topics utilize modal expansions of electric and magnetic fields to analyze propagation in stepped rectangular waveguides. Two of the projects utilize evolutionary algorithms (EAs) to design waveguide structures. These algorithms were developed specifically for these projects and utilize fairly recent innovations within the optimization community. The first characterization technique uses two different versions of a single vertical step in the waveguide. Samples to be tested lie inside the steps with the conductor reflection plane behind them.
If the two reflection measurements are truly independent it should be possible to recover the values of two complex parameters, but the success of the technique ultimately depends upon how independent the measurements actually are. Next, a method is demonstrated for developing synthetic verification standards. These standards are created from combinations of vertical steps formed from a single piece of metal or metal-coated plastic. These fully insertable structures mimic some of the measurement characteristics of typical lab specimens and thus provide a useful tool for verifying the proper calibration and function of the experimental setup used for NRW characterization. These standards are designed with the use of an EA, which compares possible designs based on the quality of the match with target parameter values. Several examples have been fabricated and tested, and the design specifications and results are presented. Finally, a second characterization technique is considered. This method uses multiple vertical steps to construct an error-reducing structure within the waveguide, which allows parameters to be reliably extracted using both reflection and transmission measurements. These structures are designed with an EA, measuring fitness by the reduction of error in the extracted parameters. An additional EA is used to assist in the extraction of the material parameters by supplying better initial guesses to a secant-method solver. This hybrid approach greatly increases the stability of the solver and increases the speed of parameter extractions. Several designs have been identified and are analyzed.
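The EA-seeds-secant hybrid can be illustrated on a hypothetical one-dimensional mismatch function; this is a toy stand-in, not the dissertation's actual extraction problem. (scipy's `newton` falls back to the secant method when no derivative is supplied.)

```python
import numpy as np
from scipy.optimize import newton

def mismatch(x):
    """Stand-in for an S-parameter mismatch whose root gives the material
    parameter; oscillatory, so a poor initial guess strands a secant solver."""
    return np.cos(3.0 * x) - 0.5 + 0.05 * x

# Step 1: a tiny evolutionary search for a good seed.
rng = np.random.default_rng(2)
pop = rng.uniform(0.0, 4.0, 40)
for _ in range(30):
    children = pop + rng.normal(0.0, 0.2, pop.size)
    both = np.concatenate([pop, children])
    pop = both[np.argsort(np.abs(mismatch(both)))][:40]  # keep the fittest
seed = pop[0]

# Step 2: hand the EA's best guess to the secant solver for refinement.
root = newton(mismatch, x0=seed)
print(root, mismatch(root))
```

The EA does the cheap global exploration; the secant step supplies the fast local convergence, which is the stability/speed trade described above.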
Process of electrolysis and fractional crystallization for aluminum purification
Dawless, R.K.; Bowman, K.A.; Mazgaj, R.M.; Cochran, C.N.
1983-10-25
A method is described for purifying aluminum that contains impurities, the method including the step of introducing such aluminum containing impurities to a charging and melting chamber located in an electrolytic cell of the type having a porous diaphragm permeable by the electrolyte of the cell and impermeable to molten aluminum. The method includes further the steps of supplying impure aluminum from the chamber to the anode area of the cell and electrolytically transferring aluminum from the anode area to the cathode through the diaphragm while leaving impurities in the anode area, thereby purifying the aluminum introduced into the chamber. The method includes the further steps of collecting the purified aluminum at the cathode, and lowering the level of impurities concentrated in the anode area by subjecting molten aluminum and impurities in said chamber to a fractional crystallization treatment wherein eutectic-type impurities crystallize and precipitate out of the aluminum. The eutectic impurities that have crystallized are physically removed from the chamber. The aluminum in the chamber is now suited for further purification as provided in the above step of electrolytically transferring aluminum through the diaphragm. 2 figs.
Method of the Determination of Exterior Orientation of Sensors in Hilbert Type Space.
Stępień, Grzegorz
2018-03-17
The following article presents a new isometric transformation algorithm based on transformation in a newly normed Hilbert type space. The presented method is based on so-called virtual translations, known in advance, of two relatively oblique orthogonal coordinate systems (the interior and exterior orientation of sensors) to a common point known in both systems. Each of the systems is translated along its axis (the systems have common origins) while the relative angular orientation of both coordinate systems remains constant. The translation of both coordinate systems is defined by the spatial norm determining the length of vectors in the new Hilbert type space. As such, the displacement of the two relatively oblique orthogonal systems is reduced to zero, which makes it possible to directly calculate the rotation matrix of the sensor. The final step is the return translation of the system along an already known track. The method can be used for large rotation angles. The method was verified in laboratory conditions for a test data set and for measurement (field) data. The accuracy of the results in the laboratory test is on the level of 10^-6 of the input data, confirming the correctness of the assumed calculation method. The method is a further development of the author's 2017 Total Free Station (TFS) transformation to several centroids in Hilbert type space; for this reason it is called Multi-Centroid Isometric Transformation (MCIT). MCIT is very fast and, by reducing to zero the translation of the two relatively oblique orthogonal coordinate systems, enables direct calculation of the exterior orientation of the sensors.
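Although MCIT itself is not reproduced here, the core idea of translating both coordinate systems to a common point so the rotation can be computed directly is shared with the standard centroid-based Kabsch fit, sketched below on synthetic points with a large rotation angle:

```python
import numpy as np

def rigid_fit(A, B):
    """Centroid-based rigid-body fit (Kabsch): translate both point sets to
    their centroids, recover the rotation via SVD, then the translation."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Check with a known large rotation (as the method permits) plus translation
rng = np.random.default_rng(3)
A = rng.normal(size=(6, 3))
ang = np.deg2rad(140.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
B = A @ R_true.T + np.array([10.0, -4.0, 2.5])
R, t = rigid_fit(A, B)
print(np.allclose(R, R_true), np.allclose(A @ R.T + t, B))
```

Reducing both systems to their centroids plays the role of the zeroed translation: once the offset is gone, the rotation falls out of a direct computation.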
Detection of low-level DNA mutation by ARMS-blocker-Tm PCR.
Qu, Shoufang; Liu, Licheng; Gan, Shuzhen; Feng, Huahua; Zhao, Jingyin; Zhao, Jing; Liu, Qi; Gao, Shangxiang; Chen, Weijun; Wang, Mengzhao; Jiang, Yongqiang; Huang, Jie
2016-02-01
Low-level DNA mutations play important roles in cancer prognosis and treatment. However, most existing methods for the detection of low-level DNA mutations are insufficient for clinical applications because of the high background of wild-type DNA. In this study, a novel assay based on Tm-dependent inhibition of wild-type template amplification was developed. The defining characteristic of this assay is that an additional annealing step is introduced into the ARMS-blocker PCR. The temperature of this additional annealing step is equal to the Tm of the blocker, so the blocker can preferentially and specifically bind the wild-type DNA. Thus, amplification of the wild-type template is inhibited and the mutant DNA is enriched. The sensitivity of this assay was between 10^-4 and 10^-5, which is approximately 5 to 10 times greater than that of the assay without the additional annealing step. To evaluate the performance of this assay in detecting K-ras mutation, we analyzed 100 formalin-fixed paraffin-embedded (FFPE) specimens from colorectal cancer patients using this new assay and Sanger sequencing. Of the clinical samples, 27 were positive for K-ras mutation by both methods. Our results indicate that this new assay is a highly selective, convenient, and economical method for detecting rare mutations in the presence of higher concentrations of wild-type DNA. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Thermal quenching effect of an infrared deep level in Mg-doped p-type GaN films
NASA Astrophysics Data System (ADS)
Kim, Keunjoo; Chung, Sang Jo
2002-03-01
The thermal quenching of an infrared deep level at 1.2-1.5 eV has been investigated in Mg-doped p-type GaN films using one- and two-step annealing processes and photocurrent measurements. The deep level appeared in the one-step annealing process at a relatively high temperature of 900 °C, but disappeared in the two-step annealing process with a low-temperature step and a subsequent high-temperature step. The persistent photocurrent remained in the sample containing the deep level, while it was terminated in the sample without the deep level. This indicates that the deep level is a neutral hole center located above a quasi-Fermi level, estimated at an energy of E_pF = 0.1-0.15 eV above the valence band at a hole carrier concentration of 2.0-2.5×10^17 cm^-3.
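The quoted quasi-Fermi level can be roughly reproduced from Boltzmann statistics, E_F - E_V = kT ln(N_v/p), assuming an effective valence-band density of states for GaN of about 4.6×10^19 cm^-3 (an assumed literature value, not stated in the abstract):

```python
import numpy as np

kT = 0.02585          # thermal energy in eV at 300 K
N_v = 4.6e19          # assumed effective valence-band DOS for GaN, cm^-3
# Hole concentrations quoted in the abstract
levels = [kT * np.log(N_v / p) for p in (2.0e17, 2.5e17)]
print([round(E, 3) for E in levels])  # both fall in the quoted 0.1-0.15 eV window
```

Under this assumption the Boltzmann estimate lands near 0.14 eV, consistent with the stated E_pF range.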
NASA Astrophysics Data System (ADS)
Shirmohamadi, Mohamad; Kadkhodaie, Ali; Rahimpour-Bonab, Hossain; Faraji, Mohammad Ali
2017-04-01
Velocity deviation log (VDL) is a synthetic log used to determine pore types in reservoir rocks, based on combining the sonic log with neutron-density logs. The current study proposes a two-step approach to create a map of porosity and pore types by integrating the results of petrographic studies, well logs, and seismic data. In the first step, a velocity deviation log was created from the combination of the sonic log with the neutron-density log, allowing negative, zero, and positive deviations to be identified. Negative velocity deviations (below -500 m/s) indicate connected or interconnected pores and fractures, while positive deviations (above +500 m/s) are related to isolated pores. Zero deviations, in the range of [-500 m/s, +500 m/s], are in good agreement with intercrystalline and microporosities. The results of petrographic studies were used to validate the main pore type derived from the velocity deviation log. In the next step, the velocity deviation log was estimated from seismic data using a probabilistic neural network model. For this purpose, the inverted acoustic impedance along with amplitude-based seismic attributes were formulated to VDL. The methodology is illustrated by a case study from the Hendijan oilfield, northwestern Persian Gulf. The results show that integration of petrographic data, well logs, and seismic attributes is an effective way to understand the spatial distribution of the main reservoir pore types.
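The threshold classification in the first step can be sketched directly (hypothetical log values; the ±500 m/s cutoffs are the ones stated above):

```python
import numpy as np

def classify_vdl(v_sonic, v_nd):
    """Classify pore type from velocity deviation: sonic velocity minus the
    synthetic velocity from the neutron-density log, with ±500 m/s cutoffs."""
    dev = np.asarray(v_sonic, dtype=float) - np.asarray(v_nd, dtype=float)
    labels = np.where(dev < -500, "connected/fracture",
             np.where(dev > 500, "isolated", "intercrystalline/micro"))
    return dev, labels

# Hypothetical log samples (velocities in m/s)
dev, labels = classify_vdl([4200, 5100, 4800], [4900, 4450, 4750])
print(list(zip(dev, labels)))
```

The second step of the study then predicts `dev` away from wells by regressing it on seismic attributes, so the same cutoffs can map pore types across the field.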
Method for isolating chromosomal DNA in preparation for hybridization in suspension
Lucas, Joe N.
2000-01-01
A method is provided for detecting nucleic acid sequence aberrations using two immobilization steps. According to the method, a nucleic acid sequence aberration is detected by detecting nucleic acid sequences having both a first nucleic acid sequence type (e.g., from a first chromosome) and a second nucleic acid sequence type (e.g., from a second chromosome), the presence of the first and the second nucleic acid sequence type on the same nucleic acid sequence indicating the presence of a nucleic acid sequence aberration. In the method, immobilization of a first hybridization probe is used to isolate a first set of nucleic acids in the sample which contain the first nucleic acid sequence type. Immobilization of a second hybridization probe is then used to isolate a second set of nucleic acids from within the first set of nucleic acids which contain the second nucleic acid sequence type. The second set of nucleic acids are then detected, their presence indicating the presence of a nucleic acid sequence aberration. Chromosomal DNA in a sample containing cell debris is prepared for hybridization in suspension by treating the mixture with RNase. The treated DNA can also be fixed prior to hybridization.
Maki, Yuta; Okamoto, Ryo; Izumi, Masayuki; Murase, Takefumi; Kajihara, Yasuhiro
2016-03-16
Attachment of oligosaccharides to proteins is a major post-translational modification. Chemical syntheses of oligosaccharides have contributed to clarifying the functions of these oligosaccharides. However, syntheses of oligosaccharide-linked proteins are still challenging because of their inherent complicated structures, including diverse di- to tetra-antennary forms. We report a highly efficient strategy to access the representative two types of triantennary oligosaccharides through only 9- or 10-step chemical conversions from a biantennary oligosaccharide, which can be isolated in exceptionally homogeneous form from egg yolk. Four benzylidene acetals were successfully introduced to the terminal two galactosides and two core mannosides of the biantennary asialononasaccharide bearing 24 hydroxy groups, followed by protection of the remaining hydroxy groups with acetyl groups. Selective removal of one of the benzylidene acetals gave two types of suitably protected glycosyl acceptors. Glycosylation toward the individual acceptors with protected Gal-β-1,4-GlcN thioglycoside and subsequent deprotection steps successfully yielded two types of complex-type triantennary oligosaccharides.
Deng, Wei; Zhang, Xiujuan; Pan, Huanhuan; Shang, Qixun; Wang, Jincheng; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng
2014-01-01
Single-crystal organic nanostructures show promising applications in flexible and stretchable electronics, but their use is impeded by their large incompatibility with well-developed photolithography techniques. Here we report a novel two-step transfer printing (TTP) method for constructing organic nanowire (NW) devices on arbitrary substrates. Copper phthalocyanine (CuPc) NWs are first transfer-printed from the growth substrate to the desired receiver substrate by the contact-printing (CP) method, and then electrode arrays are transfer-printed onto the resulting receiver substrate by the etching-assisted transfer printing (ETP) method. By utilizing a thin copper (Cu) layer as a sacrificial layer, microelectrodes fabricated on it via photolithography can be readily transferred to diverse conventional or non-conventional substrates that were not easily accessible before, with a transfer yield of near 100%. The ETP method is also extremely flexible; various electrodes such as Au, Ti, and Al can be transferred, and almost all types of organic devices, such as resistors, Schottky diodes, and field-effect transistors (FETs), can be constructed on planar or complex curvilinear substrates. Significantly, these devices function properly and exhibit comparable or even superior performance to device counterparts fabricated by conventional approaches. PMID:24942458
Painter, Thomas O.; Bunn, Jonathon R.; Schoenen, Frank J.; Douglas, Justin T.; Day, Victor W.; Santini, Conrad
2013-01-01
The discovery and application of a new branching pathway synthesis strategy that rapidly produces skeletally diverse scaffolds is described. Two different scaffold types, one a bicyclic iodo-vinylidene tertiary amine/tertiary alcohol and the other, a spirocyclic 3-furanone, are each obtained using a two-step sequence featuring a common first step. Both scaffold types lead to intermediates that can be orthogonally diversified using the same final components. One of the scaffold types was obtained in sufficiently high yield that it was immediately used to produce a 97-compound library. PMID:23510238
Development of a reservoir type prolonged release system with felodipine via simplex methodology
IOVANOV, RAREŞ IULIU; TOMUŢĂ, IOAN; LEUCUŢA, SORIN EMILIAN
2016-01-01
Background and aims Felodipine is a dihydropyridine calcium antagonist with good characteristics for formulation as a prolonged-release preparation. The aim of the study was the formulation and in vitro characterization of a reservoir-type prolonged-release system with felodipine, over a 12-hour period, using the Simplex method. Methods The first step of the Simplex method was to study the influence of the granule coating method on felodipine release. The influence of the coating polymer type, the percent of coating polymer, and the percent of pore-forming agent in the coating on felodipine release were also studied. After these two steps of the experimental design, the percent of Surelease applied to the felodipine-loaded granules and the percent of pore former in the polymeric coating were studied as formulation variables. The in vitro dissolution of the model drug was performed in phosphate buffer solution (pH 6.5) with 1% sodium lauryl sulfate. The released drug was quantified using an HPLC method. The release kinetics of felodipine from the final granules was assessed using different mathematical models. Results A 12-hour release was achieved using granules with sizes between 315-500 μm coated with 45% Surelease, with different pore-former ratios in the coating, via the top-spray method. Conclusion We prepared prolonged-release coated granules with felodipine using a fluid bed system based on the Simplex method. The API from the studied final formulations was released over a 12-hour period, and the release kinetics of the model drug from the optimized preparations fitted best the Higuchi and Peppas kinetic models. PMID:27004036
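As a generic illustration of simplex-style optimization over the two formulation variables (coating percent and pore-former percent), one might use scipy's Nelder-Mead simplex on a hypothetical lack-of-fit objective; the optimum location here is assumed for the sketch, not taken from the study:

```python
from scipy.optimize import minimize

def release_lack_of_fit(x):
    """Hypothetical objective: squared distance of (coating %, pore-former %)
    from an assumed optimum near 45% Surelease and 20% pore former."""
    coating, pore_former = x
    return (coating - 45.0) ** 2 + 2.0 * (pore_former - 20.0) ** 2

# Nelder-Mead is the classic derivative-free simplex search
res = minimize(release_lack_of_fit, x0=[30.0, 10.0], method="Nelder-Mead")
print(res.x)  # the simplex walks toward the assumed optimum
```

In practice the objective would be a measured deviation of the dissolution profile from the 12-hour target rather than an analytic function.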
Bair, Woei-Nan; Prettyman, Michelle G; Beamer, Brock A; Rogers, Mark W
2016-07-01
Protective stepping evoked by externally applied lateral perturbations reveals balance deficits underlying falls. However, a lack of comprehensive information about the control of different stepping strategies in relation to the magnitude of perturbation limits understanding of balance control in relation to age and fall status. The aim of this study was to investigate different protective stepping strategies and their kinematic and behavioral control characteristics in response to different magnitudes of lateral waist-pulls between older fallers and non-fallers. Fifty-two community-dwelling older adults (16 fallers) reacted naturally to maintain balance in response to five magnitudes of lateral waist-pulls. The balance tolerance limit (BTL, waist-pull magnitude where protective steps transitioned from single to multiple steps), first step control characteristics (stepping frequency and counts, spatial-temporal kinematic, and trunk position at landing) of four naturally selected protective step types were compared between fallers and non-fallers at- and above-BTL. Fallers took medial-steps most frequently while non-fallers most often took crossover-back-steps. Only non-fallers varied their step count and first step control parameters by step type at the instants of step initiation (onset time) and termination (trunk position), while both groups modulated step execution parameters (single stance duration and step length) by step type. Group differences were generally better demonstrated above-BTL. Fallers primarily used a biomechanically less effective medial-stepping strategy that may be partially explained by reduced somato-sensation. Fallers did not modulate their step parameters by step type at first step initiation and termination, instances particularly vulnerable to instability, reflecting their limitations in balance control during protective stepping. Copyright © 2016. Published by Elsevier Ltd.
Modeling of nonequilibrium space plasma flows
NASA Technical Reports Server (NTRS)
Gombosi, Tamas
1995-01-01
Godunov-type numerical solution of the 20-moment plasma transport equations. One of the centerpieces of our proposal was the development of a higher-order Godunov-type numerical scheme to solve the gyration-dominated 20-moment transport equations. In the first step we explored some fundamental analytic properties of the 20-moment transport equations for a low-β plasma, including the eigenvectors and eigenvalues of propagating disturbances. The eigenvalues correspond to wave speeds, while the eigenvectors characterize the transported physical quantities. In this paper we also explored the physically meaningful parameter range of the normalized heat flow components. In the second step a new Godunov-type numerical method was developed to solve the coupled set of 20-moment transport equations for a quasineutral single-ion plasma. The numerical method and the first results were presented at several national and international meetings, and a paper describing the method has been published in the Journal of Computational Physics. To our knowledge this is the first numerical method capable of producing stable time-dependent solutions to the full 20- (or 16-) moment set of transport equations, including the full heat flow equation. Previous attempts resulted in unstable (oscillating) solutions of the heat flow equations. Our group invested over two man-years in the development and implementation of the new method. The present model solves the 20-moment transport equations for an ion species and thermal electrons in a domain extending from a collision-dominated to a collisionless region (200 km to 12,000 km). This model has been applied to study O+ acceleration due to Joule heating in the lower ionosphere.
Shen, Heping; Wu, Yiliang; Peng, Jun; Duong, The; Fu, Xiao; Barugkin, Chog; White, Thomas P; Weber, Klaus; Catchpole, Kylie R
2017-02-22
With rapid progress in recent years, organohalide perovskite solar cells (PSCs) are promising candidates for a new generation of highly efficient thin-film photovoltaic technologies, for which up-scaling is an essential step toward commercialization. In this work, we propose a modified two-step method to deposit the CH3NH3PbI3 (MAPbI3) perovskite film that improves the uniformity, photovoltaic performance, and repeatability of large-area perovskite solar cells. This method is based on the commonly used two-step method, with one additional process: treating the perovskite film with concentrated methylammonium iodide (MAI) solution. This additional treatment proves helpful for tailoring the residual PbI2 level to an optimal range that is favorable for both optical absorption and inhibition of recombination. Scanning electron microscopy and photoluminescence image analysis further reveal that, compared to the standard two-step and one-step methods, this method is very robust for achieving uniform and pinhole-free large-area films. This is validated by the photovoltaic performance of the prototype devices with an active area of 1 cm², where we achieved a champion efficiency of ∼14.5% and an average efficiency of ∼13.5%, with excellent reproducibility.
Electrostatic design of protein-protein association rates.
Schreiber, Gideon; Shaul, Yossi; Gottschalk, Kay E
2006-01-01
De novo design and redesign of proteins and protein complexes have made promising progress in recent years. Here, we give an overview of how to use available computer-based tools to design proteins to bind faster and tighter to their protein-complex partner by electrostatic optimization between the two proteins. Electrostatic optimization is possible because of the simple relation between the Debye-Hückel energy of interaction between a pair of proteins and their rate of association. This can be used for rapid, structure-based calculations of the electrostatic attraction between the two proteins in the complex. Using these principles, we developed two computer programs that predict the change in k(on), and thus the affinity, upon introducing charged mutations. The two programs have a web interface that is available at
Meyners, Christian; Baud, Matthias G J; Fuchter, Matthew J; Meyer-Almes, Franz-Josef
2014-09-01
Performing kinetic studies on protein-ligand interactions provides important information on complex formation and dissociation. Besides kinetic parameters such as association rates and residence times, kinetic experiments also reveal insights into reaction mechanisms. Exploiting intrinsic tryptophan fluorescence, a parallelized high-throughput Förster resonance energy transfer (FRET)-based reporter displacement assay with very low protein consumption was developed to enable the large-scale kinetic characterization of the binding of ligands to recombinant human histone deacetylases (HDACs) and a bacterial histone deacetylase-like amidohydrolase (HDAH) from Bordetella/Alcaligenes. For the binding of trichostatin A (TSA), suberoylanilide hydroxamic acid (SAHA), and two other SAHA derivatives to HDAH, two different modes of action, simple one-step binding and a two-step mechanism comprising initial binding and induced fit, were verified. In contrast to HDAH, all compounds bound to human HDAC1, HDAC6, and HDAC8 through a two-step mechanism. A quantitative view of the inhibitor-HDAC systems revealed two types of interaction: fast binding and slow dissociation. We argue that the relationship between quantitative kinetic and mechanistic information and the chemical structures of compounds will serve as a valuable tool for drug optimization. Copyright © 2014 Elsevier Inc. All rights reserved.
von Kodolitsch, Yskert; Bernhardt, Alexander M.; Robinson, Peter N.; Kölbel, Tilo; Reichenspurner, Hermann; Debus, Sebastian; Detter, Christian
2015-01-01
Background It is the physician's task to translate evidence and guidelines into medical strategies for individual patients. To date, however, there is no formal tool for performing this translation. Methods We introduce the analysis of strengths (S) and weaknesses (W) related to therapy together with opportunities (O) and threats (T) related to individual patients as a tool to establish an individualized (I) medical strategy (I-SWOT). The I-SWOT matrix identifies four fundamental types of strategy: "SO" maximizing strengths and opportunities, "WT" minimizing weaknesses and threats, "WO" minimizing weaknesses and maximizing opportunities, and "ST" maximizing strengths and minimizing threats. Each distinct type of strategy may be considered for individualized medical strategies. Results We describe four steps of I-SWOT to establish an individualized medical strategy to treat aortic disease. In the first step, we define the goal of therapy and identify all evidence-based therapeutic options. In the second step, we assess the strengths and weaknesses of each therapeutic option in an SW matrix. In the third step, we assess opportunities and threats related to the individual patient, and in the final step, we use the I-SWOT matrix to establish an individualized medical strategy by matching "SW" with "OT". As an example we present two 30-year-old patients with Marfan syndrome with identical medical histories and aortic pathology. As a result of I-SWOT analysis of their individual opportunities and threats, we identified two distinct medical strategies for these patients. Conclusion I-SWOT is a formal but easy-to-use tool to translate medical evidence into individualized medical strategies. PMID:27069939
Vieira, J; Cunha, M C
2011-01-01
This article describes a method for solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points to improve solution efficiency. The set of nonlinear constraints (the complicating constraints) that makes solving the model complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by solving the complete model directly in a single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This gain in efficiency can be extremely important for work in progress, and it can be particularly useful for cases where computation time is a critical factor in obtaining an optimized solution in due time.
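The two-step idea — solve a relaxed model without the complicating constraints, then re-solve the full model warm-started from that solution — can be sketched on a toy problem. The objective, the single complicating constraint, and the penalty treatment below are hypothetical illustrations, not the article's water-resources model:

```python
# Two-step solution sketch (illustrative toy problem, not the article's model).
# Step 1 drops the complicating constraint; step 2 re-solves the full model
# warm-started from the step-1 solution.

def grad_descent(grad, x0, lr=0.01, iters=20000):
    x = list(x0)
    for _ in range(iters):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

def grad_f(p):                       # objective f = (x-2)^2 + (y-1)^2
    x, y = p
    return [2 * (x - 2), 2 * (y - 1)]

def grad_full(p, mu=10.0):           # complicating constraint x*y >= 3,
    x, y = p                         # handled via a quadratic penalty mu*viol^2
    viol = max(0.0, 3.0 - x * y)
    gx, gy = grad_f(p)
    return [gx - 2 * mu * viol * y, gy - 2 * mu * viol * x]

x1 = grad_descent(grad_f, [0.0, 0.0])    # step 1: simpler model -> near (2, 1)
x2 = grad_descent(grad_full, x1)         # step 2: full model, warm start
# x2[0] * x2[1] ends up close to 3: the complicating constraint is near-active
```

The step-1 solution violates the complicating constraint, but it is a far better starting point for the full model than an arbitrary initial guess — the efficiency gain the article reports.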
Brion, F; Rogerieux, F; Noury, P; Migeon, B; Flammarion, P; Thybaud, E; Porcher, J M
2000-01-14
A two-step purification protocol was developed to purify rainbow trout (Oncorhynchus mykiss) vitellogenin (Vtg) and was successfully applied to Vtg of chub (Leuciscus cephalus) and gudgeon (Gobio gobio). Capture and intermediate purification were performed by anion-exchange chromatography on a Resource Q column and a polishing step was performed by gel permeation chromatography on Superdex 200 column. This method is a rapid two-step purification procedure that gave a pure solution of Vtg as assessed by silver staining electrophoresis and immunochemical characterisation.
Contact-aware simulations of particulate Stokesian suspensions
NASA Astrophysics Data System (ADS)
Lu, Libin; Rahimian, Abtin; Zorin, Denis
2017-10-01
We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves, as well as a large number of discretization points, are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in the stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.
Novel Anthropometry Based on 3D-Bodyscans Applied to a Large Population Based Cohort.
Löffler-Wirth, Henry; Willscher, Edith; Ahnert, Peter; Wirkner, Kerstin; Engel, Christoph; Loeffler, Markus; Binder, Hans
2016-01-01
Three-dimensional (3D) whole-body scanners are increasingly used as precise measuring tools for the rapid quantification of anthropometric measures in epidemiological studies. We analyzed 3D whole-body scanning data of nearly 10,000 participants of a cohort collected from the adult population of Leipzig, one of the largest cities in Eastern Germany. We present a novel approach for the systematic analysis of these data which aims at identifying distinguishable clusters of body shapes called body types. In the first step, our method aggregates body measures provided by the scanner into meta-measures, each representing one relevant dimension of body shape. In the next step, we stratified the cohort into body types and assessed their stability and dependence on the size of the underlying cohort. Using self-organizing maps (SOMs) we identified thirteen robust meta-measures and fifteen body types comprising between 1 and 18 percent of the total cohort size. Thirteen of them are virtually gender-specific (six for women and seven for men) and thus reflect the most abundant body shapes of women and men. Two body types include both women and men, and describe androgynous body shapes that lack typical gender-specific features. The body types disentangle a large variability of body shapes, enabling distinctions that go beyond traditional indices such as the body mass index, the waist-to-height ratio, the waist-to-hip ratio, and the mortality-hazard ABSI index. As a next step, we will link the identified body types with disease predispositions to study how size and shape of the human body impact health and disease.
Liang, Bo; Huang, Xuenian; Teng, Yun; Liang, Yajing; Yang, Yong; Zheng, Linghui; Lu, Xuefeng
2018-06-01
Biosynthesis of simvastatin, the active pharmaceutical ingredient of the cholesterol-lowering drug Zocor, has drawn increasing global attention in recent years. Although single-step in vivo production of monacolin J, the intermediate biosynthetic precursor of simvastatin, was realized by utilizing lovastatin hydrolase (PcEST) in our previous study, about 5% residual lovastatin remains a problem for industrial production and quality control. To improve conversion efficiency and reduce lovastatin residues, PcEST was modified through directed evolution and a novel two-step high-throughput screening method. The mutant Q140L shows 18-fold improved whole-cell activity compared to the wild type, and a onefold enhancement in catalytic efficiency and a 3 °C increase in T50(10) over the wild type are observed on characterizing the purified protein. Finally, the engineered A. terreus strain overexpressing the Q140L mutant exhibited increased conversion efficiency and reduced lovastatin residues compared with the A. terreus strain overexpressing wild-type PcEST, with almost 100% of the produced lovastatin hydrolyzed to monacolin J. This improved microbial cell factory can therefore realize single-step bioproduction of monacolin J more efficiently, providing an attractive and eco-friendly substitute for the existing chemical synthetic routes to monacolin J and promoting complete bioproduction of simvastatin at industrial scale. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Preconditioned conjugate gradient methods for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing
1994-01-01
A preconditioned Krylov subspace method (GMRES) is used to solve the linear systems of equations formed at each time-integration step of the unsteady, two-dimensional, compressible Navier-Stokes equations of fluid flow. The Navier-Stokes equations are cast in an implicit, upwind finite-volume, flux-split formulation. Several preconditioning techniques are investigated to enhance the efficiency and convergence rate of the implicit solver based on the GMRES algorithm. The superiority of the new solver is established by comparisons with a conventional implicit solver, namely line Gauss-Seidel relaxation (LGSR). Computational test results for low-speed (incompressible flow over a backward-facing step at Mach 0.1), transonic flow (trailing edge flow in a transonic turbine cascade), and hypersonic flow (shock-on-shock interactions on a cylindrical leading edge at Mach 6.0) are presented. For the Mach 0.1 case, overall speedup factors of up to 17 (in terms of time-steps) and 15 (in terms of CPU time on a CRAY-YMP/8) are found in favor of the preconditioned GMRES solver, when compared with the LGSR solver. The corresponding speedup factors for the transonic flow case are 17 and 23, respectively. The hypersonic flow case shows slightly lower speedup factors of 9 and 13, respectively. The study of preconditioners conducted in this research reveals that a new LUSGS-type preconditioner is much more efficient than a conventional incomplete LU-type preconditioner.
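The same ingredients — a Krylov solver (GMRES) accelerated by an incomplete-factorization preconditioner — can be sketched with SciPy. The tridiagonal convection-diffusion matrix below is a hypothetical stand-in for the paper's flux-split Navier-Stokes Jacobian, and SciPy's `spilu` plays the role that the LUSGS-type and incomplete-LU preconditioners play in the study:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Stand-in nonsymmetric system: an upwind-like 1-D convection-diffusion stencil
A = sp.diags([-1.2, 2.4, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# Incomplete-LU factorization used as a preconditioner M ~ A^-1
ilu = spla.spilu(A)
M = spla.LinearOperator((n, n), ilu.solve)

# Preconditioned GMRES; info == 0 signals convergence
x, info = spla.gmres(A, b, M=M)
residual = np.linalg.norm(A @ x - b)
```

With a good preconditioner the Krylov iteration converges in very few steps on this diagonally dominant system, which is the qualitative effect behind the speedup factors the abstract reports.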
Zhao, Fanglong; Zhang, Chuanbo; Yin, Jing; Shen, Yueqi; Lu, Wenyu
2015-08-01
In this paper, a two-step resin adsorption technology was investigated for spinosad production and separation: in the first step, resin was added to the fermentor early in the cultivation period to decrease the instantaneous product concentration in the broth; in the second step, resin was added after fermentation to adsorb and extract the spinosad. On this basis, a two-step macroporous resin adsorption-membrane separation process for spinosad fermentation, separation, and purification was established. Spinosad concentration in a 5-L fermentor increased by 14.45% after adding 50 g/L macroporous resin at the beginning of fermentation. The established two-step macroporous resin adsorption-membrane separation process achieved 95.43% purity and 87% yield for spinosad, both higher than the 93.23% purity and 79.15% yield of conventional crystallization of spinosad from the aqueous phase. The two-step macroporous resin adsorption method not only coupled spinosad fermentation and separation but also increased spinosad productivity. In addition, the two-step macroporous resin adsorption-membrane separation process performs better in spinosad yield and purity.
Zhang, Yan; Zhang, Ting; Feng, Yanye; Lu, Xiuxiu; Lan, Wenxian; Wang, Jufang; Wu, Houming; Cao, Chunyang; Wang, Xiaoning
2011-01-01
The production of recombinant proteins on a large scale is important for protein functional and structural studies, particularly using Escherichia coli over-expression systems; however, approximately 70% of recombinant proteins are over-expressed as insoluble inclusion bodies. Here we present an efficient method for generating soluble proteins from inclusion bodies by using two steps of denaturation and one step of refolding. We first demonstrated the advantages of this method over a conventional procedure with one denaturation step and one refolding step using three proteins with different folding properties. The refolded proteins were found to be active using in vitro tests and a bioassay. We then tested the general applicability of this method by analyzing 88 proteins from human and other organisms, all of which were expressed as inclusion bodies. We found that about 76% of these proteins were refolded with an average of >75% yield of soluble protein. This "two-step-denaturing and refolding" (2DR) method is simple, highly efficient, and generally applicable; it can be utilized to obtain active recombinant proteins for both basic research and industrial purposes. PMID:21829569
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melius, C
2007-12-05
The epidemiological and economic modeling of poultry diseases requires knowing the size, location, and operational type of each poultry operation within the US. At the present time, the only national database of poultry operations that is available to the general public is the USDA's 2002 Agricultural Census data, published by the National Agricultural Statistics Service, herein referred to as the 'NASS data'. The NASS data provides census data at the county level on poultry operations for various operation types (i.e., layers, broilers, turkeys, ducks, geese). However, the number of farms and sizes of farms for the various types are not independent, since some facilities have more than one type of operation. Furthermore, some data on the number of birds represents the number sold, which does not represent the number of birds present at any given time. In addition, any data tabulated by NASS that could identify numbers of birds or other data reported by an individual respondent is suppressed by NASS and coded with a 'D'. To be useful for epidemiological and economic modeling, the NASS data must be converted into a unique set of facility types (farms having similar operational characteristics). The unique set must not double count facilities or birds. At the same time, it must account for all the birds, including those for which the data has been suppressed. Therefore, several data processing steps are required to work back from the published NASS data to obtain a consistent database for individual poultry operations. This technical report documents the data processing steps that were used to convert the NASS data into a national poultry facility database with twenty-six facility types (7 egg-laying, 6 broiler, 1 backyard, 3 turkey, and 9 others, representing ducks, geese, ostriches, emus, pigeons, pheasants, quail, game fowl breeders and 'other'). The process involves two major steps.
The first step defines the rules used to estimate the data that is suppressed within the NASS database; it is similar to the first step used to estimate suppressed data for livestock [Melius et al (2006)]. The second step converts the NASS poultry types into the operational facility types used by the epidemiological and economic model. We also define two additional facility types for high- and low-risk poultry backyards, and another two facility types for live bird markets and swap meets. The distribution of these additional facility types among counties is based on US population census data. The algorithm defining the number of premises and the corresponding distribution among counties, and the resulting premises density plots for the continental US, are provided.
Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors
Pan, Jin; Ma, Boyuan
2018-01-01
This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) estimation and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separate groups and to estimate the DOA associated with each source separately. Motivated by joint parameter estimation, we adopt the expectation-maximization (EM) algorithm; our method involves two steps, namely, the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are executed iteratively and alternately to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realizes an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramér-Rao lower bound is derived for understanding the estimation accuracy and for performance comparison. The proposed method is verified with simulations. PMID:29617323
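The E-step/M-step alternation can be illustrated with a toy one-dimensional version of the grouping problem: assign each measurement a responsibility for each of two sources (E-step), then re-estimate each source's parameter from its weighted data (M-step). The data values, unit variances, and equal priors here are hypothetical simplifications, not the paper's phase model:

```python
import math

# Toy EM: sort measurements between two sources and estimate source means.
# Hypothetical data; variances fixed at 1, priors equal.
data = [0.9, 1.0, 1.1, 1.2, 4.8, 4.9, 5.0, 5.1]
mu = [0.0, 6.0]                      # initial guesses for the two source means

for _ in range(50):
    # E-step: responsibility of source 0 for each sample (Gaussian likelihoods)
    r = []
    for x in data:
        p0 = math.exp(-0.5 * (x - mu[0]) ** 2)
        p1 = math.exp(-0.5 * (x - mu[1]) ** 2)
        r.append(p0 / (p0 + p1))
    # M-step: maximum-likelihood (weighted-mean) updates of the source means
    mu[0] = sum(ri * x for ri, x in zip(r, data)) / sum(r)
    mu[1] = sum((1 - ri) * x for ri, x in zip(r, data)) / sum(1 - ri for ri in r)

print(mu)  # approximately [1.05, 4.95]
```

In the paper the same alternation operates on phase measurements and DOA parameters, but the structure — soft assignment, then per-source ML estimation, iterated to convergence — is identical.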
Optimal pre-scheduling of problem remappings
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1987-01-01
A large class of scientific computational problems can be characterized as a sequence of steps where a significant amount of computation occurs at each step, but the work performed at each step is not necessarily identical. Two good examples of this type of computation are: (1) regridding methods which change the problem discretization during the course of the computation, and (2) methods for solving sparse triangular systems of linear equations. Recent work has investigated a means of mapping such computations onto parallel processors; the method defines a family of static mappings with differing degrees of importance placed on the conflicting goals of good load balance and low communication/synchronization overhead. The performance tradeoffs are controllable by adjusting the parameters of the mapping method. To achieve good performance it may be necessary to dynamically change these parameters at run-time, but such changes can impose additional costs. If the computation's behavior can be determined prior to its execution, it is possible to construct an optimal parameter schedule using a low-order-polynomial-time dynamic programming algorithm. Since the latter can be expensive, the effect of a linear-time scheduling heuristic is studied on one of the model problems, and it is shown to be effective and nearly optimal.
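Assuming the per-step execution cost of each mapping parameter and the cost of remapping are known in advance, the optimal parameter schedule is a shortest path through a (step, parameter) lattice, which a simple dynamic program finds in time polynomial in the number of steps and parameter settings. All costs below are hypothetical:

```python
# DP sketch for an optimal parameter schedule (hypothetical costs).
# exec_cost[t][p] = cost of executing step t under mapping parameter p;
# switch_cost     = fixed cost of remapping (changing p) between steps.
exec_cost = [
    [3, 5],   # steps 0-1: parameter 0 is cheaper
    [3, 5],
    [9, 2],   # workload shifts: parameter 1 becomes cheaper
    [9, 2],
]
switch_cost = 4
n_params = 2

best = list(exec_cost[0])   # best[p] = min total cost ending step 0 with param p
for t in range(1, len(exec_cost)):
    best = [
        exec_cost[t][p] + min(best[q] + (switch_cost if q != p else 0)
                              for q in range(n_params))
        for p in range(n_params)
    ]
print(min(best))  # -> 14: run param 0 twice, pay one remap, run param 1 twice
```

The optimum pays the remapping cost exactly once, when the saved execution cost outweighs it — the trade-off the abstract describes between remapping overhead and per-step performance.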
NASA Astrophysics Data System (ADS)
Chen, Qingfa; Zhao, Fuyu
2017-12-01
Numerous pillars are left after mining of underground mineral resources using the open stope method or after the first step of the partial filling method. The mineral recovery rate can, however, be improved by replacement recovery of pillars. In the present study, the relationships among the pillar type, minimum pillar width, and micro/macroeconomic factors were investigated from two perspectives, namely mechanical stability and micro/macroeconomic benefit. Based on the mechanical stability formulas for ore and artificial pillars, the minimum width for a specific pillar type was determined using a pessimistic criterion. The microeconomic benefit c of setting an ore pillar, the microeconomic benefit w of artificial pillar replacement, and the economic net present value (ENPV) of the replacement process were calculated. The values of c and w were compared with respect to ENPV, based on which the appropriate pillar type and economical benefit were determined.
Assessing the foundation of the Trojan Horse Method
NASA Astrophysics Data System (ADS)
Bertulani, C. A.; Hussein, M. S.; Typel, S.
2018-01-01
We discuss the foundation of the Trojan Horse Method (THM) within the Inclusive Non-Elastic Breakup (INEB) theory. We demonstrate that the direct part of the INEB cross section, which is of two-step character, becomes, in the DWBA limit of the three-body theory with appropriate approximations and redefinitions, similar in structure to the one-step THM cross section. We also discuss the connection of the THM to the Surrogate Method (SM), which is a genuine two-step process.
Kohzuma, Kaori; Chiba, Motoko; Nagano, Soichiro; Anai, Toyoaki; Ueda, Miki U.; Oguchi, Riichi; Shirai, Kazumasa; Hanada, Kousuke; Hikosaka, Kouki; Fujii, Nobuharu
2017-01-01
Radish (Raphanus sativus L. var. sativus), a widely cultivated root vegetable crop, possesses a large sink organ (the root), implying that photosynthetic activity in radish can be enhanced by altering both the source and sink capacity of the plant. However, since radish is a self-incompatible plant, improved mutation-breeding strategies are needed for this crop. TILLING (Targeting Induced Local Lesions IN Genomes) is a powerful method used for reverse genetics. In this study, we developed a new TILLING strategy involving a two-step mutant selection process for mutagenized radish plants: the first selection is performed to identify a BC1M1 line, that is, progenies of M1 plants crossed with wild-type, and the second step is performed to identify BC1M1 individuals with mutations. We focused on Rubisco as a target, since Rubisco is the most abundant plant protein and a key photosynthetic enzyme. We found that the radish genome contains six RBCS genes and one pseudogene encoding small Rubisco subunits. We screened 955 EMS-induced BC1M1 lines using our newly developed TILLING strategy and obtained six mutant lines for the six RsRBCS genes, encoding proteins with four different types of amino acid substitutions. Finally, we selected a homozygous mutant and subjected it to physiological measurements. PMID:28744180
Alternating direction implicit methods for parabolic equations with a mixed derivative
NASA Technical Reports Server (NTRS)
Beam, R. M.; Warming, R. F.
1980-01-01
Alternating direction implicit (ADI) schemes for two-dimensional parabolic equations with a mixed derivative are constructed by using the class of all A(0)-stable linear two-step methods in conjunction with the method of approximate factorization. The mixed derivative is treated with an explicit two-step method which is compatible with an implicit A(0)-stable method. The parameter space for which the resulting ADI schemes are second-order accurate and unconditionally stable is determined. Some numerical examples are given.
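Schematically (a hedged sketch, not the paper's exact scheme), for a parabolic equation u_t = a u_xx + b u_xy + c u_yy, approximate factorization splits the implicit operator into one-dimensional factors while the mixed derivative is carried explicitly:

```latex
% Illustrative approximate-factorization ADI step. \delta_x^2, \delta_y^2,
% \delta_x\delta_y are standard central-difference operators; the weight
% \theta and the explicit combination of u^n and u^{n-1} depend on the
% particular A(0)-stable linear two-step method chosen.
\begin{equation}
\left(I - \theta\,\Delta t\,a\,\delta_x^2\right)
\left(I - \theta\,\Delta t\,c\,\delta_y^2\right) u^{n+1}
  \;=\; \text{explicit terms in } u^{n},\,u^{n-1},
  \text{ including } b\,\Delta t\,\delta_x\delta_y\,u^{n}.
\end{equation}
```

Each factored solve reduces to tridiagonal systems along one grid direction, which is what makes the ADI construction efficient; the paper's contribution is determining for which method parameters this factored scheme stays second-order accurate and unconditionally stable.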
Alternating direction implicit methods for parabolic equations with a mixed derivative
NASA Technical Reports Server (NTRS)
Beam, R. M.; Warming, R. F.
1979-01-01
Alternating direction implicit (ADI) schemes for two-dimensional parabolic equations with a mixed derivative are constructed by using the class of all A(0)-stable linear two-step methods in conjunction with the method of approximate factorization. The mixed derivative is treated with an explicit two-step method which is compatible with an implicit A(0)-stable method. The parameter space for which the resulting ADI schemes are second-order accurate and unconditionally stable is determined. Some numerical examples are given.
Wang, Fen; Yu, Junxia; Xiong, Wanli; Xu, Yuanlai; Chi, Ru-An
2018-01-01
For selective leaching and highly effective recovery of heavy metals from a metallurgical sludge, a two-step leaching method was designed based on analysis of the distribution of the chemical fractions of the loaded heavy metals. Hydrochloric acid (HCl) was used as the leaching agent in the first step to leach the relatively labile heavy metals, and ethylenediaminetetraacetic acid (EDTA) was then applied to leach the residual metals according to their different fractional distributions. Using the two-step leaching method, 82.89% of Cd, 55.73% of Zn, 10.85% of Cu, and 0.25% of Pb were leached in the first step by 0.7 M HCl at a contact time of 240 min, and the leaching efficiencies for Cd, Zn, Cu, and Pb were raised to 99.76, 91.41, 71.85, and 94.06%, respectively, by subsequent treatment with 0.2 M EDTA at 480 min. Furthermore, HCl leaching induced fractional redistribution, which might increase the mobility of the remaining metals and thus facilitate the subsequent metal removal by EDTA. This facilitation was further confirmed by comparison to one-step leaching with HCl alone or EDTA alone. These results suggested that the designed two-step leaching method with HCl and EDTA could be used for selective leaching and effective recovery of heavy metals from metallurgical sludge or other heavy metal-contaminated solid media.
Algorithms and software for nonlinear structural dynamics
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.
1989-01-01
The objective of this research is to develop efficient methods for explicit time integration in nonlinear structural dynamics for computers which utilize both concurrency and vectorization. As a framework for these studies, the program WHAMS, described in Explicit Algorithms for the Nonlinear Dynamics of Shells (T. Belytschko, J. I. Lin, and C.-S. Tsay, Computer Methods in Applied Mechanics and Engineering, Vol. 42, 1984, pp. 225-251), is used. Two factors make the development of efficient concurrent explicit time-integration programs for structural dynamics a challenge: (1) the need for a variety of element types, which complicates the scheduling-allocation problem; and (2) the need for different time steps in different parts of the mesh, here called mixed delta-t integration, so that a few stiff elements do not reduce the time steps throughout the mesh.
A spectral approach for discrete dislocation dynamics simulations of nanoindentation
NASA Astrophysics Data System (ADS)
Bertin, Nicolas; Glavas, Vedran; Datta, Dibakar; Cai, Wei
2018-07-01
We present a spectral approach to perform nanoindentation simulations using three-dimensional nodal discrete dislocation dynamics. The method relies on a two-step approach. First, the contact problem between an indenter of arbitrary shape and an isotropic elastic half-space is solved using a spectral iterative algorithm, and the contact pressure is fully determined on the half-space surface. The contact pressure is then used as a boundary condition of the spectral solver to determine the resulting stress field produced in the simulation volume. In both steps, the mechanical fields are decomposed into Fourier modes and are efficiently computed using fast Fourier transforms. To further improve the computational efficiency, the method is coupled with a subcycling integrator, and a special approach is devised to approximate the displacement field associated with surface steps. As a benchmark, the method is used to compute the response of an elastic half-space using different types of indenters. An example of a dislocation dynamics nanoindentation simulation with a complex initial microstructure is presented.
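The core spectral machinery — transform a surface field, apply a response function mode by mode, transform back — can be sketched in a few lines. The pressure patch and the 1/(1+k²) response below are hypothetical placeholders for the actual elastic half-space kernels used in the paper:

```python
import numpy as np

# Minimal 1-D illustration of the spectral step (not the paper's solver):
# apply a periodic, mode-by-mode response to a surface pressure profile.
n = 64
p = np.zeros(n)
p[28:36] = 1.0                      # hypothetical contact-pressure patch
k = np.fft.fftfreq(n) * n           # integer wavenumbers
response = 1.0 / (1.0 + k**2)       # hypothetical per-mode elastic response
u = np.fft.ifft(np.fft.fft(p) * response).real

# the zero mode has unit response, so the total (mean) field is preserved
assert abs(u.sum() - p.sum()) < 1e-9
```

Each application costs O(n log n) via the FFT rather than the O(n²) of a direct convolution, which is why both the contact iteration and the volume stress evaluation are formulated in Fourier space.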
Choi, Jung-Han; Lim, Young-Jun; Kim, Chang-Whe; Kim, Myung-Joo
2009-01-01
This study evaluated the effect of different screw-tightening sequences, forces, and methods on the stresses generated on a well-fitting internal-connection implant (Astra Tech) superstructure. A metal framework directly connected to four parallel implants was fabricated on a fully edentulous mandibular resin model. Six stone casts with four implant replicas were made from a pickup impression of the superstructure to represent a "well-fitting" situation. Stresses generated by four screw-tightening sequences (1-2-3-4, 4-3-2-1, 2-4-3-1, and 2-3-1-4), two forces (10 and 20 Ncm), and two methods (one-step and two-step) were evaluated. In the two-step method, screws were tightened to the initial torque (10 Ncm) in a predetermined screw-tightening sequence and then to the final torque (20 Ncm) in the same sequence. Stresses were recorded twice by three strain gauges attached to the framework (superior face midway between abutments). Deformation data were analyzed using multiple analysis of variance at a .05 level of statistical significance. In all stone casts, stresses were produced by the superstructure connection, regardless of screw-tightening sequence, force, and method. No statistically significant differences for superstructure preload stresses were found based on screw-tightening sequences (-180.0 to -181.6 microm/m) or forces (-163.4 and -169.2 microm/m) (P > .05). However, different screw-tightening methods induced different stresses on the superstructure. The two-step screw-tightening method (-180.1 microm/m) produced significantly higher stress than the one-step method (-169.2 microm/m) (P = .0457). Within the limitations of this in vitro study, screw-tightening sequence and force were not critical factors in the stress generated on a well-fitting internal-connection implant superstructure. The stress caused by the two-step method was greater than that produced using the one-step method. 
Further studies are needed to evaluate the effect of screw-tightening techniques on preload stress in various clinical situations.
NASA Astrophysics Data System (ADS)
Gézero, L.; Antunes, C.
2017-05-01
Digital terrain models (DTM) play an essential role in all types of road maintenance, water supply, and sanitation projects. The demand for such information is greater in developing countries, where infrastructure is most lacking. In recent years, Mobile LiDAR Systems (MLS) have proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can supply the data for the production of DTM in remote areas, due mainly to the safety, precision, speed of acquisition, and detail of the information gathered. However, filtering the point clouds to separate "terrain points" from "non-terrain points" quickly and consistently remains a challenge that has caught the interest of researchers. This work presents a method to create the DTM from point clouds collected by MLS. The method is based on two steps. The first step reduces the point cloud to a set of points that represents the terrain's shape, the distance between points being inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
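The second step can be sketched with SciPy's Delaunay triangulation and a piecewise-linear interpolant. The synthetic (x, y, z) points below stand in for the reduced set of ground points from the first step, which is not reproduced here.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
# Stand-in for the reduced ground points from step one: (x, y) positions
# with heights z sampled from a synthetic terrain surface.
xy = rng.uniform(0, 100, size=(500, 2))
z = 0.05 * xy[:, 0] + 2.0 * np.sin(xy[:, 1] / 20.0)

tri = Delaunay(xy)                  # step two: triangulate the ground points
dtm = LinearNDInterpolator(tri, z)  # piecewise-linear DTM over the triangulation

print(dtm(50.0, 50.0))              # interpolated terrain height at a query point
```

Queries outside the convex hull of the ground points return NaN, which is a reasonable behavior at the edges of the surveyed corridor.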
On nonlinear finite element analysis in single-, multi- and parallel-processors
NASA Technical Reports Server (NTRS)
Utku, S.; Melosh, R.; Islam, M.; Salama, M.
1982-01-01
Numerical solution of nonlinear equilibrium problems of structures by means of Newton-Raphson type iterations is reviewed. Each step of the iteration is shown to correspond to the solution of a linear problem; the feasibility of the finite element method for nonlinear analysis is thereby established. Organization and flow of data for various types of digital computers, such as single-processor/single-level memory, single-processor/two-level-memory, vector-processor/two-level-memory, and parallel-processors, with and without sub-structuring (i.e. partitioning), are given. The effect of the relative costs of computation, memory, and data transfer on substructuring is shown. The idea of assigning comparable-size substructures to parallel processors is exploited. Under Cholesky type factorization schemes, the efficiency of parallel processing is shown to decrease due to the occasionally shared data, just as that due to the shared facilities.
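The observation that each Newton-Raphson step is a linear solve can be sketched as follows; the cubic-hardening spring is a hypothetical single-DOF example, not a problem from the paper.

```python
import numpy as np

def newton_raphson(residual, tangent, u0, tol=1e-10, max_iter=20):
    """Newton-Raphson iteration for R(u) = 0: each step solves the
    linearized problem K_T(u) du = -R(u), i.e. one linear analysis."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        R = residual(u)
        if np.linalg.norm(R) < tol:
            break
        u = u + np.linalg.solve(tangent(u), -R)
    return u

# Toy nonlinear equilibrium problem: a spring with cubic hardening,
# K u + c u^3 = f (hypothetical single-DOF example).
K, c, f = 1.0, 0.1, 2.0
res = lambda u: np.array([K * u[0] + c * u[0] ** 3 - f])
tan = lambda u: np.array([[K + 3 * c * u[0] ** 2]])
u = newton_raphson(res, tan, [0.0])
print(u, res(u))                    # residual driven to ~0
```

In a finite element setting the scalar tangent becomes the assembled tangent stiffness matrix, and the linear solve inside the loop is where the single-, multi-, and parallel-processor organizations discussed above differ.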
Fast metabolite identification with Input Output Kernel Regression.
Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho
2016-06-15
An important problem in metabolomics is the identification of metabolites from tandem mass spectrometry data. Machine learning methods have recently been proposed to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the molecule space. We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. The method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a pre-image problem, consisting of mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method decreases the running times for the training step and the test step by several orders of magnitude over the preceding methods. Contact: celine.brouard@aalto.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
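The two-phase scheme can be sketched on toy data: kernel ridge regression from the input space into the output feature space, followed by a pre-image step that ranks a finite candidate set by output-kernel similarity. Everything here is a synthetic stand-in (Gaussian input kernel, linear output kernel, random vectors in place of real spectra and fingerprints), not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Synthetic stand-ins: "spectra" X and binary "fingerprints" Y such that
# similar spectra have similar fingerprints (toy data, not real MS/MS).
n, d = 40, 5
X = rng.normal(size=(n, d))
X[0] = 1.0                                 # make the first example deterministic
Y = (X + 0.05 * rng.normal(size=(n, d)) > 0).astype(float)

lam = 1e-3
Kx = gaussian_kernel(X, X)
alpha = np.linalg.solve(Kx + lam * np.eye(n), np.eye(n))  # ridge coefficients

def score_candidates(x_new, candidates):
    """Phase 1 + 2: regress into the output feature space, then score each
    candidate via the (here linear) output kernel and rank."""
    kx = gaussian_kernel(x_new[None, :], X)        # (1, n) input similarities
    Ky_cand = Y @ candidates.T                     # (n, m) output kernel values
    return (kx @ alpha @ Ky_cand).ravel()

# Pre-image step: pick the best-scoring candidate from a finite "database".
x_query = X[0]
candidates = np.vstack([Y[0], 1 - Y[0], rng.integers(0, 2, size=d)])
scores = score_candidates(x_query, candidates)
print(scores.argmax())              # the true fingerprint should rank first
```

The key point mirrored here is that training touches only kernel matrices, and identification reduces to scoring candidates, which is what makes the approach fast relative to per-fingerprint-bit classifiers.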
Dispersive shock waves in systems with nonlocal dispersion of Benjamin-Ono type
NASA Astrophysics Data System (ADS)
El, G. A.; Nguyen, L. T. K.; Smyth, N. F.
2018-04-01
We develop a general approach to the description of dispersive shock waves (DSWs) for a class of nonlinear wave equations with a nonlocal Benjamin-Ono type dispersion term involving the Hilbert transform. Integrability of the governing equation is not a pre-requisite for the application of this method which represents a modification of the DSW fitting method previously developed for dispersive-hydrodynamic systems of Korteweg-de Vries (KdV) type (i.e. reducible to the KdV equation in the weakly nonlinear, long wave, unidirectional approximation). The developed method is applied to the Calogero-Sutherland dispersive hydrodynamics for which the classification of all solution types arising from the Riemann step problem is constructed and the key physical parameters (DSW edge speeds, lead soliton amplitude, intermediate shelf level) of all but one solution type are obtained in terms of the initial step data. The analytical results are shown to be in excellent agreement with results of direct numerical simulations.
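For reference, the Benjamin-Ono type dispersion referred to above can be written, in one common normalization (conventions for the nonlinear coefficient and the sign of the Hilbert-transform term vary across the literature), as

```latex
\partial_t u + 2u\,\partial_x u + \mathcal{H}[\partial_{xx} u] = 0,
\qquad
\mathcal{H}[f](x) = \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-\infty}^{\infty}
\frac{f(y)}{y - x}\,\mathrm{d}y,
```

where the nonlocal Hilbert-transform term replaces the local third-derivative dispersion of the KdV equation, u_t + 6 u u_x + u_xxx = 0, to which the original DSW fitting method applies.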
Fate of Fusarium Toxins during Brewing.
Habler, Katharina; Geissinger, Cajetan; Hofer, Katharina; Schüler, Jan; Moghari, Sarah; Hess, Michael; Gastl, Martina; Rychlik, Michael
2017-01-11
Some information is available about the fate of Fusarium toxins during the brewing process, but little is known about the individual processing steps. In our study we produced beer from two different barley cultivars inoculated with three different Fusarium species, namely, Fusarium culmorum, Fusarium sporotrichioides, and Fusarium avenaceum, producing a wide range of mycotoxins such as type B trichothecenes, type A trichothecenes, and enniatins. Using multi-mycotoxin LC-MS/MS stable isotope dilution methods, we were able to follow the fate of Fusarium toxins during the entire brewing process. In particular, the type B trichothecenes deoxynivalenol, 3-acetyldeoxynivalenol, and 15-acetyldeoxynivalenol showed similar behaviors. Between 35 and 52% of those toxins remained in the beer after filtration. The contents of the potentially hazardous deoxynivalenol-3-glucoside and the type A trichothecenes increased during mashing, but a rapid decrease of deoxynivalenol-3-glucoside content was found during the following steps of lautering and wort boiling. The concentration of enniatins greatly decreased with the discarding of spent grains or finally with the hot break. The results of our study show the retention of diverse Fusarium toxins during the brewing process and allow an assessment of the food safety of beer with regard to the monitored Fusarium mycotoxins.
Surface evolution in bare bamboo-type metal lines under diffusion and electric field effects
NASA Astrophysics Data System (ADS)
Averbuch, Amir; Israeli, Moshe; Nathan, Menachem; Ravve, Igor
2003-07-01
Irregularities such as voids and cracks often occur in bamboo-type metal lines of microelectronic interconnects. They increase the resistance of the circuits, and may even lead to a fatal failure. In this work, we analyze numerically the electromigration of an unpassivated bamboo-type line with pre-existing irregularities in its top surface (also called a grain-void interface). The bamboo line is subjected to surface diffusion forces and external electric fields. Under these forces, initial defects may either heal or become worse. The grain-void interface is considered to be one-dimensional, and the physical formulation of an electromigration and diffusion model results in two coupled, fourth order, one-dimensional time-dependent PDEs, with the boundary conditions imposed at the electrode points and at the triple point, which belongs to two neighboring grains and the void. These equations are discretized by finite differences on a regular grid in space, and by a Runge-Kutta integration scheme in time, and solved simultaneously with a static Laplace equation describing the voltage distribution throughout each grain, when the substrate conductivity is neglected. Since the voltage distribution is required only along an interface line, the two-dimensional discretization of the grain interior is not needed, and the static problem is solved by the boundary element method at each time step. The motion of the interface line is studied for different ratios between diffusion and electric field forces, and for different initial configurations of the grain-void interface. We study plain and tilted contour lines, considering positive and negative tilts with respect to the external electric field, a stepped contour with field lines entering or exiting the 'step', and a number of modifications of the classical Mullins problem of thermal grooving. 
We also consider a two-grain Mullins problem with a normal and tilted boundary between the grains, examining positive and negative tilts.
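The cost of explicit time stepping for such fourth-order interface equations can be illustrated with a method-of-lines sketch on the simplified linear model h_t = -h_xxxx (a stand-in for the coupled electromigration equations, which it does not reproduce): finite differences in space, Runge-Kutta in time, with the stability limit dt = O(dx^4).

```python
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]

def rhs(h):
    """Surface-diffusion-like model h_t = -h_xxxx on a periodic grid,
    with a standard 5-point fourth-difference stencil."""
    d4 = (np.roll(h, 2) - 4 * np.roll(h, 1) + 6 * h
          - 4 * np.roll(h, -1) + np.roll(h, -2)) / dx**4
    return -d4

def rk4_step(h, dt):
    """Classical 4th-order Runge-Kutta step."""
    k1 = rhs(h)
    k2 = rhs(h + 0.5 * dt * k1)
    k3 = rhs(h + 0.5 * dt * k2)
    k4 = rhs(h + dt * k3)
    return h + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Explicit stability demands dt = O(dx^4) for a fourth-order operator,
# which is why fine interface grids make these simulations expensive.
h, dt, t_end = np.sin(x), 1e-5, 0.1
for _ in range(int(round(t_end / dt))):
    h = rk4_step(h, dt)

print(np.max(np.abs(h - np.exp(-t_end) * np.sin(x))))  # sin(x) decays as e^{-t}
```

For sin(x) the exact solution of this model is e^{-t} sin(x), which gives a direct accuracy check on the stencil and the integrator.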
Signal conditioning units for vibration measurement in HUMS
NASA Astrophysics Data System (ADS)
Wu, Kaizhi; Liu, Tingting; Yu, Zirong; Chen, Lijuan; Huang, Xinjie
2018-03-01
A signal conditioning unit for vibration measurement in HUMS is proposed in this paper. Because the vibrations caused by different helicopter components occur at different frequencies, a two-step amplifier and a programmable anti-aliasing filter are designed to meet the measurement requirements of different types of helicopter. Vibration signals are first converted into measurable electrical signals by an ICP driver. A pre-amplifier and a programmable gain amplifier are then applied to magnify the weak electrical signals, and the programmable anti-aliasing filter is utilized to suppress noise interference. The unit was tested using a function signal generator and an oscilloscope. The experimental results demonstrate the effectiveness of the proposed method both quantitatively and qualitatively, and the method can meet the measurement requirements of different types of helicopter.
Assessing synthetic strategies: total syntheses of (+/-)-neodolabellane-type diterpenoids.
Valente, Cory; Organ, Michael G
2008-01-01
Two strategies, namely a cross-metathesis/ring-closing metathesis and Pd-catalyzed Stille allylation/Nozaki-Hiyama-Kishi coupling, are examined for the preparation of neodolabellane-type diterpenoids 1 and 2. Whereas the first approach possessed synthetic limitations, the latter was successfully employed to provide compounds 1 and 2 in 8.8% (14 steps) and 8% (15 steps) overall yields, respectively.
Two-Step Formal Advertisement: An Examination.
1976-10-01
The purpose of this report is to examine the potential application of the Two-Step Formal Advertisement method of procurement. Emphasis is placed on...Step formal advertising is a method of procurement designed to take advantage of negotiation flexibility and at the same time obtain the benefits of...formal advertising . It is used where the specifications are not sufficiently definite or may be too restrictive to permit full and free competition
Intrinsically water-repellent copper oxide surfaces; An electro-crystallization approach
NASA Astrophysics Data System (ADS)
Akbari, Raziyeh; Ramos Chagas, Gabriela; Godeau, Guilhem; Mohammadizadeh, Mohammadreza; Guittard, Frédéric; Darmanin, Thierry
2018-06-01
The use of metal oxide thin layers has increased owing to their good durability under environmental conditions. In this work, reproducible nanostructured crystallite Cu2O thin films, developed by an electrodeposition method without any physical or chemical modification, demonstrate good hydrophobicity. Copper (I) oxide (Cu2O) layers were fabricated on gold/Si(1 0 0) substrates by different electrodeposition methods, i.e. galvanostatic deposition, cyclic voltammetry, and pulsed potentiostatic deposition, using copper sulfate (in various concentrations) as a precursor. The dominant crystalline face of the prepared Cu2O samples is (1 1 1), which is the most hydrophobic facet of the cubic Cu2O structure. Different crystallite structures, such as nanotriangles and truncated octahedrons, were formed on the surface by the various electrodeposition methods. The contact angle (θw), measured as a function of rest time, increased at different rates for the different electrodeposition methods, reaching about 135°. In addition, two-step deposition surfaces were prepared by applying two of the mentioned methods alternately. In general, the morphology of the two-step deposition surfaces changed compared with that of the one-step samples, allowing the formation of different crystallite shapes. Moreover, the two-step deposition layers showed larger θw than the corresponding one-step deposition layers. The highest observed θw was obtained for one of the two-step deposition layers, owing to the creation of small octahedral structures with narrow and deep valleys on the surface; one exception, however, was attributed to large structures and broad valleys on the surface. It is therefore possible to engineer different crystallite shapes using the proposed two-step deposition method. Hydrophobic crystallite thin films are expected to find use in environmental and electronic applications to save energy and preserve material properties.
Proteomics with Mass Spectrometry Imaging: Beyond Amyloid Typing.
Lavatelli, Francesca; Merlini, Giampaolo
2018-04-01
Detection and typing of amyloid deposits in tissues are two crucial steps in the management of systemic amyloidoses. The presence of amyloid deposits is routinely evaluated through Congo red staining, whereas proteomics is now a mainstay in the identification of the deposited proteins. In article number 1700236, Winter et al. [Proteomics 2017, 17, Issue 22] describe a novel method based on MALDI-MS imaging coupled to ion mobility separation and peptide filtering, to detect the presence of amyloid in histology samples and to identify its composition, while preserving the spatial distribution of proteins in tissues. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Perfect Color Registration Realized.
ERIC Educational Resources Information Center
Lovedahl, Gerald G.
1979-01-01
Describes apparatus and procedures to design and construct a "printing box" as a graphic arts project to make color prints on T-shirts using photography, indirect and direct photo screen methods, and other types of stencils. Step-by-step photographs illustrate the process. (MF)
NASA Astrophysics Data System (ADS)
Ding, Lu; Chen, Ying-Tian; Hu, Sen; Zhang, Yang
2010-07-01
Following Chen's method [Commun. Theor. Phys. 52 (2009) 549] of using 8-step line tilting to realize tip tilting, it is discovered that, to achieve finer rotation, a 16-step line tilting method may realize a rotation two orders of magnitude smaller than that achieved by the 8-step method.
Type 1 Does The Two-Step: Type 1 Secretion Substrates With A Functional Periplasmic Intermediate.
Smith, Timothy J; Sondermann, Holger; O'Toole, George A
2018-06-04
Bacteria have evolved several secretion strategies for polling and responding to environmental flux and insult. Of these, the type 1 secretion system (T1SS) is known to secrete an array of biologically diverse proteins - from small < 10 kDa bacteriocins to gigantic adhesins with a mass over 1 MDa. For the last several decades, T1SS have been characterized as a one-step translocation strategy whereby the secreted substrate is transported directly into the extracellular environment from the cytoplasm with no periplasmic intermediate. Recent phylogenetic, biochemical, and genetic evidence points to a distinct sub-group of T1SS machinery linked with a bacterial transglutaminase-like cysteine proteinase (BTLCP), which uses a two-step secretion mechanism. BTLCP-linked T1SS transport a class of repeats-in-toxin (RTX) adhesins that are critical for biofilm formation. The prototype of this RTX adhesin group, LapA of Pseudomonas fluorescens Pf0-1, uses a novel N-terminal retention module to anchor the adhesin at the cell surface as a secretion intermediate threaded through the outer membrane-localized, TolC-like protein LapE. This secretion intermediate is post-translationally cleaved by the BTLCP family LapG protein to release LapA from its cognate T1SS pore. Thus, secretion of LapA and related RTX adhesins into the extracellular environment appears to be a T1SS-mediated, two-step process that involves a periplasmic intermediate. In this review, we contrast the T1SS machinery and substrates of the BTLCP-linked two-step secretion process with those of the classical one-step T1SS to better understand the newly recognized and expanded role of this secretion machinery. Copyright © 2018 American Society for Microbiology.
On the error propagation of semi-Lagrange and Fourier methods for advection problems
Einkemmer, Lukas; Ostermann, Alexander
2015-01-01
In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018
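The round-off-limited behavior of the Fourier method can be reproduced with a minimal advection solver in which each time step is an exact phase shift of every mode. After steps summing to one full period the solution should return exactly to its initial state, so any residual is pure round-off accumulated over the step count; grid size, profile, and step count below are illustrative.

```python
import numpy as np

n = 128
x = np.linspace(0, 1, n, endpoint=False)
u0 = np.exp(np.sin(2 * np.pi * x))          # smooth periodic initial profile
k = 2j * np.pi * np.fft.fftfreq(n, d=1.0 / n)

# Fourier advection of u_t + c u_x = 0: each step multiplies every mode by
# a phase factor, so the scheme is exact up to round-off error.
c, dt, steps = 1.0, 1e-3, 1000              # 1000 steps advect one full period
u = u0.copy()
phase = np.exp(-k * c * dt)
for _ in range(steps):
    u = np.fft.ifft(np.fft.fft(u) * phase)
u = u.real

print(np.max(np.abs(u - u0)))               # pure round-off, grows with step count
```

Repeating the experiment with larger step counts shows how the residual grows with the number of transforms, which is the effect the modified Cooley-Tukey variant discussed above is designed to tame.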
NASA Technical Reports Server (NTRS)
Chang, S. C.
1986-01-01
A two-step semidirect procedure is developed to accelerate the one-step procedure described in NASA TP-2529. For a set of constant coefficient model problems, the acceleration factor increases from 1 to 2 as the one-step procedure convergence rate decreases from + infinity to 0. It is also shown numerically that the two-step procedure can substantially accelerate the convergence of the numerical solution of many partial differential equations (PDE's) with variable coefficients.
Numerical solution of second order ODE directly by two point block backward differentiation formula
NASA Astrophysics Data System (ADS)
Zainuddin, Nooraini; Ibrahim, Zarina Bibi; Othman, Khairil Iskandar; Suleiman, Mohamed; Jamaludin, Noraini
2015-12-01
The Direct Two Point Block Backward Differentiation Formula (BBDF2) for solving second-order ordinary differential equations (ODEs) is presented in this paper. The method is derived by differentiating the interpolating polynomial using three back values. In BBDF2, two approximate solutions are produced simultaneously at each step of integration. The derived method is implemented with a fixed step size, and the numerical results that follow demonstrate the advantage of the direct method compared with the reduction method.
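For orientation, the reduction approach used here as the comparison baseline can be sketched as follows: rewrite y'' = f as a first-order system and apply a standard fixed-step scalar BDF2 (this is not the two-point block formula itself). The linear test problem y'' = -y makes the implicit solve a small linear system.

```python
import numpy as np

# Reduction approach for y'' = -y: rewrite as the first-order system
# Y' = A Y with Y = [y, y'], then apply fixed-step BDF2:
#   Y_{n+1} = (4 Y_n - Y_{n-1})/3 + (2h/3) A Y_{n+1}
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
h, t_end = 0.01, 1.0
steps = int(round(t_end / h))

M = np.eye(2) - (2 * h / 3) * A            # implicit BDF2 matrix
Y_prev = np.array([1.0, 0.0])              # y(0) = 1, y'(0) = 0, so y = cos t
Y_curr = np.array([np.cos(h), -np.sin(h)]) # seed the 2-step method exactly

for _ in range(steps - 1):
    rhs = (4 * Y_curr - Y_prev) / 3
    Y_prev, Y_curr = Y_curr, np.linalg.solve(M, rhs)

print(abs(Y_curr[0] - np.cos(t_end)))      # second-order accurate: O(h^2)
```

A block variant would produce two such solution values per step instead of one, and the direct formulation avoids doubling the system size in the first place.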
Efficacy and Safety of the Once-Daily GLP-1 Receptor Agonist Lixisenatide in Monotherapy
Fonseca, Vivian A.; Alvarado-Ruiz, Ricardo; Raccah, Denis; Boka, Gabor; Miossec, Patrick; Gerich, John E.
2012-01-01
OBJECTIVE To assess efficacy and safety of lixisenatide monotherapy in type 2 diabetes. RESEARCH DESIGN AND METHODS Randomized, double-blind, 12-week study of 361 patients not on glucose-lowering therapy (HbA1c 7–10%) allocated to one of four once-daily subcutaneous dose increase regimens: lixisenatide 2-step (10 μg for 1 week, 15 μg for 1 week, and then 20 μg; n = 120), lixisenatide 1-step (10 μg for 2 weeks and then 20 μg; n = 119), placebo 2-step (n = 61), or placebo 1-step (n = 61) (placebo groups were combined for analyses). Primary end point was HbA1c change from baseline to week 12. RESULTS Once-daily lixisenatide significantly improved HbA1c (mean baseline 8.0%) in both groups (least squares mean change vs. placebo: −0.54% for 2-step, −0.66% for 1-step; P < 0.0001). Significantly more lixisenatide patients achieved HbA1c <7.0% (52.2% 2-step, 46.5% 1-step) and ≤6.5% (31.9% 2-step, 25.4% 1-step) versus placebo (26.8% and 12.5%, respectively; P < 0.01). Lixisenatide led to marked significant improvements of 2-h postprandial glucose levels and blood glucose excursions measured during a standardized breakfast test. A significant decrease in fasting plasma glucose was observed in both lixisenatide groups versus placebo. Mean decreases in body weight (∼2 kg) were observed in all groups. The most common adverse events were gastrointestinal—nausea was the most frequent (lixisenatide 23% overall, placebo 4.1%). Symptomatic hypoglycemia occurred in 1.7% of lixisenatide and 1.6% of placebo patients, with no severe episodes. Safety/tolerability was similar for the two dose regimens. CONCLUSIONS Once-daily lixisenatide monotherapy significantly improved glycemic control with a pronounced postprandial effect (75% reduction in glucose excursion) and was safe and well tolerated in type 2 diabetes. PMID:22432104
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takashiri, Masayuki, E-mail: takashiri@tokai-u.jp; Kurita, Kensuke; Hagino, Harutoshi
2015-08-14
A two-step method that combines homogeneous electron beam (EB) irradiation and thermal annealing has been developed to enhance the thermoelectric properties of nanocrystalline bismuth selenium telluride thin films. The thin films, prepared using a flash evaporation method, were treated with EB irradiation in a N2 atmosphere at room temperature and an acceleration voltage of 0.17 MeV. Thermal annealing was performed under Ar/H2 (5%) at 300 °C for 60 min. X-ray diffraction was used to determine that compositional phase separation between bismuth telluride and bismuth selenium telluride developed in the thin films exposed to higher EB doses and thermal annealing. We propose that the phase separation was induced by fluctuations in the distribution of selenium atoms after EB irradiation, followed by the migration of selenium atoms to more stable sites during thermal annealing. As a result, thin film crystallinity improved and mobility was significantly enhanced. This indicates that the phase separation resulting from the two-step method enhanced, rather than disturbed, the electron transport. Both the electrical conductivity and the Seebeck coefficient were improved following the two-step method. Consequently, the power factor of thin films that underwent the two-step method was enhanced to 20 times (from 0.96 to 21.0 μW/(cm K²)) that of the thin films treated with EB irradiation alone.
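The power factor quoted above is PF = S²σ (Seebeck coefficient squared times electrical conductivity); a small unit-conversion helper makes the μW/(cm K²) figures easy to check. The input values below are illustrative, not the paper's measured data.

```python
# Thermoelectric power factor PF = S^2 * sigma, in the units used above
# (microW / (cm K^2)). The example values are illustrative only.
def power_factor(seebeck_uV_per_K, sigma_S_per_cm):
    S = seebeck_uV_per_K * 1e-6            # convert microV/K -> V/K
    return S**2 * sigma_S_per_cm * 1e6     # W/(cm K^2) -> microW/(cm K^2)

print(power_factor(200.0, 1000.0))         # 200 microV/K, 1000 S/cm -> ~40
```

Because PF is quadratic in S, the simultaneous improvement of conductivity and Seebeck coefficient reported above compounds strongly in the 20x power-factor gain.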
NASA Astrophysics Data System (ADS)
Lafitte, Pauline; Melis, Ward; Samaey, Giovanni
2017-07-01
We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.
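The inner/outer structure can be sketched with projective forward Euler on a toy two-scale relaxation system (a stand-in for the kinetic BGK equation, not the full scheme): a few small inner Euler steps damp the stiff component, then the slow time derivative is extrapolated over the remainder of a large outer step. All parameters are illustrative.

```python
import numpy as np

eps = 1e-4                         # stiffness of the BGK-like relaxation term

def rhs(y):
    """Toy two-scale system: u relaxes to v at rate 1/eps (stiff), while v
    drifts slowly; on the slow manifold u = v the dynamics is v' = -v."""
    u, v = y
    return np.array([-(u - v) / eps, -u])

def projective_euler_step(y, dt_outer, dt_inner, K):
    """Projective forward Euler: K small damping steps with step dt_inner,
    then extrapolate the estimated slow derivative over the remainder of
    the outer step."""
    for _ in range(K):
        y_prev, y = y, y + dt_inner * rhs(y)
    return y + (dt_outer - K * dt_inner) * (y - y_prev) / dt_inner

y = np.array([0.0, 1.0])           # start off the slow manifold (u != v)
dt_outer, dt_inner, K = 0.05, eps, 2
for _ in range(20):                # advance to t = 1.0
    y = projective_euler_step(y, dt_outer, dt_inner, K)

print(abs(y[1] - np.exp(-1.0)))    # tracks the slow dynamics with dt >> eps
```

The outer step 0.05 is 500 times the stiff time scale eps, mirroring the claim above that the outer step restriction is set by the slow (CFL-like) dynamics rather than by the stiffness of the source term.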
Ghanbarian, Maryam; Afzali, Daryoush; Mostafavi, Ali; Fathirad, Fariba
2013-01-01
A new displacement-dispersive liquid-liquid microextraction method based on the solidification of floating organic drop was developed for separation and preconcentration of Pd(II) in road dust and aqueous samples. This method involves two steps of dispersive liquid-liquid microextraction based on solidification. In Step 1, Cu ions react with diethyldithiocarbamate (DDTC) to form Cu-DDTC complex, which is extracted by dispersive liquid-liquid microextraction based on a solidification procedure using 1-undecanol (extraction solvent) and ethanol (dispersive solvent). In Step 2, the extracted complex is first dispersed using ethanol in a sample solution containing Pd ions, then a dispersive liquid-liquid microextraction based on a solidification procedure is performed creating an organic drop. In this step, Pd(II) replaces Cu(II) from the pre-extracted Cu-DDTC complex and goes into the extraction solvent phase. Finally, the Pd(II)-containing drop is introduced into a graphite furnace using a microsyringe, and Pd(II) is determined using atomic absorption spectrometry. Several factors that influence the extraction efficiency of Pd and its subsequent determination, such as extraction and dispersive solvent type and volume, pH of sample solution, centrifugation time, and concentration of DDTC, are optimized.
Joining of dissimilar materials
Tucker, Michael C; Lau, Grace Y; Jacobson, Craig P
2012-10-16
A method of joining dissimilar materials having different ductility, involves two principal steps: Decoration of the more ductile material's surface with particles of a less ductile material to produce a composite; and, sinter-bonding the composite produced to a joining member of a less ductile material. The joining method is suitable for joining dissimilar materials that are chemically inert towards each other (e.g., metal and ceramic), while resulting in a strong bond with a sharp interface between the two materials. The joining materials may differ greatly in form or particle size. The method is applicable to various types of materials including ceramic, metal, glass, glass-ceramic, polymer, cermet, semiconductor, etc., and the materials can be in various geometrical forms, such as powders, fibers, or bulk bodies (foil, wire, plate, etc.). Composites and devices with a decorated/sintered interface are also provided.
NASA Astrophysics Data System (ADS)
Jiang, Wei; Zhou, Jianzhong; Zheng, Yang; Liu, Han
2017-11-01
Accurate degradation tendency measurement is vital for the secure operation of mechanical equipment. However, the existing techniques and methodologies for degradation measurement still face challenges, such as lack of appropriate degradation indicator, insufficient accuracy, and poor capability to track the data fluctuation. To solve these problems, a hybrid degradation tendency measurement method for mechanical equipment based on a moving window and Grey-Markov model is proposed in this paper. In the proposed method, a 1D normalized degradation index based on multi-feature fusion is designed to assess the extent of degradation. Subsequently, the moving window algorithm is integrated with the Grey-Markov model for the dynamic update of the model. Two key parameters, namely the step size and the number of states, contribute to the adaptive modeling and multi-step prediction. Finally, three types of combination prediction models are established to measure the degradation trend of equipment. The effectiveness of the proposed method is validated with a case study on the health monitoring of turbine engines. Experimental results show that the proposed method has better performance, in terms of both measuring accuracy and data fluctuation tracing, in comparison with other conventional methods.
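The grey-model core of such a scheme can be sketched with a plain GM(1,1) forecast (without the Markov correction or the moving-window refitting described above); the exponential "degradation index" is synthetic. In the moving-window variant, the parameters a and b would be refit on the most recent window at each step.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=1):
    """GM(1,1) grey model: fit the whitened equation dx1/dt + a*x1 = b to
    the accumulated series x1 = cumsum(x0) by least squares, then forecast
    and difference back to the original scale."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0.0)            # back to the original series

# A grey model tracks near-exponential degradation trends well:
series = 2.0 * np.exp(0.1 * np.arange(8))          # synthetic degradation index
pred = gm11_forecast(series, n_ahead=1)
print(pred[-1], 2.0 * np.exp(0.1 * 8))             # one-step forecast vs. truth
```

GM(1,1) alone smooths away fluctuations, which is exactly the weakness the Markov state-transition correction in the hybrid method is meant to address.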
The natural history of biocatalytic mechanisms.
Nath, Neetika; Mitchell, John B O; Caetano-Anollés, Gustavo
2014-05-01
Phylogenomic analysis of the occurrence and abundance of protein domains in proteomes has recently shown that the α/β architecture is probably the oldest fold design. This holds important implications for the origins of biochemistry. Here we explore structure-function relationships addressing the use of chemical mechanisms by ancestral enzymes. We test the hypothesis that the oldest folds used the most mechanisms. We start by tracing biocatalytic mechanisms operating in metabolic enzymes along a phylogenetic timeline of the first appearance of homologous superfamilies of protein domain structures from CATH. A total of 335 enzyme reactions were retrieved from MACiE and were mapped over fold age. We define a mechanistic step type as one of the 51 mechanistic annotations given in MACiE, and each step of each of the 335 mechanisms was described using one or more of these annotations. We find that the first two folds, the P-loop containing nucleotide triphosphate hydrolase and the NAD(P)-binding Rossmann-like homologous superfamilies, were α/β architectures responsible for introducing 35% (18/51) of the known mechanistic step types. We find that these two oldest structures in the phylogenomic analysis of protein domains introduced many mechanistic step types that were later combinatorially spread in catalytic history. The most common mechanistic step types included fundamental building blocks of enzyme chemistry: "Proton transfer," "Bimolecular nucleophilic addition," "Bimolecular nucleophilic substitution," and "Unimolecular elimination by the conjugate base." They were associated with the most ancestral fold structure typical of P-loop containing nucleotide triphosphate hydrolases. Over half of the mechanistic step types were introduced in the evolutionary timeline before the appearance of structures specific to diversified organisms, during a period of architectural diversification.
The other half unfolded gradually after organismal diversification and during a period that spanned ∼2 billion years of evolutionary history.
Novel Anthropometry Based on 3D-Bodyscans Applied to a Large Population Based Cohort
Löffler-Wirth, Henry; Willscher, Edith; Ahnert, Peter; Wirkner, Kerstin; Engel, Christoph; Loeffler, Markus; Binder, Hans
2016-01-01
Three-dimensional (3D) whole body scanners are increasingly used as precise measuring tools for the rapid quantification of anthropometric measures in epidemiological studies. We analyzed 3D whole body scanning data of nearly 10,000 participants of a cohort collected from the adult population of Leipzig, one of the largest cities in Eastern Germany. We present a novel approach for the systematic analysis of these data which aims at identifying distinguishable clusters of body shapes called body types. In the first step, our method aggregates body measures provided by the scanner into meta-measures, each representing one relevant dimension of the body shape. In the next step, we stratified the cohort into body types and assessed their stability and dependence on the size of the underlying cohort. Using self-organizing maps (SOM) we identified thirteen robust meta-measures and fifteen body types comprising between 1 and 18 percent of the total cohort size. Thirteen of them are virtually gender specific (six for women and seven for men) and thus reflect the most abundant body shapes of women and men. Two body types include both women and men, and describe androgynous body shapes that lack typical gender-specific features. The body types disentangle a large variability of body shapes, enabling distinctions which go beyond traditional indices such as the body mass index, the waist-to-height ratio, the waist-to-hip ratio and the mortality-hazard ABSI index. As a next step, we will link the identified body types with disease predispositions to study how the size and shape of the human body impact health and disease. PMID:27467550
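The SOM-based stratification described above can be illustrated with a minimal self-organizing map. Everything here is a toy stand-in: two synthetic "meta-measures" per person, a 3x3 map, and invented cluster centers, rather than the study's thirteen meta-measures and larger map.

```python
import random, math

random.seed(0)

def train_som(data, rows=3, cols=3, epochs=500, lr0=0.5, sigma0=1.5):
    """Train a tiny SOM: nodes on a grid compete for samples and are pulled
    toward them, with a neighborhood that shrinks over time."""
    dim = len(data[0])
    nodes = [[random.random() for _ in range(dim)] for _ in range(rows * cols)]
    pos = [(i // cols, i % cols) for i in range(rows * cols)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.1
        x = random.choice(data)
        # Best-matching unit (BMU): node closest to the sample
        bmu = min(range(len(nodes)),
                  key=lambda i: sum((nodes[i][d] - x[d]) ** 2 for d in range(dim)))
        for i, w in enumerate(nodes):
            # Grid distance to the BMU controls the neighborhood update
            g2 = (pos[i][0] - pos[bmu][0]) ** 2 + (pos[i][1] - pos[bmu][1]) ** 2
            h = math.exp(-g2 / (2 * sigma ** 2))
            for d in range(dim):
                w[d] += lr * h * (x[d] - w[d])
    return nodes

def assign(nodes, x):
    """Map a sample to its best-matching node (its 'body type' cluster)."""
    return min(range(len(nodes)),
               key=lambda i: sum((nodes[i][d] - x[d]) ** 2 for d in range(len(x))))

# Two synthetic body-shape clusters (e.g. normalized height vs. girth)
cluster_a = [(random.gauss(0.2, 0.05), random.gauss(0.8, 0.05)) for _ in range(50)]
cluster_b = [(random.gauss(0.8, 0.05), random.gauss(0.2, 0.05)) for _ in range(50)]
nodes = train_som(cluster_a + cluster_b)
```

After training, samples from the two synthetic clusters map to different nodes, which is the sense in which SOM nodes act as "body types."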
Variation of methods in small-scale safety and thermal testing of improvised explosives
Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.; ...
2014-09-29
Here, one of the first steps in establishing safe handling procedures for explosives is small-scale safety and thermal (SSST) testing. To better understand the response of homemade or improvised explosives (HMEs) to SSST testing, 16 HME materials were compared to 3 standard military explosives in a proficiency-type round robin study among five laboratories, two U.S. Department of Defense and three U.S. Department of Energy, sponsored by the Department of Homeland Security, Science & Technology Directorate, Explosives Division.
Agrawal, S; Christodoulou, C; Gait, M J
1986-01-01
The syntheses are described of two types of linker molecule useful for the specific attachment of non-radioactive labels such as biotin and fluorophores to the 5' terminus of synthetic oligodeoxyribonucleotides. The linkers are designed such that they can be coupled to the oligonucleotide as a final step in solid-phase synthesis using commercial DNA synthesis machines. Increased sensitivity of biotin detection was possible using an anti-biotin hybridoma/peroxidase detection system. PMID:3748808
2006-07-01
dislocation-loop expansion. The new model was used to simulate the thermally reversible flow behaviour for C-S type two-step deformation, and the results are...implemented into the finite element software ABAQUS through a User MATerial subroutine (UMAT). A tangent modulus method [48] was used for the time...locking under a dislocation loop-expansion configuration. This approach was motivated by modern understanding of dislocation mechanisms for Ni3Al
Fineberg, Jeffrey D; Ritter, David M; Covarrubias, Manuel
2012-11-01
A-type voltage-gated K(+) (Kv) channels self-regulate their activity by inactivating directly from the open state (open-state inactivation [OSI]) or by inactivating before they open (closed-state inactivation [CSI]). To determine the inactivation pathways, it is often necessary to apply several pulse protocols, pore blockers, single-channel recording, and kinetic modeling. However, intrinsic hurdles may preclude the standardized application of these methods. Here, we implemented a simple method inspired by earlier studies of Na(+) channels to analyze macroscopic inactivation and conclusively deduce the pathways of inactivation of recombinant and native A-type Kv channels. We investigated two distinct A-type Kv channels expressed heterologously (Kv3.4 and Kv4.2 with accessory subunits) and their native counterparts in dorsal root ganglion and cerebellar granule neurons. This approach applies two conventional pulse protocols to examine inactivation induced by (a) a simple step (single-pulse inactivation) and (b) a conditioning step (double-pulse inactivation). Consistent with OSI, the rate of Kv3.4 inactivation (i.e., the negative first derivative of double-pulse inactivation) precisely superimposes on the profile of the Kv3.4 current evoked by a single pulse because the channels must open to inactivate. In contrast, the rate of Kv4.2 inactivation is asynchronous, already changing at earlier times relative to the profile of the Kv4.2 current evoked by a single pulse. Thus, Kv4.2 inactivation occurs uncoupled from channel opening, indicating CSI. Furthermore, the inactivation time constant versus voltage relation of Kv3.4 decreases monotonically with depolarization and levels off, whereas that of Kv4.2 exhibits a J-shape profile. We also manipulated the inactivation phenotype by changing the subunit composition and show how CSI and CSI combined with OSI might affect spiking properties in a full computational model of the hippocampal CA1 neuron. 
This work unambiguously elucidates contrasting inactivation pathways in neuronal A-type Kv channels and demonstrates how distinct pathways might impact neurophysiological activity.
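The diagnostic logic described in this abstract, that for open-state inactivation (OSI) the rate of inactivation superimposes on the macroscopic current, can be sketched with a minimal kinetic model. The C -> O -> I scheme and its rate constants below are illustrative assumptions, not fitted Kv3.4/Kv4.2 parameters.

```python
# Euler integration of a three-state scheme in which channels inactivate
# only from the open state (OSI): C -> O -> I.
dt = 0.01
k_co, k_oi = 2.0, 1.0          # activation and open-state inactivation rates
C, O, I = 1.0, 0.0, 0.0
open_prob, avail = [], []
for _ in range(1000):
    dC = -k_co * C
    dO = k_co * C - k_oi * O   # inactivation drains the open state only
    dI = k_oi * O
    C += dC * dt; O += dO * dt; I += dI * dt
    open_prob.append(O)
    avail.append(1.0 - I)      # fraction not yet inactivated (double-pulse availability)

# Rate of inactivation: negative first derivative of availability
rate = [-(avail[i + 1] - avail[i]) / dt for i in range(len(avail) - 1)]

# For OSI this rate equals k_oi * open_prob, so the normalized traces match;
# for closed-state inactivation (CSI) they would be asynchronous.
peak_rate, peak_open = max(rate), max(open_prob)
mismatch = max(abs(r / peak_rate - o / peak_open)
               for r, o in zip(rate, open_prob))
```

Because inactivation proceeds only through the open state here, the normalized rate trace superimposes exactly on the normalized open probability, which is the signature the paper uses to distinguish OSI from CSI.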
Effectiveness of en masse versus two-step retraction: a systematic review and meta-analysis.
Rizk, Mumen Z; Mohammed, Hisham; Ismael, Omar; Bearn, David R
2018-01-05
This review aims to compare the effectiveness of en masse and two-step retraction methods during orthodontic space closure regarding anchorage preservation and anterior segment retraction, and to assess their effect on the duration of treatment and root resorption. An electronic search for potentially eligible randomized controlled trials and prospective controlled trials was performed in five electronic databases up to July 2017. The process of study selection, data extraction, and quality assessment was performed by two reviewers independently. A narrative review is presented in addition to a quantitative synthesis of the pooled results where possible. The Cochrane risk of bias tool and the Newcastle-Ottawa Scale were used for the methodological quality assessment of the included studies. Eight studies were included in the qualitative synthesis in this review. Four studies were included in the quantitative synthesis. The en masse/miniscrew combination showed a statistically significant standardized mean difference regarding anchorage preservation, −2.55 mm (95% CI −2.99 to −2.11), and the amount of upper incisor retraction, −0.38 mm (95% CI −0.70 to −0.06), when compared to a two-step/conventional anchorage combination. Qualitative synthesis suggested that en masse retraction requires less time than two-step retraction, with no difference in the amount of root resorption. Both en masse and two-step retraction methods are effective during the space closure phase. The en masse/miniscrew combination is superior to the two-step/conventional anchorage combination with regard to anchorage preservation and amount of retraction. Limited evidence suggests that anchorage reinforcement with a headgear produces similar results with both retraction methods. Limited evidence also suggests that en masse retraction may require less time and that no significant differences exist in the amount of root resorption between the two methods.
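The quantitative synthesis behind pooled estimates like those above typically uses inverse-variance weighting. A minimal fixed-effect sketch follows; the per-study mean differences and standard errors are invented illustrative numbers, not the review's data.

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Fixed-effect inverse-variance pooling: returns the pooled estimate
    and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in std_errors]     # precision weights
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical anchorage-loss differences (mm) from four studies
est, (lo, hi) = pool_fixed_effect([-2.8, -2.3, -2.6, -2.5],
                                  [0.30, 0.25, 0.40, 0.35])
```

More precise studies (smaller standard errors) dominate the pooled estimate; a random-effects model would additionally widen the interval to absorb between-study heterogeneity.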
NASA Astrophysics Data System (ADS)
U-thaipan, Kasira; Tedsree, Karaked
2018-06-01
The surface morphology of flower-like Ag/ZnO nanorods can be manipulated by adopting different synthetic routes and by loading different levels of Ag, in order to alter their surface structures and achieve the maximum photocatalytic efficiency. In the single-step preparation method, Ag/ZnO was prepared by directly heating a mixture of Zn2+ and Ag+ precursors in an aqueous NaOH-ethylene glycol solution, while in the two-step preparation method an intermediate of flower-shaped ZnO nanorods was obtained by a hydrothermal process before depositing Ag particles on the ZnO surfaces by chemical reduction. The structure, morphology and optical properties of the synthesized samples were characterized using TEM, SEM, XRD, DRS and PL techniques. The sample prepared by the single-step method is characterized by agglomeration of Ag atoms as clusters on the surface of ZnO, whereas in the sample prepared by the two-step method Ag atoms are uniformly dispersed and deposited as discrete Ag nanoparticles on the surface of ZnO. A significant enhancement in the absorption of visible light was evident for Ag/ZnO samples prepared by the two-step method, especially with low Ag content (0.5 mol%). The flower-like Ag/ZnO nanorods prepared with 0.5 mol% Ag by the two-step process were found to be the most efficient photocatalyst for the degradation of phenol, decomposing 90% of phenol within 120 min.
Abstract Interpreters for Free
NASA Astrophysics Data System (ADS)
Might, Matthew
In small-step abstract interpretations, the concrete and abstract semantics bear an uncanny resemblance. In this work, we present an analysis-design methodology that both explains and exploits that resemblance. Specifically, we present a two-step method to convert a small-step concrete semantics into a family of sound, computable abstract interpretations. The first step re-factors the concrete state-space to eliminate recursive structure; this refactoring of the state-space simultaneously determines a store-passing-style transformation on the underlying concrete semantics. The second step uses inference rules to generate an abstract state-space and a Galois connection simultaneously. The Galois connection allows the calculation of the "optimal" abstract interpretation. The two-step process is unambiguous, but nondeterministic: at each step, analysis designers face choices. Some of these choices ultimately influence properties such as flow-, field- and context-sensitivity. Thus, under the method, we can give the emergence of these properties a graph-theoretic characterization. To illustrate the method, we systematically abstract the continuation-passing style lambda calculus to arrive at two distinct families of analyses. The first is the well-known k-CFA family of analyses. The second consists of novel "environment-centric" abstract interpretations, none of which appear in the literature on static analysis of higher-order programs.
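The soundness relationship the paper establishes via a Galois connection can be illustrated with the standard textbook example of sign abstraction: a concrete evaluator and an abstract evaluator over signs for a tiny expression language. This is an illustrative analogue, not the paper's CPS/k-CFA construction.

```python
def concrete(expr):
    """Concrete semantics: evaluate an expression to an integer."""
    op, *args = expr
    if op == 'lit':
        return args[0]
    a, b = concrete(args[0]), concrete(args[1])
    return a + b if op == 'add' else a * b

def alpha(n):
    """Abstraction function: map a concrete integer to its sign."""
    return 'neg' if n < 0 else ('zero' if n == 0 else 'pos')

def abstract(expr):
    """Abstract semantics: evaluate over the sign domain, losing precision
    where the abstraction cannot decide (result 'top')."""
    op, *args = expr
    if op == 'lit':
        return alpha(args[0])
    a, b = abstract(args[0]), abstract(args[1])
    if op == 'mul':
        if 'zero' in (a, b): return 'zero'      # zero annihilates products
        if 'top' in (a, b): return 'top'
        return 'pos' if a == b else 'neg'       # sign rule for multiplication
    # Addition: equal signs are preserved; mixed nonzero signs lose precision
    if 'top' in (a, b): return 'top'
    if a == b: return a
    if 'zero' in (a, b): return a if b == 'zero' else b
    return 'top'

e = ('mul', ('lit', -3), ('add', ('lit', 2), ('lit', 5)))
```

Soundness means the abstract result always covers the abstraction of the concrete result: here `abstract(e)` is `'neg'` and indeed `alpha(concrete(e))` is `'neg'`; where the abstract evaluator cannot decide, it answers `'top'` rather than guessing.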
Nevo, Daniel; Zucker, David M.; Tamimi, Rulla M.; Wang, Molin
2017-01-01
A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps–clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses’ Health Study to demonstrate the utility of our method. PMID:27558651
Gupta, Nimisha; Tripathi, Abhay Mani; Saha, Sonali; Dhinsa, Kavita; Garg, Aarti
2015-07-01
Newer developments in bonding agents have led to a better understanding of the factors affecting adhesion at the interface between composite and dentin, improving the longevity of restorations. The present study evaluated the influence of salivary contamination on the tensile bond strength of different generations of adhesive systems (two-step etch-and-rinse, two-step self-etch and one-step self-etch) during different bonding stages to dentin where isolation is not maintained. Superficial dentin surfaces of 90 extracted human molars were randomly divided into three study Groups (Group A: two-step etch-and-rinse adhesive system; Group B: two-step self-etch adhesive system; and Group C: one-step self-etch adhesive system) according to the generation of adhesive used. According to the treatment conditions in the different bonding steps, each Group was further divided into three Subgroups of ten teeth each. After adhesive application, resin composite blocks were built on the dentin and subsequently light cured. The teeth were then stored in water for 24 hours before tensile bond strength testing in a universal testing machine. The collected data were statistically analysed using one-way ANOVA and the Tukey HSD test. The one-step self-etch adhesive system showed the maximum mean tensile bond strength, followed in descending order by the two-step self-etch and the two-step etch-and-rinse adhesive systems, in both uncontaminated and saliva-contaminated conditions. Unlike the one-step self-etch adhesive system, saliva contamination reduced the tensile bond strength of the two-step self-etch and two-step etch-and-rinse adhesive systems. Furthermore, the step of the bonding procedure and the type of adhesive appear to affect the bond strength of adhesives contaminated with saliva.
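The one-way ANOVA used in studies like this one can be computed from first principles. The bond-strength values (MPa) below are invented illustrative data, not the study's measurements, and the Tukey HSD post-hoc step is omitted for brevity.

```python
def one_way_anova(groups):
    """Return (F statistic, between-group df, within-group df) for a
    one-way ANOVA over a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group means vs. the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: observations vs. their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    F = (ss_between / df_b) / (ss_within / df_w)
    return F, df_b, df_w

# Hypothetical tensile bond strengths (MPa) for three adhesive systems
etch_rinse = [14.1, 13.5, 15.0, 14.4, 13.9]
self_etch2 = [16.2, 15.8, 16.9, 16.0, 16.5]
self_etch1 = [18.3, 17.9, 18.8, 18.1, 18.6]
F, df_b, df_w = one_way_anova([etch_rinse, self_etch2, self_etch1])
```

A large F against the F(df_b, df_w) distribution indicates that at least one group mean differs; the Tukey HSD test then identifies which pairs differ.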
Xu, Wei-Jian; He, Chun-Ting; Ji, Cheng-Min; Chen, Shao-Li; Huang, Rui-Kang; Lin, Rui-Biao; Xue, Wei; Luo, Jun-Hua; Zhang, Wei-Xiong; Chen, Xiao-Ming
2016-07-01
The changeable molecular dynamics of flexible polar cations in the variable confined space between inorganic chains brings about a new type of two-step nonlinear optical (NLO) switch with genuine "off-on-off" second harmonic generation (SHG) conversion between one NLO-active state and two NLO-inactive states. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Zhou, Caihong; Tong, Shanshan; Chang, Yunxia; Jia, Qiong; Zhou, Weihong
2012-04-01
Ionic liquid (IL) based dispersive liquid-liquid microextraction (DLLME) with back-extraction, coupled with capillary electrophoresis and ultraviolet detection, was developed to determine four phenolic compounds (bisphenol-A, β-naphthol, α-naphthol, 2,4-dichlorophenol) in aqueous cosmetics. The developed method preconcentrates and cleans up the four phenolic compounds in two steps. In the first step, the analytes were transferred into the room-temperature ionic liquid (1-octyl-3-methylimidazolium hexafluorophosphate, [C8MIM][PF6]) rich phase. In the second step, the analytes were back-extracted into an alkaline aqueous phase. The effects of extraction parameters, such as type and volume of extraction solvent, type and volume of disperser, extraction and centrifugation time, sample pH, salt addition, and concentration and volume of NaOH in back-extraction, were investigated. Under the optimal experimental conditions, the preconcentration factors were 60.1 for bisphenol-A, 52.7 for β-naphthol, 49.2 for α-naphthol, and 18.0 for 2,4-dichlorophenol. The limits of detection for bisphenol-A, β-naphthol, α-naphthol and 2,4-dichlorophenol were 5, 5, 8, and 100 ng mL-1, respectively. Four kinds of aqueous cosmetics, including toner, softening lotion, make-up remover, and perfume, were analyzed and yielded recoveries ranging from 81.6% to 119.4%. The main advantages of the proposed method are that it is quick, easy, cheap, and effective. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David
2012-10-01
The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first, experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second, numerical step, the experimental data are time-reversed and used as input data for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopolar sources, either monochromatic or with narrow- or wide-band frequency content, are considered first. The source position is estimated with an error smaller than the wavelength. An application to a dipolar sound source shows that this type of source is also very satisfactorily characterized.
Comparative kinetic analysis on thermal degradation of some cephalosporins using TG and DSC data
2013-01-01
Background The thermal decomposition of cephalexin, cefadroxil and cefoperazone under non-isothermal conditions was studied using the TG and DSC methods. In the case of TG, a hyphenated technique including EGA was used. Results The kinetic analysis was performed using the TG and DSC data in air for the first step of the cephalosporins' decomposition at four heating rates. Both the TG and DSC data were processed, according to an appropriate strategy, with the following kinetic methods: Kissinger-Akahira-Sunose, Friedman, and NPK, in order to obtain realistic kinetic parameters even though the decomposition process is a complex one. The EGA data offer some valuable indications about a possible decomposition mechanism. The obtained data indicate a rather good agreement between the activation energy values obtained by the different methods, whereas the EGA data and the chemical structures give a possible explanation of the observed differences in thermal stability. A complete kinetic analysis needs a data processing strategy using two or more methods, but the kinetic methods must also be applied to the different types of experimental data (TG and DSC). Conclusion The simultaneous use of DSC and TG data for the kinetic analysis, coupled with evolved gas analysis (EGA), provided a more complete picture of the degradation of the three cephalosporins. It was possible to estimate kinetic parameters by using three different kinetic methods, which allowed us to compare the Ea values obtained from different experimental data, TG and DSC. Because thermodegradation is a complex process, both the differential and integral methods based on the single-step hypothesis are inadequate for obtaining reliable kinetic parameters. Only the modified NPK method allowed an objective separation of the temperature and conversion influences on the reaction rate and, at the same time, ascertained the existence of two simultaneous steps. PMID:23594763
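The Kissinger-Akahira-Sunose (KAS) evaluation named above regresses ln(beta/T^2) against 1/T at a fixed conversion, the activation energy following from the slope (-Ea/R). The sketch below generates synthetic heating-rate/temperature pairs from an assumed Ea of 120 kJ/mol (not the cephalosporin data) and recovers it with the KAS fit.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def kas_activation_energy(betas, temps):
    """Linear fit of ln(beta/T^2) vs 1/T; returns Ea in kJ/mol."""
    xs = [1.0 / T for T in temps]
    ys = [math.log(b / T ** 2) for b, T in zip(betas, temps)]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R / 1000.0

# Synthetic data consistent with Ea = 120 kJ/mol and intercept C = 0
Ea_true, C = 120e3, 0.0
betas = [2.0, 5.0, 10.0, 20.0]  # heating rates, K/min

def temp_for(beta):
    """Solve ln(beta/T^2) = C - Ea/(R*T) for T by fixed-point iteration."""
    T = 500.0
    for _ in range(100):
        T = Ea_true / (R * (C - math.log(beta / T ** 2)))
    return T

temps = [temp_for(b) for b in betas]
Ea = kas_activation_energy(betas, temps)
```

Because the synthetic points satisfy the KAS relation exactly, the fit recovers the assumed 120 kJ/mol; with real TG/DSC data the scatter of the points around the line is what the different isoconversional methods help to assess.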
Efficient Inversion of Multi-Frequency and Multi-Source Electromagnetic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gary D. Egbert
2007-03-22
The project covered by this report focused on development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems each step in commonly used linearized iterative limited-memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N-dimensional data subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited-memory methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton-type Occam minimum-structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG-style inversion. Memory requirements, while greater than for something like CG, are modest enough that the scheme should allow 3D inverse problems to be solved on a common desktop PC, at least for modest (~100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been development of a modular system for EM inversion, using an object-oriented approach.
This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems, before approaching more computationally cumbersome three-dimensional problems.
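The key observation above, that each search step yields one sensitivity per source, can be illustrated with a toy linear problem. The random matrices standing in for EM sensitivities and the problem sizes are invented; the sketch only shows that exactly minimizing over the span of the per-source gradients can do no worse, and typically does better, than a single steepest-descent step along the summed gradient.

```python
import random

random.seed(1)
M, D, S = 12, 8, 3  # model parameters, data per source, number of sources

def matvec(A, x): return [sum(a * b for a, b in zip(row, x)) for row in A]
def matTvec(A, y):
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]
def dot(u, v): return sum(a * b for a, b in zip(u, v))

# Random stand-ins for the per-source sensitivity operators and data
G = [[[random.gauss(0, 1) for _ in range(M)] for _ in range(D)] for _ in range(S)]
m_true = [random.gauss(0, 1) for _ in range(M)]
d = [matvec(G[s], m_true) for s in range(S)]

def misfit(m):
    return sum(sum((r - o) ** 2 for r, o in zip(matvec(G[s], m), d[s]))
               for s in range(S))

m0 = [0.0] * M
# One adjoint solve per source gives S gradient directions at m0
grads = [matTvec(G[s], [r - o for r, o in zip(matvec(G[s], m0), d[s])])
         for s in range(S)]

def min_over_span(dirs):
    """Exactly minimize the quadratic misfit over span(dirs) (normal equations)."""
    k = len(dirs)
    Jd = [[matvec(G[s], v) for s in range(S)] for v in dirs]
    inner = lambda u, v: sum(dot(a, b) for a, b in zip(u, v))
    r0 = [[-o for o in d[s]] for s in range(S)]        # residual at m0 is -d
    A = [[inner(Jd[i], Jd[j]) for j in range(k)] for i in range(k)]
    b = [-inner(Jd[i], r0) for i in range(k)]
    for i in range(k):                                  # Gaussian elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    m = [sum(c * v[j] for c, v in zip(coef, dirs)) for j in range(M)]
    return misfit(m)

full_grad = [sum(g[j] for g in grads) for j in range(M)]
one_direction = min_over_span([full_grad])  # exact steepest-descent line search
subspace = min_over_span(grads)             # uses all S per-source directions
```

Since the summed gradient lies in the span of the per-source gradients, the subspace step is guaranteed to reduce the misfit at least as much per iteration, which is the sense in which the multiple sensitivities accelerate the search.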
Molecular simulations of lipid systems: Edge stability and structure in pure and mixed bilayers
NASA Astrophysics Data System (ADS)
Jiang, Yong
2007-12-01
Understanding the structural, mechanical and dynamical properties of lipid self-assembled systems is fundamental to understanding the behavior of the cell membrane. This thesis has investigated the equilibrium properties of lipid systems with edge defects through various molecular simulation techniques. The overall goal of this study is to understand the free energy terms of the edges and to develop efficient methods to sample equilibrium distributions of mixed-lipid systems. In the first main part of my thesis, an atomistic molecular model is used to study a lipid ribbon which has an edge on each side. Details of the edge structures, such as area per lipid and tail torsional statistics, are presented. The line tension calculated from the pressure tensor in MD simulations agrees well with results from other sources. To further investigate edge properties on longer timescales and larger length scales, we applied a coarse-grained forcefield to mixed lipid systems and interpreted the edge fluctuations in terms of free energy parameters such as line tension and bending modulus. We identified two regimes with quite different edge behavior: a high line tension regime and a low line tension regime. The last part of this thesis focuses on a hybrid molecular dynamics and configurational-bias Monte Carlo (MCMD) simulation method in which molecules can change their type by growing and shrinking the terminal acyl united carbon atoms. A two-step extension of the MCMD method has been developed to allow for a larger difference in the components' tail lengths. Results agreed well with previous one-step mutation results for a mixture with a length difference of four carbons. The current method can efficiently sample mixtures with a length difference of eight carbons, with a small portion of lipids of intermediate tail length. Preliminary results are obtained for "bicelle"-type (DMPC/DHPC) ribbons.
Comparing the efficacy of metronome beeps and stepping stones to adjust gait: steps to follow!
Bank, Paulina J M; Roerdink, Melvyn; Peper, C E
2011-03-01
Acoustic metronomes and visual targets have been used in rehabilitation practice to improve pathological gait. In addition, they may be instrumental in evaluating and training instantaneous gait adjustments. The aim of this study was to compare the efficacy of two cue types in inducing gait adjustments, viz. acoustic temporal cues in the form of metronome beeps and visual spatial cues in the form of projected stepping stones. Twenty healthy elderly (aged 63.2 ± 3.6 years) were recruited to walk on an instrumented treadmill at preferred speed and cadence, paced by either metronome beeps or projected stepping stones. Gait adaptations were induced using two manipulations: by perturbing the sequence of cues and by imposing switches from one cueing type to the other. Responses to these manipulations were quantified in terms of step-length and step-time adjustments, the percentage correction achieved over subsequent steps, and the number of steps required to restore the relation between gait and the beeps or stepping stones. The results showed that perturbations in a sequence of stepping stones were overcome faster than those in a sequence of metronome beeps. In switching trials, switching from metronome beeps to stepping stones was achieved faster than vice versa, indicating that gait was influenced more strongly by the stepping stones than the metronome beeps. Together these results revealed that, in healthy elderly, the stepping stones induced gait adjustments more effectively than did the metronome beeps. Potential implications for the use of metronome beeps and stepping stones in gait rehabilitation practice are discussed.
NASA Astrophysics Data System (ADS)
Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.
2009-12-01
The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding-window (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup and dose-volume constraints were applied for five head-and-neck tumor patients with all optimization methods. Two-step plans were produced by converting the ideal fluence, with or without a smoothing filter, into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, or 12, producing a directly deliverable sequence. Moreover, plans were generated both with and without a split beam. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and the normal tissue complication probability (NTCP), which are the basis for improving therapeutic gain, as well as the non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%), NTID (4%) and NTCP values. Differences of about 15-20% in treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over two-step IMRT planning.
Method for Non-Invasive Determination of Chemical Properties of Aqueous Solutions
NASA Technical Reports Server (NTRS)
Jones, Alan (Inventor); Thomas, Nathan A. (Inventor); Todd, Paul W. (Inventor)
2016-01-01
A method for non-invasively determining a chemical property of an aqueous solution is provided. The method comprises the steps of providing a colored solute having a light absorbance spectrum and transmitting light through the colored solute at two different wavelengths. The method further comprises the steps of measuring the light absorbance of the colored solute at the two transmitted wavelengths, and comparing the light absorbance of the colored solute at the two wavelengths to determine a chemical property of the aqueous solution.
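The two-wavelength comparison claimed above can be sketched with Beer-Lambert inversion for a hypothetical two-form pH indicator. The molar absorptivities, pKa, path length, and the use of the Henderson-Hasselbalch relation are invented for illustration; the patent claims the general method, not these numbers.

```python
import math

# Molar absorptivities (L mol^-1 cm^-1) of the acid (HA) and base (A-) forms
EPS = {430: (20000.0, 1000.0),   # HA dominates absorbance at 430 nm
       590: (1500.0, 25000.0)}   # A- dominates absorbance at 590 nm
PKA, PATH_CM = 7.0, 1.0

def concentrations(a430, a590):
    """Invert Beer-Lambert at two wavelengths (a 2x2 linear system)."""
    (e1h, e1a), (e2h, e2a) = EPS[430], EPS[590]
    det = (e1h * e2a - e1a * e2h) * PATH_CM
    c_ha = (a430 * e2a - a590 * e1a) / det
    c_a = (a590 * e1h - a430 * e2h) / det
    return c_ha, c_a

def ph_from_absorbance(a430, a590):
    """Chemical property (pH) from the two-wavelength absorbances."""
    c_ha, c_a = concentrations(a430, a590)
    return PKA + math.log10(c_a / c_ha)   # Henderson-Hasselbalch

# Forward-simulate a solution at pH 7.4, then recover the pH non-invasively
total = 1e-4  # mol/L of indicator
frac_a = 1.0 / (1.0 + 10 ** (PKA - 7.4))
c_a, c_ha = total * frac_a, total * (1 - frac_a)
a430 = (EPS[430][0] * c_ha + EPS[430][1] * c_a) * PATH_CM
a590 = (EPS[590][0] * c_ha + EPS[590][1] * c_a) * PATH_CM
ph = ph_from_absorbance(a430, a590)
```

Choosing the two wavelengths where the two forms dominate makes the 2x2 system well conditioned, so small absorbance errors do not blow up in the recovered concentrations.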
Arias, Jean Lucas de Oliveira; Schneider, Antunielle; Batista-Andrade, Jahir Antonio; Vieira, Augusto Alves; Caldas, Sergiane Souza; Primel, Ednei Gilberto
2018-02-01
Clean extracts are essential in LC-MS/MS, since the matrix effect can interfere in the analysis. Alternative materials which can be used as sorbents, such as chitosan in the clean-up step, are cheap and green options. In this study, chitosan from shrimp shell waste was evaluated as a sorbent in the QuEChERS method in order to determine multi-residues of veterinary drugs in different types of milk, i.e., fatty matrices. After optimization, the method showed correlation coefficients above 0.99, LOQs ranged between 1 and 50 μg kg-1, and recoveries ranged between 62 and 125%, with RSD < 20% for all veterinary drugs in all types of milk under study. The clean-up step which employed chitosan proved to be effective, since it reduced both the matrix effect (from values between -40 and -10% to values from -10 to +10%) and the extract turbidity (up to 95%). When the proposed method was applied to different milk samples, residues of albendazole (49 μg kg-1), sulfamethazine (
A two-step electrodialysis method for DNA purification from polluted metallic environmental samples.
Rodríguez-Mejía, José Luis; Martínez-Anaya, Claudia; Folch-Mallol, Jorge Luis; Dantán-González, Edgar
2008-08-01
Extracting DNA from samples of polluted environments using standard methods often results in low yields of poor-quality material unsuited to subsequent manipulation and analysis by molecular biological techniques. Here, we report a novel two-step electrodialysis-based method for the extraction of DNA from environmental samples. This technique permits the rapid and efficient isolation of high-quality DNA based on its acidic nature, and without the requirement for phenol-chloroform-isoamyl alcohol cleanup and ethanol precipitation steps. Subsequent PCR, endonuclease restriction, and cloning reactions were successfully performed utilizing DNA obtained by electrodialysis, whereas some or all of these techniques failed using DNA extracted with two alternative methods. We also show that this technique is applicable to purifying DNA from a range of polluted and nonpolluted samples.
Protocol for Detection of Yersinia pestis in Environmental ...
Methods Report This is the first open-access, detailed protocol available to all government departments and agencies, and their contractors, for detecting Yersinia pestis, the pathogen that causes plague, in multiple environmental sample types including water. Each analytical method includes a step-by-step sample processing procedure for each sample type. The protocol covers real-time PCR, traditional microbiological culture, and Rapid Viability PCR (RV-PCR) analytical methods. For large-volume water samples it also includes an ultrafiltration-based sample concentration procedure. Because the protocol is available without restriction to all government departments and agencies, and their contractors, the nation will now have increased laboratory capacity to analyze a large number of samples during a wide-area plague incident.
Dislocation-induced Charges in Quantum Dots: Step Alignment and Radiative Emission
NASA Technical Reports Server (NTRS)
Leon, R.; Okuno, J.; Lawton, R.; Stevens-Kalceff, M.; Phillips, M.; Zou, J.; Cockayne, D.; Lobo, C.
1999-01-01
A transition between two types of step alignment was observed in a multilayered InGaAs/GaAs quantum-dot (QD) structure. A change to larger QD sizes in smaller concentrations occurred after formation of a dislocation array.
Analytical method for promoting process capability of shock absorption steel.
Sung, Wen-Pei; Shih, Ming-Hsiang; Chen, Kuen-Suan
2003-01-01
Mechanical properties and low-cycle fatigue are two factors that must be considered in developing new types of steel for shock absorption. Process capability and process control are significant factors in achieving the goals of research and development programs. Commonly used evaluation methods fail to measure process yield and process centering, so this paper uses the Taguchi loss function as the basis for an evaluation method, with steps for assessing the quality of mechanical properties and process control at an iron and steel manufacturer. This method can serve the research and development and manufacturing industries and lay a foundation for enhancing process control capability, enabling the selection of manufacturing processes that are more reliable than those chosen by other commonly used decision-making methods.
Filter Design and Performance Evaluation for Fingerprint Image Segmentation
Thai, Duy Hoang; Huckemann, Stephan; Gottschlich, Carsten
2016-01-01
Fingerprint recognition plays an important role in many commercial applications and is used by millions of people every day, e.g. for unlocking mobile phones. Fingerprint image segmentation is typically the first processing step of most fingerprint algorithms and it divides an image into foreground, the region of interest, and background. Two types of error can occur during this step which both have a negative impact on the recognition performance: ‘true’ foreground can be labeled as background and features like minutiae can be lost, or conversely ‘true’ background can be misclassified as foreground and spurious features can be introduced. The contribution of this paper is threefold: firstly, we propose a novel factorized directional bandpass (FDB) segmentation method for texture extraction based on the directional Hilbert transform of a Butterworth bandpass (DHBB) filter interwoven with soft-thresholding. Secondly, we provide a manually marked ground truth segmentation for 10560 images as an evaluation benchmark. Thirdly, we conduct a systematic performance comparison between the FDB method and four of the most often cited fingerprint segmentation algorithms showing that the FDB segmentation method clearly outperforms these four widely used methods. The benchmark and the implementation of the FDB method are made publicly available. PMID:27171150
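The soft-thresholding interwoven with the directional bandpass filtering can be illustrated in isolation. A minimal numpy sketch of the standard soft-thresholding operator (the FDB filter bank itself is not reproduced here):

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding: shrink each coefficient toward zero by lam,
    zeroing anything whose magnitude is below lam. Applied to filter
    responses, this suppresses weak background texture while keeping
    strong foreground (ridge) responses."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

coeffs = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
shrunk = soft_threshold(coeffs, 1.0)   # -> [-2., 0., 0., 0.5, 3.]
```

Thresholded responses from all filter directions would then be combined to decide foreground versus background per pixel.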
Thickness measurement by two-sided step-heating thermal imaging
NASA Astrophysics Data System (ADS)
Li, Xiaoli; Tao, Ning; Sun, J. G.; Zhang, Cunlin; Zhao, Yuejin
2018-01-01
Infrared thermal imaging is a promising nondestructive technique for thickness prediction. However, it is usually thought to be appropriate only for testing the thickness of thin objects or near-surface structures. In this study, we present a new two-sided step-heating thermal imaging method that employs a low-cost portable halogen lamp as the heating source and verify it with two stainless steel step wedges with thicknesses ranging from 5 mm to 24 mm. We first derive the one-dimensional step-heating thermography theory, accounting for the warm-up time of the lamp, and then apply a nonlinear regression method to fit the experimental data to the derived function to determine the thickness. After evaluating the reliability and accuracy of the experimental results, we conclude that this method is capable of testing thick objects. In addition, we provide criteria for both the required data length and the applicable thickness range of the tested material. It is evident that this method will broaden the application of thermal imaging to thickness measurement.
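The nonlinear-regression step can be sketched with synthetic data. This is a hedged illustration only: in place of the paper's derived step-heating function it uses a textbook long-time rear-surface asymptote, T ≈ (q/ρcL)(t − L²/6α), and the diffusivity, heating gain, and time window are all assumed values:

```python
import numpy as np
from scipy.optimize import curve_fit

ALPHA = 4.0e-6   # assumed thermal diffusivity of stainless steel, m^2/s

def rear_surface_rise(t, gain, L):
    """Simplified long-time rear-surface temperature rise of a plate of
    thickness L under step heating (a textbook asymptote, not the exact
    function derived in the paper)."""
    return gain / L * (t - L**2 / (6.0 * ALPHA))

# Synthetic 'measurement' for a 10 mm plate, with added noise.
rng = np.random.default_rng(0)
t = np.linspace(40.0, 200.0, 100)    # s, long-time regime
T = rear_surface_rise(t, 2.0e-4, 0.010) + rng.normal(0.0, 0.01, t.size)

# Nonlinear regression recovers the thickness from the fitted curve.
popt, _ = curve_fit(rear_surface_rise, t, T, p0=(1.0e-4, 0.005))
L_est = popt[1]
```

The thickness enters both the slope and the time intercept of the asymptote, which is what makes it identifiable from a single-sided temperature history.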
Dopico-García, M S; Valentão, P; Guerra, L; Andrade, P B; Seabra, R M
2007-01-30
An experimental design was applied for the optimization of the extraction and clean-up of phenolic compounds and organic acids from white "Vinho Verde" grapes. The developed analytical method consisted of two steps: first, a solid-liquid extraction of both phenolic compounds and organic acids, and then a clean-up step using solid-phase extraction (SPE). Afterwards, phenolic compounds and organic acids were determined by high-performance liquid chromatography (HPLC) coupled to a diode array detector (DAD) and by HPLC-UV, respectively. A Plackett-Burman design was carried out to select the significant experimental parameters affecting both the extraction and clean-up steps. The identified and quantified phenolic compounds were quercetin-3-O-glucoside, quercetin-3-O-rutinoside, kaempferol-3-O-rutinoside, isorhamnetin-3-O-glucoside, quercetin, kaempferol and epicatechin. The determined organic acids were oxalic, citric, tartaric, malic, shikimic and fumaric acids. The results showed that the most important variables were the temperature (40 degrees C) and the solvent (acid water at pH 2 with 5% methanol) for the extraction step, and the type of sorbent (C18 non-end-capped) for the clean-up step.
TagDust2: a generic method to extract reads from sequencing data.
Lassmann, Timo
2015-01-28
Arguably the most basic step in the analysis of next generation sequencing (NGS) data involves the extraction of mappable reads from the raw reads produced by sequencing instruments. The presence of barcodes, adaptors and artifacts subject to sequencing errors makes this step non-trivial. Here I present TagDust2, a generic approach utilizing a library of hidden Markov models (HMMs) to accurately extract reads from a wide array of possible read architectures. TagDust2 extracts more reads of higher quality compared to other approaches. Processing of multiplexed single-end and paired-end libraries, as well as libraries containing unique molecular identifiers, is fully supported. Two additional post-processing steps are included to exclude known contaminants and filter out low-complexity sequences. Finally, TagDust2 can automatically detect the library type of sequenced data from a predefined selection. Taken together, TagDust2 is a feature-rich, flexible and adaptive solution for going from raw to mappable NGS reads in a single step. The ability to recognize and record the contents of raw reads will help to automate and demystify the initial, and often poorly documented, steps in NGS data analysis pipelines. TagDust2 is freely available at: http://tagdust.sourceforge.net.
Heidari, Banafsheh; Gifani, Minoo; Shirazi, Abolfazl; Zarnani, Amir-Hassan; Baradaran, Behzad; Naderi, Mohammad Mehdi; Behzadi, Bahareh; Borjian-Boroujeni, Sara; Sarvari, Ali; Lakpour, Niknam; Akhondi, Mohammad Mehdi
2014-01-01
Background Spermatogonial Stem Cells (SSCs) are a well documented source of adult multipotent stem cells. They are the foundation of spermatogenesis in the testis throughout adult life, balancing self-renewal and differentiation. The aim of this study was to assess the effect of Percoll density gradient and differential plating on the enrichment of undifferentiated type A spermatogonia in dissociated cellular suspensions of goat testes. Additionally, we evaluated the separated fractions of the Percoll gradients and the differential plating samples at different times for cell number, viability and purification rate of goat SSCs in culture. Methods Testicular cells were successfully isolated from one-month-old goat testes using two-step enzymatic digestion, followed by two purification protocols: differential plating with different culture times (3, 4, 5, and 6 hr) and discontinuous Percoll density gradients (20, 28, 30, and 32%). The difference in the percentage of undifferentiated SSCs (PGP9.5 positive) in each method was compared using ANOVA, and comparison of the highest corresponding values between the two methods was carried out by t-test using Sigma Stat (ver. 3.5). Results The highest PGP9.5 positive rate (94.6±0.4) and the lowest c-Kit positive rate (25.1±0.7) in the Percoll method were achieved in the 32% Percoll gradient (p ≤ 0.001). The corresponding rates in the differential plating method, with the highest PGP9.5 positive cells (81.3±1.1) and the lowest c-Kit (17.1±1.4), were achieved after 5 hr of culture (p < 0.001). The enrichment of undifferentiated type A spermatogonia using Percoll was more efficient than the differential plating method (p < 0.001). Conclusion Percoll density gradient and differential plating are efficient and fast methods for the enrichment of type A spermatogonial stem cells from goat testes. PMID:24834311
Towards β-globin gene-targeting with integrase-defective lentiviral vectors.
Inanlou, Davoud Nouri; Yakhchali, Bagher; Khanahmad, Hossein; Gardaneh, Mossa; Movassagh, Hesam; Cohan, Reza Ahangari; Ardestani, Mehdi Shafiee; Mahdian, Reza; Zeinali, Sirous
2010-11-01
We have developed an integrase-defective lentiviral (LV) vector in combination with a gene-targeting approach for gene therapy of β-thalassemia. The β-globin gene-targeting construct has two homologous stems including sequence upstream and downstream of the β-globin gene, a β-globin gene positioned between hygromycin and neomycin resistant genes and a herpes simplex virus type 1 thymidine kinase (HSVtk) suicide gene. Utilization of integrase-defective LV as a vector for the β-globin gene increased the number of selected clones relative to non-viral methods. This method represents an important step toward the ultimate goal of a clinical gene therapy for β-thalassemia.
Sweet Potato [Ipomoea batatas (L.) Lam].
Song, Guo-qing; Yamaguchi, Ken-ichi
2006-01-01
Among the available transformation methods reported for sweet potato, Agrobacterium tumefaciens-mediated transformation is the most successful and desirable. Stem explants have been shown to be ideal for the transformation of sweet potato because of their ready availability as explants, the simple transformation process, and high-frequency regeneration via somatic embryogenesis. Using the two-step kanamycin-hygromycin selection method and the appropriate explant type (stem explants), the efficiency of transformation can be considerably improved in cv. Beniazuma. The high transformation efficiency of stem explants suggests that the protocol described in this chapter warrants testing for routine stable transformation of diverse varieties of sweet potato.
ERIC Educational Resources Information Center
Daugherty, James F.; Manternach, Jeremy N.; Brunkan, Melissa C.
2013-01-01
Under controlled conditions, we assessed acoustically (long-term average spectra) and perceptually (singer survey, listener survey) six performances of a soprano, alto, tenor, and bass (SATB) choir ("N" = 27) as it sang the same musical excerpt on two portable riser units (standard riser step height, taller riser step height) with…
Study of CdTe quantum dots grown using a two-step annealing method
NASA Astrophysics Data System (ADS)
Sharma, Kriti; Pandey, Praveen K.; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.
2006-02-01
High size dispersion, large average quantum-dot radius and low volume ratio have been major hurdles in the development of quantum dot based devices. In the present paper, we have grown CdTe quantum dots in a borosilicate glass matrix using a two-step annealing method. Results of optical characterization and the theoretical model of absorption spectra have shown that quantum dots grown using two-step annealing have a lower average radius, lower size dispersion, higher volume ratio and a greater decrease in bulk free energy compared to quantum dots grown conventionally.
Li, Shandong; Xue, Qian; Duh, Jenq-Gong; Du, Honglei; Xu, Jie; Wan, Yong; Li, Qiang; Lü, Yueguang
2014-01-01
RF/microwave soft magnetic films (SMFs) are key materials for the miniaturization and multifunctionalization of monolithic microwave integrated circuits (MMICs) and their components, which demand SMFs with a higher self-bias ferromagnetic resonance frequency fFMR that can be fabricated in an IC-compatible process. However, self-biased metallic SMFs working at X-band or higher frequencies have rarely been reported, despite urgent demand. In this paper, we report an IC-compatible process with two-step superposition to prepare SMFs, in which FeCoB SMFs were deposited on (011) lead zinc niobate–lead titanate substrates using a composition gradient sputtering method. As a result, a giant magnetic anisotropy field of 1498 Oe, 1–2 orders of magnitude larger than that obtained by the conventional magnetic annealing method, and an ultrahigh fFMR of up to 12.96 GHz, reaching the Ku-band, were obtained at zero magnetic bias field in the as-deposited films. These ultrahigh microwave performances can be attributed to the superposition of two effects: uniaxial stress induced by the composition gradient and magnetoelectric coupling. This two-step superposition method paves the way for SMFs to surpass the X-band via two-step or multi-step superposition, in which a variety of magnetic anisotropy field enhancing methods can be combined to reach higher ferromagnetic resonance frequencies. PMID:25491374
Group sequential designs for stepped-wedge cluster randomised trials
Grayling, Michael J; Wason, James MS; Mander, Adrian P
2017-01-01
Background/Aims: The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Methods: Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. Results: We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial’s type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. Conclusion: The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. 
In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials according to the needs of the particular trial. PMID:28653550
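The error-spending construction the authors adapt can be illustrated for a design with one interim and one final analysis. This sketch assumes a one-sided α of 0.025, a quadratic spending function, and the standard group-sequential result that the sequential Z-statistics are multivariate normal with correlation √(t₁/t₂); none of these specifics are taken from the trial in the paper:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

ALPHA = 0.025           # one-sided type-I error rate

def spend(t):
    """Assumed quadratic error-spending function f(t) = alpha * t^2."""
    return ALPHA * t**2

# Interim analysis at information fraction t1, final analysis at t2 = 1.
t1, t2 = 0.5, 1.0
a1 = spend(t1)                   # alpha spent at the interim look
c1 = norm.ppf(1.0 - a1)          # interim efficacy boundary

# Z1, Z2 are bivariate normal with correlation sqrt(t1/t2).
rho = np.sqrt(t1 / t2)
cov = [[1.0, rho], [rho, 1.0]]

def spent_at_final(c2):
    # P(Z1 < c1, Z2 > c2): probability of rejecting only at the final look
    joint_below = multivariate_normal.cdf([c1, c2], mean=[0.0, 0.0], cov=cov)
    return norm.cdf(c1) - joint_below

# Choose the final boundary c2 so the remaining alpha is spent exactly.
c2 = brentq(lambda c: spent_at_final(c) - (ALPHA - a1), 0.0, 5.0)
```

Because little alpha is spent early, the interim boundary c1 is stricter than the final boundary c2, which in turn sits above the fixed-design critical value of 1.96.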
Tsujimoto, Akimasa; Barkmeier, Wayne W; Takamizawa, Toshiki; Watanabe, Hidehiko; Johnson, William W; Latta, Mark A; Miyazaki, Masashi
2017-06-01
The aim of this study was to compare universal adhesives and two-step self-etch adhesives in terms of dentin bond fatigue durability in self-etch mode. Three universal adhesives - Clearfil Universal, G-Premio Bond, and Scotchbond Universal Adhesive - and three two-step self-etch adhesives - Clearfil SE Bond, Clearfil SE Bond 2, and OptiBond XTR - were used. The initial shear bond strength and shear fatigue strength of resin composite bonded to adhesive on dentin in self-etch mode were determined. Scanning electron microscopy observations of fracture surfaces after bond strength tests were also made. The initial shear bond strength of universal adhesives was material dependent, unlike that of two-step self-etch adhesives. The shear fatigue strength of Scotchbond Universal Adhesive was not significantly different from that of two-step self-etch adhesives, unlike the other universal adhesives. The shear fatigue strength of universal adhesives differed depending on the type of adhesive, unlike that of two-step self-etch adhesives. The results of this study encourage the continued use of two-step self-etch adhesives over some universal adhesives but suggest that changes to the composition of universal adhesives may lead to a dentin bond fatigue durability similar to that of two-step self-etch adhesives. © 2017 Eur J Oral Sci.
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2015-09-01
Image compression techniques are widely used on 2D images, 2D video, 3D images, and 3D video. Among the many types of compression techniques, JPEG and JPEG2000 are the most popular. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, containing the low and high frequencies, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely the nonzero-array and the zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities using a table of data, then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
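The transform stage of step (1) can be sketched in a self-contained way. This is a hedged illustration only: the paper does not specify the wavelet family, so a Haar wavelet stands in for it, an orthonormal DCT-II is built explicitly, and the quantization and coding steps are omitted; the sketch simply checks that the two-level DWT plus DCT stage is perfectly invertible before any lossy step:

```python
import numpy as np

def haar2d(x):
    """One level of a 2D Haar DWT: returns the low-frequency band LL
    and the detail bands (LH, HL, HH)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # row pairs: averages
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # row pairs: differences
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, (lh, hl, hh)

def ihaar2d(ll, bands):
    """Inverse of haar2d."""
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    d[:, 0::2], d[:, 1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

def dct_matrix(n):
    """Orthonormal DCT-II matrix, so the 2D DCT is C @ X @ C.T."""
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0] /= np.sqrt(2)
    return c

rng = np.random.default_rng(1)
img = rng.random((16, 16))

# Two wavelet levels, then a DCT on the coarsest LL band ("DC-Matrix").
ll1, det1 = haar2d(img)
ll2, det2 = haar2d(ll1)
C = dct_matrix(ll2.shape[0])
dc_matrix = C @ ll2 @ C.T

# Invert the whole transform stage (no quantization applied).
ll2_back = C.T @ dc_matrix @ C
rec = ihaar2d(ihaar2d(ll2_back, det2), det1)
```

In the actual codec, the Minimize-Matrix-Size and arithmetic-coding steps would operate on `dc_matrix` and the detail bands before this inversion.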
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012)] and phantom [J. Biomed. Opt. 19, 077002 (2014)] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure that addresses two main issues of the previous method: (1) high computational intensity and (2) convergence to local minima. The parameter estimation procedure contains a novel initial estimation step to obtain an initial guess, which is used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table is used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction in computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the subsequent fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
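The two-step structure, a lookup-table initial estimate followed by iterative refinement, can be sketched on a toy problem. This is a hedged illustration: a two-parameter exponential stands in for the actual two-layer tissue reflectance model, and the `model` and `fit` names, grid ranges, and noise level are all hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0.0, 5.0, 50)

def model(params, x):
    """Toy stand-in for the forward model (the real one maps optical
    properties and top-layer thickness to a reflectance spectrum)."""
    amp, rate = params
    return amp * np.exp(-rate * x)

# Step 1: precompute a coarse lookup table over a parameter grid.
amps = np.linspace(0.5, 2.0, 16)
rates = np.linspace(0.1, 2.0, 16)
grid = [(a, r) for a in amps for r in rates]
table = np.array([model(p, x) for p in grid])

def fit(measured):
    # Initial estimation: nearest table entry in least-squares distance.
    p0 = grid[int(np.argmin(((table - measured) ** 2).sum(axis=1)))]
    # Step 2: iterative fitting, started from the table-based guess.
    res = least_squares(lambda p: model(p, x) - measured, p0)
    return res.x

true = (1.3, 0.7)
measured = model(true, x) + np.random.default_rng(2).normal(0.0, 0.002, x.size)
est = fit(measured)
```

The table lookup replaces expensive repeated forward-model evaluations during the search for a starting point, and a good starting point is what keeps the iterative step out of local minima.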
Mazuet, Christelle; Ezan, Eric; Volland, Hervé; Becher, François
2012-01-01
In two outbreaks of food-borne botulism in France, Clostridium botulinum type A was isolated and characterized from incriminated foods. Botulinum neurotoxin type A was detected in the patients' sera by mouse bioassay and in vitro endopeptidase assay with an immunocapture step and identification of the cleavage products by mass spectrometry. PMID:22993181
A METHOD FOR DETERMINING THE COMPATIBILITY OF HAZARDOUS WASTES
This report describes a method for determining the compatibility of the binary combinations of hazardous wastes. The method consists of two main parts, namely: (1) the step-by-step compatibility analysis procedures, and (2) the hazardous wastes compatibility chart. The key elemen...
Compact mass spectrometer for plasma discharge ion analysis
Tuszewski, M.G.
1997-07-22
A mass spectrometer and methods are disclosed for mass spectrometry which are useful in characterizing a plasma. This mass spectrometer for determining type and quantity of ions present in a plasma is simple, compact, and inexpensive. It accomplishes mass analysis in a single step, rather than the usual two-step process comprised of ion extraction followed by mass filtering. Ions are captured by a measuring element placed in a plasma and accelerated by a known applied voltage. Captured ions are bent into near-circular orbits by a magnetic field such that they strike a collector, producing an electric current. Ion orbits vary with applied voltage and proton mass ratio of the ions, so that ion species may be identified. Current flow provides an indication of quantity of ions striking the collector. 7 figs.
Compact mass spectrometer for plasma discharge ion analysis
Tuszewski, Michel G.
1997-01-01
A mass spectrometer and methods for mass spectrometry which are useful in characterizing a plasma. This mass spectrometer for determining type and quantity of ions present in a plasma is simple, compact, and inexpensive. It accomplishes mass analysis in a single step, rather than the usual two-step process comprised of ion extraction followed by mass filtering. Ions are captured by a measuring element placed in a plasma and accelerated by a known applied voltage. Captured ions are bent into near-circular orbits by a magnetic field such that they strike a collector, producing an electric current. Ion orbits vary with applied voltage and proton mass ratio of the ions, so that ion species may be identified. Current flow provides an indication of quantity of ions striking the collector.
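The orbit geometry described above follows from equating the ion's kinetic energy after acceleration, qV, with ½mv², and its orbit radius with r = mv/qB, giving r = √(2mV/q)/B. A short sketch with assumed field and voltage values (the patent record gives none; 100 V and 0.1 T are illustrative):

```python
import numpy as np

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def orbit_radius_m(mass_amu, charge_e, accel_volts, b_tesla):
    """Radius of the near-circular orbit of an ion accelerated through
    accel_volts and bent by a magnetic field: r = sqrt(2 m V / q) / B."""
    m = mass_amu * AMU
    q = charge_e * E_CHARGE
    return np.sqrt(2.0 * m * accel_volts / q) / b_tesla

# Orbit radii for a few singly charged species at 100 V, 0.1 T (assumed).
radii = {name: orbit_radius_m(mass, 1, 100.0, 0.1)
         for name, mass in [("H+", 1.008), ("He+", 4.003), ("Ar+", 39.948)]}
```

Because the radius scales with √(m/q), different species land at different positions on the collector, which is how a single measurement step can separate them.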
Automatic categorization of diverse experimental information in the bioscience literature
2012-01-01
Background Curation of information from bioscience literature into biological knowledge databases is a crucial way of capturing experimental information in a computable form. During the biocuration process, a critical first step is to identify from all published literature the papers that contain results for a specific data type the curator is interested in annotating. This step normally requires curators to manually examine many papers to ascertain which few contain information of interest and is thus usually time-consuming. We developed an automatic method for identifying papers containing these curation data types among a large pool of published scientific papers based on the machine learning method Support Vector Machine (SVM). This classification system is completely automatic and can be readily applied to diverse experimental data types. It has been in use in production for automatic categorization of 10 different experimental data types in the biocuration process at WormBase for the past two years and it is in the process of being adopted in the biocuration process at FlyBase and the Saccharomyces Genome Database (SGD). We anticipate that this method can be readily adopted by various databases in the biocuration community, thereby greatly reducing time spent on an otherwise laborious and demanding task. We also developed a simple, readily automated procedure to utilize training papers of similar data types from different bodies of literature such as C. elegans and D. melanogaster to identify papers with any of these data types for a single database. This approach has great significance because for some data types, especially those of low occurrence, a single corpus often does not have enough training papers to achieve satisfactory performance. Results We successfully tested the method on ten data types from WormBase, fifteen data types from FlyBase and three data types from Mouse Genomics Informatics (MGI). 
It is being used in the curation work flow at WormBase for automatic association of newly published papers with ten data types including RNAi, antibody, phenotype, gene regulation, mutant allele sequence, gene expression, gene product interaction, overexpression phenotype, gene interaction, and gene structure correction. Conclusions Our methods are applicable to a variety of data types with training sets containing several hundred to a few thousand documents. The system is completely automatic and thus can be readily incorporated into different workflows at different literature-based databases. We believe that the work presented here can contribute greatly to the tremendous task of automating the important yet labor-intensive biocuration effort. PMID:22280404
Automatic categorization of diverse experimental information in the bioscience literature.
Fang, Ruihua; Schindelman, Gary; Van Auken, Kimberly; Fernandes, Jolene; Chen, Wen; Wang, Xiaodong; Davis, Paul; Tuli, Mary Ann; Marygold, Steven J; Millburn, Gillian; Matthews, Beverley; Zhang, Haiyan; Brown, Nick; Gelbart, William M; Sternberg, Paul W
2012-01-26
Curation of information from bioscience literature into biological knowledge databases is a crucial way of capturing experimental information in a computable form. During the biocuration process, a critical first step is to identify from all published literature the papers that contain results for a specific data type the curator is interested in annotating. This step normally requires curators to manually examine many papers to ascertain which few contain information of interest and is thus usually time-consuming. We developed an automatic method for identifying papers containing these curation data types among a large pool of published scientific papers based on the machine learning method Support Vector Machine (SVM). This classification system is completely automatic and can be readily applied to diverse experimental data types. It has been in use in production for automatic categorization of 10 different experimental data types in the biocuration process at WormBase for the past two years and it is in the process of being adopted in the biocuration process at FlyBase and the Saccharomyces Genome Database (SGD). We anticipate that this method can be readily adopted by various databases in the biocuration community, thereby greatly reducing time spent on an otherwise laborious and demanding task. We also developed a simple, readily automated procedure to utilize training papers of similar data types from different bodies of literature such as C. elegans and D. melanogaster to identify papers with any of these data types for a single database. This approach has great significance because for some data types, especially those of low occurrence, a single corpus often does not have enough training papers to achieve satisfactory performance. We successfully tested the method on ten data types from WormBase, fifteen data types from FlyBase and three data types from Mouse Genomics Informatics (MGI). 
It is being used in the curation work flow at WormBase for automatic association of newly published papers with ten data types including RNAi, antibody, phenotype, gene regulation, mutant allele sequence, gene expression, gene product interaction, overexpression phenotype, gene interaction, and gene structure correction. Our methods are applicable to a variety of data types with training sets containing several hundred to a few thousand documents. The system is completely automatic and thus can be readily incorporated into different workflows at different literature-based databases. We believe that the work presented here can contribute greatly to the tremendous task of automating the important yet labor-intensive biocuration effort.
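The paper-classification idea can be sketched in miniature. This is a hedged toy, not the WormBase/FlyBase pipeline: the vocabulary, documents, and labels are all invented, and a Pegasos-style subgradient-trained linear SVM in plain numpy stands in for a full SVM package:

```python
import numpy as np

def featurize(docs, vocab):
    """Bag-of-words count vectors over a fixed vocabulary."""
    X = np.zeros((len(docs), len(vocab)))
    index = {w: i for i, w in enumerate(vocab)}
    for r, doc in enumerate(docs):
        for w in doc.lower().split():
            if w in index:
                X[r, index[w]] += 1.0
    return X

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient descent for a linear SVM, y in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * X[i] @ w < 1.0:      # hinge-loss margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

# Toy corpus: does an abstract report RNAi results? (labels illustrative)
vocab = ["rnai", "knockdown", "phenotype", "antibody", "staining", "western"]
train_docs = ["RNAi knockdown phenotype observed",
              "RNAi knockdown of the gene",
              "antibody staining and western blot",
              "western blot with antibody"]
y = np.array([1, 1, -1, -1])
w = train_linear_svm(featurize(train_docs, vocab), y)

test_docs = ["knockdown by RNAi", "antibody western analysis"]
scores = featurize(test_docs, vocab) @ w   # sign gives the predicted class
```

In production one classifier per data type would be trained on curated positives and negatives, with new papers routed to curators when the score crosses a threshold.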
Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures
NASA Astrophysics Data System (ADS)
Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.
2013-05-01
An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow the linearization of the forward- and back-projection formulae. The algorithm processes the data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating for (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history of the dominant scattering point on the patch. The second step uses the phase-gradient-corrected data and extracts the absolute phase value, thereby removing phase ambiguities, reducing possible imperfections of the first stage, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated by applying it to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets, which are then used to focus the scene and determine relative target-to-target distances.
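The phase-gradient-style first step can be illustrated with a 1D toy: a single dominant scatterer whose phase history is corrupted by a smooth trajectory-induced phase error. This is a hedged sketch under assumed signal and error models, not the paper's patch-based algorithm:

```python
import numpy as np

N = 128
n = np.arange(N)

# Ideal phase history of a single dominant point scatterer at bin 20.
ideal = np.exp(2j * np.pi * 20 * n / N)
# Slowly varying trajectory-induced phase error (assumed, illustrative).
phi_err = 3.0 * np.sin(2 * np.pi * 1.5 * n / N)
data = ideal * np.exp(1j * phi_err)

# Phase-gradient-style estimate: shift the dominant scatterer to DC,
# then integrate the pulse-to-pulse phase differences.
peak = np.argmax(np.abs(np.fft.fft(data)))
centered = data * np.exp(-2j * np.pi * peak * n / N)
dphi = np.angle(centered[1:] * np.conj(centered[:-1]))
phi_est = np.concatenate(([0.0], np.cumsum(dphi)))

# Applying the conjugate correction refocuses the scatterer.
focused = data * np.exp(-1j * phi_est)
```

After correction the spectrum collapses back to a sharp peak; the second (absolute-phase) step of the paper would then resolve the remaining constant and linear phase ambiguities that a gradient method cannot observe.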
A Conformational Transition in the Myosin VI Converter Contributes to the Variable Step Size
Ovchinnikov, V.; Cecchini, M.; Vanden-Eijnden, E.; Karplus, M.
2011-01-01
Myosin VI (MVI) is a dimeric molecular motor that translocates backwards on actin filaments with a surprisingly large and variable step size, given its short lever arm. A recent x-ray structure of MVI indicates that the large step size can be explained in part by a novel conformation of the converter subdomain in the prepowerstroke state, in which a 53-residue insert, unique to MVI, reorients the lever arm nearly parallel to the actin filament. To determine whether the existence of the novel converter conformation could contribute to the step-size variability, we used a path-based free-energy simulation tool, the string method, to show that there is a small free-energy difference between the novel converter conformation and the conventional conformation found in other myosins. This result suggests that MVI can bind to actin with the converter in either conformation. Models of MVI/MV chimeric dimers show that the variability in the tilting angle of the lever arm that results from the two converter conformations can lead to step-size variations of ∼12 nm. These variations, in combination with other proposed mechanisms, could explain the experimentally determined step-size variability of ∼25 nm for wild-type MVI. Mutations to test the findings by experiment are suggested. PMID:22098742
Xiao, Shengwei; Zhang, Mingzhen; He, Xiaomin; Huang, Lei; Zhang, Yanxian; Ren, Baiping; Zhong, Mingqiang; Chang, Yung; Yang, Jintao; Zheng, Jie
2018-06-07
Development of smart soft actuators is highly important for fundamental research and industrial applications, but has proved to be extremely challenging. In this work, we present a facile, one-pot, one-step method to prepare dual-responsive bilayer hydrogels, consisting of a thermo-responsive poly(N-isopropylacrylamide) (polyNIPAM) layer and a salt-responsive poly(3-(1-(4-vinylbenzyl)-1H-imidazol-3-ium-3-yl)propane-1-sulfonate) (polyVBIPS) layer. The polyNIPAM and polyVBIPS layers exhibit completely opposite swelling/shrinking behavior: polyNIPAM shrinks (swells) while polyVBIPS swells (shrinks) in salt solution (water) or at high (low) temperatures. By tuning NIPAM:VBIPS ratios, the resulting polyNIPAM/polyVBIPS bilayer hydrogels achieve fast, large-amplitude bidirectional bending in response to temperature, salt concentration, and salt type. The bending direction and degree can be reversibly, repeatedly, and precisely controlled through the salt- or temperature-induced cooperative swelling-shrinking of the two layers. Based on this fast, reversible, bidirectional bending behavior, we further design two conceptual hybrid hydrogel actuators: a six-arm gripper that captures, transports, and releases an object, and an electrical circuit switch that turns a lamp on and off. Unlike the conventional two- or multi-step methods for preparing bilayer hydrogels, our simple, one-pot, one-step method and new bilayer hydrogel system provide an innovative route to hydrogel-based actuators that combine different responsive materials, allowing different stimuli to be programmed for soft and intelligent materials applications.
Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model
Lu, Feng; Huang, Jinquan; Xing, Yaodong
2012-01-01
Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors is mainly based on dual redundancy, which is insufficient when the two channels disagree and there is no basis for judgment; adding further hardware redundancy would increase structural complexity and weight. Instead, the simplified on-board model provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built using the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis, and redundancy recovery. Finally, the method is tested on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient. PMID:23112645
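The triplex voting idea, with the on-board model serving as the analytical third channel, can be sketched as a minimal decision rule; the threshold and sensor readings below are illustrative assumptions, not values from the engine tests:

```python
def diagnose(chan_a, chan_b, model, tol):
    """Compare two hardware channels against the model-based third channel.

    Returns (fault, value): which channel (if any) is deemed faulty, and the
    recovered sensor reading used downstream (redundancy recovery)."""
    if abs(chan_a - chan_b) <= tol:
        return None, 0.5 * (chan_a + chan_b)   # dual channels agree: healthy
    # Channels disagree: isolate the one deviating most from the model.
    dev_a, dev_b = abs(chan_a - model), abs(chan_b - model)
    if dev_a > dev_b:
        return "A", chan_b                     # A is the outlier: use B
    return "B", chan_a                         # B is the outlier: use A

# Step fault on channel A: its reading jumps away from both B and the model.
fault, value = diagnose(chan_a=1250.0, chan_b=1002.0, model=1000.0, tol=10.0)
```

The same rule covers drift faults, since a drifting channel eventually exceeds the tolerance against both its twin and the analytical channel.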
NASA Astrophysics Data System (ADS)
Benea, Lidia
2018-06-01
Our group applies two electrochemical methods to obtain advanced functional surfaces on materials: (i) direct electrochemical synthesis by an electro-codeposition process and (ii) anodization of materials to form nanoporous oxide layers, followed by electrodeposition of hydroxyapatite or other bioactive molecules and compounds into the porous film. Electrodeposition is a process of low energy consumption and is therefore very convenient for the surface modification of various types of materials. It is a powerful method compared with others, which has led to its rapid adoption in nanotechnology for obtaining nanostructured layers and films. Nanoporous thin oxide layers on titanium alloys, serving as supports for the electrodeposition of hydroxyapatite or other biomolecules for biomedical applications, can be obtained by electrochemical methods. For surface modification of titanium or titanium alloys to improve biocompatibility or osseointegration, two steps must be fulfilled: first, controlled growth of the oxide layer; second, electrodeposition of biomolecules into the nanoporous titanium oxide layer thus formed.
Niklitschek, Mauricio; Baeza, Marcelo; Fernández-Lobato, María; Cifuentes, Víctor
2012-01-01
Generally, two selection markers are required to obtain homozygous mutations in a diploid background, one for each interrupted gene copy. This chapter describes a method that allows deletion of both copies of a gene from a diploid organism, a wild-type strain of the yeast Xanthophyllomyces dendrorhous, using hygromycin B resistance as the only selection marker. To accomplish this, a heterozygous hygromycin B-resistant strain is first obtained by a single transformation (carrying the inserted hph gene). The heterozygous mutant is then grown in media with increasing concentrations of the antibiotic. In this way, strains that become homozygous for the antibiotic marker (by mitotic recombination) are able to grow at higher antibiotic concentrations than the heterozygotes. The method can potentially be applied to obtain double mutants of other diploid organisms.
ERIC Educational Resources Information Center
Perna, Mark C.
2005-01-01
A smart marketing plan creates emotional attachment and loyalty in a school's prospective students, but how does a school go about creating this type of positive environment? In this brief paper, the author describes a step-by-step approach that he created--the enrollment funnel. The enrollment funnel is a systematic method of moving…
Tang, Zheng; Peng, Sha; Hu, Shuya; Hong, Song
2017-06-01
Adsorption removal of bisphenol-AF (BPAF) from aqueous solutions by synthesized activated carbon-alginate beads (AC-AB) with cetyltrimethyl ammonium bromide (CTAB) was studied in two ways. The traditional (two-step) method first synthesized CTAB-modified AC-AB (AC-AB-CTAB) and then used it to remove BPAF by adsorption. The one-step method dispersed AC-AB and CTAB directly in the wastewater, so that removal of BPAF accompanied the synthesis of AC-AB-CTAB. The one-step method outperformed the two-step method, achieving a maximum BPAF removal of 284.6 mg/g. Kinetic studies and adsorption isotherms indicated that the adsorption of BPAF on AC-AB by the one-step method could be described by a pseudo-second-order model and a Dubinin-Ashtakhov (D-A) isotherm, respectively. The effects of pH, ionic strength, and inorganic ions on BPAF adsorption were also investigated. Furthermore, hydrophobic interactions, hydrogen bonds, and π-π electron donor-acceptor (EDA) interactions were discussed to explain the enhanced adsorption of BPAF on AC-AB with CTAB. The findings verified the effectiveness of AC-AB for the removal of BPAF from wastewater and its high stability over five regeneration cycles. Copyright © 2017 Elsevier Inc. All rights reserved.
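The pseudo-second-order kinetic model named above can be sketched numerically. Here qe is taken as the reported maximum uptake (284.6 mg/g), while the rate constant k and the sampling times are purely illustrative assumptions:

```python
# Pseudo-second-order kinetics: q_t = k*qe^2*t / (1 + k*qe*t).
def pseudo_second_order(t, qe, k):
    """Adsorbed amount q_t (mg/g) at time t for rate constant k (g/(mg*min))."""
    return k * qe**2 * t / (1.0 + k * qe * t)

qe, k = 284.6, 5e-4          # qe from the abstract; k is a made-up example
times = [10, 30, 60, 120, 240]
uptake = [pseudo_second_order(t, qe, k) for t in times]

# In the linearized form t/q_t = 1/(k*qe^2) + t/qe, the slope recovers 1/qe,
# which is how qe is typically read off experimental kinetic data.
x1, x2 = times[0], times[-1]
y1, y2 = x1 / uptake[0], x2 / uptake[-1]
slope = (y2 - y1) / (x2 - x1)
```

The uptake approaches qe asymptotically, and the linearized plot's slope gives the equilibrium capacity directly.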
Exploratory Development of Corrosion Inhibiting Primers
1977-07-01
Phenolic Hardener From previous studies, phenol formaldehyde resins of the novolac (two-step) type have given superior properties when used to cure epoxy...novolacs and three resole (one-step) type phenol-formaldehyde resins which also perform as epoxide curing agents. First, Model #1, as described in Section...results. Varcum 4326 resin was chosen at this stage for further use with the model systems. It is a low molecular weight phenol-formaldehyde resin used
A SiC LDMOS with electric field modulation by a step compound drift region
NASA Astrophysics Data System (ADS)
Bao, Meng-tian; Wang, Ying; Yu, Cheng-hao; Cao, Fei
2018-07-01
In this paper, we propose a SiC LDMOS structure with a step compound drift region (SC-LDMOS). The proposed device has a compound drift region consisting of an n-type top layer, a step p-type middle layer, and an n-type bottom layer. The step p-type middle layer introduces two new electric field peaks and makes the electric field distribution in the n-type top layer more uniform, which modulates the surface electric field and improves the breakdown voltage (BV) of the proposed structure. In addition, the n-type bottom layer placed under the heavily doped p-type middle layer contributes to charge balance. It also allows a higher doping concentration in the n-type top layer, which decreases the on-resistance of the proposed device. In simulation, the proposed device obtains a high BV of 976 V and a low Rsp,on of 7.74 mΩ·cm2. Compared with the conventional single RESURF LDMOS and triple RESURF LDMOS, the BV of the proposed device is enhanced by 42.5% and 14.7%, respectively, and Rsp,on is reduced by 37.3% and 30.9%, respectively. Meanwhile, the switching delays of the proposed device are significantly shorter than those of the conventional triple RESURF LDMOS.
Valdivielso, Izaskun; Bustamante, María Ángeles; Ruiz de Gordoa, Juan Carlos; Nájera, Ana Isabel; de Renobales, Mertxe; Barron, Luis Javier R
2015-04-15
Carotenoids and tocopherols from botanical species abundant in Atlantic mountain grasslands were simultaneously extracted using a one-step solid-liquid extraction. A single n-hexane/2-propanol extract containing both types of compounds was injected twice under two different sets of HPLC conditions, separating the tocopherols by normal-phase chromatography and the carotenoids in reverse-phase mode. The method allowed reproducible quantification in plant samples of very low amounts of α-, β-, γ- and δ-tocopherols (LOD from 0.0379 to 0.0720 μg g(-1) DM) and of over 15 different xanthophylls and carotene isomers. The simplified one-step extraction without saponification significantly increased the recovery of tocopherols and carotenoids, thereby enabling the determination of α-tocopherol acetate in plant samples. The two sets of chromatographic analyses provided near-baseline separation of individual compounds without interference from other lipid compounds extracted from the plants, and very sensitive and accurate detection of tocopherols and carotenoids. The detection of minor individual components in botanical species from grasslands is currently of high interest in the search for biomarkers for foods derived from grazing animals. Copyright © 2014 Elsevier Ltd. All rights reserved.
Biofunctionalized anti-corrosive silane coatings for magnesium alloys.
Liu, Xiao; Yue, Zhilian; Romeo, Tony; Weber, Jan; Scheuermann, Torsten; Moulton, Simon; Wallace, Gordon
2013-11-01
Biodegradable magnesium alloys are advantageous in various implant applications, as they reduce the risks associated with permanent metallic implants. However, a rapid corrosion rate is usually a hindrance in biomedical applications. Here we report a facile two-step procedure to introduce multifunctional, anti-corrosive coatings on Mg alloys such as AZ31. The first step involves treating the NaOH-activated Mg with bistriethoxysilylethane to immobilize a layer of densely crosslinked silane coating with good corrosion resistance; the second step imparts amine functionality to the surface by treating the modified Mg with 3-aminopropyltrimethoxysilane. We characterized the two-layer anticorrosive coating on Mg alloy AZ31 by Fourier transform infrared spectroscopy, static contact angle measurement, optical profilometry, potentiodynamic polarization and AC impedance measurements. Furthermore, heparin was covalently conjugated onto the silane-treated AZ31 to render the coating haemocompatible, as demonstrated by reduced platelet adhesion on the heparinized surface. The method reported here is also applicable to the preparation of other types of biofunctional, anti-corrosive coatings and is thus of significant interest for biodegradable implant applications. Copyright © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yang, Zhang; Renping, Zhang; Weihua, Han; Jian, Liu; Xiang, Yang; Ying, Wang; Chian Chiu, Li; Fuhua, Yang
2009-11-01
A two-step exposure method to effectively reduce the proximity effect in fabricating nanometer-spaced nanopillars is presented. In this method, nanopillar patterns on poly-methylmethacrylate (PMMA) were partly cross-linked in the first-step exposure. After development, PMMA between nanopillar patterns was removed, and hence the proximity effect would not take place there in the subsequent exposure. In the second-step exposure, PMMA masks were completely cross-linked to achieve good resistance in inductively coupled plasma etching. Accurate pattern transfer of rows of nanopillars with spacing down to 40 nm was realized on a silicon-on-insulator substrate.
User Interaction in Semi-Automatic Segmentation of Organs at Risk: a Case Study in Radiotherapy.
Ramkumar, Anjana; Dolz, Jose; Kirisli, Hortense A; Adebahr, Sonja; Schimek-Jasch, Tanja; Nestle, Ursula; Massoptier, Laurent; Varga, Edit; Stappers, Pieter Jan; Niessen, Wiro J; Song, Yu
2016-04-01
Accurate segmentation of organs at risk is an important step in radiotherapy planning. Because manual segmentation is a tedious procedure and prone to inter- and intra-observer variability, there is growing interest in automated segmentation methods. However, automatic methods frequently fail to provide satisfactory results, and post-processing corrections are often needed. Semi-automatic segmentation methods are designed to overcome these problems by combining physicians' expertise and computers' potential. This study evaluates two semi-automatic segmentation methods with different types of user interactions, named the "strokes" and the "contour", to provide insights into the role and impact of human-computer interaction. Two physicians participated in the experiment. In total, 42 case studies were carried out on five different types of organs at risk. For each case study, both the human-computer interaction process and the quality of the segmentation results were measured subjectively and objectively. Furthermore, different measures of the process and the results were correlated. A total of 36 quantifiable and ten non-quantifiable correlations were identified for each type of interaction. Among those pairs of measures, 20 of the contour method and 22 of the strokes method were strongly or moderately correlated, either directly or inversely. Based on those correlated measures, it is concluded that: (1) in the design of semi-automatic segmentation methods, user interactions need to be less cognitively challenging; (2) based on the observed workflows and preferences of physicians, there is a need for flexibility in the interface design; (3) the correlated measures provide insights that can be used in improving user interaction design.
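The correlation screening described above can be sketched with a rank (Spearman-type) coefficient and simple strength labels; the data and the 0.7/0.4 cut-offs below are illustrative assumptions, not the study's measures:

```python
def ranks(values):
    """Ranks of distinct values (0 = smallest)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the ranks (no ties here)."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def label(rho):
    return "strong" if abs(rho) >= 0.7 else "moderate" if abs(rho) >= 0.4 else "weak"

# Hypothetical pairing of a process measure with a result measure per case.
interaction_time = [35, 48, 52, 70, 90, 110]          # seconds per case
dice_score       = [0.91, 0.88, 0.87, 0.82, 0.80, 0.74]
rho = spearman(interaction_time, dice_score)
```

An inverse monotone relationship like this one would be counted among the "strongly correlated, inversely" pairs in the study's terminology.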
Ito, Toshifumi; Tsuji, Yukitaka; Aramaki, Kenji; Tonooka, Noriaki
2012-01-01
Multiple emulsions, also called complex emulsions or multiphase emulsions, include water-in-oil-in-water (W/O/W)-type and oil-in-water-in-oil (O/W/O)-type emulsions. W/O/W-type multiple emulsions, obtained by utilizing lamellar liquid crystal with a layer structure showing optical anisotropy at the periphery of emulsion droplets, are superior in stability to O/W/O-type emulsions. In this study, we investigated a two-step emulsification process for a W/O/W-type multiple emulsion utilizing liquid crystal emulsification. We found that a W/O/W-type multiple emulsion containing lamellar liquid crystal can be prepared by mixing a W/O-type emulsion (prepared by primary emulsification) with a lamellar liquid crystal obtained from poly(oxyethylene) stearyl ether, cetyl alcohol, and water, and by dispersing and emulsifying the mixture in an outer aqueous phase. When poly(oxyethylene) stearyl ether and cetyl alcohol are each used in a given amount and the amount of water added is varied from 0 to 15 g (total amount of emulsion, 100 g), a W/O/W-type multiple emulsion is efficiently prepared. When the W/O/W-type multiple emulsion was held in a thermostatic bath at 25°C, the droplet size distribution showed no change 0, 30, or 60 days after preparation. Moreover, the W/O/W-type multiple emulsion strongly encapsulated Uranine in the inner aqueous phase as compared with emulsions prepared by one-step emulsification.
NASA Astrophysics Data System (ADS)
Przybylak, Marcin; Maciejewski, Hieronim; Dutkiewicz, Agnieszka
2016-11-01
The surface modification of cotton fabrics was carried out using two types of bifunctional fluorinated silsesquioxanes with different ratios of functional groups. The modification was performed by either a one- or a two-step process. Two methods, sol-gel and dip coating, were used in different configurations. Heat treatment and washing were applied after modification. The wettability of the cotton fabric was evaluated by measuring water contact angles (WCA). Changes in the surface morphology were examined by scanning electron microscopy (SEM, SEM-LFD) and atomic force microscopy (AFM). Moreover, the modified fabrics were subjected to analysis of the elemental composition of the applied coatings using SEM-EDS techniques. Highly hydrophobic textiles were obtained in all cases studied, and one of the modifications imparted superhydrophobic properties. Most of the impregnated textiles remained hydrophobic even after multiple washing processes, which shows that the studied modification is durable.
Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.
Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T
2010-03-10
Organisms at scales ranging from unicellular to mammals have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the non-uniform turn-angle distribution of move step-lengths within a flight and the presence of two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model, rather than a simple persistent random walk, correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.
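The BCRW described above, two alternating modes with exponentially distributed flight lengths and different turning behavior, can be sketched as a toy simulation; the mode parameters are illustrative assumptions, not values fitted to the MCF-10A tracks:

```python
import math, random

# Two alternating modes: long persistent flights vs. short broad-turn flights.
MODES = {
    "directional":    {"mean_step": 5.0, "turn_sd": 0.2},
    "re-orientation": {"mean_step": 1.0, "turn_sd": 1.5},
}

def simulate_bcrw(n_flights, rng):
    """Return a 2-D path for a walker alternating between the two modes."""
    x = y = heading = 0.0
    path = [(0.0, 0.0)]
    mode = "directional"
    for _ in range(n_flights):
        p = MODES[mode]
        heading += rng.gauss(0.0, p["turn_sd"])       # correlated turning
        step = rng.expovariate(1.0 / p["mean_step"])  # exponential flight length
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((x, y))
        mode = "re-orientation" if mode == "directional" else "directional"
    return path

path = simulate_bcrw(200, random.Random(42))
```

Averaging the squared displacement of many such paths over increasing lag times is the kind of simulation the authors use to compare super-diffusivity against a simple persistent random walk.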
Heidari, Banafsheh; Gifani, Minoo; Shirazi, Abolfazl; Zarnani, Amir-Hassan; Baradaran, Behzad; Naderi, Mohammad Mehdi; Behzadi, Bahareh; Borjian-Boroujeni, Sara; Sarvari, Ali; Lakpour, Niknam; Akhondi, Mohammad Mehdi
2014-04-01
Spermatogonial Stem Cells (SSCs) are a well-documented source of adult multipotent stem cells; they are the foundation of spermatogenesis in the testis throughout adult life, balancing self-renewal and differentiation. The aim of this study was to assess the effect of percoll density gradients and differential plating on the enrichment of undifferentiated type A spermatogonia in dissociated cell suspensions of goat testes. Additionally, we evaluated the separated percoll fractions and the differential-plating samples at different times for cell number, viability, and purification rate of goat SSCs in culture. Testicular cells were successfully isolated from one-month-old goat testes using two-step enzymatic digestion, followed by two purification protocols: differential plating with different culture times (3, 4, 5, and 6 hr) and discontinuous percoll density gradients (20, 28, 30, and 32%). The percentage of undifferentiated SSCs (PGP9.5-positive) in each method was compared using ANOVA, and the highest corresponding values of the two methods were compared by t-test using Sigma Stat (ver. 3.5). The highest PGP9.5-positive (94.6±0.4) and lowest c-Kit-positive (25.1±0.7) percentages in the percoll method were achieved in the 32% percoll gradient (p ≤ 0.001), while the corresponding rates in the differential plating method (highest PGP9.5-positive, 81.3±1.1; lowest c-Kit, 17.1±1.4) were achieved after 5 hr of culture (p < 0.001). Enrichment of undifferentiated type A spermatogonia using percoll was more efficient than the differential plating method (p < 0.001). Percoll density gradients and differential plating were efficient and fast methods for enrichment of type A spermatogonial stem cells from goat testes.
Examination of the steps leading up to the physical developer process for developing fingerprints.
Wilson, Jeffrey Daniel; Cantu, Antonio A; Antonopoulos, George; Surrency, Marc J
2007-03-01
This is a systematic study that examines several acid prewashes and water rinses on paper bearing latent prints before its treatment with a silver physical developer. Specimens or items processed with this method are usually pretreated with an acid wash to neutralize calcium carbonate from the paper before the treatment with a physical developer. Two different acids at varying concentrations were tested on fingerprints. Many different types of paper were examined in order to determine which acid prewash was the most beneficial. Various wash times as well as the addition of a water rinse step before the development were also examined. A pH study was included that monitored the acidity of the solution during the wash step. Scanning electron microscopy was used to verify surface calcium levels for the paper samples throughout the experiment. Malic acid at a concentration of 2.5% proved to be an ideal acid for most papers, providing good fingerprint development with minimal background development. Water rinses were deemed unnecessary before physical development.
Xiong, Ai-Sheng; Yao, Quan-Hong; Peng, Ri-He; Li, Xian; Fan, Hui-Qin; Cheng, Zong-Ming; Li, Yi
2004-07-07
Chemical synthesis of DNA sequences provides a powerful tool for modifying genes and for studying gene function, structure and expression. Here, we report a simple, high-fidelity and cost-effective PCR-based two-step DNA synthesis (PTDS) method for synthesis of long segments of DNA. The method involves two steps. (i) Synthesis of individual fragments of the DNA of interest: ten to twelve 60mer oligonucleotides with 20-bp overlaps are mixed, and a PCR reaction is carried out with the high-fidelity DNA polymerase Pfu to produce DNA fragments that are approximately 500 bp in length. (ii) Synthesis of the entire sequence of the DNA of interest: five to ten PCR products from the first step are combined and used as the template for a second PCR reaction using the high-fidelity DNA polymerase Pyrobest, with the two outermost oligonucleotides as primers. Compared with previously published methods, the PTDS method is rapid (5-7 days) and suitable for synthesizing long segments of DNA (5-6 kb) with high G + C contents, repetitive sequences or complex secondary structures. Thus, the PTDS method provides an alternative tool for synthesizing and assembling long genes with complex structures. Using the newly developed PTDS method, we have successfully obtained several genes of interest with sizes ranging from 1.0 to 5.4 kb.
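The overlap-assembly idea behind the first PTDS step, adjacent 60-mer oligonucleotides sharing 20-bp overlaps, can be illustrated with a toy in-silico assembly. The sequence is a random stand-in, and the real step is a polymerase extension reaction, not string joining:

```python
import random

OVERLAP = 20

def assemble(oligos):
    """Join oligos whose first OVERLAP bases match the 3' end of the growing
    sequence, mimicking how overlaps template the extension in step (i)."""
    seq = oligos[0]
    for oligo in oligos[1:]:
        assert seq[-OVERLAP:] == oligo[:OVERLAP], "overlap mismatch"
        seq += oligo[OVERLAP:]
    return seq

rng = random.Random(7)
target = "".join(rng.choice("ACGT") for _ in range(220))
# Cut the target into 60-mers stepping by 40 so that neighbours overlap by 20 bp.
oligos = [target[i:i + 60] for i in range(0, len(target) - OVERLAP, 40)]
```

With unique overlaps, the ordered oligos reconstruct the target exactly; in practice overlap uniqueness is what makes the one-pot PCR assembly converge on the intended product.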
Video-Recorded Validation of Wearable Step Counters under Free-living Conditions.
Toth, Lindsay P; Park, Susan; Springer, Cary M; Feyerabend, McKenzie D; Steeves, Jeremy A; Bassett, David R
2018-06-01
The purpose of this study was to determine the accuracy of 14 step-counting methods under free-living conditions. Twelve adults (mean ± SD age, 35 ± 13 yr) wore a chest harness that held a GoPro camera pointed down at the feet during all waking hours for 1 d. The GoPro continuously recorded video of all steps taken throughout the day. Simultaneously, participants wore two StepWatch (SW) devices on each ankle (all programmed with different settings), one activPAL on each thigh, four devices at the waist (Fitbit Zip, Yamax Digi-Walker SW-200, New Lifestyles NL-2000, and ActiGraph GT9X (AG)), and two devices on the dominant and nondominant wrists (Fitbit Charge and AG). The GoPro videos were downloaded to a computer and researchers counted steps using a hand tally device, which served as the criterion method. The SW devices recorded between 95.3% and 102.8% of actual steps taken throughout the day (P > 0.05). Eleven step-counting methods estimated less than 100% of actual steps: the Fitbit Zip, Yamax Digi-Walker SW-200, and AG with the moving average vector magnitude algorithm on both wrists recorded 71% to 91% of steps (P > 0.05), whereas the activPAL, New Lifestyles NL-2000, and AG (without low-frequency extension (no-LFE), moving average vector magnitude) worn on the hip, and Fitbit Charge recorded 69% to 84% of steps (P < 0.05). Five methods estimated more than 100% of actual steps: the AG (no-LFE) on both wrists recorded 109% to 122% of steps (P > 0.05), whereas the AG (LFE) on both wrists and the hip recorded 128% to 220% of steps (P < 0.05). Across all waking hours of 1 d, step counts differ between devices. The SW, regardless of settings, was the most accurate method of counting steps.
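The accuracy metric used throughout the abstract, a device's daily count expressed as a percentage of the hand-tallied criterion, can be sketched with made-up counts (not the study's data):

```python
def percent_of_actual(device_steps, actual_steps):
    """Device count as a percentage of the video-tallied criterion count."""
    return 100.0 * device_steps / actual_steps

# Hypothetical daily totals for three wear locations against one criterion day.
criterion = 10000
devices = {"ankle_sw": 10150, "waist_pedometer": 8300, "wrist_lfe": 16400}
accuracy = {name: percent_of_actual(n, criterion) for name, n in devices.items()}
over = [name for name, pct in accuracy.items() if pct > 100.0]
```

Values above 100% correspond to over-counting (as reported for the LFE wrist devices) and values below 100% to under-counting (as reported for the waist pedometers).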
The Keyword Method of Vocabulary Acquisition: An Experimental Evaluation.
ERIC Educational Resources Information Center
Griffith, Douglas
The keyword method of vocabulary acquisition is a two-step mnemonic technique for learning vocabulary terms. The first step, the acoustic link, generates a keyword based on the sound of the foreign word. The second step, the imagery link, ties the keyword to the meaning of the item to be learned, via an interactive visual image or other…
Autocorrelation techniques for soft photogrammetry
NASA Astrophysics Data System (ADS)
Yao, Wu
In this thesis, research is carried out on image processing, image-matching search strategies, feature type and image matching, and optimal window size in image matching. For comparison, the soft photogrammetry package SoftPlotter is used. Two aerial photographs from the Iowa State University campus high flight 94 are scanned into digital format. In order to create a stereo model from them, interior orientation, single-photograph rectification and stereo rectification are performed. Two new image-matching methods, multi-method image matching (MMIM) and unsquare-window image matching, are developed and compared. MMIM is used to determine the optimal window size in image matching. Twenty-four check points from four different types of ground features are used to check the results from image matching. Comparison between these four types of ground feature shows that the methods developed here improve the speed and precision of image matching. A process called direct transformation is described and compared with the multiple steps in image processing. The results from image processing are consistent with those from SoftPlotter. A modified LAN image header is developed and used to store the information about the stereo model and image matching. A comparison is also made between cross-correlation image matching (CCIM), least-difference image matching (LDIM) and least-squares image matching (LSIM). The quality of image matching in relation to ground features is compared using two methods developed in this study: the coefficient surface for CCIM and the difference surface for LDIM. To reduce the amount of computation in image matching, the best-track search algorithm, developed in this research, is used instead of the whole-range search algorithm.
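Two of the matching scores compared in the thesis, cross-correlation (CCIM) and least-difference (LDIM), can be sketched over flattened windows. The 3x3 patch values are arbitrary, chosen to show that normalized cross-correlation is insensitive to a uniform brightness offset while the absolute-difference score is not:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two flattened, equal-size windows."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

def least_difference(a, b):
    """Sum of absolute grey-level differences (lower means a better match)."""
    return sum(abs(x - y) for x, y in zip(a, b))

patch = [10, 12, 11, 40, 42, 41, 90, 95, 93]        # 3x3 window, row-major
same_brighter = [v + 20 for v in patch]             # same texture, brighter
score = ncc(patch, same_brighter)
diff = least_difference(patch, same_brighter)
```

In a matching search, the window position maximizing `ncc` (the thesis's coefficient surface) or minimizing `least_difference` (its difference surface) is taken as the conjugate point.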
Photodiodes integration on a suspended ridge structure VOA using 2-step flip-chip bonding method
NASA Astrophysics Data System (ADS)
Kim, Seon Hoon; Kim, Tae Un; Ki, Hyun Chul; Kim, Doo Gun; Kim, Hwe Jong; Lim, Jung Woon; Lee, Dong Yeol; Park, Chul Hee
2015-01-01
In this work, we demonstrate a VOA integrated with mPDs, based on silica-on-silicon PLC and flip-chip bonding technologies. The suspended ridge structure was applied to reduce power consumption; the device achieves an attenuation of 30 dB in open-loop operation with a power consumption below 30 W. We applied a two-step flip-chip bonding method using passive alignment to perform high-density multi-chip integration on the VOA with eutectic AuSn solder bumps. The average bonding strength of the two-step flip-chip bonding method was about 90 gf.
Surface energy and surface stress on vicinals by revisiting the Shuttleworth relation
NASA Astrophysics Data System (ADS)
Hecquet, Pascal
2018-04-01
In 1998 [Surf. Sci. 412/413, 639 (1998)], we showed that the step stress on vicinals varies as 1/L, L being the distance between steps, while the inter-step interaction energy primarily follows a 1/L^2 law from the well-known Marchenko-Parshin model. In this paper, we give a better understanding of the interaction term of the step stress. The step stress is calculated with respect to the nominal surface stress. Consequently, we calculate the diagonal surface stresses both in the vicinal system (x, y, z), where z is normal to the vicinal, and in the projected system (x, b, c), where b is normal to the nominal terrace. Moreover, we calculate the surface stresses using two methods: the first, called the 'Zero' method, from the surface pressure forces, and the second, called the 'One' method, by homogeneously deforming the vicinal in the parallel direction, x or y, and calculating the surface energy excess proportional to the deformation. Using the 'One' method on the vicinal Cu(0 1 M), we find that the step deformations, due to the applied deformation, vary as 1/L by the same factor for the tensor directions bb and cb, and by twice that factor for the parallel direction yy. Because the surface stress normal to the vicinal vanishes, the variation of the step stress in the direction yy is better described using only the step deformation in the same direction. We revisit the Shuttleworth formula: while the variation of the step stress in the direction xx is the same between the two methods, the variation in the direction yy is higher by 76% for the 'Zero' method with respect to the 'One' method. In addition to the step energy, we confirm that the variation of the step stress must be taken into account to understand the equilibrium of vicinals when they are not deformed.
Do placebo based validation standards mimic real batch products behaviour? Case studies.
Bouabidi, A; Talbi, M; Bouklouze, A; El Karbane, M; Bourichi, H; El Guezzar, M; Ziemons, E; Hubert, Ph; Rozet, E
2011-06-01
Analytical method validation is a mandatory step to evaluate the ability of developed methods to provide accurate results for their routine application. Validation usually involves validation standards or quality control samples that are prepared in placebo or reconstituted matrix made of a mixture of all the ingredients composing the drug product except the active substance or the analyte under investigation. However, one of the main concerns with this approach is that it may miss an important source of variability that comes from the manufacturing process. The question that remains at the end of the validation step concerns the transferability of the quantitative performance from validation standards to real, authentic drug product samples. In this work, this topic is investigated through three case studies. Three analytical methods were validated using the commonly spiked placebo validation standards at several concentration levels, as well as using samples coming from authentic batches (tablets and syrups). The results showed that, depending on the type of response function used as the calibration curve, there were various degrees of difference in the accuracy of the results obtained with the two types of samples. Nonetheless, the use of spiked placebo validation standards was shown to mimic relatively well the quantitative behaviour of the analytical methods with authentic batch samples. Adding these authentic batch samples into the validation design may help the analyst select and confirm the most fit-for-purpose calibration curve and thus increase the accuracy and reliability of the results generated by the method in routine application. Copyright © 2011 Elsevier B.V. All rights reserved.
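The accuracy comparison between spiked placebo standards and authentic batch samples ultimately rests on recovery and relative-bias computations at each concentration level. A minimal sketch with illustrative names (not the validation software used in the case studies):

```python
import numpy as np

def accuracy_summary(nominal, measured):
    """Per-level mean recovery (%) and relative bias (%) for validation
    standards. `nominal` has one entry per concentration level; `measured`
    is a (levels x replicates) array of back-calculated concentrations."""
    nominal = np.asarray(nominal, dtype=float)
    measured = np.asarray(measured, dtype=float)
    recovery = 100.0 * measured.mean(axis=1) / nominal   # mean recovery per level
    rel_bias = recovery - 100.0                          # % relative bias
    return recovery, rel_bias
```

Computing these summaries separately for placebo-based standards and authentic batch samples makes the transferability question quantitative.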
Luting of CAD/CAM ceramic inlays: direct composite versus dual-cure luting cement.
Kameyama, Atsushi; Bonroy, Kim; Elsen, Caroline; Lührs, Anne-Katrin; Suyama, Yuji; Peumans, Marleen; Van Meerbeek, Bart; De Munck, Jan
2015-01-01
The aim of this study was to investigate bonding effectiveness in direct restorations. Luting with a two-step self-etch adhesive and a light-cure resin composite was compared with luting with a conventional dual-cure resin cement and a two-step etch-and-rinse adhesive. Class-I box-type cavities were prepared. Identical ceramic inlays were designed and fabricated with a computer-aided design/computer-aided manufacturing (CAD/CAM) device. The inlays were seated with Clearfil SE Bond/Clearfil AP-X (Kuraray Medical) or ExciTE F DSC/Variolink II (Ivoclar Vivadent), each by two operators (five teeth per group). The inlays were stored in water for one week at 37°C, after which micro-tensile bond strength testing was conducted. The micro-tensile bond strength of the direct composite was significantly higher than that obtained with conventional luting, independent of the operator (P<0.0001). Pre-testing failures were only observed with the conventional method. High-power light-curing of a direct composite may be a viable alternative for luting lithium disilicate glass-ceramic CAD/CAM restorations.
Ren, Yan; Yang, Min; Li, Qian; Pan, Jay; Chen, Fei; Li, Xiaosong; Meng, Qun
2017-01-01
Objectives To introduce multilevel repeated measures (RM) models and compare them with multilevel difference-in-differences (DID) models in assessing the linear relationship between the length of the policy intervention period and healthcare outcomes (dose–response effect) for data from a stepped-wedge design with a hierarchical structure. Design The implementation of national essential medicine policy (NEMP) in China was a stepped-wedge-like design of five time points with a hierarchical structure. Using one key healthcare outcome from the national NEMP surveillance data as an example, we illustrate how a series of multilevel DID models and one multilevel RM model can be fitted to answer some research questions on policy effects. Setting Routinely and annually collected national data on China from 2008 to 2012. Participants 34 506 primary healthcare facilities in 2675 counties of 31 provinces. Outcome measures Agreement and differences in estimates of dose–response effect and variation in such effect between the two methods on the logarithm-transformed total number of outpatient visits per facility per year (LG-OPV). Results The estimated dose–response effect was approximately 0.015 according to four multilevel DID models and precisely 0.012 from one multilevel RM model. Both types of model estimated an increase in LG-OPV by 2.55 times from 2009 to 2012, but 2–4.3 times larger SEs of those estimates were found by the multilevel DID models. Similar estimates of mean effects of covariates and random effects of the average LG-OPV among all levels in the example dataset were obtained by both types of model. Significant variances in the dose–response among provinces, counties and facilities were estimated, and the ‘lowest’ or ‘highest’ units by their dose–response effects were pinpointed only by the multilevel RM model. 
Conclusions For examining the dose–response effect based on data from multiple time points with a hierarchical structure and stepped-wedge-like designs, multilevel RM models are more efficient, convenient and informative than multilevel DID models. PMID:28399510
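For intuition, the basic difference-in-differences contrast underlying the multilevel DID models can be sketched as below. A real analysis of the NEMP data would require a mixed-model package to handle the province/county/facility hierarchy; this toy two-period estimator ignores that structure entirely:

```python
import numpy as np

def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Classic two-period difference-in-differences estimate:
    (post-minus-pre change in the treated group) minus
    (post-minus-pre change in the control group)."""
    return ((np.mean(y_treat_post) - np.mean(y_treat_pre))
            - (np.mean(y_ctrl_post) - np.mean(y_ctrl_pre)))
```

The multilevel RM approach instead models the outcome across all time points jointly, which is why it can estimate a single dose–response slope with smaller standard errors.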
Practical training framework for fitting a function and its derivatives.
Pukrittayakamee, Arjpolson; Hagan, Martin; Raff, Lionel; Bukkapatnam, Satish T S; Komanduri, Ranga
2011-06-01
This paper describes a practical framework for using multilayer feedforward neural networks to simultaneously fit both a function and its first derivatives. This framework involves two steps. The first step is to train the network to optimize a performance index, which includes both the error in fitting the function and the error in fitting the derivatives. The second step is to prune the network by removing neurons that cause overfitting and then to retrain it. This paper describes two novel types of overfitting that are only observed when simultaneously fitting both a function and its first derivatives. A new pruning algorithm is proposed to eliminate these types of overfitting. Experimental results show that the pruning algorithm successfully eliminates the overfitting and produces the smoothest responses and the best generalization among all the training algorithms that we have tested.
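The first step's performance index, mixing the function-fitting error with the derivative-fitting error, can be sketched as follows. The weighting parameter `rho` and the exact form of the index are assumptions for illustration and may differ from the paper's formulation:

```python
import numpy as np

def combined_index(y_true, y_pred, dy_true, dy_pred, rho=1.0):
    """Performance index combining the mean squared error in fitting the
    function with the mean squared error in fitting its first derivatives;
    rho weights the derivative term relative to the function term."""
    ef = np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)
    ed = np.mean((np.asarray(dy_true) - np.asarray(dy_pred)) ** 2)
    return ef + rho * ed
```

Training the network to minimize such an index is what forces the fitted surface to match both values and slopes of the target function.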
Optimizing Fungal DNA Extraction Methods from Aerosol Filters
NASA Astrophysics Data System (ADS)
Jimenez, G.; Mescioglu, E.; Paytan, A.
2016-12-01
Fungi and fungal spores can be picked up from terrestrial ecosystems, transported long distances, and deposited into marine ecosystems. It is important to study dust-borne fungal communities because they can stay viable and affect the ambient microbial populations, which are key players in biogeochemical cycles. One of the challenges of studying dust-borne fungal populations is that aerosol samples contain low biomass, making extraction of good-quality DNA very difficult. The aim of this project was to increase DNA yield by optimizing DNA extraction methods. We tested aerosol samples collected from Haifa, Israel (polycarbonate filter), Monterey Bay, CA (quartz filter), and Bermuda (quartz filter). Using the Qiagen DNeasy Plant Kit, we tested the effect of altering bead-beating times and incubation times, adding three freeze-thaw steps, initially washing the filters with buffers for various lengths of time before using the kit, and adding a step with 30 minutes of sonication in 65°C water. Adding three freeze-thaw steps, adding a sonication step, washing with phosphate-buffered saline overnight, and increasing incubation time to two hours, in that order, resulted in the highest increase in DNA for samples from Israel (polycarbonate filter). DNA yield of samples from Monterey Bay (quartz filter) increased about 5-fold when washing with buffers overnight (phosphate-buffered saline and potassium phosphate buffer), adding a sonication step, and adding three freeze-thaw steps. Samples collected in Bermuda (quartz filter) had the highest increase in DNA yield from increasing incubation to 2 hours, increasing bead-beating time to 6 minutes, and washing with buffers overnight (phosphate-buffered saline and potassium phosphate buffer). Our results show that DNA yield can be increased by altering various steps of the Qiagen DNeasy Plant Kit protocol, but different types of filters collected at different sites respond differently to alterations.
These results can serve as a starting point for further development of fungal DNA extraction methods. Developing these methods will be important as dust storms are predicted to increase due to more frequent droughts and anthropogenic activity, and the fungal communities of these dust storms are currently relatively understudied.
Exploring the Arabidopsis Proteome: Influence of Protein Solubilization Buffers on Proteome Coverage
Marondedze, Claudius; Wong, Aloysius; Groen, Arnoud; Serrano, Natalia; Jankovic, Boris; Lilley, Kathryn; Gehring, Christoph; Thomas, Ludivine
2014-01-01
The study of proteomes provides new insights into stimulus-specific responses of protein synthesis and turnover, and the role of post-translational modifications at the systems level. Due to the diverse chemical nature of proteins and shortcomings in the analytical techniques used in their study, only a partial display of the proteome is achieved in any study, and this holds particularly true for plant proteomes. Here we show that different solubilization and separation methods have profound effects on the resulting proteome. In particular, we observed that the type of detergent employed in the solubilization buffer preferentially enriches proteins in different functional categories. These include proteins with a role in signaling, transport, response to temperature stimuli and metabolism. These data may introduce a functional bias into comparative analyses. In order to obtain broader coverage, we propose a two-step solubilization protocol with first a detergent-free buffer and then a second step utilizing a combination of two detergents to solubilize proteins. PMID:25561235
Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.
Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen
2017-12-01
In this article, we study the problem of testing the mean vectors of high-dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Unlike existing tests, which rely heavily on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the power of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature-screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may assist in detecting disease-associated gene sets. The proposed methods have been implemented in the R package HDtest and are available on CRAN. © 2017, The International Biometric Society.
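A toy version of a one-sample maximum-type test with a bootstrap-calibrated critical value is sketched below. It uses a Gaussian multiplier bootstrap on centred data as a simple stand-in for the paper's parametric bootstrap; it is not the HDtest implementation:

```python
import numpy as np

def max_test_pvalue(X, B=500, seed=0):
    """One-sample max-type test of H0: mean = 0 for an (n x p) data matrix X.
    The null distribution of the max statistic is approximated by a Gaussian
    multiplier bootstrap on the centred data, so no structural assumption
    on the covariance matrix is needed."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    se = X.std(axis=0, ddof=1) / np.sqrt(n)
    t_obs = np.max(np.abs(X.mean(axis=0)) / se)   # observed max statistic
    Xc = X - X.mean(axis=0)                       # centre to mimic H0
    t_boot = np.empty(B)
    for b in range(B):
        w = rng.standard_normal(n)                # Gaussian multipliers
        m = (w[:, None] * Xc).mean(axis=0)
        t_boot[b] = np.max(np.abs(m) / se)
    return float(np.mean(t_boot >= t_obs))
```

Because the bootstrap resamples preserve the empirical covariance of the data, the critical value adapts automatically to covariance heterogeneity.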
A general probabilistic model for group independent component analysis and its estimation methods
Guo, Ying
2012-01-01
Independent component analysis (ICA) has become an important tool for analyzing data from functional magnetic resonance imaging (fMRI) studies. ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix and the uncertainty in between-subjects variability in fMRI data. We present a general probabilistic ICA (PICA) model that can accommodate varying group structures of multi-subject spatio-temporal processes. An advantage of the proposed model is that it can flexibly model various types of group structures in different underlying neural source signals and under different experimental conditions in fMRI studies. A maximum likelihood method is used for estimating this general group ICA model, and we propose two EM algorithms to obtain the ML estimates. The first is an exact EM algorithm, which provides an exact E-step and an explicit noniterative M-step. The second is a variational approximation EM algorithm, which is computationally more efficient than the exact EM. In simulation studies, we first compare the performance of the proposed general group PICA model with the existing probabilistic group ICA approach. We then compare the two proposed EM algorithms and show that the variational approximation EM achieves comparable accuracy to the exact EM with significantly less computation time. An fMRI data example is used to illustrate application of the proposed methods. PMID:21517789
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahunbay, Ergun E., E-mail: eahunbay@mcw.edu; Ates,
Purpose: In a situation where a couch shift for patient positioning is not preferred or prohibited (e.g., MR-linac), segment aperture morphing (SAM) can address target dislocation and deformation. For IMRT/VMAT with flattening-filter-free (FFF) beams, however, the SAM method would lead to an adverse translational dose effect due to the beam unflattening. Here the authors propose a new two-step process to address both the translational effect of FFF beams and the target deformation. Methods: The replanning method consists of an offline and an online step. The offline step is to create a series of preshifted plans (PSPs) obtained by a so-called “warm start” optimization (starting optimization from the original plan, rather than from scratch) at a series of isocenter shifts. The PSPs all have the same number of segments with very similar shapes, since the warm start optimization only adjusts the MLC positions instead of regenerating them. In the online step, a new plan is obtained by picking the closest PSP or linearly interpolating the MLC positions and the monitor units of the closest PSPs for the shift determined from the image of the day. This two-step process is completely automated and almost instantaneous (no optimization or dose calculation needed). The previously developed SAM algorithm is then applied for daily deformation. The authors tested the method on sample prostate and pancreas cases. Results: The two-step interpolation method can account for the adverse dose effects from FFF beams, while SAM corrects for the target deformation. The plan interpolation method is effective in diminishing the unflat-beam effect and may allow reducing the required number of PSPs. The whole process takes the same time as the previously reported SAM process (5–10 min).
Conclusions: The new two-step method plus SAM can address both the translational effects of FFF beams and target deformation, and can be executed in full automation, except for the delineation of the target contour required by the SAM process.
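The online step can be sketched as a linear interpolation between the two pre-shifted plans that bracket the measured shift. The data layout here (a single shift axis, one flat leaf-position vector per plan) is a deliberate simplification for illustration, not the clinical system's plan format:

```python
import numpy as np

def interpolate_plan(shifts, mlc_banks, mus, shift):
    """Linearly interpolate MLC leaf positions and monitor units between the
    two pre-shifted plans (PSPs) bracketing the isocentre shift of the day.
    shifts: sorted 1-D array of precomputed shift values (one axis);
    mlc_banks: (n_plans, n_leaves) leaf positions; mus: (n_plans,) MUs."""
    shifts = np.asarray(shifts, dtype=float)
    i = int(np.clip(np.searchsorted(shifts, shift), 1, len(shifts) - 1))
    t = (shift - shifts[i - 1]) / (shifts[i] - shifts[i - 1])
    t = float(np.clip(t, 0.0, 1.0))               # clamp outside the PSP grid
    leaves = (1 - t) * mlc_banks[i - 1] + t * mlc_banks[i]
    mu = (1 - t) * mus[i - 1] + t * mus[i]
    return leaves, mu
```

Because the warm-start PSPs share segment counts and similar shapes, leaf-by-leaf interpolation between neighbouring plans is well defined; no optimization or dose calculation is needed online.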
Recent advances in high-order WENO finite volume methods for compressible multiphase flows
NASA Astrophysics Data System (ADS)
Dumbser, Michael
2013-10-01
We present two new families of better than second-order accurate Godunov-type finite volume methods for the solution of nonlinear hyperbolic partial differential equations with nonconservative products. One family is based on a high-order Arbitrary-Lagrangian-Eulerian (ALE) formulation on moving meshes, which allows the material contact wave to be resolved very sharply when the mesh is moved at the speed of the material interface. The other family is based on a high-order Adaptive Mesh Refinement (AMR) strategy, where the mesh can be strongly refined in the vicinity of the material interface. Both classes of schemes have several building blocks in common, in particular: a high-order WENO reconstruction operator to obtain high order of accuracy in space; an element-local space-time Galerkin predictor step, which evolves the reconstruction polynomials in time and allows high order of accuracy in time to be reached in a single step; and a path-conservative approach to treat the nonconservative terms of the PDE. We show applications of both methods to the Baer-Nunziato model for compressible multiphase flows.
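As a small illustration of the WENO idea (nonlinear weights that bias the reconstruction toward the smoother stencil), here is a third-order, left-biased WENO reconstruction of interface values from cell averages. The schemes in the paper are of higher order and far more elaborate; this sketch only shows the mechanism:

```python
import numpy as np

def weno3_reconstruct(v, eps=1e-6):
    """Third-order left-biased WENO reconstruction of the interface value
    v_{i+1/2} from cell averages v[i-1], v[i], v[i+1] (interior cells only).
    Returns one value per interior cell."""
    vm, v0, vp = v[:-2], v[1:-1], v[2:]
    p0 = -0.5 * vm + 1.5 * v0          # candidate from stencil {i-1, i}
    p1 = 0.5 * v0 + 0.5 * vp           # candidate from stencil {i, i+1}
    b0 = (v0 - vm) ** 2                # smoothness indicators
    b1 = (vp - v0) ** 2
    a0 = (1.0 / 3.0) / (eps + b0) ** 2  # nonlinear weights from the
    a1 = (2.0 / 3.0) / (eps + b1) ** 2  # linear weights 1/3 and 2/3
    return (a0 * p0 + a1 * p1) / (a0 + a1)
```

Near a discontinuity one smoothness indicator blows up, so the corresponding stencil's weight collapses and the reconstruction stays essentially non-oscillatory.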
USDA-ARS?s Scientific Manuscript database
The current methods of euthanizing neonatal piglets are raising concerns from the public and scientists. Our experiment tests the use of a two-step euthanasia method using nitrous oxide (N2O) for six minutes and then carbon dioxide (CO2) as a more humane way to euthanize piglets compared to just usi...
Non-smooth Hopf-type bifurcations arising from impact–friction contact events in rotating machinery
Mora, Karin; Budd, Chris; Glendinning, Paul; Keogh, Patrick
2014-01-01
We analyse the novel dynamics arising in a nonlinear rotor dynamic system by investigating the discontinuity-induced bifurcations corresponding to collisions with the rotor housing (touchdown bearing surface interactions). The simplified Föppl/Jeffcott rotor with clearance and mass unbalance is modelled by a two degree of freedom impact–friction oscillator, as appropriate for a rigid rotor levitated by magnetic bearings. Two types of motion observed in experiments are of interest in this paper: no contact and repeated instantaneous contact. We study how these are affected by damping and stiffness present in the system using analytical and numerical piecewise-smooth dynamical systems methods. By studying the impact map, we show that these types of motion arise at a novel non-smooth Hopf-type bifurcation from a boundary equilibrium bifurcation point for certain parameter values. A local analysis of this bifurcation point gives us a complete understanding of this behaviour in a general setting. The analysis identifies criteria for the existence of such smooth and non-smooth bifurcations, which is an essential step towards achieving reliable and robust controllers that can take compensating action. PMID:25383034
Liang, Xiaoping; Zhang, Qizhi; Jiang, Huabei
2006-11-10
We show that a two-step reconstruction method can be adapted to improve the quantitative accuracy of the refractive index reconstruction in phase-contrast diffuse optical tomography (PCDOT). We also describe the possibility of imaging tissue glucose concentration with PCDOT. In this two-step method, we first use our existing finite-element reconstruction algorithm to recover the position and shape of a target. We then use the position and size of the target as a priori information to reconstruct a single value of the refractive index within the target and background regions using a region reconstruction method. Due to the extremely low contrast available in the refractive index reconstruction, we incorporate a data normalization scheme into the two-step reconstruction to combat the associated low signal-to-noise ratio. Through a series of phantom experiments we find that this two-step reconstruction method can considerably improve the quantitative accuracy of the refractive index reconstruction. The results show that the relative error of the reconstructed refractive index is reduced from 20% to within 1.5%. We also demonstrate the possibility of PCDOT for recovering glucose concentration using these phantom experiments.
NASA Astrophysics Data System (ADS)
Yang, Erqi; Qi, Xiaosi; Xie, Ren; Bai, Zhongchen; Jiang, Yang; Qin, Shuijie; Zhong, Wei; Du, Youwei
2018-06-01
It is widely recognized that constructing multiple interface structures to enhance interface polarization is highly beneficial for the attenuation of electromagnetic (EM) waves. Here, a novel "203" type of heterostructured nanohybrid, consisting of two-dimensional (2D) MoS2 nanosheets, zero-dimensional (0D) Fe3O4 nanoparticles, and three-dimensional (3D) carbon layers, was elaborately designed and successfully synthesized by a two-step method: Fe3O4 nanoparticles were deposited onto the surface of few-layer MoS2 nanosheets by a hydrothermal method, followed by a carbonation process using chemical vapor deposition. Compared with the "20" type MoS2-Fe3O4, the as-prepared heterostructured "203" type MoS2-Fe3O4-C ternary nanohybrid exhibited remarkably enhanced EM and microwave absorption properties. The minimum reflection loss (RL) of the ternary nanohybrid reached -53.03 dB at 14.4 GHz with a matching thickness of 7.86 mm. Moreover, the excellent EM wave absorption of the as-prepared ternary nanohybrid was attributed to the quarter-wavelength matching model. Therefore, a simple and effective route is proposed to produce MoS2-based mixed-dimensional van der Waals heterostructures, providing a new platform for the design and production of high-performance microwave absorption materials.
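The quarter-wavelength matching analysis rests on the standard metal-backed-slab transmission-line model for reflection loss, which can be sketched as below. The permittivity and permeability values used in the usage example are made up for illustration; they are not measured values for MoS2-Fe3O4-C:

```python
import cmath
import math

def reflection_loss_db(eps_r, mu_r, f_hz, d_m):
    """Reflection loss (dB) of a metal-backed absorber slab of thickness d_m
    at frequency f_hz, from the standard transmission-line model used with
    the quarter-wavelength matching condition. eps_r and mu_r are the complex
    relative permittivity and permeability (convention eps' - j*eps'')."""
    c = 2.998e8                                   # speed of light, m/s
    k = 1j * 2 * math.pi * f_hz * d_m / c         # electrical thickness factor
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(k * cmath.sqrt(mu_r * eps_r))
    return 20 * math.log10(abs((z_in - 1) / (z_in + 1)))
```

For a lossy slab (nonzero imaginary part), RL dips sharply near the thickness where the slab is a quarter wavelength inside the material, which is the matching model invoked in the abstract.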
Monolithic prestressed ceramic devices and method for making same
NASA Technical Reports Server (NTRS)
Haertling, Gene H. (Inventor)
1996-01-01
Monolithic, internally asymmetrically stress-biased electrically active ceramic devices and a method for making same are disclosed. The first step in the method of the present invention is to fabricate a ceramic element having first and second opposing surfaces. Next, only the first surface is chemically reduced by heat treatment in a reducing atmosphere. This produces a concave-shaped, internally asymmetrically stress-biased ceramic element and an electrically conducting, chemically reduced layer on the first surface, which serves as one of the electrodes of the device. Another electrode can be deposited on the second surface to complete the device. In another embodiment of the present invention, two dome-shaped ceramic devices can be placed together to form a completed clamshell structure or an accordion-type structure. In a further embodiment, the clamshell or accordion-type structures can be placed on top of one another. In another embodiment, a pair of dome-shaped ceramic devices having opposing temperature characteristics can be placed on top of each other to produce an athermalized ceramic device.
Purification of functionalized DNA origami nanostructures.
Shaw, Alan; Benson, Erik; Högberg, Björn
2015-05-26
The high programmability of DNA origami has provided tools for precise manipulation of matter at the nanoscale. This manipulation of matter opens up the possibility to arrange functional elements for a diverse range of applications that utilize the nanometer precision provided by these structures. However, the realization of functionalized DNA origami still suffers from imperfect production methods, in particular in the purification step, where excess material is separated from the desired functionalized DNA origami. In this article we demonstrate and optimize two purification methods that have not previously been applied to DNA origami. In addition, we provide a systematic study comparing the purification efficacy of these and five other commonly used purification methods. Three types of functionalized DNA origami were used as model systems in this study. DNA origami was patterned with either small molecules, antibodies, or larger proteins. With the results of our work we aim to provide a guideline in quality fabrication of various types of functionalized DNA origami and to provide a route for scalable production of these promising tools.
2012-01-01
Background While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial block-face scanning electron microscopic data. Previously developed texture-based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block-face scanning electron microscopic imaging. The first step is a random forest patch classification step operating directly on 2D image patches. The second step consists of contour-pair classification. At the final step, we introduce a method to automatically seed a level set operation with output from previous steps. Results We report the accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false-positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that use of contour-pair classification and level set operations improves segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions We demonstrated that texture-based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline.
While we used a random-forest based patch classifier to recognize texture, it would be possible to replace this with other texture identifiers, and we plan to explore this in future work. PMID:22321695
Development and Validation of a New Reliable Method for the Diagnosis of Avian Botulism.
Le Maréchal, Caroline; Rouxel, Sandra; Ballan, Valentine; Houard, Emmanuelle; Poezevara, Typhaine; Bayon-Auboyer, Marie-Hélène; Souillard, Rozenn; Morvan, Hervé; Baudouard, Marie-Agnès; Woudstra, Cédric; Mazuet, Christelle; Le Bouquin, Sophie; Fach, Patrick; Popoff, Michel; Chemaly, Marianne
2017-01-01
Liver is a reliable matrix for laboratory confirmation of avian botulism using real-time PCR. Here, we developed, optimized, and validated the analytical steps preceding PCR to maximize the detection of Clostridium botulinum group III in avian liver. These pre-PCR steps included enrichment incubation of the whole liver (maximum 25 g) at 37°C for at least 24 h in an anaerobic chamber and DNA extraction using an enzymatic digestion step followed by a DNA purification step. Conditions of sample storage before analysis appear to have a strong effect on the detection of group III C. botulinum strains, and we recommend storage at temperatures below -18°C. Short-term storage at 5°C is possible for up to 24 h, but a decrease in sensitivity was observed at 48 h of storage at this temperature. Analysis of whole livers (maximum 25 g) is required and pooling samples before enrichment culturing must be avoided. Pooling is, however, possible before or after DNA extraction under certain conditions. Whole livers should be 10-fold diluted in enrichment medium and homogenized using a Pulsifier® blender (Microgen, Surrey, UK) instead of a conventional paddle blender. Spiked liver samples showed a limit of detection of 5 spores/g liver for types C and D and 250 spores/g for type E. Using the method developed here, the analysis of 268 samples from 73 suspected outbreaks showed 100% specificity and 95.35% sensitivity compared with other PCR-based methods considered as reference. The mosaic type C/D was the most common neurotoxin type found in examined samples, which included both wild and domestic birds.
Maurer, Willi; Jones, Byron; Chen, Ying
2018-05-10
In a 2×2 crossover trial for establishing average bioequivalence (ABE) of a generic agent and a currently marketed drug, the recommended approach to hypothesis testing is the two one-sided tests (TOST) procedure, which depends, among other things, on the estimated within-subject variability. The power of this procedure, and therefore the sample size required to achieve a minimum power, depends on having a good estimate of this variability. When there is uncertainty, it is advisable to plan the design in two stages, with an interim sample size reestimation after the first stage, using an interim estimate of the within-subject variability. One method and three variations of doing this were proposed by Potvin et al. Using simulation, Potvin et al assessed the operating characteristics, including the empirical type I error rate, of the four variations (called Methods A, B, C, and D) and recommended Methods B and C. However, none of these four variations formally controls the type I error rate of falsely claiming ABE, even though the amount of inflation produced by Method C was considered acceptable. A major disadvantage of assessing type I error rate inflation using simulation is that unless all possible scenarios for the intended design and analysis are investigated, it is impossible to be sure that the type I error rate is controlled. Here, we propose an alternative, principled method of sample size reestimation that is guaranteed to control the type I error rate at any given significance level. This method uses a new version of the inverse-normal combination of p-values test, in conjunction with standard group sequential techniques, that is more robust to large deviations in initial assumptions regarding the variability of the pharmacokinetic endpoints. The sample size reestimation step is based on significance levels and power requirements that are conditional on the first-stage results. This necessitates a discussion and exploitation of the peculiar properties of the power curve of the TOST procedure. We illustrate our approach with an example based on a real ABE study and compare the operating characteristics of our proposed method with those of Method B of Potvin et al. Copyright © 2018 John Wiley & Sons, Ltd.
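The TOST decision rule underlying this procedure can be sketched as follows. This is a minimal illustration on the log scale with the standard 0.80–1.25 bioequivalence limits; it uses a normal approximation (a real analysis uses the Student t distribution with the design's degrees of freedom) and is not the authors' two-stage method. The function name and inputs are illustrative only.

```python
from math import log
from statistics import NormalDist

def tost_abe(mean_diff, se, alpha=0.05, lo=log(0.8), hi=log(1.25)):
    """Two one-sided tests (TOST) for average bioequivalence.

    mean_diff: estimated treatment difference on the log scale.
    se: standard error of that difference.
    ABE is concluded only if BOTH one-sided null hypotheses are rejected.
    """
    nd = NormalDist()
    p_lower = 1.0 - nd.cdf((mean_diff - lo) / se)  # H01: diff <= log(0.8)
    p_upper = nd.cdf((mean_diff - hi) / se)        # H02: diff >= log(1.25)
    return max(p_lower, p_upper) < alpha

# A small estimated difference passes with low variability but fails with
# high variability, which is why a good variability estimate drives power.
print(tost_abe(0.02, 0.05), tost_abe(0.02, 0.20))
```

Rejecting both one-sided nulls at level alpha is equivalent to the 100(1 - 2*alpha)% confidence interval for the difference lying entirely within the equivalence limits.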
Rousanoglou, Elissavet; Noutsos, Konstantinos; Bayios, Ioannis; Boudolos, Konstantinos
2014-01-01
The purpose of this study was to examine the differences in the ground reaction force (GRF) patterns between elite and novice players during two types of handball shots, as well as the relationships between throwing performance and the GRF variables. Ball velocity and throwing accuracy were measured during jump shots and 3-step shots performed by 15 elite and 15 novice players. The GRF pattern was recorded for the vertical and the anterior-posterior GRF components (Kistler forceplate type-9281, 750Hz). One-way ANOVA was used for the group differences and the Pearson coefficient for the correlation between throwing performance and GRF variables (SPSS 21.0, p ≤ 0.05). The elite players performed better in both types of shot. Both groups developed consistent and similar GRF patterns, except for the novices’ inconsistent Fz pattern in the 3-step shot. The GRF variables differed significantly between groups in the 3-step shot (p ≤ 0.05). Significant correlations were found only for ball velocity and predominantly for the novice players during the 3-step shot (p ≤ 0.05). The results possibly highlight a shortage in the novice ability to effectively reduce their forward momentum so as to provide a stable base of support for the momentum transfer up the kinetic chain, a situation that may predispose athletes to injury. PMID:25031672
The effects of a two-step transfer on a visuomotor adaptation task.
Aiken, Christopher A; Pan, Zhujun; Van Gemmert, Arend W A
2017-11-01
The literature has shown robust effects of transfer-of-learning to the contralateral side and more recently transfer-of-learning effects to a new effector type on the ipsilateral side. Few studies have investigated the effects of transfer-of-learning when skills transfer to both a new effector type and the contralateral side (two-step transfer). The purpose of the current study was to investigate the effects of two-step transfer and to examine which aspects of the movement transfer and which aspects do not. Individuals practiced a 30° visual rotation task with either the dominant or non-dominant limb and with either the use of the fingers and wrist or elbow and shoulder. Following practice, participants performed the task with the untrained effector type on the contralateral side. Results showed that initial direction error and trajectory length transferred from the dominant to the non-dominant side and movement time transferred from the elbow and shoulder condition to the wrist and finger conditions irrespective of which limb was used during practice. The results offer a unique perspective on the current theoretical and practical implications for transfer-of-learning and are further discussed in this paper.
The Natural History of Biocatalytic Mechanisms
Nath, Neetika; Mitchell, John B. O.; Caetano-Anollés, Gustavo
2014-01-01
Phylogenomic analysis of the occurrence and abundance of protein domains in proteomes has recently shown that the α/β architecture is probably the oldest fold design. This holds important implications for the origins of biochemistry. Here we explore structure-function relationships addressing the use of chemical mechanisms by ancestral enzymes. We test the hypothesis that the oldest folds used the most mechanisms. We start by tracing biocatalytic mechanisms operating in metabolic enzymes along a phylogenetic timeline of the first appearance of homologous superfamilies of protein domain structures from CATH. A total of 335 enzyme reactions were retrieved from MACiE and were mapped over fold age. We define a mechanistic step type as one of the 51 mechanistic annotations given in MACiE, and each step of each of the 335 mechanisms was described using one or more of these annotations. We find that the first two folds, the P-loop containing nucleotide triphosphate hydrolase and the NAD(P)-binding Rossmann-like homologous superfamilies, were α/β architectures responsible for introducing 35% (18/51) of the known mechanistic step types. We find that these two oldest structures in the phylogenomic analysis of protein domains introduced many mechanistic step types that were later combinatorially spread in catalytic history. The most common mechanistic step types included fundamental building blocks of enzyme chemistry: “Proton transfer,” “Bimolecular nucleophilic addition,” “Bimolecular nucleophilic substitution,” and “Unimolecular elimination by the conjugate base.” They were associated with the most ancestral fold structure typical of P-loop containing nucleotide triphosphate hydrolases. Over half of the mechanistic step types were introduced in the evolutionary timeline before the appearance of structures specific to diversified organisms, during a period of architectural diversification.
The other half unfolded gradually after organismal diversification and during a period that spanned ∼2 billion years of evolutionary history. PMID:24874434
Determination of nonylphenol and nonylphenol ethoxylates in wastewater using MEKC.
Núñez, Laura; Wiedmer, Susanne K; Parshintsev, Jevgeni; Hartonen, Kari; Riekkola, Marja-Liisa; Tadeo, José L; Turiel, Esther
2009-06-01
Nonylphenol ethoxylates (NPEO(x)) are surfactants which are used worldwide and can be transformed in the environment by microorganisms to form nonylphenol (NP). Analysis of these compounds was carried out with micellar electrokinetic capillary chromatography (MEKC). Different parameters such as background electrolyte (BGE) solution, pH, type of surfactant, and sample stacking were optimized. The use of CHES (20 mM, pH 9.1) in combination with 50 mM sodium cholate as a surfactant as BGE solution, together with sample stacking using 50 mM NaCl in the sample and an injection time of 20 s, provided the best separation of the compounds studied. The method was applied to the determination of target analytes in two types of sludge water coming from two steps of a wastewater treatment plant. Liquid-liquid extraction was carried out using toluene as solvent, resulting in recoveries around 100% for all studied analytes. The presence of NPEO(x) was observed in the first step of the sludge water treatment, based on migration time and UV spectra. Identification was confirmed using tandem MS. LOQs of the studied compounds were in the range of 12.7 to 30.8 ng/mL, which is satisfactory for the analysis of real wastewater samples.
Mapping Saldana's Coding Methods onto the Literature Review Process
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Frels, Rebecca K.; Hwang, Eunjin
2016-01-01
Onwuegbuzie and Frels (2014) provided a step-by-step guide illustrating how discourse analysis can be used to analyze literature. However, more works of this type are needed to address the way that counselor researchers conduct literature reviews. Therefore, we present a typology for coding and analyzing information extracted for literature…
Purifying Nucleic Acids from Samples of Extremely Low Biomass
NASA Technical Reports Server (NTRS)
La Duc, Myron; Osman, Shariff; Venkateswaran, Kasthuri
2008-01-01
A new method is able to circumvent the bias to which one commercial DNA extraction method falls prey with regard to the lysing of certain types of microbial cells, resulting in a truncated spectrum of microbial diversity. By prefacing the protocol with glass-bead-beating agitation (mechanically lysing a much more encompassing array of cell types and spores), the resulting microbial diversity detection is greatly enhanced. In preliminary studies, a commercially available automated DNA extraction method was effective at delivering total DNA yield, but only the non-hardy members of the bacterial community were represented in clone libraries, suggesting that this method was ineffective at lysing the hardier cell types. To circumvent such a bias, yet another extraction method was devised. In this technique, samples are first subjected to a stringent bead-beating step and then processed via standard protocols. Prior to being loaded into extraction vials, samples are placed in micro-centrifuge bead tubes containing 50 µL of commercially produced lysis solution. After inverting several times, the tubes are agitated at maximum speed for two minutes. Following agitation, the tubes are centrifuged at 10,000 × g for one minute. At this time, the aqueous volumes are removed from the bead tubes and loaded into extraction vials to be further processed via the extraction regime. The new method couples two independent methodologies in such a way as to yield the highest concentration of PCR-amplifiable DNA, with consistent and reproducible results and with the most accurate and encompassing report of species richness.
NASA Astrophysics Data System (ADS)
Ivanova, A.; Tokmakov, A.; Lebedeva, K.; Roze, M.; Kaulachs, I.
2017-08-01
Organometal halide perovskites are promising materials for low-cost, high-efficiency solar cells. The method of perovskite layer deposition and the interfacial layers play an important role in determining the efficiency of perovskite solar cells (PSCs). In this paper, we demonstrate inverted planar perovskite solar cells whose perovskite layers are deposited by a two-step modified interdiffusion method and by a one-step method. We also demonstrate how PSC parameters change upon doping of the charge transport layers (CTL). We used dimethyl sulfoxide (DMSO) as the dopant for the hole transport layer (PEDOT:PSS), while for the electron transport layer, [6,6]-phenyl-C61-butyric acid methyl ester (PCBM), we used N,N-dimethyl-N-octadecyl(3-aminopropyl)trimethoxysilyl chloride (DMOAP). The highest main PSC parameters (PCE, EQE, VOC) were obtained for cells prepared by the one-step method with fast crystallization and doped CTLs, but higher fill factor (FF) and shunt resistance (Rsh) values were obtained for cells prepared by the two-step method with undoped CTLs.
NASA Astrophysics Data System (ADS)
Sun, Zhizhong; Niu, Xiaoping; Hu, Henry
In this work, a 5-step casting mold with different wall thicknesses (3, 5, 8, 12, and 20 mm) was designed, and squeeze casting of magnesium alloy AM60 was performed in a hydraulic press. The casting-die interfacial heat transfer coefficients (IHTC) in the 5-step casting were determined from experimental thermal history data throughout the die and inside the casting, recorded by fine type-K thermocouples. From the measured temperatures, heat flux and IHTC were evaluated using the polynomial curve fitting method. The results show that the wall thickness affects IHTC peak values significantly: the IHTC value for the thick step is higher than those for the thin steps.
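The polynomial curve fitting step can be illustrated with a minimal sketch of the inverse calculation: fit T(x) to interior thermocouple readings, extrapolate to the interface, and apply Fourier's law. All numbers here (conductivity, depths, temperatures) are synthetic placeholders, not values from the study.

```python
import numpy as np

# Synthetic thermocouple data near the casting-die interface (illustrative
# values only; k, depths, and temperatures are NOT from the study).
k = 35.0                                   # die thermal conductivity, W/(m*K)
x = np.array([0.002, 0.006, 0.010])        # thermocouple depths below surface, m
T = np.array([420.0, 400.0, 380.0])        # measured die temperatures, deg C

p = np.polyfit(x, T, 2)                    # polynomial fit of T(x)
T_die_surface = np.polyval(p, 0.0)         # extrapolate to the interface, x = 0
q = -k * np.polyval(np.polyder(p), 0.0)    # Fourier's law: surface heat flux, W/m^2

T_cast_surface = 580.0                     # casting-side surface temperature (synthetic)
h = q / (T_cast_surface - T_die_surface)   # IHTC, W/(m^2*K)
print(round(q), round(h, 1))
```

In practice the casting-side surface temperature is likewise inferred from interior thermocouples, and the fit is repeated at each time step to obtain the IHTC history.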
Basic features of boron isotope separation by SILARC method in the two-step iterative static model
NASA Astrophysics Data System (ADS)
Lyakhov, K. A.; Lee, H. J.
2013-05-01
In this paper we develop a new static model for boron isotope separation by the laser-assisted retardation of condensation (SILARC) method, on the basis of the model proposed by Jeff Eerkens. Our model is intended for the so-called two-step iterative scheme of isotope separation. This rather simple model helps in understanding the combined effect on boron separation by the SILARC method of all the important parameters and the relations between them. These parameters include the carrier gas, the molar fraction of BCl3 molecules in the carrier gas, laser pulse intensity, gas pulse duration, gas pressure and temperature in the reservoir and irradiation cells, optimal irradiation cell and skimmer chamber volumes, and optimal nozzle throughput. A method for finding optimal values of these parameters, based on a global minimum search of an objective function, is suggested. It turns out that the minimum of this objective function is directly related to the minimum of the total energy consumed and the total setup volume. Relations between nozzle throat area, IC volume, laser intensity, number of nozzles, number of vacuum pumps, and required isotope production rate are derived. Two types of industrial-scale irradiation cells are compared. The first has one large-throughput slit nozzle, while the second has numerous small nozzles arranged in parallel arrays for better overlap with the laser beam. It is shown that the latter significantly outperforms the former. It is argued that NO2 is the best carrier gas for boron isotope separation from the point of view of energy efficiency, and Ar from the point of view of setup compactness.
Ahn, Joonghee; Jung, Kyoung-Hwa; Son, Sung-Ae; Hur, Bock; Kwon, Yong-Hoon
2015-01-01
Objectives: This study examined the effects of additional acid etching on the dentin bond strength of one-step self-etch adhesives with different compositions and pH. The effect of ethanol wetting on the etched dentin bond strength of self-etch adhesives was also evaluated. Materials and Methods: Forty-two human permanent molars were classified into 21 groups according to the adhesive types (Clearfil SE Bond [SE, control]; G-aenial Bond [GB]; Xeno V [XV]; Beauti Bond [BB]; Adper Easy Bond [AE]; Single Bond Universal [SU]; All Bond Universal [AU]) and the dentin conditioning methods. Composite resins were placed on the dentin surfaces, and the teeth were sectioned. The microtensile bond strength was measured, and the failure mode of the fractured specimens was examined. The data were analyzed statistically using two-way ANOVA and Duncan's post hoc test. Results: In GB, XV and SE (pH ≤ 2), the bond strength was decreased significantly when the dentin was etched (p < 0.05). In BB, AE and SU (pH 2.4 - 2.7), additional etching did not affect the bond strength (p > 0.05). In AU (pH = 3.2), additional etching increased the bond strength significantly (p < 0.05). When adhesives were applied to the acid-etched dentin with ethanol-wet bonding, the bond strength was significantly higher than that of the no ethanol-wet bonding groups, and the incidence of cohesive failure was increased. Conclusions: The effect of additional acid etching on the dentin bond strength was influenced by the pH of one-step self-etch adhesives. Ethanol wetting on etched dentin could create a stronger bonding performance of one-step self-etch adhesives for acid-etched dentin. PMID:25671215
GWAS with longitudinal phenotypes: performance of approximate procedures
Sikorska, Karolina; Montazeri, Nahid Mostafavi; Uitterlinden, André; Rivadeneira, Fernando; Eilers, Paul HC; Lesaffre, Emmanuel
2015-01-01
Analysis of genome-wide association studies with longitudinal data using standard procedures, such as linear mixed model (LMM) fitting, leads to discouragingly long computation times. There is a need to speed up the computations significantly. In our previous work (Sikorska et al: Fast linear mixed model computations for genome-wide association studies with longitudinal data. Stat Med 2012; 32.1: 165–180), we proposed the conditional two-step (CTS) approach as a fast method providing an approximation to the P-value for the longitudinal single-nucleotide polymorphism (SNP) effect. In the first step a reduced conditional LMM is fit, omitting all the SNP terms. In the second step, the estimated random slopes are regressed on SNPs. The CTS has been applied to the bone mineral density data from the Rotterdam Study and proved to work very well even in unbalanced situations. In another article (Sikorska et al: GWAS on your notebook: fast semi-parallel linear and logistic regression for genome-wide association studies. BMC Bioinformatics 2013; 14: 166), we suggested semi-parallel computations, greatly speeding up fitting many linear regressions. Combining CTS with fast linear regression reduces the computation time from several weeks to a few minutes on a single computer. Here, we explore further the properties of the CTS both analytically and by simulations. We investigate the performance of our proposal in comparison with a related but different approach, the two-step procedure. It is analytically shown that for the balanced case, under mild assumptions, the P-value provided by the CTS is the same as from the LMM. For unbalanced data and in realistic situations, simulations show that the CTS method does not inflate the type I error rate and implies only a minimal loss of power. PMID:25712081
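The two-step flavor of such procedures can be illustrated with a deliberately simplified stand-in: per-subject OLS slopes replace the conditional LMM fit of step one, and those slope estimates are then regressed on genotype in step two. The data are simulated, and this is not the authors' CTS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_visits = 200, 5
t = np.arange(n_visits, dtype=float)            # visit times
snp = rng.integers(0, 3, n_subj)                # genotype dosage 0/1/2 (simulated)
true_slopes = 1.0 + 0.5 * snp + rng.normal(0.0, 0.1, n_subj)
y = true_slopes[:, None] * t + rng.normal(0.0, 0.2, (n_subj, n_visits))

# Step 1: per-subject slope estimates (a crude stand-in for the reduced LMM fit,
# which in the CTS omits all SNP terms).
est_slopes = np.array([np.polyfit(t, yi, 1)[0] for yi in y])

# Step 2: regress the estimated slopes on genotype (fast, per-SNP regression).
X = np.column_stack([np.ones(n_subj), snp])
coef, *_ = np.linalg.lstsq(X, est_slopes, rcond=None)
print(coef[1])   # estimated per-allele slope effect, close to the simulated 0.5
```

Because step one is fitted once while step two is a plain linear regression per SNP, the per-SNP cost collapses, which is what makes genome-wide scans with longitudinal phenotypes tractable.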
ERIC Educational Resources Information Center
Wang, Tianyou
2008-01-01
Von Davier, Holland, and Thayer (2004) laid out a five-step framework of test equating that can be applied to various data collection designs and equating methods. In the continuization step, they presented an adjusted Gaussian kernel method that preserves the first two moments. This article proposes an alternative continuization method that…
R&D on dental implants breakage
NASA Astrophysics Data System (ADS)
Croitoru, Sorin Mihai; Popovici, Ion Alexandru
2017-09-01
Most dental implants used for human dental prostheses are of the two-step type: the first step is implantation and, after several months of healing and osseointegration, the second step is prosthesis fixture. Dental implants and prostheses are meant to last a lifetime; still, there are unfortunate cases in which dental implants break. This paper studies the breakage of two-step dental implants and proposes a set of instruments for replacement and restoration of the broken implant. The first part of the paper sets out the input data of the study: the structure of the studied two-step dental implants, based on two Romanian patents, and the values of the loading forces found in practice and in specialty papers. In the second part of the paper, using DEFORM 2D™ FEM simulation software, worst-case scenarios of loading dental implants are studied in order to determine which zones and components of the dental implant set are affected (broken). The last part of the paper is dedicated to the design and presentation of a set of extracting and cutting tools used to restore the broken implant set.
NASA Astrophysics Data System (ADS)
Ding, Liang; Gao, Haibo; Liu, Zhen; Deng, Zongquan; Liu, Guangjun
2015-12-01
Identifying the mechanical property parameters of planetary soil based on terramechanics models using in-situ data obtained from autonomous planetary exploration rovers is both an important scientific goal and essential for control strategy optimization and high-fidelity simulations of rovers. However, identifying all the terrain parameters is a challenging task because of the nonlinear and coupled nature of the involved functions. Three parameter identification methods are presented in this paper to serve different purposes, based on an improved terramechanics model that takes into account the effects of slip, wheel lugs, etc. Parameter sensitivity and coupling of the equations are analyzed, and the parameters are grouped according to their sensitivity to the normal force, resistance moment and drawbar pull. An iterative identification method using the original integral model is developed first. In order to realize real-time identification, the model is then simplified by linearizing the normal and shearing stresses to derive decoupled closed-form analytical equations. Each equation contains one or two groups of soil parameters, making step-by-step identification of all the unknowns feasible. Experiments were performed using six different types of single wheels as well as a four-wheeled rover moving on planetary soil simulant. All the unknown model parameters were identified using the measured data and compared with the values obtained by conventional experiments. It is verified that the proposed iterative identification method provides improved accuracy, making it suitable for scientific studies of soil properties, whereas the step-by-step identification methods based on simplified models require less calculation time, making them more suitable for real-time applications. The models have a margin of error of less than 10% relative to the measured results when predicting the interaction forces and moments using the corresponding identified parameters.
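As a toy analogue of this identification strategy (not the paper's terramechanics model), a Bekker-type pressure-sinkage relation p = k * z**n can be linearized in log space so that the parameters decouple and follow from ordinary least squares, echoing in spirit the paper's simplification of the stress equations into closed-form expressions:

```python
import numpy as np

rng = np.random.default_rng(1)
k_true, n_true = 1.2e5, 1.1               # hypothetical pressure-sinkage parameters
z = np.linspace(0.005, 0.05, 20)          # sinkage, m
p = k_true * z**n_true * (1.0 + rng.normal(0.0, 0.01, z.size))  # noisy "measurements"

# Linearize: log p = log k + n * log z, so the two unknowns decouple and are
# recovered by ordinary least squares (analogous in spirit to the paper's
# linearized closed-form equations; NOT its actual stress model).
A = np.column_stack([np.ones(z.size), np.log(z)])
c, *_ = np.linalg.lstsq(A, np.log(p), rcond=None)
k_est, n_est = np.exp(c[0]), c[1]
print(k_est, n_est)   # recovered values close to k_true, n_true
```

A full terramechanics identification works the same way at heart: reshape the model until each equation isolates a small parameter group, then solve group by group against the measured forces and moments.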
A study of increasing radical density and etch rate using remote plasma generator system
NASA Astrophysics Data System (ADS)
Lee, Jaewon; Kim, Kyunghyun; Cho, Sung-Won; Chung, Chin-Wook
2013-09-01
To improve radical density without changing the electron temperature, a remote plasma generator (RPG) is applied, enabling multistep dissociation of polyatomic molecules. The RPG is installed on an inductively coupled processing reactor; electrons, positive ions, radicals, and polyatomic molecules generated in the RPG diffuse into the processing reactor, which further dissociates the polyatomic molecules using inductively coupled power. The multistep dissociation system therefore generates more radicals than a single-step system. The RPG was composed of two cylinder-type inductively coupled plasma (ICP) sources using 400 kHz RF power and nitrogen gas; the processing reactor used a two-turn antenna with 13.56 MHz RF power. Plasma density, electron temperature, and radical density were measured with an electrical probe and optical methods.
NASA Astrophysics Data System (ADS)
Malý, J.; Lampová, H.; Semerádtová, A.; Štofik, M.; Kováčik, L.
2009-09-01
This paper presents a synthesis of a novel nanoparticle label with selective biorecognition properties based on a biotinylated silver-dendrimer nanocomposite (AgDNC). Two types of labels, a biotin-AgDNC (bio-AgDNC) and a biotinylated AgDNC with a poly(ethylene)glycol spacer (bio-PEG-AgDNC), were synthesized from a generation 7 (G7) hydroxyl-terminated ethylenediamine-core-type (2-carbon core) PAMAM dendrimer (DDM) by an N,N'-dicyclohexylcarbodiimide (DCC) biotin coupling and a NaBH4 silver reduction method. Synthesized conjugates were characterized by several analytical methods, such as UV-vis, FTIR, AFM, TEM, ELISA, HABA assay and SPR. The results show that stable biotinylated nanocomposites can be formed either with internalized silver nanoparticles (AgNPs) in the DDM polymer backbone ('type I') or as externally protected ('type E'), depending on the molar ratio of the silver/DDM conjugate and the type of conjugate. Furthermore, the selective biorecognition function of the biotin is not affected by the AgNPs' synthesis step, which allows a potential application of silver nanocomposite conjugates as biospecific labels in various bioanalytical assays, or potentially as fluorescence cell biomarkers. An exploitation of the presented label in the development of electrochemical immunosensors is anticipated.
Petzold, Markus; Ehricht, Ralf; Slickers, Peter; Pleischl, Stefan; Brockmann, Ansgar; Exner, Martin; Monecke, Stefan; Lück, Christian
2017-06-01
Between 1 August and 6 September 2013, an outbreak of Legionnaires' disease (LD) with 78 cases confirmed by positive urinary antigen tests occurred in Warstein, North Rhine-Westphalia, Germany. Legionella (L.) pneumophila, serogroup (Sg) 1, monoclonal antibody (mAb) subgroup Knoxville, sequence type (ST) 345, was identified as the epidemic strain. This strain was isolated from seven patients. To detect the source of the infection, epidemiological typing of clinical and environmental strains was performed in two consecutive steps. First, strains were typed by monoclonal antibodies. Indistinguishable strains were further subtyped by sequence-based typing (SBT) which is the internationally recognized standard method for epidemiological genotyping of L. pneumophila. In an early stage of the outbreak investigation, many environmental isolates were found to belong to the mAb subgroup Knoxville, but to two different STs, namely to ST 345, the epidemic strain, and to ST 600. A majority of environmental isolates belonged to ST 600 whereas the epidemic ST 345 strain was less common in environmental samples. To rapidly distinguish both Knoxville strains, we applied a novel typing method based on DNA-hybridization on glass chips. The new assay can easily and rapidly discriminate L. pneumophila Sg 1 strains. Thus, we were able to quickly identify the sources harboring the epidemic strain, i.e., two cooling towers of different companies, the waste water treatment plants (WWTP) of the city and one company as well as water samples of the river Wester and its branches. Copyright © 2016 Elsevier GmbH. All rights reserved.
Fabrication of X-ray Microcalorimeter Focal Planes Composed of Two Distinct Pixel Types.
Wassell, E J; Adams, J S; Bandler, S R; Betancourt-Martinez, G L; Chiao, M P; Chang, M P; Chervenak, J A; Datesman, A M; Eckart, M E; Ewin, A J; Finkbeiner, F M; Ha, J Y; Kelley, R; Kilbourne, C A; Miniussi, A R; Sakai, K; Porter, F; Sadleir, J E; Smith, S J; Wakeham, N A; Yoon, W
2017-06-01
We are developing superconducting transition-edge sensor (TES) microcalorimeter focal planes for versatility in meeting specifications of X-ray imaging spectrometers including high count-rate, high energy resolution, and large field-of-view. In particular, a focal plane composed of two sub-arrays: one of fine-pitch, high count-rate devices and the other of slower, larger pixels with similar energy resolution, offers promise for the next generation of astrophysics instruments, such as the X-ray Integral Field Unit (X-IFU) instrument on the European Space Agency's Athena mission. We have based the sub-arrays of our current design on successful pixel designs that have been demonstrated separately. Pixels with an all gold X-ray absorber on 50 and 75 micron scales where the Mo/Au TES sits atop a thick metal heatsinking layer have shown high resolution and can accommodate high count-rates. The demonstrated larger pixels use a silicon nitride membrane for thermal isolation, thinner Au and an added bismuth layer in a 250 micron square absorber. To tune the parameters of each sub-array requires merging the fabrication processes of the two detector types. We present the fabrication process for dual production of different X-ray absorbers on the same substrate, thick Au on the small pixels and thinner Au with a Bi capping layer on the larger pixels to tune their heat capacities. The process requires multiple electroplating and etching steps, but the absorbers are defined in a single ion milling step. We demonstrate methods for integrating heatsinking of the two types of pixel into the same focal plane consistent with the requirements for each sub-array, including the limiting of thermal crosstalk. We also discuss fabrication process modifications for tuning the intrinsic transition temperature (Tc) of the bilayers for the different device types through variation of the bilayer thicknesses. The latest results on these "hybrid" arrays will be presented.
Fabrication of X-ray Microcalorimeter Focal Planes Composed of Two Distinct Pixel Types
NASA Technical Reports Server (NTRS)
Wassell, Edward J.; Adams, Joseph S.; Bandler, Simon R.; Betancour-Martinez, Gabriele L; Chiao, Meng P.; Chang, Meng Ping; Chervenak, James A.; Datesman, Aaron M.; Eckart, Megan E.; Ewin, Audrey J.;
2016-01-01
We develop superconducting transition-edge sensor (TES) microcalorimeter focal planes for versatility in meeting the specifications of X-ray imaging spectrometers, including high count rate, high energy resolution, and large field of view. In particular, a focal plane composed of two subarrays: one of fine pitch, high count-rate devices and the other of slower, larger pixels with similar energy resolution, offers promise for the next generation of astrophysics instruments, such as the X-ray Integral Field Unit instrument on the European Space Agency's ATHENA mission. We have based the subarrays of our current design on successful pixel designs that have been demonstrated separately. Pixels with an all-gold X-ray absorber on 50 and 75 micron pitch, where the Mo/Au TES sits atop a thick metal heatsinking layer, have shown high resolution and can accommodate high count rates. The demonstrated larger pixels use a silicon nitride membrane for thermal isolation, thinner Au, and an added bismuth layer in a 250-micron square absorber. To tune the parameters of each subarray requires merging the fabrication processes of the two detector types. We present the fabrication process for dual production of different X-ray absorbers on the same substrate, thick Au on the small pixels and thinner Au with a Bi capping layer on the larger pixels to tune their heat capacities. The process requires multiple electroplating and etching steps, but the absorbers are defined in a single ion milling step. We demonstrate methods for integrating the heatsinking of the two types of pixel into the same focal plane consistent with the requirements for each subarray, including the limiting of thermal crosstalk. We also discuss fabrication process modifications for tuning the intrinsic transition temperature (T(sub c)) of the bilayers for the different device types through variation of the bilayer thicknesses. The latest results on these 'hybrid' arrays will be presented.
A computational method for sharp interface advection.
Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje
2016-11-01
We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM ® extension and is published as open source.
Design improvement of a pump wear ring labyrinth seal
NASA Technical Reports Server (NTRS)
Rhode, David L.; Morrison, G. L.; Ko, S. H.; Waughtal, S. P.
1987-01-01
The investigation was successful in obtaining two improved designs for the impeller wear ring seal of the liquid hydrogen turbopump of interest. A finite difference computer code was used extensively in a parametric computational study to determine a cavity configuration with high flow resistance due to turbulence dissipation. These two designs, along with that currently used, were fabricated and tested. The improved designs were denoted Type O and Type S. The measurements showed that Type O and Type S gave 67 and 30 percent reductions in leakage over the current design, respectively. It was found that the number of cavities, the step height and the presence of a small stator groove are quite important design features. Also, the tooth thickness is of some significance. Finally, the tooth height and an additional large cavity cut out from the stator (upstream of the step) are of negligible importance.
Branching Patterns and Stepped Leaders in an Electric-Circuit Model for Creeping Discharge
NASA Astrophysics Data System (ADS)
Sakaguchi, Hidetsugu; Kourkouss, Sahim M.
2010-06-01
We construct a two-dimensional electric circuit model for creeping discharge. Two types of discharge, surface corona and surface leader, are modeled by a two-step function of conductance. Branched patterns of surface leaders surrounded by the surface corona appear in numerical simulation. The fractal dimension of branched discharge patterns is calculated as voltage and capacitance are varied. We find that surface leaders often grow stepwise in time, as is observed in the stepped leaders of lightning.
A two-step spin crossover mononuclear iron(II) complex with a [HS-LS-LS] intermediate phase.
Bonnet, Sylvestre; Siegler, Maxime A; Costa, José Sánchez; Molnár, Gábor; Bousseksou, Azzedine; Spek, Anthony L; Gamez, Patrick; Reedijk, Jan
2008-11-21
The two-step spin crossover of a new mononuclear iron(II) complex is studied by magnetic, crystallographic and calorimetric methods, revealing two successive first-order phase transitions and an ordered intermediate phase built by the repetition of the unprecedented [HS-LS-LS] motif.
Study of the Charge Transfer Process of LaNi5 Type Electrodes in Ni-MH Batteries
NASA Astrophysics Data System (ADS)
Le, Xuan Que; Nguyen, Phu Thuy
2002-12-01
During the charge process of a LaNi5-type electrode, hydrogen is reversibly absorbed on the electrode surface. The process consists of two principal steps. The first reaction step occurs at the negatively charged solid/liquid interface, where a high static electric field makes the double-layer structure more compact. Charge transfer under this high electric field depends on many factors, principally the composition of the electrode materials. The effects of Co, Fe and Mn substituents at different concentrations were studied comparatively using electrochemical techniques. Interface capacitance-voltage (C-V) results were analyzed with respect to the Mott-Schottky relation, and optimal contents of some additives are discussed. Some advantages of the applied electrochemical methods were confirmed. The mechanism of charge transfer and of reversible hydrogen storage in the crystal structure of the batteries is discussed; with the proposed mechanism, the difference in the magnetic behavior of the electrode materials before and after the charge-discharge process can be explained more explicitly.
Attention Guiding in Multimedia Learning
ERIC Educational Resources Information Center
Jamet, Eric; Gavota, Monica; Quaireau, Christophe
2008-01-01
Comprehension of an illustrated document can involve complex visual scanning in order to locate the relevant information on the screen when this is evoked in spoken explanations. The present study examined the effects of two types of attention-guiding means (color change or step-by-step presentation of diagram elements synchronized with a spoken…
A facile synthesis of the basic steroidal skeleton using a Pauson-Khand reaction as a key step.
Kim, Do Han; Kim, Kwang; Chung, Young Keun
2006-10-13
A high-yield synthesis of steroid-type molecules under mild reaction conditions is achieved in two steps involving nucleophilic addition of alkynyl cerium reagent to an easily enolizable carbonyl compound (beta-tetralone) followed by an intramolecular Pauson-Khand reaction.
Fratz-Berilla, Erica J; Ketcham, Stephanie A; Parhiz, Hamideh; Ashraf, Muhammad; Madhavarao, Chikkathur N
2017-12-01
Human β-glucuronidase (GUS; EC 3.2.1.31) is a lysosomal enzyme that catalyzes the hydrolysis of β-d-glucuronic acid residues from the non-reducing termini of glycosaminoglycans. Impairment in GUS function leads to the metabolic disorder mucopolysaccharidosis type VII, also known as Sly syndrome. We produced GUS from a CHO cell line grown in suspension in a 15 L perfused bioreactor and developed a three-step purification procedure that yields ∼99% pure enzyme with a recovery of more than 40%. The method can be completed in two days and has the potential to be integrated into a continuous manufacturing scheme. Published by Elsevier Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qi, E-mail: wqtjmu@gmail.com; Xiong, Bin, E-mail: herrxiong@126.com; Zheng, ChuanSheng, E-mail: hqzcsxh@sina.com
Objective: This retrospective study reports our experience using splenic arterial particle embolization and coil embolization for the treatment of sinistral portal hypertension (SPH) in patients with and without gastric bleeding. Methods: From August 2009 to May 2012, 14 patients with SPH due to pancreatic disease were diagnosed and treated with splenic arterial embolization. Two different embolization strategies were applied: either combined distal splenic bed particle embolization and proximal splenic artery coil embolization in the same procedure for acute hemorrhage (one-step), or interval staged distal embolization and proximal embolization in the stable patient (two-step). The patients were clinically followed. Results: In 14 patients, splenic arterial embolization was successful. The one-step method was performed in three patients suffering from massive gastric bleeding, and the bleeding was relieved after embolization. The two-step method was used in 11 patients, who had chronic gastric variceal bleeding or gastric varices only. The gastric varices disappeared in the enhanced CT scan and the patients had no gastric bleeding during follow-up. Conclusions: Splenic arterial embolization, particularly the two-step method, proved feasible and effective for the treatment of SPH patients with gastric varices or gastric variceal bleeding.
An Optimal Method for Detecting Internal and External Intrusion in MANET
NASA Astrophysics Data System (ADS)
Rafsanjani, Marjan Kuchaki; Aliahmadipour, Laya; Javidi, Mohammad M.
A Mobile Ad hoc Network (MANET) is formed by a set of mobile hosts which communicate among themselves through radio waves. The hosts establish an infrastructure and cooperate to forward data in a multi-hop fashion without a central administration. Due to their communication type and resource constraints, MANETs are vulnerable to diverse types of attacks and intrusions. In this paper, we propose a method for preventing internal intrusion and detecting external intrusion in mobile ad hoc networks by using game theory. One optimal solution for reducing the resource consumption of external intrusion detection is to elect a leader for each cluster to provide the intrusion detection service to the other nodes in its cluster; we call this moderate mode. Moderate mode is only suitable when the probability of attack is low. Once the probability of attack is high, victim nodes should launch their own IDS to detect and thwart intrusions; we call this robust mode. The leader should not be a malicious or selfish node and must detect external intrusion in its cluster with minimum cost. Our proposed method has three steps: the first step builds trust relationships between nodes and estimates a trust value for each node to prevent internal intrusion; in the second step we propose an optimal method for leader election using the trust values; and in the third step we find the threshold value for notifying the victim node to launch its IDS once the probability of attack exceeds that value. In the first and third steps we apply Bayesian game theory. By using game theory, trust values and an honest leader, our method can effectively improve network security and performance while reducing resource consumption.
NASA Astrophysics Data System (ADS)
Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto
2017-10-01
This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical substructures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical substructure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior, including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and to predict the accuracy and stability of the simulations.
Two-level image authentication by two-step phase-shifting interferometry and compressive sensing
NASA Astrophysics Data System (ADS)
Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2018-01-01
A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. The participant who possesses only the first compressed signal can attempt to pass the low-level authentication. The application of Orthogonal Matching Pursuit CS algorithm reconstruction, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform results in a remarkable peak in the central location of the nonlinear correlation coefficient distribution of the recovered image and the standard certification image. The other participant, who possesses the second compressed signal, is then authorized to carry out the high-level authentication, in which both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.
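The two-step phase-shifting recovery underlying the wavefront reconstruction step can be sketched for a single point (an assumed textbook model, not the paper's full encryption pipeline): with interferograms I1 = A + B cos φ and I2 = A + B cos(φ + π/2) = A − B sin φ, and a known background A, the phase follows from an arctangent.

```python
import math

# Minimal two-step phase-shifting sketch (assumed pi/2 shift, known
# background A): phi = atan2(A - I2, I1 - A), since
#   I1 - A =  B*cos(phi)  and  A - I2 = B*sin(phi).

def recover_phase(i1, i2, a):
    return math.atan2(a - i2, i1 - a)

A, B = 2.0, 1.0
phi_true = 0.7
I1 = A + B * math.cos(phi_true)
I2 = A - B * math.sin(phi_true)
phi = recover_phase(I1, I2, A)   # recovers 0.7
```

In the actual method this recovery is applied pixel-wise to the decompressed interferograms before the inverse Fresnel transform.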
Bürger, Raimund; Diehl, Stefan; Mejías, Camilo
2016-01-01
The main purpose of the recently introduced Bürger-Diehl simulation model for secondary settling tanks was to resolve spatial discretization problems when both hindered settling and the phenomena of compression and dispersion are included. Straightforward time integration unfortunately means long computational times. The next step in the development is to introduce and investigate time-integration methods for more efficient simulations, in which other aspects such as implementation complexity and robustness are equally considered. This is done for batch settling simulations. The key findings are partly a new time-discretization method and partly its comparison with other specially tailored and standard methods. Several advantages and disadvantages of each method are given. One conclusion is that the new linearly implicit method is easier to implement than another (a semi-implicit method) but, based on two types of batch sedimentation tests, less efficient.
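The stability motivation for a linearly implicit scheme can be shown on a toy problem (our own illustration on the stiff linear test equation y' = -λy; the actual Bürger-Diehl discretization is for a nonlinear sedimentation PDE, but the trade-off is the same in spirit): an explicit Euler step blows up for large λ·Δt, while a linearly implicit step remains stable at the cost of solving a (here trivial) linear system.

```python
# Explicit vs. linearly implicit Euler on y' = -lam*y (illustrative only).

def explicit_euler(y, lam, dt):
    return y + dt * (-lam * y)

def linearly_implicit_euler(y, lam, dt):
    # Solve y_new = y + dt*(-lam*y_new); linear, so one division suffices.
    return y / (1.0 + lam * dt)

lam, dt, steps = 100.0, 0.05, 10     # dt far beyond the explicit limit 2/lam
ye = yi = 1.0
for _ in range(steps):
    ye = explicit_euler(ye, lam, dt)
    yi = linearly_implicit_euler(yi, lam, dt)
# ye grows without bound in magnitude; yi decays monotonically toward 0
```

For the nonlinear settling flux, "linearly implicit" means only a linearization of the flux is treated implicitly, which is why the abstract notes it is easier to implement than a fully semi-implicit scheme.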
The influence of tightening sequence and method on screw preload in implant superstructures.
Al-Sahan, Maha M; Al Maflehi, Nassr S; Akeel, Riyadh F
2014-01-01
This study evaluated the effect of six screw-tightening sequences and two tightening methods on the screw preload in implant-supported superstructures. The preload was measured using strain gauges following the screw tightening of a metal framework connected to four implants. The experiment included six sequences ([1] 1-2-3-4, [2] 4-2-3-1, [3] 4-3-1-2, [4] 1-4-2-3, [5] 2-3-4-1, and [6] 3-2-4-1), two methods (one-step, three-step), and five replications. Significant differences were found between tightening sequences and methods. In the three-step method, a higher total preload was found in sequences 2 (312 ± 85 N), 3 (246 ± 54 N), and 4 (310 ± 96 N). In the one-step method, a higher total preload was found in sequences 1 (286 ± 94 N), 5 (764 ± 142 N), and 6 (350 ± 69 N). It is concluded that the highest total screw preload was achieved when anterior implants of the superstructure were first tightened in one step, followed by posterior implants.
A more accurate analysis and design of coaxial-to-rectangular waveguide end launcher
NASA Astrophysics Data System (ADS)
Saad, Saad Michael
1990-02-01
An electromagnetic model is developed for the analysis of the coaxial-to-rectangular waveguide transition of the end-launcher type. The model describes the coupling mechanism in terms of an excitation probe which is fed by a transmission line intermediate section. The model is compared with a coupling loop model. The two models have a few analytical steps in common, but expressions for the probe model are easier to derive and compute. The two models are presented together with numerical examples and experimental verification. The superiority of the probe model is illustrated, and a design method yielding a maximum voltage standing wave ratio of 1.035 over 13 percent bandwidth is outlined.
Multigrid Methods for Fully Implicit Oil Reservoir Simulation
NASA Technical Reports Server (NTRS)
Molenaar, J.
1996-01-01
In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation multigrid methods have become an accepted technique. On the other hand, the fully implicit method is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations. Usually this is done by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations. 
A two-level FAS algorithm has been presented for the black-oil equations, and linear multigrid for two-phase flow problems with strong heterogeneities and anisotropies has been studied. Here we consider both possibilities. Moreover, we present a novel way of constructing the coarse grid correction operator in linear multigrid algorithms. This approach has the advantage that it preserves the sparsity pattern of the fine grid matrix and can be extended to systems of equations in a straightforward manner. We compare the linear and nonlinear multigrid algorithms by means of a numerical experiment.
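The two-level (two-grid) idea referred to above can be illustrated on the simplest possible setting (a textbook 1D Poisson sketch, not the reservoir-simulation code): damped Jacobi smoothing on the fine grid, full-weighting restriction of the residual, a coarse-grid solve, and interpolation of the correction back to the fine grid.

```python
# Two-grid correction scheme for -u'' = f on [0,1] with Dirichlet BCs
# (illustrative sketch; grid sizes and sweep counts are arbitrary choices).

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Damped Jacobi sweeps for the 3-point stencil of -u'' = f."""
    n = len(u)
    for _ in range(sweeps):
        un = u[:]
        for i in range(1, n - 1):
            un[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = un
    return u

def residual(u, f, h):
    n = len(u)
    r = [0.0] * n
    for i in range(1, n - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                     # pre-smooth
    r = residual(u, f, h)
    nc = len(u) // 2 + 1
    rc = [0.0] * nc                                   # full-weighting restriction
    for i in range(1, nc - 1):
        rc[i] = 0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
    ec = jacobi([0.0] * nc, rc, 2 * h, sweeps=2000)   # (nearly) exact coarse solve
    for i in range(1, len(u) - 1):                    # interpolate and correct
        u[i] += ec[i // 2] if i % 2 == 0 else 0.5 * (ec[i // 2] + ec[i // 2 + 1])
    return jacobi(u, f, h, sweeps=3)                  # post-smooth

n = 17
h = 1.0 / (n - 1)
f = [1.0] * n
u = [0.0] * n
for _ in range(5):
    u = two_grid(u, f, h)
# the discrete solution of -u'' = 1, u(0) = u(1) = 0, at x = 0.5 is 0.125
```

In a nonlinear multigrid (FAS) variant, the coarse problem is a restricted copy of the full nonlinear problem rather than a linear correction equation, which is what distinguishes the two options compared in the paper.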
NASA Astrophysics Data System (ADS)
Lopez-Sanchez, Marco; Llana-Fúnez, Sergio
2016-04-01
The understanding of creep behaviour in rocks requires knowledge of the 3D grain size distributions (GSD) that result from dynamic recrystallization processes during deformation. The methods to estimate the 3D grain size distribution directly (serial sectioning, synchrotron- or X-ray-based tomography) are expensive, time-consuming and, in most cases and at best, challenging. This means that in practice grain size distributions are mostly derived from 2D sections. Although there are a number of methods in the literature to derive the actual 3D grain size distributions from 2D sections, the most popular in highly deformed rocks is the so-called Saltykov method. It has, however, two major drawbacks: the method assumes no interaction between grains, which is not true in the case of recrystallized mylonites, and it uses histograms to describe distributions, which limits the quantification of the GSD. The first aim of this contribution is to test whether the interaction between grains in mylonites, i.e. random grain packing, significantly affects the GSDs estimated by the Saltykov method. We test this using the random resampling technique in a large data set (n = 12298). The full data set is built from several parallel thin sections that cut a completely dynamically recrystallized quartz aggregate in a rock sample from a Variscan shear zone in NW Spain. The results proved that the Saltykov method is reliable as long as the number of grains is large (n > 1000). Assuming that a lognormal distribution is an optimal approximation for the GSD in a completely dynamically recrystallized rock, we introduce an additional step to the Saltykov method, which allows estimating a continuous probability distribution function of the 3D grain size population. The additional step takes the midpoints of the classes obtained by the Saltykov method and fits a lognormal distribution with a trust region using a non-linear least squares algorithm. The new protocol is named the two-step method.
The conclusion of this work is that both the Saltykov and the two-step methods are accurate and simple enough to be useful in practice in rocks, alloys or ceramics with near-equant grains and expected lognormal distributions. The Saltykov method is particularly suitable to estimate the volumes of particular grain fractions, while the two-step method to quantify the full GSD (mean and standard deviation in log grain size). The two-step method is implemented in a free, open-source and easy-to-handle script (see http://marcoalopez.github.io/GrainSizeTools/).
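The "second step" of summarizing the unfolded class frequencies with a lognormal can be sketched as follows. Note the shortcut: instead of the non-linear least-squares trust-region fit used in GrainSizeTools, this illustration estimates the lognormal parameters from weighted log-space moments of the class midpoints, which is a simpler but related idea.

```python
import math

# Given Saltykov class midpoints and unfolded 3D frequencies, estimate the
# lognormal parameters (mean and std dev of log grain size) from weighted
# moments in log space (illustrative shortcut, not the paper's algorithm).

def lognormal_from_classes(midpoints, freqs):
    w = sum(freqs)
    mu = sum(f * math.log(m) for m, f in zip(midpoints, freqs)) / w
    var = sum(f * (math.log(m) - mu) ** 2 for m, f in zip(midpoints, freqs)) / w
    return mu, math.sqrt(var)

# Synthetic check: class midpoints placed at mu + sigma*z for mu=3, sigma=0.5,
# with frequencies proportional to the normal density at those points.
zs = (-2, -1, 0, 1, 2)
mids = [math.exp(3 + 0.5 * z) for z in zs]
frq = [math.exp(-z * z / 2) for z in zs]
mu, sigma = lognormal_from_classes(mids, frq)   # mu -> 3 exactly, sigma ~ 0.48
```

The coarse five-class binning biases sigma slightly below the generating value of 0.5, which mirrors why the paper fits a continuous distribution rather than reading parameters straight off the histogram.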
Trigonometrically-fitted Scheifele two-step methods for perturbed oscillators
NASA Astrophysics Data System (ADS)
You, Xiong; Zhang, Yonghui; Zhao, Jinxi
2011-07-01
In this paper, a new family of trigonometrically-fitted Scheifele two-step (TFSTS) methods for the numerical integration of perturbed oscillators is proposed and investigated. An essential feature of TFSTS methods is that they are exact in both the internal stages and the updates when solving the unperturbed harmonic oscillator y″ = -ω²y for known frequency ω. Based on linear operator theory, the necessary and sufficient conditions for TFSTS methods of up to order five are derived. Two specific TFSTS methods, of orders four and five respectively, are constructed and their stability and phase properties are examined. In the five numerical experiments carried out, the new integrators are shown to be more efficient and competitive than some well-known methods in the literature.
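The defining exactness property can be checked numerically: on y″ = -ω²y, one exact step follows from the rotation (variation-of-constants) matrix. The sketch below verifies that property; it is not an implementation of the paper's TFSTS update.

```python
import math

# One exact step for the unperturbed oscillator y'' = -omega^2 * y:
#   y(t+h) = cos(w h) y + sin(w h)/w * y'
#   y'(t+h) = -w sin(w h) y + cos(w h) y'
# A trigonometrically-fitted method reproduces this exactly for known omega.

def exact_step(y, v, omega, h):
    c, s = math.cos(omega * h), math.sin(omega * h)
    return c * y + (s / omega) * v, -omega * s * y + c * v

omega, h = 2.0, 0.1
y, v = 1.0, 0.0               # y(0)=1, y'(0)=0  ->  y(t) = cos(omega*t)
for _ in range(10):
    y, v = exact_step(y, v, omega, h)
# after t = 1.0: y = cos(2.0), v = -2*sin(2.0), to machine precision
```

A standard polynomial-based two-step method would accumulate phase error over these ten steps, which is precisely the error the trigonometric fitting removes.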
Riedel, Melanie; Speer, Karl; Stuke, Sven; Schmeer, Karl
2010-01-01
Since 2003, two new multipesticide residue methods for screening crops for a large number of pesticides, developed by Klein and Alder and Anastassiades et al. (Quick, Easy, Cheap, Effective, Rugged, and Safe; QuEChERS), have been published. Our intention was to compare these two important methods on the basis of their extraction efficiency, reproducibility, ruggedness, ease of use, and speed. In total, 70 pesticides belonging to numerous different substance classes were analyzed at two concentration levels by applying both methods, using five different representative matrixes. In the case of the QuEChERS method, the results of the three sample preparation steps (crude extract, extract after SPE, and extract after SPE and acidification) were compared with each other and with the results obtained with the Klein and Alder method. The extraction efficiencies of the QuEChERS method were far higher, and the sample preparation was much quicker when the last two steps were omitted. In most cases, the extraction efficiencies after the first step were approximately 100%. With extraction efficiencies of mostly less than 70%, the Klein and Alder method did not compare favorably. Some analytes caused problems during evaluation, mostly due to matrix influences.
Edge detection and localization with edge pattern analysis and inflection characterization
NASA Astrophysics Data System (ADS)
Jiang, Bo
2012-05-01
In general, edges are abrupt changes or discontinuities in the intensity distribution of a two-dimensional image signal. The accuracy of front-end edge detection methods in image processing impacts the eventual success of higher-level pattern analysis downstream. To generalize edge detectors designed from a simple ideal step-function model to the real distortions in natural images, this research proposes an edge detection algorithm based on one-dimensional edge pattern analysis using three basic edge patterns: ramp, impulse, and step. After mathematical analysis, general rules for edge representation based upon the classification of edge types into these three categories (ramp, impulse, and step; RIS) are developed to reduce detection and localization errors, especially the "double edge" effect, which is an important drawback of derivative methods. When applying one-dimensional edge patterns to two-dimensional image processing, a new issue naturally arises: the edge detector should correctly mark inflections or junctions of edges. Research on human visual perception of objects and information theory has pointed out that a pattern lexicon of "inflection micro-patterns" carries more information than a straight line, and research on scene perception suggests that contours carrying more information are more important in determining the success of scene categorization. Inflections and junctions are therefore extremely useful features, whose accurate description and reconstruction are significant in solving correspondence problems in computer vision. Accordingly, in addition to edge pattern analysis, inflection and junction characterization is utilized to extend the traditional derivative edge detection algorithm. Experiments were conducted to test these propositions about improvements in edge detection and localization accuracy.
The results support the idea that these edge detection method improvements are effective in enhancing the accuracy of edge detection and localization.
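The three basic one-dimensional edge patterns named above can be told apart with simple heuristics. The decision rules below (a single abrupt transition for a step, a return to baseline for an impulse, a gradual multi-sample transition for a ramp) are our own simplification for illustration, not the paper's classifier.

```python
# Toy classifier for the ramp / impulse / step (RIS) edge patterns on a
# 1D intensity profile (illustrative rules, not the published algorithm).

def classify_edge(profile, tol=1e-6):
    d = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]
    nz = [i for i, v in enumerate(d) if abs(v) > tol]
    if not nz:
        return "flat"
    if abs(profile[-1] - profile[0]) <= tol:
        return "impulse"     # signal returns to its starting level
    if len(nz) == 1:
        return "step"        # single abrupt transition
    return "ramp"            # gradual transition over several samples

print(classify_edge([0, 0, 0, 1, 1, 1]))         # step
print(classify_edge([0, 0, 1, 0, 0]))            # impulse
print(classify_edge([0, 0.25, 0.5, 0.75, 1]))    # ramp
```

A derivative detector responds twice to the impulse and many times along the ramp; recognizing the pattern as a whole is what lets the RIS rules suppress the "double edge" effect.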
a Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
He, H.; Khoshelham, K.; Fraser, C.
2017-09-01
Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording the fractional details are more discriminative and are applicable to object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification because of their mutual similarity compared with trees and vehicles. A finer classification of the lamp posts, street lights and traffic signs, based on the result of the first-step classification, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.
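The coarse-then-fine structure of the approach can be sketched with toy features (a nearest-centroid stand-in; the paper uses point feature histograms and bag-of-features, and all centroids below are invented for illustration): a first classifier separates the "pole-like" super-class from trees and vehicles, and a second classifier runs only inside that super-class.

```python
# Two-step classification sketch with hypothetical 2D feature vectors.

def nearest(label_centroids, x):
    return min(label_centroids,
               key=lambda kv: sum((a - b) ** 2 for a, b in zip(kv[1], x)))[0]

coarse = [("pole-like", (0.9, 0.1)), ("tree", (0.5, 0.8)), ("vehicle", (0.2, 0.3))]
fine = [("lamp post", (0.9, 0.05)), ("street light", (0.95, 0.2)),
        ("traffic sign", (0.8, 0.15))]

def two_step_classify(x):
    label = nearest(coarse, x)
    return nearest(fine, x) if label == "pole-like" else label

print(two_step_classify((0.52, 0.78)))   # tree
print(two_step_classify((0.93, 0.19)))   # street light
```

Deferring the hard lamp-post/street-light/traffic-sign decision to a second, specialized classifier is what gives the reported improvement over a flat one-step classifier.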
NASA Astrophysics Data System (ADS)
Pang, Xiaomin; Wang, Xiaotao; Dai, Wei; Li, Haibing; Wu, Yinong; Luo, Ercang
2018-06-01
A compact and high efficiency cooler working at liquid hydrogen temperature has many important applications such as cooling superconductors and mid-infrared sensors. This paper presents a two-stage gas-coupled pulse tube cooler system with a completely co-axial configuration. A stepped warm displacer, working as the phase shifter for both stages, has been studied theoretically and experimentally in this paper. Comparisons with the traditional phase shifter (double inlet) are also made. Compared with the double inlet type, the stepped warm displacer has the advantages of recovering the expansion work from the pulse tube hot end (especially from the first stage) and easily realizing an appropriate phase relationship between the pressure wave and volume flow rate at the pulse tube hot end. Experiments are then carried out to investigate the performance. The pressure ratio at the compression space is maintained at 1.37. For the double inlet type, the system obtains 1.1 W of cooling power at 20 K with 390 W of acoustic power input and the relative Carnot efficiency is only 3.85%, while for the stepped warm displacer type, the system obtains 1.06 W of cooling power at 20 K with only 224 W of acoustic power input and the relative Carnot efficiency can reach 6.5%.
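The quoted efficiency figures follow from the definition of relative Carnot efficiency, η_rel = (Q_c / W) / (T_c / (T_h − T_c)). The warm-end temperature is not stated in the abstract, so T_h = 293 K is our assumption; with it, both reported values are reproduced.

```python
# Relative Carnot efficiency check for the two phase-shifter configurations.
# T_h = 293 K is an assumed warm-end temperature (not given in the abstract).

def relative_carnot(q_cold, w_input, t_cold, t_hot=293.0):
    cop = q_cold / w_input                 # achieved coefficient of performance
    cop_carnot = t_cold / (t_hot - t_cold) # ideal (Carnot) COP
    return cop / cop_carnot

eta_double_inlet = relative_carnot(1.1, 390.0, 20.0)   # ~0.0385 (3.85%)
eta_displacer = relative_carnot(1.06, 224.0, 20.0)     # ~0.0646 (6.5%)
```

The comparison makes the source of the gain explicit: the displacer delivers nearly the same cooling power from roughly 60% of the acoustic input.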
Kustarci, Alper; Akdemir, Neslihan; Siso, Seyda Herguner; Altunbas, Demet
2008-01-01
Objectives The purpose of this study was to compare in vitro the amount of debris extruded apically from extracted teeth using K3 and Protaper rotary instruments and the manual step-back technique. Methods Forty-five human single-rooted mandibular premolar teeth were randomly divided into 3 groups. The teeth in the 3 groups were instrumented to the working length with K3 rotary instruments, Protaper rotary instruments, and K-type stainless steel instruments with the manual step-back technique, respectively. Debris extruded from the apical foramen was collected into centrifuge tubes and the amount was determined. The data obtained were analyzed using Kruskal-Wallis one-way analysis of variance and Mann-Whitney U tests, with P=.05 as the level for statistical significance. Results Statistically significant differences were observed between the K3, Protaper and step-back groups in terms of debris extrusion (P<.05). The step-back group had the highest mean debris weight, which was significantly different from the K3 and Protaper groups (P<.05). The lowest mean debris weight was in the K3 group, which was significantly different from the Protaper group (P<.05). Conclusions Based on the results, all instrumentation techniques produced debris extrusion. The engine-driven Ni-Ti systems extruded significantly less apical debris than the step-back technique. However, Protaper rotary instruments extruded significantly more debris than K3 rotary instruments. PMID:19212528
A multi-layer steganographic method based on audio time domain segmented and network steganography
NASA Astrophysics Data System (ADS)
Xue, Pengfei; Liu, Hanlin; Hu, Jingsong; Hu, Ronggui
2018-05-01
Both audio steganography and network steganography belong to modern steganography. Audio steganography has a large capacity, while network steganography is difficult to detect or track. In this paper, a multi-layer steganographic method based on the collaboration of the two (MLS-ATDSS&NS) is proposed. MLS-ATDSS&NS is realized in two covert layers (an audio steganography layer and a network steganography layer) by two steps. A new audio time domain segmented steganography (ATDSS) method is proposed in step 1, and the collaboration method of ATDSS and NS is proposed in step 2. The experimental results showed that the advantage of MLS-ATDSS&NS over other methods is a better trade-off between capacity, anti-detectability and robustness, i.e., higher steganographic capacity, better anti-detectability and stronger robustness.
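The abstract does not specify the ATDSS embedding rule, so the sketch below only illustrates the generic audio-layer idea it builds on: hiding message bits in the least significant bits of time-domain PCM samples. The sample values and message are arbitrary.

```python
def embed_bits(samples, bits):
    """Return a copy of `samples` with `bits` written into the LSBs."""
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # clear the LSB, then set it to the message bit
    return out

def extract_bits(samples, n):
    """Read back the first n hidden bits."""
    return [s & 1 for s in samples[:n]]

cover = [1000, -2000, 3001, 42, -7, 128, 255, 1024]   # 16-bit PCM-style samples
message = [1, 0, 1, 1, 0, 1]
stego = embed_bits(cover, message)
recovered = extract_bits(stego, len(message))
print(recovered)
```

Changing only the LSB keeps the waveform distortion below the audible threshold, which is why time-domain audio carriers offer large capacity.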
The effect of a novel minimally invasive strategy for infected necrotizing pancreatitis.
Tong, Zhihui; Shen, Xiao; Ke, Lu; Li, Gang; Zhou, Jing; Pan, Yiyuan; Li, Baiqiang; Yang, Dongliang; Li, Weiqin; Li, Jieshou
2017-11-01
The step-up approach, consisting of multiple minimally invasive techniques, has gradually become the mainstream for managing infected pancreatic necrosis (IPN). In the present study, we aimed to compare the safety and efficacy of a novel four-step approach and the conventional approach in managing IPN. According to the treatment strategy, consecutive patients fulfilling the inclusion criteria were assigned to two time intervals to conduct a before-and-after comparison: the conventional group (2010-2011) and the novel four-step group (2012-2013). The conventional approach was essentially open necrosectomy for any patient who failed percutaneous drainage of infected necrosis, while the novel approach consisted of four steps in sequence: percutaneous drainage, negative pressure irrigation, endoscopic necrosectomy and open necrosectomy. The primary endpoint was major complications (new-onset organ failure, sepsis or local complications, etc.). Secondary endpoints included mortality during hospitalization, need for emergency surgery, duration of organ failure and sepsis, etc. Of the 229 recruited patients, 92 were treated with the conventional approach and the remaining 137 were managed with the novel four-step approach. New-onset major complications occurred in 72 patients (78.3%) in the conventional group and 75 patients (54.7%) in the four-step group (p < 0.001). Although there was no statistical difference in mortality between the two groups (p = 0.403), significantly fewer patients in the four-step group required emergency surgery compared with the conventional group [14.6% (20/137) vs. 45.6% (42/92), p < 0.001]. In addition, stratified analysis revealed that the four-step approach presented a significantly lower incidence of new-onset organ failure and other major complications in patients with the most severe type of AP.
Compared with the conventional approach, the novel four-step approach significantly reduced the rate of new-onset major complications and the need for emergency operations in treating IPN, especially in patients with the most severe type of acute pancreatitis.
Particle simulation of Coulomb collisions: Comparing the methods of Takizuka and Abe and Nanbu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chiaming; Lin, Tungyou; Caflisch, Russel
2008-04-20
The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions: one developed by Takizuka and Abe in 1977, and the other by Nanbu in 1997. We perform deterministic and statistical error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time step errors. Error comparisons between the two methods are presented.
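In the Takizuka-Abe model, each collision pair is scattered through an angle sampled as tan(θ/2) = δ, where δ is Gaussian with variance proportional to the time step. The sketch below shows only that sampling step; the proportionality constant `nu`, which in the real model bundles density, charges and relative velocity, is an arbitrary placeholder here.

```python
import math, random

def sample_scattering_angle(nu, dt, rng):
    """Sample (cos θ, sin θ) for one Takizuka-Abe-style binary collision."""
    delta = rng.gauss(0.0, math.sqrt(nu * dt))   # tan(θ/2), variance ∝ dt
    cos_t = 1.0 - 2.0 * delta ** 2 / (1.0 + delta ** 2)
    sin_t = 2.0 * delta / (1.0 + delta ** 2)
    return cos_t, sin_t

# the variance of delta should scale linearly with the time step
rng = random.Random(0)
for dt in (1e-3, 1e-2):
    deltas = [rng.gauss(0.0, math.sqrt(0.5 * dt)) for _ in range(20000)]
    var = sum(d * d for d in deltas) / len(deltas)
    print(dt, var)   # var close to 0.5 * dt
```

The linear scaling of the angle variance with dt is exactly what makes the cumulative scattering consistent with the Coulomb collision frequency as the time step is refined.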
Steepest descent method implementation on unconstrained optimization problem using C++ program
NASA Astrophysics Data System (ADS)
Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.
2018-03-01
Steepest descent is known as the simplest gradient method. Recently, much research has been done on obtaining an appropriate step size in order to reduce the objective function value progressively. In this paper, the properties of the steepest descent method reported in the literature are reviewed, together with the advantages and disadvantages of each step size procedure. The development of the steepest descent method due to its step size procedure is discussed. In order to test the performance of each step size, we run a steepest descent procedure in a C++ program. We implemented it on an unconstrained optimization test problem with two variables, then compared the numerical results of each step size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure in each case of the problem.
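The paper's experiments are in C++; the sketch below shows the same idea in Python with one representative step-size rule, Armijo backtracking, on a made-up two-variable test function f(x, y) = (x - 1)² + 10(y + 2)². Both the function and the constants are illustrative, not taken from the paper.

```python
def f(p):
    x, y = p
    return (x - 1.0) ** 2 + 10.0 * (y + 2.0) ** 2

def grad(p):
    x, y = p
    return (2.0 * (x - 1.0), 20.0 * (y + 2.0))

def steepest_descent(p, tol=1e-8, max_iter=10_000):
    for _ in range(max_iter):
        g = grad(p)
        if g[0] ** 2 + g[1] ** 2 < tol ** 2:
            break
        step = 1.0
        # Armijo backtracking: halve the step until sufficient decrease holds
        while (f((p[0] - step * g[0], p[1] - step * g[1]))
               > f(p) - 1e-4 * step * (g[0] ** 2 + g[1] ** 2)):
            step *= 0.5
        p = (p[0] - step * g[0], p[1] - step * g[1])
    return p

sol = steepest_descent((0.0, 0.0))
print(sol)   # close to the minimizer (1, -2)
```

The backtracking rule is only one of the step-size procedures the paper compares; fixed steps and exact line search slot into the same loop by replacing the inner `while`.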
Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel
2017-01-01
Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of a few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work volume are measured from multiple locations with the TLS to determine the parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method, where the length between any pair of targets measured from multiple TLS positions is compared to determine the TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face/back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607
Ozden, V Emre; Dikmen, G; Beksac, B; Tozun, I Remzi
2017-06-01
The results of cementless stems in total hip arthroplasty (THA) performed for congenital dislocation with step-cut osteotomy are not well known, particularly the influence of stem design and the extent of porous coating. We therefore performed a retrospective study to evaluate the mid- to long-term results of THA performed with a single type of acetabular component and stems of different geometry and fixation type, with ceramic bearings, in the setting of step-cut subtrochanteric osteotomy in high hip dislocation (HHD) patients. We asked whether the stem type affects the outcomes in terms of (1) intra- and postoperative complication rates, (2) radiographic outcomes and (3) prosthesis survival in step-cut subtrochanteric shortening osteotomy. Our hypothesis was that the type of stem, whether cylindrical or tapered, does not affect the outcome if femoral canal fit and fill is obtained and the step-cut femoral shortening osteotomy is primarily fixed. Forty-five hips in 35 patients with a mean follow-up of 10 years (range, 7-14 years) were evaluated. The single type of cementless cup was placed at the level of the true acetabulum, a step-cut shortening femoral osteotomy was performed, and reconstruction used two different types of tapered stem in twenty-two hips (Synergy™ and Image™, proximally coated; Smith and Nephew, Memphis, TN, USA) and one type of cylindrical stem (Echelon™, 2/3 coated; Smith and Nephew, Memphis, TN, USA) in twenty-three hips. Harris hip scores (HHS) and University of California Los Angeles (UCLA) activity scores were calculated for all patients, and successive X-rays were evaluated for component loosening and osteolysis, along with complications related to the bearing, the step-cut osteotomy and the stem types. Forty-one hips (91%) had good or excellent clinical outcomes according to HHS. The mean UCLA activity score improved from 3.2±0.6 points (range, 2-4) preoperatively to 6.3±0.5 points (range, 5-7) at the latest follow-up.
The mean femoral shortening was 36±10 mm (range, 20-65 mm). Four (9%) dislocations were observed. There were five (11%) intra-operative femoral fractures and three (7%) cases of non-union, all of which were observed with tapered stems. Cylindrical stems achieved neutral alignment more consistently. With any stem revision as the end point, cylindrical stems had a higher survival rate (100%) than tapered stems (82%; 95% confidence interval [CI], 77-97%) at ten years. With any revision as the end point, the 10-year survival rates for the acetabular component (Reflection-Ceramic Interfit) and for the femoral components were 98% (95% CI, 85-99%) and 91% (95% CI, 78-97%), respectively. There were more implant-related complications in HHD patients undergoing THA when tapered stems with 1/3 proximal coating were used to reconstruct a step-cut osteotomized femur, compared to 2/3-coated cylindrical stems. IV, retrospective study. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Fineberg, Jeffrey D.; Ritter, David M.
2012-01-01
A-type voltage-gated K+ (Kv) channels self-regulate their activity by inactivating directly from the open state (open-state inactivation [OSI]) or by inactivating before they open (closed-state inactivation [CSI]). To determine the inactivation pathways, it is often necessary to apply several pulse protocols, pore blockers, single-channel recording, and kinetic modeling. However, intrinsic hurdles may preclude the standardized application of these methods. Here, we implemented a simple method inspired by earlier studies of Na+ channels to analyze macroscopic inactivation and conclusively deduce the pathways of inactivation of recombinant and native A-type Kv channels. We investigated two distinct A-type Kv channels expressed heterologously (Kv3.4 and Kv4.2 with accessory subunits) and their native counterparts in dorsal root ganglion and cerebellar granule neurons. This approach applies two conventional pulse protocols to examine inactivation induced by (a) a simple step (single-pulse inactivation) and (b) a conditioning step (double-pulse inactivation). Consistent with OSI, the rate of Kv3.4 inactivation (i.e., the negative first derivative of double-pulse inactivation) precisely superimposes on the profile of the Kv3.4 current evoked by a single pulse because the channels must open to inactivate. In contrast, the rate of Kv4.2 inactivation is asynchronous, already changing at earlier times relative to the profile of the Kv4.2 current evoked by a single pulse. Thus, Kv4.2 inactivation occurs uncoupled from channel opening, indicating CSI. Furthermore, the inactivation time constant versus voltage relation of Kv3.4 decreases monotonically with depolarization and levels off, whereas that of Kv4.2 exhibits a J-shape profile. We also manipulated the inactivation phenotype by changing the subunit composition and show how CSI and CSI combined with OSI might affect spiking properties in a full computational model of the hippocampal CA1 neuron. 
This work unambiguously elucidates contrasting inactivation pathways in neuronal A-type Kv channels and demonstrates how distinct pathways might impact neurophysiological activity. PMID:23109714
COMPARISON OF TWO METHODS FOR DETECTION OF GIARDIA CYSTS AND CRYPTOSPORIDIUM OOCYSTS IN WATER
The steps of two immunofluorescent-antibody-based detection methods were evaluated for their efficiencies in detecting Giardia cysts and Cryptosporidium oocysts. The two methods evaluated were the American Society for Testing and Materials proposed test method for Giardia cysts a...
Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows
NASA Astrophysics Data System (ADS)
Chen, Z.; Shu, C.; Tan, D.
2018-05-01
An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method, which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires less virtual memory, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, flexibility and accuracy of the present method.
Design of a Two-Step Calibration Method of Kinematic Parameters for Serial Robots
NASA Astrophysics Data System (ADS)
WANG, Wei; WANG, Lei; YUN, Chao
2017-03-01
Serial robots are used to handle workpieces with large dimensions, and calibrating kinematic parameters is one of the most efficient ways to upgrade their accuracy. Many models have been set up to investigate how many kinematic parameters can be identified to meet the minimality principle, but the base frame and the kinematic parameters are usually calibrated indistinguishably, in a single step. A two-step method of calibrating kinematic parameters is proposed to improve the accuracy of the robot's base frame and kinematic parameters. The forward kinematics, described with respect to the measuring coordinate frame, are established based on the product-of-exponentials (POE) formula. In the first step, the robot's base coordinate frame is calibrated using the unit quaternion form. The errors of both the robot's reference configuration and the base coordinate frame's pose are equivalently transformed to the zero-position errors of the robot's joints. The simplified model of the robot's positioning error is established as explicit second-order expressions. The identification model is then solved by the least-squares method, requiring only measured position coordinates. The complete subtask of calibrating the robot's 39 kinematic parameters is finished in the second step. A group of calibration experiments shows that the proposed two-step calibration method improves the average absolute positioning accuracy of industrial robots to 0.23 mm. This paper shows that the robot's base frame should be calibrated before its kinematic parameters in order to upgrade its absolute positioning accuracy.
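The second calibration step, identifying kinematic parameters by least squares from measured positions only, can be illustrated on a toy problem. The sketch below identifies just the two link lengths of a planar 2R arm; the real method uses the POE formula and 39 parameters, and all poses and "true" lengths here are made up.

```python
import math

def tip(theta1, theta2, l1, l2):
    """Forward kinematics of a planar 2R arm."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

true_l = (1.02, 0.57)   # "actual" link lengths, unknown to the identification
poses = [(0.1, 0.4), (0.8, -0.3), (1.2, 0.9), (-0.5, 1.1), (0.3, -1.0)]

# tip position is linear in (l1, l2): accumulate the 2x2 normal equations
S00 = S01 = S11 = r0 = r1 = 0.0
for t1, t2 in poses:
    x, y = tip(t1, t2, *true_l)           # "measured" position
    for a0, a1, m in ((math.cos(t1), math.cos(t1 + t2), x),
                      (math.sin(t1), math.sin(t1 + t2), y)):
        S00 += a0 * a0; S01 += a0 * a1; S11 += a1 * a1
        r0 += a0 * m; r1 += a1 * m

det = S00 * S11 - S01 * S01
l1_hat = (S11 * r0 - S01 * r1) / det
l2_hat = (S00 * r1 - S01 * r0) / det
print(l1_hat, l2_hat)   # recovers the true link lengths
```

With noisy measurements the same normal equations give the least-squares estimate, which is why position-only measurements suffice for this kind of identification.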
Patch-based frame interpolation for old films via the guidance of motion paths
NASA Astrophysics Data System (ADS)
Xia, Tianran; Ding, Youdong; Yu, Bing; Huang, Xi
2018-04-01
Due to improper preservation, traditional films often exhibit frame loss after digitization. To deal with this problem, this paper presents a new adaptive patch-based method of frame interpolation guided by motion paths. Our method is divided into three steps. Firstly, we compute motion paths between two reference frames using optical flow estimation. Then, adaptive bidirectional interpolation with hole filling is applied to generate pre-intermediate frames. Finally, patch matching is used to interpolate the intermediate frames from the most similar patches. Since the patch matching is based on pre-intermediate frames that carry the motion-path constraint, the method yields natural, artifact-free frame interpolation. We tested different types of old film sequences and compared with other methods; the results show that our method performs as desired, without hole or ghost effects.
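The bidirectional-interpolation step can be sketched in one dimension: each reference frame is warped toward the intermediate time along the motion path and the two warps are blended by temporal distance. Real optical flow is per-pixel and fractional; the constant integer displacement below is only for illustration.

```python
def shift(row, dx, fill=0):
    """1-D warp of a scanline by an integer displacement dx."""
    n = len(row)
    out = [fill] * n
    for i in range(n):
        j = i - dx
        if 0 <= j < n:
            out[i] = row[j]
    return out

def interpolate(frame0, frame1, flow_dx, t=0.5):
    """Blend frame0 warped forward by t*flow with frame1 warped back by (1-t)*flow."""
    fwd = shift(frame0, round(t * flow_dx))
    bwd = shift(frame1, -round((1 - t) * flow_dx))
    return [(1 - t) * a + t * b for a, b in zip(fwd, bwd)]

frame0 = [0, 0, 9, 0, 0, 0]
frame1 = shift(frame0, 2)            # the object moved 2 pixels to the right
mid = interpolate(frame0, frame1, 2)
print(mid)                           # the object sits 1 pixel right of frame0
```

When the two warps agree, as here, the blend is hole-free; the paper's patch-matching stage then repairs the regions where they disagree.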
Duan, Ran; Fu, Haoda
2015-08-30
Recurrent event data are an important data type for medical research. In particular, many safety endpoints are recurrent outcomes, such as hypoglycemic events. For such a situation, it is important to identify the factors causing these events and rank these factors by their importance. Traditional model selection methods are not able to provide variable importance in this context. Methods that are able to evaluate the variable importance, such as gradient boosting and random forest algorithms, cannot directly be applied to recurrent events data. In this paper, we propose a two-step method that enables us to evaluate the variable importance for recurrent events data. We evaluated the performance of our proposed method by simulations and applied it to a data set from a diabetes study. Copyright © 2015 John Wiley & Sons, Ltd.
Christensen, Ole F
2012-12-03
Single-step methods provide a coherent and conceptually simple approach to incorporate genomic information into genetic evaluations. An issue with single-step methods is compatibility between the marker-based relationship matrix for genotyped animals and the pedigree-based relationship matrix. Therefore, it is necessary to adjust the marker-based relationship matrix to the pedigree-based relationship matrix. Moreover, with data from routine evaluations, this adjustment should in principle be based on both observed marker genotypes and observed phenotypes, but until now this has been overlooked. In this paper, I propose a new method to address this issue by 1) adjusting the pedigree-based relationship matrix to be compatible with the marker-based relationship matrix instead of the reverse and 2) extending the single-step genetic evaluation using a joint likelihood of observed phenotypes and observed marker genotypes. The performance of this method is then evaluated using two simulated datasets. The method derived here is a single-step method in which the marker-based relationship matrix is constructed assuming all allele frequencies equal to 0.5 and the pedigree-based relationship matrix is constructed using the unusual assumption that animals in the base population are related and inbred with a relationship coefficient γ and an inbreeding coefficient γ / 2. Taken together, this γ parameter and a parameter that scales the marker-based relationship matrix can handle the issue of compatibility between marker-based and pedigree-based relationship matrices. The full log-likelihood function used for parameter inference contains two terms. The first term is the REML-log-likelihood for the phenotypes conditional on the observed marker genotypes, whereas the second term is the log-likelihood for the observed marker genotypes. 
Analyses of the two simulated datasets with this new method showed that 1) the parameters involved in adjusting marker-based and pedigree-based relationship matrices can depend on both observed phenotypes and observed marker genotypes and 2) a strong association between these two parameters exists. Finally, this method performed at least as well as a method based on adjusting the marker-based relationship matrix. Using the full log-likelihood and adjusting the pedigree-based relationship matrix to be compatible with the marker-based relationship matrix provides a new and interesting approach to handle the issue of compatibility between the two matrices in single-step genetic evaluation.
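The marker-based relationship matrix described above can be sketched directly: genotypes coded 0/1/2, all allele frequencies fixed at 0.5, so the centering constant is 2p = 1 and the scaling is m/2 for m markers (a VanRaden-style construction, assumed from the abstract's description; the γ adjustment of the pedigree matrix is not shown). The genotype data below are invented.

```python
def g_matrix(genotypes):
    """Marker-based relationship matrix with all allele frequencies = 0.5.

    genotypes: list of individuals, each a list of m codes in {0, 1, 2}.
    """
    m = len(genotypes[0])
    centered = [[g - 1.0 for g in row] for row in genotypes]  # 2p = 1 when p = 0.5
    scale = m / 2.0                                           # 2 * sum p(1-p) = m/2
    return [[sum(a * b for a, b in zip(ri, rj)) / scale
             for rj in centered] for ri in centered]

geno = [[0, 1, 2, 1],
        [2, 1, 0, 1],
        [1, 1, 1, 1]]
G = g_matrix(geno)
print(G)
```

Fixing p = 0.5 removes the dependence on estimated base-population allele frequencies; compatibility with the pedigree matrix is then handled entirely by the γ and scale parameters described in the abstract.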
Boehm, A B; Griffith, J; McGee, C; Edge, T A; Solo-Gabriele, H M; Whitman, R; Cao, Y; Getrich, M; Jay, J A; Ferguson, D; Goodwin, K D; Lee, C M; Madison, M; Weisberg, S B
2009-11-01
The absence of standardized methods for quantifying faecal indicator bacteria (FIB) in sand hinders comparison of results across studies. The purpose of the study was to compare methods for extraction of faecal bacteria from sands and recommend a standardized extraction technique. Twenty-two methods of extracting enterococci and Escherichia coli from sand were evaluated, including multiple permutations of hand shaking, mechanical shaking, blending, sonication, number of rinses, settling time, eluant-to-sand ratio, eluant composition, prefiltration and type of decantation. Tests were performed on sands from California, Florida and Lake Michigan. Most extraction parameters did not significantly affect bacterial enumeration. anova revealed significant effects of eluant composition and blending; with both sodium metaphosphate buffer and blending producing reduced counts. The simplest extraction method that produced the highest FIB recoveries consisted of 2 min of hand shaking in phosphate-buffered saline or deionized water, a 30-s settling time, one-rinse step and a 10 : 1 eluant volume to sand weight ratio. This result was consistent across the sand compositions tested in this study but could vary for other sand types. Method standardization will improve the understanding of how sands affect surface water quality.
Using an interference spectrum as a short-range absolute rangefinder with fiber and wideband source
NASA Astrophysics Data System (ADS)
Hsieh, Tsung-Han; Han, Pin
2018-06-01
Recently, a new type of displacement instrument using spectral interference has been developed, which utilizes fiber and a wideband light source to produce an interference spectrum. In this work, we develop a method that measures the absolute air-gap distance by taking the wavelengths at two interference spectrum minima. The experimental results agree with the theoretical calculations. The method is also utilized to produce and control the spectral switch, which is much easier than previous methods based on other control mechanisms. A scanning mode of this scheme for stepped surface measurement is suggested, which is verified by a standard thickness gauge test. Our scheme differs from one available on the market that may use a curve-fitting method, and some comparisons are made between the two.
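The two-minima idea can be sketched with the standard order-counting argument: if two adjacent minima of the air-gap spectrum fall at wavelengths λ₁ > λ₂ satisfying 2d = mλ₁ = (m+1)λ₂ (the exact phase convention depends on the setup, so this is an assumption), the absolute gap follows without knowing the order m. The example wavelengths are invented.

```python
def gap_from_minima(lam1, lam2):
    """Absolute air gap from two adjacent spectral minima, lam1 > lam2 (same units).

    From 2d = m*lam1 = (m+1)*lam2, eliminating the unknown order m gives
    d = lam1*lam2 / (2*(lam1 - lam2)).
    """
    return lam1 * lam2 / (2.0 * (lam1 - lam2))

# example: adjacent minima at 1550 nm and 1540 nm -> gap of about 119 um
d_nm = gap_from_minima(1550.0, 1540.0)
print(d_nm / 1e3, "um")
```

Because the order m cancels out, the measurement is absolute rather than incremental, which is the key advantage over fringe-counting displacement schemes.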
Zhang, Jie; Wei, Shimin; Ayres, David W; Smith, Harold T; Tse, Francis L S
2011-09-01
Although it is well known that automation can provide significant improvement in the efficiency of biological sample preparation in quantitative LC-MS/MS analysis, it has not been widely implemented in bioanalytical laboratories throughout the industry. This can be attributed to the lack of a sound strategy and practical procedures for working with robotic liquid-handling systems. Several comprehensive automation-assisted procedures for biological sample preparation and method validation were developed and qualified using two types of Hamilton Microlab liquid-handling robots. The procedures developed were generic, user-friendly and covered the majority of steps involved in routine sample preparation and method validation. Generic automation procedures were established as a practical approach to widely implementing automation in the routine bioanalysis of samples in support of drug-development programs.
Karasawa, N; Mitsutake, A; Takano, H
2017-12-01
Proteins implement their functionalities when folded into specific three-dimensional structures, and their functions are related to the protein structures and dynamics. Previously, we applied a relaxation mode analysis (RMA) method to protein systems; this method approximately estimates the slow relaxation modes and times via simulation and enables investigation of the dynamic properties underlying the protein structural fluctuations. Recently, two-step RMA with multiple evolution times has been proposed and applied to a slightly complex homopolymer system, i.e., a single [n]polycatenane. This method can be applied to more complex heteropolymer systems, i.e., protein systems, to estimate the relaxation modes and times more accurately. In two-step RMA, we first perform RMA and obtain rough estimates of the relaxation modes and times. Then, we apply RMA with multiple evolution times to a small number of the slowest relaxation modes obtained in the previous calculation. Herein, we apply this method to the results of principal component analysis (PCA). First, PCA is applied to a 2-μs molecular dynamics simulation of hen egg-white lysozyme in aqueous solution. Then, the two-step RMA method with multiple evolution times is applied to the obtained principal components. The slow relaxation modes and corresponding relaxation times for the principal components are much improved by the second RMA.
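The core of the relaxation-mode idea can be shown in one dimension: for an AR(1) process x_{t+1} = a·x_t + noise, the time-lagged autocovariance decays as a^lag, so a relaxation time can be estimated from C(lag)/C(0) at a chosen evolution (lag) time. RMA does exactly this mode-by-mode in many dimensions via a generalized eigenproblem; the parameters below are arbitrary.

```python
import math, random

rng = random.Random(1)
a = 0.9                           # true decay factor -> tau = -1/ln(0.9) ≈ 9.49 steps
x, xs = 0.0, []
for _ in range(200000):
    x = a * x + rng.gauss(0.0, 1.0)
    xs.append(x)

def autocov(xs, lag):
    """Sample time-lagged autocovariance of a (zero-mean) series."""
    n = len(xs) - lag
    return sum(xs[i] * xs[i + lag] for i in range(n)) / n

lag = 5                           # the "evolution time" of the estimate
tau_hat = -lag / math.log(autocov(xs, lag) / autocov(xs, 0))
print(tau_hat)                    # close to -1/ln(0.9) ≈ 9.49
```

Using a longer evolution time suppresses the contamination from fast modes, which is the motivation for the multiple-evolution-time refinement in the two-step RMA described above.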
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC), or even in simple FORTRAN codes. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch between the two steps is based on a water concentration of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-time-step method is used, as opposed to the one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at initial times, for smaller water concentrations. It gives the average chemical kinetic time as a function of the initial overall fuel-air ratio, initial water-to-fuel mass ratio, temperature and pressure. The second, instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx are obtained for Jet-A fuel and methane, with and without water injection, up to water mass loadings of 2:1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium with Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of the overall equivalence ratio, water-to-fuel mass ratio, pressure and temperature (T3).
The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), equivalence ratio, water to fuel mass ratio, and pressure.
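The switching logic of the two-time-step method can be sketched as below. Only the structure is from the abstract: the threshold of 1x10(exp -20) moles/cc and which variables feed each step. Both correlation functions here are hypothetical stand-ins, since the paper's fitted correlations are not reproduced in the abstract.

```python
WATER_SWITCH = 1e-20  # moles/cc, threshold quoted in the text

def tau_step1(phi, water_fuel_ratio, T, p):
    """HYPOTHETICAL stand-in for the initial time-averaged correlation."""
    return 1e-4 / (phi * (1.0 + water_fuel_ratio)) * (2000.0 / T)

def tau_step2(c_fuel, c_water, T, p):
    """HYPOTHETICAL stand-in for the instantaneous correlation."""
    return 1e-4 / (1.0 + 1e19 * c_water) * (2000.0 / T)

def chemical_time(c_water, **state):
    """Pick the time-averaged or instantaneous correlation by water concentration."""
    if c_water < WATER_SWITCH:
        return tau_step1(state["phi"], state["wfr"], state["T"], state["p"])
    return tau_step2(state["c_fuel"], c_water, state["T"], state["p"])

low = chemical_time(1e-21, phi=1.0, wfr=0.0, T=2000.0, p=10.0, c_fuel=1e-6)
high = chemical_time(1e-18, phi=1.0, wfr=0.0, T=2000.0, p=10.0, c_fuel=1e-6)
print(low, high)
```

In a combustor code, the returned chemical time would then be compared against the turbulent mixing time to pick the rate-limiting process, as described above.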
Chen, Chi-Kan
2017-07-26
The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes have been proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods based on neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell-cycle-regulated genes and E. coli DNA repair genes. The unstable estimation of the RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, which uses the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE_RNN, which uses the regularized RNN, on short simulated time series. When the networks derived by RE_RMLP-RNN using different numbers of latent nodes in step one are combined by a weighted majority voting rule to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series.
The framework of two-step algorithms can potentially incorporate different nonlinear differential equation models to reconstruct the GRN.
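The RNN gene-regulation model these methods fit is a standard sigmoidal ODE system, tau_i·dx_i/dt = sigmoid(Σ_j w_ij·x_j + b_i) − x_i, which can be sketched with a forward-Euler step. The two-gene weights below are arbitrary illustrations, not an inferred network.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def rnn_step(x, W, b, tau, dt):
    """One forward-Euler step of the sigmoidal RNN gene-regulation model."""
    return [xi + dt / tau[i] * (sigmoid(sum(W[i][j] * xj
                                            for j, xj in enumerate(x)) + b[i]) - xi)
            for i, xi in enumerate(x)]

W = [[0.0, -2.0],    # gene 2 represses gene 1
     [3.0, 0.0]]     # gene 1 activates gene 2
b = [1.0, -1.5]
tau = [1.0, 1.0]

x = [0.2, 0.2]
for _ in range(2000):
    x = rnn_step(x, W, b, tau, 0.01)
print(x)   # steady state of the two-gene circuit
```

Reconstruction then amounts to choosing W so that trajectories of this system reproduce the measured expression time series, which is what the PSO-optimized RNN does in the network construction step.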
Integrating ethics in health technology assessment: many ways to Rome.
Hofmann, Björn; Oortwijn, Wija; Bakke Lysdahl, Kristin; Refolo, Pietro; Sacchini, Dario; van der Wilt, Gert Jan; Gerhardus, Ansgar
2015-01-01
The aim of this study was to identify and discuss appropriate approaches to integrate ethical inquiry in health technology assessment (HTA). The key question is how ethics can be integrated in HTA. This is addressed in two steps: by investigating what it means to integrate ethics in HTA, and by assessing how suitable the various methods in ethics are to be integrated in HTA according to these meanings of integration. In the first step, we found that integrating ethics can mean that ethics is (a) subsumed under or (b) combined with other parts of the HTA process; that it can be (c) coordinated with other parts; or that (d) ethics actively interacts and changes other parts of the HTA process. For the second step, we found that the various methods in ethics have different merits with respect to the four conceptions of integration in HTA. Traditional approaches in moral philosophy tend to be most suited to be subsumed or combined, while processual approaches being close to the HTA or implementation process appear to be most suited to coordinated and interactive types of integration. The article provides a guide for choosing the ethics approach that appears most appropriate for the goals and process of a particular HTA.
NASA Astrophysics Data System (ADS)
Kim, Jae-Hun; Mirzaei, Ali; Kim, Hyoun Woo; Kim, Sang Sub
2018-05-01
Stainless steels are among the most common engineering materials and are used extensively in humid areas. It is therefore important that these materials be robust to humidity and corrosion. This paper reports the fabrication of superhydrophobic surfaces from austenitic stainless steel (type AISI 304) using a facile two-step chemical etching method. In the first step, the stainless steel plates were etched in an HF solution, followed by a fluorination process, after which they showed a water contact angle (WCA) of 166° and a sliding angle of 5° under the optimal conditions. To further enhance the superhydrophobicity, in the second step, they were dipped in a 0.1 wt.% NaCl solution at 100 °C, which increased the WCA to 168° and decreased the sliding angle to ∼2°. The long-term durability of the fabricated superhydrophobic samples over 1 month of storage in air and water was investigated. The potential applicability of the fabricated samples was demonstrated by the excellent superhydrophobicity retained after 1 month. In addition, the self-cleaning properties of the fabricated superhydrophobic surface were also demonstrated. This paper outlines a facile, low-cost and scalable chemical etching method that can be adopted easily for large-scale purposes.
[Application of ordinary Kriging method in entomologic ecology].
Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong
2003-01-01
Geostatistics is a statistical method based on regionalized variables that uses the variogram as a tool to analyze the spatial structure and patterns of organisms. In simulating the variogram over a large range, an optimal simulation cannot always be obtained, but an interactive (human-computer dialogue) simulation method can be used to optimize the parameters of the spherical models. In this paper, this method and weighted polynomial regression were used to fit the one-step spherical model, the two-step spherical model and the linear function model, and the available nearby samples were used in the ordinary kriging procedure, which provides the best linear unbiased estimate under the unbiasedness constraint. The sums of squared deviations between the estimated and measured values for the various theoretical models were computed, and the corresponding graphs are shown. The results showed that the simulation based on the two-step spherical model was the best, and that the one-step spherical model was better than the linear function model.
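The spherical variogram model and the ordinary kriging estimate can be sketched in a few lines. This is a generic implementation, not the dialogue-based parameter optimization or weighted polynomial regression used in the paper, and the sample data are invented.

```python
import numpy as np

def spherical_variogram(h, nugget, sill, rang):
    """One-step spherical model: rises to the sill at the range, flat beyond."""
    h = np.asarray(h, dtype=float)
    g = np.where(
        h < rang,
        nugget + (sill - nugget) * (1.5 * h / rang - 0.5 * (h / rang) ** 3),
        sill,
    )
    return np.where(h == 0, 0.0, g)

def ordinary_kriging(coords, values, target, model):
    """Best linear unbiased estimate at `target` under the unbiasedness constraint."""
    n = len(values)
    # Kriging system: variogram among samples, plus a Lagrange row/column
    # enforcing that the weights sum to one.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = model(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = model(np.linalg.norm(coords - target, axis=-1))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ values)

# Toy data: three sample locations with made-up insect counts.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
counts = np.array([12.0, 8.0, 10.0])
model = lambda h: spherical_variogram(h, nugget=0.0, sill=1.0, rang=2.0)
est = ordinary_kriging(coords, counts, np.array([0.3, 0.3]), model)
```

With a zero nugget the kriging predictor interpolates the data exactly, and the unbiasedness constraint makes it reproduce any constant field.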
Methods for the continuous production of plastic scintillator materials
Bross, Alan; Pla-Dalmau, Anna; Mellott, Kerry
1999-10-19
Methods for producing plastic scintillating material employing either two major steps (tumble-mix) or a single major step (inline-coloring or inline-doping). Using the two-step method, the polymer pellets are mixed with silicone oil, and the mixture is then tumble mixed with the dopants necessary to yield the proper response from the scintillator material. The mixture is then placed in a compounder and compounded in an inert gas atmosphere. The resultant scintillator material is then extruded and pelletized or formed. When only a single step is employed, the polymer pellets and dopants are metered into an inline-coloring extruding system. The mixture is then processed under an inert gas atmosphere, usually argon or nitrogen, to form plastic scintillator material in the form of either scintillator pellets, for subsequent processing, or as material in the direct formation of the final scintillator shape or form.
Insights: A New Method to Balance Chemical Equations.
ERIC Educational Resources Information Center
Garcia, Arcesio
1987-01-01
Describes a method designed to balance oxidation-reduction chemical equations. Outlines a method which is based on changes in the oxidation number that can be applied to both molecular reactions and ionic reactions. Provides examples and delineates the steps to follow for each type of equation balancing. (TW)
Tavakoli, Behnoosh; Zhu, Quing
2013-01-01
Ultrasound-guided diffuse optical tomography (DOT) is a promising method for characterizing malignant and benign lesions in the female breast. We introduce a new two-step algorithm for DOT inversion in which the optical parameters are estimated with a global optimization method, the genetic algorithm. The estimation result is applied as an initial guess to the conjugate gradient (CG) optimization method to obtain the absorption and scattering distributions simultaneously. Simulations and phantom experiments have shown that the maximum absorption and reduced scattering coefficients are reconstructed with less than 10% and 25% errors, respectively. This is in contrast with the CG method alone, which generates about 20% error for the absorption coefficient and does not accurately recover the scattering distribution. A new measure of scattering contrast has been introduced to characterize benign and malignant breast lesions. The results of 16 clinical cases reconstructed with the two-step method demonstrate that, on average, the absorption coefficient and scattering contrast of malignant lesions are about 1.8 and 3.32 times higher than those of the benign cases, respectively.
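The global-then-local two-step inversion strategy can be sketched with SciPy on a toy problem. In this hedged sketch, differential evolution stands in for the genetic algorithm, and the "forward model" is an invented nonlinear function of two parameters labeled like absorption and reduced scattering; it is not a diffuse-optics model.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Toy "forward model": three synthetic measurements depending nonlinearly on
# (mu_a, mu_s'); values are illustrative, not tissue optics.
true_params = np.array([0.08, 9.0])

def forward(p):
    mu_a, mu_s = p
    return np.array([mu_a * mu_s, mu_a + 0.1 * mu_s, np.sqrt(mu_a * mu_s)])

data = forward(true_params)

def misfit(p):
    return float(np.sum((forward(p) - data) ** 2))

# Step 1: global search over physical bounds (stand-in for the genetic algorithm).
bounds = [(0.01, 0.3), (1.0, 20.0)]
coarse = differential_evolution(misfit, bounds, seed=0, tol=1e-10)

# Step 2: conjugate-gradient refinement seeded by the global estimate.
fine = minimize(misfit, coarse.x, method="CG")
```

The point of the two steps is that CG alone, started far from the solution, can stall in a poor local minimum, while the global search supplies a reliable initial guess.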
A Comparative Evaluation between Cheiloscopic Patterns and Terminal Planes in Primary Dentition
Vignesh, R; Rekha, C Vishnu; Annamalai, Sankar; Norouzi, Parisa; Sharmin, Ditto
2017-01-01
Objective: To assess the correlation between different cheiloscopic patterns with the terminal planes in deciduous dentition. Materials and Methods: Three hundred children who are 3–6 years old with complete primary dentition were recruited, and the pattern of molar terminal plane was recorded in the pro forma. Lip prints of these children were recorded with lipstick-cellophane method, and the middle 10 mm of lower lip was analyzed for the lip print pattern as suggested by Sivapathasundharam et al. The pattern was classified based on Tsuchihashi and Suzuki classification. Results: Type II (branched) pattern was the most predominant cheiloscopic pattern. The predominant patterns which related to the terminal planes were as follows: Type IV (reticular) and Type V (irregular) pattern for mesial step, Type IV (reticular) pattern for distal step, and Type I (complete vertical) pattern for flush terminal plane. No significant relationship was obtained on gender comparison. Conclusion: Lip prints can provide an alternative to dermatoglyphics to predict the terminal plane in primary dentition. Further studies with larger sample size are required to provide an insight into its significant correlations. PMID:29326500
Design and fabrication of realistic adhesively bonded joints
NASA Technical Reports Server (NTRS)
Shyprykevich, P.
1983-01-01
Eighteen bonded joint test specimens representing three different designs of a composite wing chordwise bonded splice were designed and fabricated using current aircraft industry practices. Three types of joints (full wing laminate penetration, two side stepped; midthickness penetration, one side stepped; and partial penetration, scarfed) were analyzed using state of the art elastic joint analysis modified for plastic behavior of the adhesive. The static tensile fail load at room temperature was predicted to be: (1) 1026 kN/m (5860 lb/in) for the two side stepped joint; (2) 925 kN/m (5287 lb/in) for the one side stepped joint; and (3) 1330 kN/m (7600 lb/in) for the scarfed joint. All joints were designed to fail in the adhesive.
Finn, John M.
2015-03-01
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure-preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
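The implicit midpoint scheme discussed above can be sketched in a few lines. This toy version resolves the implicit equation by fixed-point iteration and uses a planar rigid rotation as a stand-in divergence-free field; the invariant circle it preserves plays the role of a KAM torus. None of this reproduces the paper's 3D field-line integrations.

```python
import numpy as np

def implicit_midpoint_step(f, x, h, iters=50):
    """One implicit midpoint step: x' = x + h * f((x + x') / 2).

    The implicit equation is solved by fixed-point iteration, which converges
    for sufficiently small h; production codes would use Newton's method.
    """
    x_new = x + h * f(x)          # explicit Euler predictor
    for _ in range(iters):
        x_new = x + h * f(0.5 * (x + x_new))
    return x_new

# Divergence-free test field: rigid rotation, whose orbits are circles.
def rot(x):
    return np.array([-x[1], x[0]])

x = np.array([1.0, 0.0])
for _ in range(1000):
    x = implicit_midpoint_step(rot, x, 0.05)
```

For this linear field the implicit midpoint map is a Cayley transform, so the orbit radius is preserved to machine precision even over many steps.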
Landsman, V; Lou, W Y W; Graubard, B I
2015-05-20
We present a two-step approach for estimating hazard rates and, consequently, survival probabilities, by levels of general categorical exposure. The resulting estimator utilizes three sources of data: vital statistics data and census data are used at the first step to estimate the overall hazard rate for a given combination of gender and age group, and cohort data, constructed from a nationally representative complex survey with linked mortality records, are used at the second step to divide the overall hazard rate by exposure levels. We present an explicit expression for the resulting estimator and consider two methods for variance estimation that account for complex multistage sample design: (1) the leaving-one-out jackknife method, and (2) the Taylor linearization method, which provides an analytic formula for the variance estimator. The methods are illustrated with smoking and all-cause mortality data from the US National Health Interview Survey Linked Mortality Files, and the proposed estimator is compared with a previously studied crude hazard rate estimator that uses survey data only. The advantages of the two-step approach and possible extensions of the proposed estimator are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
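The leaving-one-out jackknife mentioned above can be sketched generically. This toy version ignores the complex multistage survey design (where replicates are formed by deleting PSUs, not individual rows) and applies the plain jackknife to a crude events-per-person-time rate on made-up data.

```python
import numpy as np

def jackknife_variance(estimator, data):
    """Leave-one-out jackknife variance of a statistic computed on `data` rows."""
    n = len(data)
    theta_i = np.array(
        [estimator(np.delete(data, i, axis=0)) for i in range(n)]
    )
    theta_bar = theta_i.mean()
    return (n - 1) / n * np.sum((theta_i - theta_bar) ** 2)

# Illustrative crude hazard rate: events per unit person-time.
# Column 0: event indicator; column 1: follow-up time (made-up numbers).
data = np.array([[1, 2.0], [0, 5.0], [1, 1.5], [0, 4.0], [1, 3.0], [0, 6.0]])

def crude_hazard(d):
    return d[:, 0].sum() / d[:, 1].sum()

var_jack = jackknife_variance(crude_hazard, data)
```

The same wrapper works for any scalar estimator, which is why the jackknife pairs naturally with estimators too complicated for closed-form variance formulas.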
Flexible biodegradable citrate-based polymeric step-index optical fiber.
Shan, Dingying; Zhang, Chenji; Kalaba, Surge; Mehta, Nikhil; Kim, Gloria B; Liu, Zhiwen; Yang, Jian
2017-10-01
Implanting fiber optical waveguides into tissue or organs for light delivery and collection is among the most effective ways to overcome the issue of tissue turbidity, a long-standing obstacle for biomedical optical technologies. Here, we report a citrate-based material platform with engineerable opto-mechano-biological properties and demonstrate a new type of biodegradable, biocompatible, and low-loss step-index optical fiber for organ-scale light delivery and collection. By leveraging the rich designability and processability of citrate-based biodegradable polymers, two exemplary biodegradable elastomers with a fine refractive index difference and yet matched mechanical properties and biodegradation profiles were developed. Furthermore, we developed a two-step fabrication method to fabricate flexible and low-loss (0.4 dB/cm) optical fibers, and performed systematic characterizations to study their optical, spectroscopic, mechanical, and biodegradation properties. In addition, we demonstrated the proof of concept of image transmission through the citrate-based polymeric optical fibers and conducted in vivo deep tissue light delivery and fluorescence sensing in a Sprague-Dawley (SD) rat, laying the groundwork for future implantable devices for long-term implantation where deep-tissue light delivery, sensing and imaging are desired, such as cell, tissue, and scaffold imaging in regenerative medicine and in vivo optogenetic stimulation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Adiabatic tapered optical fiber fabrication in two step etching
NASA Astrophysics Data System (ADS)
Chenari, Z.; Latifi, H.; Ghamari, S.; Hashemi, R. S.; Doroodmand, F.
2016-01-01
A two-step etching method using HF acid and buffered HF is proposed to fabricate adiabatic biconical optical fiber tapers. Because the etching rate in the second step is almost 3 times slower than in the previous droplet etching method, terminating the fabrication process is controllable enough to achieve the desired fiber diameter. By monitoring the transmitted spectrum, the final diameter and adiabaticity of the tapers are deduced. Tapers with losses of about 0.3 dB in air and 4.2 dB in water are produced. A biconical fiber taper fabricated using this method is used to excite whispering gallery modes (WGMs) on a microsphere surface in an aquatic environment, making these tapers suitable for applications such as WGM biosensors.
Chemical-free n-type and p-type multilayer-graphene transistors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dissanayake, D. M. N. M., E-mail: nandithad@voxtel-inc.com; Eisaman, M. D.; Department of Electrical and Computer Engineering, Stony Brook University, Stony Brook, New York 11794
A single-step doping method to fabricate n- and p-type multilayer graphene (MG) top-gate field effect transistors (GFETs) is demonstrated. The transistors are fabricated on soda-lime glass substrates, with the n-type doping of MG caused by the sodium in the substrate without the addition of external chemicals. Placing a hydrogen silsesquioxane (HSQ) barrier layer between the MG and the substrate blocks the n-doping, resulting in p-type doping of the MG above regions patterned with HSQ. The HSQ is deposited in a single fabrication step using electron beam lithography, allowing the patterning of arbitrary sub-micron spatial patterns of n- and p-type doping. When a MG channel is deposited partially on the barrier and partially on the glass substrate, a p-type and n-type doping profile is created, which is used for fabricating complementary transistor pairs. Unlike chemically doped GFETs in which the external dopants are typically introduced from the top, these substrate-doped GFETs allow for a top gate which gives a stronger electrostatic coupling to the channel, reducing the operating gate bias. Overall, this method enables scalable fabrication of n- and p-type complementary top-gated GFETs with high spatial resolution for graphene microelectronic applications.
Vail, W.B. III.
1991-12-24
Methods of operation are described for an apparatus having at least two pairs of voltage measurement electrodes vertically disposed in a cased well to measure the resistivity of adjacent geological formations from inside the cased well. During stationary measurements with the apparatus at a fixed vertical depth within the cased well, the invention herein discloses methods of operation which include a measurement step and subsequent first and second compensation steps respectively resulting in improved accuracy of measurement. The invention also discloses multiple frequency methods of operation resulting in improved accuracy of measurement while the apparatus is simultaneously moved vertically in the cased well. The multiple frequency methods of operation disclose a first A.C. current having a first frequency that is conducted from the casing into formation and a second A.C. current having a second frequency that is conducted along the casing. The multiple frequency methods of operation simultaneously provide the measurement step and two compensation steps necessary to acquire accurate results while the apparatus is moved vertically in the cased well. 6 figures.
Vail, III, William B.
1991-01-01
Methods of operation of an apparatus having at least two pairs of voltage measurement electrodes vertically disposed in a cased well to measure the resistivity of adjacent geological formations from inside the cased well. During stationary measurements with the apparatus at a fixed vertical depth within the cased well, the invention herein discloses methods of operation which include a measurement step and subsequent first and second compensation steps respectively resulting in improved accuracy of measurement. The invention also discloses multiple frequency methods of operation resulting in improved accuracy of measurement while the apparatus is simultaneously moved vertically in the cased well. The multiple frequency methods of operation disclose a first A.C. current having a first frequency that is conducted from the casing into formation and a second A.C. current having a second frequency that is conducted along the casing. The multiple frequency methods of operation simultaneously provide the measurement step and two compensation steps necessary to acquire accurate results while the apparatus is moved vertically in the cased well.
Methods to approximate reliabilities in single-step genomic evaluation
USDA-ARS?s Scientific Manuscript database
Reliability of predictions from single-step genomic BLUP (ssGBLUP) can be calculated by inversion, but that is not feasible for large data sets. Two methods of approximating reliability were developed based on decomposition of a function of reliability into contributions from records, pedigrees, and...
Optimisation of olive oil phenol extraction conditions using a high-power probe ultrasonication.
Jerman Klen, T; Mozetič Vodopivec, B
2012-10-15
A new method of ultrasound probe assisted liquid-liquid extraction (US-LLE) combined with a freeze-based fat precipitation clean-up and HPLC-DAD-FLD-MS detection is described for extra virgin olive oil (EVOO) phenol analysis. Three extraction variables (solvent type: 100%, 80%, 50% methanol; sonication time: 5, 10, 20 min; extraction steps: 1-5) and two clean-up methods (n-hexane washing vs. low-temperature fat precipitation) were studied and optimised with the aim of maximising the extracts' phenol recoveries. A three-step extraction of 10 min with pure methanol (5 mL) resulted in the highest phenol content of freeze-based defatted extracts (667 μg GAE g(-1)) from 10 g of EVOO, providing much higher efficiency (up to 68%) and repeatability (up to 51%) than its non-sonicated counterpart (LLE-agitation) and n-hexane washing. In addition, the overall method provided high linearity (r(2)≥0.97), precision (RSD: 0.4-9.3%) and sensitivity, with LODs and LOQs ranging from 0.03 to 0.16 μg g(-1) and from 0.10 to 0.51 μg g(-1) of EVOO, respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.
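Detection and quantification limits like those reported above are conventionally derived from calibration statistics. A hedged sketch of the standard LOD = 3.3σ/slope and LOQ = 10σ/slope calculation, using an entirely made-up calibration series (the paper does not state which convention it used):

```python
import numpy as np

# Made-up calibration data: standard concentrations (μg g^-1) and responses.
conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
resp = np.array([1.0, 5.2, 10.1, 19.8, 50.3])

# Fit the calibration line and take sigma as the residual standard deviation
# (n - 2 degrees of freedom for a two-parameter line).
slope, intercept = np.polyfit(conc, resp, 1)
resid = resp - (slope * conc + intercept)
sigma = np.sqrt(np.sum(resid**2) / (len(conc) - 2))

lod = 3.3 * sigma / slope   # limit of detection
loq = 10 * sigma / slope    # limit of quantification
```

By construction the LOQ is always 10/3.3 ≈ 3 times the LOD under this convention.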
The Multigrid-Mask Numerical Method for Solution of Incompressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Ku, Hwar-Ching; Popel, Aleksander S.
1996-01-01
A multigrid-mask method for solution of the incompressible Navier-Stokes equations in primitive variable form has been developed. The main objective is to apply this method in conjunction with the pseudospectral element method to solve flow past multiple objects. There are two key steps involved in calculating flow past multiple objects. The first step utilizes only Cartesian grid points. This homogeneous or mask-method step permits flow into the interior rectangular elements contained in objects, but with the restriction that the velocity for those Cartesian elements within and on the surface of an object should be small or zero. This step easily produces an approximate flow field on Cartesian grid points covering the entire flow field. The second or heterogeneous step corrects the approximate flow field to account for the actual shape of the objects by solving the flow field based on the local coordinates surrounding each object and adapted to it. The noise occurring in data communication between the global (low frequency) coordinates and the local (high frequency) coordinates is eliminated by the multigrid method when the Schwarz Alternating Procedure (SAP) is implemented. Two-dimensional flow past circular and elliptic cylinders is presented to demonstrate the versatility of the proposed method. An interesting phenomenon is observed: when a second elliptic cylinder is placed in the wake of the first, a traction force results in a negative drag coefficient.
Null test fourier domain alignment technique for phase-shifting point diffraction interferometer
Naulleau, Patrick; Goldberg, Kenneth Alan
2000-01-01
Alignment technique for calibrating a phase-shifting point diffraction interferometer involves three independent steps, where the first two steps independently align the image points and pinholes in rotation and separation to a fixed reference coordinate system, e.g., a CCD. Once the two sub-elements have been properly aligned to the reference in two parameters (separation and orientation), the third step is to align the two sub-element coordinate systems to each other in the two remaining parameters (x,y) using standard methods of locating the pinholes relative to some easy-to-find reference point.
Statistical models for detecting differential chromatin interactions mediated by a protein.
Niu, Liang; Li, Guoliang; Lin, Shili
2014-01-01
Chromatin interactions mediated by a protein of interest are of great scientific interest. Recent studies show that protein-mediated chromatin interactions can have different intensities in different types of cells or in different developmental stages of a cell. Such differences can be associated with a disease or with the development of a cell. Thus, it is of great importance to detect protein-mediated chromatin interactions with different intensities in different cells. A recent molecular technique, Chromatin Interaction Analysis by Paired-End Tag Sequencing (ChIA-PET), which uses formaldehyde cross-linking and paired-end sequencing, is able to detect genome-wide chromatin interactions mediated by a protein of interest. Here we propose two models (One-Step Model and Two-Step Model) for two-sample ChIA-PET count data (one biological replicate in each sample) to identify differential chromatin interactions mediated by a protein of interest. Both models incorporate the data dependency and the extent to which a fragment pair is related to a pair of DNA loci of interest to make accurate identifications. The One-Step Model makes use of the data more efficiently but is more computationally intensive. An extensive simulation study showed that the models can detect differentially interacting chromatin regions, with good agreement between each classification result and the truth. Application of the method to a two-sample ChIA-PET data set illustrates its utility. The two models are implemented as an R package MDM (available at http://www.stat.osu.edu/~statgen/SOFTWARE/MDM).
Chen, Jian-bo; Sun, Su-qin; Zhou, Qun
2015-07-01
The nondestructive and label-free infrared (IR) spectroscopy is a direct tool to characterize the spatial distribution of organic and inorganic compounds in plants. Since plant samples are usually complex mixtures, signal-resolving methods are necessary to find the spectral features of compounds of interest in the signal-overlapped IR spectra. In this research, two approaches using existing data-driven signal-resolving methods are proposed to interpret the IR spectra of plant samples. If the number of spectra is small, "tri-step identification" can enhance the spectral resolution to separate and identify the overlapped bands. First, the envelope bands of the original spectrum are interpreted according to the spectra-structure correlations. Then the spectrum is differentiated to resolve the underlying peaks in each envelope band. Finally, two-dimensional correlation spectroscopy is used to enhance the spectral resolution further. For a large number of spectra, "tri-step decomposition" can resolve the spectra by multivariate methods to obtain structural and semi-quantitative information about the chemical components. Principal component analysis is used first to explore the existing signal types without any prior knowledge. Then the spectra are decomposed by self-modeling curve resolution methods to estimate the spectra and contents of significant chemical components. Finally, targeted methods such as partial least squares target can explore the content profiles of specific components sensitively. As an example, the macroscopic and microscopic distribution of eugenol and calcium oxalate in the bud of clove is studied.
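The exploratory PCA step of "tri-step decomposition" can be sketched on synthetic spectra. In this hedged example, two Gaussian band shapes stand in for the pure-component IR bands (loosely labeled after the eugenol and calcium oxalate bands in the study; all positions and numbers are invented), and PCA recovers that two components explain essentially all of the variance.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic "IR spectra": mixtures of two pure-component band shapes.
wn = np.linspace(600, 1800, 300)               # wavenumber axis, cm^-1
comp1 = np.exp(-((wn - 1515) / 20) ** 2)       # aromatic-type band (stand-in)
comp2 = np.exp(-((wn - 780) / 15) ** 2)        # oxalate-type band (stand-in)

rng = np.random.default_rng(1)
c = rng.random((50, 2))                        # 50 spots, 2 random concentrations
spectra = c @ np.vstack([comp1, comp2])
spectra += 0.001 * rng.standard_normal(spectra.shape)  # instrument noise

# Exploratory step: how many signal types are present?
pca = PCA(n_components=5).fit(spectra)
```

In practice one inspects `pca.explained_variance_ratio_` for the elbow, then passes the chosen rank to a self-modeling curve resolution method for the next step.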
A computational method for sharp interface advection
Bredmose, Henrik; Jasak, Hrvoje
2016-01-01
We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619
Fekete, Attila; Komáromi, István
2016-12-07
A proteolytic reaction of papain with a simple peptide model substrate, N-methylacetamide, has been studied. Our aim was twofold: (i) we proposed a plausible reaction mechanism with the aid of potential energy surface scans and second geometrical derivatives calculated at the stationary points, and (ii) we investigated the applicability of dispersion-corrected density functional methods in comparison with the popular hybrid generalized gradient approximation (GGA) method (B3LYP) without such a correction in the QM/MM calculations for this particular problem. In the resting state of papain the ion-pair and neutral forms of the Cys-His catalytic dyad have approximately the same energy and are separated by only a small barrier. Zero-point vibrational energy correction shifted this equilibrium slightly toward the neutral form. On the other hand, the electrostatic solvation free energy corrections, calculated using the Poisson-Boltzmann method for structures sampled from molecular dynamics simulation trajectories, resulted in a more stable ion-pair form. All methods we applied predicted an acylation process of at least two elementary steps via a zwitterionic tetrahedral intermediate. Using dispersion-corrected DFT methods, the thioester S-C bond formation and the proton transfer from histidine occur in the same elementary step, although not synchronously. The proton transfer lags behind (or at least does not precede) the S-C bond formation. The predicted transition state corresponds mainly to the S-C bond formation while the proton is still on the histidine Nδ atom. In contrast, the B3LYP method using larger basis sets predicts a transition state in which the S-C bond is almost fully formed and which is mainly characterized by the Nδ(histidine) to N(amide) proton transfer. Considerably lower activation energy was predicted (especially by the B3LYP method) for the next, amide bond breaking, elementary step of acyl-enzyme formation.
Deacylation appeared to be a single elementary step process in all the methods we applied.
Du, Yuncheng; Budman, Hector M; Duever, Thomas A
2017-06-01
Accurate and fast quantitative analysis of living cells from fluorescence microscopy images is useful for evaluating experimental outcomes and cell culture protocols. An algorithm is developed in this work to automatically segment and distinguish apoptotic cells from normal cells. The algorithm involves three steps, consisting of two segmentation steps and a classification step. The segmentation steps are: (i) a coarse segmentation, combining a range filter with a marching squares method, used as a prefiltering step to provide the approximate positions of cells within the two-dimensional matrix that stores the cell images, along with a count of the cells in a given image; and (ii) a fine segmentation step using the Active Contours Without Edges method, applied to the boundaries of cells identified in the coarse segmentation step. Although this basic two-step approach provides accurate edges when the cells in a given image are sparsely distributed, the occurrence of clusters of cells in high cell density samples requires further processing. Hence, a novel algorithm for clusters is developed to identify the edges of cells within clusters and to approximate their morphological features. Based on the segmentation results, a support vector machine classifier that uses three morphological features (the mean value of pixel intensities in the cellular regions, the variance of pixel intensities in the vicinity of cell boundaries, and the lengths of the boundaries) is developed for distinguishing apoptotic cells from normal cells. The algorithm is shown to be efficient in terms of computational time, quantitative analysis, and differentiation accuracy, compared with the use of the active contours method without the proposed preliminary coarse segmentation step.
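The classification step can be sketched with scikit-learn's SVC on the same three morphological features the abstract names. The feature distributions below are invented for illustration, not taken from the study, and real use would of course evaluate on held-out data rather than the training set.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Synthetic per-cell features: mean interior intensity, variance of boundary
# intensities, boundary length (arbitrary units, made-up distributions).
def sample_cells(n, mean_int, boundary_var, boundary_len):
    return np.column_stack([
        rng.normal(mean_int, 5, n),
        rng.normal(boundary_var, 3, n),
        rng.normal(boundary_len, 8, n),
    ])

normal = sample_cells(100, 120, 10, 90)       # label 0
apoptotic = sample_cells(100, 80, 35, 60)     # label 1

X = np.vstack([normal, apoptotic])
y = np.array([0] * 100 + [1] * 100)

# RBF-kernel SVM on the three features.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
acc = clf.score(X, y)
```

With class distributions this well separated the classifier is essentially perfect; the hard part in practice is producing reliable features from the segmentation, not the SVM itself.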
The development and plasticity of alveolar type 1 cells
Yang, Jun; Hernandez, Belinda J.; Martinez Alanis, Denise; Narvaez del Pilar, Odemaris; Vila-Ellis, Lisandra; Akiyama, Haruhiko; Evans, Scott E.; Ostrin, Edwin J.; Chen, Jichao
2016-01-01
Alveolar type 1 (AT1) cells cover >95% of the gas exchange surface and are extremely thin to facilitate passive gas diffusion. The development of these highly specialized cells and its coordination with the formation of the honeycomb-like alveolar structure are poorly understood. Using new marker-based stereology and single-cell imaging methods, we show that AT1 cells in the mouse lung form expansive thin cellular extensions via a non-proliferative two-step process while retaining cellular plasticity. In the flattening step, AT1 cells undergo molecular specification and remodel cell junctions while remaining connected to their epithelial neighbors. In the folding step, AT1 cells increase in size by more than 10-fold and undergo cellular morphogenesis that matches capillary and secondary septa formation, resulting in a single AT1 cell spanning multiple alveoli. Furthermore, AT1 cells are an unexpected source of VEGFA and their normal development is required for alveolar angiogenesis. Notably, a majority of AT1 cells proliferate upon ectopic SOX2 expression and undergo stage-dependent cell fate reprogramming. These results provide evidence that AT1 cells have both structural and signaling roles in alveolar maturation and can exit their terminally differentiated non-proliferative state. Our findings suggest that AT1 cells might be a new target in the pathogenesis and treatment of lung diseases associated with premature birth. PMID:26586225
Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution.
Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl
2016-11-16
Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution
NASA Astrophysics Data System (ADS)
Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl
2016-11-01
Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
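The Richardson-Lucy iteration underlying the filtering steps above can be sketched in one dimension. This is an illustrative sketch on a synthetic signal, not the 2D-SIM reconstruction pipeline itself: each iteration blurs the current estimate with the point-spread function, compares it with the data, and multiplies the estimate by the back-projected ratio.

```python
# Minimal 1-D Richardson-Lucy deconvolution on a synthetic point source.

def convolve_same(x, k):
    """'same'-length discrete convolution with a short, centered kernel."""
    n, m = len(x), len(k)
    half = m // 2
    out = [0.0] * n
    for i in range(n):
        s = 0.0
        for j in range(m):
            idx = i + j - half
            if 0 <= idx < n:
                s += x[idx] * k[j]
        out[i] = s
    return out

def richardson_lucy(data, psf, iterations=50):
    total = sum(psf)
    psf = [p / total for p in psf]      # the PSF must be normalized
    mirror = psf[::-1]
    est = list(data)                    # start from the blurred data
    for _ in range(iterations):
        blurred = convolve_same(est, psf)
        ratio = [d / b if b > 1e-12 else 0.0 for d, b in zip(data, blurred)]
        corr = convolve_same(ratio, mirror)
        est = [e * c for e, c in zip(est, corr)]
    return est

# A point source blurred by a 3-tap PSF: RL iterations re-sharpen the peak.
true = [0.0] * 15
true[7] = 1.0
psf = [0.25, 0.5, 0.25]
blurred = convolve_same(true, psf)
restored = richardson_lucy(blurred, psf)
```

The multiplicative update keeps the estimate non-negative, which is one reason RL-based filtering is attractive for fluorescence data.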
Rapid step-gradient purification of mitochondrial DNA.
Welter, C; Meese, E; Blin, N
1988-01-01
A convenient modification of the step gradient (CsCl/ethidium bromide) procedure is described. This rapid method allows isolation of covalently closed circular DNA separated from contaminating proteins, RNA and chromosomal DNA in ca. 5 h. Large scale preparations can be performed for circular DNA from eukaryotic organelles (mitochondria). The protocol uses organelle pelleting/NaCl-sarcosyl incubation steps for mitochondria followed by a CsCl step gradient and exhibits yields equal to those of conventional procedures. It results in DNA sufficiently pure to be used for restriction endonuclease analysis, subcloning, 5'-end labeling, gel retention assays, and various types of hybridization.
3D Numerical Simulation on the Rockslide Generated Tsunamis
NASA Astrophysics Data System (ADS)
Chuang, M.; Wu, T.; Wang, C.; Chu, C.
2013-12-01
The rockslide generated tsunami is one of the most devastating natural hazards. However, the involvement of a moving obstacle and dynamic free-surface movement makes the numerical simulation a difficult task. To describe both the fluid motion and the solid movement at the same time, we newly developed a two-way fully-coupled moving solid algorithm with a 3D LES turbulence model. The free-surface movement is tracked by the volume of fluid (VOF) method. The two-step projection method is adopted to solve the governing equations, which are of Navier-Stokes type. In the new moving solid algorithm, a fictitious body force is implicitly prescribed in the MAC correction step to make the cell-center velocity match the obstacle velocity. We call this method the implicit velocity method (IVM). Because no extra terms are added to the pressure Poisson correction, the pressure field of the fluid part is stable, which is the key to the two-way fluid-solid coupling. Because no real solid material is present in the IVM, the time marching step is not restricted by the smallest effective grid size. Also, because the fictitious force is implicitly added in the correction step, the resulting velocity is accurate and fully coupled with the resulting pressure field. We validated the IVM by simulating a floating box moving up and down on the free surface. We present the time-history obstacle trajectory and compare it with the experimental data. A very accurate result can be seen in terms of the oscillating amplitude and the period (Fig. 1). We also present the free-surface comparison with the high-speed snapshots. Finally, the IVM was used to study rockslide generated tsunamis (Liu et al., 2005). Good validations of the slide trajectory and the free-surface movement will be presented in the full paper. From the simulation results (Fig.
2), we observed that the rockslide generated waves are mainly caused by the rebounding waves from the two sides of the sliding rock after the water is dragged down by the downward motion of the solid. We also found that the turbulence has a minor effect on the main flow field. The rock size, rock density, and the steepness of the slope were analyzed to understand their effects on the maximum runup height. The detailed algorithm of the IVM, the validation, and the simulation and analysis of the rockslide tsunami will be presented in the full paper. Figure 1. Time-history trajectory of obstacle for the floating obstacle simulation. Figure 2. Snapshots of the free-surface elevation with streamlines for the rockslide tsunami simulation.
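The two-step projection idea referenced above first advances a provisional (generally non-solenoidal) velocity field and then projects it onto a divergence-free field by solving a pressure Poisson equation and subtracting the pressure gradient. The sketch below illustrates only that correction step on a small collocated grid with a hypothetical provisional field; the paper's LES/VOF solver and the IVM fictitious force are not reproduced.

```python
# Sketch of the projection (correction) step: remove the divergence of a
# provisional velocity field via a Poisson solve for a scalar phi.
N, h = 10, 1.0

# Provisional field with uniform divergence: u = x, v = 0.
u = [[float(j) for j in range(N)] for _ in range(N)]
v = [[0.0] * N for _ in range(N)]

def divergence(u, v, i, j):
    """Central-difference divergence at interior cell (i, j)."""
    return ((u[i][j + 1] - u[i][j - 1]) + (v[i + 1][j] - v[i - 1][j])) / (2 * h)

interior = [(i, j) for i in range(2, N - 2) for j in range(2, N - 2)]
div0 = {p: divergence(u, v, *p) for p in interior}
max_div_before = max(abs(d) for d in div0.values())

# Step 1: solve lap(phi) = div by Jacobi iteration (wide 2h stencil, so that
# the discrete div(grad(.)) matches the Laplacian being solved), phi = 0 outside.
phi = [[0.0] * N for _ in range(N)]
for _ in range(300):
    new = [row[:] for row in phi]
    for (i, j), d in div0.items():
        new[i][j] = (phi[i + 2][j] + phi[i - 2][j]
                     + phi[i][j + 2] + phi[i][j - 2] - 4 * h * h * d) / 4.0
    phi = new

# Step 2: subtract the pressure gradient to make the field divergence-free.
for i in range(1, N - 1):
    for j in range(1, N - 1):
        u[i][j] -= (phi[i][j + 1] - phi[i][j - 1]) / (2 * h)
        v[i][j] -= (phi[i + 1][j] - phi[i - 1][j]) / (2 * h)

max_div_after = max(abs(divergence(u, v, *p)) for p in interior)
```

After the correction the interior divergence drops to the residual of the Poisson solve, which is the essential property the two-step method relies on at every time step.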
3D Face Modeling Using the Multi-Deformable Method
Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun
2012-01-01
In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning, which is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976
A Heckman selection model for the safety analysis of signalized intersections
Wong, S. C.; Zhu, Feng; Pei, Xin; Huang, Helai; Liu, Youjun
2017-01-01
Purpose The objective of this paper is to provide a new method for estimating crash rate and severity simultaneously. Methods This study explores a Heckman selection model of the crash rate and severity, considered simultaneously at different levels, using a two-step procedure. The first step uses a probit regression model to determine the sample selection process, and the second step develops a multiple regression model to simultaneously evaluate the crash rate and severity for slight injury and killed or seriously injured (KSI) crashes, respectively. The model uses 555 observations from 262 signalized intersections in the Hong Kong metropolitan area, integrated with information on the traffic flow, geometric road design, road environment, traffic control and any crashes that occurred during a two-year period. Results The results of the proposed two-step Heckman selection model illustrate the necessity of estimating different crash rates for different crash severity levels. Conclusions A comparison with existing approaches suggests that the Heckman selection model offers an efficient and convenient alternative method for evaluating the safety performance of signalized intersections. PMID:28732050
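The mechanics of the second Heckman step can be sketched as follows: the first-step probit yields an index z = xγ for each selected observation, from which the inverse Mills ratio λ(z) = φ(z)/Φ(z) is computed and appended as an extra regressor in the outcome regression to correct for selection bias. All coefficients, covariate values and helper functions below are hypothetical illustrations, not the paper's estimates.

```python
from math import erf, exp, pi, sqrt

def norm_pdf(z):
    return exp(-z * z / 2.0) / sqrt(2.0 * pi)

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def inverse_mills(z):
    """Selection-bias correction term lambda(z) = pdf(z) / cdf(z)."""
    return norm_pdf(z) / norm_cdf(z)

def ols(X, y):
    """Ordinary least squares via the normal equations (Gaussian elimination)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                           # back substitution
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# Hypothetical first-step probit coefficients (gamma) and covariate values.
gamma0, gamma1 = -0.2, 0.8
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
lams = [inverse_mills(gamma0 + gamma1 * x) for x in xs]

# Outcome constructed to satisfy y = 2 + 3x + 0.5*lambda exactly, so the
# second-step OLS with the Mills-ratio regressor recovers those coefficients.
ys = [2.0 + 3.0 * x + 0.5 * lam for x, lam in zip(xs, lams)]
X = [[1.0, x, lam] for x, lam in zip(xs, lams)]
beta = ols(X, ys)
```

A nonzero coefficient on λ is the usual diagnostic that selection matters, i.e. that a single-equation model would be biased.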
NASA Astrophysics Data System (ADS)
Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho
2018-06-01
In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm, which consists of approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (the M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient, called the costate, respectively. We then show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, the so-called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. General convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated and, finally, we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
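For a scalar discrete-time LQR problem the M-step Bellman equation used in the approximate policy evaluation step can be written out directly: the value of a fixed stabilizing policy u = -kx accumulates M stage costs along the closed loop before bootstrapping on the previous value estimate. The sketch below uses illustrative, hypothetical system numbers (not from the paper) and checks that evaluation with any M converges to the same policy value, which is what makes the multirate scheme consistent.

```python
# Scalar system x' = a*x + b*u with stage cost q*x^2 + r*u^2, evaluated
# under a fixed stabilizing policy u = -k*x (hypothetical numbers).
a, b, q, r = 1.1, 1.0, 1.0, 1.0
k = 0.5
a_c = a - b * k                 # closed-loop dynamics, |a_c| < 1
stage = q + r * k * k           # per-step cost coefficient under the policy

def m_step_evaluation(m, sweeps, p0=0.0):
    """Policy evaluation via the M-step Bellman equation:
    P <- sum_{i<m} a_c^(2i) * stage  +  a_c^(2m) * P."""
    p = p0
    for _ in range(sweeps):
        acc, ak = 0.0, 1.0
        for _ in range(m):
            acc += ak * ak * stage
            ak *= a_c
        p = acc + ak * ak * p
    return p

# Closed form for the policy value: P* = stage / (1 - a_c^2).
p_star = stage / (1.0 - a_c ** 2)
p_one = m_step_evaluation(m=1, sweeps=200)   # ordinary one-step evaluation
p_multi = m_step_evaluation(m=5, sweeps=40)  # multirate (M = 5) evaluation
```

Larger M contracts faster per sweep (by a factor a_c^(2M) instead of a_c^2), which is the trade-off the multirate GPI schemes exploit.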
Comparison of Methods for Demonstrating Passage of Time When Using Computer-Based Video Prompting
ERIC Educational Resources Information Center
Mechling, Linda C.; Bryant, Kathryn J.; Spencer, Galen P.; Ayres, Kevin M.
2015-01-01
Two different video-based procedures for presenting the passage of time (how long a step lasts) were examined. The two procedures were presented within the framework of video prompting to promote independent multi-step task completion across four young adults with moderate intellectual disability. The two procedures demonstrating passage of the…
Segregated nodal domains of two-dimensional multispecies Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Chang, Shu-Ming; Lin, Chang-Shou; Lin, Tai-Chia; Lin, Wen-Wei
2004-09-01
In this paper, we study the distribution of m segregated nodal domains of the m-mixture of Bose-Einstein condensates under positive and large repulsive scattering lengths. It is shown that components of positive bound states may repel each other and form segregated nodal domains as the repulsive scattering lengths go to infinity. Efficient numerical schemes are created to confirm our theoretical results and discover a new phenomenon called verticillate multiplying, i.e., the generation of multiple verticillate structures. In addition, our proposed Gauss-Seidel-type iteration method is very effective in that it converges linearly in 10-20 steps.
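A Gauss-Seidel-type iteration like the one referenced above does converge linearly in a handful of sweeps on well-conditioned problems. The toy example below (pure Python, hypothetical diagonally dominant system) is a sketch of the iteration only, not the authors' scheme for the coupled condensate equations.

```python
def gauss_seidel(A, b, sweeps=20):
    """Solve A x = b by Gauss-Seidel: sweep through the unknowns in place,
    immediately reusing the newest values within each sweep."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant test system with known exact solution (1, 2, 1).
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 6.0, 2.0]
x = gauss_seidel(A, b)
```

The error shrinks by a roughly constant factor per sweep (linear convergence), consistent with the 10-20 step behavior reported in the abstract.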
Global research priorities for interpersonal violence prevention: a modified Delphi study
Tanaka, Masako; Tomlinson, Mark; Streiner, David L; Tonmyr, Lil; Lee, Bandy X; Fisher, Jane; Hegadoren, Kathy; Pim, Joam Evans; Wang, Shr-Jie Sharlenna; MacMillan, Harriet L
2017-01-01
Abstract Objective To establish global research priorities for interpersonal violence prevention using a systematic approach. Methods Research priorities were identified in a three-round process involving two surveys. In round 1, 95 global experts in violence prevention proposed research questions to be ranked in round 2. Questions were collated and organized according to the four-step public health approach to violence prevention. In round 2, 280 international experts ranked the importance of research in the four steps, and the various substeps, of the public health approach. In round 3, 131 international experts ranked the importance of detailed research questions on the public health step awarded the highest priority in round 2. Findings In round 2, “developing, implementing and evaluating interventions” was the step of the public health approach awarded the highest priority for four of the six types of violence considered (i.e. child maltreatment, intimate partner violence, armed violence and sexual violence) but not for youth violence or elder abuse. In contrast, “scaling up interventions and evaluating their cost–effectiveness” was ranked lowest for all types of violence. In round 3, research into “developing, implementing and evaluating interventions” that addressed parenting or laws to regulate the use of firearms was awarded the highest priority. The key limitations of the study were response and attrition rates among survey respondents. However, these rates were in line with similar priority-setting exercises. Conclusion These findings suggest it is premature to scale up violence prevention interventions. Developing and evaluating smaller-scale interventions should be the funding priority. PMID:28053363
Comparative analysis of peak-detection techniques for comprehensive two-dimensional chromatography.
Latha, Indu; Reichenbach, Stephen E; Tao, Qingping
2011-09-23
Comprehensive two-dimensional gas chromatography (GC×GC) is a powerful technology for separating complex samples. The typical goal of GC×GC peak detection is to aggregate data points of analyte peaks based on their retention times and intensities. Two techniques commonly used for two-dimensional peak detection are the two-step algorithm and the watershed algorithm. A recent study [4] compared the performance of the two-step and watershed algorithms for GC×GC data with retention-time shifts in the second-column separations. In that analysis, the peak retention-time shifts were corrected while applying the two-step algorithm but the watershed algorithm was applied without shift correction. The results indicated that the watershed algorithm has a higher probability of erroneously splitting a single two-dimensional peak than the two-step approach. This paper reconsiders the analysis by comparing peak-detection performance for resolved peaks after correcting retention-time shifts for both the two-step and watershed algorithms. Simulations with wide-ranging conditions indicate that when shift correction is employed with both algorithms, the watershed algorithm detects resolved peaks with greater accuracy than the two-step method. Copyright © 2011 Elsevier B.V. All rights reserved.
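The two-step algorithm discussed above can be sketched in miniature: step one detects one-dimensional peaks in each second-dimension slice, and step two merges detections from adjacent first-dimension slices whose second-dimension indices fall within a tolerance into a single two-dimensional peak. This is an illustrative simplification on synthetic data, not the implementation compared in the paper, and the retention-time shift correction it discusses is omitted.

```python
def detect_1d(signal, threshold):
    """Step 1: indices of local maxima above a threshold in one slice."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] >= signal[i - 1]
            and signal[i] > signal[i + 1]]

def two_step_peaks(image, threshold, tol=1):
    """Step 2: merge 1-D peaks from adjacent slices into 2-D peaks.
    image[c][r] is the intensity at first-dimension slice c, second-
    dimension point r; each peak is a list of (slice, index) pairs."""
    finished, open_groups = [], []
    for c, column in enumerate(image):
        extended, used = [], set()
        for r in detect_1d(column, threshold):
            match = next((gi for gi, g in enumerate(open_groups)
                          if gi not in used and abs(g[-1][1] - r) <= tol), None)
            if match is None:
                extended.append([(c, r)])          # start a new 2-D peak
            else:
                used.add(match)                    # extend an existing peak
                open_groups[match].append((c, r))
                extended.append(open_groups[match])
        # peaks not extended in this slice are complete
        finished.extend(g for gi, g in enumerate(open_groups) if gi not in used)
        open_groups = extended
    return finished + open_groups

# Synthetic plane: one broad peak spanning slices 1-3, one narrow peak in slice 4.
image = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 3, 1, 0, 0],
    [0, 0, 2, 5, 2, 0, 0],
    [0, 0, 1, 3, 1, 0, 0],
    [0, 0, 0, 0, 0, 4, 0],
]
peaks = two_step_peaks(image, threshold=0.5)
```

The merge tolerance plays the role that shift correction plays in the paper: without it, second-dimension drift between slices splits one analyte into several spurious peaks.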
Yang, Cheng-Huei; Luo, Ching-Hsing; Yang, Cheng-Hong; Chuang, Li-Yeh
2004-01-01
Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, including mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as an adaptive communication device for disabled persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool, and this restriction is a major hindrance. Therefore, a switch-adaptive automatic recognition method with a high recognition rate is needed. The proposed system combines counter-propagation networks with a variable degree variable step size LMS algorithm. It is divided into five stages: space recognition, tone recognition, learning process, adaptive processing, and character recognition. Statistical analyses demonstrated that the proposed method elicited a better recognition rate in comparison to alternative methods in the literature.
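The adaptive-processing stage can be sketched with a toy LMS filter whose step size varies over time; this stands in for the paper's variable degree variable step size LMS, whose exact update rule is not reproduced here. The system coefficients and signals below are hypothetical.

```python
from math import sin

def vs_lms(x, d, taps=3, mu0=0.1, decay=0.995, mu_min=1e-4):
    """LMS with a simple, geometrically decaying variable step size:
    large early steps for fast acquisition, small late steps for stability."""
    w = [0.0] * taps
    mu = mu0
    errors = []
    for n in range(taps - 1, len(x)):
        xv = [x[n - k] for k in range(taps)]           # current input window
        y = sum(wi * xi for wi, xi in zip(w, xv))      # filter output
        e = d[n] - y                                   # instantaneous error
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, xv)]
        mu = max(decay * mu, mu_min)                   # variable step size
        errors.append(e)
    return w, errors

# Unknown 3-tap system to identify (hypothetical coefficients).
h = [0.4, 0.25, -0.1]
x = [sin(0.3 * n) + 0.5 * sin(1.1 * n) for n in range(1000)]
d = [sum(h[k] * x[n - k] for k in range(3)) if n >= 2 else 0.0
     for n in range(len(x))]
w, errors = vs_lms(x, d)

early = sum(abs(e) for e in errors[:20]) / 20
late = sum(abs(e) for e in errors[-20:]) / 20
```

In the recognition system the same idea lets the filter track a user's drifting keying rate: the step size governs how quickly the timing model adapts.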
Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes
NASA Technical Reports Server (NTRS)
Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak
2004-01-01
High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction in the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable.
We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel Benchmarks (NPB). In this paper, we present some interesting performance results of our OpenMP parallel implementation on different architectures such as the SGI Origin2000, SGI Altix, and Cray MTA-2.
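The payoff of decomposing a high-dimensional projection into successive applications of small 2-D matrices can be illustrated with the standard Kronecker-product identity (P ⊗ Q) vec(U) = vec(P U Qᵀ): applying the two small matrices one direction at a time gives the same result as forming and applying the large product matrix, which is the kind of saving the intermediate-mortar construction exploits. The matrices below are arbitrary illustrative values, not mortar projection operators from the paper.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def kron(A, B):
    """Kronecker product: the 'explicitly formed' big projection matrix."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

# Two small per-direction projection matrices and 2x2 element data.
P = [[1.0, 2.0], [0.0, 1.0], [1.0, 0.0]]
Q = [[2.0, 1.0], [1.0, 3.0], [0.0, 2.0]]
U = [[1.0, 2.0], [3.0, 4.0]]

# One-step route: build the 9x4 Kronecker matrix and apply it to vec(U).
big = kron(P, Q)
vec_u = [val for row in U for val in row]
one_step = [sum(m * x for m, x in zip(row, vec_u)) for row in big]

# Two-step route: apply P and Q separately, never forming the big matrix.
two_step = matmul(matmul(P, U), transpose(Q))
two_step_flat = [val for row in two_step for val in row]
```

For realistic polynomial orders the one-step matrix grows with the square (in 2-D) or cube (in 3-D) of the per-direction size, so the two-step route saves both memory and arithmetic.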
NASA Astrophysics Data System (ADS)
Colby, Aaron H.; Liu, Rong; Schulz, Morgan D.; Padera, Robert F.; Colson, Yolonda L.; Grinstaff, Mark W.
2016-01-01
Drug dose, high local target tissue concentration, and prolonged duration of exposure are essential criteria in achieving optimal drug performance. However, systemically delivered drugs often fail to effectively address these factors with only fractions of the injected dose reaching the target tissue. This is especially evident in the treatment of peritoneal cancers, including mesothelioma, ovarian, and pancreatic cancer, which regularly employ regimens of intravenous and/or intraperitoneal chemotherapy (e.g., gemcitabine, cisplatin, pemetrexed, and paclitaxel) with limited results. Here, we show that a “two-step” nanoparticle (NP) delivery system may address this limitation. This two-step approach involves the separate administration of NP and drug where, first, the NP localizes to tumor. Second, subsequent administration of drug then rapidly concentrates into the NP already stationed within the target tissue. This two-step method results in a greater than 5-fold increase in intratumoral drug concentrations compared to conventional “drug-alone” administration. These results suggest that this unique two-step delivery may provide a novel method for increasing drug concentrations in target tissues.
Modeling behavior dynamics using computational psychometrics within virtual worlds.
Cipresso, Pietro
2015-01-01
In case of fire in a building, how will people behave in the crowd? The behavior of each individual affects the behavior of others and, conversely, each one behaves considering the crowd as a whole and the individual others. In this article, I propose a three-step method to explore a brand new way to study behavior dynamics. The first step relies on the creation of specific situations with standard techniques (such as mental imagery, text, video, and audio) and an advanced technique [Virtual Reality (VR)] to manipulate experimental settings. The second step concerns the measurement of behavior in one, two, or many individuals, focusing on parameter extraction to provide information about the behavior dynamics. Finally, the third step uses the parameters collected and measured in the previous two steps to simulate possible scenarios through computational models, in order to forecast, understand, and explain behavior dynamics at the social level. An experimental study is also included to demonstrate the three-step method and a possible scenario.
Growth morphology of flux-synthesized La4Ti3O12 particles
NASA Astrophysics Data System (ADS)
Hori, Shigeo; Orum, Aslihan; Takatori, Kazumasa; Ikeda, Tomiko; Yoshimura, Masamichi; Tani, Toshihiko
2017-06-01
Anisometric-shaped particles are required for the preparation of oriented ceramics by the reactive-templated grain growth method. Hexagonal plate-like particles of La4Ti3O12, a (111)-type layered perovskite, were prepared by molten salt synthesis (MSS), and the relationship between the morphology and crystal structure of the particles was analysed. The La4Ti3O12 phase was obtained in KCl and NaCl fluxes but not in LiCl. The developed plane of the plate-like particles was determined to be the (00l) plane and the side planes of the particles were found to be parallel to the {h0l} planes. Surface steps with a height of approx. 0.9 nm were measured on the developed plane. The step height corresponds to the distance between two adjacent interlayers, which indicates the lowest surface energy of the planes along the interlayers.
Economic evaluations in pain management: principles and methods.
Asche, Carl V; Seal, Brian; Jackson, Kenneth C; Oderda, Gary M
2006-01-01
This paper describes how investigators may design, conduct, and report economic evaluations of pharmacotherapy for pain and symptom management. Because economic evaluation of therapeutic interventions is becoming increasingly important, there is a need for guidance on how economic evaluations can be optimally conducted. The steps required to conduct an economic evaluation are described to provide this guidance. Economic evaluations require two or more therapeutic interventions to be compared in relation to costs and effects. There are five types of economic evaluations: (1) cost-effectiveness, (2) cost-utility, (3) cost-minimization, (4) cost-consequence, and (5) cost-benefit analyses. The six required steps are: identify the perspective of the study; identify the alternatives that will be compared; identify the relevant costs and effects; determine how to collect the cost and effect data; determine how to perform calculations on the cost and effect data; and determine the manner in which to depict the results and draw comparisons.
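As a worked example of the first evaluation type, a cost-effectiveness analysis compares two interventions through the incremental cost-effectiveness ratio (ICER): the extra cost per extra unit of effect of the new therapy relative to the comparator. All the numbers below are hypothetical.

```python
def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost-effectiveness ratio:
    (delta cost) / (delta effect) of the new therapy vs. the comparator."""
    d_cost = cost_new - cost_ref
    d_effect = effect_new - effect_ref
    if d_cost <= 0 and d_effect > 0:
        # Cheaper AND more effective: the new therapy simply dominates.
        raise ValueError("new therapy dominates: no ratio needed")
    return d_cost / d_effect

# Hypothetical pain-therapy comparison: costs in dollars, effects in
# quality-adjusted life years (QALYs).
ratio = icer(cost_new=12000.0, effect_new=4.5, cost_ref=8000.0, effect_ref=4.0)
```

The resulting dollars-per-QALY figure is then judged against a willingness-to-pay threshold chosen from the study perspective identified in step one.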
NASA Astrophysics Data System (ADS)
Uehara, Yoichi; Michimata, Junichi; Watanabe, Shota; Katano, Satoshi; Inaoka, Takeshi
2018-03-01
We have investigated the scanning tunneling microscope (STM) light emission spectra of isolated single Ag nanoparticles lying on highly oriented pyrolytic graphite (HOPG). The STM light emission spectra exhibited two types of spectral structures (step-like and periodic). Comparisons of the observed structures and theoretical predictions indicate that the phonon energy of the ZO mode of HOPG [M. Mohr et al., Phys. Rev. B 76, 035439 (2007)] can be determined from the energy difference between the cutoff of STM light emission and the step in the former structure, and from the period of the latter structure. Since the role of the Ag nanoparticles does not depend on the substrate materials, this method will enable the phonon energies of various materials to be measured by STM light emission spectroscopy. The spatial resolution is comparable to the lateral size of the individual Ag nanoparticles (that is, a few nm).
Huang, Kai; Demadrille, Renaud; Silly, Mathieu G; Sirotti, Fausto; Reiss, Peter; Renault, Olivier
2010-08-24
High-energy resolution photoelectron spectroscopy (DeltaE < 200 meV) is used to investigate the internal structure of semiconductor quantum dots containing low Z-contrast elements. In InP/ZnS core/shell nanocrystals synthesized using a single-step procedure (core and shell precursors added at the same time), a homogeneously alloyed InPZnS core structure is evidenced by quantitative analysis of their In3d(5/2) spectra recorded at variable excitation energy. When using a two-step method (core InP nanocrystal synthesis followed by subsequent ZnS shell growth), XPS analysis reveals a graded core/shell interface. We demonstrate the existence of In-S and S(x)-In-P(1-x) bonding states in both types of InP/ZnS nanocrystals, which allows a refined view on the underlying reaction mechanisms.
Correlation optique en lumiere coherente (in French)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fontanel, A.; Grau, G.
1971-03-01
This paper describes a general bidimensional two-step method of correlation (or convolution) making use of the theory of holography. In the first step, the light diffracted by one of the two plane transparent objects to be correlated interferes with the light diffracted by the other one. The hologram thus generated is photographed in the focal image plane of a convergent lens. Owing to the quadratic detection property of the photographic emulsion, the square of the modulus of the product of the spectra of the two objects considered is recorded on the photographic plate. In the second step, the convolution product of the two objects appears when the hologram is illuminated with a beam of coherent light. In its geophysical application, this optical method of convolution makes it easy to obtain the autocorrelogram of a seismic cross-section. This method also makes it possible to correlate each of the seismic traces with special precalculated optically-recorded filters.
Zhang, Shanshan; Liu, Xiaofei; Qin, Jia'an; Yang, Meihua; Zhao, Hongzheng; Wang, Yong; Guo, Weiying; Ma, Zhijie; Kong, Weijun
2017-11-15
A simple and rapid gas chromatography-flame photometric detection (GC-FPD) method was developed for the determination of 12 organophosphorus pesticides (OPPs) in Salvia miltiorrhizae by using ultrasonication assisted one-step extraction (USAE) without any clean-up steps. Some crucial parameters, such as the type of extraction solvent, were optimized to improve the method performance for trace analysis. Clean-up steps proved unnecessary, as no interferences were detected in the GC-FPD chromatograms. Under the optimized conditions, limits of detection (LODs) and quantitation (LOQs) for all pesticides were in the ranges of 0.001-0.002 mg/kg and 0.002-0.01 mg/kg, respectively, which were all below the suggested regulatory maximum residue limits. RSDs for method precision (intra- and inter-day variations) were lower than 6.8%, in compliance with international regulations. Average recovery rates for all pesticides at three fortification levels (0.5, 1.0 and 5.0 mg/kg) were in the range of 71.2-101.0% with relative standard deviations (RSDs) <13%. The developed method was evaluated for its feasibility in the simultaneous pre-concentration and determination of 12 OPPs in 32 batches of real S. miltiorrhizae samples. Only one pesticide (dimethoate) out of the 12 targets was detected in four samples, at concentrations of 0.016-0.02 mg/kg. Dichlorvos and omethoate were found in the same sample from Sichuan province at 0.004 and 0.027 mg/kg, respectively. Malathion and monocrotophos were determined in the other two samples at 0.014 and 0.028 mg/kg, respectively. All the positive samples were confirmed by LC-MS/MS. The simple, reliable and rapid USAE-GC-FPD method, with many advantages over traditional techniques, would be preferred for trace analysis of multiple pesticides in complex matrices. Copyright © 2017 Elsevier B.V. All rights reserved.
Computer-assisted techniques to evaluate fringe patterns
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Bhat, Gopalakrishna K.
1992-01-01
Strain measurement using interferometry requires an efficient way to extract the desired information from interferometric fringes. Availability of digital image processing systems makes it possible to use digital techniques for the analysis of fringes. In the past, there have been several developments in the area of one dimensional and two dimensional fringe analysis techniques, including the carrier fringe method (spatial heterodyning) and the phase stepping (quasi-heterodyning) technique. This paper presents some new developments in the area of two dimensional fringe analysis, including a phase stepping technique supplemented by the carrier fringe method and a two dimensional Fourier transform method to obtain the strain directly from the discontinuous phase contour map.
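The phase stepping technique mentioned above can be made concrete with the classic four-step algorithm: four fringe intensities recorded with phase shifts of 0, π/2, π and 3π/2 determine the phase via an arctangent, independent of the background level and the fringe modulation. A minimal sketch:

```python
from math import atan2, cos, pi

def four_step_phase(i1, i2, i3, i4):
    """Recover the fringe phase from four intensity frames
    I_k = a + b*cos(phase + k*pi/2), k = 0..3."""
    return atan2(i4 - i2, i1 - i3)

# Simulate one pixel: background a, modulation b, known phase.
a, b, true_phase = 5.0, 2.0, 1.2
frames = [a + b * cos(true_phase + k * pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
```

Because i4 - i2 = 2b·sin(phase) and i1 - i3 = 2b·cos(phase), both the background a and the modulation b cancel out of the arctangent; the result is wrapped to (-π, π], which is why the phase-unwrapping of the discontinuous contour map is the remaining task.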
Liu, Dong; Wu, Lili; Li, Chunxiu; Ren, Shengqiang; Zhang, Jingquan; Li, Wei; Feng, Lianghuan
2015-08-05
Methylammonium lead halide perovskite solar cells have become very attractive because they can be prepared with low-cost solution-processable technology and their power conversion efficiency has increased from 3.9% to 20% in recent years. However, the high performance of perovskite photovoltaic devices depends on a complicated process to prepare compact perovskite films with large grain size. Herein, a new method is developed to achieve an excellent CH3NH3PbI3-xClx film with fine morphology and crystallization based on a one-step deposition and a two-step annealing process. This method includes spin coating deposition of the perovskite films from a precursor solution of PbI2, PbCl2, and CH3NH3I at the molar ratio 1:1:4 in dimethylformamide (DMF), followed by two-step annealing (TSA). The first annealing is achieved by a solvent-induced process in DMF to promote migration and interdiffusion of the solvent-assisted precursor ions and molecules and realize large grain growth. The second annealing is conducted by a thermal-induced process to further improve the morphology and crystallization of the films. Compact perovskite films were successfully prepared with grain sizes up to 1.1 μm according to SEM observation. The PL decay lifetime and the optical energy gap for the film with two-step annealing are 460 ns and 1.575 eV, respectively, compared with 307 and 327 ns and 1.577 and 1.582 eV for the films annealed by the one-step thermal and one-step solvent processes. On the basis of the TSA process, the photovoltaic devices exhibit a best efficiency of 14% under AM 1.5G irradiation (100 mW·cm(-2)).
NASA Astrophysics Data System (ADS)
Peng, Jiaoyu; Bian, Shaoju; Lin, Feng; Wang, Liping; Dong, Yaping; Li, Wu
2017-10-01
The synthesis of pinnoite (MgB2O(OH)6) from boron-containing brine was established with a novel dilution method. The effects of temperature, precipitation time, boron concentration and mass dilution ratio on the formation of pinnoite were investigated. The products obtained were characterized by X-ray diffraction (XRD), Raman spectroscopy, thermogravimetry and differential scanning calorimetry (TG-DSC), and scanning electron microscopy. The transformation mechanism of pinnoite at different dilution ratios was inferred by studying the crystal growth of pinnoite. The results showed that pinnoite was synthesized above 60 °C in the diluted brine. When the dilution ratio was more than 1.0 at 80 °C, the reaction proceeded in two steps - precipitation of an amorphous solid followed by the formation of pinnoite crystals. In the 0.5-diluted brine, by contrast, only a single step of pinnoite crystal formation was observed, and its transformation mechanism is discussed on the basis of the dissociation of polyborates in brine. In addition, an origin is proposed for the pinnoite mineral deposited on salt lake bottoms.
Nano-scaled top-down of bismuth chalcogenides based on electrochemical lithium intercalation
NASA Astrophysics Data System (ADS)
Chen, Jikun; Zhu, Yingjie; Chen, Nuofu; Liu, Xinling; Sun, Zhengliang; Huang, Zhenghong; Kang, Feiyu; Gao, Qiuming; Jiang, Jun; Chen, Lidong
2011-12-01
A two-step method has been used to fabricate nanoparticles of layer-structured bismuth chalcogenide compounds, including Bi2Te3, Bi2Se3, and Bi2Se0.3Te2.7, through a nano-scaled top-down route. In the first step, lithium (Li) atoms are intercalated between the van der Waals bonded quintuple layers of the bismuth chalcogenides by a controllable electrochemical process inside self-designed lithium-ion batteries. In the second step, the Li-intercalated bismuth chalcogenides are exposed to ethanol, in which the intercalated Li atoms explode like atom-scaled bombs and exfoliate the original micro-scaled powder into nano-scaled particles with sizes around 10 nm. The influence of lithium intercalation speed and amount on the three bismuth chalcogenide compounds is compared and the optimized intercalation conditions are explored. To maintain the phase purity of the final nanoparticle product, the amount of intercalated lithium must be well controlled in the Se-containing compounds; moreover, compared with the binary bismuth chalcogenide, a lower lithium intercalation speed should be applied to the ternary compound.
Trial of a novel endoscopic tattooing biopsy forceps on animal model
Si, Jian-Min; Sun, Lei-Min; Fan, Yu-Jing; Wang, Liang-Jing
2005-01-01
AIM: To tattoo gastric mucosa with a novel medical device that could be used to monitor and follow up gastric mucosal lesions. METHODS: Combining endoscopic biopsy with sclerotherapy injection, we designed a new device that performs biopsy and injection simultaneously. We performed endoscopies on a pig with the novel endoscopic tattooing biopsy forceps over 15 mo. For comparison, we used the old two-step method combining a sclerotherapy injection needle with endoscopic biopsy. The acuity and inflammation of the tattoos and the duration of endoscopy were compared between the two methods. RESULTS: Compared with the old two-step method, the inflammation induced by the new device was similar, but the duration of the procedure was markedly shorter and the acuity of tattooing was better. All characteristics of the novel device complied with national safety guidelines. Follow-up gastroscopy after 15 mo showed that the site stained by injection of 0.5 mL of 1:100 India ink was still markedly visible, with little inflammatory reaction. CONCLUSION: The endoscopic tattooing biopsy forceps can be widely used in monitoring precancerous lesions. Its safety and effectiveness have been established in animals. PMID:15793881
Hyperspectral image segmentation using a cooperative nonparametric approach
NASA Astrophysics Data System (ADS)
Taher, Akar; Chehdi, Kacem; Cariou, Claude
2013-10-01
In this paper a new unsupervised, nonparametric, cooperative and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel, and intermediate classification results are evaluated and fused to obtain the final segmentation. Two unsupervised nonparametric segmentation methods are used in parallel cooperation to segment each band of the image: the Fuzzy C-means (FCM) method and the Linde-Buzo-Gray (LBG) algorithm. The originality of the approach lies firstly in its local adaptation to the type of regions in an image (textured, non-textured), and secondly in the introduction of several levels of evaluation and validation of intermediate segmentation results before the final partitioning of the image is obtained. To manage similar or conflicting results issued from the two classification methods, we gradually introduce various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. This approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album. It was then evaluated on two real applications: a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain). The correct classification rate (CCR) for the first application is over 97%, and the average correct classification rate (ACCR) for the second is over 99%.
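Of the two cooperating classifiers, the Fuzzy C-means step can be sketched as below. This is the generic FCM update with fuzzifier m = 2, not the authors' adapted version, and the one-band toy "image" is illustrative:

```python
import numpy as np

def fcm(data, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means on an (n_samples, n_features) array: alternate
    the center update and the membership update. Returns the centers and
    the fuzzy membership matrix U of shape (n_samples, c)."""
    rng = np.random.default_rng(seed)
    u = rng.random((data.shape[0], c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                      # guard the division below
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# Two well-separated groups of pixel values in a single "band"
pixels = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)])[:, None]
centers, u = fcm(pixels)
labels = u.argmax(axis=1)                          # defuzzified segmentation
```

In the paper this per-band partition would then be evaluated against the LBG result and fused across bands.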
Experimental study on the stability and failure of individual step-pool
NASA Astrophysics Data System (ADS)
Zhang, Chendi; Xu, Mengzhen; Hassan, Marwan A.; Chartrand, Shawn M.; Wang, Zhaoyin
2018-06-01
Step-pools are among the most common bedforms in mountain streams, and their stability and failure play a significant role in riverbed stability and fluvial processes. Given this importance, flume experiments were performed with a manually constructed step-pool model. The experiments were carried out at a constant flow rate to study features of step-pool stability as well as failure mechanisms. The results demonstrate that motion of the keystone grain (KS) caused 90% of the total failure events. The pool either reached its maximum depth and exhibited relative stability for a period before step failure, termed the stable phase, or collapsed before fully developing. The critical scour depth of the pool increased linearly with discharge until the trend was interrupted by step failure. The duration of the stable phase varied by one order of magnitude, whereas the variability of pool scour depth was constrained within 50%. Step adjustment was detected in almost all runs with step-pool failure and was one to two orders of magnitude smaller than the diameter of the step stones. Two discharge regimes for step-pool failure were revealed: the first captures threshold conditions and frames possible step-pool failure, whereas the second captures step-pool failure conditions and corresponds to the discharge of an exceptional event. In the transitional stage between the two regimes, the magnitudes of pool and step adjustment displayed relatively large variability, which produced feedbacks that extended the duration of step-pool stability. Step adjustment, a type of structural deformation, increased significantly before step failure. We therefore consider step deformation, rather than pool scour (which remained relatively stable during step deformation in our experiments), to be the direct explanation for step-pool failure.
NASA Astrophysics Data System (ADS)
Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme
2013-04-01
Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with the wave set-up contribution are estimated here at one site: Cherbourg, France, on the English Channel. The methodology follows two steps: the first is computation of the joint probability of simultaneous wave height and still sea level; the second is interpretation of those joint probabilities to assess the sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first is multivariate extreme value distributions of logistic type, in which all components of the variables become large simultaneously; the second is the conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte Carlo simulation, in which the estimate is more accurate but requires more computation time, and classical ocean engineering design contours of inverse-FORM type, which is simpler and allows a more complex estimation of the wave set-up part (for example, wave propagation to the coast). We compare results from the two approaches combined with the two methods. To be able to use both Monte Carlo simulation and the design contours method, the wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach over the multivariate extreme value approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean engineering design contours method, which is an alternative when the computation of sea levels is too complex for the Monte Carlo simulation method.
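The Monte Carlo route can be sketched as follows. Everything specific here is an assumption: a Gaussian copula stands in for the logistic dependence model, the Gumbel marginals and their parameters are invented, and the 0.2·Hs rule stands in for the paper's empirical wave set-up formula:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def gumbel_ppf(u, mu, beta):
    # Inverse CDF of the Gumbel (extreme value type I) distribution
    return mu - beta * np.log(-np.log(u))

def simulate_annual_max_levels(n_years, rho=0.6):
    """Crude joint sampler: a Gaussian copula with correlation rho links
    Gumbel-distributed annual maxima of still-water level and wave height."""
    z1 = rng.standard_normal(n_years)
    z2 = rho * z1 + sqrt(1.0 - rho**2) * rng.standard_normal(n_years)
    to_u = lambda z: np.clip(
        0.5 * (1.0 + np.array([erf(x / sqrt(2.0)) for x in z])), 1e-12, 1 - 1e-12)
    still = gumbel_ppf(to_u(z1), mu=4.0, beta=0.3)   # tide + surge (m), invented
    hs = gumbel_ppf(to_u(z2), mu=3.0, beta=0.8)      # significant wave height (m), invented
    setup = 0.2 * hs   # rule-of-thumb set-up, stand-in for the empirical formula
    return still + setup

levels = simulate_annual_max_levels(50_000)

def return_level(levels, t_years):
    # Annual-maximum level exceeded on average once every t_years
    return np.quantile(levels, 1.0 - 1.0 / t_years)
```

The design-contour alternative would instead search the joint density for the worst combination on an iso-probability contour, avoiding the sampling cost.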
Introduction to Polymer Chemistry.
ERIC Educational Resources Information Center
Harris, Frank W.
1981-01-01
Reviews the physical and chemical properties of polymers and the two major methods of polymer synthesis: addition (chain, chain-growth, or chain-reaction), and condensation (step-growth or step-reaction) polymerization. (JN)
A novel data-mining approach leveraging social media to monitor consumer opinion of sitagliptin.
Akay, Altug; Dragomir, Andrei; Erlandsson, Björn-Erik
2015-01-01
A novel data mining method was developed to gauge the experience of patients with type 2 diabetes mellitus with the drug sitagliptin (trade name Januvia). To this end, we devised a two-step analysis framework. An initial exploratory analysis using self-organizing maps was performed to determine structures among the forum posts based on user opinions. The results were a compilation of user clusters and their correlated (positive or negative) opinions of the drug. Subsequent modeling using network analysis methods was used to determine influential users among the forum members. These findings can open new avenues of research into rapid data collection, feedback, and analysis that can enable improved outcomes and solutions for public health, as well as important feedback for the manufacturer.
Shorofsky, Stephen R; Peters, Robert W; Rashba, Eric J; Gold, Michael R
2004-02-01
Determination of the defibrillation threshold (DFT) is an integral part of ICD implantation. Two commonly used methods of DFT determination, the step-down method and the binary search method, were compared in 44 patients undergoing ICD testing for standard clinical indications. The step-down protocol used an initial shock of 18 J. The binary search method began with a shock energy of 9 J, and successive shock energies were increased or decreased depending on the success of the previous shock. The DFT was defined as the lowest energy that successfully terminated ventricular fibrillation. The binary search method has the advantage of requiring a predetermined number of shocks, but some have questioned its accuracy. The study found that the mean DFT obtained by the step-down method was 8.2 +/- 5.0 J, whereas by the binary search method it was 8.1 +/- 0.7 J (P = NS). The DFT differed by no more than one step between methods in 32 patients (71%). The number of shocks required to determine the DFT by the step-down method was 4.6 +/- 1.4, whereas, by definition, the binary search method always required three shocks. In conclusion, the binary search method is preferable because it is of comparable efficacy and requires fewer shocks.
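The fixed-shock-count property of the binary search can be sketched as below. The abstract gives only the 9 J starting energy and the three-shock count; the 6 J initial step and the halving rule are assumptions for illustration:

```python
def binary_search_dft(shock_succeeds, start=9.0, step=6.0, n_shocks=3):
    """Sketch of a binary-search DFT protocol: start at 9 J, move up after
    a failed shock or down after a successful one, halving the step each
    time, for a fixed total of three shocks. (The 6 J initial step and the
    halving rule are assumed; the abstract does not give the ladder.)
    Returns the lowest energy observed to defibrillate, or None."""
    energy, lowest_success = start, None
    for _ in range(n_shocks):
        if shock_succeeds(energy):
            lowest_success = energy if lowest_success is None else min(lowest_success, energy)
            energy -= step
        else:
            energy += step
        step /= 2.0
    return lowest_success

# Hypothetical patient whose true threshold is 8 J: shocks go 9 J (success),
# 3 J (failure), 6 J (failure), so the recorded DFT is 9 J
dft = binary_search_dft(lambda e: e >= 8.0)
```

Whatever the outcome, exactly three shocks are delivered, which is the method's clinical appeal.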
Particle Simulation of Coulomb Collisions: Comparing the Methods of Takizuka & Abe and Nanbu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, C; Lin, T; Caflisch, R
2007-05-22
The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions: one developed by Takizuka and Abe in 1977, the other by Nanbu in 1997. We perform deterministic and stochastic error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time-step errors. Error comparisons between the two methods are presented.
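The Takizuka-Abe approach can be sketched as below, heavily simplified: equal masses, and the scattering-angle variance is a free parameter rather than the physical value built from density, temperature, the Coulomb logarithm and the time step. The pairwise rotation of the relative velocity conserves momentum and energy exactly:

```python
import numpy as np

def collide_pairs(v, delta_std, rng):
    """One Takizuka-Abe-style collision step for equal-mass particles.
    v is an (n, 3) velocity array, n even. Particles are paired at random
    and each pair's relative velocity u is rotated by a random angle theta,
    with tan(theta/2) = delta drawn from N(0, delta_std**2)."""
    order = rng.permutation(len(v))
    for i, j in zip(order[0::2], order[1::2]):
        u = v[i] - v[j]
        umag = np.linalg.norm(u)
        if umag == 0.0:
            continue
        uperp = np.hypot(u[0], u[1])
        delta = rng.normal(0.0, delta_std)
        sin_t = 2.0 * delta / (1.0 + delta**2)
        omc = 2.0 * delta**2 / (1.0 + delta**2)      # 1 - cos(theta)
        phi = rng.uniform(0.0, 2.0 * np.pi)
        cphi, sphi = np.cos(phi), np.sin(phi)
        if uperp > 1e-14 * umag:
            du = np.array([
                (u[0] * u[2] / uperp) * sin_t * cphi - (u[1] * umag / uperp) * sin_t * sphi - u[0] * omc,
                (u[1] * u[2] / uperp) * sin_t * cphi + (u[0] * umag / uperp) * sin_t * sphi - u[1] * omc,
                -uperp * sin_t * cphi - u[2] * omc,
            ])
        else:  # u is (anti)parallel to z; scatter about x and y instead
            du = np.array([umag * sin_t * cphi, umag * sin_t * sphi, -u[2] * omc])
        v[i] += 0.5 * du   # equal masses: momentum is conserved exactly
        v[j] -= 0.5 * du
    return v

rng = np.random.default_rng(0)
v = rng.standard_normal((200, 3))
p0, e0 = v.sum(axis=0), float((v**2).sum())
v = collide_pairs(v, delta_std=0.1, rng=rng)
```

Nanbu's 1997 model replaces the per-step small-angle sampling with a cumulative scattering distribution, which is the source of its smaller time-step error noted above.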
Vibration-Induced Motor Responses of Infants With and Without Myelomeningocele
Teulier, Caroline; Smith, Beth A.; Kim, Byungji; Beutler, Benjamin D.; Martin, Bernard J.; Ulrich, Beverly D.
2012-01-01
Background The severity of myelomeningocele (MMC) stems both from a loss of neurons due to neural tube defect and a loss of function in viable neurons due to reduced movement experience during the first year after birth. In young infants with MMC, the challenge is to reinforce excitability and voluntary control of all available neurons. Muscle vibration paired with voluntary movement may increase motoneuron excitability and contribute to improvements in neural organization, responsiveness, and control. Objectives This study examined whether infants with or without MMC respond to vibration by altering their step or stance behavior when supported upright on a treadmill. Design This was a cross-sectional study. Methods Twenty-four 2- to 10-month-old infants, 12 with typical development (TD) and 12 with MMC (lumbar and sacral lesions), were tested. Infants were supported upright with their feet in contact with a stationary or moving treadmill during 30-second trials. Rhythmic alternating vibrations were applied to the right and left rectus femoris muscles, the lateral gastrocnemius muscle, or the sole of the foot. Two cameras and behavior coding were used to determine step count, step type, and motor response to vibration onset. Results Step count decreased and swing duration increased in infants with TD during vibration of the sole of the foot on a moving treadmill (FT-M trials). Across all groups the percentage of single steps increased during vibration of the lateral gastrocnemius muscle on a moving treadmill. Infants with MMC and younger infants with TD responded to onset of vibration with leg straightening during rectus femoris muscle stimulation and by stepping during FT-M trials more often than older infants with TD. Conclusions Vibration seems a viable option for increasing motor responsiveness in infants with MMC. Follow-up studies are needed to identify optimal methods of administering vibration to maximize step and stance behavior in infants. PMID:22228610
PHOSPHITE STABILIZATION EFFECTS ON TWO-STEP MELT-SPUN FIBERS OF POLYLACTIDE. (R826733)
The effects of molecular weight stabilization on mechanical properties of polylactide (PLA) fibers are investigated. The textile-grade PLA contains a 98:02 ratio of L:D stereocenters and fibers are produced by the two-step method, involving a primary quench and cold drawing. M...
A novel two-step method for screening shade tolerant mutant plants via dwarfism
USDA-ARS?s Scientific Manuscript database
When subjected to shade, plants undergo rapid shoot elongation, which often makes them more prone to disease and mechanical damage. It has been reported that, in turfgrass, induced dwarfism can enhance shade tolerance. Here, we describe a two-step procedure for isolating shade tolerant mutants of ...
NASA Astrophysics Data System (ADS)
Cucchi, K.; Kawa, N.; Hesse, F.; Rubin, Y.
2017-12-01
In order to reduce uncertainty in the prediction of subsurface flow and transport processes, practitioners should use all available data. However, classic inverse modeling frameworks typically only make use of information contained in in-situ field measurements to provide estimates of hydrogeological parameters. Such hydrogeological information about an aquifer is difficult and costly to acquire. In this data-scarce context, the transfer of ex-situ information coming from previously investigated sites can be critical for improving predictions by better constraining the estimation procedure. Bayesian inverse modeling provides a coherent framework to represent such ex-situ information by virtue of the prior distribution and to combine it with in-situ information from the target site. In this study, we present an innovative data-driven approach for defining such informative priors for hydrogeological parameters at the target site. Our approach consists of two steps, both relying on statistical and machine learning methods. The first step is data selection: selecting sites similar to the target site, using clustering methods based on observable hydrogeological features. The second step is data assimilation: assimilating data from the selected similar sites into the informative prior, using a Bayesian hierarchical model to account for inter-site variability and to allow for the assimilation of multiple types of site-specific data. We present the application and validation of these methods on an established database of hydrogeological parameters. The data and methods are implemented in the form of an open-source R package, facilitating easy use by other practitioners.
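The two steps can be sketched as below. This is a deliberately reduced stand-in: nearest-neighbor selection replaces the paper's clustering, simple moment pooling replaces the Bayesian hierarchical model, and the grain-size feature and log-conductivity values are invented:

```python
import numpy as np

def informative_prior(target_features, site_features, site_params, k=3):
    """Two-step sketch: (1) select the k database sites most similar to the
    target in standardized feature space; (2) pool their parameter values
    into a normal prior N(mean, var)."""
    f = np.asarray(site_features, float)
    mu, sd = f.mean(axis=0), f.std(axis=0) + 1e-12
    fz = (f - mu) / sd
    tz = (np.asarray(target_features, float) - mu) / sd
    dist = np.linalg.norm(fz - tz, axis=1)
    similar = np.argsort(dist)[:k]          # step 1: data selection
    vals = np.asarray(site_params, float)[similar]
    return float(vals.mean()), float(vals.var(ddof=1))   # step 2: assimilation

# Hypothetical database: feature = mean grain size (mm), parameter = log10(K)
features = [[0.10], [0.12], [0.11], [2.0], [2.1]]
log_k = [-5.0, -5.2, -5.1, -2.0, -2.2]
prior_mean, prior_var = informative_prior([0.105], features, log_k, k=3)
```

The fine-grained sites dominate the prior for a fine-grained target, which is the intended behavior: ex-situ information is weighted by hydrogeological similarity, not pooled globally.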
An empirically derived short form of the Hypoglycaemia Fear Survey II.
Grabman, J; Vajda Bailey, K; Schmidt, K; Cariou, B; Vaur, L; Madani, S; Cox, D; Gonder-Frederick, L
2017-04-01
To develop an empirically derived short version of the Hypoglycaemia Fear Survey II that still accurately measures fear of hypoglycaemia. Item response theory methods were used to generate an 11-item version of the Hypoglycaemia Fear Survey from a sample of 487 people with Type 1 or Type 2 diabetes mellitus. Subsequently, this scale was tested on a sample of 2718 people with Type 1 or insulin-treated Type 2 diabetes taking part in DIALOG, a large observational prospective study of hypoglycaemia in France. The short form of the Hypoglycaemia Fear Survey II matched the factor structure of the long form for respondents with both Type 1 and Type 2 diabetes, while maintaining adequate internal reliability on the total scale and all three subscales. The two forms were highly correlated on both the total scale and each subscale (Pearson's R > 0.89). The short form of the Hypoglycaemia Fear Survey II is an important first step in more efficiently measuring fear of hypoglycaemia. Future prospective studies are needed for further validity testing and exploring the survey's applicability to different populations. © 2016 Diabetes UK.
NASA Astrophysics Data System (ADS)
Zeng, Chao; Long, Di; Shen, Huanfeng; Wu, Penghai; Cui, Yaokui; Hong, Yang
2018-07-01
Land surface temperature (LST) is one of the most important parameters in land surface processes. Although satellite-derived LST can provide valuable information, its value is often limited by cloud contamination. In this paper, a two-step satellite-derived LST reconstruction framework is proposed. First, a multi-temporal reconstruction algorithm is introduced to recover invalid LST values using multiple LST images, with reference to the corresponding remotely sensed vegetation index; all cloud-contaminated areas are thereby temporally filled with hypothetical clear-sky LST values. Second, a procedure based on the surface energy balance equation is used to correct the filled values: with shortwave irradiation data, the clear-sky LST is corrected to the real LST under cloudy conditions. A series of experiments demonstrates the effectiveness of the developed approach. Quantitative evaluation indicates that the proposed method can recover LST over different surface types with mean errors of 3-6 K. The experiments also indicate that the time interval between the multi-temporal LST images has a greater impact on the results than the size of the contaminated area.
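The first (gap-filling) step can be sketched with a minimal bias-shifted fill; the paper's actual algorithm additionally regresses on the vegetation index, and the second (energy-balance) correction step is omitted entirely:

```python
import numpy as np

def fill_cloudy_lst(lst_target, lst_reference, valid_target, valid_reference):
    """Step-1 sketch: fill cloud-contaminated pixels of a target LST scene
    from a near-date clear reference scene, shifted by the mean bias over
    pixels valid in both scenes. The result is a hypothetical clear-sky
    field, which the paper then corrects with shortwave irradiation data."""
    both = valid_target & valid_reference
    bias = float((lst_target[both] - lst_reference[both]).mean())
    filled = lst_target.copy()
    gaps = (~valid_target) & valid_reference
    filled[gaps] = lst_reference[gaps] + bias
    return filled

# Toy 4x4 scene in kelvin: the target day is uniformly 2 K warmer
lst_ref = 290.0 + np.arange(16.0).reshape(4, 4)
lst_tgt = lst_ref + 2.0
valid_tgt = np.ones((4, 4), bool)
valid_tgt[0, 0] = False                       # one "cloudy" pixel
filled = fill_cloudy_lst(lst_tgt, lst_ref, valid_tgt, np.ones((4, 4), bool))
```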
Runge-Kutta methods combined with compact difference schemes for the unsteady Euler equations
NASA Technical Reports Server (NTRS)
Yu, Sheng-Tao
1992-01-01
Recent developments using compact difference schemes to solve the Navier-Stokes equations show spectral-like accuracy. A study was made of the numerical characteristics of various combinations of Runge-Kutta (RK) methods and compact difference schemes for calculating the unsteady Euler equations. The accuracy of finite difference schemes is assessed based on evaluations of the dissipative error. The objectives are to reduce the numerical damping and, at the same time, preserve numerical stability. While this approach has had tremendous success for steady flows, the numerical characteristics of unsteady calculations remain largely unclear. For unsteady flows, in addition to the dissipative errors, the phase velocity and harmonic content of the numerical results are of concern. As a result of the discretization procedure, the simulated unsteady flow motions actually propagate in a dispersive numerical medium. Consequently, the dispersion characteristics of the numerical schemes, which relate the phase velocity and wave number, may greatly impact the numerical accuracy. The aim is to assess the numerical accuracy of the simulated results. To this end, Fourier analysis is used to provide the dispersive correlations of the various numerical schemes. First, a detailed investigation of the existing RK methods is carried out. A generalized form of an N-step RK method is derived. With this generalized form, criteria are derived for the three- and four-step RK methods to be third- and fourth-order time accurate for nonlinear equations, e.g., the flow equations. These criteria are then applied to commonly used RK methods, such as Jameson's 3-step and 4-step schemes and Wray's algorithm, to identify the accuracy of the methods. For the spatial discretization, compact difference schemes are presented. The schemes are formulated in operator form to render them suitable for Fourier analysis. The performance of the numerical methods is shown by numerical examples.
These examples are described in detail. The third case is a two-dimensional simulation of a Lamb vortex in a uniform flow. This calculation provides a realistic assessment of various finite difference schemes in terms of the conservation of the vortex strength and the harmonic content after travelling a substantial distance. The numerical implementation of Giles' non-reflective equations, coupled with the characteristic equations as the boundary condition, is discussed in detail. Finally, the single-vortex calculation is extended to simulate vortex pairing. For distances between the two vortices smaller than a threshold value, the numerical results show crisp resolution of the vortex merging.
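The generalized N-step RK form discussed above can be sketched with a generic explicit Butcher-tableau driver; the classical four-stage tableau and the observed-order check on the linear model problem y' = -y are textbook examples, not the paper's schemes:

```python
import numpy as np

def rk_step(f, t, y, h, a, b, c):
    """One explicit Runge-Kutta step defined by a Butcher tableau (a, b, c)."""
    k = []
    for i in range(len(b)):
        yi = y + h * sum(a[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# Classical 4-stage tableau (fourth-order accurate, also for nonlinear problems)
A4 = [[], [0.5], [0.0, 0.5], [0.0, 0.0, 1.0]]
B4 = [1.0 / 6.0, 1.0 / 3.0, 1.0 / 3.0, 1.0 / 6.0]
C4 = [0.0, 0.5, 0.5, 1.0]

def solve(f, y0, t_end, n, tableau):
    a, b, c = tableau
    h, t, y = t_end / n, 0.0, y0
    for _ in range(n):
        y = rk_step(f, t, y, h, a, b, c)
        t += h
    return y

f = lambda t, y: -y                          # linear model problem y' = -y
err = lambda n: abs(solve(f, 1.0, 1.0, n, (A4, B4, C4)) - np.exp(-1.0))
order = np.log2(err(20) / err(40))           # observed order of accuracy
```

Halving the step size reduces the error by about 2^4, confirming fourth-order time accuracy; the paper's order conditions generalize this check to N-step schemes and nonlinear equations.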
Hybrid mesh finite volume CFD code for studying heat transfer in a forward-facing step
NASA Astrophysics Data System (ADS)
Jayakumar, J. S.; Kumar, Inder; Eswaran, V.
2010-12-01
Computational fluid dynamics (CFD) methods employ two types of grid: structured and unstructured. Developing the solver and data structures for a finite-volume solver is easier for structured grids than for unstructured grids, but real-life problems are too complicated to be fitted flexibly by structured grids. Therefore, unstructured grids are widely used for solving real-life problems. However, using only one type of unstructured element consumes a lot of computational time because the number of elements cannot be controlled. A hybrid grid that contains mixed elements, such as hexahedral elements along with tetrahedral and pyramidal elements, gives the user control over the number of elements in the domain, so that only the region requiring a finer grid is meshed more finely, not the entire domain. This work aims to develop such a finite-volume hybrid grid solver capable of handling turbulent flows and conjugate heat transfer. It has been extended to flows involving separation and subsequent reattachment due to sudden expansion or contraction. Significant mixing of high- and low-enthalpy fluid occurs in the reattached regions of such devices, which makes the study of the backward-facing and forward-facing step with heat transfer an important field of research. The problem of the forward-facing step with conjugate heat transfer was taken up and solved for turbulent flow using the two-equation k-ω model. The variation in the flow profile and heat transfer behavior was studied with variation in Re and the solid-to-fluid thermal conductivity ratio. Results for the variation in local Nusselt number, interface temperature and skin friction factor are presented.
Reisner, Sari L; Biello, Katie; Rosenberger, Joshua G; Austin, S Bryn; Haneuse, Sebastien; Perez-Brumer, Amaya; Novak, David S; Mimiaga, Matthew J
2014-11-01
Few comparative data are available internationally to examine health differences by transgender identity. A barrier to monitoring the health and well-being of transgender people is the lack of inclusion of measures to assess natal sex/gender identity status in surveys. Data were from a cross-sectional anonymous online survey of members (n > 36,000) of a sexual networking website targeting men who have sex with men in Spanish- and Portuguese-speaking countries/territories in Latin America/the Caribbean, Portugal, and Spain. Natal sex/gender identity status was assessed using a two-step method (Step 1: assigned birth sex, Step 2: current gender identity). Male-to-female (MTF) and female-to-male (FTM) participants were compared to non-transgender males in age-adjusted regression models on socioeconomic status (SES) (education, income, sex work), masculine gender conformity, psychological health and well-being (lifetime suicidality, past-week depressive distress, positive self-worth, general self-rated health, gender related stressors), and sexual health (HIV-infection, past-year STIs, past-3 month unprotected anal or vaginal sex). The two-step method identified 190 transgender participants (0.54%; 158 MTF, 32 FTM). Of the 12 health-related variables, six showed significant differences between the three groups: SES, masculine gender conformity, lifetime suicidality, depressive distress, positive self-worth, and past-year genital herpes. A two-step approach is recommended for health surveillance efforts to assess natal sex/gender identity status. Cognitive testing to formally validate assigned birth sex and current gender identity survey items in Spanish and Portuguese is encouraged.
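The two-step coding rule can be sketched as below. The category labels follow the paper's MTF / FTM / non-transgender grouping, but the response-option wording is illustrative; the actual Spanish and Portuguese survey items may differ:

```python
def code_two_step(assigned_birth_sex, current_gender_identity):
    """Coding rule for the two-step measure (Step 1: assigned sex at birth,
    Step 2: current gender identity). Returns "MTF", "FTM", or a
    non-transgender category when the two responses are concordant."""
    s = assigned_birth_sex.strip().lower()
    g = current_gender_identity.strip().lower()
    if s == "male" and g in ("woman", "female", "trans woman"):
        return "MTF"
    if s == "female" and g in ("man", "male", "trans man"):
        return "FTM"
    return "non-transgender " + s
```

Crossing the two items rather than asking a single "are you transgender?" question is what let the survey identify the 190 (0.54%) transgender participants reported above.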
NASA Astrophysics Data System (ADS)
Zaba, K.; Dul, I.; Puchlerska, S.
2017-02-01
Superalloys based on nickel and selected steels are widely used in the aerospace industry because of their excellent mechanical properties, heat resistance and creep resistance. Metal sheets of these materials are plastically deformed and applied, inter alia, to critical components of aircraft engines. Because of their chemical composition, these materials are difficult to deform. There are various methods of improving their formability, including plastic deformation at elevated or high temperature and a suitable heat treatment before the forming process. The paper presents results of testing metal sheets after heat treatment. For the research, sheets of two Inconel-type nickel superalloys and of three types of steel were chosen. The materials were subjected to multivariate heat treatment over a range of temperatures and times. After this step, mechanical properties were examined with respect to the sheet rolling direction. The results were compared and the optimal type of softening heat treatment before forming was determined for each material.
SU-E-J-126: An Online Replanning Method for FFF Beams Without Couch Shift
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahunbay, E; Ates, O; Li, X
2015-06-15
Purpose: In situations where a couch shift for patient positioning is not preferred or is prohibited (e.g., MR-Linac), segment aperture morphing (SAM) can address target dislocation and deformation. For IMRT/VMAT with flattening filter free (FFF) beams, however, the SAM method alone would lead to an adverse translational dose effect due to the beam unflattening. Here we propose a new two-step process to address both the translational effect of FFF beams and target deformation. Methods: The replanning method consists of an offline step and an online step. The offline step creates a series of pre-shifted plans (PSPs) obtained by a so-called "warm start" optimization (starting the optimization from the original plan rather than from scratch) at a series of isocenter shifts of fixed distance (e.g., 2 cm, at x,y,z = 2,0,0; 2,2,0; 0,2,0; …; -2,0,0). The PSPs all have the same number of segments with very similar shapes, since the warm-start optimization only adjusts the MLC positions instead of regenerating them. In the online step, a new plan is obtained by linearly interpolating the MLC positions and monitor units of the closest PSPs for the shift determined from the image of the day. This two-step process is completely automated and essentially instantaneous (no optimization or dose calculation needed). The previously developed SAM algorithm is then applied for daily deformation. We tested the method on sample prostate and pancreas cases. Results: The two-step interpolation method can account for the adverse dose effects of FFF beams, while SAM corrects for the target deformation. The whole process takes the same time as the previously reported SAM process (5-10 min). Conclusion: The new two-step method plus SAM can address both the translational effects of FFF beams and target deformation, and can be executed in full automation, requiring no additional time beyond the SAM process. This research was supported by Elekta Inc. (Crawley, UK).
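The online interpolation step can be sketched as below, reduced to a single shift axis; the clinical method interpolates over a 3-D grid of isocenter shifts, and the array shapes and values here are illustrative, not a treatment-planning implementation:

```python
import numpy as np

def interpolate_plan(shift, psp_shifts, psp_leaves, psp_mu):
    """Online-step sketch: linearly interpolate MLC leaf positions and
    monitor units between the two pre-shifted plans (PSPs) bracketing the
    measured isocenter shift. psp_shifts is a sorted 1-D array (cm),
    psp_leaves is (n_psp, n_segments, n_leaves), psp_mu is (n_psp, n_segments)."""
    shifts = np.asarray(psp_shifts, float)
    i = int(np.clip(np.searchsorted(shifts, shift), 1, len(shifts) - 1))
    w = (shift - shifts[i - 1]) / (shifts[i] - shifts[i - 1])
    leaves = (1.0 - w) * psp_leaves[i - 1] + w * psp_leaves[i]
    mu = (1.0 - w) * psp_mu[i - 1] + w * psp_mu[i]
    return leaves, mu

# Three PSPs at -2, 0, +2 cm, each with one segment of two leaves
shifts = [-2.0, 0.0, 2.0]
leaves = np.array([[[-2.0, -2.0]], [[0.0, 0.0]], [[2.0, 2.0]]])
mu = np.array([[100.0], [110.0], [120.0]])
new_leaves, new_mu = interpolate_plan(1.0, shifts, leaves, mu)   # halfway point
```

Because the warm-start PSPs share segment counts and similar shapes, this interpolation is well defined and needs no re-optimization or dose calculation, which is why the online step is essentially instantaneous.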
Cohn, T.A.; Lane, W.L.; Baier, W.G.
1997-01-01
This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.
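The iterative moments update at the core of EMA can be sketched as follows. For brevity this toy version fits a normal distribution rather than log-Pearson type III (so the censored moments are the truncated-normal ones), but the structure — initialize from systematic data, then alternate expected-moment and parameter updates until convergence — mirrors the algorithm described above:

```python
import math

def _phi(z):  # standard normal pdf
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def _Phi(z):  # standard normal cdf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def ema_normal(systematic, historical_peaks, n_below, threshold,
               tol=1e-10, max_iter=200):
    """Expected moments algorithm, illustrated for a normal distribution.
    `n_below` historical years are known only to lie below `threshold`."""
    obs = list(systematic) + list(historical_peaks)
    n = len(obs) + n_below
    # Initial estimates from the observed values alone.
    mu = sum(obs) / len(obs)
    var = sum((x - mu) ** 2 for x in obs) / len(obs)
    for _ in range(max_iter):
        sig = math.sqrt(var)
        b = (threshold - mu) / sig
        lam = _phi(b) / _Phi(b)            # inverse Mills ratio, upper trunc.
        e1 = mu - sig * lam                # E[X | X < threshold]
        e2 = var * (1 - b * lam - lam ** 2) + e1 ** 2  # E[X^2 | X < threshold]
        # Update moments with expected contributions of censored years.
        m1 = (sum(obs) + n_below * e1) / n
        m2 = (sum(x * x for x in obs) + n_below * e2) / n
        mu_new, var_new = m1, m2 - m1 ** 2
        if abs(mu_new - mu) < tol and abs(var_new - var) < tol:
            mu, var = mu_new, var_new
            break
        mu, var = mu_new, var_new
    return mu, math.sqrt(var)
```

Including four censored below-threshold years pulls the estimated mean below the naive mean of the observed peaks alone, which is exactly the information-gain EMA is designed to capture.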
Two types of rate-determining step in chemical and biochemical processes.
Yagisawa, S
1989-01-01
Close examination of the concept of the rate-determining step (RDS) shows that there are two types of RDS, depending on the definition of 'rate'. One is represented by the highest peak of the free-energy diagram of consecutive reactions and holds true where the rate is defined in terms of the concentration of the first reactant. The other is represented by the peak showing the maximum free-energy difference, where the free-energy difference is the height of a peak measured from the bottom of any preceding trough, and holds true where the rate is defined in terms of the total reactant concentration, including intermediates. There are no a priori criteria for selecting between them. PMID:2597141
Using a contextualized sensemaking model for interaction design: A case study of tumor contouring.
Aselmaa, Anet; van Herk, Marcel; Laprie, Anne; Nestle, Ursula; Götz, Irina; Wiedenmann, Nicole; Schimek-Jasch, Tanja; Picaud, Francois; Syrykh, Charlotte; Cagetti, Leonel V; Jolnerovski, Maria; Song, Yu; Goossens, Richard H M
2017-01-01
Sensemaking theories help designers understand the cognitive processes of a user performing a complicated task. This paper introduces a two-step approach to incorporating sensemaking support into the design of health information systems by: (1) modeling the sensemaking process of physicians while performing a task, and (2) identifying software interaction design requirements that support sensemaking based on this model. The two-step approach is presented through a case study of the tumor contouring clinical task for radiotherapy planning. In the first step of the approach, a contextualized sensemaking model was developed to describe the sensemaking process based on the goal, the workflow and the context of the task. In the second step, based on a research software prototype, an experiment was conducted in which eight physicians each performed three contouring tasks. Four types of navigation interactions and five types of interaction sequence patterns were identified by analyzing the interaction log data gathered from those twenty-four cases. Further in-depth study of each of the navigation interactions and interaction sequence patterns in relation to the contextualized sensemaking model revealed five main areas of design improvement to increase sensemaking support. Outcomes of the case study indicate that the proposed two-step approach was beneficial for gaining a deeper understanding of the sensemaking process during the task, as well as for identifying design requirements for better sensemaking support. Copyright © 2016. Published by Elsevier Inc.
Optimized spray drying process for preparation of one-step calcium-alginate gel microspheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popeski-Dimovski, Riste
Calcium-alginate microparticles have been used extensively in drug delivery systems. We therefore establish a one-step method for the preparation of internally gelated microparticles with spherical shape and narrow size distribution. We use four types of alginate with different G/M ratios and molar weights. The size of the particles is measured using light diffraction and scanning electron microscopy. Measurements showed that with this method, microparticles with a size distribution around 4 micrometers can be prepared, and SEM imaging showed that the particles are spherical in shape.
Improving TCO-Conjugated Antibody Reactivity for Bioorthogonal Pretargeting
NASA Astrophysics Data System (ADS)
Chu, Tina Tingyi
Cancer remains a major cause of death because of its unpredictable progression. Utilizing bioorthogonal chemistry between trans-cyclooctene (TCO) and tetrazine to target imaging agents to tumors in two subsequent steps offers a more versatile platform for molecular imaging. This is accomplished by pretargeting a TCO-modified primary antibody to cell surface biomarkers, followed by delivery of tetrazine-modified imaging probes. Previous work established that TCO-tetrazine chemistry can be applied to in vivo imaging, resulting in precise tumor detection. However, most TCO modifications on an antibody are not reactive because they are buried within hydrophobic domains. To expose them and improve reactivity, Rahim et al. incorporated a polyethylene glycol (PEG) linker through a two-step reaction with DBCO-azide, which successfully maintained 100% TCO functionality. In this project, various types of linkers were studied to improve the reactivity in a single step. Three primary types of linkers were studied: hydrophilic PEG chains, hydrophobic short linkers, and amphiphilic linkers. Our results show that a PEG chain alone can only maintain 40% TCO reactivity. Unexpectedly, a short alkyl chain (valeric acid) provided superior results, with 60% TCO reactivity. Lengthening the alkyl chain did not improve results further. Finally, an amphiphilic linker containing valeric acid and PEG performed worse than either linker type alone, at ~30% functionality. We conclude that our previous 100% functional TCO result obtained with the two-step coupling may have stemmed from generation of the DBCO/azide cycloaddition product. Future work will explore factors such as rigidity of linker structure, polarity, or charges.
A volume-of-fluid method for simulation of compressible axisymmetric multi-material flow
NASA Astrophysics Data System (ADS)
de Niem, D.; Kührt, E.; Motschmann, U.
2007-02-01
A two-dimensional Eulerian hydrodynamic method for the numerical simulation of inviscid compressible axisymmetric multi-material flow in external force fields, for the situation of pure fluids separated by macroscopic interfaces, is presented. The method combines an implicit Lagrangian step with an explicit Eulerian advection step. Individual materials obey separate energy equations, fulfill general equations of state, and may possess different temperatures. Material volume is tracked using a piecewise linear volume-of-fluid method. An overshoot-free, logically simple, and economical material advection algorithm for cylindrical coordinates is derived in an algebraic formulation. New aspects arising in the case of more than two materials, such as the material ordering strategy during transport, are presented. One- and two-dimensional numerical examples are given.
The Coast Artillery Journal. Volume 66, Number 2, February 1927
1927-02-01
bicycle gear and chain, step-up 1 to 5, connecting the disk with the elevating drum. As a result one turn of the range disk covers the outer two...our seacoast cities, navy yards, and harbors are reasonably protected against bombardment, because we would otherwise be forced to chain down our...type and the sliding sleeve type. Spare parts, such as the rear bands, etc., are continually being supplied for these two types and it would be
Boehm, A.B.; Griffith, J.; McGee, C.; Edge, T.A.; Solo-Gabriele, H. M.; Whitman, R.; Cao, Y.; Getrich, M.; Jay, J.A.; Ferguson, D.; Goodwin, K.D.; Lee, C.M.; Madison, M.; Weisberg, S.B.
2009-01-01
Aims: The absence of standardized methods for quantifying faecal indicator bacteria (FIB) in sand hinders comparison of results across studies. The purpose of the study was to compare methods for extraction of faecal bacteria from sands and recommend a standardized extraction technique. Methods and Results: Twenty-two methods of extracting enterococci and Escherichia coli from sand were evaluated, including multiple permutations of hand shaking, mechanical shaking, blending, sonication, number of rinses, settling time, eluant-to-sand ratio, eluant composition, prefiltration and type of decantation. Tests were performed on sands from California, Florida and Lake Michigan. Most extraction parameters did not significantly affect bacterial enumeration. ANOVA revealed significant effects of eluant composition and blending, with both sodium metaphosphate buffer and blending producing reduced counts. Conclusions: The simplest extraction method that produced the highest FIB recoveries consisted of 2 min of hand shaking in phosphate-buffered saline or deionized water, a 30-s settling time, one rinse step and a 10:1 eluant volume to sand weight ratio. This result was consistent across the sand compositions tested in this study but could vary for other sand types. Significance and Impact of the Study: Method standardization will improve the understanding of how sands affect surface water quality. © 2009 The Society for Applied Microbiology.
Feitosa, V P; Gotti, V B; Grohmann, C V; Abuná, G; Correr-Sobrinho, L; Sinhoreti, M A C; Correr, A B
2014-09-01
To evaluate the effects of two methods of simulating physiological pulpal pressure on the dentine bonding performance of two all-in-one adhesives and a two-step self-etch silorane-based adhesive, by means of microtensile bond strength (μTBS) and nanoleakage surveys. The self-etch adhesives [G-Bond Plus (GB), Adper Easy Bond (EB) and silorane adhesive (SIL)] were applied to flat deep dentine surfaces from extracted human molars. The restorations were constructed using resin composites Filtek Silorane or Filtek Z350 (3M ESPE). After 24 h under the two methods of simulated pulpal pressure or no pulpal pressure (control groups), the bonded teeth were cut into specimens and submitted to μTBS testing and silver uptake examination. Results were analysed with two-way ANOVA and Tukey's test (P < 0.05). Both methods of simulated pulpal pressure yielded statistically similar μTBS values for all adhesives. No difference between control and pulpal pressure groups was found for SIL and GB. EB showed a significant drop (P = 0.002) in bond strength under pulpal pressure. Silver impregnation increased after both methods of simulated pulpal pressure for all adhesives, and it was similar between the two simulated pulpal pressure methods. The innovative method of simulating pulpal pressure behaved similarly to the classic one and could be used as an alternative. The HEMA-free one-step and the two-step self-etch adhesives had acceptable resistance against pulpal pressure, unlike the HEMA-rich adhesive. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Ren, Yan; Yang, Min; Li, Qian; Pan, Jay; Chen, Fei; Li, Xiaosong; Meng, Qun
2017-02-22
To introduce multilevel repeated measures (RM) models and compare them with multilevel difference-in-differences (DID) models in assessing the linear relationship between the length of the policy intervention period and healthcare outcomes (dose-response effect) for data from a stepped-wedge design with a hierarchical structure. The implementation of the national essential medicine policy (NEMP) in China followed a stepped-wedge-like design with five time points and a hierarchical structure. Using one key healthcare outcome from the national NEMP surveillance data as an example, we illustrate how a series of multilevel DID models and one multilevel RM model can be fitted to answer some research questions on policy effects. Routinely and annually collected national data on China from 2008 to 2012. 34 506 primary healthcare facilities in 2675 counties of 31 provinces. Agreement and differences in estimates of dose-response effect and variation in such effect between the two methods on the logarithm-transformed total number of outpatient visits per facility per year (LG-OPV). The estimated dose-response effect was approximately 0.015 according to four multilevel DID models and precisely 0.012 from one multilevel RM model. Both types of model estimated an increase in LG-OPV by 2.55 times from 2009 to 2012, but 2-4.3 times larger SEs of those estimates were found with the multilevel DID models. Similar estimates of mean effects of covariates and random effects of the average LG-OPV among all levels in the example dataset were obtained by both types of model. Significant variances in the dose-response among provinces, counties and facilities were estimated, and the 'lowest' or 'highest' units by their dose-response effects were pinpointed only by the multilevel RM model.
For examining dose-response effects based on data from multiple time points with a hierarchical structure and stepped-wedge-like designs, multilevel RM models are more efficient, convenient and informative than multilevel DID models. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spataru, Sergiu; Hacke, Peter; Sera, Dezso
A method for detecting micro-cracks in solar cells using two-dimensional matched filters was developed, derived from the electroluminescence intensity profile of typical micro-cracks. We describe the image processing steps to obtain a binary map with the locations of the micro-cracks. Finally, we show how to automatically estimate the total length of each micro-crack from these maps, and propose a method to identify severe types of micro-cracks, such as parallel, dendritic, and cracks with multiple orientations. With an optimized threshold parameter, the technique detects over 90% of cracks larger than 3 cm in length. The method shows great potential for quantifying micro-crack damage after manufacturing or module transportation for the determination of a module quality criterion for cell cracking in photovoltaic modules.
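The matched-filter and thresholding steps can be illustrated with a toy example. The kernel below is a generic dark-line template and the threshold is arbitrary; the actual method derives its kernels from measured electroluminescence intensity profiles:

```python
def correlate_valid(img, ker):
    """2-D 'valid'-mode cross-correlation, pure Python for illustration."""
    kh, kw = len(ker), len(ker[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            s = sum(img[i + u][j + v] * ker[u][v]
                    for u in range(kh) for v in range(kw))
            row.append(s)
        out.append(row)
    return out

def crack_map(image, kernel, thresh):
    """Binary map of pixels whose matched-filter response exceeds `thresh`.
    A zero-mean kernel gives zero response on uniform regions, so only
    line-like features (dark cracks in bright EL images) survive."""
    resp = correlate_valid(image, kernel)
    return [[1 if v > thresh else 0 for v in row] for row in resp]
```

From the binary map, crack length can then be estimated by counting flagged pixels along each connected component, as the abstract describes.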
NASA Astrophysics Data System (ADS)
Muravsky, Leonid I.; Kmet', Arkady B.; Stasyshyn, Ihor V.; Voronyak, Taras I.; Bobitski, Yaroslav V.
2018-06-01
A new three-step interferometric method with blind phase shifts to retrieve phase maps (PMs) of smooth and low-roughness engineering surfaces is proposed. The two unknown phase shifts are evaluated using the interframe correlation between interferograms. The method consists of two stages. The first stage records three interferograms of a test object and processes them, including calculation of the unknown phase shifts and retrieval of a coarse PM. The second stage first separates the high-frequency and low-frequency PMs and then produces a fine PM consisting of areal surface roughness and waviness PMs. The areal surface roughness and waviness PMs are extracted using a linear low-pass filter. Computer simulation and experiments retrieving a gauge block surface area and its areal surface roughness and waviness have confirmed the reliability of the proposed three-step method.
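The first-stage phase retrieval can be sketched per pixel. The method above estimates the blind shifts from interframe correlation; the sketch below assumes the two shifts d2, d3 have already been estimated and solves the three-frame system I_k = A + B*cos(phi + delta_k) directly:

```python
import math

def retrieve_phase(i1, i2, i3, d2, d3):
    """Recover the wrapped phase at one pixel from three interferogram
    intensities I_k = A + B*cos(phi + delta_k), with delta_1 = 0 and
    (blind-estimated) shifts d2, d3.  Writing c = B*cos(phi) and
    s = B*sin(phi) makes the system linear in (A, c, s); it is solved
    here by Cramer's rule."""
    rows = [(1.0, 1.0, 0.0, i1),
            (1.0, math.cos(d2), -math.sin(d2), i2),
            (1.0, math.cos(d3), -math.sin(d3), i3)]
    a = [r[:3] for r in rows]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(a)
    def repl(col):  # matrix `a` with column `col` replaced by the RHS
        return [[rows[i][3] if j == col else a[i][j] for j in range(3)]
                for i in range(3)]
    c = det3(repl(1)) / d
    s = det3(repl(2)) / d
    return math.atan2(s, c)   # wrapped phase in (-pi, pi]
```

Applying this at every pixel yields the coarse PM; the second stage then splits it into roughness and waviness components with a low-pass filter.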
Learn, R; Feigenbaum, E
2016-06-01
Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.
Multi-Spatiotemporal Patterns of Residential Burglary Crimes in Chicago: 2006-2016
NASA Astrophysics Data System (ADS)
Luo, J.
2017-10-01
This research explores the patterns of burglary crimes at multiple spatiotemporal scales in Chicago between 2006 and 2016. Two spatial scales are investigated: census block and police beat area. At each spatial scale, three temporal scales are integrated to make spatiotemporal slices: an hourly scale with a two-hour time step from 12:00 am to the end of the day; a daily scale with a one-day step from Sunday to Saturday within a week; and a monthly scale with a one-month step from January to December. A total of six types of spatiotemporal slices are created as the basis for the analysis. Burglary crimes are spatiotemporally aggregated to the slices based on where and when they occurred. For each type of spatiotemporal slice with burglary occurrences integrated, a spatiotemporal neighborhood is defined and managed in a spatiotemporal matrix. Hot-spot analysis identifies spatiotemporal clusters for each type of slice. Spatiotemporal trend analysis is conducted to indicate how the clusters shift in space and time. The analysis results will provide helpful information for better-targeted policing and crime prevention policy, such as police patrol scheduling regarding times and places covered.
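The slicing-and-aggregation step can be sketched as follows. Zone IDs stand in for census blocks or beat areas, and only the two-hour slicing is shown; day-of-week and month slices work the same way with a different binning key:

```python
from collections import Counter

def aggregate(events, time_step_h=2):
    """Aggregate point events into spatiotemporal slices: each event
    (zone_id, hour_of_day) is binned by zone and `time_step_h`-hour slot.
    The resulting counts per (zone, slot) cell are the input to
    neighborhood construction and hot-spot analysis."""
    counts = Counter()
    for zone, hour in events:
        counts[(zone, int(hour) // time_step_h)] += 1
    return counts
```

With 12 two-hour slots per zone, the counts form the spatiotemporal matrix on which neighborhoods and hot-spot statistics are then defined.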
Applications of singular value analysis and partial-step algorithm for nonlinear orbit determination
NASA Technical Reports Server (NTRS)
Ryne, Mark S.; Wang, Tseng-Chan
1991-01-01
An adaptive method in which cruise and nonlinear orbit determination problems can be solved using a single program is presented. It involves singular value decomposition augmented with an extended partial step algorithm. The extended partial step algorithm constrains the size of the correction to the spacecraft state and other solve-for parameters. The correction is controlled by an a priori covariance and a user-supplied bounds parameter. The extended partial step method is an extension of the update portion of the singular value decomposition algorithm. It thus preserves the numerical stability of the singular value decomposition method, while extending the region over which it converges. In linear cases, this method reduces to the singular value decomposition algorithm with the full rank solution. Two examples are presented to illustrate the method's utility.
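The bounding idea of the partial step can be sketched in isolation. In the flight software the clamp is applied inside an SVD-based update; the sketch below shows only the correction-limiting logic, with `kappa` playing the role of the user-supplied bounds parameter and `sigma` the a priori standard deviations:

```python
def partial_step(dx, sigma, kappa):
    """Clamp a state correction `dx` so no component exceeds `kappa`
    times its a priori standard deviation.  The whole vector is scaled
    by one factor, preserving the direction of the computed correction;
    in the linear (in-bounds) case the step is returned unchanged,
    matching the full-rank solution."""
    scale = 1.0
    for d, s in zip(dx, sigma):
        limit = kappa * s
        if abs(d) > limit:
            scale = min(scale, limit / abs(d))
    return [scale * d for d in dx]
```

Shrinking rather than truncating component-wise keeps the update along the least-squares direction, which is what lets the constrained iteration converge over a wider region while reducing to the unconstrained solution in linear cases.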
Reconstruction of local perturbations in periodic surfaces
NASA Astrophysics Data System (ADS)
Lechleiter, Armin; Zhang, Ruming
2018-03-01
This paper concerns the inverse scattering problem of reconstructing a local perturbation in a periodic structure. Unlike purely periodic problems, the periodicity of the scattered field no longer holds, so classical methods, which reduce quasi-periodic fields to one periodic cell, are no longer available. Based on the Floquet-Bloch transform, a numerical method has been developed to solve the direct problem, which opens the possibility of designing an algorithm for the inverse problem. The numerical method introduced in this paper contains two steps. The first step is initialization: the support of the perturbation is located by a simple method, which reduces the inverse problem from an infinite domain to one periodic cell. The second step applies the Newton-CG method to solve the associated optimization problem, with the perturbation approximated by a finite spline basis. Numerical examples given at the end of the paper show the efficiency of the method.
Geometries for roughness shapes in laminar flow
NASA Technical Reports Server (NTRS)
Holmes, Bruce J. (Inventor); Martin, Glenn L. (Inventor); Domack, Christopher S. (Inventor); Obara, Clifford J. (Inventor); Hassan, Ahmed A. (Inventor)
1986-01-01
A passive interface mechanism between the upper and lower skin structures and the leading edge structure of a laminar flow airfoil is described. The interface mechanism takes many shapes, all designed to differ from the sharp orthogonal arrangement prevalent in the prior art. The interface structures are generally of two types: one steps away from the centerline of the airfoil with a sloping surface directed toward the trailing edge; the other has a gap before the sloping surface. By properly shaping the step, the critical step height is increased by more than 50% over the orthogonal edged step.
Method to produce alumina aerogels having porosities greater than 80 percent
Poco, John F.; Hrubesh, Lawrence W.
2003-09-16
A two-step method for producing monolithic alumina aerogels having porosities of greater than 80 percent. Very strong, very low density alumina aerogel monoliths are prepared using the two-step sol-gel process. The method of preparing pure alumina aerogel modifies the prior known sol method by combining the use of substoichiometric water for hydrolysis, the use of acetic acid to control hydrolysis/condensation, and high temperature supercritical drying, all of which contribute to the formation of a polycrystalline aerogel microstructure. This structure provides exceptional mechanical properties of the alumina aerogel, as well as enhanced thermal resistance and high temperature stability.
[Design method of convex master gratings for replicating flat-field concave gratings].
Zhou, Qian; Li, Li-Feng
2009-08-01
The flat-field concave diffraction grating is the key device of a portable grating spectrometer, with the advantage of integrating dispersion, focusing and flat-field imaging in a single device. It directly determines the quality of the spectrometer. The two most important performance measures determining that quality are spectral image quality and diffraction efficiency. The diffraction efficiency of a grating depends mainly on its groove shape. But it has long been a problem to obtain a uniform predetermined groove shape across the whole concave grating area, because the incident angle of the ion beam is restricted by the curvature of the concave substrate; this severely limits the diffraction efficiency and restricts the application of concave gratings. The authors present a two-step method for designing convex gratings, which are made holographically with two exposure point sources placed behind a plano-convex transparent glass substrate, to solve this problem. The convex gratings are intended to be used as the master gratings for making aberration-corrected flat-field concave gratings. To achieve high spectral image quality for the replicated concave gratings, the refraction effect at the planar back surface and the extra optical path lengths through the substrate thickness experienced by the two divergent recording beams are considered during optimization. This two-step method combines the optical-path-length function method and the ZEMAX software to complete the optimization with a high success rate and high efficiency. In the first step, the optical-path-length function method is used, without considering the refraction effect, to obtain an approximate optimization result. In the second step, the approximate result of the first step is used as the initial value for ZEMAX to complete the optimization including the refraction effect. An example design problem is considered. ZEMAX simulation results show that the spectral image quality of a replicated concave grating is comparable with that of a directly recorded concave grating.
Liu, Chengyu; Zhao, Lina; Tang, Hong; Li, Qiao; Wei, Shoushui; Li, Jianqing
2016-08-01
False alarm (FA) rates as high as 86% have been reported in intensive care unit monitors. High FA rates decrease quality of care by slowing staff response times while increasing patient burden and stress. In this study, we proposed a rule-based, multi-channel information fusion method for accurately classifying true and false alarms for five life-threatening arrhythmias: asystole (ASY), extreme bradycardia (EBR), extreme tachycardia (ETC), ventricular tachycardia (VTA) and ventricular flutter/fibrillation (VFB). The proposed method consisted of five steps: (1) signal pre-processing, (2) feature detection and validation, (3) true/false alarm determination for each channel, (4) 'real-time' true/false alarm determination and (5) 'retrospective' true/false alarm determination (if needed). Up to four signal channels, that is, two electrocardiogram signals, one arterial blood pressure and/or one photoplethysmogram signal, were included in the analysis. Two events were set for the method validation: event 1 for 'real-time' and event 2 for 'retrospective' alarm classification. On the training set, a 100% true positive ratio (i.e. sensitivity) was obtained for the ASY, EBR, ETC and VFB types, and 94% for the VTA type, accompanied by corresponding true negative ratios (i.e. specificity) of 93%, 81%, 78%, 85% and 50% respectively, resulting in score values of 96.50, 90.70, 88.89, 92.31 and 64.90, and a final score of 80.57 for event 1 and 79.12 for event 2. For the test set, the proposed method obtained scores of 88.73 for ASY, 77.78 for EBR, 89.92 for ETC, 67.74 for VFB and 61.04 for VTA, with a final score of 71.68 for event 1 and 75.91 for event 2.
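Step (3), per-channel determination followed by fusion, can be illustrated for the asystole alarm. The quality gating and the four-second window below are illustrative stand-ins for the paper's actual rules, not its published thresholds:

```python
def classify_asystole_alarm(channels, window_s=4.0):
    """Toy multi-channel fusion for an asystole (ASY) alarm: the alarm
    is declared FALSE if any channel with acceptable signal quality
    shows beat-to-beat gaps all shorter than `window_s` seconds in the
    alarm window.  `channels` maps a channel name (ECG, ABP, PPG...) to
    (quality_ok, beat_times) -- a hypothetical structure."""
    for name, (quality_ok, beats) in channels.items():
        if not quality_ok or len(beats) < 2:
            continue                 # unusable channel: cannot veto the alarm
        gaps = [b - a for a, b in zip(beats, beats[1:])]
        if max(gaps) < window_s:
            return False             # rhythm present on a trusted channel
    return True                      # no valid channel contradicts the alarm
```

The fusion logic is deliberately asymmetric: a single trusted channel showing a rhythm suppresses the alarm, while noisy channels are simply ignored, which is how multi-channel redundancy cuts the false alarm rate without missing true events.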
Villamizar-Rodríguez, Germán; Fernández, Javier; Marín, Laura; Muñiz, Juan; González, Isabel; Lombó, Felipe
2015-01-01
Routine microbiological quality analyses of food samples require, in some cases, an initial incubation in pre-enrichment medium. This is necessary to ensure that small amounts of pathogenic strains are detected. In this work, a universal pre-enrichment medium has been developed for the simultaneous growth of Bacillus cereus, Campylobacter jejuni, Clostridium perfringens, Cronobacter sakazakii, Escherichia coli, the Enterobacteriaceae family (38 species, 27 genera), Listeria monocytogenes, Staphylococcus aureus and Salmonella spp. (two species, 13 strains). Growth confirmation for all these species was achieved in all cases, with excellent enrichments. This was confirmed by plating on the corresponding selective agar media for each bacterium. This GVUM universal pre-enrichment medium could be useful in food microbiological analyses where different pathogenic bacteria must be detected after a pre-enrichment step. Subsequently, an mPCR reaction for detection of all these pathogens was developed, after designing a set of nine oligonucleotide pairs from specific genetic targets on gDNA from each of these bacteria, covering all available strains already sequenced in GenBank for each pathogen type. The detection limit was 1 genome equivalent (GE), with the exception of the Enterobacteriaceae family (5 GEs). We obtained amplification for all targets (from 70 to 251 bp, depending on the bacterium), showing the capability of this method to detect the most important industrial and sanitary food-borne pathogens from a universal pre-enrichment medium. This method includes an initial pre-enrichment step (18 h), followed by mPCR (2 h) and capillary electrophoresis (30 min), avoiding the tedious and lengthy growth on solid media required by traditional analysis (1-4 days, depending on the specific pathogen and verification procedure). An external testing of this method was conducted in order to compare the classical and mPCR methods. 
This evaluation was carried out on five types of food matrices (meat, dairy products, prepared foods, canned fish, and pastry products), which were artificially contaminated with each one of the microorganisms, demonstrating the equivalence between both methods (coincidence percentages between both methods ranged from 78 to 92%).
Villamizar-Rodríguez, Germán; Fernández, Javier; Marín, Laura; Muñiz, Juan; González, Isabel; Lombó, Felipe
2015-01-01
Routine microbiological quality analyses in food samples require, in some cases, an initial incubation in pre-enrichment medium. This is necessary in order to ensure that small amounts of pathogenic strains are going to be detected. In this work, a universal pre-enrichment medium has been developed for the simultaneous growth of Bacillus cereus, Campylobacter jejuni, Clostridium perfringens, Cronobacter sakazakii, Escherichia coli, Enterobacteriaceae family (38 species, 27 genera), Listeria monocytogenes, Staphylococcus aureus, Salmonella spp. (two species, 13 strains). Growth confirmation for all these species was achieved in all cases, with excellent enrichments. This was confirmed by plating on the corresponding selective agar media for each bacterium. This GVUM universal pre-enrichment medium could be useful in food microbiological analyses, where different pathogenic bacteria must be detected after a pre-enrichment step. Following, a mPCR reaction for detection of all these pathogens was developed, after designing a set of nine oligonucleotide pairs from specific genetic targets on gDNA from each of these bacteria, covering all available strains already sequenced in GenBank for each pathogen type. The detection limits have been 1 Genome Equivalent (GE), with the exception of the Fam. Enterobacteriaceae (5 GEs). We obtained amplification for all targets (from 70 to 251 bp, depending on the bacteria type), showing the capability of this method to detect the most important industrial and sanitary food-borne pathogens from a universal pre-enrichment medium. This method includes an initial pre-enrichment step (18 h), followed by a mPCR (2 h) and a capillary electrophoresis (30 min); avoiding the tedious and long lasting growing on solid media required in traditional analysis (1–4 days, depending on the specific pathogen and verification procedure). An external testing of this method was conducted in order to compare classical and mPCR methods. 
This evaluation was carried out on five types of food matrices (meat, dairy products, prepared foods, canned fish, and pastry products) that were artificially contaminated with each of the microorganisms, demonstrating the equivalence of the two methods (coincidence percentages ranged from 78% to 92%). PMID:26579100
Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum
Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi
2016-01-01
During the night or in poorly lit areas, thermal cameras are a better choice than normal cameras for security surveillance because they do not rely on illumination. A thermal camera can detect a person within its view, but identification from thermal information alone is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is performed by exploiting the relationship between images in the thermal infrared and visible spectra obtained by CCA. The first step processes the whole image, while the second step processes patches within the image. Results show that the proposed two-step method gives satisfactory results and outperforms comparative methods in both quality and recognition evaluations. PMID:27110781
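The abstract above describes, but does not detail, a CCA-based thermal-to-visible mapping. The following is a minimal numpy sketch of that general idea, not the authors' two-step, patch-based pipeline: a regularized CCA is fit on paired thermal/visible feature vectors (all data, dimensions, and the regularization constant are hypothetical), and a thermal vector is projected into the shared canonical space and mapped back out to the visible space.

```python
import numpy as np

def inv_sqrt(S):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def fit_cca(X, Y, k, reg=1e-6):
    """Fit regularized CCA on paired samples (rows of X and Y).

    Returns projection matrices Wx, Wy into a shared k-dimensional
    canonical space, plus the feature means mx, my."""
    n = X.shape[0]
    mx, my = X.mean(0), Y.mean(0)
    Xc, Yc = X - mx, Y - my
    Sxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n
    # SVD of the whitened cross-covariance gives the canonical directions.
    U, s, Vt = np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy))
    Wx = inv_sqrt(Sxx) @ U[:, :k]      # thermal features -> canonical space
    Wy = inv_sqrt(Syy) @ Vt.T[:, :k]   # visible features -> canonical space
    return Wx, Wy, mx, my

def reconstruct(x_thermal, Wx, Wy, mx, my):
    """Project thermal features into canonical space, then map back to
    the visible space via the pseudo-inverse of Wy."""
    z = (x_thermal - mx) @ Wx
    return z @ np.linalg.pinv(Wy) + my
```

In practice the paper applies this kind of mapping first to whole face images and then to local patches; the sketch shows only the single-step vector-to-vector case.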
Ríos, Sergio D; Castañeda, Joandiet; Torras, Carles; Farriol, Xavier; Salvadó, Joan
2013-04-01
Microalgae grow rapidly and capture CO2 from the atmosphere, converting it into complex organic molecules such as lipids (biodiesel feedstock). Economically feasible, large-scale production of microalgae-based oil depends on optimizing the entire production process, which can be divided into three very different but directly related steps: production, concentration, and lipid extraction/transesterification. The aim of this study is to identify the best method of lipid extraction in order to assess the potential of microalgal biomass obtained by two different harvesting paths. The first path used only physical concentration steps; the second combined chemical and physical concentration steps. Three microalgae species were tested: Phaeodactylum tricornutum, Nannochloropsis gaditana, and Chaetoceros calcitrans. One-step lipid extraction-transesterification reached the same fatty acid methyl ester yield as the Bligh and Dyer method and Soxhlet extraction with n-hexane, with corresponding savings in time, cost, and solvent. Copyright © 2013 Elsevier Ltd. All rights reserved.
Validity and reliability of the Fitbit Zip as a measure of preschool children’s step count
Sharp, Catherine A; Mackintosh, Kelly A; Erjavec, Mihela; Pascoe, Duncan M; Horne, Pauline J
2017-01-01
Objectives Validation of physical activity measurement tools is essential to determine the relationship between physical activity and health in preschool children, but research to date has not focused on this priority. The aims of this study were to ascertain inter-rater reliability of observer step count, and interdevice reliability and validity of Fitbit Zip accelerometer step counts in preschool children. Methods Fifty-six children aged 3–4 years (29 girls) recruited from 10 nurseries in North Wales, UK, wore two Fitbit Zip accelerometers while performing a timed walking task in their childcare settings. Accelerometers were worn in secure pockets inside a custom-made tabard. Video recordings enabled two observers to independently code the number of steps performed in 3 min by each child during the walking task. Intraclass correlations (ICCs), concordance correlation coefficients, Bland-Altman plots and absolute per cent error were calculated to assess the reliability and validity of the consumer-grade device. Results An excellent ICC was found between the two observer codings (ICC=1.00) and the two Fitbit Zips (ICC=0.91). Concordance between the Fitbit Zips and observer counts was also high (r=0.77), with an acceptable absolute per cent error (6%–7%). Bland-Altman analyses identified a bias for Fitbit 1 of 22.8±19.1 steps with limits of agreement between −14.7 and 60.2 steps, and a bias for Fitbit 2 of 25.2±23.2 steps with limits of agreement between −20.2 and 70.5 steps. Conclusions Fitbit Zip accelerometers are a reliable and valid method of recording preschool children’s step count in a childcare setting. PMID:29081984
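The agreement statistics reported above (Bland-Altman bias with 95% limits of agreement, and absolute per cent error) are straightforward to compute. A minimal Python sketch using made-up step counts, not the study's data:

```python
import numpy as np

def bland_altman(device, criterion):
    """Bland-Altman agreement: bias (mean difference) and 95% limits
    of agreement (bias +/- 1.96 * SD of the differences)."""
    diffs = np.asarray(device, dtype=float) - np.asarray(criterion, dtype=float)
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def absolute_percent_error(device, criterion):
    """Mean absolute per cent error of the device against the criterion."""
    device = np.asarray(device, dtype=float)
    criterion = np.asarray(criterion, dtype=float)
    return float(np.mean(np.abs(device - criterion) / criterion) * 100)

# Hypothetical counts: observer-coded criterion vs. one Fitbit Zip.
observer = [250, 260, 240, 255, 245]
fitbit = [270, 285, 255, 280, 265]
bias, lower, upper = bland_altman(fitbit, observer)  # bias = 21.0 steps
```

Like the study's results, a positive bias here means the device over-counts relative to the observer-coded criterion.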
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2004-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed for use in numerical combustion codes, such as the National Combustor Code (NCC), or even in simple FORTRAN codes being developed at Glenn. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two); the switch between the two is based on a water concentration of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-step method is used, as opposed to the one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at initial times, for smaller water concentrations; it gives the average chemical kinetic time as a function of initial overall fuel-air ratio, initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, used at higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentrations, pressure, and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates were used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide, and NOx were obtained for Jet-A fuel and methane, with and without water injection, up to water mass loadings of 2/1 water to fuel.
A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water to fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), equivalence ratio, water to fuel mass ratio, and pressure.
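The two-step selection logic described above can be sketched as follows. The functional forms and coefficients below are placeholders (the actual NASA correlations are not given in the abstract); only the switching on the 1x10(exp -20) moles/cc water-concentration threshold follows the text.

```python
# Threshold (mol/cc) at which the method switches from the time-averaged
# correlation (step one) to the instantaneous correlation (step two).
WATER_SWITCH = 1e-20

def tau_step1(fuel_air_ratio, water_fuel_ratio, T, P):
    """Step 1: time-averaged chemical kinetic time (hypothetical form),
    a function of initial fuel-air ratio, initial water-to-fuel mass
    ratio, temperature, and pressure."""
    return 1e-4 * (1.0 + water_fuel_ratio) / (fuel_air_ratio * P) * (2000.0 / T)

def tau_step2(fuel_conc, water_conc, T, P):
    """Step 2: instantaneous chemical kinetic time (hypothetical form),
    a function of instantaneous fuel and water mole concentrations,
    pressure, and temperature."""
    return 1e-4 / (fuel_conc * P) * (2000.0 / T) * (1.0 + 1e18 * water_conc)

def chemical_kinetic_time(water_conc, **kw):
    """Select the correlation based on the water concentration."""
    if water_conc < WATER_SWITCH:
        return tau_step1(kw["fuel_air_ratio"], kw["water_fuel_ratio"],
                         kw["T"], kw["P"])
    return tau_step2(kw["fuel_conc"], water_conc, kw["T"], kw["P"])
```

In the scheme described, the resulting kinetic time would then be compared against a turbulent mixing time to find the rate-limiting process.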
Systematic review of computational methods for identifying miRNA-mediated RNA-RNA crosstalk.
Li, Yongsheng; Jin, Xiyun; Wang, Zishan; Li, Lili; Chen, Hong; Lin, Xiaoyu; Yi, Song; Zhang, Yunpeng; Xu, Juan
2017-10-25
Posttranscriptional crosstalk and communication between RNAs yield large regulatory competing endogenous RNA (ceRNA) networks via shared microRNAs (miRNAs), as well as miRNA synergistic networks. ceRNA crosstalk represents a novel layer of gene regulation that controls both physiological and pathological processes, such as development and complex diseases. The rapidly expanding catalogue of ceRNA regulation provides the evidence base for predicting ceRNAs in silico. In this article, we first review current progress on RNA-RNA crosstalk in human complex diseases. We then summarize the widely used computational methods for modeling ceRNA-ceRNA interaction networks into five types: two types of global ceRNA regulation prediction methods and three types of context-specific prediction methods, based either on miRNA-messenger RNA regulation alone or on the integration of heterogeneous data. To provide guidance for the computational prediction of ceRNA-ceRNA interactions, we finally performed a comparative study of different combinations of miRNA-target methods, as well as the five types of ceRNA identification methods, using literature-curated ceRNA regulation and gene perturbation data. The results revealed that integrating different miRNA-target prediction methods and context-specific miRNA/gene expression profiles improved performance in identifying ceRNA regulation. Moreover, the different computational methods were complementary in identifying ceRNA regulation and captured different functional parts of similar pathways. We believe that the application of these computational techniques provides valuable functional insights into ceRNA regulation and is a crucial step toward informing subsequent functional validation studies. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
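A common building block of the global ceRNA-prediction methods surveyed here is a hypergeometric test on the number of miRNAs shared by two transcripts. A minimal, stdlib-only sketch (the miRNA counts in the example are hypothetical, not from the review):

```python
from math import comb

def shared_mirna_pvalue(n_total, n_a, n_b, n_shared):
    """P(X >= n_shared) under a hypergeometric null: of n_total expressed
    miRNAs, n_a target transcript A; drawing the n_b miRNAs targeting
    transcript B at random, X is the size of the overlap."""
    denom = comb(n_total, n_b)
    return sum(comb(n_a, k) * comb(n_total - n_a, n_b - k)
               for k in range(n_shared, min(n_a, n_b) + 1)) / denom

# Hypothetical example: 500 expressed miRNAs; transcript A targeted by 30,
# transcript B by 40, with 12 shared. Expected overlap under the null is
# only 30 * 40 / 500 = 2.4, so 12 shared miRNAs is highly significant.
p = shared_mirna_pvalue(500, 30, 40, 12)
```

Methods that build global ceRNA networks typically apply such a test to every transcript pair and keep pairs passing a multiple-testing-corrected threshold; the context-specific methods discussed above additionally require expression correlation.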
Wu, Xiaoguang; Zhao, Xu; Li, Yi; Yang, Tao; Yan, Xiujuan; Wang, Ke
2015-09-01
In situ fabrication of a carbonated hydroxyapatite (CHA) remineralization layer on an enamel slice was accomplished by a novel, biomimetic two-step method. First, a CaCO3 layer was synthesized on the surface of demineralized enamel using an acidic amino acid (aspartic acid or glutamic acid) as a soft template. Second, at the same concentration of the acidic amino acid, rod-like carbonated hydroxyapatite was produced with the CaCO3 layer serving as both a sacrificial template and a reactant. The morphology, crystallinity, and other physicochemical properties of the crystals were characterized using field emission scanning electron microscopy (FESEM), Fourier transform infrared spectrometry (FTIR), X-ray diffraction (XRD), and energy-dispersive X-ray analysis (EDAX). The acidic amino acid promoted the uniform deposition of hydroxyapatite as rod-like crystals via absorption of phosphate and carbonate ions from the reaction solution. Moreover, compared with hydroxyapatite crystals coated on the enamel by a one-step method, the CaCO3 coating synthesized in the first step acted as an active bridge layer and sacrificial template, playing a vital role in orienting the artificial coating layer through the template effect. The results show that the rod-like carbonated hydroxyapatite crystals grow into bundles, similar in size and appearance to the prisms in human enamel, when using the two-step method with either aspartic acid or glutamic acid (20.00 mmol/L). Copyright © 2015 Elsevier B.V. All rights reserved.