Sample records for likelihood model selectors

  1. Generalized model of electromigration with 1:1 (analyte:selector) complexation stoichiometry: part I. Theory.

    PubMed

    Dubský, Pavel; Müllerová, Ludmila; Dvořák, Martin; Gaš, Bohuslav

    2015-03-06

    The model of electromigration of a multivalent weak acidic/basic/amphoteric analyte that undergoes complexation with a mixture of selectors is introduced. The model provides an extension of the series of models starting with the single-selector model without dissociation by Wren and Rowe in 1992, continuing with the monovalent weak analyte/single-selector model by Rawjee, Williams and Vigh in 1993 and that by Lelièvre in 1994, and ending with the multi-selector overall model without dissociation developed by our group in 2008. The new multivalent analyte multi-selector model shows that the effective mobility of the analyte obeys the original Wren and Rowe formula. The overall complexation constant, the mobility of the free analyte and the mobility of the complex can be measured and used in a standard way. The mathematical expressions for the overall parameters are provided. We further demonstrate mathematically that the pH-dependent parameters for weak analytes can be used directly as an input into the multi-selector overall model and, in reverse, the multi-selector overall parameters can serve as an input into the pH-dependent models for the weak analytes. These findings can greatly simplify rational method development in analytical electrophoresis, specifically enantioseparations. Copyright © 2015 Elsevier B.V. All rights reserved.
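
    For reference, the Wren and Rowe expression that the effective mobility is shown to obey is the standard 1:1-binding isotherm (generic notation: free-analyte mobility \mu_A, complex mobility \mu_{AC}, overall complexation constant K, selector concentration [C]):

      \mu_{\mathrm{eff}} = \frac{\mu_A + \mu_{AC}\,K[C]}{1 + K[C]}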

  2. Generalized model of electromigration with 1:1 (analyte:selector) complexation stoichiometry: part II. Application to dual systems and experimental verification.

    PubMed

    Müllerová, Ludmila; Dubský, Pavel; Gaš, Bohuslav

    2015-03-06

    Interactions among analyte forms that undergo simultaneous dissociation/protonation and complexation with multiple selectors take the shape of a highly interconnected multi-equilibrium scheme. This makes it difficult to express the effective mobility of the analyte in these systems, which are often encountered in electrophoretic separations, unless a generalized model is introduced. In the first part of this series, we presented the theory of electromigration of a multivalent weakly acidic/basic/amphoteric analyte undergoing complexation with a mixture of an arbitrary number of selectors. In this work we demonstrate the validity of this concept experimentally. The theory leads to three useful perspectives, each of which is closely related to one originally formulated for simpler systems. If pH, ionic strength (IS) and the selector mixture composition are all kept constant, the system is treated as if only a single analyte form interacted with a single selector. If the pH changes at constant IS and mixture composition, the already well-established models of a weakly acidic/basic analyte interacting with a single selector can be employed. Varying the mixture composition at constant IS and pH leads to a situation where virtually a single analyte form interacts with a mixture of selectors. We show how to switch between the three perspectives in practice and confirm that they can be employed interchangeably according to specific needs, by measurements performed in single- and dual-selector systems at a pH where the analyte is fully dissociated, partly dissociated or fully protonated. A weak monoprotic analyte (R-flurbiprofen) and two selectors (native β-cyclodextrin and the monovalent positively charged 6-monodeoxy-6-monoamino-β-cyclodextrin) serve as the model system. Copyright © 2015 Elsevier B.V. All rights reserved.
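
    In the constant-pH, varying-mixture perspective, the overall parameters take the composition-weighted form introduced in the 2008 multi-selector model (a sketch in generic notation, with \chi_i the molar fraction of selector i in the mixture; see Part I for the exact expressions):

      K^{\mathrm{ov}} = \sum_i \chi_i K_i, \qquad \mu_{AC}^{\mathrm{ov}} = \frac{\sum_i \chi_i K_i\,\mu_{AC,i}}{\sum_i \chi_i K_i}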

  3. Simulation and Experimental Studies on Grain Selection and Structure Design of the Spiral Selector for Casting Single Crystal Ni-Based Superalloy.

    PubMed

    Zhang, Hang; Xu, Qingyan

    2017-10-27

    Grain selection is an important process in single crystal turbine blade manufacturing. The selector structure is a controlling factor of grain selection, as is the directional solidification (DS) process. In this study, the grain selection and structure design of the spiral selector were investigated through experimentation and simulation. A heat transfer model and a 3D microstructure growth model were established for the grain selector based on the cellular automaton-finite difference (CA-FD) method. Consequently, the temperature field, the microstructure and the grain orientation distribution were simulated and further verified. The average error of the temperature result was less than 1.5%. The grain selection mechanisms were further analyzed and validated through simulations. The structural design specifications of the selector were suggested based on the two grain selection effects. The structural parameters of the spiral selector, namely, the spiral tunnel diameter (d_w), the spiral pitch (h_b) and the spiral diameter (h_s), were studied and design criteria for these parameters were proposed. The experimental and simulation results demonstrated that the improved selector could accurately and efficiently produce a single crystal structure.
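
    To illustrate the kind of competitive grain growth a cellular automaton captures, here is a deliberately simplified 2D sketch (not the paper's coupled CA-FD model; the capture rule and all parameters are invented for illustration):

      import numpy as np

      rng = np.random.default_rng(0)

      # Simplified 2D grain-competition CA (illustration only): grains
      # nucleate at the bottom with random misorientations; liquid cells are
      # captured preferentially by well-aligned neighboring grains, so fewer
      # grains survive with height, mimicking grain selection.
      H, W = 60, 40
      grain = np.zeros((H, W), dtype=int)      # 0 = liquid, >0 = grain id
      theta = np.zeros(W + 1)                  # grain id -> misorientation (rad)
      for j in range(W):                       # one nucleus per bottom cell
          grain[0, j] = j + 1
          theta[j + 1] = rng.uniform(0, np.pi / 4)

      for _ in range(4 * H):                   # growth sweeps
          for i, j in np.argwhere(grain == 0):
              nbrs = [grain[i + di, j + dj]
                      for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                      if 0 <= i + di < H and 0 <= j + dj < W
                      and grain[i + di, j + dj] > 0]
              if nbrs:
                  best = max(nbrs, key=lambda g: np.cos(theta[g]))
                  # Better-aligned grains capture liquid cells faster.
                  if rng.random() < np.cos(theta[best]) ** 3:
                      grain[i, j] = best

      top = grain[-1]
      print(f"{np.unique(top[top > 0]).size} of {W} grains reach the top")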

  4. Simulation and Experimental Studies on Grain Selection and Structure Design of the Spiral Selector for Casting Single Crystal Ni-Based Superalloy

    PubMed Central

    Zhang, Hang; Xu, Qingyan

    2017-01-01

    Grain selection is an important process in single crystal turbine blade manufacturing. The selector structure is a controlling factor of grain selection, as is the directional solidification (DS) process. In this study, the grain selection and structure design of the spiral selector were investigated through experimentation and simulation. A heat transfer model and a 3D microstructure growth model were established for the grain selector based on the cellular automaton-finite difference (CA-FD) method. Consequently, the temperature field, the microstructure and the grain orientation distribution were simulated and further verified. The average error of the temperature result was less than 1.5%. The grain selection mechanisms were further analyzed and validated through simulations. The structural design specifications of the selector were suggested based on the two grain selection effects. The structural parameters of the spiral selector, namely, the spiral tunnel diameter (d_w), the spiral pitch (h_b) and the spiral diameter (h_s), were studied and design criteria for these parameters were proposed. The experimental and simulation results demonstrated that the improved selector could accurately and efficiently produce a single crystal structure. PMID:29077067

  5. Multi-physics transient simulation of monolithic niobium dioxide-tantalum dioxide memristor-selector structures

    NASA Astrophysics Data System (ADS)

    Sevic, John F.; Kobayashi, Nobuhiko P.

    2017-10-01

    Self-assembled niobium dioxide (NbO2) thin-film selectors self-aligned to tantalum dioxide (TaO2) memristive memory cells are studied by a multi-physics transient solution of the heat equation coupled to the nonlinear current continuity equation. While a compact model can resolve the quasi-static bulk negative differential resistance (NDR), a self-consistent coupled transport formulation provides a non-equilibrium picture of NbO2-TaO2 selector-memristor operation ab initio. By employing the drift-diffusion transport approximation, a finite element method is used to study the dynamic electrothermal behavior of our experimentally obtained selector-memristor devices, showing that existing conditions are suitable for electroformation of NbO2 selector thin-films. Both transient and steady-state simulations support our theory, suggesting that the phase change due to insulator-metal transition is responsible for NbO2 selector NDR in our as-fabricated selector-memristor devices. Simulation results further suggest that TiN nano-via may play a central role in electroforming, as its dimensions and material properties establish the mutual electrothermal interaction between TiN nano-via and the selector-memristor.
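
    In generic form, the coupled formulation referred to here is the standard electrothermal system (a sketch; the authors' material models and boundary conditions are not reproduced):

      \nabla\cdot\big(\sigma(T)\,\nabla V\big) = 0, \qquad
      \rho\,c_p\,\frac{\partial T}{\partial t} = \nabla\cdot\big(\kappa\,\nabla T\big) + \sigma(T)\,\lvert\nabla V\rvert^2

    with the insulator-metal transition entering through the strongly temperature-dependent NbO2 conductivity \sigma(T).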

  6. Super Nonlinear Electrodeposition-Diffusion-Controlled Thin-Film Selector.

    PubMed

    Ji, Xinglong; Song, Li; He, Wei; Huang, Kejie; Yan, Zhiyuan; Zhong, Shuai; Zhang, Yishu; Zhao, Rong

    2018-03-28

    Selector elements with high nonlinearity are an indispensable part of constructing high-density, large-scale, 3D-stackable emerging nonvolatile memory and neuromorphic networks. Although significant efforts have been devoted to developing novel thin-film selectors, it remains a great challenge to achieve switching performance that satisfies the stringent electrical criteria of diverse memory elements. In this work, we utilized a high-defect-density chalcogenide glass (Ge2Sb2Te5) in conjunction with the high-mobility element Ag (Ag-GST) to achieve super nonlinear selective switching. A novel electrodeposition-diffusion dynamic selector based on Ag-GST exhibits superior selecting performance, including excellent nonlinearity (<5 mV/decade), ultra-low leakage (<10 fA), and bidirectional operation. With solid microstructural evidence and dynamic analyses, we attribute the selective switching to the competition between electrodeposition and diffusion of Ag atoms in the glassy GST matrix under an electric field. A switching model is proposed, and the in-depth understanding of the selective switching mechanism offers insight into the switching dynamics of the electrodeposition-diffusion-controlled thin-film selector. This work opens a new direction for selector designs by combining high-mobility elements and high-defect-density chalcogenide glasses, which can be extended to other materials with similar properties.

  7. 75 FR 27668 - Airworthiness Directives; Fokker Services B.V. Model F.28 Mark 0070 and 0100 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-18

    ... cause of the MLG extension problem was the (partially) blocked hydraulic return line from the MLG selector valve by pieces of hard plastic. These were identified as parts of the poppet seat of the PBSOV...

  8. 75 FR 66649 - Airworthiness Directives; Fokker Services B.V. Model F.28 Mark 0070 and 0100 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-29

    ... investigation revealed that the cause of the MLG extension problem was the (partially) blocked hydraulic return line from the MLG selector valve by pieces of hard plastic. These were identified as parts of the...

  9. Selecting the selector: Comparison of update rules for discrete global optimization

    DOE PAGES

    Theiler, James; Zimmer, Beate G.

    2017-05-24

    In this paper, we compare some well-known Bayesian global optimization methods in four distinct regimes, corresponding to high and low levels of measurement noise and to high and low levels of “quenched noise” (which term we use to describe the roughness of the function we are trying to optimize). We isolate the two stages of this optimization in terms of a “regressor,” which fits a model to the data measured so far, and a “selector,” which identifies the next point to be measured. Finally, the focus of this paper is to investigate the choice of selector when the regressor is well matched to the data.
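
    The regressor/selector decomposition can be sketched as follows (a toy example, not the authors' code: the polynomial regressor, the distance-based uncertainty proxy and the upper-confidence-bound selector are arbitrary stand-ins):

      import numpy as np

      rng = np.random.default_rng(1)

      def f(x):                      # unknown objective with measurement noise
          return np.sin(3 * x) + 0.1 * rng.standard_normal()

      grid = np.linspace(0, 2, 201)  # candidate measurement points
      X, y = [0.3, 1.7], [f(0.3), f(1.7)]

      for _ in range(20):
          # Regressor: fit a model (here a low-order polynomial) to the data.
          coeffs = np.polyfit(X, y, deg=min(3, len(X) - 1))
          pred = np.polyval(coeffs, grid)
          # Crude uncertainty proxy: distance to the nearest measured point.
          unc = np.min(np.abs(grid[:, None] - np.array(X)[None, :]), axis=1)
          # Selector: pick the next point by an upper-confidence-bound rule.
          x_next = grid[np.argmax(pred + 2.0 * unc)]
          X.append(x_next)
          y.append(f(x_next))

      print(f"best observed: x = {X[int(np.argmax(y))]:.3f}, y = {max(y):.3f}")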

  10. A compact model for selectors based on metal doped electrolyte

    NASA Astrophysics Data System (ADS)

    Zhang, Lu; Song, Wenhao; Yang, J. Joshua; Li, Hai; Chen, Yiran

    2018-04-01

    A selector device that demonstrates high nonlinearity and low switching voltages was fabricated using HfOx as a solid electrolyte doped with Ag electrodes. The electronic conductance of the volatile conductive filaments responsible for the switching was studied under both static and dynamic conditions. A compact model is developed from this study that describes the physical processes of the formation and rupture of the Ag filament(s). A dynamic capacitance model is used to fit the transient current traces under different voltage bias, which enables the extraction of parameters associated with the various parasitic components in the device.
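
    Compact models of this kind are typically a bias-driven state-variable ODE integrated together with a parallel capacitance. The following sketch is generic and illustrative only; the rate law, parameter values and I-V form are placeholders, not the fitted model of the paper:

      import numpy as np

      # Generic volatile conductive-filament compact model (placeholders
      # throughout). State w in [0, 1] tracks filament formation: it grows
      # under bias above a threshold and dissolves spontaneously (volatility).
      tau_g, tau_d = 1e-6, 5e-6      # growth / dissolution time constants (s)
      V_th = 0.25                    # threshold voltage (V)
      C_p = 1e-12                    # parallel (dynamic) capacitance (F)
      G_on, G_off = 1e-4, 1e-10      # on / off conductances (S)

      def simulate(v, t):
          """Integrate the state equation; return device current as
          conduction plus capacitive displacement (dynamic-capacitance term)."""
          w, dt = 0.0, t[1] - t[0]
          i_out = np.zeros_like(t)
          for k in range(1, len(t)):
              drive = max(abs(v[k]) / V_th - 1.0, 0.0)   # dimensionless overdrive
              w += ((1.0 - w) * drive / tau_g - w / tau_d) * dt
              w = min(max(w, 0.0), 1.0)
              g = G_off + (G_on - G_off) * w
              i_out[k] = g * v[k] + C_p * (v[k] - v[k - 1]) / dt
          return i_out

      t = np.linspace(0.0, 2e-5, 4000)
      v = 0.5 * t / t[-1]                                # 0 -> 0.5 V ramp
      print(f"peak current: {simulate(v, t).max():.3e} A")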

  11. Psychological woundedness and its evaluation in applications for clinical psychology training.

    PubMed

    Ivey, Gavin; Partington, Theresa

    2014-01-01

    This paper reports on a qualitative study investigating clinical psychology programme selectors' perceptions of psychological 'woundedness' in the autobiographical narratives of applicants for clinical psychology training. Woundedness was here defined in terms of the ongoing or residual psychological impact of adverse experiences and psychic conflicts. Ten selectors were presented with a sample of applicants' written autobiographical narratives, differentiated by the conspicuous presence or absence of psychological woundedness. The selectors, who were not informed of the specific aims of the study, ranked applicant protocols and were interviewed individually about their impressions of the protocols and the criteria that they used to rank them. Most selectors were positively biased toward 'wounded' narratives and suspicious of those in which woundedness was manifestly absent. Although selectors were generally disposed to favour wounded applicants, how woundedness was presented, rather than its mere presence, was the discriminating feature in their appraisal of wounded narratives. Selectors were concerned that unresolved woundedness may compromise applicants' professional boundaries, impair self-reflective capacity and lead to damaging countertransference enactments. The relative extent to which applicant woundedness appeared to be resolved was significant in selectors' assessment of applicants' clinical training potential. A distinction is thus proposed between obstructive and facilitative woundedness in clinical psychology applicants. A sample of clinical psychology programme selectors identified psychological woundedness as a significant feature in applicant autobiographies. Selectors favoured applicant autobiographies showing evidence of woundedness. The distinction between obstructive and facilitative woundedness was important in how the selector sample evaluated woundedness. Copyright © 2012 John Wiley & Sons, Ltd.

  12. A niobium oxide-tantalum oxide selector-memristor self-aligned nanostack

    NASA Astrophysics Data System (ADS)

    Diaz Leon, Juan J.; Norris, Kate J.; Yang, J. Joshua; Sevic, John F.; Kobayashi, Nobuhiko P.

    2017-03-01

    The integration of nonlinear current-voltage selectors and bi-stable memristors is a paramount step for reliable operation of crossbar arrays. In this paper, the self-aligned assembly of a single nanometer-scale device that contains both a selector and a memristor is presented. The two components (i.e., selector and memristor) are vertically assembled via a self-aligned fabrication process combined with electroforming. In designing the device, niobium oxide and tantalum oxide are chosen as materials for selector and memristor, respectively. The formation of niobium oxide is visualized by exploiting the self-limiting reaction between niobium and tantalum oxide; crystalline niobium (di)oxide forms at the interface between metallic niobium and tantalum oxide via electrothermal heating, resulting in a niobium oxide selector self-aligned to a tantalum oxide memristor. A steady-state finite element analysis is used to assess the electrothermal heating expected to occur in the device. Current-voltage measurements and structural/chemical analyses conducted for the virgin device, the electroforming process, and the functional selector-memristor device are presented. The demonstration of a self-aligned, monolithically integrated selector-memristor device would pave a practical pathway to various circuits based on memristors attainable at manufacturing scales.

  13. Robotic Vehicle Communications Interoperability

    DTIC Science & Technology

    1988-08-01

    [Flattened excerpt from a vehicle control/sensor matrix table; the column headings did not survive extraction.] Controls listed include: starter (cold start), fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector and hazard warning. Sensor entries listed include: electro-optic sensors, sensor switch, video, radar, IR thermal imaging system, image intensifier, laser ranger and a video camera selector (forward, stereo, rear) with sensor control.

  14. An analytical model for enantioseparation process in capillary electrophoresis

    NASA Astrophysics Data System (ADS)

    Ranzuglia, G. A.; Manzi, S. J.; Gomez, M. R.; Belardinelli, R. E.; Pereyra, V. D.

    2017-12-01

    An analytical model is proposed to explain the mobilities of an enantiomer binary mixture in a capillary electrophoresis experiment. The model consists of a set of kinetic equations describing the evolution of the populations of the molecules involved in the enantioseparation process in capillary electrophoresis (CE). These equations take into account the asymmetric driven migration of the enantiomer molecules, the chiral selector and the temporary diastereomeric complexes, which are the products of the reversible reaction between the enantiomers and the chiral selector. The solution of these equations gives the spatial and temporal distribution of each species in the capillary, reproducing a typical electropherogram signal. The mobility, μ, of each species is obtained from the position of the maximum (main peak) of its respective distribution. Thereby, the apparent electrophoretic mobility difference, Δμ, as a function of the chiral selector concentration, [C], can be measured. The behaviour of Δμ versus [C] is compared with the phenomenological model introduced by Wren and Rowe in J. Chromatography 1992, 603, 235. To test the analytical model, a capillary electrophoresis experiment for the enantiomeric separation of the (±)-chlorpheniramine β-cyclodextrin (β-CD) system is used. These data, as well as others obtained from the literature, are in close agreement with those obtained by the model. All these results are also corroborated by kinetic Monte Carlo simulation.
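
    In the Wren and Rowe model referenced above, two enantiomers sharing the same free and complexed mobilities \mu_f and \mu_c but different binding constants K_1 and K_2 give the standard result

      \Delta\mu = \frac{[C]\,(K_1 - K_2)\,(\mu_c - \mu_f)}{(1 + K_1[C])(1 + K_2[C])}

    which vanishes at [C] = 0, peaks at [C] = 1/\sqrt{K_1 K_2}, and decays again at high selector concentration.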

  15. Hadoop neural network for parallel and distributed feature selection.

    PubMed

    Hodge, Victoria J; O'Keefe, Simon; Austin, Jim

    2016-06-01

    In this paper, we introduce a theoretical basis for a Hadoop-based neural network for parallel and distributed feature selection in Big Data sets. It is underpinned by an associative memory (binary) neural network which is highly amenable to parallel and distributed processing and fits with the Hadoop paradigm. There are many feature selectors described in the literature which all have various strengths and weaknesses. We present the implementation details of five feature selection algorithms constructed using our artificial neural network framework embedded in Hadoop YARN. Hadoop allows parallel and distributed processing. Each feature selector can be divided into subtasks and the subtasks can then be processed in parallel. Multiple feature selectors can also be processed simultaneously (in parallel) allowing multiple feature selectors to be compared. We identify commonalities among the five feature selectors. All can be processed in the framework using a single representation and the overall processing can also be greatly reduced by only processing the common aspects of the feature selectors once and propagating these aspects across all five feature selectors as necessary. This allows the best feature selector and the actual features to select to be identified for large and high dimensional data sets through exploiting the efficiency and flexibility of embedding the binary associative-memory neural network in Hadoop. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
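
    The subtask decomposition can be illustrated outside Hadoop as well. A minimal sketch, assuming a toy binary data set and mutual information as a stand-in scoring rule (the paper's five selectors and its AURA framework are not reproduced here), with one subtask per feature processed in parallel:

      import numpy as np
      from multiprocessing import Pool

      def mi_score(args):
          """Subtask: score one binary feature against binary labels by
          mutual information (a stand-in for the paper's selectors)."""
          x, y = args
          score = 0.0
          for xv in (0, 1):
              for yv in (0, 1):
                  pxy = np.mean((x == xv) & (y == yv))
                  px, py = np.mean(x == xv), np.mean(y == yv)
                  if pxy > 0:
                      score += pxy * np.log(pxy / (px * py))
          return score

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          n, d = 5000, 200
          X = rng.integers(0, 2, size=(n, d))
          # Labels: majority vote of features 0, 3 and 7.
          y = ((X[:, 0] + X[:, 3] + X[:, 7]) >= 2).astype(int)
          with Pool() as pool:       # one subtask per feature, run in parallel
              scores = pool.map(mi_score, [(X[:, j], y) for j in range(d)])
          print("top features:", np.argsort(scores)[::-1][:5])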

  16. Enantioseparation by Capillary Electrophoresis Using Ionic Liquids as Chiral Selectors.

    PubMed

    Greño, Maider; Marina, María Luisa; Castro-Puyana, María

    2018-11-02

    Capillary electrophoresis (CE) is one of the most widely employed analytical techniques for achieving enantiomeric separations. Although many chiral selectors are commercially available to perform enantioseparations by CE, one of the most relevant topics in this field is the search for new selectors capable of providing high enantiomeric resolution. Chiral ionic liquids (CILs) have interesting characteristics that give them high potential in chiral separations, although only some of them are commercially available. The aim of this article is to review all the works published on the use of CILs as chiral selectors in the development of enantioselective methodologies by CE, covering the period from 2006 (when the first research work on this topic was published) to 2017. The use of CILs as sole chiral selectors, as chiral selectors in dual systems or as chiral ligands is considered. This review also provides detailed analytical information on the experimental conditions used to carry out enantioseparations in different fields as well as on the separation mechanisms involved.

  17. Cationic permethylated 6-monoamino-6-monodeoxy-β-cyclodextrin as chiral selector of dansylated amino acids in capillary electrophoresis.

    PubMed

    Németh, Krisztina; Domonkos, Celesztina; Sarnyai, Virág; Szemán, Julianna; Jicsinszky, László; Szente, Lajos; Visy, Júlia

    2014-10-01

    The resolving power of permethylated 6-monoamino-6-monodeoxy-β-CD (PMMABCD), a single-isomer, cationic CD derivative developed previously for chiral analyses in capillary electrophoresis, was further studied here. Dansylated amino acids (Dns-AAs) were chosen as amphoteric chiral model compounds. Changes in the resolution of the Dns-AAs on varying pH and selector concentration were investigated and correlated with their structures and chemical properties (isoelectric point and lipophilicity). Maximal resolution could be achieved at pH 6 or pH 4. The separations improved with increasing concentration of the selector. Baseline or substantially better resolution could be achieved for 8 of these Dns-AA pairs. A low CD concentration was enough for the separation of the most apolar Dns-AAs. The chiral discrimination ability of PMMABCD was demonstrated by the separation of an artificial mixture of 8 Dns-AA pairs. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Maltodextrins as chiral selectors in CE: molecular structure effect of basic chiral compounds on the enantioseparation.

    PubMed

    Tabani, Hadi; Fakhari, Ali Reza; Nojavan, Saeed

    2014-10-01

    Predicting whether a given chiral selector will separate a given compound is an interesting and debated problem. For this purpose, in this study 23 chiral basic drugs with different chemical structures were selected as model solutes and the influence of their chemical structures on enantioseparation in the presence of maltodextrin (MD) as chiral selector was investigated. For chiral separation, a 100-mM phosphate buffer solution (pH 3.0) containing 10% (w/v) MD with a dextrose equivalent (DE) of 4-7 as chiral selector, at a temperature of 25°C and a voltage of 20 kV, was used. Under these conditions, baseline separation was achieved for nine chiral compounds and partial separation was obtained for another six, while no enantioseparation was obtained for the remaining eight compounds. The results showed that the presence of at least two aromatic rings or cycloalkanes, and an oxygen or nitrogen atom or -CN group directly bonded to the chiral center, is necessary for baseline separation. With these results, the chiral separation of a compound by MD-modified capillary electrophoresis can be estimated before analysis. This prediction will minimize the number of preliminary experiments required to resolve enantiomers and will save time and cost. © 2014 Wiley Periodicals, Inc.

  19. Selector function of MHC I molecules is determined by protein plasticity

    NASA Astrophysics Data System (ADS)

    Bailey, Alistair; Dalchau, Neil; Carter, Rachel; Emmott, Stephen; Phillips, Andrew; Werner, Jörn M.; Elliott, Tim

    2015-10-01

    The selection of peptides for presentation at the surface of most nucleated cells by major histocompatibility complex class I molecules (MHC I) is crucial to the immune response in vertebrates. However, the mechanisms of the rapid selection of high affinity peptides by MHC I from amongst thousands of mostly low affinity peptides are not well understood. We developed computational systems models encoding distinct mechanistic hypotheses for two molecules, HLA-B*44:02 (B*4402) and HLA-B*44:05 (B*4405), which differ by a single residue yet lie at opposite ends of the spectrum in their intrinsic ability to select high affinity peptides. We used in vivo biochemical data to infer that a conformational intermediate of MHC I is significant for peptide selection. We used molecular dynamics simulations to show that peptide selector function correlates with protein plasticity, and confirmed this experimentally by altering the plasticity of MHC I with a single point mutation, which altered in vivo selector function in a predictable way. Finally, we investigated the mechanisms by which the co-factor tapasin influences MHC I plasticity. We propose that tapasin modulates MHC I plasticity by dynamically coupling the peptide binding region and α3 domain of MHC I allosterically, resulting in enhanced peptide selector function.

  20. Proline-based chiral stationary phases: a molecular dynamics study of the interfacial structure.

    PubMed

    Ashtari, M; Cann, N M

    2011-09-16

    Proline chains have generated considerable interest as a possible basis for new selectors in chiral chromatography. In this article, we employ molecular dynamics simulations to examine the interfacial structure of two diproline chiral selectors, one with a terminal trimethylacetyl group and one with a terminal t-butyl carbamate group. The solvents consist of a relatively apolar n-hexane/2-propanol mixture and a polar water/methanol mixture. We begin with electronic structure calculations for the two chiral selectors to assess the energetics of conformational changes, particularly along the backbone where the amide bonds can alternate between cis and trans conformations. Force fields have been developed for the two selectors, based on these ab initio calculations. Molecular dynamics simulations of the selective interfaces are performed to examine the preferred backbone conformations, as a function of end-group and solvent. The full chiral surface includes the diproline selectors, trimethylsilyl end-caps, and silanol groups. Connection is made with selectivity measurements on these interfaces, where significant differences are observed between these two very similar selectors. Copyright © 2011 Elsevier B.V. All rights reserved.

  21. Unidirectional threshold switching in Ag/Si-based electrochemical metallization cells for high-density bipolar RRAM applications

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Song, Bing; Li, Qingjiang; Zeng, Zhongming

    2018-03-01

    We herein present a novel unidirectional threshold selector for cross-point bipolar RRAM arrays. The proposed Ag/amorphous-Si-based threshold selector showed excellent threshold characteristics under positive field, such as high selectivity (~10^5), steep slope (<5 mV/decade) and low off-state current (<300 pA). Meanwhile, the selector exhibited rectifying characteristics in the high resistance state as well, and the rectification ratio was as high as 10^3 at ±1.5 V. Owing to the high reverse current of about 9 mA at -3 V, this unidirectional threshold selector can be used as a selection element for bipolar-type RRAM. By integrating a bipolar RRAM device with the selector, experiments showed that the undesired sneak current was significantly suppressed, indicating its potential for high-density integrated nonvolatile memory applications.

  22. ProSelection: A Novel Algorithm to Select Proper Protein Structure Subsets for in Silico Target Identification and Drug Discovery Research.

    PubMed

    Wang, Nanyi; Wang, Lirong; Xie, Xiang-Qun

    2017-11-27

    Molecular docking is widely applied to computer-aided drug design and has become relatively mature in recent decades. Applications of docking in modeling vary from single lead compound optimization to large-scale virtual screening. The performance of molecular docking is highly dependent on the protein structures selected. It is especially challenging for large-scale target prediction research when multiple structures are available for a single target. Therefore, we have established ProSelection, a docking preferred-protein selection algorithm, in order to generate the proper structure subset(s). By the ProSelection algorithm, protein structures of "weak selectors" are filtered out whereas structures of "strong selectors" are kept. Specifically, a structure that has a good statistical performance in distinguishing active from inactive ligands is defined as a strong selector. In this study, 249 protein structures of 14 autophagy-related targets are investigated. Surflex-dock was used as the docking engine to distinguish active and inactive compounds against these protein structures. Both the t test and the Mann-Whitney U test were used to distinguish the strong from the weak selectors, based on the normality of the docking score distribution. The suggested docking score threshold for active ligands (SDA) was generated for each strong selector structure according to the receiver operating characteristic (ROC) curve. The performance of ProSelection was further validated by predicting the potential off-targets of 43 U.S. Food and Drug Administration approved small molecule antineoplastic drugs. Overall, ProSelection will accelerate the computational work in protein structure selection and could be a useful tool for molecular docking, target prediction, and protein-chemical database establishment research.
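
    A minimal sketch of the strong/weak-selector test as described (the data layout, the one-sided alternative and the use of scipy are assumptions; the paper pairs this with a t test when the score distribution is normal):

      import numpy as np
      from scipy.stats import mannwhitneyu

      def classify_structure(active_scores, inactive_scores, alpha=0.05):
          """Label a protein structure a 'strong selector' if actives dock
          significantly better (higher scores) than inactives."""
          _, p = mannwhitneyu(active_scores, inactive_scores,
                              alternative="greater")
          return ("strong" if p < alpha else "weak"), p

      # Hypothetical docking scores for one structure of one target.
      rng = np.random.default_rng(3)
      actives = rng.normal(7.5, 1.0, size=60)
      inactives = rng.normal(6.0, 1.2, size=200)
      label, p = classify_structure(actives, inactives)
      print(f"{label} selector (p = {p:.2e})")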

  23. Chiral magnetic microspheres purified by centrifugal field flow fractionation and microspheres magnetic chiral chromatography for benzoin racemate separation

    PubMed Central

    Tian, Ailin; Qi, Jing; Liu, Yating; Wang, Fengkang; Ito, Yoichiro; Wei, Yun

    2013-01-01

    Separation of enantiomers remains a challenge due to their identical physical and chemical properties in an achiral environment, and research on specific chiral selectors and separation techniques continues in order to resolve individual enantiomers. In our laboratory, promising magnetic chiral microspheres, Fe3O4@SiO2@cellulose-2,3-bis(3,5-dimethylphenylcarbamate), have been developed to facilitate resolution using both their magnetic properties and chiral recognition ability. In the present studies, this magnetic chiral selector was first purified by centrifugal field-flow fractionation and then used to separate a benzoin racemate by a chromatographic method. A uniform-sized magnetic chiral selector, free of masking impurities, was first obtained by field-flow fractionation with ethanol through a spiral column mounted on a type-J planetary centrifuge. Using the purified magnetic chiral selector, the final chromatographic separation of the benzoin racemate was successfully performed by eluting with ethanol through a coiled tube (wound around a cylindrical magnet to retain the magnetic chiral selector as a stationary phase) submerged in dry ice. In addition, an external magnetic field facilitates the recycling of the magnetic chiral selector. PMID:23891368

  24. Chiral magnetic microspheres purified by centrifugal field flow fractionation and microspheres magnetic chiral chromatography for benzoin racemate separation.

    PubMed

    Tian, Ailin; Qi, Jing; Liu, Yating; Wang, Fengkang; Ito, Yoichiro; Wei, Yun

    2013-08-30

    Separation of enantiomers remains a challenge due to their identical physical and chemical properties in an achiral environment, and research on specific chiral selectors and separation techniques continues in order to resolve individual enantiomers. In our laboratory, promising magnetic chiral microspheres, Fe3O4@SiO2@cellulose-2,3-bis(3,5-dimethylphenylcarbamate), have been developed to facilitate resolution using both their magnetic properties and chiral recognition ability. In the present studies, this magnetic chiral selector was first purified by centrifugal field-flow fractionation and then used to separate a benzoin racemate by a chromatographic method. A uniform-sized magnetic chiral selector, free of masking impurities, was first obtained by field-flow fractionation with ethanol through a spiral column mounted on a type-J planetary centrifuge. Using the purified magnetic chiral selector, the final chromatographic separation of the benzoin racemate was successfully performed by eluting with ethanol through a coiled tube (wound around a cylindrical magnet to retain the magnetic chiral selector as a stationary phase) submerged in dry ice. In addition, an external magnetic field facilitates the recycling of the magnetic chiral selector. Copyright © 2013 Elsevier B.V. All rights reserved.

  25. Neuronal Cell Fate Specification by the Convergence of Different Spatiotemporal Cues on a Common Terminal Selector Cascade

    PubMed Central

    Rubio-Ferrera, Irene; Millán-Crespo, Irene; Contero-García, Patricia; Bahrampour, Shahrzad

    2016-01-01

    Specification of the myriad of unique neuronal subtypes found in the nervous system depends upon spatiotemporal cues and terminal selector gene cascades, often acting in sequential combinatorial codes to determine final cell fate. However, a specific neuronal cell subtype can often be generated in different parts of the nervous system and at different stages, indicating that different spatiotemporal cues can converge on the same terminal selectors to thereby generate a similar cell fate. However, the regulatory mechanisms underlying such convergence are poorly understood. The Nplp1 neuropeptide neurons in the Drosophila ventral nerve cord can be subdivided into the thoracic-ventral Tv1 neurons and the dorsal-medial dAp neurons. The activation of Nplp1 in Tv1 and dAp neurons depends upon the same terminal selector cascade: col>ap/eya>dimm>Nplp1. However, Tv1 and dAp neurons are generated by different neural progenitors (neuroblasts) with different spatiotemporal appearance. Here, we find that the same terminal selector cascade is triggered by Kr/pdm>grn in dAp neurons, but by Antp/hth/exd/lbe/cas in Tv1 neurons. Hence, two different spatiotemporal combinations can funnel into a common downstream terminal selector cascade to determine a highly related cell fate. PMID:27148744

  26. The Deflector Selector: A Machine Learning Framework for Prioritizing Deflection Technology Development

    NASA Astrophysics Data System (ADS)

    Nesvold, E. R.; Erasmus, N.; Greenberg, A.; van Heerden, E.; Galache, J. L.; Dahlstrom, E.; Marchis, F.

    2017-02-01

    We present a machine learning model that can predict which asteroid deflection technology would be most effective, given the likely population of impactors. Our model can help policy and funding agencies prioritize technology development.

  27. The Reciprocal Principle of Selectand-Selector-Systems in Supramolecular Chromatography.

    PubMed

    Schurig, Volker

    2016-11-15

    In selective chromatography and electromigration methods, supramolecular recognition of selectands and selectors is due to the fast and reversible formation of association complexes governed by thermodynamics. Whereas the selectand molecules to be separated are always present in the mobile phase, the selector employed for the separation of the selectands is either part of the stationary phase or is added to the mobile phase. By the reciprocal principle, the roles of selector and selectand can be reversed. In this contribution in honor of Professor Stig Allenmark, the evolution of the reciprocal principle in chromatography is reviewed and its advantages and limitations are outlined. Various reciprocal scenarios, including library approaches, are discussed in efforts to optimize selectivity in separation science.

  28. All oxide semiconductor-based bidirectional vertical p-n-p selectors for 3D stackable crossbar-array electronics

    PubMed Central

    Bae, Yoon Cheol; Lee, Ah Rahm; Baek, Gwang Ho; Chung, Je Bock; Kim, Tae Yoon; Park, Jea Gun; Hong, Jin Pyo

    2015-01-01

    Three-dimensional (3D) stackable memory devices including nano-scaled crossbar arrays are central to the realization of high-density non-volatile memory electronics. However, an essential sneak path issue affecting device performance in crossbar arrays remains a bottleneck and a grand challenge. Therefore, a suitable bidirectional selector acting as a two-way switch is required to facilitate a major breakthrough in 3D crossbar array memory devices. Here, we show the excellent selectivity of all-oxide p-/n-type semiconductor-based p-n-p open-base bipolar junction transistors as selectors in crossbar memory arrays. We report that the bidirectional nonlinear characteristics of oxide p-n-p junctions can be highly enhanced by manipulating the p-/n-type oxide semiconductor characteristics. We also propose an associated Zener tunneling mechanism that explains the unique features of our p-n-p selector. Our experimental findings are further extended to confirm the profound functionality of oxide p-n-p selectors integrated with several bipolar resistive switching memory elements working as storage nodes. PMID:26289565

  29. Thermodynamic models to elucidate the enantioseparation of drugs with two stereogenic centers by micellar electrokinetic chromatography.

    PubMed

    Guo, Xuming; Liu, Qiuxia; Hu, Shaoqiang; Guo, Wenbo; Yang, Zhuo; Zhang, Yonghua

    2017-08-25

    An equilibrium model depicting the simultaneous protonation of chiral drugs and the partitioning of protonated ions and neutral molecules into chiral micelles in micellar electrokinetic chromatography (MEKC) is introduced. It was used for the prediction and elucidation of complex changes in migration order patterns with experimental conditions in the enantioseparation of drugs with two stereogenic centers. Palonosetron hydrochloride (PALO), a weakly basic drug with two stereogenic centers, was selected as a model drug. Its four stereoisomers were separated by MEKC using sodium cholate (SC) as chiral selector and surfactant. Based on the equilibrium model, equations were derived for calculating the effective mobility and migration time of each stereoisomer at a given pH. The migration times of the four stereoisomers at different pHs were calculated and the migration order patterns were then constructed with the derived equations. The results were in accord with experiment. The contribution of each mechanism to the separation, and its influence on the migration order pattern, was analyzed separately by introducing virtual isomers, i.e., hypothetical stereoisomers with only one parameter changed relative to a real PALO stereoisomer. A thermodynamic model for judging whether the interactions between the two stereogenic centers of the stereoisomers and the chiral selector are correlated was also proposed. According to this model, the interactions of the two stereogenic centers of the PALO stereoisomers, in both neutral molecules and protonated ions, with the chiral selector are not independent, so the chiral recognition in each pair of enantiomers, as well as the recognition of diastereomers, is not simply the algebraic sum of the contributions of the two stereogenic centers. Copyright © 2017 Elsevier B.V. All rights reserved.
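
    The backbone of such equilibrium models is a population-weighted mobility average (a generic sketch; the paper's derived equations couple the protonation and micelle-partitioning equilibria explicitly):

      \mu_{\mathrm{eff}} = \sum_i f_i\,\mu_i, \qquad \sum_i f_i = 1

    where f_i is the equilibrium mole fraction of the analyte in form i (neutral or protonated, free or micelle-bound) and \mu_i is the mobility of that form.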

  30. Studies on a Q/A selector for the SECRAL electron cyclotron resonance ion source.

    PubMed

    Yang, Y; Sun, L T; Feng, Y C; Fang, X; Lu, W; Zhang, W H; Cao, Y; Zhang, X Z; Zhao, H W

    2014-08-01

    Electron cyclotron resonance ion sources are widely used in heavy ion accelerators in the world because they are capable of producing high current beams of highly charged ions. However, the design of the Q/A selector system for these devices is challenging, because it must have a sufficient ion resolution while controlling the beam emittance growth. Moreover, this system has to be matched for a wide range of ion beam species with different intensities. In this paper, research on the Q/A selector system at the SECRAL (Superconducting Electron Cyclotron Resonance ion source with Advanced design in Lanzhou) platform both in experiment and simulation is presented. Based on this study, a new Q/A selector system has been designed for SECRAL II. The features of the new design including beam simulations are also presented.
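
    For context, a dipole-based Q/A selector separates beam species by magnetic rigidity. For an ion of mass m (mass number A) and charge q (charge state Q) extracted through a potential V_ext, a textbook relation (not specific to SECRAL) is

      B\rho = \frac{p}{q} = \sqrt{\frac{2\,m\,V_{\mathrm{ext}}}{q}} \propto \sqrt{\frac{A}{Q}\,V_{\mathrm{ext}}}

    so all species sharing the same A/Q follow the same trajectory, which is part of what makes adequate resolution without emittance growth difficult.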

  31. Numerical study of read scheme in one-selector one-resistor crossbar array

    NASA Astrophysics Data System (ADS)

    Kim, Sungho; Kim, Hee-Dong; Choi, Sung-Jin

    2015-12-01

    A comprehensive numerical circuit analysis of read schemes for a one selector-one resistance-change memory (1S1R) crossbar array is carried out. Three schemes (the ground, V/2, and V/3 schemes) are compared with each other in terms of sensing margin and power consumption. Without the aid of a complex analytical approach or SPICE-based simulation, a simple numerical iteration method is developed to simulate all current flows and node voltages within a crossbar array. Understanding such phenomena is essential in successfully evaluating the electrical specifications of selectors for suppressing the intrinsic drawbacks of crossbar arrays, such as sneak current paths and series line resistance problems. This method provides a quantitative tool for the accurate analysis of crossbar arrays and provides guidelines for developing an optimal read scheme, array configuration, and selector device specifications.
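
    For intuition about why the scheme choice matters, a toy calculation under idealized assumptions (zero line resistance, identical worst-case cells, an arbitrary exponential selector I-V; far cruder than the paper's iterative node-voltage solution):

      import numpy as np

      def i_cell(v):
          """Toy selector+cell I-V: polarity-symmetric exponential (placeholder)."""
          return np.sign(v) * 1e-9 * (np.exp(np.abs(v) / 0.12) - 1)

      def read_metrics(n, v, scheme):
          """Sensing ratio and rough power for an n x n array, assuming zero
          line resistance and worst case (all unselected cells conducting)."""
          v_half = v / 2 if scheme == "V/2" else v / 3
          i_sel = i_cell(v)
          i_sneak = (n - 1) * i_cell(v_half)   # sneak on the selected bit line
          # V/3 additionally biases all (n-1)^2 "unselected" cells at V/3.
          n_biased = 2 * (n - 1) if scheme == "V/2" else 2 * (n - 1) + (n - 1) ** 2
          power = v * i_sel + v_half * n_biased * i_cell(v_half)
          return i_sel / (i_sel + i_sneak), power

      for scheme in ("V/2", "V/3"):
          ratio, power = read_metrics(n=1024, v=1.0, scheme=scheme)
          print(f"{scheme}: selected share of read current = {ratio:.3f}, "
                f"array power ~ {power:.2e} W")

    With these placeholder numbers the V/3 scheme shows the better sensing ratio and the V/2 scheme the lower total power, illustrating the trade-off the paper quantifies properly.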

  32. Development of a Spacecraft Materials Selector Expert System

    NASA Technical Reports Server (NTRS)

    Pippin, G.; Kauffman, W. (Technical Monitor)

    2002-01-01

    This report contains a description of the knowledge base tool and examples of its use. A downloadable version of the Spacecraft Materials Selector (SMS) knowledge base is available through the NASA Space Environments and Effects Program. The "Spacecraft Materials Selector" knowledge base is part of an electronic expert system. The expert system consists of an inference engine that contains the "decision-making" code and the knowledge base that contains the selected body of information. The inference engine is a software package previously developed at Boeing, called the Boeing Expert System Tool (BEST) kit.

  33. A simple and compact mechanical velocity selector of use to analyze/select molecular alignment in supersonic seeded beams

    NASA Astrophysics Data System (ADS)

    Pirani, F.; Cappelletti, D.; Vecchiocattivi, F.; Vattuone, L.; Gerbi, A.; Rocca, M.; Valbusa, U.

    2004-02-01

    A light and compact mechanical velocity selector of novel design for applications in supersonic molecular-beam studies has been developed. It represents a simplified version of the traditional, 50-year-old slotted-disk velocity selector. Taking advantage of new materials and improved machining techniques, the new version has been realized with only two rotating slotted disks, driven by an electric motor with adjustable rotation frequency, and thus has a much smaller weight and size than the original design, which may allow easier implementation in most of the available molecular-beam apparatuses. This new type of selector, which maintains a sufficiently high velocity resolution, has been developed for sampling molecules with different degrees of rotational alignment, like those emerging from a seeded supersonic expansion. This sampling is the crucial step in realizing new molecular-beam experiments to study the effect of molecular alignment in collisional processes.
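
    The underlying selection rule for a two-disk slotted selector is the standard one (generic notation, not the authors' exact geometry): for disks separated by a distance L, rotating at frequency f, with slots offset by an angle \phi,

      v = \frac{2\pi f L}{\phi}

    and only molecules within a velocity band set by the slot width around v traverse both slots.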

  34. Excellent selector performance in engineered Ag/ZrO2:Ag/Pt structure for high-density bipolar RRAM applications

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Song, Bing; Zeng, Zhongming

    2017-12-01

    A high-performance selector with bidirectional threshold switching (TS) characteristics in an Ag/ZrO2/Pt structure was prepared by incorporating metallic Ag into the ZrO2 matrix. The bidirectional TS device exhibited excellent switching uniformity, forming-free behavior, ultra-low off current of <1 nA and adjustable selectivity (from 10^2 to 10^7). The experimental results confirmed that metallic Ag clusters penetrated into the ZrO2 matrix during the annealing process, functioning as an effective active source responsible for the bidirectional TS. The volatile behavior can be explained by the self-dissolution of unstable filaments, driven by minimization of the interfacial energy and by thermal effects. Furthermore, a bipolar-type one selector-one resistor (1S-1R) memory device was successfully fabricated and exhibited significant suppression of the undesired sneak current, indicating great potential as a selector in a cross-point array.

  35. Applications of nuclear magnetic resonance spectroscopy for the understanding of enantiomer separation mechanisms in capillary electrophoresis.

    PubMed

    Salgado, Antonio; Chankvetadze, Bezhan

    2016-10-07

    This review deals with the applications of nuclear magnetic resonance (NMR) spectroscopy to understanding the mechanisms of chiral separation in capillary electrophoresis (CE). It is accepted that changes observed in the separation process, including reversal of the enantiomer migration order (EMO), can be caused by subtle modifications in the molecular recognition mechanisms between enantiomer and chiral selector. These modifications may imply minor structural differences in the selector-selectand complexes that arise from the above-mentioned interactions. Therefore, it is mandatory to understand the fine intermolecular interactions between analytes and chiral selectors; in other words, it is necessary to know in detail the structures of the complexes formed by the enantiomer (selectand) and the selector. Any differences between the structures of the complexes arising from either enantiomer should be detected, so that enantiomeric bias in the separation process can be explained. The nature of these interactions has been extensively reviewed and is not discussed in detail here; they comprise ionic, ion-dipole and dipole-dipole interactions, hydrogen bonding, van der Waals forces, π-π stacking, and steric and hydrophobic interactions. The main subject of this review is to describe how NMR spectroscopy helps to gain insight into the non-covalent intermolecular interactions between selector and selectand that lead to enantiomer separation by CE. Examples in which diastereomeric species are created by covalent (irreversible) derivatization are not considered here. This review is structured according to the different structural classes of chiral selectors employed in CE for which NMR spectroscopy has made substantial contributions to rationalizing the observed enantioseparations. Cases in which other techniques complement NMR spectroscopic data are also mentioned. Copyright © 2016 Elsevier B.V. All rights reserved.

  36. 47 CFR 95.669 - External controls.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Audio frequency power amplifier output connector and selector switch. (5) On-off switch for primary power to transmitter. This switch may be combined with receiver controls such as the receiver on-off switch and volume control. (6) Upper/lower sideband selector switch (for a transmitter that transmits...

  37. 47 CFR 76.70 - Exemption from input selector switch rules.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Exemption from input selector switch rules. 76.70 Section 76.70 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.70...

  38. 47 CFR 76.70 - Exemption from input selector switch rules.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Exemption from input selector switch rules. 76.70 Section 76.70 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.70...

  39. 47 CFR 76.70 - Exemption from input selector switch rules.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Exemption from input selector switch rules. 76.70 Section 76.70 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.70...

  40. 47 CFR 76.70 - Exemption from input selector switch rules.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Exemption from input selector switch rules. 76.70 Section 76.70 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.70...

  41. An Orthogonal and pH-Tunable Sensor-Selector for Muconic Acid Biosynthesis in Yeast.

    PubMed

    Snoek, Tim; Romero-Suarez, David; Zhang, Jie; Ambri, Francesca; Skjoedt, Mette L; Sudarsan, Suresh; Jensen, Michael K; Keasling, Jay D

    2018-04-20

    Microbes offer enormous potential for the production of industrially relevant chemicals and therapeutics, yet the rapid identification of high-producing microbes from large genetic libraries is a major bottleneck in modern cell factory development. Here, we develop and apply a synthetic selection system in Saccharomyces cerevisiae that couples the concentration of muconic acid, a plastic precursor, to cell fitness by using the prokaryotic transcriptional regulator BenM to drive an antibiotic resistance gene. We show that the sensor-selector affects neither production nor fitness, and find that tuning the pH of the cultivation medium limits the rise of nonproducing cheaters. We apply the sensor-selector to selectively enrich for the best-producing variants out of a large library of muconic acid production strains, and identify an isolate that produces more than 2 g/L muconic acid in a bioreactor. We expect that this sensor-selector can aid the development of other synthetic selection systems based on allosteric transcription factors.

  42. Electronically scanned pressure sensor module with in situ calibration capability

    NASA Technical Reports Server (NTRS)

    Gross, C. (Inventor)

    1978-01-01

    This high-data-rate pressure sensor module helps reduce energy consumption in wind tunnel facilities without loss of measurement accuracy. The sensor module allows nearly a two-order-of-magnitude increase in data rates over conventional electromechanically scanned pressure sampling techniques. The module consists of 16 solid state pressure sensor chips and signal multiplexing electronics integrally mounted to a four-position pressure selector switch. One of the four positions of the pressure selector switch allows in situ calibration of the 16 pressure sensors; the other three positions allow 48 channels (three sets of 16) of pressure inputs to be measured by the sensors. The small size of the sensor module will allow mounting within many wind tunnel models, thus eliminating long tube lengths and their corresponding slow pressure response.

  43. Characterizing the interaction between enantiomers of eight psychoactive drugs and highly sulfated-β-cyclodextrin by counter-current capillary electrophoresis.

    PubMed

    Asensi-Bernardi, Lucía; Escuder-Gilabert, Laura; Martín-Biosca, Yolanda; Sagrado, Salvador; Medina-Hernández, María José

    2014-01-01

    The estimation of the apparent binding constants and limiting mobilities of the complexes that characterize the interaction of enantiomers with chiral selectors, in this case highly sulfated β-cyclodextrin, was approached using a simple and economical electrophoretic modality: the complete filling technique (CFT) in counter-current mode. The enantiomers of eight psychoactive drugs, four antihistamines (dimethindene, promethazine, orphenadrine and terfenadine) and four antidepressants (bupropion, fluoxetine, nomifensine and viloxazine), were separated for the first time with this cyclodextrin (CD). Estimations of thermodynamic and electrophoretic enantioselectivities were also performed. The results indicate that, in general, thermodynamic enantioselectivity is the main component explaining the high resolution found, but one case also suggests that electrophoretic enantioselectivity by itself is enough to obtain a satisfactory resolution. CFT proves advantageous compared with conventional capillary electrophoresis (CE) and the partial filling technique (PFT) for the study of the interaction between drugs and chiral selectors. It combines the use of a simple fitting model (as in CE), when the enantiomers do not exit the chiral selector plug during the separation (i.e. the mobility of the electroosmotic flow is larger than the mobility of the CD), with a drastic reduction in the consumption (and cost; ~99.7%) of the CD reagent (as in PFT) compared with conventional CE. Copyright © 2013 John Wiley & Sons, Ltd.

  44. Revisiting the Velocity Selector Problem with VPython

    ERIC Educational Resources Information Center

    Milbourne, Jeff; Lim, Halson

    2015-01-01

    The velocity selector is a classic first-year physics problem that demonstrates the influence of perpendicular electric and magnetic fields on a charged particle. Traditionally textbooks introduce this problem in the context of balanced forces, often asking for field strengths that would allow a charged particle, with a specific target velocity,…
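
    In the spirit of the article, the physics is easy to simulate numerically (plain Python rather than VPython; the field values, region length and Euler integrator are arbitrary choices): a particle passes undeflected only when qE = qvB, i.e. v = E/B.

      import numpy as np

      # Crossed-field velocity selector: E along y, B along z, particle
      # launched along x. The net force q(E + v x B) vanishes when v = E/B.
      q, m = 1.6e-19, 1.67e-27     # proton charge (C) and mass (kg)
      E, B = 1.0e5, 0.5            # V/m and T -> pass velocity E/B = 2.0e5 m/s
      L = 0.05                     # length of the field region (m)

      def y_deflection(vx, dt=1e-9):
          r = np.zeros(3)
          v = np.array([vx, 0.0, 0.0])
          while r[0] < L:
              # E + v x B with E || y and B || z gives (vy*B, E - vx*B, 0).
              a = (q / m) * np.array([v[1] * B, E - v[0] * B, 0.0])
              v += a * dt          # simple Euler step
              r += v * dt
          return r[1]

      for vx in (1.9e5, 2.0e5, 2.1e5):
          print(f"v = {vx:.2e} m/s -> y deflection = {y_deflection(vx):+.3e} m")

    Slower particles deflect one way, faster ones the other, and only v = E/B exits on axis, which is the selection criterion the textbook problem asks for.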

  45. Memristor and selector devices fabricated from HfO2-xNx

    NASA Astrophysics Data System (ADS)

    Murdoch, B. J.; McCulloch, D. G.; Ganesan, R.; McKenzie, D. R.; Bilek, M. M. M.; Partridge, J. G.

    2016-04-01

    Monoclinic HfO2-xNx has been incorporated into two-terminal devices exhibiting either memristor or selector operation depending on the controlled inclusion/suppression of mobile oxygen vacancies. In HfO2 memristors containing oxygen vacancies, gradual conductance modulation, short-term plasticity, and long-term potentiation were observed using appropriate voltage-spike stimulation, suggesting suitability for artificial neural networks. Passivation of oxygen vacancies, confirmed by X-ray absorption spectroscopy, was achieved in HfO2-xNx films by the addition of nitrogen during growth. Selector devices formed on these films exhibited threshold switching and current controlled negative differential resistance consistent with thermally driven insulator to metal transitions.

  46. Agreement between selectors at seven Eastern U.S. locations in the second field generation

    USDA-ARS?s Scientific Manuscript database

    Clone x location interactions are known to be large for many traits in potatoes. Early generation selection in northern breeding programs may eliminate clones that perform better in other locations. The purpose of this study was to examine the agreement between selectors in the second field genera...

  47. Differences in Muscle Activation and Kinematics Between Cable-Based and Selectorized Weight Training.

    PubMed

    Signorile, Joseph F; Rendos, Nicole K; Heredia Vargas, Hector H; Alipio, Taislaine C; Regis, Rebecca C; Eltoukhy, Moataz M; Nargund, Renu S; Romero, Matthew A

    2017-02-01

    Signorile, JF, Rendos, NK, Heredia Vargas, HH, Alipio, TC, Regis, RC, Eltoukhy, MM, Nargund, RS, and Romero, MA. Differences in muscle activation and kinematics between cable-based and selectorized weight training. J Strength Cond Res 31(2): 313-322, 2017. Cable resistance training machines are showing resurgent popularity and allow a greater number of degrees of freedom than typical selectorized equipment. Given that specific kinetic chains are used during distinct activities of daily living (ADL), cable machines may provide more effective interventions for some ADL, whereas others may be best addressed using selectorized equipment. This study examined differences in activity levels (root mean square of the EMG [rmsEMG]) of 6 major muscles (pectoralis major, PM; anterior deltoid, AD; biceps brachii, BB; rectus abdominis, RA; external obliques, EO; and triceps brachii, TB) and the kinematics of multiple joints between a cable machine and standard selectorized machines during the biceps curl, the chest press, and the overhead press performed at 1.5 seconds per contractile stage. Fifteen individuals (9 men, 6 women; mean age ± SD, 24.33 ± 4.88 years) participated. Machine order was randomized. Significant differences favoring cable training were seen for PM and AD during the biceps curl; BB, AD, and EO for the chest press; and BB and EO during the overhead press (p ≤ 0.05). Greater starting and ending angles were seen for the elbow and shoulder joints during the selectorized biceps curl, whereas hip and knee starting and ending angles were greater for the cable machine during the chest and overhead presses (p < 0.0001). Greater range of motion (ROM) favoring the cable machine was also evident (p < 0.0001). These results indicate that utilization patterns of selected muscles, joint angles, and ROMs can vary with machine type even when similar exercises are used, and therefore these machines can be used selectively in training programs requiring specific motor or biomechanical patterns.

  8. Feature selection using angle modulated simulated Kalman filter for peak classification of EEG signals.

    PubMed

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Mubin, Marizan; Saad, Ismail

    2016-01-01

    In existing research on electroencephalogram (EEG) signal peak classification, the established models, such as the Dumpala, Acir, Liu, and Dingle peak models, employ different sets of features. However, none of these models offers good performance across all applications; their performance is found to be problem dependent. Therefore, the objective of this study is to combine all the associated features from the existing models and then select the best combination of features. A new optimization algorithm, namely the angle modulated simulated Kalman filter (AMSKF), is employed as the feature selector, and the neural network random weight method is utilized in the proposed AMSKF technique as a classifier. In the conducted experiment, 11,781 peak-candidate samples are employed for validation. The samples are collected from three different peak event-related EEG signals of 30 healthy subjects: (1) single eye blink, (2) double eye blink, and (3) eye movement signals. The experimental results show that the proposed AMSKF feature selector is able to find the best combination of features and performs at par with existing related studies of epileptic EEG event classification.
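
    The combinatorial feature-selection step described above lends itself to a compact sketch. Below is a minimal, illustrative Python rendering of angle-modulated binary feature selection: a four-parameter trigonometric generator maps a continuous search vector to a bit mask over the candidate features, and a fitness function scores each mask. The simulated Kalman filter optimizer of the paper is replaced here by plain random search, and the data, classifier, and all parameter values are invented placeholders, not the authors' implementation.

      import numpy as np

      def angle_modulated_mask(params, n_features):
          """Map 4 continuous parameters (a, b, c, d) to n_features bits."""
          a, b, c, d = params
          x = np.linspace(0.0, 1.0, n_features)      # one sample point per feature
          g = np.sin(2 * np.pi * (x - a) * b * np.cos(2 * np.pi * (x - a) * c)) + d
          return g > 0                               # bit i selects feature i

      def fitness(mask, X, y):
          """Toy fitness: nearest-centroid accuracy on the selected features."""
          if not mask.any():
              return 0.0
          Xs = X[:, mask]
          mu0, mu1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
          pred = np.linalg.norm(Xs - mu1, axis=1) < np.linalg.norm(Xs - mu0, axis=1)
          return (pred == y).mean()

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 12))                 # synthetic "peak" features
      y = (X[:, 0] + X[:, 3] > 0).astype(int)        # only features 0 and 3 matter
      best, best_fit = None, -1.0
      for _ in range(300):                           # stand-in for the SKF search loop
          p = rng.uniform(-1, 1, size=4)
          f = fitness(angle_modulated_mask(p, X.shape[1]), X, y)
          if f > best_fit:
              best, best_fit = p, f
      print(angle_modulated_mask(best, X.shape[1]), best_fit)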

  9. The Other Memex: The Tangled Career of Vannevar Bush's Information Machine, the Rapid Selector.

    ERIC Educational Resources Information Center

    Burke, Colin

    1992-01-01

    Presents an historical overview of Vannevar Bush's efforts to develop a machine for free-form indexing and computerized information retrieval. Descriptions of the Memex concept and two related machines--the Rapid Selector and the Comparator--are provided; and the shift in emphasis to a device for business or cryptanalytic purposes is discussed.…

  10. Design structure for in-system redundant array repair in integrated circuits

    DOEpatents

    Bright, Arthur A.; Crumley, Paul G.; Dombrowa, Marc; Douskey, Steven M.; Haring, Rudolf A.; Oakland, Steven F.; Quellette, Michael R.; Strissel, Scott A.

    2008-11-25

    A design structure for repairing an integrated circuit during its operation. The integrated circuit comprises a multitude of memory arrays and a fuse box holding control data for controlling the redundancy logic of the arrays. The design structure provides the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; provides a source of alternate control data, external to the integrated circuit; and connects the source of alternate control data to the control data selector. The design structure further passes the alternate control data from its source, through the control data selector, to the memory arrays to control the redundancy logic of the memory arrays.

  11. Chiral capillary electrophoresis and nuclear magnetic resonance investigation on the structure-enantioselectivity relationship in synthetic cyclopeptides as chiral selectors.

    PubMed

    De Lorenzi, E; Massolini, G; Molinari, P; Galbusera, C; Longhi, R; Marinzi, C; Consonni, R; Chiari, M

    2001-04-01

    In the present work, synthetic cyclohexa- and cycloheptapeptides previously singled out by a combinatorial chemistry approach have been evaluated as chiral selectors in capillary electrophoresis. By applying the countercurrent migration technique and employing a new adsorbed coating, a series of dinitrophenyl amino acids as well as some chiral compounds of pharmaceutical interest have been evaluated for enantiorecognition. The results thus obtained led to a deeper investigation of the chiral discrimination process, by carrying out nuclear magnetic resonance (NMR) studies on selected cyclopeptide-analyte complexes. These studies shed light on the chemical groups involved in the analyte-selector interaction and provided useful information for a wider application of these cyclopeptides in the separation of other drug enantiomers.

  12. Optical bias selector based on a multilayer a-SiC:H optical filter

    NASA Astrophysics Data System (ADS)

    Vieira, M.; Vieira, M. A.; Louro, P.

    2017-08-01

    In this paper we present a MUX/DEMUX device based on a multilayer a-SiC:H optical filter that requires near-ultraviolet steady-state optical switches to select desired wavelengths in the visible range. Spectral response and transmittance measurements are presented and show the feasibility of tailoring the wavelength and bandwidth of a polychromatic mixture of different wavelengths. The selector filter is realized using a two-terminal double pi'n/pin a-SiC:H photodetector. Five visible communication channels are transmitted together, each with a specific bit sequence. The combined optical signal is analyzed by reading out the photocurrent under a near-UV front steady-state background. The data show that 2^5 current levels are detected, corresponding to the thirty-two possible on/off states. The proximity of the magnitudes of consecutive levels causes occasional errors in the decoded information. To minimize these errors, four parity bits are generated and stored along with the data word. The parity of the word is checked after reading the word to detect errors and correct the transmitted data. Results show that the background works as a selector in the visible range, shifting the sensor sensitivity, and, together with the parity check bits, allows the identification and decoding of the different input channels. A transmission capability of 60 kbps using the generated codeword was achieved. An optoelectronic model gives insight into the system physics.
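
    The error-correction step described above (four parity bits stored with each five-bit data word, checked on read-out to detect and correct errors) can be illustrated with a standard single-error-correcting Hamming code, which needs exactly four parity bits to protect a five-bit word. The abstract does not specify the exact code used, so the Python sketch below is one plausible scheme, not the paper's implementation.

      def hamming_encode(data_bits):                 # data_bits: list of 5 ints
          code = [0] * 10                            # 1-indexed; slot 0 unused
          for pos, bit in zip((3, 5, 6, 7, 9), data_bits):
              code[pos] = bit                        # data fill non-power-of-two slots
          for p in (1, 2, 4, 8):                     # parity p covers slots whose
              code[p] = sum(code[i] for i in range(1, 10) if i & p) % 2
          return code[1:]

      def hamming_correct(received):                 # received: 9 bits, maybe 1 flipped
          code = [0] + list(received)
          syndrome = sum(p for p in (1, 2, 4, 8)
                         if sum(code[i] for i in range(1, 10) if i & p) % 2)
          if syndrome:                               # syndrome is the 1-indexed error slot
              code[syndrome] ^= 1
          return [code[i] for i in (3, 5, 6, 7, 9)], syndrome

      word = [1, 0, 1, 1, 0]                         # one five-channel on/off state
      tx = hamming_encode(word)
      tx[4] ^= 1                                     # corrupt one bit in transit
      rx, err = hamming_correct(tx)
      assert rx == word                              # single error detected and fixed
      print(rx, "error at slot", err)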

  13. A C-Te-based binary OTS device exhibiting excellent performance and high thermal stability for selector application.

    PubMed

    Chekol, Solomon Amsalu; Yoo, Jongmyung; Park, Jaehyuk; Song, Jeonghwan; Sung, Changhyuck; Hwang, Hyunsang

    2018-08-24

    In this letter, we demonstrate a new binary ovonic threshold switching (OTS) selector device scalable down to ø30 nm based on C-Te. Our proposed selector device exhibits outstanding performance such as a high switching ratio (I_on/I_off > 10^5), an extremely low off-current (∼1 nA), an extremely fast operating speed of <10 ns (transition time of <2 ns and delay time of <8 ns), high endurance (10^9), and high thermal stability (>450 °C). The observed high thermal stability is caused by the relatively small atomic size of C compared to Te, which can effectively suppress the segregation and crystallization of Te in the OTS film. Furthermore, to confirm the functionality of the selector in a crossbar array, we evaluated a 1S-1R device by integrating our OTS device with a ReRAM (resistive random access memory) device. The 1S-1R integrated device exhibits successful suppression of leakage current at the half-selected cell and shows an excellent read-out margin (>2^12 word lines) in a fast read operation.

  14. In situ synthesis of di-n-butyl l-tartrate-boric acid complex chiral selector and its application in chiral microemulsion electrokinetic chromatography.

    PubMed

    Hu, Shaoqiang; Chen, Yonglei; Zhu, Huadong; Zhu, Jinhua; Yan, Na; Chen, Xingguo

    2009-11-06

    A novel procedure for in situ assembly of a complex chiral selector, the di-n-butyl l-tartrate-boric acid complex, by the reaction of di-n-butyl l-tartrate with boric acid in a running buffer is reported, and its application to the enantioseparation of beta-blockers and structurally related compounds by chiral microemulsion electrokinetic chromatography (MEEKC) is demonstrated. In order to achieve good enantioseparation, the effects of dibutyl l-tartrate and sodium tetraborate concentration, surfactant identity and concentration, cosurfactant, buffer pH and composition, and organic modifiers, as well as applied voltage and capillary length, were investigated. Ten pairs of enantiomers that could not be separated with dibutyl l-tartrate alone obtained good chiral separation using the complex chiral selector; among them, seven pairs could be baseline resolved under optimized experimental conditions. The fixation of the chiral centers by the formation of five-membered rings, and the opposite charge relative to the basic analytes, were thought to be the key factors giving the complex chiral selector its superior chiral recognition capability. The effect of the molecular structure of the analytes on enantioseparation is discussed in terms of molecular interactions.

  15. An ion-pair principle for enantioseparations of basic analytes by nonaqueous capillary electrophoresis using the di-n-butyl L-tartrate-boric acid complex as chiral selector.

    PubMed

    Wang, Li-Juan; Liu, Xiu-Feng; Lu, Qie-Nan; Yang, Geng-Liang; Chen, Xing-Guo

    2013-04-05

    A chiral recognition mechanism based on an ion-pair principle is proposed in this study. It rationalizes the enantioseparations of some basic analytes using the complex of di-n-butyl l-tartrate and boric acid as the chiral selector in methanolic background electrolytes (BGEs) by nonaqueous capillary electrophoresis (NACE). Mass spectrometry (MS) directly confirmed that triethylamine promoted the formation of a negatively charged di-n-butyl l-tartrate-boric acid chiral counter-ion with a complexation ratio of 2:1, and that this negatively charged counter-ion was the actual chiral selector in the ion-pair enantioseparations. It is assumed that triethylamine plays its role by adjusting the apparent acidity (pH*) of the running buffer to a higher value. Consequently, the effects of various basic electrolytes, both inorganic and organic, on the enantioseparations in NACE were investigated. The results showed that most of the basic electrolytes tested were favorable for the enantioseparations of basic analytes using the di-n-butyl l-tartrate-boric acid complex as the chiral ion-pair selector. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. A miniature 48-channel pressure sensor module capable of in situ calibration

    NASA Technical Reports Server (NTRS)

    Gross, C.; Juanarena, D. B.

    1977-01-01

    A new high-data-rate pressure sensor module with in situ calibration capability has been developed by the Langley Research Center to help reduce energy consumption in wind-tunnel facilities without loss of measurement accuracy. The sensor module allows nearly a two-order-of-magnitude increase in data rate over conventional electromechanically scanned pressure sampling techniques. The module consists of 16 solid-state pressure sensor chips and signal multiplexing electronics integrally mounted to a four-position pressure selector switch. One of the four positions of the pressure selector switch allows in situ calibration of the 16 pressure sensors; the other three positions allow 48 pressure inputs (three sets of 16) to be measured by the sensors. The small size of the sensor module allows mounting within many wind-tunnel models, thus eliminating long tube lengths and their correspondingly slow pressure response.

  17. Effect of thermal insulation on the electrical characteristics of NbOx threshold switches

    NASA Astrophysics Data System (ADS)

    Wang, Ziwen; Kumar, Suhas; Wong, H.-S. Philip; Nishi, Yoshio

    2018-02-01

    Threshold switches based on niobium oxide (NbOx) are promising candidates as bidirectional selector devices in crossbar memory arrays and as building blocks for neuromorphic computing. Here, it is experimentally demonstrated that the electrical characteristics of NbOx threshold switches can be tuned by engineering the thermal insulation. Increasing the thermal insulation by ~10× is shown to produce a ~7× reduction in threshold current and a ~45% reduction in threshold voltage. The reduced threshold voltage leads to a ~5× reduction in half-selection leakage, which highlights the effectiveness of engineering the thermal insulation to reduce the half-selection leakage of NbOx selectors. A thermal feedback model based on Poole-Frenkel conduction in NbOx explains the experimental results very well, which also provides strong evidence supporting the validity of the Poole-Frenkel mechanism in NbOx threshold switches.
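
    The thermal feedback mechanism invoked above can be sketched as a lumped electrothermal model: Poole-Frenkel conduction sets the current at a given temperature, and Joule heating through a thermal resistance Rth sets the temperature at a given current, solved self-consistently. The Python sketch below is qualitative only; the barrier height, permittivity, prefactor, and thickness are illustrative placeholders, not fitted NbOx constants.

      import numpy as np

      K_B = 8.617e-5                      # Boltzmann constant, eV/K
      Q, EPS0 = 1.602e-19, 8.854e-12      # electron charge (C), vacuum permittivity (F/m)
      EA, EPS_R = 0.25, 45.0              # trap barrier (eV) and permittivity: placeholders
      G0, THICK = 5e-3, 2e-8              # conductance prefactor (S), film thickness (m)

      def steady_current(v, r_th, t_amb=300.0, iters=400):
          """Self-consistent I(V): current heats the film, heat raises the current."""
          t = t_amb
          for _ in range(iters):                   # damped fixed-point iteration
              e_field = v / THICK                  # V/m
              lowering = np.sqrt(Q * e_field / (np.pi * EPS0 * EPS_R))   # PF lowering, eV
              i = G0 * np.exp(-(EA - lowering) / (K_B * t)) * v
              t = 0.5 * t + 0.5 * (t_amb + r_th * i * v)   # Joule heating through Rth
          return i, t

      # Better thermal insulation (larger Rth) yields more current at the same bias,
      # so the threshold condition is reached at a lower voltage, qualitatively
      # matching the reported reductions in threshold voltage and current.
      for r_th in (1e5, 1e6):                      # thermal resistance, K/W (illustrative)
          i, t = steady_current(0.8, r_th)
          print(f"Rth = {r_th:.0e} K/W: I = {i * 1e6:.2f} uA, T = {t:.0f} K")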

  18. Chiral Separations

    NASA Astrophysics Data System (ADS)

    Stalcup, A. M.

    2010-07-01

    The main goal of this review is to provide a brief overview of chiral separations to researchers who are versed in the area of analytical separations but unfamiliar with chiral separations. To researchers who are not familiar with this area, there is currently a bewildering array of commercially available chiral columns, chiral derivatizing reagents, and chiral selectors for approaches that span the range of analytical separation platforms (e.g., high-performance liquid chromatography, gas chromatography, supercritical-fluid chromatography, and capillary electrophoresis). This review begins with a brief discussion of chirality before examining the general strategies and commonalities among all of the chiral separation techniques. Rather than exhaustively listing all the chiral selectors and applications, this review highlights significant issues and differences between chiral and achiral separations, providing salient examples from specific classes of chiral selectors where appropriate.

  19. Stereoisomers Separation

    NASA Astrophysics Data System (ADS)

    Wieczorek, Piotr

    The use of capillary electrophoresis for enantiomer separation and optical purity determination is presented. The contents start with basic information about the nature of stereoisomers and the mechanism of enantioseparation using capillary electrophoresis techniques. The molecules to be separated show identical chemical structure and electrochemical behavior. Therefore, chiral recognition of enantiomers is possible only through bonding to a chiral selector, and the separation is based on the very small differences in complexation energies of the diastereomeric complexes formed. Capillary electrophoresis is well suited to this purpose because many different compounds can be used as chiral selectors. The most widely used chiral selectors, such as cyclodextrins, crown ethers, chiral surfactants, macrocyclic antibiotics, transition metal complexes, and natural and synthetic polymers, and their application to enantioseparation are also discussed. Finally, examples of practical applications of electromigration techniques for the separation and determination of enantiomers are presented.

  20. Method and apparatus for in-system redundant array repair on integrated circuits

    DOEpatents

    Bright, Arthur A [Croton-on-Hudson, NY; Crumley, Paul G [Yorktown Heights, NY; Dombrowa, Marc B [Bronx, NY; Douskey, Steven M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Oakland, Steven F [Colchester, VT; Ouellette, Michael R [Westford, VT; Strissel, Scott A [Byron, MN

    2008-07-29

    Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling the redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external to the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from its source, through the control data selector, to the memory arrays to control the redundancy logic of the memory arrays.

  1. Method and apparatus for in-system redundant array repair on integrated circuits

    DOEpatents

    Bright, Arthur A [Croton-on-Hudson, NY; Crumley, Paul G [Yorktown Heights, NY; Dombrowa, Marc B [Bronx, NY; Douskey, Steven M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Oakland, Steven F [Colchester, VT; Ouellette, Michael R [Westford, VT; Strissel, Scott A [Byron, MN

    2008-07-08

    Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling the redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external to the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from its source, through the control data selector, to the memory arrays to control the redundancy logic of the memory arrays.

  2. Method and apparatus for in-system redundant array repair on integrated circuits

    DOEpatents

    Bright, Arthur A.; Crumley, Paul G.; Dombrowa, Marc B.; Douskey, Steven M.; Haring, Rudolf A.; Oakland, Steven F.; Ouellette, Michael R.; Strissel, Scott A.

    2007-12-18

    Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling the redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external to the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from its source, through the control data selector, to the memory arrays to control the redundancy logic of the memory arrays.

  3. Scalable InP integrated wavelength selector based on binary search.

    PubMed

    Calabretta, Nicola; Stabile, Ripalta; Albores-Mejia, Aaron; Williams, Kevin A; Dorren, Harm J S

    2011-10-01

    We present an InP monolithically integrated wavelength selector that implements a binary search for selecting one from N modulated wavelengths. The InP chip requires only log2(N) optical filters and log2(N) optical switches. Experimental results show nanosecond reconfiguration and error-free wavelength selection of four modulated wavelengths with 2 dB of power penalty. © 2011 Optical Society of America
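
    The log2(N) scaling follows directly from the binary-search selection: each filter/switch stage discards half of the surviving wavelength comb. A minimal Python sketch of the switch-setting logic, under the assumption that each stage routes either the lower or the upper half of the remaining band:

      from math import log2

      def select_wavelength(n_channels, target):
          """Return the switch settings (one per stage) that isolate `target`."""
          assert log2(n_channels).is_integer() and 0 <= target < n_channels
          lo, hi, settings = 0, n_channels, []
          while hi - lo > 1:
              mid = (lo + hi) // 2
              if target < mid:              # switch routes the lower sub-band
                  settings.append(0); hi = mid
              else:                         # switch routes the upper sub-band
                  settings.append(1); lo = mid
          return settings                   # len(settings) == log2(n_channels)

      print(select_wavelength(4, 2))        # 4 channels -> 2 stages: [1, 0]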

  4. The Effects of Computerized Auditory Feedback on Electronic Article Surveillance Tag Placement in an Auto-Parts Distribution Center

    ERIC Educational Resources Information Center

    Goomas, David T.

    2008-01-01

    In this report from the field, computerized auditory feedback was used to inform order selectors and order selector auditors in a distribution center to add an electronic article surveillance (EAS) adhesive tag. This was done by programming handheld computers to emit a loud beep for high-priced items upon scanning the item's bar-coded Universal…

  5. Critical incidents influencing students' selection of elective science

    NASA Astrophysics Data System (ADS)

    Essary, Danny Ray

    Purpose of the study. The purpose of the study was to investigate the critical incidents that determined high school students' self-selection into and out of elective science classes. The Critical Incident Technique was used to gather data. Procedure. Subjects for the study were 436 students attending five high schools within the geographical boundaries of a northeast Texas county. Each student was enrolled in a senior-level government/economics course during the spring semester of 1997. Students enrolled and in attendance during the data collection procedures were the subjects of the study. The subjects recorded 712 usable critical incidents. Incidents were categorized by examiners, and a total of eleven incident categories emerged for analysis purposes. Incident frequencies were categorized by sample population, selectors, and nonselectors, subdivided by gender. Findings. The following categories emerged for study: (A) Mentored, (B) Requirements, (C) Personal Interest(s), (D) Level of Difficulty, (E) Time Restraints, (F) Future Concerns, (G) Grades, (H) Teacher, (I) Peer Influence, (J) Challenge, (K) Other Academic Experiences. Data were analyzed qualitatively to answer the research questions and quantitatively to test the hypotheses. Ten incident categories emerged for nonselectors and eleven for selectors. Of the twelve hypotheses, four failed to be rejected and eight were rejected. Conclusions. Nonselectors and selectors of elective science were influenced by various external factors. Requirements were influential for nonselectors. Nonselectors chose to take the minimum number of science classes necessary for graduation. Selectors were influenced by curriculum requirements, future concerns, and mentors. Special programs that required extra science classes were influential in students' decisions to enroll in elective science. Gender differences were not influential for selectors or nonselectors of elective science.

  6. Tone signal generator for producing multioperator tone signals using an operator circuit including a waveform generator, a selector and an enveloper

    DOEpatents

    Dong, Qiujie; Jenkins, Michael V.; Bernadas, Salvador R.

    1997-01-01

    A frequency modulation (FM) tone signal generator for generating a FM tone signal is disclosed. The tone signal generator includes a waveform generator having a plurality of wave tables, a selector and an enveloper. The waveform generator furnishes a waveform signal in response to a phase angle address signal. Each wave table stores a different waveform. The selector selects one of the wave tables in response to a plurality of selection signals such that the selected wave table largely provides the waveform signal upon being addressed largely by the phase angle address signal. Selection of the selected wave table varies with each selection signal. The enveloper impresses an envelope signal on the waveform signal. The envelope signal is used as a carrier or modulator for generating the FM tone signal.
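
    The operator structure described above (wave tables addressed by a phase-angle signal, a selector choosing one table, and an enveloper whose output can serve as FM carrier or modulator) can be sketched in a few lines of Python. Table contents, envelope shape, and all constants below are illustrative, not the patent's circuit.

      import numpy as np

      SR, N = 44100, 1024
      TABLES = {                                   # the "plurality of wave tables"
          "sine": np.sin(2 * np.pi * np.arange(N) / N),
          "saw":  2 * (np.arange(N) / N) - 1,
      }

      def operator(freq, dur, select="sine", env_decay=5.0, phase_mod=None):
          t = np.arange(int(SR * dur)) / SR
          phase = (freq * t) % 1.0                 # phase-angle address signal
          if phase_mod is not None:                # FM: modulator shifts the phase
              phase = (phase + phase_mod) % 1.0
          wave = TABLES[select][(phase * N).astype(int)]   # wave-table lookup
          return wave * np.exp(-env_decay * t)     # enveloper

      mod = operator(440.0, 0.5, "sine") * 0.3             # modulator operator
      tone = operator(220.0, 0.5, "saw", phase_mod=mod)    # carrier operator
      print(tone[:5])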

  7. Internal filament modulation in low-dielectric gap design for built-in selector-less resistive switching memory application

    NASA Astrophysics Data System (ADS)

    Chen, Ying-Chen; Lin, Chih-Yang; Huang, Hui-Chun; Kim, Sungjun; Fowler, Burt; Chang, Yao-Feng; Wu, Xiaohan; Xu, Gaobo; Chang, Ting-Chang; Lee, Jack C.

    2018-02-01

    Sneak path current is a severe hindrance to the application of high-density resistive random-access memory (RRAM) array designs. In this work, we demonstrate nonlinear (NL) resistive switching characteristics of a HfOx/SiOx-based stacking structure as a realization of selector-less RRAM devices. The NL characteristic was obtained by optimizing the internal filament location with a low effective dielectric constant in the HfOx/SiOx structure. The stacked HfOx/SiOx-based RRAM device, as a one-resistor-only memory cell, is applicable without an additional selector device to solve the sneak path issue, with a switching voltage of ~1 V, which is desirable for low-power operation in built-in-nonlinearity crossbar array configurations.

  8. Extended range heat pump system and centrifugal compressor for use therewith

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoemaker, J.F.

    1988-04-26

    Improvements in heat pump systems having indoor and outdoor heat exchangers, at least two compressors for supplying a refrigerant medium under pressure thereto, and means for circulating the medium through the heat exchangers; the improvement comprises a selector valve associated with each of the compressors. The selector valves provide that any one or more of the compressors, in any combination, can be selected for operation, each of the selector valves having a first operating condition placing the associated compressor in series with the heat exchangers and a second operating condition whereby the associated compressor is bypassed. When the selector valves for at least two of the compressors are simultaneously in their first positions, a flow path is established through the associated compressors and through the heat exchangers, all in series, via a two-position changeover valve and associated conduit means. The changeover valve has a first position wherein at least one of the compressors is connected in series with the first and second heat exchangers to produce flow of the medium in one direction therethrough, and a second position whereby at least one compressor is connected to produce flow of the medium in the opposite direction through the heat exchangers.

  9. Capillary electrophoresis separation of peptide diastereomers that contain methionine sulfoxide by dual cyclodextrin-crown ether systems.

    PubMed

    Zhu, Qingfu; Heinemann, Stefan H; Schönherr, Roland; Scriba, Gerhard K E

    2014-12-01

    A dual-selector system employing achiral crown ethers in combination with cyclodextrins has been developed for the separation of peptide diastereomers that contain methionine sulfoxide. The combinations of the crown ethers 15-crown-5, 18-crown-6, Kryptofix® 21, and Kryptofix® 22 with β-cyclodextrin, carboxymethyl-β-cyclodextrin, and sulfated β-cyclodextrin were screened at pH 2.5 and pH 8.0 using a 40/50.2 cm, 50 μm id fused-silica capillary and a separation voltage of 25 kV. No diastereomer separation was observed in the sole presence of crown ethers, while only sulfated β-cyclodextrin was able to resolve some peptide diastereomers at pH 8.0. Depending on the amino acid sequence of the peptide and the applied cyclodextrin, the addition of crown ethers, especially the Kryptofix® diaza-crown ethers, resulted in significantly enhanced chiral recognition. Keeping one selector of the dual system constant, increasing concentrations of the second selector resulted in increased peak resolution and analyte migration time for peptide-crown ether-cyclodextrin combinations. The simultaneous diastereomer separation of three structurally related peptides was achieved using the dual-selector system. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Chiral analysis of UV nonabsorbing compounds by capillary electrophoresis using macrocyclic antibiotics: 1. Separation of aspartic and glutamic acid enantiomers.

    PubMed

    Bednar, P; Aturki, Z; Stransky, Z; Fanali, S

    2001-07-01

    Glycopeptide antibiotics, namely vancomycin and teicoplanin, were evaluated in capillary electrophoresis for the analysis of UV-nonabsorbing compounds such as aspartic and glutamic acid enantiomers. Electrophoretic runs were performed in laboratory-made polyacrylamide-coated capillaries using the partial-filling counter-current method in order to keep the absorbing chiral selector out of the detector path. The background electrolyte consisted of an aqueous or aqueous-organic sorbic acid/histidine buffer in the pH range of 4.5-6.5 and the appropriate concentration of chiral selector. Several experimental parameters, such as antibiotic concentration and type, buffer pH, organic modifier, and type and concentration of the absorbing co-ion (for indirect UV detection), were studied in order to find the optimum conditions for the resolution of the two underivatized amino acids into their enantiomers. Of the two investigated chiral selectors, vancomycin proved to be the more useful, allowing relatively high chiral resolution of the studied compounds even at low concentration. The optimized method (10 mM sorbic acid/histidine, pH 5, and 10 mM vancomycin) was used for the analysis of real samples such as tooth dentine and beer.

  11. X-Ray Pulse Selector With 2 ns Lock-in Phase Setting And Stability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindenau, B.; Raebiger, J.; Polachowski, S.

    2004-05-12

    Selector devices, which are based on magnetically suspended, high-speed triangular shutter rotors, have been designed and built in cooperation with ESRF, APS, and recently SPring-8 for time-resolved studies with isolated x-ray pulses at white beam lines. The x-ray pulse selection is accomplished by means of a beam channel along one of the edges of the triangular rotor, which opens once per revolution. Entrance and exit apertures of the channel can be designed wedge shaped for variable tuning of the channel height between 0.1 mm and 0.9 mm. At the 1 kHz maximum operation frequency of a 220 mm diameter disk with 190 mm channel length, the practicable open times of the channel are demonstrated to range down to 200 ns. The selector drive electronics is directly coupled to the storage ring RF clock for rotational phase control. It allows for continuous selector operation phase-locked to the temporal pulse structure of the synchrotron at 2 ns RMS stability. The phase angle between the pulse transmission period and the synchrotron bunch sequence can be adjusted with similar precision for x-ray pulse selection according to the experimental needs. Installations: ID09, Michael Wulff; BioCARS 14-BM, Reinhard Pahl; BL40-XU, Shin-ichi Adachi.

  12. The docking of chiral analytes on proline-based chiral stationary phases: A molecular dynamics study of selectivity.

    PubMed

    Ashtari, M; Cann, N M

    2015-08-28

    Molecular dynamics simulations are employed to examine the selectivity of four proline-based chiral stationary phases in two solvent environments, a relatively apolar n-hexane/2-propanol solvent and a polar water/methanol solvent. The four chiral surfaces are based on a BOC-terminated diproline, a TMA-terminated diproline, a TMA-terminated triproline and a TMA-terminated hexaproline. This range of chiral selectors allows an analysis of the impact of oligomer length and terminal group on selectivity while the two solvent environments indicate the impact of solvent hydrogen bonding and polarity. The selector-analyte interactions are examined for six closely related analytes that each have an aromatic moiety, a hydrogen, and an alcohol group directly bonded to the stereocenter. The analytes differ in the nature of the aromatic group (phenyl or anthracyl), in the attachment point (to the central ring or a side ring in the anthracyl), and in the fourth group bonded to the carbon (CH3, CF3, or C2H5). For each of the 48 solvent+selector+analyte systems, selectivity factors are calculated and, when possible, compared to experiment. The docking mode for these proline-based selectors is analyzed. Copyright © 2015 Elsevier B.V. All rights reserved.
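
    For readers unfamiliar with how such simulations connect back to chromatography, the selectivity factor is commonly related to the difference in selector-analyte binding free energies via alpha = exp(-ΔΔG/RT). A trivial worked example in Python with invented energies (not values from the paper):

      import math

      R, T = 8.314e-3, 298.0            # gas constant, kJ/(mol K); temperature, K
      dG_R, dG_S = -24.7, -22.9         # illustrative binding free energies, kJ/mol
      alpha = math.exp(-(dG_R - dG_S) / (R * T))
      print(f"selectivity alpha = {alpha:.2f}")   # > 1: R enantiomer binds more strongly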

  13. Universal Rate Model Selector: A Method to Quickly Find the Best-Fit Kinetic Rate Model for an Experimental Rate Profile

    DTIC Science & Technology

    2017-08-01

    ...processes to find a kinetic rate model that provides a high degree of correlation with experimental data. Furthermore, the use of kinetic rate...
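
    The report's title makes the method clear even though the abstract is fragmentary: fit a library of candidate kinetic rate models to an experimental rate profile and keep the one with the highest correlation. A minimal Python sketch of that selection loop, with an invented candidate library and synthetic first-order data:

      import numpy as np

      t = np.linspace(0.1, 10, 25)
      c = 2.0 * np.exp(-0.35 * t) + np.random.default_rng(1).normal(0, 0.01, t.size)

      # Each candidate linearizes as y = k*t + b for some transform y = f(c).
      CANDIDATES = {
          "zero-order (c)":      lambda c: c,
          "first-order (ln c)":  lambda c: np.log(c),
          "second-order (1/c)":  lambda c: 1.0 / c,
      }

      def r_squared(x, y):
          k, b = np.polyfit(x, y, 1)               # linear fit of the transform
          resid = y - (k * x + b)
          return 1 - resid.var() / y.var()

      scores = {name: r_squared(t, f(c)) for name, f in CANDIDATES.items()}
      best = max(scores, key=scores.get)           # highest correlation wins
      print(best, {k: round(v, 4) for k, v in scores.items()})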

  14. One bipolar transistor selector - One resistive random access memory device for cross bar memory array

    NASA Astrophysics Data System (ADS)

    Aluguri, R.; Kumar, D.; Simanjuntak, F. M.; Tseng, T.-Y.

    2017-09-01

    A bipolar transistor selector was connected in series with a resistive switching memory device to study its memory characteristics for application in crossbar memory arrays. The metal-oxide-based p-n-p bipolar transistor selector showed good selectivity of about 10^4 with high retention and long endurance, demonstrating its usefulness in crossbar RRAM devices. Zener tunneling is found to be the main conduction phenomenon responsible for the high selectivity. The 1BT-1R device demonstrated good memory characteristics, with nonlinearity of 2 orders of magnitude, selectivity of about 2 orders of magnitude, and long retention of more than 10^5 s. A one-bit-line pull-up scheme shows that a 650 kb crossbar array made with these 1BT-1R devices works well with more than 10% read margin, proving its suitability for future memory technology applications.

  15. Synthesis of cellulose-2,3-bis(3,5-dimethylphenylcarbamate) in an ionic liquid and its chiral separation efficiency as stationary phase.

    PubMed

    Liu, Runqiang; Zhang, Yijun; Bai, Lianyang; Huang, Mingxian; Chen, Jun; Zhang, Yuping

    2014-04-11

    A chiral selector, cellulose-2,3-bis(3,5-dimethylphenylcarbamate) (CBDMPC), was synthesized by reacting 3,5-dimethylphenyl isocyanate with microcrystalline cellulose dissolved in the ionic liquid 1-allyl-3-methylimidazolium chloride (AMIMCl). The obtained chiral selector was characterized by infrared spectroscopy, elemental analysis, and 1H NMR. The selector was reacted with 3-aminopropylsilanized silica gel to obtain the CBDMPC-bonded chiral stationary phase (CSP). Chromatographic evaluation of the prepared CSPs was conducted by high performance liquid chromatography (HPLC), and baseline separation of three typical fungicides, hexaconazole, metalaxyl, and myclobutanil, was achieved using n-hexane/isopropanol as the mobile phase at a flow rate of 1.0 mL/min. Experimental results also showed that AMIMCl could be recycled easily and reused in the preparation of CSPs as an effective reaction medium.

  16. Synthesis of Cellulose-2,3-bis(3,5-dimethylphenylcarbamate) in an Ionic Liquid and Its Chiral Separation Efficiency as Stationary Phase

    PubMed Central

    Liu, Runqiang; Zhang, Yijun; Bai, Lianyang; Huang, Mingxian; Chen, Jun; Zhang, Yuping

    2014-01-01

    A chiral selector, cellulose-2,3-bis(3,5-dimethylphenylcarbamate) (CBDMPC), was synthesized by reacting 3,5-dimethylphenyl isocyanate with microcrystalline cellulose dissolved in the ionic liquid 1-allyl-3-methylimidazolium chloride (AMIMCl). The obtained chiral selector was characterized by infrared spectroscopy, elemental analysis, and 1H NMR. The selector was reacted with 3-aminopropylsilanized silica gel to obtain the CBDMPC-bonded chiral stationary phase (CSP). Chromatographic evaluation of the prepared CSPs was conducted by high performance liquid chromatography (HPLC), and baseline separation of three typical fungicides, hexaconazole, metalaxyl, and myclobutanil, was achieved using n-hexane/isopropanol as the mobile phase at a flow rate of 1.0 mL/min. Experimental results also showed that AMIMCl could be recycled easily and reused in the preparation of CSPs as an effective reaction medium. PMID:24733066

  17. Tone signal generator for producing multioperator tone signals using an operator circuit including a waveform generator, a selector and an enveloper

    DOEpatents

    Dong, Q.; Jenkins, M.V.; Bernadas, S.R.

    1997-09-09

    A frequency modulation (FM) tone signal generator for generating a FM tone signal is disclosed. The tone signal generator includes a waveform generator having a plurality of wave tables, a selector and an enveloper. The waveform generator furnishes a waveform signal in response to a phase angle address signal. Each wave table stores a different waveform. The selector selects one of the wave tables in response to a plurality of selection signals such that the selected wave table largely provides the waveform signal upon being addressed largely by the phase angle address signal. Selection of the selected wave table varies with each selection signal. The enveloper impresses an envelope signal on the waveform signal. The envelope signal is used as a carrier or modulator for generating the FM tone signal. 17 figs.

  18. Revised Simulation Model of the Control System, Displays, and Propulsion System for an ASTOVL Lift Fan Aircraft

    NASA Technical Reports Server (NTRS)

    Franklin, James A.

    1997-01-01

    This report describes revisions to a simulation model that was developed for use in piloted evaluations of takeoff, transition, hover, and landing characteristics of an advanced short takeoff and vertical landing lift fan fighter aircraft. These revisions have been made to the flight/propulsion control system, head-up display, and propulsion system to reflect recent flight and simulation experience with short takeoff and vertical landing operations. They include nonlinear inverse control laws in all axes (eliminating earlier versions with state rate feedback), throttle scaling laws for flightpath and thrust command, control selector commands apportioned based on relative effectiveness of the individual controls, lateral guidance algorithms that provide more flexibility for terminal area operations, and a simpler representation of the propulsion system. The model includes modes tailored to the phases of the aircraft's operation, with several response types which are coupled to the aircraft's aerodynamic and propulsion system effectors through a control selector tailored to the propulsion system. Head-up display modes for approach and hover are integrated with the corresponding control modes. Propulsion system components modeled include a remote lift fan and a lift-cruise engine. Their static performance and dynamic responses are represented by the model. A separate report describes the subsonic, power-off aerodynamics and jet induced aerodynamics in hover and forward flight, including ground effects.
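
    The phrase "control selector commands apportioned based on relative effectiveness of the individual controls" describes a control-allocation problem. One standard way to realize it, shown below as a hedged Python sketch rather than the simulation model's actual law, is a weighted pseudo-inverse that distributes a commanded angular acceleration among redundant effectors; the effectiveness matrix entries and weights are invented.

      import numpy as np

      # Rows: pitch/roll/yaw acceleration per unit deflection; columns: effectors
      # (e.g. elevon, lift-fan nozzle, rudder, reaction jet). Values invented.
      B = np.array([[2.0, 0.5, 0.0,  0.3],
                    [0.1, 1.5, 0.0, -0.2],
                    [0.0, 0.0, 1.2,  0.8]])
      W = np.diag([1.0, 2.0, 1.0, 4.0])        # usage penalties per effector

      def allocate(cmd):
          """Minimize u^T W u subject to B u = cmd (weighted pseudo-inverse)."""
          w_inv = np.linalg.inv(W)
          return w_inv @ B.T @ np.linalg.solve(B @ w_inv @ B.T, cmd)

      u = allocate(np.array([0.3, -0.1, 0.05]))   # commanded angular accelerations
      print("deflections:", u)
      print("achieved:", B @ u)                   # reproduces the command exactly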

  19. Speech Privacy Problems

    DTIC Science & Technology

    1945-08-18

    were interconnected, however, it was found that one of the oscillators had an intermittent defect. This trouble was cleared by removing the...switches, i.e., two pairs are included in the unit; one of a pair of selectors (the "fast selector") steps each time the latch operates, the other (the "slow selector") steps once each time the fast selector completes 25 steps. Thus, a total of 625 steps, or changes in permutation, is involved be

  20. Fast-acting valve actuator

    DOEpatents

    Cho, Nakwon

    1980-01-01

    A fast-acting valve actuator utilizes a spring driven pneumatically loaded piston to drive a valve gate. Rapid exhaust of pressurized gas from the pneumatically loaded side of the piston facilitates an extremely rapid piston stroke. A flexible selector diaphragm opens and closes an exhaust port in response to pressure differentials created by energizing and de-energizing a solenoid which controls the pneumatic input to the actuator as well as selectively providing a venting action to one side of the selector diaphragm.

  1. Field Impact Evaluation Report on the Electronic Tabular Display Subsystem (ETABS). The Electronic Tabular Display Subsystem Field Impact Evaluation Team.

    DTIC Science & Technology

    1979-10-01

    modification. Phase VII of this program, Preliminary Radar Associate/Nonradar Control Training and Assistant Controller Duties, is currently programmed for...software diagnostics. Advantage. The additional staffing would handle the increased workload in an efficient manner and prevent a deterioration of morale...alternative 2 can be employed if any delays or problems prevent the timely installation of the additional storage element. SELECTOR CHANNEL. The selector

  2. Portable tester for determining gas content within a core sample

    DOEpatents

    Garcia, Jr., Fred; Schatzel, Steven J.

    1998-01-01

    A portable tester is provided for reading and displaying the pressure of a gas released from a rock core sample stored within a sealed container and for taking a sample of the released pressurized gas for chemical analysis, for subsequent use in a modified direct-method test which determines the volume and specific type of gas contained within the core sample. The portable tester includes a pair of low- and high-range electrical pressure transducers for detecting gas pressure; a pair of low- and high-range display units for displaying the pressure of the detected gas; a selector valve connected to the low- and high-range pressure transducers, with a selector knob for selecting gas flow to one of the flow paths; and a control valve having an inlet connection to the sealed container and outlets connected to: a sample gas canister, a second outlet port connected to the selector valve for routing the gas from the sealed container to either the low-range or high-range pressure transducer, and a connection for venting gas contained within the sealed container to the atmosphere. A battery is electrically connected to the unit and supplies the power for operating it. The pressure transducers, display units, selector and control valve means, and the battery are mounted to and housed within a protective casing for portable transport and use.

  3. Portable tester for determining gas content within a core sample

    DOEpatents

    Garcia, F. Jr.; Schatzel, S.J.

    1998-04-21

    A portable tester is provided for reading and displaying the pressure of a gas released from a rock core sample stored within a sealed container and for taking a sample of the released pressurized gas for chemical analysis, for subsequent use in a modified direct-method test which determines the volume and specific type of gas contained within the core sample. The portable tester includes a pair of low- and high-range electrical pressure transducers for detecting gas pressure; a pair of low- and high-range display units for displaying the pressure of the detected gas; a selector valve connected to the low- and high-range pressure transducers, with a selector knob for selecting gas flow to one of the flow paths; and a control valve having an inlet connection to the sealed container and outlets connected to: a sample gas canister, a second outlet port connected to the selector valve for routing the gas from the sealed container to either the low-range or high-range pressure transducer, and a connection for venting gas contained within the sealed container to the atmosphere. A battery is electrically connected to the unit and supplies the power for operating it. The pressure transducers, display units, selector and control valve means, and the battery are mounted to and housed within a protective casing for portable transport and use. 5 figs.

  4. The Deflector Selector: A Machine Learning Framework for Prioritizing Hazardous Object Deflection Technology Development

    NASA Astrophysics Data System (ADS)

    Nesvold, Erika; Greenberg, Adam; Erasmus, Nicolas; Van Heerden, Elmarie; Galache, J. L.; Dahlstrom, Eric; Marchis, Franck

    2018-01-01

    Several technologies have been proposed for deflecting a hazardous Solar System object on a trajectory that would otherwise impact the Earth. The effectiveness of each technology depends on several characteristics of the given object, including its orbit and size. The distribution of these parameters in the likely population of Earth-impacting objects can thus determine which of the technologies are most likely to be useful in preventing a collision with the Earth. None of the proposed deflection technologies has been developed and fully tested in space. Developing every proposed technology is currently prohibitively expensive, so determining now which technologies are most likely to be effective would allow us to prioritize a subset of proposed deflection technologies for funding and development. We will present a new model, the Deflector Selector, that takes as its input the characteristics of a hazardous object or population of such objects and predicts which technology would be able to perform a successful deflection. The model consists of a machine-learning algorithm trained on data produced by N-body integrations simulating the deflections. We will describe the model and present the results of tests of the effectiveness of nuclear explosives, kinetic impactors, and gravity tractors on three simulated populations of hazardous objects.

  5. The Deflector Selector: A machine learning framework for prioritizing hazardous object deflection technology development

    NASA Astrophysics Data System (ADS)

    Nesvold, E. R.; Greenberg, A.; Erasmus, N.; van Heerden, E.; Galache, J. L.; Dahlstrom, E.; Marchis, F.

    2018-05-01

    Several technologies have been proposed for deflecting a hazardous Solar System object on a trajectory that would otherwise impact the Earth. The effectiveness of each technology depends on several characteristics of the given object, including its orbit and size. The distribution of these parameters in the likely population of Earth-impacting objects can thus determine which of the technologies are most likely to be useful in preventing a collision with the Earth. None of the proposed deflection technologies has been developed and fully tested in space. Developing every proposed technology is currently prohibitively expensive, so determining now which technologies are most likely to be effective would allow us to prioritize a subset of proposed deflection technologies for funding and development. We present a new model, the Deflector Selector, that takes as its input the characteristics of a hazardous object or population of such objects and predicts which technology would be able to perform a successful deflection. The model consists of a machine-learning algorithm trained on data produced by N-body integrations simulating the deflections. We describe the model and present the results of tests of the effectiveness of nuclear explosives, kinetic impactors, and gravity tractors on three simulated populations of hazardous objects.
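
    The pipeline described above reduces to supervised classification: simulated deflection outcomes provide the labels, and object characteristics provide the features. The Python sketch below stands in for that pipeline with synthetic data and a decision tree; the labeling rule, feature ranges, and choice of learner are all assumptions, not the authors' trained model.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(42)
      n = 2000
      orbit_a = rng.uniform(0.8, 3.0, n)       # semi-major axis, AU
      diameter = rng.uniform(50, 1000, n)      # object diameter, m
      warning = rng.uniform(1, 20, n)          # warning time before impact, years

      # Toy ground truth standing in for N-body simulation outcomes:
      # 0 = kinetic impactor, 1 = gravity tractor, 2 = nuclear explosive
      label = np.where(diameter > 500, 2, np.where(warning > 10, 1, 0))

      X = np.column_stack([orbit_a, diameter, warning])
      clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, label)

      # Predict the workable technology for a new hypothetical object
      print(clf.predict([[1.5, 120.0, 14.0]]))     # small, long warning -> tractor (1)
      print("training accuracy:", clf.score(X, label))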

  6. Enantioresolution in electrokinetic chromatography-complete filling technique using sulfated gamma-cyclodextrin. Software-free topological anticipation.

    PubMed

    Escuder-Gilabert, Laura; Martín-Biosca, Yolanda; Medina-Hernández, María José; Sagrado, Salvador

    2016-10-07

    Few papers have tried to predict the resolution ability of chiral selectors in capillary electrophoresis for the separation of the enantiomers of chiral compounds. In a previous work, we used molecular information available on-line to establish enantioresolution levels of basic compounds using highly sulfated β-CD (HS-β-CD) as the chiral selector in electrokinetic chromatography-complete filling technique (EKC-CFT). The present study is a continuation of this previous work, introducing some novelties. In this work, the ability of sulfated γ-cyclodextrin (S-γ-CD) as a chiral selector in EKC-CFT is modelled for the first time. Thirty-three structurally unrelated cationic and neutral compounds (drugs and pesticides) are studied. Categorical enantioresolution levels (RsC, 0 or 1) are assigned from experimental enantioresolution values obtained at different S-γ-CD concentrations. Novel topological parameters connected to the chiral carbon (C*-parameters) are introduced. Four C*-parameters and a topological parameter of the whole molecule (aromatic atom count) are the most important variables according to a discriminant partial least squares variable-selection process, which suggests that the topology adjacent to the chiral carbon is preponderant in anticipating the RsC levels. A software-free anticipation protocol for new molecules is proposed. Over the current set of molecules evaluated, 100% correct anticipations (resolved and non-resolved compounds) are obtained, while the anticipation of some compounds remains undetermined. A criterion is introduced to flag compounds whose resolution should not be anticipated. Copyright © 2016 Elsevier B.V. All rights reserved.
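
    The screening step (discriminant partial least squares with variable selection over topological descriptors) can be sketched as follows in Python; the descriptor matrix, class labels, and component count are synthetic stand-ins for the paper's C*-parameters and data set.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(7)
      X = rng.normal(size=(33, 8))                 # 33 compounds, 8 descriptors
      rsc = (X[:, 0] - 0.5 * X[:, 3] > 0).astype(float)   # categorical level RsC

      pls = PLSRegression(n_components=2).fit(X, rsc)
      importance = np.abs(pls.x_weights_[:, 0])    # first-component weights
      print("descriptor ranking:", np.argsort(importance)[::-1])

      pred = (pls.predict(X).ravel() > 0.5).astype(float)  # 0/1 anticipation
      print("correct anticipations:", (pred == rsc).mean())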

  7. Separation of Undersampled Composite Signals Using the Dantzig Selector with Overcomplete Dictionaries

    DTIC Science & Technology

    2014-06-02

    2011). [22] Li, Q., Micchelli, C., Shen, L., and Xu, Y. A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models. Inverse...system of equations and their relationship to the solution of Model (2), and present an algorithm with an iterative approach for finding these solutions...Using the fixed-point characterization above, the (k + 1)th iteration of the proximity operator algorithm to find the solution of the Dantzig

  8. FIBER AND INTEGRATED OPTICS, LASER APPLICATIONS, AND OTHER PROBLEMS IN QUANTUM ELECTRONICS: Numerical simulation of an unstable ring resonator with a Fourier phase corrector

    NASA Astrophysics Data System (ADS)

    Kliment'ev, S. I.; Kuprenyuk, V. I.; Lyubimov, V. V.; Sherstobitov, V. E.

    1989-04-01

    The results are given of calculations of the parameters of an unstable ring resonator with an internal angular selector based on a Fourier phase corrector. It is shown that the use of such a selector makes it possible to compensate partly for the effects of small-scale phase inhomogeneities and also to reduce the influence of edge diffraction on the structure of the field in the resonator.

  9. Chiral permselectivity in surface-modified nanoporous opal films.

    PubMed

    Cichelli, Julie; Zharov, Ilya

    2006-06-28

    Nanoporous, 7 μm thin opal films comprising 35 layers of 200 nm diameter SiO2 spheres were assembled on Pt electrodes and modified with chiral selector moieties on the silica surface. Diffusion of chiral redox species through the opals was studied by cyclic voltammetry. The chiral opal films demonstrate high selectivity for transport of one enantiomer over the other. This chiral permselectivity is attributed to surface-facilitated transport utilizing noncovalent interactions between the chiral permeant molecules and surface-bound chiral selectors.

  10. Mutually Exclusive Splicing of the Insect Dscam Pre-mRNA Directed by Competing Intronic RNA Secondary Structures

    PubMed Central

    Graveley, Brenton R.

    2008-01-01

    Summary Drosophila Dscam encodes 38,016 distinct axon guidance receptors through the mutually exclusive alternative splicing of 95 variable exons. Importantly, known mechanisms that ensure the mutually exclusive splicing of pairs of exons cannot explain this phenomenon in Dscam. I have identified two classes of conserved elements in the Dscam exon 6 cluster, which contains 48 alternative exons—the docking site, located in the intron downstream of constitutive exon 5, and the selector sequences, which are located upstream of each exon 6 variant. Strikingly, each selector sequence is complementary to a portion of the docking site, and this pairing juxtaposes one, and only one, alternative exon to the upstream constitutive exon. The mutually exclusive nature of the docking site:selector sequence interactions suggests that the formation of these competing RNA structures is a central component of the mechanism guaranteeing that only one exon 6 variant is included in each Dscam mRNA. PMID:16213213
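
    The proposed mechanism is, at its core, a string-complementarity test: a selector sequence can juxtapose its exon only if its reverse complement occurs in the docking site. A toy Python check with invented RNA sequences (not the real Dscam elements):

      COMP = str.maketrans("AUGC", "UACG")         # RNA base-pairing rules

      def pairs_with(selector, docking):
          """True if the selector's reverse complement occurs in the docking site."""
          rc = selector.translate(COMP)[::-1]
          return rc in docking

      docking_site = "GGUCAUUCGCAAGGAUAC"          # hypothetical docking site
      selectors = {"exon6.5": "UUGCGAAU",          # hypothetical selector sequences
                   "exon6.9": "CCCCCCCC"}
      for name, seq in selectors.items():
          print(name, pairs_with(seq, docking_site))   # only exon6.5 can pair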

  11. An amorphous titanium dioxide metal insulator metal selector device for resistive random access memory crossbar arrays with tunable voltage margin

    NASA Astrophysics Data System (ADS)

    Cortese, Simone; Khiat, Ali; Carta, Daniela; Light, Mark E.; Prodromakis, Themistoklis

    2016-01-01

    Resistive random access memory (ReRAM) crossbar arrays have become one of the most promising candidates for next-generation non-volatile memories. For ReRAM to become a mature technology, the sneak path current issue must be solved without compromising the advantages that crossbars offer in terms of electrical performance and fabrication complexity. Here, we present a highly integrable access device based on nickel and sub-stoichiometric amorphous titanium dioxide (TiO2-x), in a metal-insulator-metal crossbar structure. The high voltage margin of 3 V, among the highest reported for monolayer selector devices, and the good current density of 10^4 A/cm^2 make it suitable for sustaining ReRAM read and write operations, effectively tackling sneak currents in crossbars without increasing fabrication complexity in a 1 Selector 1 Resistor (1S1R) architecture. Furthermore, the voltage margin is found to be tunable by an annealing step without affecting the device's characteristics.

  12. Adaptive strategies for materials design using uncertainties

    DOE PAGES

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; ...

    2016-01-21

    Here, we compare several adaptive design strategies using a data set of 223 M2AX-family compounds for which the elastic properties [bulk (B), shear (G), and Young's (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor, and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don't. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.
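
    The loop described above is easy to reproduce in miniature: a regressor that reports a mean and an uncertainty per candidate, and a selector that either picks the best predicted value (greedy) or the best expected improvement (uncertainty-aware). The Python sketch below uses a bootstrap ensemble of polynomial fits on synthetic one-dimensional "materials"; everything about it is illustrative, not the M2AX study's setup.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(3)
      X = rng.uniform(-2.0, 2.0, size=(223, 1))    # 223 candidate "materials"
      true = 2.0 - X[:, 0] ** 2                    # hidden property to maximize

      def fit_predict(train_idx):
          """Bootstrap ensemble of quadratic fits -> mean and std per candidate."""
          preds = []
          for _ in range(20):
              b = rng.choice(train_idx, size=train_idx.size)
              coef = np.polyfit(X[b, 0], true[b], 2)
              preds.append(np.polyval(coef, X[:, 0]))
          stack = np.array(preds)
          return stack.mean(axis=0), stack.std(axis=0) + 1e-9

      def run(selector, n_start=5, n_iter=25):
          idx = list(rng.choice(len(X), n_start, replace=False))
          for _ in range(n_iter):
              mu, sd = fit_predict(np.array(idx))
              best = true[idx].max()
              z = (mu - best) / sd                 # expected improvement terms
              score = (mu - best) * norm.cdf(z) + sd * norm.pdf(z) if selector == "ei" else mu
              nxt = next(i for i in np.argsort(score)[::-1] if i not in idx)
              idx.append(nxt)
          return true[idx].max()

      print("greedy best:", run("greedy"), "| EI best:", run("ei"))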

  13. Advanced online control mode selection for gas turbine aircraft engines

    NASA Astrophysics Data System (ADS)

    Wiseman, Matthew William

    The modern gas turbine aircraft engine is a complex, highly nonlinear system that operates in a widely varying environment. Traditional engine control techniques based on the hydromechanical control concepts of early turbojet engines are unable to deliver the performance required of today's advanced engine designs. A new type of advanced control utilizing multiple control modes and an online mode selector is investigated, and various strategies for improving the baseline mode selection architecture are introduced. The ability to fine-tune actuator command outputs is added to the basic mode selection and blending process, and mode selection designs that are valid for the entire flight envelope are presented. Methods for optimizing the mode selector to improve overall engine performance are also discussed. Finally, using flight test data from a GE F110-powered F-16 aircraft, the full-envelope mode selector designs are validated and shown to provide significant performance benefits. Specifically, thrust command tracking is enhanced while critical engine limits are protected, with very little impact on engine efficiency.

  14. Adaptive strategies for materials design using uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James

    Here, we compare several adaptive design strategies using a data set of 223 M2AX-family compounds for which the elastic properties [bulk (B), shear (G), and Young's (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor, and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don't. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.

  15. A modified UCT method for biological nutrient removal: configuration and performance.

    PubMed

    Vaiopoulou, E; Aivasidis, A

    2008-07-01

    A pilot-scale prototype activated sludge system is presented which combines the University of Cape Town (UCT) concept with a step-denitrification cascade for the removal of carbon, nitrogen, and phosphorus. The experimental set-up consists of an anaerobic selector and stepwise feeding into three subsequent identical pairs of anoxic and oxic tanks. Raw wastewater with influent flow rates ranging between 48 and 168 l d(-1) was fed to the unit at hydraulic residence times (HRTs) of 5-18 h and was distributed at percentages of 60/25/15%, 40/30/30%, and 25/40/35% to the anaerobic selector, 2nd anoxic tank, and 3rd anoxic tank, respectively (influent flow distribution ahead of the anaerobic selector). The results for the entire experimental period showed high removal efficiencies of organic matter, 89% as total chemical oxygen demand removal and 95% as biochemical oxygen demand removal, 90% removal of total Kjeldahl nitrogen, total nitrogen removal through denitrification of 73%, and mean phosphorus removal of 67%, as well as excellent settleability. The highest removal efficiency and the optimum performance were recorded at an HRT of about 9 h and an influent flow rate of 96 l d(-1), with 60% distributed to the anaerobic selector, 25% to the second anoxic tank, and 15% to the last anoxic tank. Consequently, the plant configuration enhanced removal efficiency, optimized performance, saved energy, formed good settling sludge, and provided operational assurance.
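
    As a quick check on the optimum operating point quoted above, the 96 l d(-1) influent split across the three feed points works out as follows (plain arithmetic from the stated percentages):

      influent = 96.0                                  # l/d, optimum flow rate
      split = {"anaerobic selector": 0.60,
               "2nd anoxic tank": 0.25,
               "3rd anoxic tank": 0.15}
      for tank, frac in split.items():
          print(f"{tank}: {influent * frac:.1f} l/d")  # 57.6 / 24.0 / 14.4 l/d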

  16. Experimental and Computational Characterization of Combustion Phenomena

    DTIC Science & Technology

    2006-05-01

    combustors without installing glass, quartz, or sapphire windows when using terahertz radiation. To explore the potential diagnostics utility of T...laser was reduced using a Spectra-Physics Model 3980 pulse selector. This device employs a TeO2 acousto-optic modulator to select subsets of pulses...equipped with a UG-11 and two WG-295 colored glass filters to reduce visible and laser-scattered light, respectively. OH-PLIF images were acquired

  17. Structurally Engineered Nanoporous Ta2O5-x Selector-Less Memristor for High Uniformity and Low Power Consumption.

    PubMed

    Kwon, Soonbang; Kim, Tae-Wook; Jang, Seonghoon; Lee, Jae-Hwang; Kim, Nam Dong; Ji, Yongsung; Lee, Chul-Ho; Tour, James M; Wang, Gunuk

    2017-10-04

    A memristor architecture based on metal-oxide materials would have great promise in achieving exceptional energy efficiency and higher scalability in next-generation electronic memory systems. Here, we propose a facile method for fabricating selector-less memristor arrays using an engineered nanoporous Ta2O5-x architecture. The device was fabricated in the form of crossbar arrays, and it functions as a switchable rectifier with a self-embedded nonlinear switching behavior and ultralow power consumption (∼2.7 × 10⁻⁶ W), which results in effective suppression of crosstalk interference. In addition, we determined that the essential switching elements, such as the programming power, the sneak current, the nonlinearity value, and the device-to-device uniformity, could be enhanced by in-depth structural engineering of the pores in the Ta2O5-x layer. Our results, on the basis of the structural engineering of metal-oxide materials, could provide an attractive approach for fabricating simple and cost-efficient memristor arrays with acceptable device uniformity and low power consumption without the need for additional addressing selectors.

  18. Sequence requirements of oligonucleotide chiral selectors for the capillary electrophoresis resolution of low-affinity DNA binders.

    PubMed

    Tohala, Luma; Oukacine, Farid; Ravelet, Corinne; Peyrin, Eric

    2017-05-01

    We recently reported that a great variety of DNA oligonucleotides (ONs) used as chiral selectors in partial-filling capillary electrophoresis (CE) exhibited interesting enantioresolution properties toward low-affinity DNA binders. Herein, the sequence prerequisites of ONs for the CE enantioseparation process were studied. First, the chiral resolution properties of a series of homopolymeric sequences (Poly-dT) of different lengths (from 5- to 60-mer) were investigated. It was shown that the size-dependent random coil-like conformation of Poly-dT favorably acted on the apparent selectivity and resolution. The base-unpairing state was also an important factor in the chiral resolution ability of ONs, as the switch from the single-stranded to the double-stranded structure was responsible for a significant decrease in the analyte selectivity range. Finally, chemical diversity enhanced the enantioresolution ability of single-stranded ONs. The present work could lay the foundation for the design of high-performing ON chiral selectors for the CE separation of weak DNA binder enantiomers. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Threshold switching in SiGeAsTeN chalcogenide glass prepared by As ion implantation into sputtered SiGeTeN film

    NASA Astrophysics Data System (ADS)

    Liu, Guangyu; Wu, Liangcai; Song, Zhitang; Liu, Yan; Li, Tao; Zhang, Sifan; Song, Sannian; Feng, Songlin

    2017-12-01

    A memory cell composed of a selector device and a storage device is the basic unit of phase change memory. The threshold switching effect, the main operating principle of selectors, is a universal phenomenon in chalcogenide glasses. In this work, we put forward a safe and controllable method of preparing a SiGeAsTeN chalcogenide film by implanting As ions into sputtered SiGeTeN films. For the SiGeAsTeN material, the phase structure maintains the amorphous state even at high temperature, indicating that no phase transition occurs for this chalcogenide-based material. The electrical test results show that the SiGeAsTeN-based devices exhibit good threshold switching characteristics and that the switching voltage decreases with increasing As content. The decrease in valence alternation pairs, which reduces the trap state density, may be the physical mechanism behind the lower switch-on voltage, making the SiGeAsTeN material more applicable in selector devices through component optimization.

  20. Resistive Switching of Ta2O5-Based Self-Rectifying Vertical-Type Resistive Switching Memory

    NASA Astrophysics Data System (ADS)

    Ryu, Sungyeon; Kim, Seong Keun; Choi, Byung Joon

    2018-01-01

    To efficiently increase the capacity of resistive switching random-access memory (RRAM) while maintaining the same area, a vertical structure similar to a vertical NAND flash structure is needed. In addition, the sneak-path current through half-selected neighboring memory cells should be mitigated by integrating a selector device with each RRAM cell. In this study, an integrated vertical-type RRAM cell and selector device was fabricated and characterized. Ta2O5 as the switching layer and TaOxNy as the selector layer were used to preliminarily study the feasibility of such an integrated device. To make side contact of the bottom electrode with the active layers, a thick Al2O3 insulating layer was placed between the Pt bottom electrode and the Ta2O5/TaOxNy stacks. Resistive switching phenomena were observed at relatively low currents (below 10 μA) in this vertical-type RRAM device. The TaOxNy layer acted as a nonlinear resistor with moderate nonlinearity. Both its low-resistance and high-resistance states were well retained for up to 1000 s.

  1. Enantioselective determination by capillary electrophoresis with cyclodextrins as chiral selectors.

    PubMed

    Fanali, S

    2000-04-14

    This review surveys the separation of enantiomers by capillary electrophoresis using cyclodextrins as chiral selectors. Cyclodextrins and their derivatives have been widely employed for the direct chiral resolution of a wide range of enantiomers, mainly of pharmaceutical interest; selected examples are reported in the tables. For method optimisation, several parameters influencing the enantioresolution (e.g., cyclodextrin type and concentration, buffer pH and composition, and the presence of organic solvents or complexing additives in the buffer) were considered and discussed. Finally, selected applications to real samples, such as pharmaceutical formulations and biological and medical samples, are also discussed.

  2. Military Operations Research. Winter 1996. Volume 1, Number 4

    DTIC Science & Technology

    1996-01-01

    [Search-snippet fragments; a control-system block diagram (plant, disturbance, input/output, control law, system goal, identifier, selector, adapter) rendered as text has been removed.] ...analysts for many years. It is designed to provide a quick reference for models that represent the effects of a conventional attack against ground... satellites offer this capability. This poses the additional challenge as to how many highways one can "see" per unit time. He did, however, design a

  3. Bidentate urea-based chiral selectors for enantioselective high performance liquid chromatography: synthesis and evaluation of "Crab-like" stationary phases.

    PubMed

    Kotoni, Dorina; Villani, Claudio; Bell, David S; Capitani, Donatella; Campiglia, Pietro; Gasparrini, Francesco

    2013-07-05

    A rational approach for the design and preparation of two new "Crab-like" totally synthetic, brush-type chiral stationary phases is presented. Enantiopure diamines, namely 1,2-diaminocyclohexane and 1,2-diphenyl-1,2-ethylenediamine, were treated with 3-(triethoxysilyl)propyl isocyanate to yield reactive ureido selectors that were then attached to unmodified silica particles through a stable bidentate tether via a facile two-step, one-pot procedure. A full chemical characterization of the new materials was obtained through solid-state NMR (both (29)Si and (13)C CPMAS) spectroscopy. Columns packed with the two Crab-like chiral stationary phases allow for different separation modes (normal-phase liquid chromatography, reversed-phase liquid chromatography and polar organic mode) and show high stability at basic pH values. In particular, the Crab-like column containing the 1,2-diphenyl-1,2-ethylenediamine selector proved to be a promising candidate for the resolution of a wide range of racemates (including benzodiazepines, N-derivatized amino acids, and free carboxylic acids) in both normal-phase and polar organic modes. An Hmin of 9.57 at a superficial velocity of 0.80 mm/s (corresponding to 0.8 mL/min) was obtained through van Deemter analysis, based on toluene, for the Crab-like column with the 1,2-diphenyl-1,2-ethylenediamine selector (250 mm × 4.6 mm I.D.), with a calculated reduced height equivalent to a theoretical plate (h) of only 1.91. Finally, comparative studies were performed with a commercially available polymeric P-CAP-DP column in order to evaluate the enantioselectivity and resolution of the Crab-like columns. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. A Case–Control Study of Socio-Economic and Nutritional Characteristics as Determinants of Dental Caries in Different Age Groups, Considered as Public Health Problem: Data from NHANES 2013–2014

    PubMed Central

    Chávez-Lamas, Nubia M.; Gracia-Cortés, Ma. del Carmen; Moreno-Báez, Arturo; Arceo-Olague, Jose G.; Galván-Tejada, Jorge I.

    2018-01-01

    One of the principal conditions that affects oral health worldwide is dental caries, occurring in about 90% of the global population. This pathology has been considered a challenge because of its high prevalence, besides being a chronic but preventable disease that can be caused by a range of factors, demographic and dietary among others. Based on this problem, in this research a demographic and dietary features analysis is performed for the classification of subjects according to their oral health status based on caries and the age group to which they belong, using as feature selector a technique based on the fast backward selection (FBS) approach for the development of three predictive models, one for each age range (group 1: 10–19; group 2: 20–59; group 3: 60 or more years old). For validation, net reclassification improvement (NRI), AUC, ROC, and OR values are used to evaluate classification accuracy. We analyzed 189 demographic and dietary features from the National Health and Nutrition Examination Survey (NHANES) 2013–2014. Each model obtained statistically significant results for most features and narrow OR confidence intervals. Age group 1 obtained a mean NRI = −0.080 and AUC = 0.933; age group 2 obtained a mean NRI = −0.024 and AUC = 0.787; and age group 3 obtained a mean NRI = −0.129 and AUC = 0.735. Based on these results, it is concluded that these specific demographic and dietary features are significant determinants for estimating the oral health status of patients based on their likelihood of developing caries, and the age group could imply different risk factors for subjects. PMID:29748513

  5. A Case–Control Study of Socio-Economic and Nutritional Characteristics as Determinants of Dental Caries in Different Age Groups, Considered as Public Health Problem: Data from NHANES 2013–2014.

    PubMed

    Zanella-Calzada, Laura A; Galván-Tejada, Carlos E; Chávez-Lamas, Nubia M; Gracia-Cortés, Ma Del Carmen; Moreno-Báez, Arturo; Arceo-Olague, Jose G; Celaya-Padilla, Jose M; Galván-Tejada, Jorge I; Gamboa-Rosales, Hamurabi

    2018-05-10

    One of the principal conditions that affects oral health worldwide is dental caries, occurring in about 90% of the global population. This pathology has been considered a challenge because of its high prevalence, besides being a chronic but preventable disease that can be caused by a range of factors, demographic and dietary among others. Based on this problem, in this research a demographic and dietary features analysis is performed for the classification of subjects according to their oral health status based on caries and the age group to which they belong, using as feature selector a technique based on the fast backward selection (FBS) approach for the development of three predictive models, one for each age range (group 1: 10–19; group 2: 20–59; group 3: 60 or more years old). For validation, net reclassification improvement (NRI), AUC, ROC, and OR values are used to evaluate classification accuracy. We analyzed 189 demographic and dietary features from the National Health and Nutrition Examination Survey (NHANES) 2013–2014. Each model obtained statistically significant results for most features and narrow OR confidence intervals. Age group 1 obtained a mean NRI = -0.080 and AUC = 0.933; age group 2 obtained a mean NRI = -0.024 and AUC = 0.787; and age group 3 obtained a mean NRI = -0.129 and AUC = 0.735. Based on these results, it is concluded that these specific demographic and dietary features are significant determinants for estimating the oral health status of patients based on their likelihood of developing caries, and the age group could imply different risk factors for subjects.

  6. Wide bandwidth phase-locked loop circuit

    NASA Technical Reports Server (NTRS)

    Koudelka, Robert David (Inventor)

    2005-01-01

    A PLL circuit uses a multiple-frequency-range PLL in order to phase-lock input signals having a wide range of frequencies. The PLL includes a VCO capable of operating in multiple different frequency ranges and a divider bank independently configurable to divide the output of the VCO. A frequency detector detects the frequency of the input signal and a frequency selector selects an appropriate frequency range for the PLL. The frequency selector automatically switches the PLL to a different frequency range as needed in response to a change in the input signal frequency. Frequency-range hysteresis is implemented to avoid operating the PLL near a frequency range boundary.
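
    The range-selection-with-hysteresis logic is simple to state in software terms. The following is an illustrative analogy only; the band edges and the 5% margin are invented numbers, not values from the patent.

        # Hypothetical frequency bands (Hz) and hysteresis margin.
        RANGES = [(10e6, 100e6), (100e6, 400e6), (400e6, 1e9)]
        HYSTERESIS = 0.05  # 5% tolerance around band edges

        def select_range(freq, current=None):
            """Pick a frequency-range index for `freq`, keeping the current
            range while it still covers the frequency within the hysteresis
            margin, so the selector does not thrash near a boundary."""
            if current is not None:
                lo, hi = RANGES[current]
                if lo * (1 - HYSTERESIS) <= freq <= hi * (1 + HYSTERESIS):
                    return current
            for i, (lo, hi) in enumerate(RANGES):
                if lo <= freq <= hi:
                    return i
            raise ValueError("frequency outside all ranges")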

  7. Hyper-track selector nuclear emulsion readout system aimed at scanning an area of one thousand square meters

    NASA Astrophysics Data System (ADS)

    Yoshimoto, Masahiro; Nakano, Toshiyuki; Komatani, Ryosuke; Kawahara, Hiroaki

    2017-10-01

    Automatic nuclear emulsion readout systems have seen remarkable progress since the original idea was developed almost 40 years ago. After the success of its full application to a large-scale neutrino experiment, OPERA, a much faster readout system, the hyper-track selector (HTS), has been developed. The HTS, which has an extremely wide-field objective lens, has reached a scanning speed of 4700 cm^2/h, nearly 100 times faster than the previous system, thereby strongly promoting many new experimental projects. We describe the concept, specifications, system structure, and achieved performance in this paper.

  8. Analysis of the threshold switching mechanism of a Te-SbO selector device for crosspoint nonvolatile memory applications

    NASA Astrophysics Data System (ADS)

    Kim, Young Seok; Park, Ji Woon; Lee, Jong Ho; Choi, In Ah; Heo, Jaeyeong; Kim, Hyeong Joon

    2017-10-01

    The threshold switching mechanism of Te-SbO thin films with a unique microstructure, in which Te nanoclusters are present in the SbO matrix, is analyzed. During the electro-forming process, amorphous Te filaments are formed in the Te nanocluster. However, unlike in conventional Ovonic threshold switching (TS) selector devices, it has been demonstrated that the off-current flows along the filament. Numerical calculations show that the off-current is due to traps present in the filament. We also observed changes in TS parameters by controlling the strength or volume of the filaments.

  9. The Fisher-Markov selector: fast selecting maximally separable feature subset for multiclass classification with applications to high-dimensional data.

    PubMed

    Cheng, Qiang; Zhou, Hongbo; Cheng, Jie

    2011-06-01

    Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only attain a local rather than the global optimum. Toward selecting the globally optimal subset of features efficiently, we introduce a new selector--which we call the Fisher-Markov selector--to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with sparsity as an optimization objective. With properly identified measures for sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach to optimizing the measures so as to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including a mid-dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results. From a pattern recognition and model selection viewpoint, our procedure shows that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function, which in fact can be obtained in an explicit expression.
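
    One ingredient of the approach, the Fisher criterion for discriminativeness, is easy to illustrate in isolation. The sketch below ranks features by a per-feature Fisher score only; it omits the sparsity term and the Markov random field optimization that define the full Fisher-Markov selector.

        import numpy as np

        def fisher_scores(X, y):
            """Per-feature Fisher score: between-class scatter of class means
            divided by the pooled within-class variance."""
            classes = np.unique(y)
            overall = X.mean(axis=0)
            between = np.zeros(X.shape[1])
            within = np.zeros(X.shape[1])
            for c in classes:
                Xc = X[y == c]
                between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
                within += len(Xc) * Xc.var(axis=0)
            return between / np.maximum(within, 1e-12)

        # Usage: keep the k highest-scoring features.
        # top_k = np.argsort(fisher_scores(X, y))[::-1][:k]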

  10. VELOCITY SELECTOR METHOD FOR THE SEPARATION OF ISOTOPES

    DOEpatents

    Britten, R.J.

    1957-12-31

    A velocity selector apparatus is described for separating and collecting an enriched fraction of the isotope of a particular element. The invention has the advantage over conventional mass spectrometers in that a magnetic field is not used, doing away with the attendant problems of magnetic field variation. The apparatus separates the isotopes by selectively accelerating the ionized constituents present in a beam of the polyisotopic substance that are of uniform kinetic energy, the acceleration being applied intermittently and at spaced points along the beam and in a direction normal to the direction of the propagation of the uniform energy beam whereby a transverse displacement of the isotopic constituents of different mass is obtained.
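
    A rough, toy-model reading of the scheme: because all ions share the same kinetic energy, heavier isotopes travel more slowly, arrive at a pulsed deflection point later, and respond differently to the same transverse momentum kick. Every number below (energy, geometry, kick size, uranium masses) is illustrative, and the model ignores the multiple spaced deflection stages of the actual apparatus.

        import math

        Q = 1.602e-19      # charge of a singly ionized atom (C)
        EK = 1000 * Q      # uniform kinetic energy: 1 keV
        D_GATE = 1.0       # source-to-deflector distance (m)
        L_DRIFT = 0.5      # deflector-to-collector drift (m)
        IMPULSE = 5e-23    # transverse momentum kick q*E*tau (kg m/s)
        AMU = 1.6605e-27   # atomic mass unit (kg)

        def arrival_and_displacement(mass_kg):
            v = math.sqrt(2 * EK / mass_kg)  # slower for heavier isotopes
            t_arr = D_GATE / v               # mass-dependent arrival time
            vy = IMPULSE / mass_kg           # transverse velocity from kick
            y = vy * (L_DRIFT / v)           # transverse offset at collector
            return t_arr, y

        for m_u in (235, 238):
            t, y = arrival_and_displacement(m_u * AMU)
            print(f"{m_u} u: arrives {t*1e6:.2f} us, displaced {y*1e3:.3f} mm")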

  11. Enantioseparation of rabeprazole and omeprazole by nonaqueous capillary electrophoresis with an ephedrine-based ionic liquid as the chiral selector.

    PubMed

    Ma, Zheng; Zhang, Lijuan; Lin, Lina; Ji, Ping; Guo, Xingjie

    2010-12-01

    An ephedrine-based chiral ionic liquid, (+)-N,N-dimethylephedrinium-bis(trifluoromethanesulfon)imidate ([DMP](+)[Tf(2)N](-)), served as both chiral selector and background electrolyte in nonaqueous capillary electrophoresis. The enantioseparation of rabeprazole and omeprazole was achieved in acetonitrile-methanol (60:40 v/v) containing 60 mM [DMP](+)[Tf(2)N](-). The influences of separation conditions, including the concentration of [DMP](+)[Tf(2)N](-), the electrophoretic media and the buffer, on enantioseparation were evaluated. The mechanism of enantioseparation was investigated and discussed. Ion-pair interaction and hydrogen bonding may be responsible for the main separation mechanism. Copyright © 2010 John Wiley & Sons, Ltd.

  12. Application of maltodextrin as chiral selector in capillary electrophoresis for quantification of amlodipine enantiomers in commercial tablets.

    PubMed

    Nojavan, Saeed; Pourmoslemi, Shabnam; Behdad, Hamideh; Fakhari, Ali Reza; Mohammadi, Ali

    2014-08-01

    Maltodextrin was investigated as a chiral selector in capillary electrophoresis (CE) analysis of amlodipine (AM) enantiomers. For development of a stereoselective CE method, various effective parameters on the enantioseparation were optimized. The best results were achieved on an uncoated fused silica capillary at 20 °C using phosphate buffer (100 mM, pH 4) containing 10% w/v maltodextrin (dextrose equivalent value 4-7). The UV detector was set at 214 nm and a constant voltage of 20 kV was applied. The range of quantitation was 2.5-250 µg/mL (R(2)  > 0.999) for both enantiomers. Intra- (n = 5) and interday (n = 3) relative standard deviation (RSD) values were less than 7%. The limits of quantitation and detection were 1.7 µg/mL and 0.52 µg/mL, respectively. Recoveries of R(+) and S(-) enantiomers from tablet matrix were 97.2% and 97.8%, respectively. The method was applied for the quantification of AM enantiomers in commercial tablets. Also, the enantioseparation capability of heparin was evaluated and the results showed that heparin did not have any chiral selector activity in this study. Copyright © 2014 Wiley Periodicals, Inc.

  13. Selector-free resistive switching memory cell based on BiFeO3 nano-island showing high resistance ratio and nonlinearity factor

    PubMed Central

    Jeon, Ji Hoon; Joo, Ho-Young; Kim, Young-Min; Lee, Duk Hyun; Kim, Jin-Soo; Kim, Yeon Soo; Choi, Taekjib; Park, Bae Ho

    2016-01-01

    Highly nonlinear bistable current-voltage (I–V) characteristics are necessary in order to realize high-density resistive random access memory (ReRAM) devices that are compatible with cross-point stack structures. Up to now, such I–V characteristics have been achieved by introducing complex device structures consisting of selection elements (selectors) and memory elements connected in series. In this study, we report bipolar resistive switching (RS) behaviours of nano-crystalline BiFeO3 (BFO) nano-islands grown on Nb-doped SrTiO3 substrates, with a large ON/OFF ratio of 4,420. In addition, the BFO nano-islands exhibit asymmetric I–V characteristics with a high nonlinearity factor of 1,100 in the low resistance state. Such selector-free RS behaviours are enabled by the mosaic structures and pinned downward ferroelectric polarization in the BFO nano-islands. The high resistance ratio and nonlinearity factor suggest that our BFO nano-islands can be extended to an N × N array of N = 3,740, corresponding to ~10⁷ bits. Therefore, our BFO nano-island, showing both a high resistance ratio and a high nonlinearity factor, offers a simple and promising building block for high-density ReRAM. PMID:27001415

  14. Development of the Spacecraft Materials Selector Expert System

    NASA Technical Reports Server (NTRS)

    Pippin, H. G.

    2000-01-01

    A specific knowledge base to evaluate the on-orbit performance of selected materials on spacecraft is being developed under contract to the NASA SEE program. An artificial intelligence software package, the Boeing Expert System Tool (BEST), contains an inference engine used to operate knowledge bases constructed to selectively recall and distribute information about materials performance in space applications. This same system is used to make estimates of the environmental exposures expected for a given space flight. The performance capabilities of the Spacecraft Materials Selector (SMS) knowledge base are described in this paper. A case history for a planned flight experiment on ISS is shown as an example of the use of the SMS, and capabilities and limitations of the knowledge base are discussed.

  15. Carbon Nanotube Formic Acid Sensors Using a Nickel Bis(ortho-diiminosemiquinonate) Selector.

    PubMed

    Lin, Sibo; Swager, Timothy M

    2018-03-23

    Formic acid is corrosive, and a sensitive and selective sensor could be useful in industrial, medical, and environmental settings. We present a chemiresistor for detection of formic acid composed of single-walled carbon nanotubes (CNTs) and nickel bis(ortho-diiminosemiquinonate) (1), a planar metal complex that can act as a ditopic hydrogen-bonding selector. Formic acid is detected in concentrations as low as 83 ppb. The resistance of the material decreases on exposure to formic acid, but slightly increases on exposure to acetic acid. We propose that 1 assists in partial protonation of the CNT by formic acid, but the response toward acetic acid is dominated by inter-CNT swelling. This technology establishes CNT-based chemiresistive discrimination between formic and acetic acid vapors.

  16. Easy sperm processing technique allowing exclusive accumulation and later usage of DNA-strandbreak-free spermatozoa.

    PubMed

    Ebner, T; Shebl, O; Moser, M; Mayer, R B; Arzt, W; Tews, G

    2011-01-01

    Sperm DNA fragmentation is increased in poor-quality semen samples and correlates with failed fertilization, impaired preimplantation development and reduced pregnancy outcome. Common sperm preparation techniques may reduce the percentage of strandbreak-positive spermatozoa, but, to date, there is no reliable approach to exclusively accumulate strandbreak-free spermatozoa. To analyse the efficiency of special sperm selection chambers (Zech-selectors made of glass or polyethylene) in terms of strandbreak reduction, 39 subfertile men were recruited and three probes (native, density gradient and Zech-selector) were used to check for strand breaks using the sperm chromatin dispersion test. The mean percentage of affected spermatozoa in the ejaculate was 15.8 ± 7.8% (range 5.0–42.1%). Density gradient centrifugation did not significantly improve the quality of the selected spermatozoa (14.2 ± 7.0%). However, glass chambers completely removed 90% of spermatozoa showing strand breaks, and polyethylene chambers removed 76%. Both types of Zech-selectors were equivalent in their efficiency, significantly reduced DNA damage (P < 0.001) and, in this respect, performed better than density gradient centrifugation (P < 0.001). As far as is known, this is the first report on a sperm preparation technique concentrating spermatozoa unaffected by DNA damage. The special chambers most probably select for sperm motility and/or maturity. Copyright © 2010 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  17. Separation of enilconazole enantiomers in capillary electrophoresis with cyclodextrin-type chiral selectors and investigation of structure of selector-selectand complexes by using nuclear magnetic resonance spectroscopy.

    PubMed

    Gogolashvili, Ann; Tatunashvili, Elene; Chankvetadze, Lali; Sohajda, Tamas; Szeman, Julianna; Salgado, Antonio; Chankvetadze, Bezhan

    2017-08-01

    In the present study, the enantiomer migration order (EMO) of enilconazole in the presence of various cyclodextrins (CDs) was investigated by capillary electrophoresis (CE). Opposite EMOs of enilconazole were observed when β-CD or the sulfated heptakis(2-O-methyl-3,6-di-O-sulfo)-β-CD (HMDS-β-CD) was used as the chiral selector. Nuclear magnetic resonance (NMR) spectroscopy was used to study the mechanism of chiral recognition between the enilconazole enantiomers and these two cyclodextrins. On the basis of rotating-frame nuclear Overhauser effect (ROESY) experiments, the structure of an inclusion complex between enilconazole and β-CD was derived, in which (+)-enilconazole seemed to form a tighter complex than the (-)-enantiomer. This correlates well with the migration order of the enilconazole enantiomers observed in CE. No evidence of complexation between enilconazole and HMDS-β-CD could be gathered, owing to the lack of an intermolecular nuclear Overhauser effect (NOE). Most likely the interaction between enilconazole and HMDS-β-CD leads to the formation of a shallow external complex that is sufficient for the separation of the enantiomers in CE but cannot be evidenced by the ROESY experiment. Thus, in this particular case CE documents the presence of intermolecular interactions that are very difficult to evidence by other instrumental techniques. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Combined use of [TBA][L-ASP] and hydroxypropyl-β-cyclodextrin as selectors for separation of Cinchona alkaloids by capillary electrophoresis.

    PubMed

    Zhang, Yu; Yu, Haixia; Wu, Yujiao; Zhao, Wenyan; Yang, Min; Jing, Huanwang; Chen, Anjia

    2014-10-01

    In this paper, a new capillary electrophoresis (CE) separation and detection method was developed for the chiral separation of the four major Cinchona alkaloids (quinine/quinidine and cinchonine/cinchonidine) using hydroxypropyl-β-cyclodextrin (HP-β-CD) and a chiral ionic liquid ([TBA][L-ASP]) as selectors. Separation parameters such as buffer concentration, pH, HP-β-CD and chiral ionic liquid concentrations, capillary temperature, and separation voltage were investigated. After optimization of the separation conditions, baseline separation of three analytes (cinchonidine, quinine, cinchonine) was achieved in fewer than 7 min in ammonium acetate background electrolyte (pH 5.0) with the addition of 40 mM HP-β-CD and 14 mM [TBA][L-ASP], while baseline separation of cinchonine and quinidine was not obtained. Therefore, the first-order derivative electropherogram was applied to resolve the overlapping peaks. Regression equations revealed a good linear relationship between peak areas in first-order derivative electropherograms and concentrations of the two diastereomer pairs. The results not only indicated that the first-order derivative electropherogram was effective in determining a low-content component and components not fully separated from adjacent peaks, but also showed that the ionic liquid appears to be a very promising chiral selector in CE. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. 47 CFR 76.70 - Exemption from input selector switch rules.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... broadcast television station, or non-commercial educational television translator station operating with 5... service broadcast television station, or new non-commercial educational television translator station with...

  20. Application of antibiotics as chiral selectors for capillary electrophoretic enantioseparation of pharmaceuticals: a review.

    PubMed

    Dixit, Shuchi; Park, Jung Hag

    2014-01-01

    Recent years have witnessed several new trends in chiral separation, for example, the enantiorecognition ability of several new antibiotics has been explored using capillary electrophoresis (CE) prior to HPLC; antibiotics have been employed as chiral selectors (CSs) in a nonaqueous CE (NACE) mode; and several new detection techniques (namely, capacitively coupled contactless conductivity detection) have been used in combination with CE for quantification of enantiomers. On account of these emerging trends, this article aims to review the application of various classes of antibiotics for CE enantioseparation of pharmaceuticals. A detailed account of the basic factors affecting enantioseparation, certain limitations of antibiotics as CSs and strategies to mitigate them, and advantages of NACE while using antibiotics as CSs has also been presented. Copyright © 2013 John Wiley & Sons, Ltd.

  1. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  2. Fast pressure-sensor system

    NASA Technical Reports Server (NTRS)

    Gross, C.

    1976-01-01

    Miniature silicon-diaphragm sensors and signal multiplexer are mounted to ganged zero-operate-calibrate pressure selector switches. Device allows in-situ calibration, can be computer controlled, and measures at approximately 10,000 readings per second.

  3. Caregiver Stress

    MedlinePlus

    [Interactive-page residue removed. Recoverable headings from the Alzheimer's Association caregiving page: Tips to manage stress; 10 symptoms of caregiver stress; Denial about the disease and its effect on ...]

  4. The 3-amino-derivative of gamma-cyclodextrin as chiral selector of Dns-amino acids in electrokinetic chromatography.

    PubMed

    Giuffrida, A; Contino, A; Maccarrone, G; Messina, M; Cucinotta, V

    2009-04-24

    The enantioseparation of the enantiomeric pairs of 10 Dns derivatives of alpha-amino acids was successfully carried out by using, for the first time, the 3-amino derivative of gamma-cyclodextrin. The effects of pH and selector concentration on the migration times and resolution of the analytes were studied in detail. 3-Deoxy-3-amino-2(S),3(R)-gamma-cyclodextrin (GCD3AM) shows very good chiral recognition ability, even at very low concentrations, at all three investigated pH values, as shown by the very large values of selectivity and resolution towards several pairs of amino acids. The role played by the cavity, the substitution site and the protonation equilibria in the observed chiral selectivity, on varying the specific amino acid involved, is discussed.

  5. The LIM and POU homeobox genes ttx-3 and unc-86 act as terminal selectors in distinct cholinergic and serotonergic neuron types.

    PubMed

    Zhang, Feifan; Bhattacharya, Abhishek; Nelson, Jessica C; Abe, Namiko; Gordon, Patricia; Lloret-Fernandez, Carla; Maicas, Miren; Flames, Nuria; Mann, Richard S; Colón-Ramos, Daniel A; Hobert, Oliver

    2014-01-01

    Transcription factors that drive neuron type-specific terminal differentiation programs in the developing nervous system are often expressed in several distinct neuronal cell types, but to what extent they have similar or distinct activities in individual neuronal cell types is generally not well explored. We investigate this problem using, as a starting point, the C. elegans LIM homeodomain transcription factor ttx-3, which acts as a terminal selector to drive the terminal differentiation program of the cholinergic AIY interneuron class. Using a panel of different terminal differentiation markers, including neurotransmitter synthesizing enzymes, neurotransmitter receptors and neuropeptides, we show that ttx-3 also controls the terminal differentiation program of two additional, distinct neuron types, namely the cholinergic AIA interneurons and the serotonergic NSM neurons. We show that the type of differentiation program that is controlled by ttx-3 in different neuron types is specified by a distinct set of collaborating transcription factors. One of the collaborating transcription factors is the POU homeobox gene unc-86, which collaborates with ttx-3 to determine the identity of the serotonergic NSM neurons. unc-86 in turn operates independently of ttx-3 in the anterior ganglion where it collaborates with the ARID-type transcription factor cfi-1 to determine the cholinergic identity of the IL2 sensory and URA motor neurons. In conclusion, transcription factors operate as terminal selectors in distinct combinations in different neuron types, defining neuron type-specific identity features.

  6. Effect of molecular structure of tartrates on chiral recognition of tartrate-boric acid complex chiral selectors in chiral microemulsion electrokinetic chromatography.

    PubMed

    Hu, Shao-Qiang; Chen, Yong-Lei; Zhu, Hua-Dong; Shi, Hai-Jun; Yan, Na; Chen, Xing-Guo

    2010-08-20

    Eight L-tartrates and a D-tartrate with different alcohol moieties were used as chiral oils to prepare chiral microemulsions, which were utilized in conjunction with borate buffer to separate the enantiomers of beta-blockers or structurally related compounds by chiral microemulsion electrokinetic chromatography (MEEKC). Among them, six were found to have relatively good chiral separation performance, and their chiral recognition effect, in terms of both enantioselectivity and resolution, increased linearly with the number of carbon atoms in the alkyl group of the alcohol moiety. Tartrates containing alkyl groups of different structures but the same number of carbon atoms, i.e. one straight-chain and one branched-chain, provided similar enantioseparations. The trend was elucidated in terms of changes in the steric matching between the two enantiomers and the chiral selector. Furthermore, it was demonstrated for the first time that a water-insoluble solid compound, di-i-butyl L-tartrate (m.p. 73.5 °C), can be used as an oil to prepare a stable microemulsion for successful use in chiral MEEKC. In addition, a previously unreported critical effect of the microemulsion on chiral separation was found in this experiment, namely that it provides a hydrophobic environment that strengthens the interactions between the chiral selector and the enantiomers. Copyright 2010 Elsevier B.V. All rights reserved.

  7. Chemical Method of Urine Volume Measurement

    NASA Technical Reports Server (NTRS)

    Petrack, P.

    1967-01-01

    A system has been developed and qualified as flight hardware for the measurement of micturition volumes voided by crewmen during Gemini missions. This Chemical Urine Volume Measurement System (CUVMS) is used for obtaining samples of each micturition for post-flight volume determination and laboratory analysis for chemical constituents of physiological interest. The system is versatile with respect to volumes measured, with a capacity beyond the largest micturition expected to be encountered, and with respect to mission duration of inherently indefinite length. The urine sample is used for the measurement of total micturition volume by a tracer dilution technique, in which a fixed, predetermined amount of tritiated water is introduced and mixed into the voided urine, and the resulting concentration of the tracer in the sample is determined with a liquid scintillation spectrometer. The tracer employed does not interfere with the analysis for the chemical constituents of the urine. The CUVMS hardware consists of a four-way selector valve in which an automatically operated tracer metering pump is incorporated, a collection/mixing bag, and tracer storage accumulators. The assembled system interfaces with a urine receiver at the selector valve inlet, sample bags which connect to the side of the selector valve, and a flexible hose which carries the excess urine to the overboard drain connection. Results of testing have demonstrated system volume measurement accuracy within the specification limits of +/-5%, and operating reliability suitable for system use aboard the GT-7 mission, in which it was first used.
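
    The underlying tracer-dilution arithmetic is worth spelling out: after complete mixing, the tracer concentration equals the tracer amount divided by the total volume, so the void volume follows from one concentration measurement. The function and numbers below are illustrative, not CUVMS flight values.

        def micturition_volume(tracer_amount_uCi, sample_conc_uCi_per_ml,
                               tracer_volume_ml=0.0):
            """Urine volume from tracer dilution: after complete mixing,
            concentration = amount / (urine volume + tracer volume), hence
            urine volume = amount / concentration - tracer volume."""
            return tracer_amount_uCi / sample_conc_uCi_per_ml - tracer_volume_ml

        # Example: 10 uCi of tritiated water diluted to 0.025 uCi/mL implies
        # a 400 mL total, i.e. a 399 mL void after subtracting 1 mL of tracer.
        print(micturition_volume(10.0, 0.025, tracer_volume_ml=1.0))  # 399.0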

  8. Towards an African Philosophy of Education.

    ERIC Educational Resources Information Center

    Ocaya-Lakidi, Dent

    1980-01-01

    Compares and contrasts contemporary philosophies of education in Africa with two philosophical doctrines (naturalism and idealism). Topics discussed include value selectors, westernization, the role of missionaries in African education, critical consciousness, relevance, and African education today. (DB)

  9. Simulation model of the integrated flight/propulsion control system, displays, and propulsion system for ASTOVL lift-fan aircraft

    NASA Technical Reports Server (NTRS)

    Chung, W. Y. William; Borchers, Paul F.; Franklin, James A.

    1995-01-01

    A simulation model has been developed for use in piloted evaluations of takeoff, transition, hover, and landing characteristics of an advanced, short takeoff, vertical landing lift fan fighter aircraft. The flight/propulsion control system includes modes for several response types which are coupled to the aircraft's aerodynamic and propulsion system effectors through a control selector tailored to the lift fan propulsion system. Head-up display modes for approach and hover, tailored to their corresponding control modes are provided in the simulation. Propulsion system components modeled include a remote lift and a lift/cruise engine. Their static performance and dynamic response are represented by the model. A separate report describes the subsonic, power-off aerodynamics and jet induced aerodynamics in hover and forward flight, including ground effects.

  10. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from each model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on the models' posterior weights (or marginal likelihoods), and this study therefore also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), the harmonic mean estimator (HME), the stabilized harmonic mean estimator (SHME), and the thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and the BMA-TIE prediction outperforms the other BMA predictions. TIE is also highly stable: repeated TIE estimates of a conceptual model's marginal likelihood show significantly less variability than those of the other estimators. In addition, the SG surrogates efficiently facilitate BMA predictions, especially for BMA-TIE. The number of model executions needed for building the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
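
    The thermodynamic integration estimator itself is compact enough to sketch. The toy below uses a conjugate Gaussian model so that the power posteriors can be sampled exactly; for a real groundwater model each temperature would need its own MCMC run, and the cubic temperature ladder is an assumption.

        import numpy as np

        rng = np.random.default_rng(1)
        data = rng.normal(0.5, 1.0, size=50)   # synthetic observations
        n, s2 = len(data), 1.0                 # known noise variance
        mu0, tau2 = 0.0, 10.0                  # Gaussian prior on the mean

        def log_lik(theta):                    # log p(D | theta), vectorized
            return (-0.5 * n * np.log(2 * np.pi * s2)
                    - 0.5 * np.sum((data[None, :] - theta[:, None]) ** 2,
                                   axis=1) / s2)

        betas = np.linspace(0.0, 1.0, 21) ** 3  # ladder crowded near beta = 0
        e = []
        for b in betas:
            # Power posterior p_b(theta) ~ p(D|theta)^b p(theta); for this
            # conjugate model it is Gaussian with the parameters below.
            prec = 1.0 / tau2 + b * n / s2
            mean = (mu0 / tau2 + b * data.sum() / s2) / prec
            theta = rng.normal(mean, 1.0 / np.sqrt(prec), size=5000)
            e.append(log_lik(theta).mean())
        e = np.array(e)

        # ln p(D) = integral over beta of E_beta[ln p(D|theta)]; trapezoid rule.
        log_marginal = float(np.sum(np.diff(betas) * (e[:-1] + e[1:]) / 2.0))
        print(f"TIE estimate of ln p(D): {log_marginal:.3f}")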

  11. Cryogenic Photogrammetry and Radiometry for the James Webb Space Telescope Microshutters

    NASA Technical Reports Server (NTRS)

    Chambers, Victor J.; Morey, Peter A.; Zukowski, Barbara J.; Kutyrev, Alexander S.; Collins, Nicholas R.

    2012-01-01

    The James Webb Space Telescope (JWST) relies on several innovations to complete its five year mission. One vital technology is microshutters, the programmable field selectors that enable the Near Infrared Spectrometer (NIRSpec) to perform multi-object spectroscopy. Mission success depends on acquiring spectra from large numbers of galaxies by positioning shutter slits over faint targets. Precise selection of faint targets requires field selectors that are both high in contrast and stable in position. We have developed test facilities to evaluate microshutter contrast and alignment stability at their 35K operating temperature. These facilities used a novel application of image registration algorithms to obtain non-contact, sub-micron measurements in cryogenic conditions. The cryogenic motion of the shutters was successfully characterized. Optical results also demonstrated that shutter contrast far exceeds the NIRSpec requirements. Our test program has concluded with the delivery of a flight-qualified field selection subsystem to the NIRSpec bench.

  12. Selective excitation of LG 00, LG 01, and LG 02 modes by a solid core PCF based mode selector in MDM-Ro-FSO transmission systems

    NASA Astrophysics Data System (ADS)

    Chaudhary, Sushank; Amphawan, Angela

    2018-07-01

    Radio over free-space optics (Ro-FSO) provides an ambitious platform for the seamless integration of radio networks with optical networks. Three independent channels, each carrying 2.5 Gbps data on a 5 GHz radio carrier, are successfully transmitted over a free-space link of 2.5 km by using mode division multiplexing (MDM) of three modes, LG 00, LG 01, and LG 02, in conjunction with solid-core photonic crystal fibers (SC-PCFs). Moreover, the SC-PCFs are used as a mode selector in the proposed MDM-Ro-FSO system. The results are reported in terms of bit error rate, mode spectrum, and spatial profiles. The performance of the proposed Ro-FSO system is also evaluated under the influence of atmospheric turbulence in the form of different levels of fog, namely, light fog, thin fog, and heavy fog.

  13. Stacked 3D RRAM Array with Graphene/CNT as Edge Electrodes

    PubMed Central

    Bai, Yue; Wu, Huaqiang; Wang, Kun; Wu, Riga; Song, Lin; Li, Tianyi; Wang, Jiangtao; Yu, Zhiping; Qian, He

    2015-01-01

    There are two critical challenges that determine the array density of 3D RRAM: 1) the scaling limit in both horizontal and vertical directions; and 2) the integration of selector devices in the 3D structure. In this work, we present a novel 3D RRAM structure using low-dimensional materials, including 2D graphene and 1D carbon nanotubes (CNTs), as the edge electrodes. A two-layer 3D RRAM with monolayer graphene as the edge electrode is demonstrated. The electrical results reveal that the RRAM devices switch normally with this very thin, nanometer-scale edge electrode. Meanwhile, benefiting from the asymmetric carrier transport induced by the Schottky barriers at the metal/CNT and oxide/CNT interfaces, a selector-built-in 3D RRAM structure using a CNT as the edge electrode is successfully fabricated and characterized. Furthermore, a discussion of the potential for high array density is presented. PMID:26348797

  14. Stacked 3D RRAM Array with Graphene/CNT as Edge Electrodes.

    PubMed

    Bai, Yue; Wu, Huaqiang; Wang, Kun; Wu, Riga; Song, Lin; Li, Tianyi; Wang, Jiangtao; Yu, Zhiping; Qian, He

    2015-09-08

    There are two critical challenges that determine the array density of 3D RRAM: 1) the scaling limit in both horizontal and vertical directions; and 2) the integration of selector devices in the 3D structure. In this work, we present a novel 3D RRAM structure using low-dimensional materials, including 2D graphene and 1D carbon nanotubes (CNTs), as the edge electrodes. A two-layer 3D RRAM with monolayer graphene as the edge electrode is demonstrated. The electrical results reveal that the RRAM devices switch normally with this very thin, nanometer-scale edge electrode. Meanwhile, benefiting from the asymmetric carrier transport induced by the Schottky barriers at the metal/CNT and oxide/CNT interfaces, a selector-built-in 3D RRAM structure using a CNT as the edge electrode is successfully fabricated and characterized. Furthermore, a discussion of the potential for high array density is presented.

  15. The Drosophila Extradenticle and Homothorax selector proteins control branchless/FGF expression in mesodermal bridge-cells.

    PubMed

    Merabet, Samir; Ebner, Andreas; Affolter, Markus

    2005-08-01

    The stereotyped outgrowth of tubular branches of the Drosophila tracheal system is orchestrated by the local and highly dynamic expression profile of branchless (bnl), which encodes a secreted fibroblast growth factor (FGF)-like molecule. Despite the importance of the spatial and temporal bnl regulation, little is known about the upstream mechanisms that establish its complex expression pattern. Here, we show that the Extradenticle and Homothorax selector proteins control bnl transcription in a single cell per segment, the mesodermal bridge-cell. In addition, we observed that a key determinant of bridge-cell specification, the transcription factor Hunchback, is also required for bnl expression. Therefore, we propose that one of the functions of the bridge-cell is to synthesize and secrete the chemoattractant Bnl. These findings provide a hitherto unknown and interesting link between combinatorial inputs of transcription factors, cell-specific ligand expression and organ morphogenesis.

  16. High-performance liquid chromatographic separations of stereoisomers of chiral basic agrochemicals with polysaccharide-based chiral columns and polar organic mobile phases.

    PubMed

    Matarashvili, Iza; Shvangiradze, Iamze; Chankvetadze, Lali; Sidamonidze, Shota; Takaishvili, Nino; Farkas, Tivadar; Chankvetadze, Bezhan

    2015-12-01

    The separation of the stereoisomers of 23 chiral basic agrochemicals was studied on six different polysaccharide-based chiral columns in high-performance liquid chromatography with various polar organic mobile phases. Along with the successful separation of analyte stereoisomers, emphasis was placed on the effect of the chiral selector and mobile phase composition on the elution order of the stereoisomers. The interesting phenomenon of reversal of enantiomer/stereoisomer elution order as a function of the polysaccharide backbone (cellulose or amylose), the type of derivative (carbamate or benzoate), the nature and position of the substituent(s) in the phenylcarbamate moiety (methyl or chloro), and the nature of the mobile phase was observed. For several of the analytes containing two chiral centers, all four stereoisomers were resolved with at least one chiral selector/mobile phase combination. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Advances in chiral separations by nonaqueous capillary electrophoresis in pharmaceutical and biomedical analysis.

    PubMed

    Ali, Imran; Sanagi, Mohd Marsin; Aboul-Enein, Hassan Y

    2014-04-01

    NACE is an alternative technique to aqueous CE in the chiral separations of partially soluble racemates. Besides, partially water-soluble or insoluble chiral selectors may be exploited in the enantiomeric resolution in NACE. The high reproducibility due to low Joule heat generation and no change in BGE concentration may make NACE a routine analytical technique. These facts attracted scientists to use NACE for the chiral resolution. The present review describes the advances in the chiral separations by NACE and its application in pharmaceutical and biomedical analysis. The emphasis has been given to discuss the selection of the chiral selectors and organic solvents, applications of NACE, comparison between NACE and aqueous CE, and chiral recognition mechanism. Besides, efforts have also been made to predict the future perspectives of NACE. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Separation of profen enantiomers by capillary electrophoresis using cyclodextrins as chiral selectors.

    PubMed

    Blanco, M; Coello, J; Iturriaga, H; Maspoch, S; Pérez-Maseda, C

    1998-01-09

    A method for resolving the enantiomers of various 2-arylpropionic acids (viz. ketoprofen, ibuprofen and fenoprofen) by capillary zone electrophoresis (CZE) using a background electrolyte (BGE) containing a cyclodextrin as chiral selector is proposed. The effects of the type of cyclodextrin used and its concentration on resolution were studied, and heptakis-2,3,6-tri-O-methyl-beta-cyclodextrin was found to be the sole effective choice for the quantitative enantiomeric resolution of all the compounds tested. The influence of pH, BGE concentration, capillary temperature and addition of methanol to the BGE on resolution and other separation-related parameters was also studied. The three compounds studied can be enantiomerically resolved with a high efficiency in a short time (less than 20 min) with no capillary treatment. This makes the proposed method suitable for assessing the enantiomeric purity of commercially available pharmaceuticals.

  19. Digital controller for a Baum folding machine. [providing automatic counting and machine shutoff

    NASA Technical Reports Server (NTRS)

    Bryant, W. H. (Inventor)

    1974-01-01

    A digital controller for controlling the operation of a folding machine enables automatic folding of a desired number of sheets responsive to entry of that number into a selector. The controller includes three decade counter stages corresponding to rows of units, tens and hundreds push buttons, each stage including a decimal-to-BCD encoder, a buffer register, and a digital (binary) counter. The BCD representation of the selected count for each digit is loaded into the respective decade down counters. Pulses generated by a sensor and associated circuitry are used to decrease the count in the decade counters. When the content of the decade counters reaches either 0 or 1, a solenoid control valve is actuated which interrupts operation of the machine. A repeat switch, when actuated, prevents clearing of the buffer registers so that multiple groups of the same number of sheets can be folded without reentering the number into the selector.
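
    The counting logic translates directly into software. The class below is a behavioural analogy of the three-decade down-counter, stopping (and optionally reloading) at zero; the patent's hardware actually triggers at a count of 0 or 1 depending on valve timing.

        class FolderController:
            """Three-decade (units/tens/hundreds) BCD down-counter model."""

            def __init__(self, count, repeat=False):
                self.preset = count
                self.repeat = repeat
                self.load()

            def load(self):
                # Equivalent of loading the decade counters from the buffers.
                self.hundreds, rem = divmod(self.preset, 100)
                self.tens, self.units = divmod(rem, 10)

            def value(self):
                return self.hundreds * 100 + self.tens * 10 + self.units

            def pulse(self):
                """One sheet sensed: decrement; True means actuate shutoff."""
                if self.units > 0:
                    self.units -= 1
                elif self.tens > 0:
                    self.tens, self.units = self.tens - 1, 9
                elif self.hundreds > 0:
                    self.hundreds, self.tens, self.units = self.hundreds - 1, 9, 9
                if self.value() == 0:
                    if self.repeat:       # repeat switch: keep the preset
                        self.load()
                    return True           # interrupt machine operation
                return False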

  20. LFER and CoMFA studies on optical resolution of alpha-alkyl alpha-aryloxy acetic acid methyl esters on DACH-DNB chiral stationary phase.

    PubMed

    Carotti, A; Altomare, C; Cellamare, S; Monforte, A; Bettoni, G; Loiodice, F; Tangari, N; Tortorella, V

    1995-04-01

    The HPLC resolution of a series of racemic alpha-substituted alpha-aryloxy acetic acid methyl esters I on a pi-acid N,N'-(3,5-dinitrobenzoyl)-trans-1,2-diaminocyclohexane as chiral selector was modelled by linear free energy-related (LFER) equations and comparative molecular field analysis (CoMFA). Our results indicate that the retention process mainly depends on solute lipophilicity and steric properties, whereas enantioselectivity is primarily influenced by electrostatic and steric interactions. CoMFA provided additional information with respect to the LFER study, allowed the mixing of different subsets of I and led to a quantitative 3D model of steric and electrostatic factors responsible for chiral recognition.

  1. Proton Electrostatic Analyzer.

    DTIC Science & Technology

    1983-02-01

    [Report front-matter fragments: 2.1 Detector Assembly; 2.2 Analyzer (Energy Selector) Assembly; 2.3 Collimator; Spectrometer assembly; Base plate; Detector.] ...sensitive vehicle systems. Space objects undergo differential charging due to variations in physical properties among their surface regions. The rate and

  2. Finding Helpful Software Reviews.

    ERIC Educational Resources Information Center

    Kruse, Ted, Comp.

    1987-01-01

    Provides a list of evaluation services currently producing critical reviews of educational software. Includes information about The Apple K-12 Curriculum Software Reference, The Educational Software Preview, The Educational Software Selector, MicroSIFT, and Only The Best: The Discriminating Guide for Preschool-Grade 12. (TW)

  3. WoonyBird Restoration Plant Selector Manual

    EPA Science Inventory

    Modifying greenspaces to enhance habitat value has been proposed as a means towards protecting or restoring biodiversity in urban landscapes. As part of a framework for developing low-cost, low-impact enhancements that can be incorporated during the restoration of greenspaces to ...

  4. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with a Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of the least squares regression and Bayesian methods with a Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the Differential Evolution Adaptive Metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and the Morris- and DREAM(ZS)-based global sensitivity analyses yield almost identical rankings of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
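
    To make the contrast concrete, here is a minimal sketch of an iid Gaussian log-likelihood next to a simplified generalized likelihood that permits heteroscedastic, lag-one autocorrelated residuals. The full Schoups and Vrugt (2010) formulation additionally handles skewed and heavy-tailed innovations; the function names and the linear error-standard-deviation model below are assumptions for illustration.

    ```python
    import numpy as np

    def gaussian_loglik(res, sigma):
        """iid Gaussian log-likelihood of the residuals (the standard assumption)."""
        n = res.size
        return -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * np.sum(res**2) / sigma**2

    def generalized_loglik(res, y_sim, phi, s0, s1):
        """Simplified generalized log-likelihood: AR(1)-decorrelated innovations
        with an error sd that grows linearly with the simulated value."""
        a = res[1:] - phi * res[:-1]        # remove lag-1 autocorrelation
        sig = s0 + s1 * np.abs(y_sim[1:])   # heteroscedastic error sd
        return np.sum(-0.5 * np.log(2 * np.pi * sig**2) - 0.5 * (a / sig) ** 2)

    rng = np.random.default_rng(3)
    y_sim = np.linspace(1.0, 10.0, 200)
    res = 0.1 * np.abs(y_sim) * rng.normal(size=200)  # heteroscedastic errors
    print(gaussian_loglik(res, res.std()),
          generalized_loglik(res, y_sim, 0.0, 0.0, 0.1))
    ```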

  5. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  6. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    PubMed

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.

  7. Transparent Information Systems through Gateways, Front Ends, Intermediaries, and Interfaces.

    ERIC Educational Resources Information Center

    Williams, Martha E.

    1986-01-01

    Provides overview of design requirements for transparent information retrieval (implies that user sees through complexity of retrieval activities sequence). Highlights include need for transparent systems; history of transparent retrieval research; information retrieval functions (automated converters, routers, selectors, evaluators/analyzers);…

  8. 47 CFR 15.101 - Equipment authorization of unintentional radiators.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... TV interface device Declaration of Conformity or Certification. Cable system terminal device Declaration of Conformity. Stand-alone cable input selector switch Verification. Class B personal computers... used with Class B personal computers Declaration of Conformity or Certification. 1 Class B personal...

  9. 47 CFR 15.101 - Equipment authorization of unintentional radiators.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... TV interface device Declaration of Conformity or Certification. Cable system terminal device Declaration of Conformity. Stand-alone cable input selector switch Verification. Class B personal computers... used with Class B personal computers Declaration of Conformity or Certification. 1 Class B personal...

  10. 47 CFR 15.101 - Equipment authorization of unintentional radiators.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... TV interface device Declaration of Conformity or Certification. Cable system terminal device Declaration of Conformity. Stand-alone cable input selector switch Verification. Class B personal computers... used with Class B personal computers Declaration of Conformity or Certification. 1 Class B personal...

  11. 47 CFR 15.101 - Equipment authorization of unintentional radiators.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... TV interface device Declaration of Conformity or Certification. Cable system terminal device Declaration of Conformity. Stand-alone cable input selector switch Verification. Class B personal computers... used with Class B personal computers Declaration of Conformity or Certification. 1 Class B personal...

  12. 47 CFR 15.101 - Equipment authorization of unintentional radiators.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... TV interface device Declaration of Conformity or Certification. Cable system terminal device Declaration of Conformity. Stand-alone cable input selector switch Verification. Class B personal computers... used with Class B personal computers Declaration of Conformity or Certification. 1 Class B personal...

  13. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts; some are "true zeros," indicating that the drug-adverse event pair cannot occur, while the remaining zeros simply indicate pairs that have not occurred, or have not been reported, yet. In this paper, a zero-inflated Poisson (ZIP) model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the ZIP model parameters are obtained using the expectation-maximization algorithm. The test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the ZIP model based likelihood ratio test performs similarly to the Poisson model based likelihood ratio test when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
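
    As a simplified stand-in for the method, the sketch below maximizes a zero-inflated Poisson log-likelihood on a single vector of cell counts and forms a likelihood-ratio statistic against a fixed null reporting rate. The actual test compares each drug-adverse event cell against the rest of the table and uses the EM algorithm; the data and the null rate lam0 here are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    def zip_loglik(params, counts):
        """Zero-inflated Poisson log-likelihood; w = probability of a 'true zero'."""
        w, lam = params
        ll_zero = np.log(w + (1 - w) * np.exp(-lam))           # zero cells
        ll_pos = (np.log(1 - w) - lam + counts * np.log(lam)
                  - gammaln(counts + 1))                       # positive cells
        return np.where(counts == 0, ll_zero, ll_pos).sum()

    counts = np.array([0, 0, 0, 0, 1, 0, 2, 0, 0, 5, 0, 1])
    bounds_w = (1e-6, 1 - 1e-6)

    # alternative hypothesis: both w and lam free
    fit1 = minimize(lambda p: -zip_loglik(p, counts), x0=[0.3, 1.0],
                    bounds=[bounds_w, (1e-6, None)])
    # null hypothesis: reporting rate fixed at a baseline lam0, w profiled out
    lam0 = 0.5
    fit0 = minimize(lambda p: -zip_loglik([p[0], lam0], counts), x0=[0.3],
                    bounds=[bounds_w])
    lrt = 2 * (fit0.fun - fit1.fun)   # likelihood-ratio statistic
    print(fit1.x, lrt)
    ```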

  14. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of these methods is the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable to the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
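
    The idea can be illustrated on the simplest special case, a Wiener process with constant drift crossing a fixed threshold, where Fortet's integral equation relates the known probability of being above threshold to the unknown first-passage density and discretizes into a lower-triangular system solved by forward substitution. The paper treats the full time-varying integrate-and-fire model; everything below (names, grid, parameters) is an assumption, and the closed-form inverse-Gaussian density is used only as a check.

    ```python
    import numpy as np
    from scipy.stats import norm

    mu, sigma, x0, b = 1.0, 1.0, 0.0, 1.0   # drift, noise, start, threshold
    T, n = 3.0, 300
    dt = T / n
    t = dt * np.arange(1, n + 1)            # right edges of the time bins
    s = t - dt / 2                          # midpoints where f is evaluated

    # Fortet: P(X_t >= b | x0) = integral_0^t f(s) P(X_t >= b | X_s = b) ds
    lhs = norm.cdf((x0 + mu * t - b) / (sigma * np.sqrt(t)))
    f = np.zeros(n)
    for i in range(n):
        tau = t[i] - s[: i + 1]                  # time since each candidate crossing
        k = norm.cdf(mu * np.sqrt(tau) / sigma)  # P(X_t >= b | X_s = b)
        f[i] = (lhs[i] - dt * np.sum(f[:i] * k[:i])) / (dt * k[i])

    # check against the closed-form inverse-Gaussian first-passage density
    ig = ((b - x0) / (sigma * np.sqrt(2 * np.pi * s**3))
          * np.exp(-((b - x0 - mu * s) ** 2) / (2 * sigma**2 * s)))
    print(np.max(np.abs(f - ig)))   # small, up to discretization error
    ```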

  15. Chiral discrimination of sibutramine enantiomers by capillary electrophoresis and proton nuclear magnetic resonance spectroscopy.

    PubMed

    Lee, Yong-Jae; Choi, Seungho; Lee, Jinhoo; Nguyen, NgocVan Thi; Lee, Kyungran; Kang, Jong Seong; Mar, Woongchon; Kim, Kyeong Ho

    2012-03-01

    Capillary electrophoresis (CE) and proton nuclear magnetic resonance spectroscopy ((1)H-NMR) have been used to discriminate the enantiomers of sibutramine using cyclodextrin derivatives, and the possible correlation between the two techniques was examined. Good correlation was observed between the (1)H-NMR shift non-equivalence data for sibutramine and the degree of enantioseparation in CE. In the CE study, a method for enantiomeric separation and quantitation of sibutramine was developed using enantiomeric standards, based on 50 mM phosphate buffer at pH 3.0 with 10 mM methyl-beta-cyclodextrin (M-β-CD). An LOD of 0.05% and an LOQ of 0.2% for the S-sibutramine enantiomer were achieved, and the method was validated and applied to the quantitative determination of sibutramine enantiomers in commercial drugs. In 600 MHz (1)H-NMR analysis, enantiomer signal separation of sibutramine was obtained through fast diastereomeric interaction with the chiral selector M-β-CD. For chiral separation and quantification, the N-methyl proton peaks (at 2.18 ppm) were selected because they are singlets and simple to interpret in terms of the diastereomeric interaction. The effects of temperature and chiral selector concentration on enantiomer signal separation were investigated; the optimum condition was 0.5 mg/mL sibutramine and 10 mg/mL M-β-CD at 10°C. Detection of 0.5% S-sibutramine in R-sibutramine proved possible by (1)H-NMR with M-β-CD as chiral selector. The host-guest interaction between sibutramine and M-β-CD was confirmed by (1)H-NMR and CE studies, and a structure of the inclusion complex was proposed on the basis of (1)H-NMR and 2D ROESY studies.

  16. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    NASA Astrophysics Data System (ADS)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, each attached to a weight that represents the plausibility of that model. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution is carried out iteratively via a local sampling procedure; the efficiency of NSE is therefore dominated by the strength of that local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, a more efficient and elaborate sampling algorithm, DREAM(ZS), can be integrated into the local sampling step. In addition, to overcome the computational burden of the large number of repeated model executions required for marginal likelihood estimation, an adaptive sparse-grid stochastic collocation method is used to build surrogates for the original groundwater model.
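
    A self-contained toy sketch of the estimator on a problem with a known answer (uniform prior box, unit Gaussian likelihood, so the log-evidence is about -4.6 for this box): a plain Metropolis walker performs the constrained local sampling that, per the abstract, DREAM(ZS) would replace. All names and settings are assumptions.

    ```python
    import numpy as np
    from scipy.special import logsumexp

    rng = np.random.default_rng(0)
    D, N_LIVE, N_ITER = 2, 200, 3000
    LO, HI = -5.0, 5.0                       # uniform prior box

    def loglik(theta):                       # toy Gaussian likelihood, unit sd
        return -0.5 * theta @ theta - 0.5 * D * np.log(2 * np.pi)

    def constrained_walk(start, lmin, steps=25, scale=0.4):
        """Metropolis moves inside the prior under the hard constraint
        loglik > lmin (the local sampling step of NSE)."""
        theta = start.copy()
        for _ in range(steps):
            prop = theta + scale * rng.normal(size=D)
            if np.all((LO < prop) & (prop < HI)) and loglik(prop) > lmin:
                theta = prop
        return theta

    live = rng.uniform(LO, HI, size=(N_LIVE, D))
    ll = np.array([loglik(p) for p in live])
    log_z, x_prev = -np.inf, 1.0
    for i in range(1, N_ITER + 1):
        worst = int(np.argmin(ll))
        x_i = np.exp(-i / N_LIVE)            # expected shrinkage of prior volume
        log_z = np.logaddexp(log_z, np.log(x_prev - x_i) + ll[worst])
        x_prev = x_i
        seed = live[rng.integers(N_LIVE)]    # restart the walker from a live point
        live[worst] = constrained_walk(seed, ll[worst])
        ll[worst] = loglik(live[worst])

    # remaining live-point contribution; truth: log Z = -log(10**D) ~ -4.605
    log_z = np.logaddexp(log_z, np.log(x_prev) + logsumexp(ll) - np.log(N_LIVE))
    print("estimated log-evidence:", log_z)
    ```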

  17. Likelihood Ratio Tests for Special Rasch Models

    ERIC Educational Resources Information Center

    Hessen, David J.

    2010-01-01

    In this article, a general class of special Rasch models for dichotomous item scores is considered. Although Andersen's likelihood ratio test can be used to test whether a Rasch model fits to the data, the test does not differentiate between special Rasch models. Therefore, in this article, new likelihood ratio tests are proposed for testing…

  18. APPLICATION OF ELECTROPHORESIS TO STUDY THE ENANTIOSELECTIVE TRANSFORMATION OF FIVE CHIRAL PESTICIDES IN AEROBIC SOIL SLURRIES

    EPA Science Inventory

    The enantiomers of five chiral pesticides of environmental interest, metalaxyl, imazaquin, fonofos (dyfonate), ruelene (cruformate) and dichlorprop, were separated analytically using capillary electrophoresis (CE) with cyclodextrin chiral selectors. CE is shown to be a simple, ef...

  19. Neuronal cell fate specification by the molecular convergence of different spatio-temporal cues on a common initiator terminal selector gene

    PubMed Central

    Stratmann, Johannes

    2017-01-01

    The extensive genetic regulatory flows underlying specification of different neuronal subtypes are not well understood at the molecular level. The Nplp1 neuropeptide neurons in the developing Drosophila nerve cord belong to two sub-classes, Tv1 and dAp neurons, generated by two distinct progenitors. Nplp1 neurons are specified by spatial cues (the Hox homeotic network and the GATA factor grn) and temporal cues (the hb -> Kr -> Pdm -> cas -> grh temporal cascade). These spatio-temporal cues combine into two distinct codes, one for Tv1 and one for dAp neurons, that activate a common terminal selector feedforward cascade of col -> ap/eya -> dimm -> Nplp1. Here, we molecularly decode the specification of Nplp1 neurons and find that the cis-regulatory organization of col functions as an integratory node for the different spatio-temporal combinatorial codes. These findings may provide a logical framework for addressing spatio-temporal control of neuronal sub-type specification in other systems. PMID:28414802

  20. Diversification of C. elegans Motor Neuron Identity via Selective Effector Gene Repression.

    PubMed

    Kerk, Sze Yen; Kratsios, Paschalis; Hart, Michael; Mourao, Romulo; Hobert, Oliver

    2017-01-04

    A common organizational feature of nervous systems is the existence of groups of neurons that share common traits but can be divided into individual subtypes based on anatomical or molecular features. We elucidate the mechanistic basis of neuronal diversification in the context of C. elegans ventral cord motor neurons, whose shared traits are directly activated by the terminal selector UNC-3. Diversification of motor neurons into different classes, each characterized by unique patterns of effector gene expression, is controlled by distinct combinations of phylogenetically conserved, class-specific transcriptional repressors. These repressors are continuously required in postmitotic neurons to prevent UNC-3, which is active in all neuron classes, from activating class-specific effector genes in specific motor neuron subsets via discrete cis-regulatory elements. The strategy of antagonizing the activity of broadly acting terminal selectors of neuron identity in a subtype-specific fashion may constitute a general principle of neuron subtype diversification. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Effect of basic and acidic additives on the separation of some basic drug enantiomers on polysaccharide-based chiral columns with acetonitrile as mobile phase.

    PubMed

    Gogaladze, Khatuna; Chankvetadze, Lali; Tsintsadze, Maia; Farkas, Tivadar; Chankvetadze, Bezhan

    2015-03-01

    The separation of enantiomers of 16 basic drugs was studied using polysaccharide-based chiral selectors and acetonitrile as mobile phase, with emphasis on the role of basic and acidic additives in the separation and elution order of enantiomers. Among the chiral selectors studied, amylose phenylcarbamate-based ones showed chiral recognition ability more often than cellulose phenylcarbamate derivatives. An interesting effect of formic acid as additive on enantiomer resolution and elution order was observed for some basic drugs. For instance, the enantioseparation of several β-blockers (atenolol, sotalol, toliprolol) improved not only upon addition of a more conventional basic additive to the mobile phase, but also upon addition of an acidic additive. Moreover, an opposite elution order of the enantiomers was observed depending on the nature of the additive (basic or acidic) in the mobile phase. © 2015 Wiley Periodicals, Inc.

  2. Differentiating the Differentiation Models: A Comparison of the Retrieving Effectively from Memory Model (REM) and the Subjective Likelihood Model (SLiM)

    ERIC Educational Resources Information Center

    Criss, Amy H.; McClelland, James L.

    2006-01-01

    The subjective likelihood model [SLiM; McClelland, J. L., & Chappell, M. (1998). Familiarity breeds differentiation: a subjective-likelihood approach to the effects of experience in recognition memory. "Psychological Review," 105(4), 734-760.] and the retrieving effectively from memory model [REM; Shiffrin, R. M., & Steyvers, M. (1997). A model…

  3. Effect of formal and informal likelihood functions on uncertainty assessment in a single event rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran

    2016-09-01

    In the present study, DREAM(ZS) (Differential Evolution Adaptive Metropolis), combined with both formal and informal likelihood functions, is used to investigate the uncertainty of parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HMS, three flood events were used for calibration and one flood event for validation of the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), namely Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining three (L5-L7) are formal. L5 builds on the relationship between traditional least-squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, likelihood function L7 accounts for serial dependence of the residual errors using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary across likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1-L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost similar effect on parameter sensitivity. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7, but likelihood function L5 may yield biased and unreliable parameter estimates due to violation of the residual-error assumptions. Thus, likelihood function L7 provides a credible posterior distribution of the model parameters and can therefore be employed in further applications.
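
    To make the informal/formal distinction concrete, the sketch below codes three of the objectives named above as they might be handed to DREAM(ZS). The L-numbering follows the abstract; the exact forms used in the study may differ, and the AR(1) coefficient phi is treated as given rather than jointly inferred.

    ```python
    import numpy as np

    def informal_ns(obs, sim):
        """L1: Nash-Sutcliffe efficiency used as a pseudo-likelihood (informal)."""
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def formal_sls(obs, sim):
        """L5-style standard least-squares log-likelihood: iid Gaussian residuals,
        with the error variance profiled at its maximum-likelihood value."""
        n, res = obs.size, obs - sim
        sigma2 = np.sum(res**2) / n
        return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

    def formal_ar1(obs, sim, phi):
        """L7-style log-likelihood with first-order autoregressive residuals."""
        innov = (obs - sim)[1:] - phi * (obs - sim)[:-1]
        n, s2 = innov.size, np.mean(innov**2)
        return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

    obs = np.array([1.0, 2.5, 4.0, 3.2, 2.8, 2.0])
    sim = np.array([1.2, 2.2, 3.6, 3.5, 2.5, 2.2])
    print(informal_ns(obs, sim), formal_sls(obs, sim), formal_ar1(obs, sim, 0.3))
    ```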

  4. How much to trust the senses: Likelihood learning

    PubMed Central

    Sato, Yoshiyuki; Kording, Konrad P.

    2014-01-01

    Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know about the likelihood. However, as the quality of sensory inputs changes over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (the likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn about likelihood. PMID:25398975

  5. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
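
    The standard route to the maximum likelihood estimate here is the EM algorithm; below is a minimal sketch for a two-component normal mixture on synthetic data (the paper's stock-market/rubber-price setting is analogous, and all names and values are invented).

    ```python
    import numpy as np
    from scipy.stats import norm

    def em_two_normal(x, n_iter=200):
        """EM updates for a two-component normal mixture."""
        w = np.array([0.5, 0.5])
        mu = np.quantile(x, [0.25, 0.75])
        sd = np.array([x.std(), x.std()])
        for _ in range(n_iter):
            # E-step: posterior responsibility of each component for each point
            dens = w * norm.pdf(x[:, None], mu, sd)
            r = dens / dens.sum(axis=1, keepdims=True)
            # M-step: responsibility-weighted maximum-likelihood updates
            nk = r.sum(axis=0)
            w = nk / x.size
            mu = (r * x[:, None]).sum(axis=0) / nk
            sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        return w, mu, sd

    rng = np.random.default_rng(7)
    x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
    print(em_two_normal(x))
    ```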

  6. Assessing Ongoing Electronic Resource Purchases: Linking Tools to Synchronize Staff Workflows

    ERIC Educational Resources Information Center

    Carroll, Jeffrey D.; Major, Colleen; O'Neal, Nada; Tofanelli, John

    2012-01-01

    Ongoing electronic resource purchases represent a substantial proportion of collections budgets. Recognizing the necessity of systematic ongoing assessment with full selector engagement, Columbia University Libraries appointed an Electronic Resources Assessment Working Group to promote the inclusion of such resources within our current culture of…

  7. 14 CFR 27.1555 - Control markings.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Control markings. (a) Each cockpit control, other than primary flight controls or control whose function... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Control markings. 27.1555 Section 27.1555... fuel controls— (1) Each fuel tank selector control must be marked to indicate the position...

  8. 14 CFR 27.1555 - Control markings.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Control markings. (a) Each cockpit control, other than primary flight controls or control whose function... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Control markings. 27.1555 Section 27.1555... fuel controls— (1) Each fuel tank selector control must be marked to indicate the position...

  9. 14 CFR 27.1555 - Control markings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Control markings. (a) Each cockpit control, other than primary flight controls or control whose function... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Control markings. 27.1555 Section 27.1555... fuel controls— (1) Each fuel tank selector control must be marked to indicate the position...

  10. 14 CFR 27.1555 - Control markings.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Control markings. (a) Each cockpit control, other than primary flight controls or control whose function... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Control markings. 27.1555 Section 27.1555... fuel controls— (1) Each fuel tank selector control must be marked to indicate the position...

  11. 75 FR 42579 - Energy Conservation Program for Consumer Products: Test Procedure for Microwave Ovens; Repeal of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-22

    ... the usable baking space. If there is a selector switch for selecting the mode of operation of the oven, set it for normal baking. If an oven permits baking by either forced convection by using a fan, or...

  12. PLATO--AN AUTOMATED TEACHING DEVICE.

    ERIC Educational Resources Information Center

    Bitzer, D.; And Others

    PLATO (Programed Logic for Automatic Teaching Operation) is a device for teaching a number of students individually by means of a single, central purpose, digital computer. The general organization of equipment consists of a keyset for student responses, the computer, storage device (electric blackboard), slide selector (electrical book), and TV…

  13. 14 CFR 27.1555 - Control markings.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Control markings. 27.1555 Section 27.1555... Control markings. (a) Each cockpit control, other than primary flight controls or control whose function... fuel controls— (1) Each fuel tank selector control must be marked to indicate the position...

  14. 47 CFR 36.124 - Tandem switching equipment-Category 2.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... circuits with each other or with local or tandem telephone central office trunks, intertoll dial selector equipment, or intertoll trunk equipment in No. 5 type electronic offices. Equipment, including switchboards... interconnection of: Toll center to toll center circuits; toll center to tributary circuits; tributary to tributary...

  15. 47 CFR 36.124 - Tandem switching equipment-Category 2.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... circuits with each other or with local or tandem telephone central office trunks, intertoll dial selector equipment, or intertoll trunk equipment in No. 5 type electronic offices. Equipment, including switchboards... interconnection of: Toll center to toll center circuits; toll center to tributary circuits; tributary to tributary...

  16. 47 CFR 36.124 - Tandem switching equipment-Category 2.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... circuits with each other or with local or tandem telephone central office trunks, intertoll dial selector equipment, or intertoll trunk equipment in No. 5 type electronic offices. Equipment, including switchboards... interconnection of: Toll center to toll center circuits; toll center to tributary circuits; tributary to tributary...

  17. 47 CFR 36.124 - Tandem switching equipment-Category 2.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... circuits with each other or with local or tandem telephone central office trunks, intertoll dial selector equipment, or intertoll trunk equipment in No. 5 type electronic offices. Equipment, including switchboards... interconnection of: Toll center to toll center circuits; toll center to tributary circuits; tributary to tributary...

  18. 47 CFR 36.124 - Tandem switching equipment-Category 2.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... circuits with each other or with local or tandem telephone central office trunks, intertoll dial selector equipment, or intertoll trunk equipment in No. 5 type electronic offices. Equipment, including switchboards... interconnection of: Toll center to toll center circuits; toll center to tributary circuits; tributary to tributary...

  19. Transient dynamics of NbOx threshold switches explained by Poole-Frenkel based thermal feedback mechanism

    NASA Astrophysics Data System (ADS)

    Wang, Ziwen; Kumar, Suhas; Nishi, Yoshio; Wong, H.-S. Philip

    2018-05-01

    Niobium oxide (NbOx) two-terminal threshold switches are potential candidates as selector devices in crossbar memory arrays and as building blocks for neuromorphic systems. However, the physical mechanism of NbOx threshold switches is still under debate. In this paper, we show that a thermal feedback mechanism based on Poole-Frenkel conduction can explain both the quasi-static and the transient electrical characteristics that are experimentally observed for NbOx threshold switches, providing strong support for the validity of this mechanism. Furthermore, a clear picture of the transient dynamics during the thermal-feedback-induced threshold switching is presented, providing useful insights required to model nonlinear devices where thermal feedback is important.

  20. Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.

    2016-12-01

    With the growing impacts of climate change and human activities on the water cycle, an increasing amount of research focuses on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model is attached to a weight determined by the model's prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution is carried out iteratively via a local sampling procedure, so the efficiency of NSE is dominated by the strength of that procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, the robust and efficient sampling algorithm DREAM(ZS) can be incorporated into the local sampling of NSE. The comparison results demonstrated that the improved NSE increases the efficiency of marginal likelihood estimation significantly; however, both the improved and the original NSE suffer from substantial instability. In addition, the heavy computational cost of the huge number of model executions is mitigated by using adaptive sparse grid surrogates.

  1. 29 CFR 1915.100 - Retention of DOT markings, placards and labels.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... are readily visible. (d) For non-bulk packages which will not be reshipped, the provisions of this... permanently alter its energy-control capability. (5) Contract employer. An employer, such as a painting... isolate energy. Control-circuit devices (for example, push buttons, selector switches) are not considered...

  2. Solid state remote circuit selector switch

    NASA Technical Reports Server (NTRS)

    Peterson, V. S.

    1970-01-01

    Remote switching circuit utilizes voltage logic to switch on desired circuit. Circuit controls rotating multi-range pressure transducers in jet engine testing and can be used in coded remote circuit activator where sequence of switching has to occur in defined length of time to prevent false or undesired circuit activation.

  3. 76 FR 69869 - Energy Conservation Program: Test Procedures for Residential Clothes Washers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-09

    ... manual water fill control system, user- adjustable adaptive water fill control system, or adaptive water fill control system with alternate manual water fill control system, use the water fill selector... and transcripts, comments, and other supporting documents/ materials. All documents in the docket are...

  4. 14 CFR 29.1555 - Control markings.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ....1555 Control markings. (a) Each cockpit control, other than primary flight controls or control whose... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Control markings. 29.1555 Section 29.1555... fuel controls— (1) Each fuel tank selector valve control must be marked to indicate the position...

  5. 14 CFR 29.1555 - Control markings.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ....1555 Control markings. (a) Each cockpit control, other than primary flight controls or control whose... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Control markings. 29.1555 Section 29.1555... fuel controls— (1) Each fuel tank selector valve control must be marked to indicate the position...

  6. 14 CFR 29.1555 - Control markings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....1555 Control markings. (a) Each cockpit control, other than primary flight controls or control whose... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Control markings. 29.1555 Section 29.1555... fuel controls— (1) Each fuel tank selector valve control must be marked to indicate the position...

  7. 14 CFR 29.1555 - Control markings.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ....1555 Control markings. (a) Each cockpit control, other than primary flight controls or control whose... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Control markings. 29.1555 Section 29.1555... fuel controls— (1) Each fuel tank selector valve control must be marked to indicate the position...

  8. 14 CFR 29.1555 - Control markings.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ....1555 Control markings. (a) Each cockpit control, other than primary flight controls or control whose... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Control markings. 29.1555 Section 29.1555... fuel controls— (1) Each fuel tank selector valve control must be marked to indicate the position...

  9. 21 CFR 876.5900 - Ostomy pouch and accessories.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ....5900 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... created opening of the small intestine, large intestine, or the ureter on the surface of the body). This... bag, ostomy drainage bag with adhesive, stomal bag, ostomy protector, and the ostomy size selector...

  10. 21 CFR 876.5900 - Ostomy pouch and accessories.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ....5900 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... created opening of the small intestine, large intestine, or the ureter on the surface of the body). This... bag, ostomy drainage bag with adhesive, stomal bag, ostomy protector, and the ostomy size selector...

  11. 21 CFR 876.5900 - Ostomy pouch and accessories.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ....5900 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... created opening of the small intestine, large intestine, or the ureter on the surface of the body). This... bag, ostomy drainage bag with adhesive, stomal bag, ostomy protector, and the ostomy size selector...

  12. 21 CFR 876.5900 - Ostomy pouch and accessories.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ....5900 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... created opening of the small intestine, large intestine, or the ureter on the surface of the body). This... bag, ostomy drainage bag with adhesive, stomal bag, ostomy protector, and the ostomy size selector...

  13. ANALYSIS OF THE ENANTIOMERS OF CHIRAL PESTICIDES AND OTHER POLLUTANTS IN ENVIRONMENTAL SAMPLES BY CAPILLARY ELECTROPHORESIS

    EPA Science Inventory

    The generic method described here involves typical capillary electrophoresis (CE) techniques, with the addition of cyclodextrin chiral selectors to the electrolyte for enantiomer separation and also, in the case of neutral analytes, the further addition of a micelle forming comp...

  14. Effective Collection Developers: Librarians or Faculty?

    ERIC Educational Resources Information Center

    Vidor, David L.; Futas, Elizabeth

    1988-01-01

    A study at the Emory University School of Business Administration library compared the effectiveness of faculty members and librarians as book selectors. Effectiveness was measured by comparing selected titles with the Baker list published by the Harvard Business School and with business periodical reviews, and by examining circulation records.…

  15. Recruitment/Selectors' Perceptions of Male and Female Trainee Managers

    ERIC Educational Resources Information Center

    Kniveton, Bromley H.

    2008-01-01

    Purpose: The aim of this paper is to investigate whether those involved with recruitment/selection (RS) react differently towards male and female trainee managers. Design/methodology/approach: Measures of the perceptions towards trainee managers were collected from 440 managers and professionals involved in recruitment/selection (RS). Findings: It…

  16. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies

    PubMed Central

    Rukhin, Andrew L.

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583

  17. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    PubMed

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  18. Hurdle models for multilevel zero-inflated data via h-likelihood.

    PubMed

    Molas, Marek; Lesaffre, Emmanuel

    2010-12-30

    Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.
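
    As a minimal illustration of the hurdle likelihood itself (with the random effects and the h-likelihood machinery omitted), the sketch below codes a Poisson hurdle model: a Bernoulli gate decides zero versus positive, and positive counts follow a zero-truncated Poisson. Names and data are invented.

    ```python
    import numpy as np
    from scipy.special import gammaln

    def hurdle_poisson_loglik(p, lam, counts):
        """Log-likelihood of a Poisson hurdle model: P(0) = 1 - p, and positive
        counts come from a Poisson(lam) truncated at zero."""
        ll_zero = np.log(1 - p)
        ll_pos = (np.log(p) - lam + counts * np.log(lam) - gammaln(counts + 1)
                  - np.log(1 - np.exp(-lam)))   # zero-truncation normalizer
        return np.where(counts == 0, ll_zero, ll_pos).sum()

    counts = np.array([0, 0, 3, 0, 1, 0, 0, 2, 4, 0])
    print(hurdle_poisson_loglik(0.4, 2.0, counts))
    ```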

  19. MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS

    EPA Science Inventory

    Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...

  20. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-07-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference, and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
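
    A toy sketch of the compression step for the simplest case, a Gaussian likelihood with parameter-independent covariance, where the score-based summaries t = (dmu/dtheta)^T C^{-1} (d - mu) give exactly one number per parameter and are lossless; the paper's pipeline (and DELFI itself) goes well beyond this, and all names here are invented.

    ```python
    import numpy as np

    def score_compress(d, mu0, dmu, cov):
        """Compress a data vector to one summary per parameter."""
        cinv = np.linalg.inv(cov)
        return dmu @ cinv @ (d - mu0)          # shape: (n_params,)

    # toy example: 1000 data points, straight-line mean model, 2 parameters
    x = np.linspace(0.0, 1.0, 1000)
    dmu = np.vstack([np.ones_like(x), x])      # d mu / d (intercept, slope)
    cov = 0.1 * np.eye(1000)
    rng = np.random.default_rng(1)
    d = 1.0 + 2.0 * x + rng.normal(0.0, np.sqrt(0.1), 1000)
    print(score_compress(d, 1.0 + 2.0 * x, dmu, cov))   # two numbers
    ```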

  1. Asymptotic Properties of Induced Maximum Likelihood Estimates of Nonlinear Models for Item Response Variables: The Finite-Generic-Item-Pool Case.

    ERIC Educational Resources Information Center

    Jones, Douglas H.

    The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…

  2. Risk prediction and aversion by anterior cingulate cortex.

    PubMed

    Brown, Joshua W; Braver, Todd S

    2007-12-01

    The recently proposed error-likelihood hypothesis suggests that the anterior cingulate cortex (ACC) and surrounding areas become active in proportion to the perceived likelihood of an error. The hypothesis was originally derived from a computational model prediction. The same computational model now makes a further prediction: ACC will be sensitive not only to predicted error likelihood, but also to the predicted magnitude of the consequences should an error occur. The product of error likelihood and predicted error consequence magnitude collectively defines the general "expected risk" of a given behavior, in a manner analogous but orthogonal to subjective expected utility theory. New fMRI results from an incentive change signal task now replicate the error-likelihood effect, validate the further predictions of the computational model, and suggest why some segments of the population may fail to show an error-likelihood effect. In particular, error-likelihood effects and expected risk effects in general indicate greater sensitivity to earlier predictors of errors and are seen in risk-averse but not risk-tolerant individuals. Taken together, the results are consistent with an expected risk model of ACC and suggest that ACC may generally contribute to cognitive control by recruiting brain activity to avoid risk.

  3. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized the fitting of finite mixture models by maximum likelihood estimation because of its asymptotic properties: it is consistent as the sample size increases to infinity, making maximum likelihood estimation an asymptotically unbiased estimator, and the parameter estimates it yields have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber prices and exchange rates for Malaysia, Thailand, the Philippines and Indonesia. The results indicate a negative relationship between rubber prices and exchange rates for all selected countries.

  4. Model criticism based on likelihood-free inference, with an application to protein network evolution.

    PubMed

    Ratmann, Oliver; Andrieu, Christophe; Wiuf, Carsten; Richardson, Sylvia

    2009-06-30

    Mathematical models are an important tool to explain and comprehend complex phenomena, and unparalleled computational advances enable us to explore them easily, even with little or no understanding of their global properties. In fact, the likelihood of the data under complex stochastic models is often analytically or numerically intractable in many areas of science. This makes it even more important to investigate the adequacy of these models in absolute terms, against the data, rather than relative to the performance of other models, but no such procedure has been formally discussed when the likelihood is intractable. We provide a statistical interpretation to current developments in likelihood-free Bayesian inference that explicitly accounts for discrepancies between the model and the data, termed Approximate Bayesian Computation under model uncertainty (ABCμ). We augment the likelihood of the data with unknown error terms that correspond to freely chosen checking functions, and provide Monte Carlo strategies for sampling from the associated joint posterior distribution without the need to evaluate the likelihood. We discuss the benefit of incorporating model diagnostics within an ABC framework, and demonstrate how this method diagnoses model mismatch and guides model refinement by contrasting three qualitative models of protein network evolution to the protein interaction datasets of Helicobacter pylori and Treponema pallidum. Our results make a number of model deficiencies explicit, and suggest that the T. pallidum network topology is inconsistent with evolution dominated by link turnover or lateral gene transfer alone.

  5. A general methodology for maximum likelihood inference from band-recovery data

    USGS Publications Warehouse

    Conroy, M.J.; Williams, B.K.

    1984-01-01

    A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band- recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band- recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
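
    For orientation, a sketch of the kind of likelihood such a procedure maximizes, for a one-age-class model with constant annual survival S and recovery rate f, so the expected cell probability for a bird banded in year i and recovered in year j is f*S**(j-i); the paper's program covers richer time- and covariate-dependent structures, and the data here are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(params, bandings, recoveries):
        """Negative multinomial log-likelihood of a one-age-class
        band-recovery model with constant S (survival) and f (recovery)."""
        S, f = params
        k = len(bandings)
        ll = 0.0
        for i in range(k):
            p = f * S ** (np.arange(i, k) - i)    # recovery cell probabilities
            m = recoveries[i, i:k]
            never = bandings[i] - m.sum()         # banded, never recovered
            ll += np.sum(m * np.log(p)) + never * np.log(1 - p.sum())
        return -ll

    bandings = np.array([1000, 1200, 900])
    recoveries = np.array([[60, 25, 10],
                           [0, 70, 30],
                           [0, 0, 55]])
    fit = minimize(neg_loglik, x0=[0.6, 0.05], args=(bandings, recoveries),
                   bounds=[(0.01, 0.99), (0.001, 0.5)])
    print(fit.x)   # maximum likelihood estimates of (S, f)
    ```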

  6. User-Focused Strategic Services for Technological University Libraries.

    ERIC Educational Resources Information Center

    Townley, Charles T.

    This paper describes the New Mexico State University (NMSU) Library's strategic plan to develop its services amid an atmosphere of change. A summary of the following components of the strategic plan is given: vision; mission; values; and goals. The revised organizational functions are then illustrated, as well as the role of the selector-liaison…

  7. 14 CFR 23.1335 - Flight director systems.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Flight director systems. 23.1335 Section 23...: Installation § 23.1335 Flight director systems. If a flight director system is installed, means must be provided to indicate to the flight crew its current mode of operation. Selector switch position is not...

  8. 14 CFR 23.1335 - Flight director systems.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Flight director systems. 23.1335 Section 23...: Installation § 23.1335 Flight director systems. If a flight director system is installed, means must be provided to indicate to the flight crew its current mode of operation. Selector switch position is not...

  9. Crossing Boundaries: Selecting for Research, Professional Development and Consumer Education in an Interdisciplinary Field, the Case of Mental Health

    ERIC Educational Resources Information Center

    Pettijohn, Patricia

    2004-01-01

    Both the demand for, and supply of, mental health information has increased across all sectors. Academic, public and special libraries must locate, evaluate and select materials that support consumer education, academic teaching, interdisciplinary research, and professional credentialing. Selectors must navigate disciplinary barriers to develop…

  10. 11. INTERIOR VIEW OF OPERATING HOUSE NO. 4, SHOWING WORM ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. INTERIOR VIEW OF OPERATING HOUSE NO. 4, SHOWING WORM WHEEL GEAR ASSEMBLY, ORIGINAL 20 HP EAST HOIST MOTOR, AND CONTROL GATES 7 AND 8 HAND BRAKES, WITH MOTOR SELECTOR SWITCH, MOTOR STARTING SWITCH, AND OIL CIRCUIT BREAKER IN BACKGROUND - Long Lake Hydroelectric Plant, Spillway Dam, Spanning Spokane River, Ford, Stevens County, WA

  11. 40 CFR 86.137-94 - Dynamometer test run, gaseous and particulate emissions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... within 20 minutes of the end of the sample collection phase of the test. Obtain methanol and formaldehyde... the sample collection phase of the test. Obtain methanol and formaldehyde sample analyses, if... methanol-fueled vehicles, with the sample selector valves in the “standby” position, insert fresh sample...

  12. 40 CFR 86.137-94 - Dynamometer test run, gaseous and particulate emissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... within 20 minutes of the end of the sample collection phase of the test. Obtain methanol and formaldehyde... the sample collection phase of the test. Obtain methanol and formaldehyde sample analyses, if... methanol-fueled vehicles, with the sample selector valves in the “standby” position, insert fresh sample...

  13. Physical Unclonable Function with Multiplexing Units and its Evaluation

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Masaya; Asai, Toshiya; Shiozaki, Mitsuru; Fujino, Takeshi

    Recently, semiconductor counterfeiting has become an increasingly serious problem, and techniques that prevent counterfeiting by exploiting random characteristic patterns that are difficult to control artificially have attracted attention. The physical unclonable function (PUF) is one such technique: it derives ID information peculiar to a device by detecting random physical features that cannot be controlled during the device's manufacture. Because such ID information is difficult to replicate, PUFs are used to prevent counterfeiting, and several studies on PUFs have been reported. The arbiter PUF, which utilizes the difference in signal propagation delay between selectors, is the typical way of composing a PUF from delay characteristics. This paper proposes a new PUF based on the arbiter PUF. The proposed PUF introduces new multiplexing selector units and generates an effective response from the arrival order of three signals. Experiments using FPGAs verify the validity of the proposed PUF: although uniqueness deteriorates, correctness, steadiness, randomness and resistance against machine-learning attacks improve in comparison with the conventional arbiter PUF.
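
    For orientation, here is a sketch of the standard additive delay model of the baseline arbiter PUF that the proposal builds on (not of the new multiplexing-unit design): each challenge bit either passes the accumulated delay difference straight through or crosses the two paths, flipping its sign, and the arbiter thresholds the final difference. All delay values are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N_STAGES = 64
    # per-stage delay differences for the straight (0) and crossed (1) settings,
    # standing in for uncontrollable manufacturing variation
    delta = rng.normal(0.0, 1.0, size=(N_STAGES, 2))

    def response(challenge):
        """Arbiter output: sign of the accumulated delay difference."""
        d = 0.0
        for i, c in enumerate(challenge):
            d = (d if c == 0 else -d) + delta[i, c]  # crossing flips the sign
        return int(d > 0)

    challenges = rng.integers(0, 2, size=(5, N_STAGES))
    print([response(c) for c in challenges])
    ```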

  14. Reduction of sludge generation by the addition of support material in a cyclic activated sludge system for municipal wastewater treatment.

    PubMed

    Araujo, Moacir Messias de; Lermontov, André; Araujo, Philippe Lopes da Silva; Zaiat, Marcelo

    2013-09-01

    An innovative biomass carrier (Biobob®) was tested for municipal wastewater treatment in an activated sludge system to evaluate pollutant removal performance and sludge generation for different carrier volumes. The experiment was carried out in a pilot-scale cyclic activated sludge system (CASS®) built with three cylindrical tanks in series: an anoxic selector (2.1 m(3)), an aerobic selector (2.5 m(3)) and the main aerobic reactor (25.1 m(3)). The results showed that adding the Biobob® carrier decreased the MLVSS concentration, which consequently reduced the waste sludge production of the system. With 7% and 18% (v/v) support material in the aerobic reactor, the observed biomass yield decreased by 18% and 36%, respectively, relative to the reactor operated with suspended biomass. The addition of media did not affect the system's performance for COD and TSS removal. However, TKN and TN removal improved by 24% and 14%, respectively, using 18% (v/v) carrier. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. The incorporation of calix[6]arene and cyclodextrin derivatives into sol-gels for the preparation of stationary phases for gas chromatography.

    PubMed

    Delahousse, Guillaume; Peulon-Agasse, Valérie; Debray, Jean-Christophe; Vaccaro, Marie; Cravotto, Giancarlo; Jabin, Ivan; Cardinael, Pascal

    2013-11-29

    New polyethylene-glycol-based sol-gels containing cyclodextrin or calix[6]arene derivatives have been synthesized. An original method for sol-gel preparation and capillary column coating, which consumes smaller quantities of selectors and allows for control of their amounts in the stationary phase, is reported herein. The new stationary phases exhibited excellent column efficiencies over a large range of temperatures and thermal stability up to 280°C. The cyclodextrin derivative generally showed the best separation factors for aromatic positional isomers. The calix[6]arene derivative exhibited the best selectivity for the polychlorobiphenyl congeners and some polycyclic aromatic hydrocarbon isomers. The relationship between the structure and the chromatographic properties of the selectors is discussed. The tert-butyl groups on the upper rim of the calix[6]arene were found to possibly play an important role in the recognition of solutes. The incorporation of the cyclodextrin derivative into the sol-gel matrix did not affect its enantioselective recognition capabilities. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. An intelligent 1:2 demultiplexer as an intracellular theranostic device based on DNA/Ag cluster-gated nanovehicles

    NASA Astrophysics Data System (ADS)

    Ran, Xiang; Wang, Zhenzhen; Ju, Enguo; Pu, Fang; Song, Yanqiu; Ren, Jinsong; Qu, Xiaogang

    2018-02-01

    The logic device demultiplexer can convey a single input signal into one of multiple output channels. The choice of the output channel is controlled by a selector. Several molecules and biomolecules have been used to mimic the function of a demultiplexer. However, the practical application of logic devices remains a major challenge. Herein, we design and construct an intelligent 1:2 demultiplexer as a theranostic device based on azobenzene (azo)-modified and DNA/Ag cluster-gated nanovehicles. The configuration of azo and the conformation of the DNA ensemble can be regulated by light irradiation and pH, respectively. The demultiplexer, which uses light as the input and acid as the selector, can emit red fluorescence or release a drug under different conditions. Depending on the cell type, the intelligent logic device can select the mode of cellular imaging in healthy cells or tumor therapy in tumor cells. The study combines the logic gate with the theranostic device, paving the way for tangible applications of logic gates in the future.
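
    The input/selector/output logic described here is just the 1:2 demultiplexer truth table. A minimal sketch (Python; the mapping of the two channels to fluorescence and drug release is our illustrative reading of the abstract, not code from the study):

        def demux(data, selector):
            # 1:2 demultiplexer truth table: the selector bit routes the
            # data signal to exactly one of the two output channels.
            return (int(data and not selector), int(data and selector))

        # Illustrative mapping: data = light irradiation, selector = acidic
        # pH; O0 = red-fluorescence channel, O1 = drug-release channel.
        for data in (0, 1):
            for sel in (0, 1):
                o0, o1 = demux(data, sel)
                print(f"light={data} acid={sel} -> fluorescence={o0}, drug={o1}")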

  17. An intelligent 1:2 demultiplexer as an intracellular theranostic device based on DNA/Ag cluster-gated nanovehicles.

    PubMed

    Ran, Xiang; Wang, Zhenzhen; Ju, Enguo; Pu, Fang; Song, Yanqiu; Ren, Jinsong; Qu, Xiaogang

    2018-02-09

    The logic device demultiplexer can convey a single input signal into one of multiple output channels. The choice of the output channel is controlled by a selector. Several molecules and biomolecules have been used to mimic the function of a demultiplexer. However, the practical application of logic devices remains a major challenge. Herein, we design and construct an intelligent 1:2 demultiplexer as a theranostic device based on azobenzene (azo)-modified and DNA/Ag cluster-gated nanovehicles. The configuration of azo and the conformation of the DNA ensemble can be regulated by light irradiation and pH, respectively. The demultiplexer, which uses light as the input and acid as the selector, can emit red fluorescence or release a drug under different conditions. Depending on the cell type, the intelligent logic device can select the mode of cellular imaging in healthy cells or tumor therapy in tumor cells. The study combines the logic gate with the theranostic device, paving the way for tangible applications of logic gates in the future.

  18. In situ synthesis of twelve dialkyltartrate-boric acid complexes and two polyols-boric acid complexes and their applications as chiral ion-pair selectors in nonaqueous capillary electrophoresis.

    PubMed

    Wang, Li-Juan; Yang, Juan; Yang, Geng-Liang; Chen, Xing-Guo

    2012-07-27

    In this paper, twelve dialkyltartrate-boric acid complexes and two polyol-boric acid complexes were synthesized in situ by the reaction of different dialkyltartrates or polyols with boric acid in methanol containing triethylamine. All twelve dialkyltartrate-boric acid complexes were found to have relatively good chiral separation performance in nonaqueous capillary electrophoresis (NACE). Their chiral recognition effects in terms of both enantioselectivity (α) and resolution (R(s)) were similar when the number of carbon atoms in the alkyl group of the alcohol moiety was below six. Dialkyltartrates containing alkyl groups of different structures but the same number of carbon atoms, i.e. one straight-chain and one branched-chain, also provided similar chiral recognition effects. Furthermore, it was demonstrated for the first time that two methanol-insoluble polyols, D-mannitol and D-sorbitol, could react with boric acid to prepare chiral ion-pair selectors using methanol as the solvent medium. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Preparative enantioseparation of propafenone by counter-current chromatography using di-n-butyl L-tartrate combined with boric acid as the chiral selector.

    PubMed

    Tong, Shengqiang; Shen, Mangmang; Zheng, Ye; Chu, Chu; Li, Xing-Nuo; Yan, Jizhong

    2013-09-01

    This paper extends the research on the utilization of borate coordination complexes in chiral separation by counter-current chromatography (CCC). Racemic propafenone was successfully enantioseparated by CCC with di-n-butyl L-tartrate combined with boric acid as the chiral selector. The two-phase solvent system was composed of chloroform/0.05 mol/L acetate buffer pH 3.4 containing 0.10 mol/L boric acid (1:1, v/v), in which 0.10 mol/L di-n-butyl L-tartrate was added to the organic phase. The factors influencing the enantioseparation of propafenone were investigated and optimized. A total of 92 mg of racemic propafenone was completely enantioseparated using high-speed CCC in a single run, yielding 40-42 mg of (R)- and (S)-propafenone enantiomers with an HPLC purity of 90-95%. The recovery of propafenone enantiomers from the CCC fractions was in the range of 85-90%. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Mass dependence of spectral and angular distributions of Cherenkov radiation from relativistic isotopes in solid radiators and its possible application as mass selector

    NASA Astrophysics Data System (ADS)

    Bogdanov, O. V.; Rozhkova, E. I.; Pivovarov, Yu. L.; Kuzminchuk-Feuerstein, N.

    2018-02-01

    The first proof-of-principle experiment with a prototype of a Time-of-Flight (TOF) Cherenkov detector of relativistic heavy ions (RHI), exploiting a liquid iodine naphthalene radiator, has been performed at Cave C at GSI (Darmstadt, Germany). A conceptual design for a liquid Cherenkov detector was proposed as a prototype for future TOF measurements at the Super-FRS based on detecting the total number of Cherenkov photons. The ionization energy loss of RHI in a liquid radiator decreases this number only slightly, whereas in a solid radiator it substantially changes not the total number of ChR photons but their angular and spectral distributions. By means of computer simulations, we showed that these distributions are very sensitive to the isotope mass, owing to the different stopping powers of isotopes with equal initial relativistic factors. The results of simulations for light (Li, Be) and heavy (Xe) isotopes at 500-1000 MeV/u are presented, indicating the possibility of using the isotopic effect in ChR of RHI as a mass selector.

  1. Characterization of a single-isomer carboxymethyl-beta-cyclodextrin in chiral capillary electrophoresis.

    PubMed

    Fejős, Ida; Varga, Erzsébet; Benkovics, Gábor; Malanga, Milo; Sohajda, Tamás; Szemán, Julianna; Béni, Szabolcs

    2017-08-01

    In this work, the synthesis, characterization, and chiral capillary electrophoretic study of heptakis-(2,3-di-O-methyl-6-O-carboxymethyl)-β-CD (HDMCM), a single-isomer carboxymethylated CD, are presented. The pH-dependent and selector concentration-dependent enantiorecognition properties of HDMCM were investigated and discussed herein. The enantioseparation was assessed using a structurally diverse set of noncharged, basic, and zwitterionic racemates. Increasing the selector concentration and the gross negative charge of HDMCM improved the enantioseparation in the majority of cases. HDMCM was also successfully applied as a BGE additive in NACE using a methanol-based system, in order to demonstrate its separation selectivity features and to highlight its broad applicability. Over 25 racemates showed partial or baseline separation with HDMCM under the conditions investigated, among which optimal enantiomer migration order was found for the four stereoisomers of tadalafil, tapentadol, and dapoxetine, offering the possibility of chiral CE method development for chiral purity profiling of these drugs. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Optical resolution of phenylthiohydantoin-amino acids by capillary electrophoresis and identification of the phenylthiohydantoin-D-amino acid residue of [D-Ala2]-methionine enkephalin.

    PubMed

    Kurosu, Y; Murayama, K; Shindo, N; Shisa, Y; Ishioka, N

    1996-11-01

    This is an initial report proposing a protein sequence analysis system with DL differentiation using capillary electrophoresis (CE). The system consists of a protein sequencer and a CE system. After fractionation of phenylthiohydantoin (PTH)-amino acids using the protein sequencer, optical resolution of each PTH-amino acid is performed by CE using chiral selectors such as digitonin and beta-escin. As a model peptide, [D-Ala2]-methionine enkephalin (L-Tyr-D-Ala-Gly-L-Phe-L-Met) was used, and its sequence with DL differentiation was determined using the proposed system, with the exception of the fourth amino acid, L-Phe.

  3. Diffraction of digital micromirror device gratings and its effect on properties of tunable fiber lasers.

    PubMed

    Chen, Xiao; Yan, Bin-bin; Song, Fei-jun; Wang, Yi-quan; Xiao, Feng; Alameh, Kamal

    2012-10-20

    A digital micromirror device (DMD) is a widely used kind of spatial light modulator. We apply a DMD as the wavelength selector in tunable fiber lasers. Based on two-dimensional diffraction theory, the diffraction of the DMD and its effect on fiber laser parameters are analyzed in detail. The theoretical results show that the diffraction efficiency is strongly dependent upon the angle of the incident light and the pixel spacing of the DMD. Compared with other DMD models, the 0.55 in. DMD grating operates in an approximately blazed state in our configuration, which concentrates most of the diffracted radiation into one order. It is therefore a better choice for improving the stability and reliability of tunable fiber laser systems.
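
    A rough scalar 1-D picture of why a micromirror array can act as a near-blazed grating: order angles follow the grating equation, and each order is weighted by the single-facet envelope centered on the mirrors' specular direction. A minimal sketch (Python); the wavelength, pitch and tilt below are typical illustrative values, not the paper's parameters:

        import numpy as np

        wavelength = 1.55e-6        # m, fiber-laser band (illustrative)
        pitch = 10.8e-6             # m, micromirror pitch (illustrative)
        tilt = np.deg2rad(12.0)     # micromirror tilt angle
        theta_i = np.deg2rad(24.0)  # angle of incidence

        # Grating equation: sin(theta_m) = sin(theta_i) + m * lambda / pitch
        m = np.arange(-8, 9)
        s = np.sin(theta_i) + m * wavelength / pitch
        m, s = m[np.abs(s) <= 1], s[np.abs(s) <= 1]
        theta_m = np.arcsin(s)

        # Single-facet (blaze) envelope: a tilted mirror reflects specularly
        # at theta_spec = 2*tilt - theta_i; orders near it are strongest.
        theta_spec = 2 * tilt - theta_i
        x = np.pi * pitch / wavelength * (np.sin(theta_m) - np.sin(theta_spec))
        envelope = np.sinc(x / np.pi) ** 2   # np.sinc(t) = sin(pi t)/(pi t)

        for order, ang, eff in zip(m, np.rad2deg(theta_m), envelope):
            print(f"order {order:+d}: {ang:7.2f} deg, relative efficiency {eff:.3f}")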

  4. Validated Densitometric TLC-Method for the Simultaneous Analysis of (R)- and (S)-Citalopram and its Related Substances Using Macrocyclic Antibiotic as a Chiral Selector: Application to the Determination of Enantiomeric Purity of Escitalopram

    PubMed Central

    Soliman, Suzan Mahmoud

    2012-01-01

    A novel economic procedure for the simultaneous stereospecific separation and analysis of (R)- and (S)-citalopram and its related substances or impurities has been developed and validated. Chromatography was performed on silica gel 60 F254 plates with acetonitrile:methanol:water (15:2.5:2.5, v/v/v) as a mobile phase containing 1.5 mM norvancomycin or 2.5 mM vancomycin as a selector at ambient temperature. The (R)- and (S)-citalopram enantiomers, in the presence of the related substances citalopram citadiol and citalopram N-oxide, were well separated, with Rf values of 0.33 ± 0.02, 0.85 ± 0.02, 0.45 ± 0.02 and 0.22 ± 0.02, respectively. The spots were detected with either iodine vapor or a UV lamp, followed by densitometric measurement at 239 nm. All variables affecting the resolution, such as the concentration of the chiral selectors and the mobile phase system at different temperatures and pH values, were investigated and the conditions were optimized. Calibration plots for the analysis of the (R)- and (S)-enantiomers were linear in the range of 0.2-16.8 μg/10 μl (R≥0.9994, n=6) with acceptable precision (%RSD<2.0) and accuracy (99.70 ± 0.85% and 99.51 ± 0.61% for (S)-citalopram and escitalopram, respectively). The limits of detection and quantification were 0.08 μg/10 μl and 0.25 μg/10 μl, respectively, for (R)- and (S)-citalopram. The proposed method is simple, selective, and robust and can be applied for quantitative determination of the enantiomeric purity of (R)- and (S)-citalopram (escitalopram) as well as the related impurities in drug substances and pharmaceutical preparations. The method can be useful to investigate adulteration of the pure isomer with the cheap racemic form. PMID:23675256

  5. A chiral enantioseparation generic strategy for anti-Alzheimer and antifungal drugs by short end injection capillary electrophoresis using an experimental design approach.

    PubMed

    Abdel-Megied, Ahmed M; Hanafi, Rasha S; Aboul-Enein, Hassan Y

    2018-02-01

    The present study describes a generic capillary electrophoresis (CE) strategy for the chiral enantioseparation of the anti-Alzheimer drugs donepezil (DON) and rivastigmine (RIV) and the antifungal drugs ketoconazole (KET), itraconazole (ITR), fluconazole (FLU), and sertaconazole (SRT), drugs with different basic and acidic properties. Several modified cyclodextrins (CDs) were applied for the enantioseparation of the racemates, such as highly sulfated α- and γ-CDs, hydroxypropyl-β-CD, and sulfobutyl ether-β-CD. The starting screening conditions consisted of 50-mM phosphate-triethanolamine buffer at pH 2.5, an applied voltage of 15 kV, and a temperature of 25°C. The CE strategy starts with screening prior to an optimization stage in which an experimental design is applied. The design of experiments (DOE) was based on a full factorial design of the two crucial factors (pH and %CD) at three levels, for a total of nine (3²) experiments with high, intermediate, and low values for both factors. Evaluation of the proposed strategy showed that the best resolution was obtained at pH 2.5 for five racemates using low percentages of HS-γ-CD, while SBE-β-CD was the most successful chiral selector, offering acceptable resolution for all six racemates, with the best separation at low pH values and higher %CD within a 10-min runtime. A regression study showed that the linear model has a significant lack of fit for all chiral selectors, suggesting that higher orders of the factors are most likely present in the equation, with possible interactions. © 2017 Wiley Periodicals, Inc.
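
    Enumerating a 3² full factorial design like the one above is a one-liner. A minimal sketch (Python) with hypothetical factor levels; the actual pH and %CD levels would come from the screening stage:

        from itertools import product

        # Illustrative low/intermediate/high levels for the two factors; the
        # concrete values would come from the screening stage.
        levels = {
            "pH": [2.5, 4.5, 6.5],
            "pct_CD": [1.0, 3.0, 5.0],      # % chiral selector in the BGE
        }

        runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
        for i, run in enumerate(runs, 1):   # 3**2 = 9 experiments
            print(f"run {i}: pH={run['pH']}, %CD={run['pct_CD']}")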

  6. The Selector Gene apterous and Notch Are Required to Locally Increase Mechanical Cell Bond Tension at the Drosophila Dorsoventral Compartment Boundary

    PubMed Central

    Michel, Marcus; Aliee, Maryam; Rudolf, Katrin; Bialas, Lisa; Jülicher, Frank; Dahmann, Christian

    2016-01-01

    The separation of cells with distinct fates and functions is important for tissue and organ formation during animal development. Regions of different fates within tissues are often separated from one another along straight boundaries. These compartment boundaries play a crucial role in tissue patterning and growth by stably positioning organizers. In Drosophila, the wing imaginal disc is subdivided into a dorsal and a ventral compartment. Cells of the dorsal, but not ventral, compartment express the selector gene apterous. Apterous expression sets in motion a gene regulatory cascade that leads to the activation of Notch signaling in a few cell rows on either side of the dorsoventral compartment boundary. Both Notch and apterous mutant clones disturb the separation of dorsal and ventral cells. Maintenance of the straight shape of the dorsoventral boundary involves a local increase in mechanical tension at cell bonds along the boundary. The mechanisms by which cell bond tension is locally increased, however, remain unknown. Here we use a combination of laser ablation of cell bonds, quantitative image analysis, and genetic mutants to show that Notch and Apterous are required to increase cell bond tension along the dorsoventral compartment boundary. Moreover, clonal expression of the Apterous target gene capricious results in cell separation and increased cell bond tension at the clone borders. Finally, using a vertex model to simulate tissue growth, we find that an increase in cell bond tension at the borders of cell clones, but not throughout the cell clone, can lead to cell separation. We conclude that Apterous and Notch maintain the characteristic straight shape of the dorsoventral compartment boundary by locally increasing cell bond tension. PMID:27552097

  7. Consistent model identification of varying coefficient quantile regression with BIC tuning parameter selection

    PubMed Central

    Zheng, Qi; Peng, Limin

    2016-01-01

    Quantile regression provides a flexible platform for evaluating covariate effects on different segments of the conditional distribution of response. As the effects of covariates may change with quantile level, contemporaneously examining a spectrum of quantiles is expected to have a better capacity to identify variables with either partial or full effects on the response distribution, as compared to focusing on a single quantile. Under this motivation, we study a general adaptively weighted LASSO penalization strategy in the quantile regression setting, where a continuum of quantile index is considered and coefficients are allowed to vary with quantile index. We establish the oracle properties of the resulting estimator of coefficient function. Furthermore, we formally investigate a BIC-type uniform tuning parameter selector and show that it can ensure consistent model selection. Our numerical studies confirm the theoretical findings and illustrate an application of the new variable selection procedure. PMID:28008212
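
    The BIC-type tuning parameter selection can be sketched for an ordinary L1-penalized quantile regression: fit over a grid of penalties and pick the one minimizing a BIC-style criterion trading mean check loss against active-set size. This is a simplification (a single fixed quantile and one common SIC/BIC form, not the paper's varying-coefficient selector):

        import numpy as np
        from sklearn.linear_model import QuantileRegressor

        rng = np.random.default_rng(1)
        n, p, tau = 200, 10, 0.5
        X = rng.normal(size=(n, p))
        y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)  # sparse truth

        def check_loss(u, tau):
            # Quantile ("pinball") loss averaged over residuals.
            return np.mean(u * (tau - (u < 0)))

        best = None
        for lam in np.logspace(-3, 0, 20):
            fit = QuantileRegressor(quantile=tau, alpha=lam).fit(X, y)
            resid = y - fit.predict(X)
            df = int(np.sum(np.abs(fit.coef_) > 1e-8))     # active-set size
            # One common SIC/BIC-type criterion for penalized quantile
            # regression (assumed form, not the paper's exact selector):
            bic = np.log(check_loss(resid, tau)) + df * np.log(n) / (2 * n)
            if best is None or bic < best[0]:
                best = (bic, lam, fit.coef_.copy())

        _, lam, coef = best
        print(f"selected lambda = {lam:.4g}; active coefficients: "
              f"{np.flatnonzero(np.abs(coef) > 1e-8).tolist()}")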

  8. Unified framework to evaluate panmixia and migration direction among multiple sampling locations.

    PubMed

    Beerli, Peter; Palczewski, Michal

    2010-05-01

    For many biological investigations, groups of individuals are genetically sampled from several geographic locations. These sampling locations often do not reflect the genetic population structure. We describe a framework using marginal likelihoods to compare and order structured population models, such as testing whether the sampling locations belong to the same randomly mating population or comparing unidirectional and multidirectional gene flow models. In the context of inferences employing Markov chain Monte Carlo methods, the accuracy of the marginal likelihoods depends heavily on the approximation method used to calculate the marginal likelihood. Two methods, modified thermodynamic integration and a stabilized harmonic mean estimator, are compared. With finite Markov chain Monte Carlo run lengths, the harmonic mean estimator may not be consistent. Thermodynamic integration, in contrast, delivers considerably better estimates of the marginal likelihood. The choice of prior distributions does not influence the order and choice of the better models when the marginal likelihood is estimated using thermodynamic integration, whereas with the harmonic mean estimator the influence of the prior is pronounced and the order of the models changes. The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.
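
    The mechanics of thermodynamic integration versus the harmonic mean estimator can be shown on a conjugate toy model where every power posterior is exactly samplable and the true marginal likelihood is available in closed form. A minimal sketch (Python); the normal-normal model, β-grid and sample sizes are illustrative, and real applications such as MIGRATE require MCMC at each temperature:

        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(2)
        n = 30
        y = rng.normal(0.5, 1.0, size=n)   # data; sigma = 1, prior mu ~ N(0, 1)

        def log_lik(mu):
            # Vectorized normal log-likelihood over an array of mu draws.
            return (-0.5 * n * np.log(2 * np.pi)
                    - 0.5 * ((y[None, :] - mu[:, None]) ** 2).sum(axis=1))

        def power_posterior(beta, size):
            # Conjugacy: prior x likelihood^beta is again normal, so each
            # "temperature" can be sampled exactly (no MCMC needed here).
            prec = 1.0 + beta * n
            return rng.normal(beta * y.sum() / prec, prec ** -0.5, size=size)

        betas = np.linspace(0.0, 1.0, 32) ** 3          # denser near beta = 0
        e = np.array([log_lik(power_posterior(b, 5000)).mean() for b in betas])
        log_z_ti = np.sum(np.diff(betas) * (e[1:] + e[:-1]) / 2)   # trapezoid

        ll = log_lik(power_posterior(1.0, 5000))        # ordinary posterior
        a = -ll                                         # harmonic mean estimator
        log_z_hm = -(a.max() + np.log(np.mean(np.exp(a - a.max()))))

        exact = multivariate_normal(np.zeros(n), np.eye(n) + np.ones((n, n))).logpdf(y)
        print(f"exact: {exact:.3f}  TI: {log_z_ti:.3f}  HM: {log_z_hm:.3f}")

    On repeated runs the thermodynamic integration estimate stays close to the exact value, while the harmonic mean estimate is noticeably less stable, mirroring the comparison reported above.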

  9. Likelihood-based gene annotations for gap filling and quality assessment in genome-scale metabolic models

    DOE PAGES

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; ...

    2014-10-16

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface.

  10. Likelihood-Based Gene Annotations for Gap Filling and Quality Assessment in Genome-Scale Metabolic Models

    PubMed Central

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.

    2014-01-01

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157
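
    The core idea of likelihood-based gap filling can be caricatured as a weighted search: among candidate reactions that restore producibility of a target metabolite, choose the set minimizing the summed cost -log(likelihood). The toy below (Python) replaces the flux-based producibility check with simple forward reachability, and all reactions and likelihood values are invented for illustration:

        import math
        from itertools import combinations

        draft = {"r1": (["A"], ["B"])}          # existing draft network
        candidates = {                          # reaction: (substrates, products)
            "c1": (["B"], ["C"]),
            "c2": (["B"], ["D"]),
            "c3": (["C", "D"], ["E"]),
            "c4": (["B"], ["E"]),
        }
        likelihood = {"c1": 0.9, "c2": 0.8, "c3": 0.7, "c4": 0.05}
        seeds, target = {"A"}, "E"

        def reachable(reactions):
            # Forward propagation: a stand-in for the flux-based
            # producibility criterion used in real reconstructions.
            mets, changed = set(seeds), True
            while changed:
                changed = False
                for subs, prods in reactions.values():
                    if set(subs) <= mets and not set(prods) <= mets:
                        mets |= set(prods)
                        changed = True
            return mets

        best = None
        for k in range(len(candidates) + 1):
            for combo in combinations(candidates, k):
                net = dict(draft, **{c: candidates[c] for c in combo})
                if target in reachable(net):
                    cost = sum(-math.log(likelihood[c]) for c in combo)
                    if best is None or cost < best[0]:
                        best = (cost, combo)

        print("chosen gap-filling set:", best[1], f"(cost {best[0]:.2f})")

    With these numbers the likelihood-weighted search selects the three well-supported reactions c1-c3, whereas a parsimony criterion (fewest additions) would pick the single poorly supported reaction c4 — the kind of contrast the authors exploit.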

  11. Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders

    2007-01-01

    Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…

  12. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
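
    The profile likelihood recipe itself is generic: fix the parameter of interest on a grid, re-optimize all remaining parameters at each grid point, and read off a confidence interval where the profiled objective rises above a chi-square threshold. A minimal sketch on a deliberately simple exponential-decay model (Python; not the paper's PDE model, and the noise level is assumed known):

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(3)
        t = np.linspace(0, 5, 40)
        a_true, k_true, sigma = 2.0, 0.8, 0.1          # sigma assumed known
        y = a_true * np.exp(-k_true * t) + rng.normal(0, sigma, t.size)

        def nll(a, k):
            r = y - a * np.exp(-k * t)
            return 0.5 * np.sum(r ** 2) / sigma ** 2   # negative log-likelihood

        # Profile likelihood for k: re-optimize the nuisance parameter a at
        # each fixed k (a has a closed form here, but we optimize numerically
        # to mirror the general recipe).
        k_grid = np.linspace(0.5, 1.2, 81)
        profile = np.array([
            minimize_scalar(lambda a, k=k: nll(a, k),
                            bounds=(0.1, 10.0), method="bounded").fun
            for k in k_grid
        ])

        pl = profile - profile.min()
        ci = k_grid[pl <= 1.92]     # 3.84/2: chi-square(1) 95% threshold
        print(f"approx. 95% profile CI for k: [{ci.min():.3f}, {ci.max():.3f}]")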

  13. A long-term earthquake rate model for the central and eastern United States from smoothed seismicity

    USGS Publications Warehouse

    Moschetti, Morgan P.

    2015-01-01

    I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
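
    The quantity at the heart of such an L-test is the joint log-likelihood of the observed gridded counts under the forecast, usually with independent Poisson cells. A minimal sketch (Python); the 1-D "grid" and rates are invented for illustration:

        import numpy as np
        from scipy.special import gammaln

        def poisson_joint_loglik(forecast, observed):
            # Joint log-likelihood of gridded counts under independent
            # Poisson cells: the score compared across forecasts in an L-test.
            f = np.asarray(forecast, dtype=float)
            x = np.asarray(observed, dtype=float)
            return np.sum(x * np.log(f) - f - gammaln(x + 1))

        # Illustrative 1-D "grid": a spatially concentrated forecast versus
        # a uniform-rate reference, scored on the same observed counts.
        observed = np.array([0, 3, 5, 1, 0, 0])
        smoothed = np.array([0.2, 2.5, 4.0, 1.5, 0.5, 0.3])
        uniform = np.full(6, observed.sum() / 6)

        print("smoothed forecast:", poisson_joint_loglik(smoothed, observed))
        print("uniform forecast :", poisson_joint_loglik(uniform, observed))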

  14. Likelihood testing of seismicity-based rate forecasts of induced earthquakes in Oklahoma and Kansas

    USGS Publications Warehouse

    Moschetti, Morgan P.; Hoover, Susan M.; Mueller, Charles

    2016-01-01

    Likelihood testing of induced earthquakes in Oklahoma and Kansas has identified the parameters that optimize the forecasting ability of smoothed seismicity models and quantified the recent temporal stability of the spatial seismicity patterns. Use of the most recent 1-year period of earthquake data and use of 10–20-km smoothing distances produced the greatest likelihood. The likelihood that the locations of January–June 2015 earthquakes were consistent with optimized forecasts decayed with increasing elapsed time between the catalogs used for model development and testing. Likelihood tests with two additional sets of earthquakes from 2014 exhibit a strong sensitivity of the rate of decay to the smoothing distance. Marked reductions in likelihood are caused by the nonstationarity of the induced earthquake locations. Our results indicate a multiple-fold benefit from smoothed seismicity models in developing short-term earthquake rate forecasts for induced earthquakes in Oklahoma and Kansas, relative to the use of seismic source zones.

  15. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
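
    The flavor of the correction can be illustrated on a linear regression with AR(1) noise: the usual inverse-information covariance assumes white residuals, while a sandwich form that inserts the empirical residual autocovariance (here with a Bartlett taper, Newey-West style) widens the standard errors appropriately. This is a simplified analogue written for least squares, not the paper's exact expression for maximum likelihood estimates:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 400
        X = np.column_stack([np.ones(n), np.linspace(0, 10, n)])
        theta_true = np.array([1.0, 0.5])

        e = np.zeros(n)                       # AR(1) (colored) residual noise
        for i in range(1, n):
            e[i] = 0.8 * e[i - 1] + rng.normal(0, 0.3)
        y = X @ theta_true + e

        theta = np.linalg.lstsq(X, y, rcond=None)[0]
        r = y - X @ theta
        Minv = np.linalg.inv(X.T @ X)

        # Naive covariance, valid only for white residuals:
        cov_white = Minv * r.var(ddof=2)

        # Sandwich correction: insert the empirical residual autocovariance
        # R(k) into the middle term, tapered to control truncation effects.
        lag_max = 50
        mid = np.mean(r * r) * (X.T @ X)
        for k in range(1, lag_max + 1):
            w = 1.0 - k / (lag_max + 1.0)     # Bartlett taper
            Rk = np.mean(r[:-k] * r[k:])
            C = X[k:].T @ X[:-k]
            mid += w * Rk * (C + C.T)
        cov_colored = Minv @ mid @ Minv

        print("white-noise std errors  :", np.sqrt(np.diag(cov_white)))
        print("colored-aware std errors:", np.sqrt(np.diag(cov_colored)))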

  16. Tests for detecting overdispersion in models with measurement error in covariates.

    PubMed

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Optimized pulsed write schemes improve linearity and write speed for low-power organic neuromorphic devices

    NASA Astrophysics Data System (ADS)

    Keene, Scott T.; Melianas, Armantas; Fuller, Elliot J.; van de Burgt, Yoeri; Talin, A. Alec; Salleo, Alberto

    2018-06-01

    Neuromorphic devices are becoming increasingly appealing as efficient emulators of the neural networks used to model real-world problems. However, no hardware to date has demonstrated the necessary high accuracy and energy-efficiency gain over CMOS in both (1) training via backpropagation and (2) read via vector-matrix multiplication. Such shortcomings are due to device non-idealities, particularly asymmetric conductance tuning in response to uniform voltage pulse inputs. Here, by formulating a general circuit model for capacitive ion-exchange neuromorphic devices, we show that asymmetric nonlinearity in organic electrochemical neuromorphic devices (ENODes) can be suppressed by an appropriately chosen write scheme. Simulations based upon our model suggest that a nonlinear write-selector could reduce the switching voltage and energy, enabling analog tuning via a continuous set of resistance states (100 states) with extremely low switching energy (~170 fJ·µm⁻²). This work clarifies the pathway to neural algorithm accelerators capable of parallelism during both read and write operations.
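
    A toy version of the underlying problem: when identical pulses produce state-dependent, asymmetric conductance steps, open-loop writes miss their target, while a pulsed write-and-verify scheme bounds the error at the cost of extra pulses. The update rule and all parameters below are invented for illustration; they are not the paper's circuit model or its optimized pulse scheme:

        import numpy as np

        G_MIN, G_MAX = 0.0, 1.0

        def pulse(G, up, nonlin=4.0):
            # Identical pulses change G less as it nears a bound, and the
            # up/down branches differ: an asymmetric nonlinearity.
            span = (G_MAX - G) if up else (G - G_MIN)
            step = 0.05 * (1 - np.exp(-nonlin * span)) / (1 - np.exp(-nonlin))
            return min(G + step, G_MAX) if up else max(G - step, G_MIN)

        def write_blind(G, target):
            # Open-loop: assume a constant 0.05 step and fire the matching
            # number of pulses without reading back.
            up = target > G
            for _ in range(int(round(abs(target - G) / 0.05))):
                G = pulse(G, up)
            return G

        def write_verify(G, target, tol=0.02, max_pulses=100):
            # Pulsed write with verify: read back after every pulse and
            # stop once inside the tolerance band.
            for _ in range(max_pulses):
                if abs(G - target) <= tol:
                    break
                G = pulse(G, target > G)
            return G

        for target in (0.2, 0.5, 0.8):
            b, v = write_blind(0.5, target), write_verify(0.5, target)
            print(f"target {target:.2f}: open-loop error {abs(b - target):.3f}, "
                  f"write-verify error {abs(v - target):.3f}")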

  18. New mathematic model for predicting chiral separation using molecular docking: mechanism of chiral recognition of triadimenol analogues.

    PubMed

    Zhang, Guoqing; Sun, Qingyan; Hou, Ying; Hong, Zhanying; Zhang, Jun; Zhao, Liang; Zhang, Hai; Chai, Yifeng

    2009-07-01

    The purpose of this paper was to study the enantioseparation mechanism of triadimenol compounds by carboxymethylated (CM)-beta-CD-mediated CE. All the enantiomers were separated under the same experimental conditions, a 30 mM sodium dihydrogen phosphate buffer at pH 2.2 adjusted with phosphoric acid, to study the chiral recognition mechanism. The inclusion processes between CM-beta-CD and the enantiomers were investigated by means of a molecular docking technique. It was found that at least three interaction points (one hydrophobic bond and two hydrogen bonds) were involved in the interaction of each enantiomer with the chiral selectors. A new mathematical model was built based on the results of molecular mechanics calculations, which relates the enantioseparation resolution to the interaction energy in the docking area. Compared against the CE separation results, the established mathematical model demonstrated good capability to predict the chiral separation of triadimenol enantiomers by CM-beta-CD-mediated CE.

  19. Business Activity Monitoring: Real-Time Group Goals and Feedback Using an Overhead Scoreboard in a Distribution Center

    ERIC Educational Resources Information Center

    Goomas, David T.; Smith, Stuart M.; Ludwig, Timothy D.

    2011-01-01

    Companies operating large industrial settings often find delivering timely and accurate feedback to employees to be one of the toughest challenges they face in implementing performance management programs. In this report, an overhead scoreboard at a retailer's distribution center informed teams of order selectors as to how many tasks were…

  20. NASA atomic hydrogen standards program - An update

    NASA Technical Reports Server (NTRS)

    Reinhardt, V. S.; Kaufmann, D. C.; Adams, W. A.; Deluca, J. J.; Soucy, J. L.

    1976-01-01

    Some of the design features of NASA hydrogen masers are discussed, including the large hydrogen source bulb, the palladium purifier, the state selector, the replaceable pumps, the small entrance stem, the magnetic shields, the elongated storage bulb, the aluminum cavity, the electronics package, and the autotuner. Attention is also given to the reliability and operating life of these atomic hydrogen standards.

  1. 40 CFR 86.309-79 - Sampling and analytical system; schematic drawing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... or parts of components that are wetted by the sample or corrosive calibration gases shall be either... must be within 2 inches of the analyzer entrance port. (vi) Calibration or span gases for the NOX... calibration gases. (ii) V2—optional heated selector valve to purge the sample probe, perform leak checks, or...

  2. 40 CFR 86.309-79 - Sampling and analytical system; schematic drawing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... or parts of components that are wetted by the sample or corrosive calibration gases shall be either... must be within 2 inches of the analyzer entrance port. (vi) Calibration or span gases for the NOX... calibration gases. (ii) V2—optional heated selector valve to purge the sample probe, perform leak checks, or...

  3. 14 CFR Appendix E to Part 135 - Helicopter Flight Recorder Specifications

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Keying On-Off (Discrete) 1 0.25 sec Power in Each Engine: Free Power Turbine Speed and Engine Torque 0... Hydraulic Pressure Low Discrete, each circuit 1 Flight Control Hydraulic Pressure Selector Switch Position, 1st and 2nd stage Discrete 1 AFCS Mode and Engagement Status Discrete (5 bits necessary) 1 Stability...

  4. Study of Man-Machine Communications Systems for Disabled Persons (The Handicapped). Volume VII. Final Report.

    ERIC Educational Resources Information Center

    Kafafian, Haig

    Teaching instructions, lesson plans, and exercises are provided for severely physically and/or neurologically handicapped persons learning to use the Cybertype electric writing machine with a tongue-body keyboard. The keyboard, which has eight double-throw toggle switches and a three-position state-selector switch, is designed to be used by…

  5. Managing Selection for Electronic Resources: Kent State University Develops a New System to Automate Selection

    ERIC Educational Resources Information Center

    Downey, Kay

    2012-01-01

    Kent State University has developed a centralized system that manages the communication and work related to the review and selection of commercially available electronic resources. It is an automated system that tracks the review process, provides selectors with price and trial information, and compiles reviewers' feedback about the resource. It…

  6. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  7. Multiple robustness in factorized likelihood models.

    PubMed

    Molina, J; Rotnitzky, A; Sued, M; Robins, J M

    2017-09-01

    We consider inference under a nonparametric or semiparametric model with likelihood that factorizes as the product of two or more variation-independent factors. We are interested in a finite-dimensional parameter that depends on only one of the likelihood factors and whose estimation requires the auxiliary estimation of one or several nuisance functions. We investigate general structures conducive to the construction of so-called multiply robust estimating functions, whose computation requires postulating several dimension-reducing models but which have mean zero at the true parameter value provided one of these models is correct.

  8. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, estimating functions have been widely used for model fitting when likelihood-based approaches are not desirable. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators of the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible, however, owing to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we studied the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to more general settings than the original quasi-likelihood method.

  9. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    PubMed

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG-based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and with classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best-performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood and by 34% compared to random classification. We found that the original DRG, the coder, and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
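
    The stabilizing effect of a weakly informative prior is easy to demonstrate: the MAP estimate of a Bayesian logistic regression maximizes log-likelihood plus log-prior, and with an N(0, 2.5²) prior it behaves like a lightly ridge-penalized fit. A minimal sketch (Python); the simulated data and prior scale are illustrative, and the study's models are fit fully Bayesianly rather than by MAP:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        n, p = 300, 5
        X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
        beta_true = np.array([-1.0, 1.5, 0.0, -2.0, 0.5])
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))

        def neg_log_posterior(beta, scale=2.5):
            # Bernoulli log-likelihood (stable via logaddexp) plus a weakly
            # informative N(0, scale^2) prior on every coefficient.
            eta = X @ beta
            loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
            logprior = -0.5 * np.sum(beta ** 2) / scale ** 2
            return -(loglik + logprior)

        map_fit = minimize(neg_log_posterior, np.zeros(p), method="BFGS")
        mle_fit = minimize(lambda b: neg_log_posterior(b, scale=1e6),
                           np.zeros(p), method="BFGS")  # ~flat prior ~ ML
        print("MAP estimate:", np.round(map_fit.x, 2))
        print("~ML estimate:", np.round(mle_fit.x, 2))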

  10. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples-thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities-to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Ischemic stroke lesion segmentation in multi-spectral MR images with support vector machine classifiers

    NASA Astrophysics Data System (ADS)

    Maier, Oskar; Wilms, Matthias; von der Gablentz, Janina; Krämer, Ulrike; Handels, Heinz

    2014-03-01

    Automatic segmentation of ischemic stroke lesions in magnetic resonance (MR) images is important in clinical practice and for neuroscientific trials. The key problem is to detect largely inhomogeneous regions of varying sizes, shapes and locations. We present a stroke lesion segmentation method based on local features extracted from multi-spectral MR data that are selected to model a human observer's discrimination criteria. A support vector machine classifier is trained on expert-segmented examples and then used to classify formerly unseen images. Leave-one-out cross validation on eight datasets with lesions of varying appearances is performed, showing our method to compare favourably with other published approaches in terms of accuracy and robustness. Furthermore, we compare a number of feature selectors and closely examine each feature's and MR sequence's contribution.
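
    The training/evaluation loop described — voxel-wise features, an SVM classifier, and leave-one-patient-out cross-validation — maps directly onto standard tooling. A minimal sketch with synthetic stand-in features (Python/scikit-learn; extracting real features from multi-spectral MR data is the part this sketch does not attempt):

        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(8)
        # Stand-ins for voxel-wise multi-spectral features (intensities from
        # several MR sequences plus local context), one group per patient.
        n_vox, n_feat, n_patients = 2000, 12, 8
        X = rng.normal(size=(n_vox, n_feat))
        groups = rng.integers(0, n_patients, size=n_vox)
        w = rng.normal(size=n_feat)
        y = (X @ w + rng.normal(0, 2.0, n_vox)) > 0    # synthetic lesion labels

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
        print("leave-one-patient-out accuracy per fold:", np.round(scores, 3))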

  12. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
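
    The computational trick can be demonstrated in isolation: an integral of the form P(Z1 <= a1, ..., Zd <= ad) for a correlated Gaussian vector is exactly a multivariate normal CDF, which SciPy evaluates directly, with no simulation of the latent process. A toy check (Python); the covariance kernel and thresholds are illustrative, not the psoriatic arthritis model:

        import numpy as np
        from scipy.stats import multivariate_normal, norm

        rng = np.random.default_rng(6)

        # P(latent Gaussian process below thresholds at d visits) is exactly
        # a multivariate normal CDF; no simulation of the process is needed.
        d = 8
        times = np.linspace(0, 1, d)
        cov = np.exp(-np.abs(times[:, None] - times[None, :]) / 0.5)  # OU-like
        thresholds = norm.ppf(0.7) * np.ones(d)   # marginal P(below) = 0.7

        p_cdf = multivariate_normal(mean=np.zeros(d), cov=cov).cdf(thresholds)

        # Monte Carlo check of the same high-dimensional integral.
        z = rng.multivariate_normal(np.zeros(d), cov, size=200_000)
        p_mc = np.mean(np.all(z <= thresholds, axis=1))

        print(f"MVN CDF evaluation: {p_cdf:.4f}")
        print(f"Monte Carlo check : {p_mc:.4f}")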

  13. Genealogical Working Distributions for Bayesian Model Testing with Phylogenetic Uncertainty

    PubMed Central

    Baele, Guy; Lemey, Philippe; Suchard, Marc A.

    2016-01-01

    Marginal likelihood estimates to compare models using Bayes factors frequently accompany Bayesian phylogenetic inference. Approaches to estimate marginal likelihoods have garnered increased attention over the past decade. In particular, the introduction of path sampling (PS) and stepping-stone sampling (SS) into Bayesian phylogenetics has tremendously improved the accuracy of model selection. These sampling techniques are now used to evaluate complex evolutionary and population genetic models on empirical data sets, but considerable computational demands hamper their widespread adoption. Further, when very diffuse, but proper priors are specified for model parameters, numerical issues complicate the exploration of the priors, a necessary step in marginal likelihood estimation using PS or SS. To avoid such instabilities, generalized SS (GSS) has recently been proposed, introducing the concept of “working distributions” to facilitate—or shorten—the integration process that underlies marginal likelihood estimation. However, the need to fix the tree topology currently limits GSS in a coalescent-based framework. Here, we extend GSS by relaxing the fixed underlying tree topology assumption. To this purpose, we introduce a “working” distribution on the space of genealogies, which enables estimating marginal likelihoods while accommodating phylogenetic uncertainty. We propose two different “working” distributions that help GSS to outperform PS and SS in terms of accuracy when comparing demographic and evolutionary models applied to synthetic data and real-world examples. Further, we show that the use of very diffuse priors can lead to a considerable overestimation in marginal likelihood when using PS and SS, while still retrieving the correct marginal likelihood using both GSS approaches. The methods used in this article are available in BEAST, a powerful user-friendly software package to perform Bayesian evolutionary analyses. PMID:26526428
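
    Plain stepping-stone sampling — the starting point that GSS generalizes by swapping the prior for a working distribution — can be sketched on the same conjugate toy used for the thermodynamic-integration sketch earlier in this listing, where every power posterior is exactly samplable and the true marginal likelihood is known. All settings are illustrative:

        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(7)
        n = 25
        y = rng.normal(0.3, 1.0, size=n)   # data; sigma = 1, prior mu ~ N(0, 1)

        def log_lik(mu):
            return (-0.5 * n * np.log(2 * np.pi)
                    - 0.5 * ((y[None, :] - mu[:, None]) ** 2).sum(axis=1))

        def power_posterior(beta, size):
            prec = 1.0 + beta * n          # conjugate, exactly samplable
            return rng.normal(beta * y.sum() / prec, prec ** -0.5, size=size)

        # Stepping stones: log Z = sum_k logmeanexp[(b_{k+1}-b_k) * loglik],
        # with samples drawn from the power posterior at b_k.
        betas = np.linspace(0.0, 1.0, 33) ** 3
        log_z = 0.0
        for b0, b1 in zip(betas[:-1], betas[1:]):
            u = (b1 - b0) * log_lik(power_posterior(b0, 4000))
            log_z += u.max() + np.log(np.mean(np.exp(u - u.max())))

        exact = multivariate_normal(np.zeros(n), np.eye(n) + np.ones((n, n))).logpdf(y)
        print(f"stepping-stone: {log_z:.3f}   exact: {exact:.3f}")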

  14. Performance, Accuracy, Data Delivery, and Feedback Methods in Order Selection: A Comparison of Voice, Handheld, and Paper Technologies

    ERIC Educational Resources Information Center

    Ludwig, Timothy D.; Goomas, David T.

    2007-01-01

    Field study was conducted in auto-parts after-market distribution centers where selectors used handheld computers to receive instructions and feedback about their product selection process. A wireless voice-interaction technology was then implemented in a multiple baseline fashion across three departments of a warehouse (N = 14) and was associated…

  15. Direct HPLC separation of beta-aminoester enantiomers on totally synthetic chiral stationary phases.

    PubMed

    Gasparrini, F; D'Acquarica, I; Villani, C; Cimarelli, C; Palmieri, G

    1997-01-01

    The direct separation of beta-aminoester enantiomers by HPLC on synthetic chiral stationary phases based on a pi-acidic derivative of trans 1,2-diaminocyclohexane as selector is described. The application of different columns containing the stationary phase with opposite configurations and in the racemic form to the determination of enantiomeric excess in chemically impure samples is demonstrated.

  16. New Roads for Patron-Driven E-Books: Collection Development and Technical Services Implications of a Patron-Driven Acquisitions Pilot at Rutgers

    ERIC Educational Resources Information Center

    De Fino, Melissa; Lo, Mei Ling

    2011-01-01

    Collection development librarians have long struggled to meet user demands for new titles. Too often, required resources are not purchased, whereas some purchased resources do not circulate. E-books selected through patron-driven plans are a solution but present new challenges for both selectors and catalogers. Radical changes to traditional…

  17. Immediate Feedback on Accuracy and Performance: The Effects of Wireless Technology on Food Safety Tracking at a Distribution Center

    ERIC Educational Resources Information Center

    Goomas, David T.

    2012-01-01

    The effects of wireless ring scanners, which provided immediate auditory and visual feedback, were evaluated to increase the performance and accuracy of order selectors at a meat distribution center. The scanners not only increased performance and accuracy compared to paper pick sheets, but were also instrumental in immediate and accurate data…

  18. The beetle Tribolium castaneum has a fushi tarazu homolog expressed in stripes during segmentation

    NASA Technical Reports Server (NTRS)

    Brown, S. J.; Hilgenfeld, R. B.; Denell, R. E.; Spooner, B. S. (Principal Investigator)

    1994-01-01

    The genetic control of embryonic organization is far better understood for the fruit fly Drosophila melanogaster than for any other metazoan. A gene hierarchy acts during oogenesis and embryogenesis to regulate the establishment of segmentation along the anterior-posterior axis, and homeotic selector genes define developmental commitments within each parasegmental unit delineated. One of the most intensively studied Drosophila segmentation genes is fushi tarazu (ftz), a pair-rule gene expressed in stripes that is important for the establishment of the parasegmental boundaries. Although ftz is flanked by homeotic selector genes conserved throughout the metazoa, there is no evidence that it was part of the ancestral homeotic complex, and it has been unclear when the gene arose and acquired a role in segmentation. We show here that the beetle Tribolium castaneum has a ftz homolog located in its Homeotic complex and expressed in a pair-rule fashion, albeit in a register differing from that of the fly gene. These and other observations demonstrate that a ftz gene preexisted the radiation of holometabolous insects and suggest that it has a role in beetle embryogenesis which differs somewhat from that described in flies.

  19. ISS Solar Array Management

    NASA Technical Reports Server (NTRS)

    Williams, James P.; Martin, Keith D.; Thomas, Justin R.; Caro, Samuel

    2010-01-01

    The International Space Station (ISS) Solar Array Management (SAM) software toolset provides the capabilities necessary to operate a spacecraft with complex solar array constraints. It monitors spacecraft telemetry and provides interpretations of solar array constraint data in an intuitive manner. The toolset provides extensive situational awareness to ensure mission success by analyzing power generation needs, array motion constraints, and structural loading situations. The software suite consists of several components including samCS (constraint set selector), samShadyTimers (array shadowing timers), samWin (visualization GUI), samLock (array motion constraint computation), and samJet (attitude control system configuration selector). It provides high availability and uptime for extended and continuous mission support. It is able to support two-degrees-of-freedom (DOF) array positioning and supports up to ten simultaneous constraints with intuitive 1D and 2D decision support visualizations of constraint data. Display synchronization is enabled across a networked control center and multiple methods for constraint data interpolation are supported. Use of this software toolset increases flight safety, reduces mission support effort, optimizes solar array operation for achieving mission goals, and has run for weeks at a time without issues. The SAM toolset is currently used in ISS real-time mission operations.

  20. Three-dimensional crossbar arrays of self-rectifying Si/SiO2/Si memristors

    DOE PAGES

    Li, Can; Han, Lili; Jiang, Hao; ...

    2017-06-05

    Memristors are promising building blocks for next-generation memory, unconventional computing systems and beyond. The materials commonly used to build memristors are not necessarily compatible with the silicon-dominant complementary metal-oxide-semiconductor (CMOS) technology. Furthermore, external selector devices or circuits are usually required for large memristor arrays to function properly, resulting in increased circuit complexity. Here we demonstrate fully CMOS-compatible, all-silicon-based and self-rectifying memristors that negate the need for external selectors in large arrays. Each device consists of p- and n-type doped single-crystalline silicon electrodes and a thin, chemically produced silicon oxide switching layer. The devices exhibit repeatable resistance switching behavior with a high rectifying ratio (10^5), a high ON/OFF conductance ratio (10^4) and attractive retention at 300 °C. We further build a 5-layer three-dimensional (3D) crossbar array of 100 nm memristors by stacking fluid-supported silicon membranes. The CMOS compatibility and self-rectifying behavior open up opportunities for mass production of memristor arrays and 3D hybrid circuits on full-wafer-scale silicon and flexible substrates without increasing circuit complexity.

  1. A New Class of Macrocyclic Chiral Selectors for Stereochemical Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1999-03-11

    This report summarizes the work accomplished in the authors' laboratories over the previous three years. During the funding period they had 23 monographs published or in press and 1 book chapter, were issued 1 patent, and delivered 28 invited seminars or plenary lectures on DOE-sponsored research. This report covers the work that has been published (or accepted). The most notable aspect of this work is the successful development and understanding of a new class of fused macrocyclic compounds as pseudophases and selectors in high-performance separations (including high-performance liquid chromatography, HPLC; capillary electrophoresis, CE; and thin-layer chromatography, TLC). The authors considerably extended their chiral biomarker work from amber to crude oil and coal, developing several novel separation approaches in the process. They finished their work on the new GSC-PLOT column, which is now being used by researchers worldwide for the analysis of gases, light hydrocarbons and halocarbons. Finally, they completed basic studies on immobilizing a cyclodextrin/oligosiloxane hybrid on the wall of fused silica, as well as on the separation behavior of buckminsterfullerene and higher fullerenes.

  2. Poly-proline-based chiral stationary phases: a molecular dynamics study of triproline, tetraproline, pentaproline and hexaproline interfaces.

    PubMed

    Ashtari, M; Cann, N M

    2012-11-23

    Poly-proline chains and derivatives have recently been examined as the basis for new chiral stationary phases in high-performance liquid chromatography. The selectivity of poly-proline has been measured for peptides with up to ten proline units. In this article, we employ molecular dynamics simulations to examine the interfacial structure and solvation of surface-bound poly-proline chiral selectors. Specifically, we study the interfacial structure of trimethylacetyl-terminated poly-proline chains with three to six prolines. The surface includes silanol groups and end-caps, to better capture the characteristics of the stationary phase, and the solvent is either a polar water/methanol or a relatively apolar n-hexane/2-propanol mixture. We begin with a comprehensive ab initio study of the conformers, their energies, and an assessment of conformer flexibility. Force fields have been developed for each poly-proline selector. Molecular dynamics simulations are employed to study the preferred backbone conformations and solvent hydrogen bonding for different poly-proline/solvent interfaces. For triproline, the effects of two different terminal groups, trimethylacetyl and t-butyl carbamate, are compared. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Transgene expression of green fluorescent protein and germ line transmission in cloned pigs derived from in vitro transfected adult fibroblasts.

    PubMed

    Brunetti, Dario; Perota, Andrea; Lagutina, Irina; Colleoni, Silvia; Duchi, Roberto; Calabrese, Fiorella; Seveso, Michela; Cozzi, Emanuele; Lazzari, Giovanna; Lucchini, Franco; Galli, Cesare

    2008-12-01

    The pig represents the xenogeneic donor of choice for future organ transplantation in humans for anatomical and physiological reasons. However, to bypass several immunological barriers, strong and stable expression of human genes must occur in the pig's organs. In this study we created transgenic pigs using in vitro transfection of cultured cells combined with somatic cell nuclear transfer (SCNT) to evaluate the ubiquitous transgene expression driven by the pCAGGS vector in the presence of different selectors. pCAGGS was confirmed to be a very effective vector for ubiquitous transgene expression, irrespective of the selector used. Green fluorescent protein (GFP) expression observed in transfected fibroblasts was also maintained after nuclear transfer, through pre- and postimplantation development, at birth and during adulthood. Germ line transmission without silencing of the transgene was demonstrated. The ubiquitous expression of GFP was clearly confirmed in several tissues, including endothelial cells, making it a suitable vector for the expression of multiple genes relevant to xenotransplantation where tissue specificity is not required. Finally, cotransfection of green and red fluorescent protein transgenes was performed in fibroblasts, and after nuclear transfer blastocysts expressing both fluorescent proteins were obtained.

  4. Analysis of optical purity and impurity of synthetic D-phenylalanine products using sulfated beta-cyclodextrin as chiral selector by reversed-polarity capillary electrophoresis.

    PubMed

    Zhao, Yan; Yang, Xing-Bin; Jiang, Ru; Sun, Xiao-Li; Li, Xiao-Ye; Liu, Wen-Min; Zhang, Sheng-Yong

    2006-02-01

    A new capillary electrophoresis (CE) method has been achieved for simultaneous separation and quantification of phenylalanine, N-acetylphenylalanine enantiomers, and prochiral N-acetylaminocinnamic acid, possibly co-existent in reaction systems or synthesized products of D-phenylalanine. The separation was carried out in an uncoated capillary under reversed-electrophoretic mode. Among the diverse charged cyclodextrins (CDs) examined, highly sulfated (HS)-beta-CD as the chiral selector exhibited the best enantioselectivity. The complete separation of the analytes was obtained under the optimum conditions of pH 2.5, 35 mM Tris buffer containing 4% HS-beta-CD, applied voltage -15 kV, and capillary temperature 25 degrees C. Furthermore, the proposed method was applied to the determination of optical purity and trace impurities in three batches of the asymmetric synthetic samples of D-phenylalanine, and satisfactory results were obtained. The determination recoveries of the samples were in the range of 97.8-103.8%, and precisions fell within 2.3-5.0% (RSD). The results demonstrate that this CE method is a useful, simple technique and is applicable to purity assays of D-phenylalanine. (c) 2005 Wiley-Liss, Inc.

  5. Influence of steric hindrance on enantioseparation of Dns-amino acids and pesticides on terguride based chiral selectors in capillary electrophoresis.

    PubMed

    Honzátko, Ales; Cvak, Jan; Vaingátová, Silvie; Flieger, Miroslav

    2005-05-01

    Three urea derivatives of ergoline-based chiral selectors (CSs), differing in the size of the urea side chain, i.e., dimethyl- (CSI), diethyl- (CSII), and diisopropylurea (CSIII), were used to study the effect of steric hindrance on the enantioseparation of dansyl amino acids (Dns-AAs), pesticides, and mandelic acid under conditions of capillary electrophoresis (CE) in linear polyacrylamide-coated capillaries. A mixture of organic modifiers (MeOH/THF, 4:1 v/v) in a BGE consisting of 100 mM beta-alanine-acetate was used to increase the solubility of the CSs up to 25 mM. The capillary was filled with CS (high UV absorption), and the inlet and outlet vials contained buffer solutions only. The best enantioseparation of Dns-AAs was achieved on CSI. Increased steric hindrance of the chiral binding site led to reduction of both enantioselectivity and resolution. The opposite pattern was observed for the separation of mandelic acid enantiomers, where the best enantioseparation and resolution were obtained with CSIII. Most of the pesticides studied reached maximum selectivity on the diethylurea ergoline derivative (CSII). Enantioseparation of fenoxaprop was found to be independent of steric hindrance.

  6. Analysis of repaglinide enantiomers in pharmaceutical formulations by capillary electrophoresis using 2,6-di-o-methyl-β-cyclodextrin as a chiral selector.

    PubMed

    Li, Cen; Jiang, Ye

    2012-09-01

    This study used the general applicability of 2,6-di-o-methyl-β-cyclodextrin (DM-β-CD) as the chiral selector in capillary electrophoresis for fast and efficient chiral separation of repaglinide enantiomers. A systematic study of the parameters affecting separation was performed with UV detection at 243 nm. The optimum conditions were determined to be 1.25% (w/v) DM-β-CD in 20 mM sodium phosphate (pH 2.5) as the running buffer and a separation voltage of 20 kV. DM-β-CD had the best enantiomer resolution properties under the tested conditions, whereas other β-cyclodextrins showed inferior resolution or none at all. The proposed method had a linear calibration curve in the concentration range of 12.5-400 µg/mL. The limit of detection was 100 ng/mL. The intra-day and inter-day precisions were 2.8 and 3.2%, respectively. Recoveries of 97.9-100.9% were obtained. The proposed method was fast and convenient, and was determined to be efficient for separating enantiomers and applicable for analyzing repaglinide enantiomers in quality control of pharmaceutical production.

  7. Chiral separation of phenylalanine and tryptophan by capillary electrophoresis using a mixture of β-CD and chiral ionic liquid ([TBA] [L-ASP]) as selectors.

    PubMed

    Yujiao, Wu; Guoyan, Wang; Wenyan, Zhao; Hongfen, Zhang; Huanwang, Jing; Anjia, Chen

    2014-05-01

    In this paper, a simple, effective and green capillary electrophoresis separation and detection method was developed for the quantification of underivatized amino acids (DL-phenylalanine; DL-tryptophan) using β-cyclodextrin and a chiral ionic liquid ([TBA] [L-ASP]) as selectors. Separation parameters such as buffer concentration, pH, β-CD and chiral ionic liquid concentrations, and separation voltage were investigated in order to achieve the maximum possible enantioresolution. A good separation was achieved in a background electrolyte composed of 15 mM sodium tetraborate, 5 mM β-CD and 4 mM chiral ionic liquid at pH 9.5, with an applied voltage of 10 kV. Under optimum conditions, linearity was achieved within concentration ranges from 0.08 to 10 µg/mL for the analytes, with correlation coefficients from 0.9956 to 0.9998, and the analytes were separated in less than 6 min with efficiencies up to 970,000 plates/m. The proposed method was successfully applied to the determination of amino acid enantiomers in compound amino acid injections, such as 18AA-I, 18AA-II and 3AA.

  8. Micromechanical slit positioning system as a transmissive spatial light modulator

    NASA Astrophysics Data System (ADS)

    Riesenberg, Rainer

    2001-11-01

    Micro-slits have been prepared with slit-widths and slit-lengths of 2 to 1000 micrometers. Linear and two-dimensional arrays of up to 10 x 110 slits have been developed and completed with a piezo-actuator for shifting. This system is a so-called mechanical slit positioning system. The light is switched by simple one- or two-dimensional displacement of coded slit masks in a one- or two-layer architecture. The slit positioning system belongs to the transmissive class of MEMS-based spatial light modulators (SLM). It has fundamental advantages for optical contrast and also can be used in the full spectral region. Therefore transmissive versions of SLM should be a future solution. Instrument architectures based on the slit positioning system can increase the resolution by subpixel generation, the throughput by a HADAMARD transform mode, or select objects for multi-object spectroscopy. The linear slit positioning system was space qualified within an advanced micro-spectrometer. A NIR multi-object spectrometer for the Next Generation Space Telescope (NGST) is based on a field selector for selecting objects. The field selector is an SLM, which could be implemented by a slit positioning system.

  9. Using a Novel Wireless-Networked Decentralized Control Scheme under Unpredictable Environmental Conditions

    PubMed Central

    Chang, Chung-Liang; Huang, Yi-Ming; Hong, Guo-Fong

    2015-01-01

    The direction of sunshine and the installation sites of environmental control facilities in a greenhouse result in different temperature and humidity levels in its various zones, so the production quality of crops is inconsistent. This study proposed a wireless-networked decentralized fuzzy control scheme to regulate the environmental parameters of the various culture zones within a greenhouse. The proposed scheme can create different environmental conditions for cultivating different crops in the various zones and achieve diversification or standardization of crop production. A star-type wireless sensor network is utilized to communicate with each sensing node, actuator node, and control node in the various zones of the greenhouse. A fuzzy rule-based inference system regulates temperature and humidity based on real-time plant growth response data provided by a growth stage selector. The growth stage selector defines the control ranges of temperature and humidity for the various culture zones according to the leaf area of the plant, the number of leaves, and the cumulative amount of light. The experimental results show that the proposed scheme is stable and robust and provides a basis for future greenhouse applications. PMID:26569264

  10. Optimal heavy tail estimation - Part 1: Order selection

    NASA Astrophysics Data System (ADS)

    Mudelsee, Manfred; Bermejo, Miguel A.

    2017-12-01

    The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
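
    As a rough illustration of the order-selection problem, the following R sketch pairs the classical Hill estimator with a brute-force scan over candidate orders k. The plateau-stability criterion used here is a simplified stand-in for the paper's simulation-based selector; all names and settings are illustrative assumptions, not taken from the paper.

      # Hill estimator of the tail exponent alpha at order k
      hill <- function(x, k) {
        xs <- sort(x, decreasing = TRUE)
        1 / mean(log(xs[1:k] / xs[k + 1]))
      }
      # Brute-force scan: pick the k at the centre of the flattest stretch
      # of the Hill plot (simplified stand-in for the paper's criterion)
      select_k <- function(x, k_grid = seq(20, floor(length(x) / 4), 5), w = 5) {
        a <- sapply(k_grid, function(k) hill(x, k))
        roll_sd <- sapply(seq_len(length(a) - w + 1),
                          function(i) sd(a[i:(i + w - 1)]))
        k_grid[which.min(roll_sd) + floor(w / 2)]
      }
      set.seed(1)
      x <- abs(rcauchy(2000))                 # heavy tail with alpha = 1
      k <- select_k(x); c(k = k, alpha_hat = hill(x, k))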

  11. Development of a temperature gradient focusing method for in situ extraterrestrial biomarker analysis.

    PubMed

    Danger, Grégoire; Ross, David

    2008-08-01

    Scanning temperature gradient focusing (TGF) is a recently described technique for the simultaneous concentration and separation of charged analytes. It allows for high analyte peak capacities and low LODs in microcolumn electrophoretic separations. In this paper, we present the application of scanning TGF for chiral separations of amino acids. Using a mixture of seven carboxyfluorescein succinimidyl ester-labeled amino acids (including five chiral amino acids) which constitute the Mars7 standard, we show that scanning TGF is a very simple and efficient method for chiral separations. The modulation of TGF separation parameters (temperature window, pressure scan rate, temperature range, and chiral selector concentration) allows optimization of peak efficiencies and analyte resolutions. The use of hydroxypropyl-beta-CD at low concentration (1-5 mmol/L) as a chiral selector, with an appropriate pressure scan rate (-0.25 Pa/s) and with a low temperature range (3-25 degrees C over 1 cm) provided high resolution between enantiomers (Rs >1.5 for each pair of enantiomers) using a short, 4 cm long capillary. With these new results, the scanning TGF method appears to be a viable method for in situ trace biomarker analysis for future missions to Mars or other solar system bodies.

  12. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of methods with application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods in requiring the sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
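
    The common core of these approaches, turning a fitted logistic model into an adjusted likelihood ratio for a test combination, can be sketched in R on toy data. The variable names and effect sizes below are invented for illustration; this mirrors the general logic rather than any one of the three published methods.

      set.seed(2); n <- 500                         # toy data, names illustrative
      t1 <- rbinom(n, 1, 0.4)                       # test 1 result
      t2 <- rbinom(n, 1, plogis(-0.5 + 1.2 * t1))   # test 2, correlated with test 1
      disease <- rbinom(n, 1, plogis(-1 + 1.0 * t1 + 0.8 * t2))
      fit <- glm(disease ~ t1 + t2, family = binomial)
      pretest_odds <- mean(disease) / (1 - mean(disease))
      post <- predict(fit, newdata = data.frame(t1 = 1, t2 = 1), type = "response")
      (post / (1 - post)) / pretest_odds            # adjusted LR for t1+, t2+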

  13. Reconceptualizing Social Influence in Counseling: The Elaboration Likelihood Model.

    ERIC Educational Resources Information Center

    McNeill, Brian W.; Stoltenberg, Cal D.

    1989-01-01

    Presents Elaboration Likelihood Model (ELM) of persuasion (a reconceptualization of the social influence process) as alternative model of attitude change. Contends ELM unifies conflicting social psychology results and can potentially account for inconsistent research findings in counseling psychology. Provides guidelines on integrating…

  14. Likelihood analysis of spatial capture-recapture models for stratified or class structured populations

    USGS Publications Warehouse

    Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.

    2015-01-01

    We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
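
    The central device, marginalizing an individual's likelihood contribution over its unobserved class, can be reduced to a few lines of R. The sketch below substitutes a plain Bernoulli encounter model for the full SCR likelihood; all function names and values are illustrative assumptions.

      # Likelihood contribution of one individual when class (sex) is missing:
      # a mixture over classes weighted by the population sex ratio psi
      lik_indiv <- function(y, theta_m, theta_f, psi, lik_class) {
        psi * lik_class(y, theta_m) + (1 - psi) * lik_class(y, theta_f)
      }
      # Bernoulli encounter histories stand in for the full SCR likelihood
      lik_class <- function(y, p) prod(dbinom(y, 1, p))
      lik_indiv(c(1, 0, 1), theta_m = 0.30, theta_f = 0.15, psi = 0.5, lik_class)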

  15. Gaussian Mixture Models of Between-Source Variation for Likelihood Ratio Computation from Multivariate Data

    PubMed Central

    Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As shown here, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
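
    The structure of such a score-based likelihood ratio can be sketched numerically in R. Below, a two-component Gaussian mixture with hard-coded parameters stands in for the EM-fitted between-source GMM, and the within-source distribution is normal with constant spread, as in the kernel LR model; every value is illustrative.

      # Score-based LR: same-source vs different-source densities of a pair
      # (y1, y2); between-source density is a two-component Gaussian mixture
      dgmm <- function(m, w, mu, s) w[1] * dnorm(m, mu[1], s[1]) +
                                    w[2] * dnorm(m, mu[2], s[2])
      sw <- 0.5                      # within-source sd, constant across sources
      w <- c(0.6, 0.4); mu <- c(0, 3); sb <- c(1, 1.5)  # stand-in GMM parameters
      lr <- function(y1, y2) {
        num <- integrate(function(m) dnorm(y1, m, sw) * dnorm(y2, m, sw) *
                           dgmm(m, w, mu, sb), -Inf, Inf)$value
        marg <- function(y) integrate(function(m) dnorm(y, m, sw) *
                                        dgmm(m, w, mu, sb), -Inf, Inf)$value
        num / (marg(y1) * marg(y2))
      }
      lr(1.0, 1.2)                   # close pair, so an LR above 1 is expected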

  16. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  17. Computation of nonlinear least squares estimator and maximum likelihood using principles in matrix calculus

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.

    2017-11-01

    This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo-model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. The present paper, however, introduces an innovative method to compute the NLSE using principles of multivariate calculus, and is concerned with new optimization techniques for computing the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo-model for a nonlinear regression model. In this article a new technique is developed to obtain the linear pseudo-model for the nonlinear regression model using multivariate calculus, and the linear pseudo-model of Edmond Malinvaud [4] is explained in a very different way. David Pollard et al. used empirical process techniques in 2006 to study the asymptotics of the least-squares estimator (LSE) for the fitting of nonlinear regression functions, and In Jae Myung [13] provided a conceptual introduction to maximum likelihood estimation in his "Tutorial on maximum likelihood estimation".
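
    For readers who want the computational core, a Gauss-Newton iteration, the standard matrix-calculus route to the NLSE, looks as follows in R. The model, data and starting values are illustrative; practical code would add step damping and a convergence test.

      # Gauss-Newton iteration for nonlinear least squares (illustrative model)
      gauss_newton <- function(f, J, y, x, beta, iters = 50) {
        for (i in 1:iters) {
          r  <- y - f(x, beta)                    # residuals
          Jm <- J(x, beta)                        # Jacobian of f wrt beta
          beta <- beta + solve(crossprod(Jm), crossprod(Jm, r))
        }
        beta
      }
      f <- function(x, b) b[1] * exp(b[2] * x)
      J <- function(x, b) cbind(exp(b[2] * x), b[1] * x * exp(b[2] * x))
      set.seed(3); x <- seq(0, 1, length.out = 50)
      y <- f(x, c(2, 1.5)) + rnorm(50, sd = 0.05)
      gauss_newton(f, J, y, x, beta = c(1, 1))    # converges near (2, 1.5)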

  18. Developing a non-point source P loss indicator in R and its parameter uncertainty assessment using GLUE: a case study in northern China.

    PubMed

    Su, Jingjun; Du, Xinzhong; Li, Xuyong

    2018-05-16

    Uncertainty analysis is an important prerequisite for model application. However, existing phosphorus (P) loss indexes or indicators have rarely been evaluated. This study applied the generalized likelihood uncertainty estimation (GLUE) method to assess the uncertainty of the parameters and modeling outputs of a non-point source (NPS) P indicator constructed in the R language, and also examined how the subjective choices of likelihood formulation and acceptability threshold in GLUE influence model outputs. The results indicated the following. (1) The parameters RegR2, RegSDR2, PlossDPfer, PlossDPman, DPDR, and DPR were highly sensitive to the overall TP simulation, and their value ranges could be reduced by GLUE. (2) The Nash efficiency likelihood (L1) appeared better at accentuating high-likelihood simulations than the exponential function (L2). (3) The combined likelihood, integrating the criteria of multiple outputs, performed better than a single likelihood in model uncertainty assessment, reducing the uncertainty band widths while assuring the goodness of fit of the whole set of model outputs. (4) A value of 0.55 appeared to be a modest choice of threshold to balance high modeling efficiency against high bracketing efficiency. The results of this study provide (1) an option for conducting NPS modeling on a single computing platform, (2) references for parameter setting in NPS model development in similar regions, (3) suggestions for applying the GLUE method in studies with different emphases according to research interests, and (4) insights into watershed P management in similar regions.
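
    The GLUE recipe itself is compact: sample parameter sets from priors, score each with a likelihood measure, discard those below an acceptability threshold, and weight the survivors. A minimal R sketch with a toy model and the 0.55 threshold mentioned above (the model and priors are illustrative, not the paper's P indicator):

      # GLUE: sample parameters, score with a Nash-Sutcliffe likelihood,
      # keep "behavioural" sets above the 0.55 threshold, weight the rest
      nse <- function(obs, sim) 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
      model <- function(p, x) p[1] * x / (p[2] + x)        # toy stand-in model
      set.seed(4)
      x <- runif(40, 0, 10); obs <- model(c(5, 2), x) + rnorm(40, sd = 0.2)
      pars <- cbind(runif(5000, 0, 10), runif(5000, 0, 5)) # uniform priors
      L <- apply(pars, 1, function(p) nse(obs, model(p, x)))
      keep <- L > 0.55
      w <- L[keep] / sum(L[keep])                          # likelihood weights
      apply(pars[keep, , drop = FALSE], 2, function(p) sum(p * w))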

  19. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
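
    The bundling idea can be caricatured in a few lines of R: the likelihood attached to a bundle is the fraction of its member parametrizations whose forward-model output replicates the measurement within a tolerance. Everything below is a toy stand-in for the paper's 3-D flow and transport setting.

      # Bundle likelihood = share of member parametrizations whose forward
      # model output matches the observation within a tolerance (all toy)
      fm <- function(theta) theta^2                   # stand-in forward model
      z_obs <- 4; tol <- 0.5
      set.seed(5)
      theta <- runif(10000, 0, 5)
      bundle <- cut(theta, breaks = seq(0, 5, 0.5))   # group similar thetas
      tapply(abs(fm(theta) - z_obs) < tol, bundle, mean)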

  20. A Bayesian Alternative for Multi-objective Ecohydrological Model Specification

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.

    2015-12-01

    Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes in catchments, and are usually more complex and more heavily parameterized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling, driven by the development of Markov chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize model uncertainty and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations based on a single-objective likelihood (streamflow or LAI) and on multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and the corresponding posterior distributions to examine parameter sensitivity. Results show that different prior distributions can strongly influence the posterior distributions of parameters, especially when the available data are limited or the parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits between cases based on multi-objective likelihoods and those based on single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to the data types involved.
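
    A minimal R sketch of a weighted multi-objective Gaussian log-likelihood for the two outputs, with the weight w sweeping between streamflow-only and LAI-only calibration; the error standard deviations and data are illustrative assumptions.

      # Weighted combination of two Gaussian error models, one per output
      loglik_multi <- function(res_q, res_lai, sd_q, sd_lai, w = 0.5) {
        w * sum(dnorm(res_q, 0, sd_q, log = TRUE)) +
          (1 - w) * sum(dnorm(res_lai, 0, sd_lai, log = TRUE))
      }
      # e.g. streamflow-only (w = 1), LAI-only (w = 0), or an equal compromise
      set.seed(6)
      res_q <- rnorm(365, 0, 1.2); res_lai <- rnorm(46, 0, 0.3)
      sapply(c(1, 0.5, 0), function(w) loglik_multi(res_q, res_lai, 1.2, 0.3, w))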

  1. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
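
    For one predictor layer, the empirical likelihood ratio model reduces to a ratio of class-conditional bin frequencies mapped back onto map cells, with layers multiplied under conditional independence. An illustrative R sketch with simulated slope data (the binning and distributions are arbitrary assumptions):

      # Likelihood ratio for one layer: frequency of each bin among landslide
      # cells divided by its frequency over the whole map
      lr_layer <- function(x_all, x_slide, breaks) {
        f_slide <- table(cut(x_slide, breaks)) / length(x_slide)
        f_all   <- table(cut(x_all, breaks)) / length(x_all)
        ratio <- as.numeric(f_slide) / as.numeric(f_all)
        ratio[as.integer(cut(x_all, breaks))]   # ratio mapped back to cells
      }
      set.seed(7)
      slope_all   <- runif(10000, 0, 40)        # all map cells
      slope_slide <- rbeta(300, 4, 2) * 40      # cells with mapped landslides
      lr_slope <- lr_layer(slope_all, slope_slide, seq(0, 40, 5))
      # under conditional independence, multiply the layers' ratios:
      # hazard <- lr_slope * lr_aspect * lr_lithology ...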

  2. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    PubMed

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
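
    As a reminder of the EM mechanics invoked here, the sketch below runs EM on a two-component normal mixture. This is not the Copas-like selection model, only a familiar latent-variable problem exhibiting the same E-step/M-step alternation; all settings are illustrative.

      # EM for a two-component normal mixture (EM mechanics only)
      em_mix <- function(y, iters = 200) {
        p <- 0.5; mu <- as.numeric(quantile(y, c(0.25, 0.75))); s <- rep(sd(y), 2)
        for (i in 1:iters) {
          r  <- p * dnorm(y, mu[1], s[1]) /
                (p * dnorm(y, mu[1], s[1]) + (1 - p) * dnorm(y, mu[2], s[2]))
          p  <- mean(r)                         # M-step updates follow
          mu <- c(weighted.mean(y, r), weighted.mean(y, 1 - r))
          s  <- c(sqrt(weighted.mean((y - mu[1])^2, r)),
                  sqrt(weighted.mean((y - mu[2])^2, 1 - r)))
        }
        list(p = p, mu = mu, sigma = s)
      }
      set.seed(8); em_mix(c(rnorm(200, 0, 1), rnorm(100, 4, 1)))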

  3. Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)

    PubMed Central

    Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.

    2015-01-01

    Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly-impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participants’ personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual’s data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
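
    The distributed-likelihood loop is easy to mimic in R: each "device" exposes only a function returning the log-likelihood of its private data, and the central optimizer aggregates those values. The model and all names below are illustrative, not the MIDDLE implementation.

      # Each "device" closure returns only a log-likelihood; data stays local
      device_loglik <- function(private_y)
        function(theta) sum(dnorm(private_y, theta[1], exp(theta[2]), log = TRUE))
      set.seed(9)
      devices <- lapply(1:20, function(i) device_loglik(rnorm(30, 5, 2)))
      total <- function(theta) -sum(vapply(devices, function(f) f(theta), 0))
      optim(c(0, 0), total)$par   # approx c(5, log(2)); only values are shared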

  4. Anatomy of Ag/Hafnia-Based Selectors with 10^10 Nonlinearity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Midya, Rivu; Wang, Zhongrui; Zhang, Jiaming

    We developed a novel Ag/oxide-based threshold switching device with attractive features including ≈10^10 nonlinearity. High-resolution transmission electron microscopic analysis of the nanoscale crosspoint device suggests that elongation of an Ag nanoparticle under voltage bias, followed by spontaneous reformation of a more spherical shape after power-off, is responsible for the observed threshold switching.

  5. B-1 Systems Approach to Training. Task Analysis Listings

    DTIC Science & Technology

    1975-07-01

    OCR-degraded excerpt from the task analysis listings; only fragments are recoverable: fuel valves and pumps set to OFF, TFR mode and selector switch settings (left and right, per the TF1 checklist), door handle and entry ladder control switch set to DN, and tank fill valve switches set to ON (task 16.1.1.001).

  6. Handbook of International Alloy Compositions and Designations. Volume 1. Titanium

    DTIC Science & Technology

    1976-11-01

    OCR-degraded excerpt; recoverable content includes the designation MMA-9744 for the Martin Marietta Aluminum Company brand of Ti-6Al-2Sn-4Zr-2Mo alloy, a note that other producers use meaningful symbols for designations, and cited sources such as Martin Marietta Aluminum Operating Standards, Torrance, California (1976); Materials Selector, Materials Engineering, 80(4), September 1974; Titanium Metals Corporation of America, Timet Div. (TMCA), Pittsburgh, Pennsylvania; and RMI Company, Titanium Division.

  7. Development of a Simulink Library for the Design, Testing and Simulation of Software Defined GPS Radios. With Application to the Development of Parallel Correlator Structures

    DTIC Science & Technology

    2014-05-01

    Excerpts of the MATLAB/Simulink embedded code described in the report:

      function Value = Select_Element(Index,Signal) %#eml
      Value = Signal(Index);
    (Code Listing 1: code for the Selector block)

      function shiftedSignal = fcn(signal,Shift) %#eml
      shiftedSignal = circshift(signal,Shift);
    (Code Listing 2: code for the CircShift block)

  8. Anatomy of Ag/Hafnia-Based Selectors with 10^10 Nonlinearity

    DOE PAGES

    Midya, Rivu; Wang, Zhongrui; Zhang, Jiaming; ...

    2017-01-30

    We developed a novel Ag/oxide-based threshold switching device with attractive features including ≈10^10 nonlinearity. High-resolution transmission electron microscopic analysis of the nanoscale crosspoint device suggests that elongation of an Ag nanoparticle under voltage bias, followed by spontaneous reformation of a more spherical shape after power-off, is responsible for the observed threshold switching.

  9. Balloon Borne Ultraviolet Spectrometer.

    DTIC Science & Technology

    1978-12-28

    OCR-degraded excerpt from the report documentation page and figure list. Recoverable keywords: ultraviolet, ground support equipment (GSE), spectrometers, flight electronics, instrumentation, balloons, solar. Recoverable figure titles include: Solar Balloon Experiment Assembly; Mechanical Interface, UV Spectrometer; Spectrometer Body Assembly; Schematic Diagram, GSE Monitor Selector and Battery Charger; Schematic Diagram, GSE Serial to Parallel Data Converter.

  10. Replacing Voice Input with Technology that Provided Immediate Visual and Audio Feedback to Reduce Employee Errors

    ERIC Educational Resources Information Center

    Goomas, David T.

    2010-01-01

    In this report from the field at two auto parts distribution centers, order selectors picked auto accessories (e.g., fuses, oil caps, tool kits) into industrial plastic totes as part of store orders. Accurately identifying all store order totes via the license plate number was a prerequisite for the warehouse management system (WMS) to track each…

  11. Evaluation of RPE-Select: A Web-Based Respiratory Protective Equipment Selector Tool.

    PubMed

    Vaughan, Nick; Rajan-Sithamparanadarajah, Bob; Atkinson, Robert

    2016-08-01

    This article describes the evaluation of an open-access web-based respiratory protective equipment selector tool (RPE-Select, accessible at http://www.healthyworkinglives.com/rpe-selector). This tool is based on the principles of the COSHH-Essentials (C-E) control banding (CB) tool, which was developed for the exposure risk management of hazardous chemicals in the workplace by small and medium sized enterprises (SMEs) and general practice H&S professionals. RPE-Select can be used for identifying adequate and suitable RPE for dusts, fibres, mist (solvent, water, and oil based), sprays, volatile solids, fumes, gases, vapours, and actual or potential oxygen deficiency. It can be applied for substances and products with safety data sheets as well as for a large number of commonly encountered process-generated substances (PGS), such as poultry house dusts or welding fume. Potential international usability has been built-in by using the Hazard Statements developed for the Globally Harmonised System (GHS) and providing recommended RPE in picture form as well as with a written specification. Illustration helps to compensate for the variabilities in assigned protection factors across the world. RPE-Select uses easily understandable descriptions/explanations and an interactive stepwise flow for providing input/answers at each step. The output of the selection process is a report summarising the user input data and a selection of RPE, including types of filters where applicable, from which the user can select the appropriate one for each wearer. In addition, each report includes 'Dos' and 'Don'ts' for the recommended RPE. RPE-Select outcomes, based on up to 20 hypothetical use scenarios, were evaluated in comparison with other available RPE selection processes and tools, and by 32 independent users with a broad range of familiarities with industrial use scenarios in general and respiratory protection in particular. For scenarios involving substances having safety data sheets, 87% of RPE-Select outcomes resulted in a 'safe' RPE selection, while 98% 'safe' outcomes were achieved for scenarios involving process-generated substances. Reasons for the outliers were examined. User comments and opinions on the mechanics and usability of RPE-Select are also presented. © Crown copyright 2016.

  12. Construction of diagnosis system and gene regulatory networks based on microarray analysis.

    PubMed

    Hong, Chun-Fu; Chen, Ying-Chen; Chen, Wei-Chun; Tu, Keng-Chang; Tsai, Meng-Hsiun; Chan, Yung-Kuan; Yu, Shyr Shen

    2018-05-01

    A microarray analysis generally contains expression data for thousands of genes, but most of them are irrelevant to the disease of interest, making analysis of the genes concerning specific diseases complicated. Therefore, filtering out a few essential genes as well as their regulatory networks is critical, so that a disease can be diagnosed simply from the expression profiles of a few critical genes. In this study, a target gene screening (TGS) system, a microarray-based information system that integrates F-statistics, pattern recognition matching, a two-layer K-means classifier, a Parameter Detection Genetic Algorithm (PDGA), a genetic-based gene selector (GBG selector) and the association rule, was developed to screen out a small subset of genes that can discriminate malignant stages of cancers. During the first stage, F-statistics, pattern recognition matching, and the two-layer K-means classifier were applied to filter out the 20 critical genes most relevant to ovarian cancer from 9600 genes, and the PDGA was used to decide the fittest parameter values for these critical genes. Among the 20 critical genes, 15 are associated with cancer progression. In the second stage, we further employed the GBG selector and the association rule to screen out seven target gene sets, each with only four to six genes, each of which can precisely identify the malignancy stage of ovarian cancer based on their expression profiles. We further deduced the gene regulatory networks of the 20 critical genes by applying the Pearson correlation coefficient to evaluate the correlation between the expression levels of each gene pair at the same stage and at different stages. Correlations between gene pairs were calculated, and three regulatory networks were deduced; they were further confirmed by Ingenuity pathway analysis. The prognostic significance of the genes identified via the regulatory networks was examined using online tools, and most represented biomarker candidates. In summary, our proposed system provides a new strategy to identify critical genes or biomarkers, as well as their regulatory networks, from microarray data. Copyright © 2018. Published by Elsevier Inc.
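
    The first-stage F-statistic filter is straightforward to reproduce in R on simulated data; the sketch below ranks genes by one-way ANOVA F values and keeps the top 20. The dimensions echo the abstract, the labels are invented, and the full TGS pipeline is of course much richer.

      set.seed(10)
      expr  <- matrix(rnorm(9600 * 30), nrow = 9600)   # genes x samples
      stage <- factor(rep(c("early", "late"), each = 15))
      # per-gene one-way ANOVA F statistic (slow but transparent at 9600 genes)
      Fstat <- apply(expr, 1, function(g) anova(lm(g ~ stage))[1, "F value"])
      top20 <- order(Fstat, decreasing = TRUE)[1:20]   # candidate critical genes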

  13. Regulatory Mechanisms Controlling Maturation of Serotonin Neuron Identity and Function

    PubMed Central

    Spencer, William C.; Deneris, Evan S.

    2017-01-01

    The brain serotonin (5-hydroxytryptamine; 5-HT) system has been extensively studied for its role in normal physiology and behavior, as well as in neuropsychiatric disorders. The broad influence of 5-HT on brain function is in part due to the vast connectivity pattern of 5-HT-producing neurons throughout the CNS. 5-HT neurons are born and terminally specified midway through embryogenesis, then enter a protracted period of maturation, during which they functionally integrate into CNS circuitry and are then maintained throughout life. The transcriptional regulatory networks controlling progenitor cell generation and terminal specification of 5-HT neurons are relatively well understood, yet the factors controlling 5-HT neuron maturation are only recently coming to light. In this review, we first provide an update on the regulatory network controlling 5-HT neuron development, then delve deeper into the properties and regulatory strategies governing 5-HT neuron maturation. In particular, we discuss the role of the 5-HT neuron terminal selector transcription factor (TF) Pet-1 as a key regulator of 5-HT neuron maturation. Pet-1 was originally shown to positively regulate genes needed for 5-HT synthesis, reuptake and vesicular transport, hence 5-HT neuron-type transmitter identity. It has now been shown to regulate, both positively and negatively, many other categories of genes in 5-HT neurons including ion channels, GPCRs, transporters, neuropeptides, and other transcription factors. Its function as a terminal selector results in the maturation of 5-HT neuron excitability, firing characteristics, and synaptic modulation by several neurotransmitters. Furthermore, there is a temporal requirement for Pet-1 in the control of postmitotic gene expression trajectories, indicating a direct role in 5-HT neuron maturation. Proper regulation of the maturation of cellular identity is critical for normal neuronal functioning, and perturbations in the gene regulatory networks controlling these processes may result in long-lasting changes in brain function in adulthood. Further study of 5-HT neuron gene regulatory networks is likely to provide additional insight into how neurons acquire their mature identities and how terminal selector-type TFs function in postmitotic vertebrate neurons. PMID:28769770

  14. Regulatory Mechanisms Controlling Maturation of Serotonin Neuron Identity and Function.

    PubMed

    Spencer, William C; Deneris, Evan S

    2017-01-01

    The brain serotonin (5-hydroxytryptamine; 5-HT) system has been extensively studied for its role in normal physiology and behavior, as well as in neuropsychiatric disorders. The broad influence of 5-HT on brain function is in part due to the vast connectivity pattern of 5-HT-producing neurons throughout the CNS. 5-HT neurons are born and terminally specified midway through embryogenesis, then enter a protracted period of maturation, during which they functionally integrate into CNS circuitry and are then maintained throughout life. The transcriptional regulatory networks controlling progenitor cell generation and terminal specification of 5-HT neurons are relatively well understood, yet the factors controlling 5-HT neuron maturation are only recently coming to light. In this review, we first provide an update on the regulatory network controlling 5-HT neuron development, then delve deeper into the properties and regulatory strategies governing 5-HT neuron maturation. In particular, we discuss the role of the 5-HT neuron terminal selector transcription factor (TF) Pet-1 as a key regulator of 5-HT neuron maturation. Pet-1 was originally shown to positively regulate genes needed for 5-HT synthesis, reuptake and vesicular transport, hence 5-HT neuron-type transmitter identity. It has now been shown to regulate, both positively and negatively, many other categories of genes in 5-HT neurons including ion channels, GPCRs, transporters, neuropeptides, and other transcription factors. Its function as a terminal selector results in the maturation of 5-HT neuron excitability, firing characteristics, and synaptic modulation by several neurotransmitters. Furthermore, there is a temporal requirement for Pet-1 in the control of postmitotic gene expression trajectories, indicating a direct role in 5-HT neuron maturation. Proper regulation of the maturation of cellular identity is critical for normal neuronal functioning, and perturbations in the gene regulatory networks controlling these processes may result in long-lasting changes in brain function in adulthood. Further study of 5-HT neuron gene regulatory networks is likely to provide additional insight into how neurons acquire their mature identities and how terminal selector-type TFs function in postmitotic vertebrate neurons.

  15. Evaluation of RPE-Select: A Web-Based Respiratory Protective Equipment Selector Tool

    PubMed Central

    Vaughan, Nick; Rajan-Sithamparanadarajah, Bob; Atkinson, Robert

    2016-01-01

    This article describes the evaluation of an open-access web-based respiratory protective equipment selector tool (RPE-Select, accessible at http://www.healthyworkinglives.com/rpe-selector). This tool is based on the principles of the COSHH-Essentials (C-E) control banding (CB) tool, which was developed for the exposure risk management of hazardous chemicals in the workplace by small and medium sized enterprises (SMEs) and general practice H&S professionals. RPE-Select can be used for identifying adequate and suitable RPE for dusts, fibres, mist (solvent, water, and oil based), sprays, volatile solids, fumes, gases, vapours, and actual or potential oxygen deficiency. It can be applied for substances and products with safety data sheets as well as for a large number of commonly encountered process-generated substances (PGS), such as poultry house dusts or welding fume. Potential international usability has been built-in by using the Hazard Statements developed for the Globally Harmonised System (GHS) and providing recommended RPE in picture form as well as with a written specification. Illustration helps to compensate for the variabilities in assigned protection factors across the world. RPE-Select uses easily understandable descriptions/explanations and an interactive stepwise flow for providing input/answers at each step. The output of the selection process is a report summarising the user input data and a selection of RPE, including types of filters where applicable, from which the user can select the appropriate one for each wearer. In addition, each report includes ‘Dos’ and ‘Don’ts’ for the recommended RPE. RPE-Select outcomes, based on up to 20 hypothetical use scenarios, were evaluated in comparison with other available RPE selection processes and tools, and by 32 independent users with a broad range of familiarities with industrial use scenarios in general and respiratory protection in particular. For scenarios involving substances having safety data sheets, 87% of RPE-Select outcomes resulted in a ‘safe’ RPE selection, while 98% ‘safe’ outcomes were achieved for scenarios involving process-generated substances. Reasons for the outliers were examined. User comments and opinions on the mechanics and usability of RPE-Select are also presented. PMID:27286763

  16. The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.

    ERIC Educational Resources Information Center

    Baldwin, Beatrice

    The robustness of maximum likelihood estimates from the LISREL computer program under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…

  17. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    ERIC Educational Resources Information Center

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…
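
    A continuous caricature of the pairwise composite likelihood, summing bivariate normal log-likelihoods over all pairs of occasions instead of evaluating one T-dimensional likelihood, can be written in R as follows. An AR(1)-style latent correlation stands in for the ordinal latent-variable model, and the sketch assumes the mvtnorm package is available.

      # Pairwise composite log-likelihood over all occasion pairs
      pairwise_loglik <- function(Y, rho) {
        Tn <- ncol(Y); ll <- 0
        for (s in 1:(Tn - 1)) for (t in (s + 1):Tn) {
          Sig <- matrix(c(1, rho^(t - s), rho^(t - s), 1), 2)
          ll  <- ll + sum(mvtnorm::dmvnorm(Y[, c(s, t)], sigma = Sig, log = TRUE))
        }
        ll
      }
      set.seed(11)
      e <- matrix(rnorm(200 * 4), 200, 4)        # 200 subjects, 4 occasions
      Y <- e
      for (t in 2:4) Y[, t] <- 0.6 * Y[, t - 1] + sqrt(1 - 0.36) * e[, t]
      optimize(function(r) -pairwise_loglik(Y, r), c(0.01, 0.99))$minimum  # ~0.6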

  18. Robust analysis of semiparametric renewal process models

    PubMed Central

    Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.

    2013-01-01

    Summary: A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence as in standard intensity models. We consider partial likelihood-based inferences under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identical data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood, with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent, and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown dependence structure of the gap times. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568
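
    The block-bootstrap variance estimator adapted here can be sketched in R for a generic statistic computed from one long dependent sequence. Non-overlapping blocks are used for simplicity, and the AR(1)-based "gap times" are purely illustrative.

      # Non-overlapping block bootstrap for the variance of a statistic
      block_boot <- function(x, stat, block_len = 25, B = 500) {
        nb <- floor(length(x) / block_len)
        blocks <- matrix(x[1:(nb * block_len)], nrow = block_len)
        replicate(B, stat(as.vector(blocks[, sample(nb, replace = TRUE)])))
      }
      set.seed(12)
      gaps <- abs(arima.sim(list(ar = 0.6), 2000)) + 0.1  # toy dependent gaps
      var(block_boot(gaps, mean))   # bootstrap variance of the mean gap time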

  19. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
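
    The second, two-stage maximum likelihood procedure can be illustrated in R with a hand-coded Clayton copula: stage 1 fits the margins, stage 2 maximizes the copula likelihood with the margins held fixed. Exponential margins and all parameter values below are illustrative assumptions (and censoring, central to failure time data, is ignored).

      # Clayton copula density and a conditional-inverse sampler
      dclayton <- function(u, v, th)
        (1 + th) * (u * v)^(-th - 1) * (u^(-th) + v^(-th) - 1)^(-2 - 1/th)
      set.seed(13); n <- 400; th0 <- 2
      u1 <- runif(n); w <- runif(n)
      u2 <- (u1^(-th0) * (w^(-th0 / (1 + th0)) - 1) + 1)^(-1/th0)
      t1 <- qexp(u1, rate = 0.5); t2 <- qexp(u2, rate = 0.8)
      # Stage 1: marginal MLEs (exponential rates)
      r1 <- 1 / mean(t1); r2 <- 1 / mean(t2)
      u <- pexp(t1, r1); v <- pexp(t2, r2)
      # Stage 2: maximize the copula likelihood with margins fixed
      nll <- function(lth) -sum(log(dclayton(u, v, exp(lth))))
      exp(optimize(nll, c(-4, 3))$minimum)   # close to th0 = 2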

  20. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  1. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  2. Predicting the likelihood of altered streamflows at ungauged rivers across the conterminous United States

    USGS Publications Warehouse

    Eng, Kenny; Carlisle, Daren M.; Wolock, David M.; Falcone, James A.

    2013-01-01

    An approach is presented in this study to aid water-resource managers in characterizing streamflow alteration at ungauged rivers. Such approaches can be used to take advantage of the substantial amounts of biological data collected at ungauged rivers to evaluate the potential ecological consequences of altered streamflows. National-scale random forest statistical models are developed to predict the likelihood that ungauged rivers have altered streamflows (relative to expected natural condition) for five hydrologic metrics (HMs) representing different aspects of the streamflow regime. The models use human disturbance variables, such as number of dams and road density, to predict the likelihood of streamflow alteration. For each HM, separate models are derived to predict the likelihood that the observed metric is greater than (‘inflated’) or less than (‘diminished’) natural conditions. The utility of these models is demonstrated by applying them to all river segments in the South Platte River in Colorado, USA, and for all 10-digit hydrologic units in the conterminous United States. In general, the models successfully predicted the likelihood of alteration to the five HMs at the national scale as well as in the South Platte River basin. However, the models predicting the likelihood of diminished HMs consistently outperformed models predicting inflated HMs, possibly because of fewer sites across the conterminous United States where HMs are inflated. The results of these analyses suggest that the primary predictors of altered streamflow regimes across the Nation are (i) the residence time of annual runoff held in storage in reservoirs, (ii) the degree of urbanization measured by road density and (iii) the extent of agricultural land cover in the river basin.
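
    A hedged sketch of the modeling step described above: a random forest classifier mapping human-disturbance predictors to the probability that a hydrologic metric is altered. Feature definitions, coefficients, and data are all invented here; the actual models were trained on national gauge records.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(1)

      # Hypothetical disturbance predictors for 1000 gauged basins:
      # reservoir storage residence time (yr), road density (km/km^2),
      # and fraction of agricultural land cover.
      X = np.column_stack([rng.exponential(0.5, 1000),
                           rng.exponential(1.0, 1000),
                           rng.uniform(0, 1, 1000)])
      # Synthetic label: 1 if the hydrologic metric is diminished
      # relative to expected natural conditions.
      p = 1 / (1 + np.exp(-(2 * X[:, 0] + 0.8 * X[:, 1] + X[:, 2] - 2)))
      y = rng.binomial(1, p)

      rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                  random_state=0).fit(X, y)
      print("out-of-bag accuracy:", round(rf.oob_score_, 3))
      # Predicted likelihood of a diminished metric at two ungauged basins.
      print(rf.predict_proba([[1.2, 0.3, 0.6], [0.05, 0.1, 0.1]])[:, 1])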

  3. A Penalized Likelihood Framework For High-Dimensional Phylogenetic Comparative Methods And An Application To New-World Monkeys Brain Evolution.

    PubMed

    Clavel, Julien; Aristide, Leandro; Morlon, Hélène

    2018-06-19

    Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performance as the number of traits p approaches the number of species n, and because some computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large p small n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the newly proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
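
    The core idea, penalizing the trait covariance estimate so it remains well behaved when p approaches or exceeds n, can be sketched without any phylogenetic structure. The snippet below uses simple linear shrinkage toward the diagonal and selects the penalty intensity by leave-one-out cross-validated Gaussian likelihood; the paper's actual penalties, the phylogenetic covariance, and its GIC-based selection are richer than this toy version.

      import numpy as np
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(11)
      n, p = 30, 40                          # fewer "species" than "traits"
      true_cov = 0.7 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
      X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

      def shrunk_cov(S, lam):
          """Linear shrinkage toward the diagonal; keeps the estimate
          positive definite even when p > n."""
          return (1 - lam) * S + lam * np.diag(np.diag(S))

      def loocv_loglik(lam):
          ll = 0.0
          for i in range(n):                 # leave-one-out penalized likelihood
              Xi = np.delete(X, i, axis=0)
              S = np.cov(Xi.T, bias=True)
              ll += multivariate_normal.logpdf(X[i], Xi.mean(0),
                                               shrunk_cov(S, lam))
          return ll

      lams = np.linspace(0.05, 0.95, 10)
      best = lams[np.argmax([loocv_loglik(l) for l in lams])]
      print("selected shrinkage intensity:", round(best, 2))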

  4. Substituent effects on the enantioselective retention of anti-HIV 5-aryl-Δ2-1,2,4-oxadiazolines on R,R-DACH-DNB chiral stationary phase.

    PubMed

    Altomare, C; Cellamare, S; Carotti, A; Barreca, M L; Chimirri, A; Monforte, A M; Gasparrini, F; Villani, C; Cirilli, M; Mazza, F

    1996-01-01

    A series of racemic 3-phenyl-4-(1-adamantyl)-5-X-phenyl-Δ2-1,2,4-oxadiazolines (PAdOx) were directly resolved by HPLC using a Pirkle-type stationary phase containing N,N'-(3,5-dinitrobenzoyl)-1(R),2(R)-diaminocyclohexane as chiral selector. The more retained enantiomers have S configuration, as demonstrated by X-ray crystallography and circular dichroism measurements. The influence of aromatic ring substituents on enantioselective retention was quantitatively assessed by traditional linear free energy-related (LFER) equations and comparative molecular field analysis (CoMFA). In good agreement with previous findings, the results from this study indicate that the increase in retention (k') is favoured mainly by the π-basicity and the hydrophilicity of the solute, whereas enantioselectivity (α) can be satisfactorily modeled by electronic and bulk parameters or CoMFA descriptors. The LFER equations and CoMFA models gave helpful insights into chiral recognition mechanisms.

  5. Maximum Likelihood Item Easiness Models for Test Theory Without an Answer Key

    PubMed Central

    Batchelder, William H.

    2014-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce two extensions to the basic model in order to account for item rating easiness/difficulty. The first extension is a multiplicative model and the second is an additive model. We show how the multiplicative model is related to the Rasch model. We describe several maximum-likelihood estimation procedures for the models and discuss issues of model fit and identifiability. We describe how the CCT models could be used to give alternative consensus-based measures of reliability. We demonstrate the utility of both the basic and extended models on a set of essay rating data and give ideas for future research. PMID:29795812

  6. Modeling gene expression measurement error: a quasi-likelihood approach

    PubMed Central

    Strimmer, Korbinian

    2003-01-01

    Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637

  7. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
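
    As a concrete illustration of the criterion-based branch of these techniques, AIC-based model averaging reduces to computing Akaike weights from the maximized log-likelihoods and parameter counts of the candidate models. The numbers below are invented for four hypothetical groundwater model conceptualizations; this is a generic sketch, not the paper's case studies.

      import numpy as np

      def aic_weights(log_liks, n_params):
          """Akaike weights for AIC-based model averaging."""
          aic = -2 * np.asarray(log_liks) + 2 * np.asarray(n_params)
          delta = aic - aic.min()
          w = np.exp(-0.5 * delta)
          return w / w.sum()

      # Hypothetical maximized log-likelihoods and parameter counts for
      # four alternative model conceptualizations.
      w = aic_weights([-1203.4, -1198.7, -1197.9, -1201.2], [4, 6, 8, 5])
      print(w)  # relative weights; averaged prediction = sum_i w_i * pred_i

    GLUE, by contrast, derives its weights from an informal likelihood measure evaluated over Monte Carlo samples rather than from an information criterion, which is one source of the differing model ranks the paper discusses.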

  8. Identification of Piecewise Linear Uniform Motion Blur

    NASA Astrophysics Data System (ADS)

    Patanukhom, Karn; Nishihara, Akinori

    A motion blur identification scheme is proposed for nonlinear uniform motion blurs approximated by piecewise linear models which consist of more than one linear motion component. The proposed scheme includes three modules: a motion direction estimator, a motion length estimator, and a motion combination selector. In order to identify the motion directions, the proposed scheme relies on trial restorations using directional forward ramp motion blurs along different directions and an analysis of directional information in the frequency domain via a Radon transform. Autocorrelation functions of image derivatives along several directions are employed for estimation of the motion lengths. A proper motion combination is identified by analyzing local autocorrelation functions of the non-flat component of the trial restored results. Experimental examples on simulated and real-world blurred images demonstrate the promising performance of the proposed scheme.

  9. SEMModComp: An R Package for Calculating Likelihood Ratio Tests for Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Levy, Roy

    2010-01-01

    SEMModComp, a software package for conducting likelihood ratio tests for mean and covariance structure modeling is described. The package is written in R and freely available for download or on request.
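
    The computation such a package performs is, at its core, a chi-squared comparison of nested fits. A minimal generic sketch (not SEMModComp itself, which is written in R) is:

      import numpy as np
      from scipy.stats import chi2

      def lr_test(loglik_restricted, loglik_full, df_diff):
          """Likelihood ratio test for nested mean and covariance
          structure models: twice the log-likelihood difference is
          referred to a chi-squared distribution."""
          stat = 2.0 * (loglik_full - loglik_restricted)
          return stat, chi2.sf(stat, df_diff)

      # Hypothetical fitted log-likelihoods for two nested SEMs that
      # differ by three free parameters.
      stat, p = lr_test(-4321.8, -4316.2, 3)
      print(f"LR = {stat:.2f}, p = {p:.4f}")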

  10. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full predictive probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been used in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should be able to locate the same (unknown) optimum; discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients, with the log-likelihood estimator being slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
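
    The two estimators compared in this record can be reproduced on synthetic data in a few lines. This sketch assumes a Gaussian predictive distribution with a linear mean and a log-linear spread model, and uses the closed-form CRPS of a normal distribution; all coefficients and data are invented. With a correct distributional assumption the two fits should nearly coincide, as the abstract reports.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(2)

      # Synthetic ensemble statistics and verifying observations, generated
      # so that the assumed mean/spread model is correct.
      m = rng.normal(10, 3, 2000)             # ensemble mean
      s = rng.uniform(0.5, 2.0, 2000)         # ensemble spread
      y = rng.normal(1.0 + 0.9 * m, np.exp(-0.2 + 0.7 * np.log(s)))

      def moments(par):
          a, b, c, d = par
          return a + b * m, np.exp(c + d * np.log(s))   # mu, sigma > 0

      def mean_crps(par):
          # Closed-form CRPS of a normal predictive distribution.
          mu, sig = moments(par)
          z = (y - mu) / sig
          return np.mean(sig * (z * (2 * norm.cdf(z) - 1)
                                + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi)))

      def neg_loglik(par):
          mu, sig = moments(par)
          return -np.sum(norm.logpdf(y, mu, sig))

      x0 = np.array([0.0, 1.0, 0.0, 0.5])
      for f in (mean_crps, neg_loglik):
          print(f.__name__, minimize(f, x0, method="Nelder-Mead").x.round(3))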

  11. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
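
    A stripped-down sketch of the ingredient this note adds, a parametric likelihood approximation generated from stochastic simulations and placed inside a Metropolis-Hastings sampler, is given below. The forest model is replaced by a toy overdispersed count simulator and the summaries are just the mean and log-variance; in the paper the simulator is FORMIND and the summaries are inventory statistics. Because the likelihood is re-estimated from a finite number of simulations, the resulting chain is only approximately correct.

      import numpy as np
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(3)

      def simulate(log_rate, n=200):
          """Toy stochastic simulator standing in for the forest model."""
          lam = np.exp(log_rate + 0.3 * rng.normal(size=n))  # process noise
          return rng.poisson(lam)

      def summaries(x):
          return np.array([x.mean(), np.log(x.var() + 1.0)])

      def synthetic_loglik(theta, s_obs, k=40):
          """Gaussian (parametric) likelihood approximation built from k
          stochastic simulations at parameter theta."""
          s = np.array([summaries(simulate(theta)) for _ in range(k)])
          return multivariate_normal.logpdf(s_obs, s.mean(0),
                                            np.cov(s.T) + 1e-6 * np.eye(2))

      s_obs = summaries(simulate(1.0))        # "observed" virtual data
      theta, ll = 0.0, synthetic_loglik(0.0, s_obs)
      chain = []
      for _ in range(2000):                   # Metropolis-Hastings, flat prior
          prop = theta + 0.1 * rng.normal()
          ll_prop = synthetic_loglik(prop, s_obs)
          if np.log(rng.uniform()) < ll_prop - ll:
              theta, ll = prop, ll_prop
          chain.append(theta)
      print("posterior mean of log rate:", round(float(np.mean(chain[500:])), 2))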

  12. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.

  13. Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Lee, Sik-Yum

    2005-01-01

    In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…

  14. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  15. Thinking versus feeling: differentiating between cognitive and affective components of perceived cancer risk.

    PubMed

    Janssen, Eva; van Osch, Liesbeth; Lechner, Lilian; Candel, Math; de Vries, Hein

    2012-01-01

    Despite the increased recognition of affect in guiding probability estimates, perceived risk has been mainly operationalised in a cognitive way and the differentiation between rational and intuitive judgements is largely unexplored. This study investigated the validity of a measurement instrument differentiating cognitive and affective probability beliefs and examined whether behavioural decision making is mainly guided by cognition or affect. Data were obtained from four surveys focusing on smoking (N=268), fruit consumption (N=989), sunbed use (N=251) and sun protection (N=858). Correlational analyses showed that affective likelihood was more strongly correlated with worry compared to cognitive likelihood and confirmatory factor analysis provided support for a two-factor model of perceived likelihood instead of a one-factor model (i.e. cognition and affect combined). Furthermore, affective likelihood was significantly associated with the various outcome variables, whereas the association for cognitive likelihood was absent in three studies. The findings provide support for the construct validity of the measures used to assess cognitive and affective likelihood. Since affective likelihood might be a better predictor of health behaviour than the commonly used cognitive operationalisation, both dimensions should be considered in future research.

  16. Parameter estimation of history-dependent leaky integrate-and-fire neurons using maximum-likelihood methods

    PubMed Central

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2012-01-01

    When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282

  17. Speed-Selector Guard For Machine Tool

    NASA Technical Reports Server (NTRS)

    Shakhshir, Roda J.; Valentine, Richard L.

    1992-01-01

    Simple guardplate prevents accidental reversal of direction of rotation or sudden change of speed of lathe, milling machine, or other machine tool. Custom-made for specific machine and control settings. Allows control lever to be placed at only one setting. Operator uses handle to slide guard to engage or disengage control lever. Protects personnel from injury and equipment from damage occurring if speed- or direction-control lever inadvertently placed in wrong position.

  18. Likelihoods for fixed rank nomination networks

    PubMed Central

    HOFF, PETER; FOSDICK, BAILEY; VOLFOVSKY, ALEX; STOVEL, KATHERINE

    2014-01-01

    Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586

  19. Chiral separation and quantitation of cetirizine and hydroxyzine by maltodextrin-mediated CE in human plasma: effect of zwitterionic property of cetirizine on enantioseparation.

    PubMed

    Nojavan, Saeed; Fakhari, Ali Reza

    2011-03-01

    In the present study, a very simple CE method for the chiral separation and quantitation of zwitterionic cetirizine (CTZ), the main metabolite of hydroxyzine (HZ), and of HZ itself has been developed. In addition, the effect of the zwitterionic property of CTZ on enantioseparation was investigated. Maltodextrin, a linear polysaccharide, was used as the chiral selector, and several parameters affecting the separation, such as the pH of the BGE, the concentration of the chiral selector, and the applied voltage, were studied. The best BGE conditions for CTZ and HZ enantiomers were optimized as 75 mM sodium phosphate solution at pH 2.0, containing 5% w/v maltodextrin. Results showed that, compared to HZ, the pH of the BGE was an effective parameter in the enantioseparation of CTZ due to its zwitterionic property. The linear range of the method was 30-1200 ng/mL for all enantiomers of CTZ and HZ. The quantification and detection limits (S/N=3) of all enantiomers were 30 and 10 ng/mL, respectively. The method was applied to the quantitative enantioseparation of CTZ and HZ in spiked human plasma. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Self-assembled cyclodextrin-modified gold nanoparticles on silica beads as stationary phase for chiral liquid chromatography and hydrophilic interaction chromatography.

    PubMed

    Li, Yuanyuan; Wei, Manman; Chen, Tong; Zhu, Nan; Ma, Yulong

    2016-11-01

    A facile strategy based on self-assembly of Au nanoparticles (AuNPs) (60 ± 10 nm in size) on the surfaces of amino-functionalized porous silica spheres under mild conditions was proposed. The resulting material possessed a core-shell structure in which the AuNPs were the shell and the silica spheres were the core. Then, thiolated-β-cyclodextrin (SH-β-CD) was covalently attached onto the AuNPs as chiral selector for the enantioseparation. The resultant packing material was evaluated by high-performance liquid chromatography (HPLC). Separations of nine pairs of enantiomers were achieved using the new chiral stationary phase (CSP) in the reversed-phase liquid chromatography (RPLC) mode. The results showed that the new CSP interacts more strongly with the analytes, owing to the AuNPs on the silica surfaces, resulting in a faster mass transfer rate compared with a β-CD-modified silica column. These results point to the potential of chemically modified nanoparticles as chiral selectors for HPLC-based enantioseparation. In addition, the new phase was also used in hydrophilic interaction liquid chromatography (HILIC) to separate polar compounds and highly hydrophilic compounds. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Capillary electrophoretic enantioseparation of basic drugs using a new single-isomer cyclodextrin derivative and theoretical study of the chiral recognition mechanism.

    PubMed

    Liu, Yongjing; Deng, Miaoduo; Yu, Jia; Jiang, Zhen; Guo, Xingjie

    2016-05-01

    A novel single-isomer cyclodextrin derivative, heptakis {2,6-di-O-[3-(1,3-dicarboxyl propylamino)-2-hydroxypropyl]}-β-cyclodextrin (glutamic acid-β-cyclodextrin), was synthesized and used as a chiral selector in capillary electrophoresis for the enantioseparation of 12 basic drugs, including terbutaline, clorprenaline, tulobuterol, clenbuterol, procaterol, carvedilol, econazole, miconazole, homatropine methyl bromide, brompheniramine, chlorpheniramine and pheniramine. The primary factors affecting separation efficiency, which include the background electrolyte pH, the concentration of glutamic acid-β-cyclodextrin and the phosphate buffer concentration, were investigated. Satisfactory enantioseparations were obtained using an uncoated fused-silica capillary of 50 cm (effective length 40 cm) × 50 μm id with 120 mM phosphate buffer (pH 2.5-4.0) containing 0.5-4.5 mM glutamic acid-β-cyclodextrin as background electrolyte. A voltage of 20 kV was applied and the capillary temperature was kept at 20°C. The results showed that glutamic acid-β-cyclodextrin was an effective chiral selector for the 12 basic drugs studied. Moreover, the possible chiral recognition mechanism of brompheniramine, chlorpheniramine and pheniramine on glutamic acid-β-cyclodextrin was investigated using the semi-empirical Parametric Method 3. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Zwitterionic chiral stationary phases based on cinchona and chiral sulfonic acids for the direct stereoselective separation of amino acids and other amphoteric compounds.

    PubMed

    Zhang, Tong; Holder, Emilie; Franco, Pilar; Lindner, Wolfgang

    2014-06-01

    An extensive series of free amino acids and analogs were directly resolved into enantiomers (and stereoisomers where appropriate) by HPLC on zwitterionic chiral stationary phases (Chiralpak ZWIX(+) and Chiralpak ZWIX(-)). The interaction and chiral recognition mechanisms were based on the synergistic double ion-pairing process between the analyte and the chiral selectors. The chiral separation and elution order were found to be predictable for primary α-amino acids with apolar aliphatic side chains. A systematic investigation was undertaken to gain an insight into the influence of the structural features on the enantiorecognition. The presence of polar and/or aromatic groups in the analyte structure is believed to tune the double ion-pairing equilibrium by the involvement of secondary interaction forces such as hydrogen bonding, van der Waals forces and π-π stacking in concert with steric parameters. The ZWIX chiral columns were able to separate enantiomers and stereoisomers of various amphoteric compounds with no need for precolumn derivatization. Column switching between ZWIX(+) and ZWIX(-) provides an instrumental tool to reverse or control the enantiomer elution order, due to the complementarity of the applied chiral selectors. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
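
    A small worked example helps make the propagator likelihood concrete. The sketch below uses a three-state birth-death chain, which actually satisfies detailed balance; it is chosen purely to keep the code short, whereas the paper's targets (ASEP, kinetic Ising, replicator dynamics) are genuinely non-equilibrium. Following the abstract's recipe, the empirical distribution of the snapshots is propagated forward by a fictitious time tau and the samples are scored against the propagated distribution. All rates and sample sizes are invented.

      import numpy as np
      from scipy.linalg import expm, null_space
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(4)

      def rate_matrix(k):
          """Birth-death chain on {0,1,2}: up rate k, down rate 1.
          Convention: W[i, j] is the rate i -> j, rows sum to zero."""
          return np.array([[-k,    k,          0.0],
                           [1.0, -(1.0 + k),  k  ],
                           [0.0,  1.0,       -1.0]])

      # Independent snapshots from the steady state of the true model (k = 2).
      pi = null_space(rate_matrix(2.0).T)[:, 0]
      pi /= pi.sum()
      x = rng.choice(3, size=2000, p=pi)
      p_emp = np.bincount(x, minlength=3) / x.size

      def neg_propagator_loglik(log_k, tau=1.0):
          # Propagate the empirical distribution forward by fictitious time
          # tau, then score the samples against the propagated distribution.
          q = p_emp @ expm(tau * rate_matrix(np.exp(log_k)))
          return -np.sum(p_emp * np.log(q))

      fit = minimize_scalar(neg_propagator_loglik, bounds=(-3, 3),
                            method="bounded")
      print("estimated up-rate k:", round(float(np.exp(fit.x)), 2))

    At the true parameter the steady state is invariant under propagation, so the propagated distribution matches the empirical one and the objective is (asymptotically) maximized there.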

  4. Gaussian copula as a likelihood function for environmental models

    NASA Astrophysics Data System (ADS)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods currently in use employ Gaussian processes as a likelihood function because of their favourable analytical properties. A Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts, and we use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. 2) Based on the results from a didactical example of predicting rainfall runoff, we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions, and they could help us to better capture the statistical properties of errors and make more reliable predictions.
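
    A minimal sketch of the copula construction, with all data invented: past model errors with AR(1) dependence and a skewed margin are summarized by empirical margins plus a Gaussian copula correlation, after which the copula log-density of a new error pair can be evaluated with no Box-Cox-style transformation. This is a generic illustration, not the authors' code.

      import numpy as np
      from scipy.stats import norm, gamma, rankdata

      rng = np.random.default_rng(5)

      # Hypothetical "past" model errors: AR(1)-dependent with a skewed
      # (gamma) margin, mimicking heteroscedastic streamflow residuals.
      n, rho_true = 2000, 0.6
      z = np.zeros(n)
      for t in range(1, n):
          z[t] = rho_true * z[t - 1] + np.sqrt(1 - rho_true ** 2) * rng.normal()
      e_past = gamma.ppf(norm.cdf(z), a=2.0)

      # Semiparametric fit: empirical margins + Gaussian copula correlation.
      u = rankdata(e_past) / (n + 1.0)        # pseudo-observations
      zh = norm.ppf(u)
      rho_hat = np.corrcoef(zh[:-1], zh[1:])[0, 1]

      def copula_pair_loglik(e1, e2):
          """Gaussian copula log-density of two consecutive errors, with
          margins approximated by the empirical distribution of e_past.
          (The marginal log-density terms are fixed additive constants
          here, so only the copula part is returned.)"""
          uu = np.searchsorted(np.sort(e_past), [e1, e2]) / (n + 1.0)
          z1, z2 = norm.ppf(np.clip(uu, 1e-4, 1 - 1e-4))
          det = 1.0 - rho_hat ** 2
          return (-0.5 * np.log(det)
                  - (rho_hat ** 2 * (z1 ** 2 + z2 ** 2)
                     - 2 * rho_hat * z1 * z2) / (2 * det))

      print("rho_hat:", round(rho_hat, 2),
            "pair loglik:", round(copula_pair_loglik(1.2, 1.5), 3))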

  5. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  6. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  7. Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan

    2005-01-01

    In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…

  8. Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes

    ERIC Educational Resources Information Center

    Leite, Walter L.; Stapleton, Laura M.

    2011-01-01

    In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…

  9. Source and Message Factors in Persuasion: A Reply to Stiff's Critique of the Elaboration Likelihood Model.

    ERIC Educational Resources Information Center

    Petty, Richard E.; And Others

    1987-01-01

    Answers James Stiff's criticism of the Elaboration Likelihood Model (ELM) of persuasion. Corrects certain misperceptions of the ELM and criticizes Stiff's meta-analysis that compares ELM predictions with those derived from Kahneman's elastic capacity model. Argues that Stiff's presentation of the ELM and the conclusions he draws based on the data…

  10. Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key

    ERIC Educational Resources Information Center

    France, Stephen L.; Batchelder, William H.

    2015-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…

  11. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    1992-01-01

    Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…

  12. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  13. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    PubMed

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
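
    Outside the IRT setting, the mechanics of a profile-likelihood CI are easy to demonstrate. The sketch below profiles the scale parameter out of a gamma model and finds where the profile log-likelihood drops half of chi2(0.95, df=1) below its maximum; the data, the root-finding brackets, and the gamma example are all illustrative choices.

      import numpy as np
      from scipy.stats import gamma, chi2
      from scipy.optimize import brentq

      rng = np.random.default_rng(6)
      x = rng.gamma(shape=3.0, scale=2.0, size=200)

      def profile_loglik(a):
          """Profile log-likelihood for the gamma shape a: for fixed a,
          the MLE of the scale is mean(x)/a, substituted back in."""
          return gamma.logpdf(x, a, scale=x.mean() / a).sum()

      a_hat, _, _ = gamma.fit(x, floc=0)      # full MLE (location fixed at 0)
      cutoff = profile_loglik(a_hat) - 0.5 * chi2.ppf(0.95, df=1)

      # PL CI endpoints: where the profile log-likelihood crosses the cutoff.
      lo = brentq(lambda a: profile_loglik(a) - cutoff, 0.5, a_hat)
      hi = brentq(lambda a: profile_loglik(a) - cutoff, a_hat, 12.0)
      print(f"shape MLE = {a_hat:.2f}, 95% PL CI = ({lo:.2f}, {hi:.2f})")

    Unlike a Wald interval, the result is not forced to be symmetric around the estimate, which is the asymmetry the article exploits for transformed parameters.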

  14. High-Performance Clock Synchronization Algorithms for Distributed Wireless Airborne Computer Networks with Applications to Localization and Tracking of Targets

    DTIC Science & Technology

    2010-06-01

    GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-exponential networks, yielding more accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a mixture of distributions.

  15. Comparison of two weighted integration models for the cueing task: linear and likelihood

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2003-01-01

    In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue) compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained with a limited-capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited-capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models, a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratio (SNR) increases. To test these models, three observers performed a cued discrimination task with Gaussian targets and an 80% valid precue across a broad range of SNRs. A limited-capacity attentional switching model was also analyzed and rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.

  16. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
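
    The thermodynamic-integration estimator itself fits in a short script. This sketch applies it to a conjugate normal model so the exact evidence is available for comparison; the temperature ladder, chain lengths, and proposal scale are illustrative choices, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(7)
      y = rng.normal(0.8, 1.0, size=20)       # data: N(theta, 1), prior N(0, 1)

      def loglik(th):
          return -0.5 * np.sum((y - th) ** 2) - y.size / 2 * np.log(2 * np.pi)

      def logpri(th):
          return -0.5 * th ** 2 - 0.5 * np.log(2 * np.pi)

      # Temperature ladder concentrated near t = 0, as commonly recommended.
      ts = np.linspace(0, 1, 21) ** 5
      means = []
      for t in ts:
          th, lp = 0.0, logpri(0.0) + t * loglik(0.0)
          ll_samples = []
          for _ in range(4000):               # random-walk Metropolis at power t
              prop = th + 0.5 * rng.normal()
              lp_prop = logpri(prop) + t * loglik(prop)
              if np.log(rng.uniform()) < lp_prop - lp:
                  th, lp = prop, lp_prop
              ll_samples.append(loglik(th))
          means.append(np.mean(ll_samples[1000:]))   # E_t[log L], burn-in cut
      means = np.array(means)
      # log evidence = integral over t of E_t[log L] (trapezoidal rule)
      log_z_ti = np.sum(np.diff(ts) * (means[1:] + means[:-1]) / 2.0)

      # Exact evidence for this conjugate model, for comparison.
      S, n = y.sum(), y.size
      log_z_true = (-0.5 * n * np.log(2 * np.pi)
                    - 0.5 * (np.sum(y ** 2) - S ** 2 / (n + 1))
                    - 0.5 * np.log(n + 1))
      print(round(log_z_ti, 3), round(log_z_true, 3))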

  17. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  18. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    ERIC Educational Resources Information Center

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)

  19. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    NASA Astrophysics Data System (ADS)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support the model receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.

  20. Model-on-Demand Predictive Control for Nonlinear Hybrid Systems With Application to Adaptive Behavioral Interventions

    PubMed Central

    Nandola, Naresh N.; Rivera, Daniel E.

    2011-01-01

    This paper presents a data-centric modeling and predictive control approach for nonlinear hybrid systems. System identification of hybrid systems represents a challenging problem because model parameters depend on the mode or operating point of the system. The proposed algorithm applies Model-on-Demand (MoD) estimation to generate a local linear approximation of the nonlinear hybrid system at each time step, using a small subset of data selected by an adaptive bandwidth selector. The appeal of the MoD approach lies in the fact that model parameters are estimated based on a current operating point; hence estimation of locations or modes governed by autonomous discrete events is achieved automatically. The local MoD model is then converted into a mixed logical dynamical (MLD) system representation which can be used directly in a model predictive control (MPC) law for hybrid systems using multiple-degree-of-freedom tuning. The effectiveness of the proposed MoD predictive control algorithm for nonlinear hybrid systems is demonstrated on a hypothetical adaptive behavioral intervention problem inspired by Fast Track, a real-life preventive intervention for improving parental function and reducing conduct disorder in at-risk children. Simulation results demonstrate that the proposed algorithm can be useful for adaptive intervention problems exhibiting both nonlinear and hybrid character. PMID:21874087

  1. The discounting model selector: Statistical software for delay discounting applications.

    PubMed

    Gilroy, Shawn P; Franck, Christopher T; Hantula, Donald A

    2017-05-01

    Original, open-source computer software was developed and validated against established delay discounting methods in the literature. The software executed approximate Bayesian model selection methods from user-supplied temporal discounting data and computed the effective delay 50 (ED50) from the best performing model. The software was custom-designed to enable behavior analysts to conveniently apply recent statistical methods to temporal discounting data with the aid of a graphical user interface (GUI). The results of independent validation of the approximate Bayesian model selection methods indicated that the program provided results identical to those of the original source paper and its methods. Monte Carlo simulation (n = 50,000) confirmed that the true model was selected most often in each setting. Simulation code and data for this study were posted to an online repository for use by other researchers. The model selection approach was applied to three existing delay discounting data sets from the literature in addition to the data from the source paper. Comparisons of the model-selected ED50 values were consistent with traditional indices of discounting. Conceptual issues related to the development and use of computer software by behavior analysts and the opportunities afforded by free and open-source software are discussed, and a review of possible expansions of this software is provided. © 2017 Society for the Experimental Analysis of Behavior.
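
    The model-comparison step such software automates can be sketched as follows. This is not the published program (which uses approximate Bayesian model selection): the sketch fits exponential and hyperbolic discounting curves to invented indifference points, compares them with a Gaussian-error BIC, and reports each model's ED50 (ln 2 / k and 1 / k, respectively).

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical indifference points (proportion of delayed value) at
      # several delays, as produced by a titration discounting task.
      delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)
      values = np.array([0.95, 0.85, 0.60, 0.40, 0.30, 0.20])

      models = {
          "exponential": (lambda d, k: np.exp(-k * d),      lambda k: np.log(2) / k),
          "hyperbolic":  (lambda d, k: 1.0 / (1.0 + k * d), lambda k: 1.0 / k),
      }
      n = delays.size
      for name, (f, ed50) in models.items():
          (k,), _ = curve_fit(f, delays, values, p0=[0.01], bounds=(1e-6, 10))
          rss = np.sum((values - f(delays, k)) ** 2)
          bic = n * np.log(rss / n) + 1 * np.log(n)   # Gaussian-error BIC, 1 param
          print(f"{name}: k = {k:.4f}, BIC = {bic:.2f}, ED50 = {ed50(k):.1f} days")
      # Renormalizing exp(-0.5 * (BIC_i - BIC_min)) across models would give
      # approximate model probabilities, analogous to the selection step above.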

  2. Bayesian experimental design for models with intractable likelihoods.

    PubMed

    Drovandi, Christopher C; Pettitt, Anthony N

    2013-12-01

    In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.
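
    The ABC rejection step that the design method builds on is simple to state in code: draw parameters from the prior, simulate data, and keep draws whose summary statistics fall within a tolerance of the observed summaries. This is a minimal sketch; `simulate`, `summary`, and `prior_sample` are placeholder callables, not part of the paper.

    ```python
    import numpy as np

    def abc_rejection(observed, prior_sample, simulate, summary,
                      n_sims=100000, tol=0.05, seed=None):
        """Plain ABC rejection: accept parameter draws whose simulated
        summaries lie within `tol` of the observed summaries."""
        rng = np.random.default_rng(seed)
        s_obs = summary(observed)
        accepted = []
        for _ in range(n_sims):
            theta = prior_sample(rng)
            s_sim = summary(simulate(theta, rng))
            if np.linalg.norm(s_sim - s_obs) < tol:
                accepted.append(theta)
        return np.array(accepted)  # approximate posterior sample
    ```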

  3. Poisson point process modeling for polyphonic music transcription.

    PubMed

    Peeling, Paul; Li, Chung-fai; Godsill, Simon

    2007-04-01

    Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
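
    The convenience of the superposition property is easy to see in code: per-note intensity functions simply add, so the standard point-process log-likelihood (log intensity summed over detected peaks, minus the integrated intensity) applies directly to a chord. This is a minimal sketch; the per-note intensity function is a placeholder for whatever harmonic model is assumed.

    ```python
    import numpy as np

    def chord_log_likelihood(peak_freqs, note_f0s, freq_grid, note_intensity):
        """Log-likelihood of detected spectral peaks under a superposition of
        per-note nonhomogeneous Poisson processes; no explicit peak-to-harmonic
        data association is required because intensities add."""
        lam_peaks = sum(note_intensity(peak_freqs, f0) for f0 in note_f0s)
        lam_grid = sum(note_intensity(freq_grid, f0) for f0 in note_f0s)
        integral = np.trapz(lam_grid, freq_grid)   # integrated intensity term
        return np.sum(np.log(lam_peaks)) - integral
    ```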

  4. Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.

    2003-01-01

    The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…

  5. The Elaboration Likelihood Model: Implications for the Practice of School Psychology.

    ERIC Educational Resources Information Center

    Petty, Richard E.; Heesacker, Martin; Hughes, Jan N.

    1997-01-01

    Reviews a contemporary theory of attitude change, the Elaboration Likelihood Model (ELM) of persuasion, and addresses its relevance to school psychology. Claims that a key postulate of ELM is that attitude change results from thoughtful (central route) or nonthoughtful (peripheral route) processes. Illustrations of ELM's utility for school…

  6. Counseling Pretreatment and the Elaboration Likelihood Model of Attitude Change.

    ERIC Educational Resources Information Center

    Heesacker, Martin

    1986-01-01

    Results of the application of the Elaboration Likelihood Model (ELM) to a counseling context revealed that more favorable attitudes toward counseling occurred as subjects' ego involvement increased and as intervention quality improved. Counselor credibility affected the degree to which subjects' attitudes reflected argument quality differences.…

  7. Application of the Elaboration Likelihood Model of Attitude Change to Assertion Training.

    ERIC Educational Resources Information Center

    Ernst, John M.; Heesacker, Martin

    1993-01-01

    College students (n=113) participated in study comparing effects of elaboration likelihood model (ELM) based assertion workshop with those of typical assertion workshop. ELM-based workshop was significantly better at producing favorable attitude change, greater intention to act assertively, and more favorable evaluations of workshop content.…

  8. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional (simultaneous) maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…

  9. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    EPA Science Inventory

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  10. Modeling abundance effects in distance sampling

    USGS Publications Warehouse

    Royle, J. Andrew; Dawson, D.K.; Bates, S.

    2004-01-01

    Distance-sampling methods are commonly used in studies of animal populations to estimate population density. A common objective of such studies is to evaluate the relationship between abundance or density and covariates that describe animal habitat or other environmental influences. However, little attention has been focused on methods of modeling abundance covariate effects in conventional distance-sampling models. In this paper we propose a distance-sampling model that accommodates covariate effects on abundance. The model is based on specification of the distance-sampling likelihood at the level of the sample unit in terms of local abundance (for each sampling unit). This model is augmented with a Poisson regression model for local abundance that is parameterized in terms of available covariates. Maximum-likelihood estimation of detection and density parameters is based on the integrated likelihood, wherein local abundance is removed from the likelihood by integration. We provide an example using avian point-transect data of Ovenbirds (Seiurus aurocapillus) collected using a distance-sampling protocol and two measures of habitat structure (understory cover and basal area of overstory trees). The model yields a sensible description (positive effect of understory cover, negative effect of basal area) of the relationship between habitat and Ovenbird density that can be used to evaluate the effects of habitat management on Ovenbird populations.
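
    The integrated likelihood for a single sampling unit can be sketched as follows: local abundance is Poisson with a log-linear covariate model, detection follows a half-normal function of distance, and abundance is summed out. This is a minimal sketch under stated assumptions (point transects, truncation of the infinite sum at `N_max`), not the authors' implementation.

    ```python
    import numpy as np
    from scipy.stats import binom, poisson

    def unit_log_likelihood(distances, covars, beta, sigma, w=100.0, N_max=500):
        """Integrated likelihood for one point-transect unit: Poisson local
        abundance (log link) with half-normal detection, N summed out."""
        lam = np.exp(covars @ beta)                   # expected local abundance
        g = lambda x: np.exp(-x**2 / (2 * sigma**2))  # detection function
        grid = np.linspace(0.0, w, 200)
        pbar = np.trapz(g(grid) * 2 * grid / w**2, grid)  # mean detection prob
        n = len(distances)
        f_x = g(distances) * 2 * distances / w**2 / pbar  # distance densities
        N = np.arange(n, N_max)                       # sum local abundance out
        marg_n = np.sum(poisson.pmf(N, lam) * binom.pmf(n, N, pbar))
        # (analytically this sum equals poisson.pmf(n, lam * pbar))
        return np.log(marg_n) + np.sum(np.log(f_x))
    ```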

  11. Modeling regional variation in riverine fish biodiversity in the Arkansas-White-Red River basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schweizer, Peter E; Jager, Yetta

    The patterns of biodiversity in freshwater systems are shaped by biogeography, environmental gradients, and human-induced factors. In this study, we developed empirical models to explain fish species richness in subbasins of the Arkansas-White-Red River basin as a function of discharge, elevation, climate, land cover, water quality, dams, and longitudinal position. We used information-theoretic criteria to compare generalized linear mixed models and identified well-supported models. Subbasin attributes that were retained as predictors included discharge, elevation, number of downstream dams, percent forest, percent shrubland, nitrate, total phosphorus, and sediment. The random component of our models, which assumed a negative binomial distribution, included spatial correlation within larger river basins and overdispersed residual variance. This study differs from previous biodiversity modeling efforts in several ways. First, obtaining likelihoods for negative binomial mixed models, and thereby avoiding reliance on quasi-likelihoods, has only recently become practical. We found the ranking of models based on these likelihood estimates to be more believable than that produced using quasi-likelihoods. Second, because we had access to a regional-scale watershed model for this river basin, we were able to include model-estimated water quality attributes as predictors. Thus, the resulting models have potential value as tools with which to evaluate the benefits of water quality improvements to fish.

  12. Regression estimators for generic health-related quality of life and quality-adjusted life years.

    PubMed

    Basu, Anirban; Manca, Andrea

    2012-01-01

    The aim was to develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, that account for features typical of such data: a skewed distribution, spikes at 1 or 0, and heteroskedasticity. The regression estimators are based on features of the Beta distribution. First, both a single-equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests, such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One- and 2-part Beta regression models provide flexible approaches to regress outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcome distribution. This work provides applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.

  13. Measurement of CIB power spectra with CAM-SPEC from Planck HFI maps

    NASA Astrophysics Data System (ADS)

    Mak, Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine

    2015-08-01

    We present new measurements of the cosmic infrared background (CIB) anisotropies and a first CIB likelihood using Planck HFI data at 353, 545, and 857 GHz. The measurements are based on cross-frequency power spectra and likelihood analysis using the CAM-SPEC package, rather than the map-based template removal of foregrounds used in previous Planck CIB analyses. We construct the likelihood of the CIB temperature fluctuations, an extension of the CAM-SPEC likelihood used in CMB analysis to higher frequencies, and use it to derive the best estimate of the CIB power spectrum over three decades in multipole moment, l, covering 50 ≤ l ≤ 2500. We adopt parametric models of the CIB and foreground contaminants (Galactic cirrus, infrared point sources, and cosmic microwave background anisotropies), and calibrate the dataset uniformly across frequencies with known Planck beam and noise properties in the likelihood construction. We validate our likelihood through simulations and an extensive suite of consistency tests, and assess the impact of instrumental and data-selection effects on the final CIB power spectrum constraints. Two approaches are developed for interpreting the CIB power spectrum. The first is a simple parametric model that describes the cross-frequency power using amplitudes, correlation coefficients, and a known multipole dependence. The second is based on physical models for galaxy clustering and the evolution of the infrared emission of galaxies. Both approaches fit all auto- and cross-power spectra very well, with a best fit of χ2ν = 1.04 (parametric model). Using the best foreground solution, we find that the cleaned CIB power spectra are in good agreement with previous Planck and Herschel measurements.

  14. Finding Dantzig Selectors with a Proximity Operator based Fixed-point Algorithm

    DTIC Science & Technology

    2014-11-01

    experiments showed that this method usually outperforms the method in [2] in terms of CPU time while producing solutions of comparable quality. The... method proposed in [19]. To alleviate the difficulty caused by the subproblem without a closed-form solution, a linearized ADM was proposed for the... a closed-form solution, but the β-related subproblem does not and is solved approximately by using the nonmonotone gradient method in [18]. The

  15. Resource Allocation over a GRID Military Network

    DTIC Science & Technology

    2006-12-01

    The behaviour is called PHB (Per Hop Behaviour) and it is defined locally; i.e., it is not an end-to-end specification (as for RSVP) but it is... The class selector PHB offers three forwarding priorities: Expedited Forwarding (EF) characterized by a minimum... [14] J. Heinanen, F. Baker, W. Weiss, J. Wroclawski, “Assured Forwarding PHB Group,” IETF RFC 2597, June 1999. [15] E. Crawley, R. Nair, B

  16. Nonlinear frequency conversion of radiation from a copper-vapor laser

    NASA Astrophysics Data System (ADS)

    Polunin, Iu. P.; Troitskii, V. O.

    1987-11-01

    The nonlinear frequency conversion of copper-vapor laser radiation in a KDP crystal was studied experimentally. Output powers of 600 mW and 120 mW were obtained at wavelengths of 271 nm (the sum frequency) and 289 nm (the second harmonic of the yellow line), respectively. The conversion efficiency in both cases was about 3 percent; when selector losses were taken into account, the efficiency amounted to 5 percent.

  17. Multimodal Likelihoods in Educational Assessment: Will the Real Maximum Likelihood Score Please Stand up?

    ERIC Educational Resources Information Center

    Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike

    2011-01-01

    It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…

  18. A review and comparison of Bayesian and likelihood-based inferences in beta regression and zero-or-one-inflated beta regression.

    PubMed

    Liu, Fang; Eugenio, Evercita C

    2018-04-01

    Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review on beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than ad hoc replacing them with values close to zero/one via simulation studies; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than MCMC algorithms used in the Bayesian inferences, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.

  19. Indeterminate lung nodules in cancer patients: pretest probability of malignancy and the role of 18F-FDG PET/CT.

    PubMed

    Evangelista, Laura; Panunzio, Annalori; Polverosi, Roberta; Pomerri, Fabio; Rubello, Domenico

    2014-03-01

    The purpose of this study was to determine the likelihood of malignancy for indeterminate lung nodules identified on CT, comparing two standardized models with (18)F-FDG PET/CT. Fifty-nine cancer patients with indeterminate lung nodules (solid tumors; diameter ≥5 mm) on CT underwent FDG PET/CT for lesion characterization. The Mayo Clinic and Veterans Affairs Cooperative Study models of likelihood of malignancy were applied to solitary pulmonary nodules. High probability of malignancy was assigned a priori for multiple nodules. Low (<5%), intermediate (5-60%), and high (>60%) pretest malignancy probabilities were analyzed separately. Patients were reclassified with PET/CT. Histopathology or 2-year imaging follow-up established the diagnosis. Outcome-based reclassification differences were defined as net reclassification improvement, and an asymptotic test of the null hypothesis was applied. Thirty-one patients had histology-proven malignancy. PET/CT was true-positive in 24 and true-negative in 25 cases. Negative predictive value was 78% and positive predictive value was 89%. On the basis of the Mayo Clinic model (n=31), 18 patients had low, 12 had intermediate, and one had high pretest likelihood; on the basis of the Veterans Affairs model (n=26), 5 patients had low, 20 had intermediate, and one had high pretest likelihood. Because of multiple lung nodules, 28 patients were classified as having high malignancy risk. PET/CT showed 32 negative and 27 positive scans. Net reclassification improvement was 0.95 for the Mayo Clinic model and 1.6 for the Veterans Affairs model (both p<0.0001). Positive PET/CT findings occurred in 14 of 31 (45.2%) patients with low or intermediate pretest likelihood under the Mayo Clinic model and in 12 of 26 (46.2%) under the Veterans Affairs model. Of 15 patients with high pretest likelihood and negative findings on PET/CT, 13 (86.7%) did not have lung malignancy. PET/CT improves stratification of cancer patients with indeterminate pulmonary nodules; a substantial number of patients considered at low or intermediate pretest likelihood of malignancy, but with histology-proven lung cancer, showed abnormal PET/CT findings.

  20. Charge transfer in rectifying oxide heterostructures and oxide access elements in ReRAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stefanovich, G. B.; Pergament, A. L.; Boriskov, P. P.

    2016-05-15

    The main aspects of the synthesis and experimental research of oxide diode heterostructures are discussed with respect to their use as selector diodes, i.e., access elements in oxide resistive memory. It is shown that charge transfer in these materials differs significantly from the conduction mechanism in p–n junctions based on conventional semiconductors (Si, Ge, A(III)–B(V)), and the model should take into account the electronic properties of oxides, primarily the low carrier drift mobility. It is found that an increase in the forward current requires an oxide with a small band gap (<1.3 eV) in the heterostructure composition. Heterostructures with Zn, In–Zn (IZO), Ti, Ni, and Cu oxides are studied; it is found that the CuO–IZO heterojunction has the highest forward current density (10^4 A/cm^2).

  1. LACIE performance predictor FOC users manual

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The LACIE Performance Predictor (LPP) is a computer simulation of the LACIE process for predicting worldwide wheat production. The simulation provides for the introduction of various errors into the system and provides estimates based on these errors, thus allowing the user to determine the impact of selected error sources. The FOC LPP simulates the acquisition of the sample segment data by the LANDSAT Satellite (DAPTS), the classification of the agricultural area within the sample segment (CAMS), the estimation of the wheat yield (YES), and the production estimation and aggregation (CAS). These elements include data acquisition characteristics, environmental conditions, classification algorithms, the LACIE aggregation and data adjustment procedures. The operational structure for simulating these elements consists of the following key programs: (1) LACIE Utility Maintenance Process, (2) System Error Executive, (3) Ephemeris Generator, (4) Access Generator, (5) Acquisition Selector, (6) LACIE Error Model (LEM), and (7) Post Processor.

  2. A decision framework for identifying models to estimate forest ecosystem services gains from restoration

    USGS Publications Warehouse

    Christin, Zachary; Bagstad, Kenneth J.; Verdone, Michael

    2016-01-01

    Restoring degraded forests and agricultural lands has become a global conservation priority. A growing number of tools can quantify ecosystem service tradeoffs associated with forest restoration. This evolving “tools landscape” presents a dilemma: more tools are available, but selecting appropriate tools has become more challenging. We present a Restoration Ecosystem Service Tool Selector (RESTS) framework that describes key characteristics of 13 ecosystem service assessment tools. Analysts enter information about their decision context, services to be analyzed, and desired outputs. Tools are filtered and presented based on five evaluative criteria: scalability, cost, time requirements, handling of uncertainty, and applicability to benefit-cost analysis. RESTS uses a spreadsheet interface but a web-based interface is planned. Given the rapid evolution of ecosystem services science, RESTS provides an adaptable framework to guide forest restoration decision makers toward tools that can help quantify ecosystem services in support of restoration.

  3. Fast separation of enantiomers by capillary electrophoresis using a combination of two capillaries with different internal diameters.

    PubMed

    Šebestová, Andrea; Petr, Jan

    2017-12-01

    The combination of capillaries with different internal diameters was used to accelerate the separation of enantiomers in capillary electrophoresis. Separation of R,S-1,1'-binaphthalene-2,2'-diyl hydrogen phosphate using an isopropyl derivative of cyclofructan 6 was studied as a model system. The best separation conditions included 500 mM sodium borate pH 9.5 with a 60 mM concentration of the chiral selector. Separation lasted approx. 1.5 min using the combination of 50 and 100 μm id capillaries of 9.7 cm and 22.9 cm length, respectively. It allowed approx. 12-fold acceleration in comparison to the traditional long-end separation, mainly due to the higher electroosmotic flow generated in the connected capillaries. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Responses of Aquatic Plants to Eutrophication in Rivers: A Revised Conceptual Model

    PubMed Central

    O’Hare, Matthew T.; Baattrup-Pedersen, Annette; Baumgarte, Inga; Freeman, Anna; Gunn, Iain D. M.; Lázár, Attila N.; Sinclair, Raeannon; Wade, Andrew J.; Bowes, Michael J.

    2018-01-01

    Compared to research on eutrophication in lakes, there has been significantly less work carried out on rivers despite the importance of the topic. However, over the last decade, there has been a surge of interest in the response of aquatic plants to eutrophication in rivers. This is an area of applied research and the work has been driven by the widespread nature of the impacts and the significant opportunities for system remediation. A conceptual model has been put forward to describe how aquatic plants respond to eutrophication. Since the model was created, there have been substantial increases in our understanding of a number of the underlying processes. For example, we now know the threshold nutrient concentrations at which nutrients no longer limit algal growth. We also now know that the physical habitat template of rivers is a primary selector of aquatic plant communities. As such, nutrient enrichment impacts on aquatic plant communities are strongly influenced, both directly and indirectly, by physical habitat. A new conceptual model is proposed that incorporates these findings. The application of the model to management, system remediation, target setting, and our understanding of multi-stressor systems is discussed. We also look to the future and the potential for new numerical models to guide management. PMID:29755484

  5. Cross-validation to select Bayesian hierarchical models in phylogenetics.

    PubMed

    Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C

    2016-05-26

    Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.

  6. Inverse Ising problem in continuous time: A latent variable approach

    NASA Astrophysics Data System (ADS)

    Donner, Christian; Opper, Manfred

    2017-12-01

    We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.

  7. A Solution to Separation and Multicollinearity in Multiple Logistic Regression

    PubMed Central

    Shen, Jianzhao; Gao, Sujuan

    2010-01-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models, which was shown to reduce bias and the non-existence problem. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach alone solves both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study. PMID:20376286

  8. A Solution to Separation and Multicollinearity in Multiple Logistic Regression.

    PubMed

    Shen, Jianzhao; Gao, Sujuan

    2008-10-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models, which was shown to reduce bias and the non-existence problem. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach alone solves both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study.
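
    A minimal sketch of the double penalized estimator described in the two preceding records follows: Firth's modified score for logistic regression (hat-value correction) combined with a ridge term, solved here by plain gradient ascent on the penalized score. The step size, iteration count, and solver are illustrative choices, not those of the paper.

    ```python
    import numpy as np

    def double_penalized_logistic(X, y, lam=1.0, step=0.1, n_iter=500):
        """Solve the modified score equation X'(y - p + h*(0.5 - p)) - lam*beta = 0,
        i.e., Firth's bias-reducing penalty combined with a ridge penalty."""
        n, k = X.shape
        beta = np.zeros(k)
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            W = p * (1 - p)
            XtWX_inv = np.linalg.inv(X.T @ (X * W[:, None]))
            # hat-matrix diagonals of W^(1/2) X (X'WX)^(-1) X' W^(1/2)
            h = W * np.einsum("ij,jk,ik->i", X, XtWX_inv, X)
            score = X.T @ (y - p + h * (0.5 - p)) - lam * beta
            beta += step * score / n    # simple gradient-ascent update
        return beta
    ```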

  9. Quasar microlensing models with constraints on the Quasar light curves

    NASA Astrophysics Data System (ADS)

    Tie, S. S.; Kochanek, C. S.

    2018-01-01

    Quasar microlensing analyses implicitly generate a model of the variability of the source quasar. The implied source variability may be unrealistic, yet its likelihood is generally not evaluated. We used the damped random walk (DRW) model for quasar variability to evaluate the likelihood of the source variability and applied the revised algorithm to a microlensing analysis of the lensed quasar RX J1131-1231. We compared estimates of the size of the quasar disc and the average stellar mass of the lens galaxy with and without applying the DRW likelihoods for the source variability model and found no significant effect on the estimated physical parameters. The most likely explanation is that unrealistic source light-curve models are generally associated with poor microlensing fits that already make a negligible contribution to the probability distributions of the derived parameters.

  10. Analysis of hourly crash likelihood using unbalanced panel data mixed logit model and real-time driving environmental big data.

    PubMed

    Chen, Feng; Chen, Suren; Ma, Xiaoxiang

    2018-06-01

    Driving environment, including road surface conditions and traffic states, often changes over time and influences crash probability considerably. Traditional crash frequency models, developed at large temporal scales, struggle to capture the time-varying character of these factors, which can cause substantial loss of critical driving-environment information for crash prediction. Crash prediction models with refined temporal data (hourly records) are developed here to characterize the time-varying nature of these contributing factors. Unbalanced panel data mixed logit models are developed to analyze hourly crash likelihood of highway segments. The refined temporal driving environmental data, including road surface and traffic condition, obtained from the Road Weather Information System (RWIS), are incorporated into the models. Model estimation results indicate that traffic speed, traffic volume, curvature, and the chemically wet road surface indicator are better modeled as random parameters. The estimation results of the mixed logit models based on unbalanced panel data show that a number of factors are related to crash likelihood on I-25. Specifically, the weekend indicator, November indicator, low speed limit, and long remaining service life of rutting indicator are found to increase crash likelihood, while the 5-a.m. indicator and the number of merging ramps per lane per mile are found to decrease it. The study underscores and confirms the unique and significant impacts on crashes imposed by real-time weather, road surface, and traffic conditions. With the unbalanced panel data structure, the rich information from real-time driving environmental big data can be well incorporated. Copyright © 2018 National Safety Council and Elsevier Ltd. All rights reserved.

  11. A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits

    PubMed Central

    Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling

    2013-01-01

    Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework, by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to misspecification of the time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define the longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that it has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762

  12. Model selection and parameter estimation in structural dynamics using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Ben Abdessalem, Anis; Dervilis, Nikolaos; Wagg, David; Worden, Keith

    2018-01-01

    This paper will introduce the use of the approximate Bayesian computation (ABC) algorithm for model selection and parameter estimation in structural dynamics. ABC is a likelihood-free method typically used when the likelihood function is either intractable or cannot be approached in a closed form. To circumvent the evaluation of the likelihood function, simulation from a forward model is at the core of the ABC algorithm. The algorithm offers the possibility to use different metrics and summary statistics representative of the data to carry out Bayesian inference. The efficacy of the algorithm in structural dynamics is demonstrated through three different illustrative examples of nonlinear system identification: cubic and cubic-quintic models, the Bouc-Wen model and the Duffing oscillator. The obtained results suggest that ABC is a promising alternative to deal with model selection and parameter estimation issues, specifically for systems with complex behaviours.

  13. New prior sampling methods for nested sampling - Development and testing

    NASA Astrophysics Data System (ADS)

    Stokes, Barrie; Tuyl, Frank; Hudson, Irene

    2017-06-01

    Nested Sampling is a powerful algorithm for fitting models to data in the Bayesian setting, introduced by Skilling [1]. The nested sampling algorithm proceeds by carrying out a series of compressive steps, involving successively nested iso-likelihood boundaries, starting with the full prior distribution of the problem parameters. The "central problem" of nested sampling is to draw at each step a sample from the prior distribution whose likelihood is greater than the current likelihood threshold, i.e., a sample falling inside the current likelihood-restricted region. For both flat and informative priors this ultimately requires uniform sampling restricted to the likelihood-restricted region. We present two new methods of carrying out this sampling step, and illustrate their use with the lighthouse problem [2], a bivariate likelihood used by Gregory [3] and a trivariate Gaussian mixture likelihood. All the algorithm development and testing reported here has been done with Mathematica® [4].
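
    The compression loop itself is short; the hard part is the constrained draw. This is a minimal sketch in which that "central problem" is handled by naive prior rejection, which is exactly the step the new sampling methods are designed to improve; `log_like` and `prior_sample` are placeholder callables.

    ```python
    import numpy as np

    def nested_sampling(log_like, prior_sample, n_live=100, n_iter=1000, seed=None):
        """Basic nested sampling: repeatedly replace the worst live point with
        a prior draw constrained to higher likelihood (naive rejection here);
        the final live-point contribution to Z is omitted for brevity."""
        rng = np.random.default_rng(seed)
        live = [prior_sample(rng) for _ in range(n_live)]
        log_l = np.array([log_like(p) for p in live])
        log_z = -np.inf
        log_shell = np.log(1.0 - np.exp(-1.0 / n_live))  # prior shell width
        for i in range(n_iter):
            worst = int(np.argmin(log_l))
            # accumulate evidence: Z += L_worst * (shrinking prior shell)
            log_z = np.logaddexp(log_z, log_shell - i / n_live + log_l[worst])
            while True:  # draw from the prior inside the likelihood contour
                cand = prior_sample(rng)
                if log_like(cand) > log_l[worst]:
                    break
            live[worst], log_l[worst] = cand, log_like(cand)
        return log_z
    ```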

  14. Formulating the Rasch Differential Item Functioning Model under the Marginal Maximum Likelihood Estimation Context and Its Comparison with Mantel-Haenszel Procedure in Short Test and Small Sample Conditions

    ERIC Educational Resources Information Center

    Paek, Insu; Wilson, Mark

    2011-01-01

    This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…

  15. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  16. Counseling Pretreatment and the Elaboration Likelihood Model of Attitude Change.

    ERIC Educational Resources Information Center

    Heesacker, Martin

    The importance of high levels of involvement in counseling has been related to theories of interpersonal influence. To examine differing effects of counselor credibility as a function of how personally involved counselors are, the Elaboration Likelihood Model (ELM) of attitude change was applied to counseling pretreatment. Students (N=256) were…

  17. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  18. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  19. Evaluation of Smoking Prevention Television Messages Based on the Elaboration Likelihood Model

    ERIC Educational Resources Information Center

    Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.

    2011-01-01

    Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from…

  20. A likelihood-based time series modeling approach for application in dendrochronology to examine the growth-climate relations and forest disturbance history

    EPA Science Inventory

    A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...

  1. Additive hazards regression and partial likelihood estimation for ecological monitoring data across space.

    PubMed

    Lin, Feng-Chang; Zhu, Jun

    2012-01-01

    We develop continuous-time models for the analysis of environmental or ecological monitoring data such that subjects are observed at multiple monitoring time points across space. Of particular interest are additive hazards regression models where the baseline hazard function can take on flexible forms. We consider time-varying covariates and take into account spatial dependence via autoregression in space and time. We develop statistical inference for the regression coefficients via partial likelihood. Asymptotic properties, including consistency and asymptotic normality, are established for parameter estimates under suitable regularity conditions. Feasible algorithms utilizing existing statistical software packages are developed for computation. We also consider a simpler additive hazards model with homogeneous baseline hazard and develop hypothesis testing for homogeneity. A simulation study demonstrates that the statistical inference using partial likelihood has sound finite-sample properties and offers a viable alternative to maximum likelihood estimation. For illustration, we analyze data from an ecological study that monitors bark beetle colonization of red pines in a plantation of Wisconsin.

  2. ModelTest Server: a web-based tool for the statistical selection of models of nucleotide substitution online

    PubMed Central

    Posada, David

    2006-01-01

    ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
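
    The information-criterion arm of such a selection can be sketched directly from likelihood scores. The model names and score values below are hypothetical, purely to illustrate the AIC and Akaike-weight computation.

    ```python
    import numpy as np

    def aic_rank(models):
        """Rank candidate substitution models by AIC from their log-likelihood
        scores; `models` maps name -> (log_likelihood, n_free_parameters)."""
        aic = {m: 2 * k - 2 * lnL for m, (lnL, k) in models.items()}
        delta = np.array(list(aic.values())) - min(aic.values())
        w = np.exp(-0.5 * delta)
        w /= w.sum()                        # Akaike weights
        return sorted(zip(aic, aic.values(), w), key=lambda t: t[1])

    # hypothetical likelihood scores for three nucleotide substitution models
    scores = {"JC69": (-5231.2, 0), "HKY85": (-5102.8, 4), "GTR+G": (-5095.1, 9)}
    for name, aic_value, weight in aic_rank(scores):
        print(f"{name}: AIC = {aic_value:.1f}, weight = {weight:.3f}")
    ```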

  3. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width.

    PubMed

    Learn, R; Feigenbaum, E

    2016-06-01

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  4. 34. Interior of elevator tower, Block 31, looking northeast. Otis ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    34. Interior of elevator tower, Block 31, looking northeast. Otis Tandem Gearless Elevator Hoist (1941); floor selector (far left), in foreground is the motor generator set which includes exciter (left), AC motor (center), DC generator (right); beyond is the passenger motor (right), hoist cable and drum (center), freight motor (left). - Columbia Basin Project, Grand Coulee Dam & Franklin D. Roosevelt Lake, Across Columbia River, Southeast of Town of Grand Coulee, Grand Coulee, Grant County, WA

  5. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Learn, R.; Feigenbaum, E.

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  6. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width

    DOE PAGES

    Learn, R.; Feigenbaum, E.

    2016-05-27

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  7. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  8. Simultaneous Enrichment of Polycyclic Aromatic Hydrocarbons and Cu(2+) in Water Using Tetraazacalix[2]arene[2]triazine as a Solid-Phase Extraction Selector.

    PubMed

    Zhao, Wenjie; Yang, Liu; He, Lijun; Zhang, Shusheng

    2016-08-10

    On the basis of the definite retention mechanism proven by the stationary phase for high-performance liquid chromatography, tetraazacalix[2]arene[2]triazine featuring multiple recognition sites was assessed as a solid-phase extraction (SPE) selector. Its silica support was used for the extraction of trace amounts of polycyclic aromatic hydrocarbons (PAHs) and Cu(2+) in aqueous samples, followed by high-performance liquid chromatography-fluorometric and graphite furnace atomic absorption spectrometric determination. On the basis of the π-π interaction with PAHs and the chelating interaction with Cu(2+), simultaneous extraction of PAHs and Cu(2+) and stepwise elution through tuning the eluent were successfully achieved. The SPE conditions affecting the extraction efficiency were optimized, including the type and concentration of organic modifier, sample solution pH, flow rate, and volume. As a result of the special adsorption and desorption mechanism, high extraction efficiency was achieved, with relative recoveries of 94.3-102.4% and relative standard deviations of less than 10.5%. The limits of detection were 0.4-3.1 ng L(-1) for PAHs and 15 ng L(-1) for Cu(2+). The method was applied to the analyses of PAHs and Cu(2+) in Xiliu Lake water samples collected in Zhengzhou, China.

  9. Efficacy of a sperm-selection chamber in terms of morphology, aneuploidy and DNA packaging.

    PubMed

    Seiringer, M; Maurer, M; Shebl, O; Dreier, K; Tews, G; Ziehr, S; Schappacher-Tilp, G; Petek, E; Ebner, T

    2013-07-01

    Since most current techniques analysing spermatozoa will inevitably exclude these gametes from further use, attempts have been made to enrich semen samples with physiological spermatozoa with good prognosis using special sperm-processing methods. A particular sperm-selection chamber, called the Zech-selector, was found to be effective in completely eliminating spermatozoa with DNA strand breaks. The aim of this study was to further analyse the subgroup of spermatozoa accumulated using the Zech-selector. In detail, the potential of the chamber to select for proper sperm morphology, DNA status and chromatin condensation was tested. Two samples, native and processed semen, of 53 patients were analysed for sperm morphology (×1000, ×6300), DNA packaging (fragmentation, chromatin condensation) and chromosomal status (X, Y, 18). Migration time (the time needed for proper sperm accumulation) was significantly correlated to fast progressive motility (P=0.002). The present sperm-processing method was highly successful with respect to all parameters analysed (P<0.001). In particular, spermatozoa showing numeric (17.4% of patients without aneuploidy) or structural chromosomal abnormalities (90% of patients without strand-breaks) were separated most effectively. To summarize, further evidence is provided that separating spermatozoa without exposure to centrifugation stress results in a population of highly physiological spermatozoa. Copyright © 2013 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  10. Updated logistic regression equations for the calculation of post-fire debris-flow likelihood in the western United States

    USGS Publications Warehouse

    Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2016-06-30

    Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.
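
    Applying a fitted model of this kind reduces to evaluating a logistic function of basin and storm predictors. This is a minimal sketch; the coefficients and predictor values below are placeholders for illustration, not the published regression equations.

    ```python
    import numpy as np

    def debris_flow_probability(x, b0, b):
        """Logistic model: P(debris flow) = 1 / (1 + exp(-(b0 + b'x)))."""
        return 1.0 / (1.0 + np.exp(-(b0 + np.dot(b, x))))

    # hypothetical coefficients and predictors (e.g., proportion burned at
    # high severity, terrain ruggedness, soil erodibility, peak rain intensity)
    b0, b = -3.6, np.array([0.41, 0.67, 0.70, 0.39])
    x = np.array([0.5, 0.3, 0.25, 24.0])
    print(f"P(debris flow) = {debris_flow_probability(x, b0, b):.2f}")
    ```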

  11. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231

  12. Explaining the effect of event valence on unrealistic optimism.

    PubMed

    Gold, Ron S; Brown, Mark G

    2009-05-01

    People typically exhibit 'unrealistic optimism' (UO): they believe they have a lower chance of experiencing negative events and a higher chance of experiencing positive events than does the average person. UO has been found to be greater for negative than positive events. This 'valence effect' has been explained in terms of motivational processes. An alternative explanation is provided by the 'numerosity model', which views the valence effect simply as a by-product of a tendency for likelihood estimates pertaining to the average member of a group to increase with the size of the group. Predictions made by the numerosity model were tested in two studies. In each, UO for a single event was assessed. In Study 1 (n = 115 students), valence was manipulated by framing the event either negatively or positively, and participants estimated their own likelihood and that of the average student at their university. In Study 2 (n = 139 students), valence was again manipulated and participants again estimated their own likelihood; additionally, group size was manipulated by having participants estimate the likelihood of the average student in a small, medium-sized, or large group. In each study, the valence effect was found, but was due to an effect on estimates of own likelihood, not the average person's likelihood. In Study 2, valence did not interact with group size. The findings contradict the numerosity model, but are in accord with the motivational explanation. Implications for health education are discussed.

  13. Bayesian structural equation modeling in sport and exercise psychology.

    PubMed

    Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus

    2015-08-01

    Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.

  14. Performance of the likelihood ratio difference (G2 Diff) test for detecting unidimensionality in applications of the multidimensional Rasch model.

    PubMed

    Harrell-Williams, Leigh; Wolfe, Edward W

    2014-01-01

    Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
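
    The statistic under study is quick to compute once both models are fitted: twice the difference in maximized log-likelihoods, referred to a chi-square distribution with df equal to the difference in parameter counts. The fitted values in the usage line are hypothetical.

    ```python
    from scipy.stats import chi2

    def g2_diff_test(loglik_uni, k_uni, loglik_multi, k_multi):
        """Likelihood ratio (G2 difference) test of a unidimensional model
        against a multidimensional one; returns statistic, df, and p-value."""
        g2 = 2.0 * (loglik_multi - loglik_uni)
        df = k_multi - k_uni
        return g2, df, chi2.sf(g2, df)

    g2, df, p = g2_diff_test(-10450.3, 40, -10431.8, 42)  # hypothetical fits
    print(f"G2 = {g2:.1f}, df = {df}, p = {p:.4f}")
    ```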

  15. The Elaboration Likelihood Model and Proxemic Violations as Peripheral Cues to Information Processing.

    ERIC Educational Resources Information Center

    Eaves, Michael

    This paper provides a literature review of the elaboration likelihood model (ELM) as applied in persuasion. Specifically, the paper addresses distraction with regard to effects on persuasion. In addition, the application of proxemic violations as peripheral cues in message processing is discussed. Finally, the paper proposes to shed new light on…

  16. Influencing Attitudes Regarding Special Class Placement Using a Psychoeducational Report: An Investigation of the Elaboration Likelihood Model.

    ERIC Educational Resources Information Center

    Andrews, Lester W.; Gutkin, Terry B.

    1994-01-01

    Investigates variables drawn from the Elaboration Likelihood Model (ELM) that might be manipulated to enhance the persuasiveness of a psychoeducational report. Results showed teachers in training were more persuaded by reports with high message quality. Findings are discussed in terms of the ELM and professional school psychology practice. (RJM)

  17. Examining Sex Differences in Altering Attitudes About Rape: A Test of the Elaboration Likelihood Model.

    ERIC Educational Resources Information Center

    Heppner, Mary J.; And Others

    1995-01-01

    Intervention sought to improve first-year college students' attitudes about rape. Used the Elaboration Likelihood Model to examine men's and women's attitude change process. Found numerous sex differences in ways men and women experienced and changed during and after intervention. Women's attitude showed more lasting change while men's was more…

  18. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
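
    A minimal sketch of the (non-adaptive) filtering recursion underlying the dynamic linear model, assuming a scalar state and known, illustrative system constants; the report's recursive likelihood methods would estimate such quantities adaptively rather than fix them:

```python
# Scalar Kalman filter for x_t = a*x_{t-1} + w_t, y_t = x_t + v_t.
# All system constants below are illustrative assumptions.
import numpy as np

a, q, r = 0.95, 0.1, 0.5          # state transition, process var, obs var
rng = np.random.default_rng(0)

x_true, ys = 0.0, []
for _ in range(100):              # simulate the system
    x_true = a * x_true + rng.normal(0, q**0.5)
    ys.append(x_true + rng.normal(0, r**0.5))

x_hat, p = 0.0, 1.0               # initial state estimate and variance
filtered = []
for y in ys:
    x_pred, p_pred = a * x_hat, a * a * p + q   # predict step
    k = p_pred / (p_pred + r)                   # Kalman gain
    x_hat = x_pred + k * (y - x_pred)           # update with the innovation
    p = (1 - k) * p_pred
    filtered.append(x_hat)
print(filtered[-5:])
```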

  19. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  20. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  1. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution.

    PubMed

    Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn

    2013-03-06

    Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.

  2. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case with four alternative models postulated on the basis of different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
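
    The following sketch illustrates the thermodynamic (power-posterior) idea on a deliberately simple conjugate normal-normal toy model, where the heated posterior can be sampled exactly and the marginal likelihood has a closed form for checking; all constants are made up, and nothing here reproduces the groundwater application:

```python
# Thermodynamic integration: log m(y) = integral over beta of E_beta[log L],
# estimated by trapezoid rule over a grid of heating coefficients.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
sigma, mu0, tau0 = 1.0, 0.0, 2.0            # likelihood sd, prior mean/sd
y = rng.normal(0.7, sigma, size=20)
n = len(y)

def log_lik(theta):                          # vectorized over theta draws
    return -0.5 * n * np.log(2 * np.pi * sigma**2) \
           - 0.5 * np.sum((y[:, None] - theta[None, :])**2, axis=0) / sigma**2

betas = np.linspace(0.0, 1.0, 21)            # heating coefficients
means = []
for b in betas:
    # Power posterior p(theta | y, b) ~ L(theta)^b * prior is normal here.
    prec = b * n / sigma**2 + 1.0 / tau0**2
    mean = (b * y.sum() / sigma**2 + mu0 / tau0**2) / prec
    theta = rng.normal(mean, prec**-0.5, size=5000)
    means.append(log_lik(theta).mean())      # Monte Carlo E_b[log L]

log_ml_ti = np.trapz(means, betas)

# Exact check: marginally, y ~ N(mu0 * 1, sigma^2 I + tau0^2 J).
cov = sigma**2 * np.eye(n) + tau0**2 * np.ones((n, n))
log_ml_exact = multivariate_normal.logpdf(y, mean=np.full(n, mu0), cov=cov)
print(log_ml_ti, log_ml_exact)
```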

  3. Exclusion probabilities and likelihood ratios with applications to mixtures.

    PubMed

    Slooten, Klaas-Jan; Egeland, Thore

    2016-01-01

    The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.

  4. GLOBALLY ADAPTIVE QUANTILE REGRESSION WITH ULTRA-HIGH DIMENSIONAL DATA

    PubMed Central

    Zheng, Qi; Peng, Limin; He, Xuming

    2015-01-01

    Quantile regression has become a valuable tool to analyze heterogeneous covariate-response associations that are often encountered in practice. The development of quantile regression methodology for high dimensional covariates primarily focuses on examination of model sparsity at a single or multiple quantile levels, which are typically prespecified ad hoc by the users. The resulting models may be sensitive to the specific choices of the quantile levels, leading to difficulties in interpretation and erosion of confidence in the results. In this article, we propose a new penalization framework for quantile regression in the high dimensional setting. We employ adaptive L1 penalties, and more importantly, propose a uniform selector of the tuning parameter for a set of quantile levels to avoid some of the potential problems with model selection at individual quantile levels. Our proposed approach achieves consistent shrinkage of regression quantile estimates across a continuous range of quantile levels, enhancing the flexibility and robustness of the existing penalized quantile regression methods. Our theoretical results include the oracle rate of uniform convergence and weak convergence of the parameter estimators. We also use numerical studies to confirm our theoretical findings and illustrate the practical utility of our proposal. PMID:26604424
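
    In notation of our choosing (not the authors'), the penalized criterion being described is, at quantile level τ, roughly the following; under the proposed uniform selector, a single λ is shared across a continuum of quantile levels:

```latex
% Adaptive-L1 penalized quantile regression at level tau (illustrative notation);
% rho_tau(u) = u * (tau - 1{u < 0}) is the quantile check loss.
\hat{\beta}(\tau) \;=\; \arg\min_{\beta}\;
  \sum_{i=1}^{n} \rho_{\tau}\!\bigl(y_i - x_i^{\top}\beta\bigr)
  \;+\; \lambda \sum_{j=1}^{p} w_j \,\lvert \beta_j \rvert ,
\qquad
\rho_{\tau}(u) \;=\; u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr)
```

    with adaptive weights w_j, for example inverses of initial coefficient estimates.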

  5. Modified Maximum Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model

    DTIC Science & Technology

    2015-08-01

    Park, Kyong H.; Lagan, Steven J.; Research and Technology Directorate, ECBC-TN-068, August 2015. Approved for public release. Cited references include McCullagh, P.; Nelder, J.A., Generalized Linear Models, 2nd ed.; Chapman and Hall: London, 1989; and Johnston, J., Econometric Methods, 3rd ed.; McGraw-Hill.

  6. Modeling Goal-Directed User Exploration in Human-Computer Interaction

    DTIC Science & Technology

    2011-02-01

    In addition to information scent, other factors, including the layout position and grouping of options in the user interface, also affect user exploration and the likelihood of success. This dissertation contributes a new model of goal-directed user exploration to better inform UI design.

  7. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution

    PubMed Central

    2013-01-01

    Background Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171

  8. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    PubMed

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis for mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it still retains good accuracy in point estimation. Published by Elsevier Ltd.
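
    For orientation, here is a sketch of the conventional estimator that MLE-BD is designed to speed up: maximum likelihood under the classical Luria-Delbrück distribution, whose probabilities are computed by the standard recursion. Mutant counts and optimization bounds below are toy values:

```python
# ML estimation of the expected number of mutations m from fluctuation-assay
# mutant counts, using the recursive Luria-Delbruck pmf.
import numpy as np
from scipy.optimize import minimize_scalar

def ld_pmf(m, kmax):
    """P(0..kmax mutants) via the classic recursion p_k = (m/k) * sum_i p_i/(k-i+1)."""
    p = np.zeros(kmax + 1)
    p[0] = np.exp(-m)
    for k in range(1, kmax + 1):
        p[k] = (m / k) * sum(p[i] / (k - i + 1) for i in range(k))
    return p

def neg_loglik(m, counts):
    p = ld_pmf(m, max(counts))
    return -np.sum(np.log(p[counts] + 1e-300))   # guard against log(0)

counts = np.array([0, 1, 0, 3, 0, 0, 7, 1, 0, 2, 0, 15, 0, 1, 4])  # toy data
fit = minimize_scalar(neg_loglik, bounds=(1e-3, 20), args=(counts,),
                      method="bounded")
print("MLE of m:", fit.x)
```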

  9. Improvement and comparison of likelihood functions for model calibration and parameter uncertainty analysis within a Markov chain Monte Carlo scheme

    NASA Astrophysics Data System (ADS)

    Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim

    2014-11-01

    In this study, likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the elimination of heteroscedasticity in model residuals; and (3) three likelihood functions-NSE, Generalized Error Distribution with BC (BC-GED), and Skew Generalized Error Distribution with BC (BC-SGED)-are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using observed river discharges and groundwater levels. The results show that the minimum-variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, which assigns low probability to large errors but nearly equal probability to small errors around zero. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach reproduces baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness in the error distribution may be unnecessary, because the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
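
    Step (1) can be seen compactly: with i.i.d. Gaussian residuals and the error variance profiled out, the log-likelihood is a decreasing function of the sum of squared errors, as is NSE. The notation below is ours, a sketch of the argument rather than the paper's derivation:

```latex
% e_t(theta) = Q_t^{obs} - Q_t^{sim}(theta); SSE = sum of e_t^2; SST = total
% sum of squares about the observed mean. Profiling out sigma^2 gives:
\max_{\sigma^2} \log L(\theta, \sigma^2) \;=\; \text{const} \;-\; \frac{n}{2}\,\log \mathrm{SSE}(\theta),
\qquad
\mathrm{NSE}(\theta) \;=\; 1 \;-\; \frac{\mathrm{SSE}(\theta)}{\mathrm{SST}}
% Both are decreasing in SSE(theta), so maximizing NSE and maximizing the
% Gaussian i.i.d. likelihood select the same parameter values.
```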

  10. A unifying framework for marginalized random intercept models of correlated binary outcomes

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.

    2013-01-01

    We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts. PMID:25342871

  11. Likelihood ratio decisions in memory: three implied regularities.

    PubMed

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.

  12. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…

  13. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    ERIC Educational Resources Information Center

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  14. A hybrid model for combining case-control and cohort studies in systematic reviews of diagnostic tests

    PubMed Central

    Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao

    2014-01-01

    Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179

  15. Equivalence of binormal likelihood-ratio and bi-chi-squared ROC curve models

    PubMed Central

    Hillis, Stephen L.

    2015-01-01

    A basic assumption for a meaningful diagnostic decision variable is that there is a monotone relationship between it and its likelihood ratio. This relationship, however, generally does not hold for a decision variable that results in a binormal ROC curve. As a result, receiver operating characteristic (ROC) curve estimation based on the assumption of a binormal ROC-curve model produces improper ROC curves that have “hooks,” are not concave over the entire domain, and cross the chance line. Although in practice this “improperness” is usually not noticeable, sometimes it is evident and problematic. To avoid this problem, Metz and Pan proposed basing ROC-curve estimation on the assumption of a binormal likelihood-ratio (binormal-LR) model, which states that the decision variable is an increasing transformation of the likelihood-ratio function of a random variable having normal conditional diseased and nondiseased distributions. However, their development is not easy to follow. I show that the binormal-LR model is equivalent to a bi-chi-squared model in the sense that the families of corresponding ROC curves are the same. The bi-chi-squared formulation provides an easier-to-follow development of the binormal-LR ROC curve and its properties in terms of well-known distributions. PMID:26608405

  16. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
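
    A hedged sketch of the model-based reading of MID: fitting a single-filter linear-nonlinear-Poisson model by maximum likelihood, with a fixed exponential nonlinearity and simulated stimuli standing in for real recordings (all choices below are illustrative, not the paper's):

```python
# One-filter LNP model: spike counts y ~ Poisson(exp(X @ w)); maximizing the
# Poisson log-likelihood recovers the informative stimulus dimension.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))              # stimuli (time bins x dimensions)
w_true = rng.normal(size=10)
y = rng.poisson(np.exp(X @ w_true * 0.3))    # simulated spike counts

def neg_loglik(w):
    eta = X @ w
    return np.sum(np.exp(eta)) - np.sum(y * eta)   # -(y*log(rate) - rate) + const

fit = minimize(neg_loglik, x0=np.zeros(10), method="L-BFGS-B")
w_hat = fit.x   # should align with the generating filter up to scale
print(np.corrcoef(w_hat, w_true)[0, 1])
```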

  17. Flexible crossbar-structured resistive memory arrays on plastic substrates via inorganic-based laser lift-off.

    PubMed

    Kim, Seungjun; Son, Jung Hwan; Lee, Seung Hyun; You, Byoung Kuk; Park, Kwi-Il; Lee, Hwan Keon; Byun, Myunghwan; Lee, Keon Jae

    2014-11-26

    Crossbar-structured memory comprising 32 × 32 arrays with one selector-one resistor (1S-1R) components are initially fabricated on a rigid substrate. They are transferred without mechanical damage via an inorganic-based laser lift-off (ILLO) process as a result of laser-material interaction. Addressing tests of the transferred memory arrays are successfully performed to verify mitigation of cross-talk on a plastic substrate. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. The SANS facility at the Pitesti 14MW TRIGA reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ionita, I.; Grabcev, B.; Todireanu, S.

    2006-12-15

    The SANS facility at the Pitesti 14 MW TRIGA reactor is presented, together with its main characteristics and a preliminary evaluation of the installation's performance. A monochromatic neutron beam with 1.5 Å ≤ λ ≤ 5 Å is produced by a mechanical velocity selector with helical slots. A fruitful partnership was established between INR Pitesti (Romania) and JINR Dubna (Russia); the first step in this cooperation is the manufacture, in Dubna, of a battery of gas-filled positional detectors for the SANS instrument.

  19. Acoustic detection of air shower cores

    NASA Technical Reports Server (NTRS)

    Gao, X.; Liu, Y.; Du, S.

    1985-01-01

    At an altitude of 1890 m, a pre-test with an air shower (AS) core selector and a small acoustic array, set up in an anechoic pool with a volume of 20 × 7 × 7 m³, was performed beginning in August 1984. In an analysis of the waveforms recorded during 186 h of effective working time, three acoustic signals were obtained that cannot be explained by any source other than AS cores, and related parameters were estimated.

  20. Signal Selector, Spectrum Receivers and Touch Panel Control for the SATCOM Signal Analyzer.

    DTIC Science & Technology

    1980-06-01

    that the entire system may be exercised in a test mode with the push of a single button. Normally test functions are divided into separate areas so that...source. The major components of the SS include power dividers and RF switches. The second of the two modules is the Spectrum Receiver (SR). Four... dividers and adjustable attenuators may be mounted on the opposite side of the panel. Component size is restricted on the panel back side due to a two

  1. Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.

    PubMed

    dos Reis, Mario; Yang, Ziheng

    2011-07-01

    The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Current models of sequence evolution are intractable in a Bayesian setting, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously wrong and should thus not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.
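
    The idea can be sketched on a toy Poisson likelihood: expand the log-likelihood to second order around its maximum on a transformed parameter scale (a log transform here, one of several the paper considers), and reuse the cheap quadratic surrogate in place of the exact function. All values below are illustrative:

```python
# Second-order Taylor surrogate for a log-likelihood on a transformed scale.
import numpy as np

y = np.array([3, 5, 4, 6, 2, 4, 5, 3])      # toy Poisson counts
n, s = len(y), y.sum()

def loglik_t(t):                             # log-likelihood in t = log(rate)
    rate = np.exp(t)
    return s * np.log(rate) - n * rate       # up to an additive constant

t_hat = np.log(s / n)                        # MLE on the transformed scale
h = 1e-4                                     # finite-difference curvature
curv = (loglik_t(t_hat + h) - 2 * loglik_t(t_hat) + loglik_t(t_hat - h)) / h**2

def loglik_approx(t):                        # quadratic (Taylor) surrogate;
    return loglik_t(t_hat) + 0.5 * curv * (t - t_hat)**2   # gradient is 0 at MLE

for t in t_hat + np.array([-0.5, -0.1, 0.1, 0.5]):
    print(t, loglik_t(t), loglik_approx(t))  # approximation degrades far from peak
```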

  2. ELASTIC NET FOR COX'S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM.

    PubMed

    Wu, Yichao

    2012-01-01

    For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox's proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox's proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems.

  3. Evaluation of risk from acts of terrorism :the adversary/defender model using belief and fuzzy sets.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.

    Risk from an act of terrorism is a combination of the likelihood of an attack, the likelihood of success of the attack, and the consequences of the attack. The considerable epistemic uncertainty in each of these three factors can be addressed using the belief/plausibility measure of uncertainty from the Dempster/Shafer theory of evidence. The adversary determines the likelihood of the attack. The success of the attack and the consequences of the attack are determined by the security system and mitigation measures put in place by the defender. This report documents a process for evaluating risk of terrorist acts using an adversary/defender model with belief/plausibility as the measure of uncertainty. Also, the adversary model is a linguistic model that applies belief/plausibility to fuzzy sets used in an approximate reasoning rule base.

  4. Pseudomonas aeruginosa dose response and bathing water infection.

    PubMed

    Roser, D J; van den Akker, B; Boase, S; Haas, C N; Ashbolt, N J; Rice, S A

    2014-03-01

    Pseudomonas aeruginosa is the opportunistic pathogen mostly implicated in folliculitis and acute otitis externa in pools and hot tubs. Nevertheless, infection risks remain poorly quantified. This paper reviews disease aetiologies and bacterial skin colonization science to advance dose-response theory development. Three model forms are identified for predicting disease likelihood from pathogen density. Two are based on Furumoto & Mickey's exponential 'single-hit' model and predict infection likelihood and severity (lesions/m2), respectively. 'Third-generation', mechanistic, dose-response algorithm development is additionally scoped. The proposed formulation integrates dispersion, epidermal interaction, and follicle invasion. The review also details uncertainties needing consideration which pertain to water quality, outbreaks, exposure time, infection sites, biofilms, cerumen, environmental factors (e.g. skin saturation, hydrodynamics), and whether P. aeruginosa is endogenous or exogenous. The review's findings are used to propose a conceptual infection model and identify research priorities including pool dose-response modelling, epidermis ecology and infection likelihood-based hygiene management.
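
    A minimal sketch of the exponential single-hit form referenced above, in which each organism in a dose independently initiates infection with some small probability r (the value of r below is invented for illustration):

```python
# Exponential single-hit dose-response: P(infection) = 1 - exp(-r * dose).
import numpy as np

def p_infection(dose, r):
    """Probability of at least one successful 'hit' at a given pathogen dose."""
    return 1.0 - np.exp(-r * np.asarray(dose, dtype=float))

doses = np.logspace(0, 6, 7)          # organisms per exposure (illustrative)
print(p_infection(doses, r=1e-4))
```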

  5. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
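
    A minimal PSO sketch, assuming textbook inertia and acceleration constants and a stand-in objective in place of a real CMB likelihood:

```python
# Particle swarm optimization: minimize a negative log-likelihood over a box.
import numpy as np

rng = np.random.default_rng(42)

def neg_log_like(theta):                      # stand-in for a real likelihood
    return np.sum((theta - np.array([0.3, 0.8]))**2, axis=-1)

n_particles, n_iter, dim = 30, 200, 2
lo, hi = np.zeros(dim), np.ones(dim)
x = rng.uniform(lo, hi, (n_particles, dim))   # positions
v = np.zeros_like(x)                          # velocities
pbest, pbest_f = x.copy(), neg_log_like(x)    # personal bests
gbest = pbest[np.argmin(pbest_f)]             # global best

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = neg_log_like(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best-fit parameters:", gbest)
```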

  6. Negotiating Multicollinearity with Spike-and-Slab Priors.

    PubMed

    Ročková, Veronika; George, Edward I

    2014-08-01

    In multiple regression under the normal linear model, the presence of multicollinearity is well known to lead to unreliable and unstable maximum likelihood estimates. This can be particularly troublesome for the problem of variable selection where it becomes more difficult to distinguish between subset models. Here we show how adding a spike-and-slab prior mitigates this difficulty by filtering the likelihood surface into a posterior distribution that allocates the relevant likelihood information to each of the subset model modes. For identification of promising high posterior models in this setting, we consider three EM algorithms, the fast closed form EMVS version of Rockova and George (2014) and two new versions designed for variants of the spike-and-slab formulation. For a multimodal posterior under multicollinearity, we compare the regions of convergence of these three algorithms. Deterministic annealing versions of the EMVS algorithm are seen to substantially mitigate this multimodality. A single simple running example is used for illustration throughout.

  7. Does the portrayal of tanning in Australian women's magazines relate to real women's tanning beliefs and behavior?

    PubMed

    Dixon, Helen G; Warne, Charles D; Scully, Maree L; Wakefield, Melanie A; Dobbinson, Suzanne J

    2011-04-01

    Content analysis data on the tans of 4,422 female Caucasian models sampled from spring and summer magazine issues were combined with readership data to generate indices of potential exposure to social modeling of tanning via popular women's magazines over a 15-year period (1987 to 2002). Associations between these indices and cross-sectional telephone survey data from the same period on 5,675 female teenagers' and adults' tanning attitudes, beliefs, and behavior were examined using logistic regression models. Among young women, greater exposure to tanning in young women's magazines was associated with increased likelihood of endorsing pro-tan attitudes and beliefs. Among women of all ages, greater exposure to tanned models via the most popular women's magazines was associated with increased likelihood of attempting to get a tan but lower likelihood of endorsing pro-tan attitudes. Popular women's magazines may promote and reflect real women's tanning beliefs and behavior.

  8. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    NASA Technical Reports Server (NTRS)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  9. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    DTIC Science & Technology

    1990-11-01

    (Q + aa')⁻¹ = Q⁻¹ − Q⁻¹aa'Q⁻¹ / (1 + a'Q⁻¹a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... 2. The First-Order Moving Average Model... 3. Some Approaches to the Iterative... the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and

  10. Parameter estimation in astronomy through application of the likelihood ratio. [satellite data analysis techniques

    NASA Technical Reports Server (NTRS)

    Cash, W.

    1979-01-01

    Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
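
    One common form of the resulting Poisson fit statistic is C = 2 Σ (m_i − d_i ln m_i), whose differences behave asymptotically like chi-squared variables; a hedged sketch with toy counts and a one-parameter (constant-rate) model:

```python
# Likelihood-ratio-based fitting for photon-counting data, in the spirit of
# Cash (1979); data and the trial model grid are toy values.
import numpy as np

def cash_stat(model_counts, data_counts):
    m = np.asarray(model_counts, dtype=float)
    d = np.asarray(data_counts, dtype=float)
    return 2.0 * np.sum(m - d * np.log(m))   # model-independent term dropped

d = np.array([3, 0, 2, 5, 1, 4])             # observed photon counts per bin
amps = np.linspace(0.5, 6.0, 200)            # trial constant-rate models
C = np.array([cash_stat(np.full_like(d, a, dtype=float), d) for a in amps])
best = amps[np.argmin(C)]
inside = amps[C <= C.min() + 1.0]            # Delta C <= 1: ~1-sigma, 1 dof
print(best, inside.min(), inside.max())
```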

  11. On the Existence and Uniqueness of JML Estimates for the Partial Credit Model

    ERIC Educational Resources Information Center

    Bertoli-Barsotti, Lucio

    2005-01-01

    A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…

  12. An Elaboration Likelihood Model Based Longitudinal Analysis of Attitude Change during the Process of IT Acceptance via Education Program

    ERIC Educational Resources Information Center

    Lee, Woong-Kyu

    2012-01-01

    The principal objective of this study was to gain insight into attitude changes occurring during IT acceptance from the perspective of elaboration likelihood model (ELM). In particular, the primary target of this study was the process of IT acceptance through an education program. Although the Internet and computers are now quite ubiquitous, and…

  13. Modeling forest bird species' likelihood of occurrence in Utah with Forest Inventory and Analysis and Landfire map products and ecologically based pseudo-absence points

    Treesearch

    Phoebe L. Zarnetske; Thomas C., Jr. Edwards; Gretchen G. Moisen

    2007-01-01

    Estimating species likelihood of occurrence across extensive landscapes is a powerful management tool. Unfortunately, available occurrence data for landscape-scale modeling is often lacking and usually only in the form of observed presences. Ecologically based pseudo-absence points were generated from within habitat envelopes to accompany presence-only data in habitat...

  14. A Computer Program for Solving a Set of Conditional Maximum Likelihood Equations Arising in the Rasch Model for Questionnaires.

    ERIC Educational Resources Information Center

    Andersen, Erling B.

    A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results is given in two sections of this paper. A working example is also…
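
    The computational core of such conditional likelihood equations is the set of elementary symmetric functions of the item parameters; below is a sketch of the standard recursion, with illustrative difficulties rather than anything from Andersen's program:

```python
# Elementary symmetric functions gamma_r of the item easiness parameters
# eps_i = exp(-b_i), as used in Rasch conditional maximum likelihood.
import numpy as np

def elementary_symmetric(eps):
    """gamma[r] = sum over all r-subsets of items of the product of their eps."""
    gamma = np.zeros(len(eps) + 1)
    gamma[0] = 1.0
    for e in eps:
        # RHS is evaluated before assignment, so each item enters at most once.
        gamma[1:] = gamma[1:] + e * gamma[:-1]
    return gamma

b = np.array([-1.0, -0.2, 0.4, 1.1])    # illustrative item difficulties
eps = np.exp(-b)
gamma = elementary_symmetric(eps)

# Conditional probability of a response pattern x given its raw score r:
x = np.array([1, 1, 0, 0])
r = x.sum()
p = np.prod(eps ** x) / gamma[r]
print(gamma, p)
```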

  15. Moral Identity Predicts Doping Likelihood via Moral Disengagement and Anticipated Guilt.

    PubMed

    Kavussanu, Maria; Ring, Christopher

    2017-08-01

    In this study, we integrated elements of social cognitive theory of moral thought and action and the social cognitive model of moral identity to better understand doping likelihood in athletes. Participants (N = 398) recruited from a variety of team sports completed measures of moral identity, moral disengagement, anticipated guilt, and doping likelihood. Moral identity predicted doping likelihood indirectly via moral disengagement and anticipated guilt. Anticipated guilt about potential doping mediated the relationship between moral disengagement and doping likelihood. Our findings provide novel evidence to suggest that athletes, who feel that being a moral person is central to their self-concept, are less likely to use banned substances due to their lower tendency to morally disengage and the more intense feelings of guilt they expect to experience for using banned substances.

  16. A black box optimization approach to parameter estimation in a model for long/short term variations dynamics of commodity prices

    NASA Astrophysics Data System (ADS)

    De Santis, Alberto; Dellepiane, Umberto; Lucidi, Stefano

    2012-11-01

    In this paper we investigate the estimation problem for a model of commodity prices. The model is a stochastic state-space dynamical model whose unknowns are the state variables and the system parameters. Data are represented by commodity spot prices; time series of futures contracts are seldom freely available. Both the joint likelihood function of the system (state variables and parameters) and its marginal likelihood function (with the state variables eliminated) are addressed.

  17. A comparison of abundance estimates from extended batch-marking and Jolly–Seber-type experiments

    PubMed Central

    Cowen, Laura L E; Besbeas, Panagiotis; Morgan, Byron J T; Schwarz, Carl J

    2014-01-01

    Little attention has been paid to the use of multi-sample batch-marking studies, as it is generally assumed that an individual's capture history is necessary for fully efficient estimates. However, recently, Huggins et al. (2010) present a pseudo-likelihood for a multi-sample batch-marking study where they used estimating equations to solve for survival and capture probabilities and then derived abundance estimates using a Horvitz–Thompson-type estimator. We have developed and maximized the likelihood for batch-marking studies. We use data simulated from a Jolly–Seber-type study and convert this to what would have been obtained from an extended batch-marking study. We compare our abundance estimates obtained from the Crosbie–Manly–Arnason–Schwarz (CMAS) model with those of the extended batch-marking model to determine the efficiency of collecting and analyzing batch-marking data. We found that estimates of abundance were similar for all three estimators: CMAS, Huggins, and our likelihood. Gains are made when using unique identifiers and employing the CMAS model in terms of precision; however, the likelihood typically had lower mean square error than the pseudo-likelihood method of Huggins et al. (2010). When faced with designing a batch-marking study, researchers can be confident in obtaining unbiased abundance estimators. Furthermore, they can design studies in order to reduce mean square error by manipulating capture probabilities and sample size. PMID:24558576

  18. Using latent class analysis to model prescription medications in the measurement of falling among a community elderly population

    PubMed Central

    2013-01-01

    Background Falls among the elderly are a major public health concern. Therefore, the possibility of a modeling technique which could better estimate fall probability is both timely and needed. Using biomedical, pharmacological and demographic variables as predictors, latent class analysis (LCA) is demonstrated as a tool for the prediction of falls among community-dwelling elderly. Methods Using a retrospective dataset, a two-step LCA modeling approach was employed. First, we looked for the optimal number of latent classes for the seven medical indicators, along with the patients' prescription medications and three covariates (age, gender, and number of medications). Second, the appropriate latent class structure, with the covariates, was modeled on the distal outcome (fall/no fall). The default estimator was maximum likelihood with robust standard errors. The Pearson chi-square, likelihood ratio chi-square, BIC, Lo-Mendell-Rubin adjusted likelihood ratio test and the bootstrap likelihood ratio test were used for model comparisons. Results A review of the model fit indices with covariates shows that a six-class solution was preferred. The predictive probability for latent classes ranged from 84% to 97%. Entropy, a measure of classification accuracy, was good at 90%. Specific prescription medications were found to strongly influence group membership. Conclusions The LCA method was effective at finding relevant subgroups within a heterogeneous at-risk population for falling. This study demonstrated that LCA offers researchers a valuable tool to model medical data. PMID:23705639

  19. On the occurrence of false positives in tests of migration under an isolation with migration model

    PubMed Central

    Hey, Jody; Chung, Yujin; Sethuraman, Arun

    2015-01-01

    The population genetic study of divergence is often done using a Bayesian genealogy sampler, like those implemented in IMa2 and related programs, and these analyses frequently include a likelihood-ratio test of the null hypothesis of no migration between populations. Cruickshank and Hahn (2014, Molecular Ecology, 23, 3133–3157) recently reported a high rate of false positive test results with IMa2 for data simulated with small numbers of loci under models with no migration and recent splitting times. We confirm these findings and discover that they are caused by a failure of the assumptions underlying likelihood ratio tests that arises when using marginal likelihoods for a subset of model parameters. We also show that for small data sets, with little divergence between samples from two populations, an excellent fit can often be found by a model with a low migration rate and recent splitting time and a model with a high migration rate and a deep splitting time. PMID:26456794

  20. Applying the elaboration likelihood model of persuasion to a videotape-based eating disorders primary prevention program for adolescent girls.

    PubMed

    Withers, Giselle F; Wertheim, Eleanor H

    2004-01-01

    This study applied principles from the Elaboration Likelihood Model of Persuasion to the prevention of disordered eating. Early adolescent girls watched either a preventive videotape only (n=114) or video plus post-video activity (verbal discussion, written exercises, or control discussion) (n=187); or had no intervention (n=104). Significantly more body image and knowledge improvements occurred at post video and follow-up in the intervention groups compared to no intervention. There were no outcome differences among intervention groups, or between girls with high or low elaboration likelihood. Further research is needed in integrating the videotape into a broader prevention package.

  1. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    PubMed

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
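
    For the flavor of the computation, here is a sketch of empirical likelihood in the simplest fully observed case, with a single estimating function g_i(μ) = x_i − μ for the mean; the paper's estimators combine several such estimating equations, which this toy does not attempt:

```python
# Empirical likelihood for a mean: solve for the Lagrange multiplier lambda in
# sum g_i / (1 + lambda * g_i) = 0; weights are w_i = 1 / (n * (1 + lambda*g_i)).
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean; ~ chi2(1) under H0."""
    g = x - mu
    if g.min() >= 0 or g.max() <= 0:
        return np.inf                      # mu outside the convex hull of data
    lo = -1.0 / g.max() + 1e-10            # bracket keeps all 1 + lambda*g_i > 0
    hi = -1.0 / g.min() - 1e-10
    lam = brentq(lambda l: np.sum(g / (1 + l * g)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * g))

x = np.random.default_rng(3).normal(1.0, 2.0, 80)
print(el_log_ratio(x, mu=1.0), el_log_ratio(x, mu=0.0))
```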

  2. A new model to predict weak-lensing peak counts. II. Parameter constraint strategies

    NASA Astrophysics Data System (ADS)

    Lin, Chieh-An; Kilbinger, Martin

    2015-11-01

    Context. Peak counts have been shown to be an excellent tool for extracting the non-Gaussian part of the weak lensing signal. Recently, we developed a fast stochastic forward model to predict weak-lensing peak counts. Our model is able to reconstruct the underlying distribution of observables for analysis. Aims: In this work, we explore and compare various strategies for constraining a parameter using our model, focusing on the matter density Ωm and the density fluctuation amplitude σ8. Methods: First, we examine the impact from the cosmological dependency of covariances (CDC). Second, we perform the analysis with the copula likelihood, a technique that makes a weaker assumption than does the Gaussian likelihood. Third, direct, non-analytic parameter estimations are applied using the full information of the distribution. Fourth, we obtain constraints with approximate Bayesian computation (ABC), an efficient, robust, and likelihood-free algorithm based on accept-reject sampling. Results: We find that neglecting the CDC effect enlarges parameter contours by 22% and that the covariance-varying copula likelihood is a very good approximation to the true likelihood. The direct techniques work well in spite of noisier contours. Concerning ABC, the iterative process converges quickly to a posterior distribution that is in excellent agreement with results from our other analyses. The time cost for ABC is reduced by two orders of magnitude. Conclusions: The stochastic nature of our weak-lensing peak count model allows us to use various techniques that approach the true underlying probability distribution of observables, without making simplifying assumptions. Our work can be generalized to other observables where forward simulations provide samples of the underlying distribution.
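
    A minimal accept-reject ABC sketch in the spirit described, with a stand-in forward model and summary statistic; nothing here reproduces the peak-count simulator:

```python
# Rejection ABC: draw from the prior, forward-simulate, keep draws whose
# summary statistic lands within a tolerance of the observed one.
import numpy as np

rng = np.random.default_rng(7)
obs = rng.normal(0.8, 0.1, size=100)          # 'observed' data
s_obs = obs.mean()                            # summary statistic

def simulate(theta, size=100):
    return rng.normal(theta, 0.1, size=size)  # illustrative forward model

accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 2.0)             # prior draw
    if abs(simulate(theta).mean() - s_obs) < 0.01:   # tolerance epsilon
        accepted.append(theta)

post = np.array(accepted)
print(len(post), post.mean(), post.std())     # approximate posterior summary
```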

  3. Clarifying the Hubble constant tension with a Bayesian hierarchical model of the local distance ladder

    NASA Astrophysics Data System (ADS)

    Feeney, Stephen M.; Mortlock, Daniel J.; Dalmasso, Niccolò

    2018-05-01

    Estimates of the Hubble constant, H0, from the local distance ladder and from the cosmic microwave background (CMB) are discrepant at the ˜3σ level, indicating a potential issue with the standard Λ cold dark matter (ΛCDM) cosmology. A probabilistic (i.e. Bayesian) interpretation of this tension requires a model comparison calculation, which in turn depends strongly on the tails of the H0 likelihoods. Evaluating the tails of the local H0 likelihood requires the use of non-Gaussian distributions to faithfully represent anchor likelihoods and outliers, and simultaneous fitting of the complete distance-ladder data set to ensure correct uncertainty propagation. We have hence developed a Bayesian hierarchical model of the full distance ladder that does not rely on Gaussian distributions and allows outliers to be modelled without arbitrary data cuts. Marginalizing over the full ˜3000-parameter joint posterior distribution, we find H0 = (72.72 ± 1.67) km s-1 Mpc-1 when applied to the outlier-cleaned Riess et al. data, and (73.15 ± 1.78) km s-1 Mpc-1 with supernova outliers reintroduced (the pre-cut Cepheid data set is not available). Using our precise evaluation of the tails of the H0 likelihood, we apply Bayesian model comparison to assess the evidence for deviation from ΛCDM given the distance-ladder and CMB data. The odds against ΛCDM are at worst ˜10:1 when considering the Planck 2015 XIII data, regardless of outlier treatment, considerably less dramatic than naïvely implied by the 2.8σ discrepancy. These odds become ˜60:1 when an approximation to the more-discrepant Planck Intermediate XLVI likelihood is included.

  4. Statistical methods for the beta-binomial model in teratology.

    PubMed Central

    Yamamoto, E; Yanagimoto, T

    1994-01-01

    The beta-binomial model is widely used for analyzing teratological data involving littermates. Recent developments in statistical analyses of teratological data are briefly reviewed with emphasis on the model. For statistical inference on the parameters of the beta-binomial distribution, separation of the likelihood leads to a likelihood-based inference that reduces the biases of estimators and improves the accuracy of empirical significance levels of tests. Separate inference on the parameters can be conducted in a unified way. PMID:8187716
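
    A sketch of the underlying beta-binomial log-likelihood for littermate data, with an illustrative parameterization and toy litters (not the separation-based inference the abstract proposes):

```python
# Beta-binomial ML: y_i affected pups of n_i in litter i;
# P(y | n, a, b) = C(n, y) * B(y + a, n - y + b) / B(a, b).
import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

def log_pmf(y, n, a, b):
    return (gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
            + betaln(y + a, n - y + b) - betaln(a, b))

def neg_loglik(params, y, n):
    a, b = np.exp(params)                  # keep a, b > 0
    return -np.sum(log_pmf(y, n, a, b))

y = np.array([0, 2, 1, 5, 0, 3, 1, 0])    # affected pups per litter (toy)
n = np.array([8, 10, 9, 11, 7, 10, 9, 8]) # litter sizes
fit = minimize(neg_loglik, x0=np.zeros(2), args=(y, n))
print("MLE (a, b):", np.exp(fit.x))
```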

  5. Gyre and gimble: a maximum-likelihood replacement for Patterson correlation refinement.

    PubMed

    McCoy, Airlie J; Oeffner, Robert D; Millán, Claudia; Sammito, Massimo; Usón, Isabel; Read, Randy J

    2018-04-01

    Descriptions are given of the maximum-likelihood gyre method implemented in Phaser for optimizing the orientation and relative position of rigid-body fragments of a model after the orientation of the model has been identified, but before the model has been positioned in the unit cell, and also the related gimble method for the refinement of rigid-body fragments of the model after positioning. Gyre refinement helps to lower the root-mean-square atomic displacements between model and target molecular-replacement solutions for the test case of antibody Fab(26-10) and improves structure solution with ARCIMBOLDO_SHREDDER.

  6. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  7. A single-index threshold Cox proportional hazard model for identifying a treatment-sensitive subset based on multiple biomarkers.

    PubMed

    He, Ye; Lin, Huazhen; Tu, Dongsheng

    2018-06-04

    In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.

  8. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
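
    The report's strategy of computing maximum-likelihood estimates by repeated Newton-Raphson iterations on the score equations can be illustrated generically. The sketch below applies the same iteration to a logistic-regression likelihood with simulated data; it stands in for, and does not reproduce, the NDMMF's own equations:

        import numpy as np

        rng = np.random.default_rng(1)
        X = np.column_stack([np.ones(500), rng.normal(size=500)])  # intercept + covariate
        beta_true = np.array([-0.5, 1.2])
        y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

        beta = np.zeros(2)
        for _ in range(25):                     # Newton-Raphson iterations
            p = 1 / (1 + np.exp(-X @ beta))
            score = X.T @ (y - p)               # gradient of the log-likelihood
            hessian = -(X * (p * (1 - p))[:, None]).T @ X
            step = np.linalg.solve(hessian, score)
            beta = beta - step
            if np.max(np.abs(step)) < 1e-10:
                break
        print(beta)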

  9. Cramer-Rao Bound, MUSIC, and Maximum Likelihood. Effects of Temporal Phase Difference

    DTIC Science & Technology

    1990-11-01

    Technical Report 1373 November 1990 Cramer-Rao Bound, MUSIC , And Maximum Likelihood Effects of Temporal Phase o Difference C. V. TranI OTIC Approved... MUSIC , and Maximum Likelihood (ML) asymptotic variances corresponding to the two-source direction-of-arrival estimation where sources were modeled as...1pI = 1.00, SNR = 20 dB ..................................... 27 2. MUSIC for two equipowered signals impinging on a 5-element ULA (a) IpI = 0.50, SNR

  10. Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach

    NASA Astrophysics Data System (ADS)

    Billman, Caleb; Gonthier, P. L.; Harding, A. K.

    2012-01-01

    We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
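
    The Poisson likelihood for small numbers of detected pulsars that this comparison relies on is simple to write down; a minimal sketch with invented binned counts (the binning and values are ours, not the survey's):

        import numpy as np
        from scipy.special import gammaln

        def poisson_loglike(observed, predicted):
            # log-likelihood of binned detected-pulsar counts given model
            # predictions; appropriate for small counts where Gaussian
            # statistics break down
            predicted = np.clip(predicted, 1e-12, None)  # guard empty bins
            return np.sum(observed * np.log(predicted) - predicted
                          - gammaln(observed + 1))

        obs = np.array([3, 7, 2, 0, 1])          # illustrative counts per bin
        pred = np.array([2.5, 6.0, 3.1, 0.4, 0.9])
        print(poisson_loglike(obs, pred))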

  11. Bayesian multimodel inference of soil microbial respiration models: Theory, application and future prospective

    NASA Astrophysics Data System (ADS)

    Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.

    2015-12-01

    Models in biogeoscience involve uncertainties in observation data, model inputs, model structure, model processes and modeling scenarios. To accommodate different sources of uncertainty, multimodel analyses such as model combination, model selection, model elimination or model discrimination are becoming more popular. To illustrate the theoretical and practical challenges of multimodel analysis, we use an example from microbial soil respiration modeling. Global soil respiration releases more than ten times more carbon dioxide to the atmosphere than all anthropogenic emissions. Thus, improving our understanding of microbial soil respiration is essential for improving climate change models. This study focuses on a poorly understood phenomenon: soil microbial respiration pulses in response to episodic rainfall pulses (the "Birch effect"). We hypothesize that the "Birch effect" is generated by three mechanisms. To test our hypothesis, we developed and assessed five evolving microbial-enzyme models against field measurements from a semiarid savannah that is characterized by pulsed precipitation. These five models evolve step-wise, such that the first model includes none of the three mechanisms while the fifth includes all three. The basic component of Bayesian multimodel analysis is the estimation of the marginal likelihood, which ranks the candidate models by their overall likelihood with respect to the observation data. The first part of the study focuses on using this Bayesian scheme to discriminate between the five candidate models. The second part discusses some theoretical and practical challenges, mainly the effects of the choice of likelihood function and of the marginal likelihood estimation method on both model ranking and Bayesian model averaging. The study shows that making valid inferences from scientific data is not a trivial task, since we are uncertain not only about the candidate scientific models but also about the statistical methods used to discriminate between them.
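
    As a minimal illustration of ranking candidate models by marginal likelihood (a one-parameter toy model with simulated data, not the microbial-enzyme models themselves):

        import numpy as np
        from scipy.stats import norm
        from scipy.integrate import trapezoid

        rng = np.random.default_rng(2)
        data = rng.normal(1.0, 1.0, size=50)    # generated with a nonzero mean

        theta = np.linspace(-5, 5, 2001)
        prior = norm.pdf(theta, 0.0, 2.0)       # prior on the mean under M1

        log_like = norm.logpdf(data[:, None], theta, 1.0).sum(axis=0)

        # marginal likelihood = integral of likelihood x prior over parameters
        Z_m1 = trapezoid(np.exp(log_like) * prior, theta)   # M1: free mean
        Z_m0 = np.exp(norm.logpdf(data, 0.0, 1.0).sum())    # M0: mean fixed at 0

        print("Bayes factor M1 over M0:", Z_m1 / Z_m0)      # should favor M1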

  12. Efficient simulation and likelihood methods for non-neutral multi-allele models.

    PubMed

    Joyce, Paul; Genz, Alan; Buzbas, Erkan Ozge

    2012-06-01

    Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequences, became more readily available, connecting population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate population-genetic parameters using full DNA sequences. Now, we are in the genomics era, where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency-based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as an auxiliary distribution in a rejection algorithm. However, the patterns of allele frequencies produced by neutral models are dissimilar to those produced by non-neutral models, making the rejection method inefficient; in some cases the methods in DNJ (2001) require 10^9 rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation a practical tool for studying the behavior of the likelihood and for performing inference on the strength of selection.
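
    The rejection idea criticized here for its inefficiency is easy to demonstrate. In this sketch the neutral auxiliary distribution is a Dirichlet and the selection weight is a toy tilt bounded by 1, both invented for illustration (a poorly matched weight drives the acceptance rate toward the 10^9-rejection regime the record mentions):

        import numpy as np

        rng = np.random.default_rng(3)

        def draw_neutral(k=3):
            return rng.dirichlet(np.ones(k))    # neutral auxiliary distribution

        def selection_weight(x, s=4.0):
            # toy non-neutral tilt; it never exceeds 1, so accepting with
            # probability w(x) is a valid (if loose) rejection scheme
            return np.exp(-s * np.sum(x ** 2))

        samples, tries = [], 0
        while len(samples) < 1000:
            tries += 1
            x = draw_neutral()
            if rng.uniform() < selection_weight(x):
                samples.append(x)

        print("acceptance rate: %.3f" % (len(samples) / tries))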

  13. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure (University of Primorska); Turk, Dušan, E-mail: dusan.turk@ijs.si

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement by simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement, as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach to the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps, and may use a smaller portion of data for the test set for the calculation of Rfree, or may leave the test set out completely.

  14. The Tn7 transposition regulator TnsC interacts with the transposase subunit TnsB and target selector TnsD

    PubMed Central

    Choi, Ki Young; Spencer, Jeanelle M.; Craig, Nancy L.

    2014-01-01

    The excision of transposon Tn7 from a donor site and its insertion into its preferred target site, attachment site attTn7, is mediated by four Tn7-encoded transposition proteins: TnsA, TnsB, TnsC, and TnsD. Transposition requires the assembly of a nucleoprotein complex containing all four Tns proteins and the DNA substrates, the donor site containing Tn7, and the preferred target site attTn7. TnsA and TnsB together form the heteromeric Tn7 transposase, and TnsD is a target-selecting protein that binds specifically to attTn7. TnsC is the key regulator of transposition, interacting with both the TnsAB transposase and TnsD-attTn7. We show here that TnsC interacts directly with TnsB, and identify the specific region of TnsC involved in the TnsB–TnsC interaction during transposition. We also show that a TnsC mutant defective in interaction with TnsB is defective for Tn7 transposition both in vitro and in vivo. Tn7 displays cis-acting target immunity, which blocks Tn7 insertion into a target DNA that already contains Tn7. We provide evidence that the direct TnsB–TnsC interaction that we have identified also mediates cis-acting Tn7 target immunity. We also show that TnsC interacts directly with the target selector protein TnsD. PMID:24982178

  15. Enantioseparation of Racemic Flurbiprofen by Aqueous Two-Phase Extraction With Binary Chiral Selectors of L-dioctyl Tartrate and L-tryptophan.

    PubMed

    Chen, Zhi; Zhang, Wei; Wang, Liping; Fan, Huajun; Wan, Qiang; Wu, Xuehao; Tang, Xunyou; Tang, James Z

    2015-09-01

    A novel method for the chiral separation of flurbiprofen enantiomers was developed using aqueous two-phase extraction (ATPE) coupled with biphasic recognition chiral extraction (BRCE). An aqueous two-phase system (ATPS) composed of ethanol (35.0% w/w) and ammonium sulfate (18.0% w/w) was used as the extracting solvent. The chiral selectors considered for BRCE in the ATPS, screened from amino acids, β-cyclodextrin derivatives, and L-tartrate esters, were L-dioctyl tartrate and L-tryptophan. Factors such as the amounts of L-dioctyl tartrate and L-tryptophan, pH, flurbiprofen concentration, and operating temperature were investigated with respect to the chiral separation of the flurbiprofen enantiomers. The optimum conditions were as follows: L-dioctyl tartrate, 80 mg; L-tryptophan, 40 mg; pH, 4.0; flurbiprofen concentration, 0.10 mmol/L; and temperature, 25 °C. The maximum separation factor α for the flurbiprofen enantiomers reached 2.34. The mechanism of the chiral separation is discussed: the results showed that L-dioctyl tartrate and L-tryptophan extract synergistically, enantioselectively recognizing the R- and S-enantiomers in the top and bottom phases, respectively. Compared with conventional liquid-liquid extraction, ATPE coupled with BRCE possessed higher separation efficiency and enantioselectivity without the use of any other organic solvents. The proposed method is a powerful potential alternative to conventional extraction for the separation of various enantiomers. © 2015 Wiley Periodicals, Inc.

  16. Preparative enantioseparation of loxoprofen precursor by recycling countercurrent chromatography with hydroxypropyl-β-cyclodextrin as a chiral selector.

    PubMed

    Zhang, Hui; Qiu, Xujun; Lv, Liqiong; Sun, Wenyu; Wang, Chaoyue; Yan, Jizhong; Tong, Shengqiang

    2018-04-17

    Recycling countercurrent chromatography was successfully applied to the resolution of 2-(4-bromomethylphenyl)propionic acid, a key intermediate in the synthesis of the nonsteroidal anti-inflammatory drug loxoprofen, using hydroxypropyl-β-cyclodextrin as the chiral selector. A two-phase solvent system composed of n-hexane/n-butyl acetate/0.1 mol/L citrate buffer at pH 2.4 (8:2:10, v/v/v) was selected. Factors influencing the enantioseparation were optimized, including the type of substituted β-cyclodextrin, the concentration of hydroxypropyl-β-cyclodextrin, the separation temperature, and the pH of the aqueous phase. Under the optimized conditions, 50 mg of 2-(4-bromomethylphenyl)propionic acid was enantioseparated by preparative recycling countercurrent chromatography. Technical details of the recycling elution mode are discussed. The purities of both the S and R enantiomers were over 99.0% as determined by high-performance liquid chromatography, and the enantiomeric excess of the S and R enantiomers reached 98.0%. The recovery of the enantiomers from the eluted fractions was 40.8-65.6%, yielding 16.4 mg of the S enantiomer and 10.2 mg of the R enantiomer. We also attempted to enantioseparate the anti-inflammatory drug loxoprofen itself by countercurrent chromatography and by high-performance liquid chromatography using a chiral mobile phase additive; however, no successful enantioseparation has been achieved so far. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Enantioseparation of cetirizine by chromatographic methods and discrimination by 1H-NMR.

    PubMed

    Taha, Elham A; Salama, Nahla N; Wang, Shudong

    2009-03-01

    Cetirizine is an antihistaminic drug used to prevent and treat allergic conditions. It is currently marketed as a racemate. The H1-antagonist activity of cetirizine is primarily due to (R)-levocetirizine, which has led to the introduction of (R)-levocetirizine into clinical practice; the chiral switch is expected to be more selective and safer. The present work presents three methods for the analysis and chiral discrimination of cetirizine. The first method was based on the enantioseparation of cetirizine on silica gel TLC plates using different chiral selectors as mobile phase additives. The mobile phase enabling successful resolution was acetonitrile-water (17:3, v/v) containing 1 mM of chiral selector, namely hydroxypropyl-beta-cyclodextrin, chondroitin sulphate or vancomycin hydrochloride. The second method was a validated high-performance liquid chromatography (HPLC) method, based on the stereoselective separation of cetirizine and the quantitative determination of its eutomer (R)-levocetirizine on a monolithic C18 column using hydroxypropyl-beta-cyclodextrin as a chiral mobile phase additive. The resolved peaks of (R)-levocetirizine and (S)-dextrocetirizine were confirmed by mass spectrometry. The third method used a 1H-NMR technique to characterize cetirizine and (R)-levocetirizine. These methods are selective and accurate, and can easily be applied for the chiral discrimination and determination of cetirizine in drug substance and drug product in a quality control laboratory. Moreover, the chiral purity of (R)-levocetirizine can also be monitored by the chromatographic methods. Copyright 2009 John Wiley & Sons, Ltd.

  18. Simultaneous detection of genetically modified organisms by multiplex ligation-dependent genome amplification and capillary gel electrophoresis with laser-induced fluorescence.

    PubMed

    García-Cañas, Virginia; Mondello, Monica; Cifuentes, Alejandro

    2010-07-01

    In this work, an innovative method for the simultaneous analysis of multiple genetically modified organisms is described. The method combines multiplex ligation-dependent genome amplification (MLGA) with CGE and LIF detection using bare fused-silica capillaries. The MLGA process is based on oligonucleotide constructs, formed by a universal sequence (vector) and long specific oligonucleotides (selectors), that facilitate the circularization of specific DNA target regions. The circularized target sequences are then simultaneously amplified with the same pair of primers and analyzed by CGE-LIF using a bare fused-silica capillary and a run electrolyte containing 2-hydroxyethyl cellulose, which acts as both sieving matrix and dynamic capillary coating. CGE-LIF is shown to be very useful and informative for optimizing MLGA parameters such as annealing temperature, number of ligation cycles, and selector probe concentration. We demonstrate the specificity of the method in detecting the presence of transgenic DNA in certified reference and raw commercial samples. The method is sensitive and allows the simultaneous detection, in a single run, of percentages of transgenic maize as low as 1% of GA21, 1% of MON863, and 1% of MON810 in maize samples, with signal-to-noise ratios for the corresponding DNA peaks of 15, 12, and 26, respectively. These results demonstrate, to our knowledge for the first time, the great potential of MLGA techniques for the analysis of genetically modified organisms.

  19. Developing a virtual community for health sciences library book selection: Doody's Core Titles.

    PubMed

    Shedlock, James; Walton, Linda J

    2006-01-01

    The purpose of this article is to describe Doody's Core Titles in the Health Sciences as a new selection guide and a virtual community based on an effective use of online systems and to describe its potential impact on library collection development. The setting is the availability of health sciences selection guides. Participants include Doody Enterprise staff, Doody's Library Board of Advisors, content specialists, and library selectors. Resources include the online system used to create Doody's Core Titles along with references to complementary databases. Doody's Core Titles is described and discussed in relation to the literature of selection guides, especially in comparison to the Brandon/Hill selected lists that were published from 1965 to 2003. Doody's Core Titles seeks to fill the vacuum created when the Brandon/Hill lists ceased publication. Doody's Core Titles is a unique selection guide based on its method of creating an online community of experts to identify and score a core list of titles in 119 health sciences specialties and disciplines. The result is a new selection guide, now available annually, that will aid health sciences librarians in identifying core titles for local collections. Doody's Core Titles organizes the evaluation of core titles that are identified and recommended by content specialists associated with Doody's Book Review Service and library selectors. A scoring mechanism is used to create the selection of core titles, similar to the star rating system employed in other Doody Enterprise products and services.

  20. Conflict effects without conflict in anterior cingulate cortex: multiple response effects and context specific representations

    PubMed Central

    Brown, Joshua W.

    2009-01-01

    The error likelihood computational model of anterior cingulate cortex (ACC) (Brown & Braver, 2005) has successfully predicted error likelihood effects, risk prediction effects, and how individual differences in conflict and error likelihood effects vary with trait differences in risk aversion. The same computational model now makes a further prediction that apparent conflict effects in ACC may result in part from an increasing number of simultaneously active responses, regardless of whether or not the cued responses are mutually incompatible. In Experiment 1, the model prediction was tested with a modification of the Eriksen flanker task, in which some task conditions require two otherwise mutually incompatible responses to be generated simultaneously. In that case, the two response processes are no longer in conflict with each other. The results showed small but significant medial PFC effects in the incongruent vs. congruent contrast, despite the absence of response conflict, consistent with model predictions. This is the multiple response effect. Nonetheless, actual response conflict led to greater ACC activation, suggesting that conflict effects are specific to particular task contexts. In Experiment 2, results from a change signal task suggested that the context dependence of conflict signals does not depend on error likelihood effects. Instead, inputs to ACC may reflect complex and task specific representations of motor acts, such as bimanual responses. Overall, the results suggest the existence of a richer set of motor signals monitored by medial PFC and are consistent with distinct effects of multiple responses, conflict, and error likelihood in medial PFC. PMID:19375509

  1. Estimation of parameters of dose volume models and their confidence limits

    NASA Astrophysics Data System (ADS)

    van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.

    2003-07-01

    Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature, several different fit methods are used. In this work, frequently used methods and techniques for fitting NTCP models to dose-response data to establish dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. A primary dataset was generated from a critical-volume (CV) model with biologically realistic parameters; it serves as the reference for this study and is, by construction, describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. This yields the 'real' spread in fit results due to statistical variation in the data, which was compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using the covariance matrix, the jackknife method, and the likelihood landscape directly. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the width of the bundle of curves resulting from parameter sets within the one-standard-deviation region of the likelihood space was investigated. Thirdly, many parameter sets and their likelihoods were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that, for the type of dose-response data used here, only a full likelihood analysis produces reliable results; the often-used approximations, such as use of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
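
    The secondary-dataset procedure described above is essentially a parametric bootstrap. A compressed sketch follows, with a toy sigmoid standing in for the CV/NTCP model and a least-squares fit standing in for the full likelihood fit (dose points, parameters, and sample sizes are invented):

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(4)

        def ntcp(dose, d50, gamma):
            # toy sigmoid dose-response curve, not the actual CV model
            return 1.0 / (1.0 + np.exp(-4.0 * gamma * (dose / d50 - 1.0)))

        dose = np.linspace(20, 80, 13)
        n = 30                                           # subjects per dose point
        y = rng.binomial(n, ntcp(dose, 50.0, 1.5)) / n   # primary dataset

        popt, _ = curve_fit(ntcp, dose, y, p0=[45.0, 1.0])

        # parametric bootstrap: simulate secondary datasets from the fitted
        # model, refit each, and read confidence limits off the spread
        boot = []
        for _ in range(1000):
            y_b = rng.binomial(n, ntcp(dose, *popt)) / n
            boot.append(curve_fit(ntcp, dose, y_b, p0=popt)[0])
        boot = np.asarray(boot)

        lo, hi = np.percentile(boot[:, 0], [2.5, 97.5])
        print("d50 95%% CI: [%.1f, %.1f]" % (lo, hi))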

  2. Predicting Likelihood of Surgery Prior to First Visit in Patients with Back and Lower Extremity Symptoms: A simple mathematical model based on over 8000 patients.

    PubMed

    Boden, Lauren M; Boden, Stephanie A; Premkumar, Ajay; Gottschalk, Michael B; Boden, Scott D

    2018-02-09

    Retrospective analysis of prospectively collected data. The objective was to create a data-driven triage system stratifying patients by their likelihood of undergoing spinal surgery within one year of presentation. Low back pain (LBP) and radicular lower extremity (LE) symptoms are common musculoskeletal problems. There is currently no standard data-derived triage process, based on information that can be obtained prior to the initial physician-patient encounter, to direct patients to the optimal physician type. We analyzed patient-reported data from 8006 patients with a chief complaint of LBP and/or LE radicular symptoms who presented to surgeons at a large multidisciplinary spine center between September 1, 2005 and June 30, 2016. Univariate and multivariate analyses identified independent risk factors for undergoing spinal surgery within one year of the initial visit. A model incorporating these risk factors was created using a random sample of 80% of the total patients in our cohort and validated on the remaining 20%. The baseline one-year surgery rate within our cohort was 39% for all patients and 42% for patients with LE symptoms. Those identified as high likelihood by the center's existing triage process had a surgery rate of 45%. The new triage scoring system proposed in this study identified a high-likelihood group in which 58% underwent surgery, a 46% higher surgery rate than in non-triaged patients and a 29% improvement over our institution's existing triage system. The data-driven triage model and scoring system derived and validated in this study (Spine Surgery Likelihood model [SSL-11]) significantly improved on existing processes for predicting the likelihood of undergoing spinal surgery within one year of initial presentation. This triage system will allow centers to screen more selectively for surgical candidates and more effectively direct patients to surgeons or non-operative spine specialists. Level of Evidence: 4.

  3. Testing students' e-learning via Facebook through Bayesian structural equation modeling.

    PubMed

    Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.

  5. The role of self-regulatory efficacy, moral disengagement and guilt on doping likelihood: A social cognitive theory perspective.

    PubMed

    Ring, Christopher; Kavussanu, Maria

    2018-03-01

    Given the concern over doping in sport, researchers have begun to explore the role played by self-regulatory processes in the decision whether to use banned performance-enhancing substances. Grounded in Bandura's (1991) theory of moral thought and action, this study examined the roles of self-regulatory efficacy, moral disengagement and anticipated guilt in the likelihood of using a banned substance among college athletes. Doping self-regulatory efficacy was associated with doping likelihood both directly (b = -.16, P < .001) and indirectly (b = -.29, P < .001) through doping moral disengagement. Moral disengagement also contributed directly to higher doping likelihood and to lower anticipated guilt about doping, which was in turn associated with higher doping likelihood. Overall, the present findings support a model of doping based on Bandura's social cognitive theory of moral thought and action, in which self-regulatory efficacy influences the likelihood of using banned performance-enhancing substances both directly and indirectly via moral disengagement.

  6. Synthesizing Regression Results: A Factored Likelihood Method

    ERIC Educational Resources Information Center

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  7. Maximum likelihood estimation for Cox's regression model under nested case-control sampling.

    PubMed

    Scheike, Thomas H; Juul, Anders

    2004-04-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.
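
    Standard errors from a numerically differentiated likelihood, as used here, can be sketched generically. The example below differentiates a plain normal-sample log-likelihood rather than the Cox partial likelihood, and all data are simulated:

        import numpy as np

        def num_hessian(f, x, h=1e-4):
            # Hessian of f at x by central differences
            x = np.asarray(x, dtype=float)
            k = x.size
            H = np.zeros((k, k))
            for i in range(k):
                for j in range(k):
                    ei = np.eye(k)[i] * h
                    ej = np.eye(k)[j] * h
                    H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                               - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
            return H

        rng = np.random.default_rng(5)
        data = rng.normal(2.0, 3.0, size=200)

        def loglik(p):              # normal log-likelihood (constants dropped)
            mu, log_sd = p
            sd = np.exp(log_sd)
            return -0.5 * np.sum(((data - mu) / sd) ** 2) - data.size * np.log(sd)

        mle = np.array([data.mean(), np.log(data.std())])
        observed_info = -num_hessian(loglik, mle)
        se = np.sqrt(np.diag(np.linalg.inv(observed_info)))
        print("SE(mu) ~", se[0])    # close to sd / sqrt(n)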

  8. Robust Likelihoods for Inflationary Gravitational Waves from Maps of Cosmic Microwave Background Polarization

    NASA Technical Reports Server (NTRS)

    Switzer, Eric Ryan; Watts, Duncan J.

    2016-01-01

    The B-mode polarization of the cosmic microwave background provides a unique window into tensor perturbations from inflationary gravitational waves. Survey effects complicate the estimation and description of the power spectrum on the largest angular scales. The pixel-space likelihood yields parameter distributions without the power spectrum as an intermediate step, but it does not have the large suite of tests available to power spectral methods. Searches for primordial B-modes must rigorously reject and rule out contamination. Many forms of contamination vary or are uncorrelated across epochs, frequencies, surveys, or other data treatment subsets. The cross power and the power spectrum of the difference of subset maps provide approaches to reject and isolate excess variance. We develop an analogous joint pixel-space likelihood. Contamination not modeled in the likelihood produces parameter-dependent bias and complicates the interpretation of the difference map. We describe a null test that consistently weights the difference map. Excess variance should either be explicitly modeled in the covariance or be removed through reprocessing the data.

  9. Support vector machine in crash prediction at the level of traffic analysis zones: Assessing the spatial proximity effects.

    PubMed

    Dong, Ni; Huang, Helai; Zheng, Liang

    2015-09-01

    In zone-level crash prediction, accounting for spatial dependence has become an extensively studied topic. This study proposes a Support Vector Machine (SVM) model to address complex, large, and multi-dimensional spatial data in crash prediction. A Correlation-based Feature Selector (CFS) was applied to evaluate candidate factors possibly related to zonal crash frequency when handling high-dimensional spatial data. To demonstrate the proposed approaches and to compare them with a Bayesian spatial model with a conditional autoregressive prior (CAR), a dataset from Hillsborough County, Florida was employed. The results showed that SVM models accounting for spatial proximity outperform the non-spatial model in terms of model fit and predictive performance, which indicates the reasonableness of considering cross-zonal spatial correlations. The best predictive capability is associated with the model that considers proximity by centroid distance, uses the RBF kernel, and sets aside 10% of the whole dataset as testing data, which further exhibits the SVM models' capacity for addressing comparatively complex spatial data in regional crash prediction modeling. Moreover, SVM models exhibit better goodness-of-fit than CAR models when the whole dataset is used as the sample. A sensitivity analysis of the centroid-distance-based spatial SVM models was conducted to capture the impacts of the explanatory variables on the mean predicted probability of crash occurrence. The results conform to the coefficient estimates of the CAR models, which supports employing the SVM model as an alternative in regional safety modeling. Copyright © 2015 Elsevier Ltd. All rights reserved.
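
    The record's best configuration (RBF kernel, 10% of the data held out for testing) is straightforward to mirror with scikit-learn; the features and response below are simulated stand-ins for the zonal crash data:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVR

        rng = np.random.default_rng(7)
        # invented zone-level features (e.g. exposure, road density, distance)
        X = rng.normal(size=(500, 4))
        y = np.exp(0.5 * X[:, 0] - 0.3 * X[:, 1]) + rng.normal(0, 0.1, 500)

        # RBF kernel with 10% of the dataset held out, as in the record
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.10,
                                                  random_state=0)
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
        model.fit(X_tr, y_tr)
        print("held-out R^2:", model.score(X_te, y_te))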

  10. 3D segmentation of annulus fibrosus and nucleus pulposus from T2-weighted magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Castro-Mateos, Isaac; Pozo, Jose M.; Eltes, Peter E.; Del Rio, Luis; Lazary, Aron; Frangi, Alejandro F.

    2014-12-01

    Computational medicine aims at employing personalised computational models in diagnosis and treatment planning. The use of such models to help physicians find the best treatment for low back pain (LBP) is becoming popular. One of the challenges of creating such models is to derive, as a prior step, patient-specific anatomical and tissue models of the lumbar intervertebral discs (IVDs). This article presents a segmentation scheme that obtains accurate results irrespective of the degree of IVD degeneration, including pathological discs with protrusion or herniation. The segmentation algorithm, employing a novel feature selector, iteratively deforms an initial shape, which is projected first into a statistical shape model space and then into a B-spline space to improve accuracy. The method was tested on an MR dataset of 59 patients suffering from LBP. The images follow a standard T2-weighted protocol in coronal and sagittal acquisitions. These two image volumes were fused in order to overcome large inter-slice spacing. The agreement between expert-delineated structures, used here as the gold standard, and our automatic segmentation was evaluated using the Dice similarity index and surface-to-surface distances, yielding mean errors of 0.68 mm for the annulus segmentation and 1.88 mm for the nucleus, which are the best results relative to image resolution in the current literature.

  11. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.

    2013-12-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
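
    A minimal sketch of the adaptive smoothing rule described here, with each event's bandwidth set to its distance to the n-th nearest neighbor (after Helmstetter et al.); the epicenters and grid are toy data, and the Gaussian kernel and n-value are our illustrative choices:

        import numpy as np

        def adaptive_smoothed_rate(epicenters, grid, n_neighbor=5):
            # Gaussian-kernel seismicity rate with a per-earthquake bandwidth
            # equal to the distance to the n-th nearest neighboring epicenter
            rates = np.zeros(len(grid))
            for x in epicenters:
                d = np.sort(np.linalg.norm(epicenters - x, axis=1))
                h = max(d[n_neighbor], 1e-3)   # adaptive bandwidth (index 0 is self)
                r = np.linalg.norm(grid - x, axis=1)
                rates += np.exp(-0.5 * (r / h) ** 2) / (2 * np.pi * h ** 2)
            return rates

        rng = np.random.default_rng(8)
        quakes = rng.normal(size=(200, 2))     # toy epicenters (map coordinates)
        cells = np.stack(np.meshgrid(np.linspace(-3, 3, 61),
                                     np.linspace(-3, 3, 61)), -1).reshape(-1, 2)
        rate = adaptive_smoothed_rate(quakes, cells)
        print(rate.max())   # rates peak where events cluster, as described above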

  13. Rational selection of experimental readout and intervention sites for reducing uncertainties in computational model predictions.

    PubMed

    Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai

    2015-01-16

    Understanding the dynamics of biological processes can be supported substantially by computational models in the form of nonlinear ordinary differential equations (ODEs). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus of model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with minimal effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing, practically non-identifiable model for chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. With data and profile likelihood samples at hand, the uncertainty quantification proposed here, based on prediction samples from the profile likelihood, provides a simple way of determining the individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows regions to be identified where model predictions have to be considered with care. Such uncertain regions can be used for rational experimental design to render initially highly uncertain model predictions certain. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.
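
    A compact sketch of the profile likelihood computation underlying this kind of analysis: one parameter of a two-parameter normal model is profiled, and a 95% interval is read off the standard 1.92-unit threshold. The data are simulated, and this is a generic stand-in rather than the authors' ODE setting:

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(6)
        data = rng.normal(2.0, 3.0, size=50)

        def negloglik(mu, sd):
            return 0.5 * np.sum(((data - mu) / sd) ** 2) + data.size * np.log(sd)

        # profile out sd for each fixed mu
        mus = np.linspace(0.5, 3.5, 121)
        profile = np.array([
            minimize_scalar(lambda s, m=mu: negloglik(m, s),
                            bounds=(0.1, 10.0), method="bounded").fun
            for mu in mus
        ])

        # 95% profile-likelihood interval: mu within 1.92 units of the minimum
        inside = profile <= profile.min() + 1.92
        print("mu in [%.2f, %.2f]" % (mus[inside][0], mus[inside][-1]))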

  14. Identification of Lmo1 as part of a Hox-dependent regulatory network for hindbrain patterning.

    PubMed

    Matis, Christelle; Oury, Franck; Remacle, Sophie; Lampe, Xavier; Gofflot, Françoise; Picard, Jacques J; Rijli, Filippo M; Rezsohazy, René

    2007-09-01

    The embryonic functions of Hox proteins have been extensively investigated in several animal phyla. These transcription factors act as selectors of developmental programmes, to govern the morphogenesis of multiple structures and organs. However, despite the variety of morphogenetic processes Hox proteins are involved in, only a limited set of their target genes has been identified so far. To find additional targets, we used a strategy based upon the simultaneous overexpression of Hoxa2 and its cofactors Pbx1 and Prep in a cellular model. Among genes whose expression was upregulated, we identified LMO1, which codes for an intertwining LIM-only factor involved in protein-DNA oligomeric complexes. By analysing its expression in Hox knockout mice, we show that Lmo1 is differentially regulated by Hoxa2 and Hoxb2, in specific columns of hindbrain neuronal progenitors. These results suggest that Lmo1 takes part in a Hox paralogue 2-dependent network regulating anteroposterior and dorsoventral hindbrain patterning. © 2007 Wiley-Liss, Inc.

  15. Harnessing the Big Data Paradigm for ICME: Shifting from Materials Selection to Materials Enabled Design

    NASA Astrophysics Data System (ADS)

    Broderick, Scott R.; Santhanam, Ganesh Ram; Rajan, Krishna

    2016-08-01

    As the size of databases has significantly increased, whether through high throughput computation or through informatics-based modeling, the challenge of selecting the optimal material for specific design requirements has also arisen. Given the multiple, and often conflicting, design requirements, this selection process is not as trivial as sorting the database for a given property value. We suggest that the materials selection process should minimize selector bias, as well as take data uncertainty into account. For this reason, we discuss and apply decision theory for identifying chemical additions to Ni-base alloys. We demonstrate and compare results for both a computational array of chemistries and standard commercial superalloys. We demonstrate how we can use decision theory to select the best chemical additions for enhancing both property and processing, which would not otherwise be easily identifiable. This work is one of the first examples of introducing the mathematical framework of set theory and decision analysis into the domain of the materials selection process.

  16. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
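
    A toy version of the spectral idea: expand a one-parameter likelihood in Legendre polynomials and note that, under a uniform prior on [-1, 1], the model evidence follows semi-analytically from the zeroth coefficient. The data, degree, and interval are all illustrative choices, not the paper's setup:

        import numpy as np
        from numpy.polynomial import legendre
        from scipy.stats import norm
        from scipy.integrate import trapezoid

        rng = np.random.default_rng(9)
        data = rng.normal(0.3, 1.0, size=20)

        theta = np.linspace(-1.0, 1.0, 401)       # prior support mapped to [-1, 1]
        like = np.exp([norm.logpdf(data, t, 1.0).sum() for t in theta])

        coeffs = legendre.legfit(theta, like, deg=12)  # spectral (Legendre) expansion
        prior = 0.5                                    # uniform prior density

        # only P_0 integrates to a nonzero value (namely 2) over [-1, 1], so
        # the evidence follows from the zeroth expansion coefficient alone
        Z_spectral = 2.0 * prior * coeffs[0]
        Z_numeric = trapezoid(like * prior, theta)
        print(Z_spectral, Z_numeric)                   # should roughly agree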

  17. Evaluating the use of verbal probability expressions to communicate likelihood information in IPCC reports

    NASA Astrophysics Data System (ADS)

    Harris, Adam

    2014-05-01

    The Intergovernmental Panel on Climate Change (IPCC) prescribes that risk and uncertainty information pertaining to scientific reports, model predictions, etc. be communicated with a set of seven likelihood expressions. These range from "Extremely likely" (intended to communicate a likelihood of greater than 99%) through "As likely as not" (33-66%) to "Extremely unlikely" (less than 1%). Psychological research has investigated the degree to which these expressions are interpreted as intended by the IPCC, both within and across cultures. I will present a selection of this research and demonstrate some problems associated with communicating likelihoods in this way, as well as suggesting some potential improvements.
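
    For concreteness, the three expressions whose numeric ranges the abstract quotes can be encoded as a simple lookup; the remaining four IPCC terms are deliberately omitted here since their bounds are not given in this record:

        def ipcc_term(p: float) -> str:
            # map a probability to one of the likelihood expressions quoted
            # above; only the three with stated ranges are encoded
            if p > 0.99:
                return "Extremely likely"
            if p < 0.01:
                return "Extremely unlikely"
            if 0.33 <= p <= 0.66:
                return "As likely as not"
            return "(one of the four intermediate IPCC expressions)"

        print(ipcc_term(0.995))   # -> Extremely likely
        print(ipcc_term(0.5))     # -> As likely as not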

  18. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    PubMed

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.

  19. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…

  20. Accounting for informatively missing data in logistic regression by means of reassessment sampling.

    PubMed

    Lin, Ji; Lyles, Robert H

    2015-05-20

    We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
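
    The likelihood ratio test used here for model selection, and for testing whether missingness is at random, has a standard form; a minimal sketch with illustrative log-likelihood values (the numbers are invented):

        from scipy.stats import chi2

        def lr_test(loglik_full, loglik_reduced, df):
            # likelihood ratio test for nested models: 2*(l_full - l_reduced)
            # is referred to a chi-square with df equal to the number of extra
            # parameters (e.g. a non-ignorability parameter)
            stat = 2.0 * (loglik_full - loglik_reduced)
            return stat, chi2.sf(stat, df)

        stat, p = lr_test(-512.3, -515.9, df=1)   # illustrative values
        print("LR statistic %.2f, p = %.4f" % (stat, p))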

  1. Hybrid pairwise likelihood analysis of animal behavior experiments.

    PubMed

    Cattelan, Manuela; Varin, Cristiano

    2013-12-01

    The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons. © 2013, The International Biometric Society.
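
    As a point of reference, the naive alternative to the paper's approach, a Bradley-Terry-type paired-comparison fit under a working independence likelihood, might look like the sketch below; the hybrid pairwise likelihood method exists precisely to handle the between-fight dependence this sketch ignores. The win matrix is hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # wins[i, j] = number of contests animal i won against animal j.
    wins = np.array([[0, 6, 4],
                     [2, 0, 5],
                     [1, 3, 0]], dtype=float)   # hypothetical tournament

    def neg_loglik(theta):
        ability = np.concatenate([[0.0], theta])   # first ability fixed at 0
        diff = ability[:, None] - ability[None, :]
        p = 1 / (1 + np.exp(-diff))                # P(row beats column)
        return -(wins * np.log(p)).sum()

    fit = minimize(neg_loglik, np.zeros(wins.shape[0] - 1), method="BFGS")
    print(fit.x)   # estimated abilities relative to animal 0
    ```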

  2. ELASTIC NET FOR COX’S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM

    PubMed Central

    Wu, Yichao

    2012-01-01

    For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox’s proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox’s proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems. PMID:23226932
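
    To make the objective concrete, the sketch below evaluates the elastic net penalized negative log partial likelihood (Breslow risk sets, no ties) and hands it to a generic derivative-free optimizer; the paper's contribution is the exact, ODE-determined solution path, which this sketch does not attempt. Data, penalty values, and the simplistic censoring indicator are all hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)

    # Hypothetical survival data with a sparse true coefficient vector.
    n, d = 200, 5
    X = rng.standard_normal((n, d))
    true_beta = np.array([1.0, -1.0, 0.0, 0.0, 0.5])
    time = rng.exponential(np.exp(-X @ true_beta))
    event = rng.binomial(1, 0.8, n).astype(bool)   # crude ~20% censoring

    def objective(beta, lam=0.05, alpha=0.5):
        order = np.argsort(-time)                  # decreasing survival time
        eta, e = X[order] @ beta, event[order]
        log_risk = np.log(np.cumsum(np.exp(eta)))  # Breslow risk-set sums
        neg_pl = -(eta[e] - log_risk[e]).sum()
        enet = alpha * np.abs(beta).sum() + 0.5 * (1 - alpha) * beta @ beta
        return neg_pl + n * lam * enet

    fit = minimize(objective, np.zeros(d), method="Powell")
    print(fit.x.round(3))   # compare with true_beta
    ```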

  3. Negotiating Multicollinearity with Spike-and-Slab Priors

    PubMed Central

    Ročková, Veronika

    2014-01-01

    In multiple regression under the normal linear model, the presence of multicollinearity is well known to lead to unreliable and unstable maximum likelihood estimates. This can be particularly troublesome for the problem of variable selection where it becomes more difficult to distinguish between subset models. Here we show how adding a spike-and-slab prior mitigates this difficulty by filtering the likelihood surface into a posterior distribution that allocates the relevant likelihood information to each of the subset model modes. For identification of promising high posterior models in this setting, we consider three EM algorithms, the fast closed form EMVS version of Rockova and George (2014) and two new versions designed for variants of the spike-and-slab formulation. For a multimodal posterior under multicollinearity, we compare the regions of convergence of these three algorithms. Deterministic annealing versions of the EMVS algorithm are seen to substantially mitigate this multimodality. A single simple running example is used for illustration throughout. PMID:25419004
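
    The mechanics can be conveyed with a simplified EMVS-style iteration that holds the error variance and the prior inclusion probability fixed (the full algorithm need not); the nearly collinear design below is hypothetical.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)

    # Nearly collinear design: x1 and x2 are almost identical copies.
    n = 100
    x1 = rng.standard_normal(n)
    x2 = x1 + 0.05 * rng.standard_normal(n)
    x3 = rng.standard_normal(n)
    X = np.column_stack([x1, x2, x3])
    y = 2.0 * x1 + rng.standard_normal(n)

    v0, v1, prior_pi, sigma2 = 0.01, 10.0, 0.5, 1.0   # spike/slab settings
    beta = np.zeros(3)
    for _ in range(50):
        # E-step: posterior probability each coefficient came from the slab.
        slab = prior_pi * norm.pdf(beta, 0, np.sqrt(v1))
        spike = (1 - prior_pi) * norm.pdf(beta, 0, np.sqrt(v0))
        p = slab / (slab + spike)
        # M-step: generalized ridge with coefficient-specific shrinkage.
        D = np.diag(p / v1 + (1 - p) / v0)
        beta = np.linalg.solve(X.T @ X + sigma2 * D, X.T @ y)
    print(beta.round(3), p.round(3))
    ```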

  4. The Atacama Cosmology Telescope: Likelihood for Small-Scale CMB Data

    NASA Technical Reports Server (NTRS)

    Dunkley, J.; Calabrese, E.; Sievers, J.; Addison, G. E.; Battaglia, N.; Battistelli, E. S.; Bond, J. R.; Das, S.; Devlin, M. J.; Dunner, R.; et al.

    2013-01-01

    The Atacama Cosmology Telescope has measured the angular power spectra of microwave fluctuations to arcminute scales at frequencies of 148 and 218 GHz, from three seasons of data. At small scales the fluctuations in the primordial Cosmic Microwave Background (CMB) become increasingly obscured by extragalactic foregrounds and secondary CMB signals. We present results from a nine-parameter model describing these secondary effects, including the thermal and kinematic Sunyaev-Zel'dovich (tSZ and kSZ) power; the clustered and Poisson-like power from Cosmic Infrared Background (CIB) sources, and their frequency scaling; the tSZ-CIB correlation coefficient; the extragalactic radio source power; and thermal dust emission from Galactic cirrus in two different regions of the sky. In order to extract cosmological parameters, we describe a likelihood function for the ACT data, fitting this model to the multi-frequency spectra in the multipole range 500 < l < 10000. We extend the likelihood to include spectra from the South Pole Telescope at frequencies of 95, 150, and 220 GHz. Accounting for different radio source levels and Galactic cirrus emission, the same model provides an excellent fit to both datasets simultaneously, with χ²/dof = 675/697 for ACT, and 96/107 for SPT. We then use the multi-frequency likelihood to estimate the CMB power spectrum from ACT in bandpowers, marginalizing over the secondary parameters. This provides a simplified 'CMB-only' likelihood in the range 500 < l < 3500 for use in cosmological parameter estimation.
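
    At its core, a bandpower likelihood of this type evaluates a Gaussian quadratic form in the data-minus-model residual; a minimal sketch with a hypothetical data vector and covariance follows (the actual ACT likelihood adds the nine secondary parameters and the multi-frequency structure described above).

    ```python
    import numpy as np

    # Gaussian bandpower likelihood: -2 ln L = r^T C^{-1} r + const.
    def neg2_loglik(model_bp, data_bp, cov):
        r = data_bp - model_bp
        return r @ np.linalg.solve(cov, r)

    # Toy usage: three bandpowers with a diagonal covariance.
    data = np.array([210.0, 150.0, 95.0])     # hypothetical measurements
    model = np.array([205.0, 155.0, 90.0])    # hypothetical theory + secondaries
    cov = np.diag([25.0, 16.0, 9.0])
    print(neg2_loglik(model, data, cov))      # chi-squared-like statistic
    ```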

  5. Two stochastic models useful in petroleum exploration

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1972-01-01

    A model of the petroleum exploration process that tests empirically the hypothesis that at an early stage in the exploration of a basin, the process behaves like sampling without replacement is proposed, along with a model of the spatial distribution of petroleum reservoirs that conforms to observed facts. In developing the model of discovery, the following topics are discussed: probabilistic proportionality, the likelihood function, and maximum likelihood estimation. In addition, the spatial model is described, which is defined as a stochastic process generating values of a sequence of random variables in a way that simulates the frequency distribution of areal extent, the geographic location, and the shape of oil deposits.
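
    The sampling-without-replacement hypothesis is easy to simulate: each new discovery is drawn from the undiscovered pool with probability proportional to size, so large fields tend to be found early. The lognormal field sizes below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Successive sampling proportional to size, without replacement.
    sizes = rng.lognormal(mean=3.0, sigma=1.5, size=50)   # hypothetical fields
    pool = np.arange(sizes.size)
    order = []
    while pool.size:
        probs = sizes[pool] / sizes[pool].sum()
        pick = rng.choice(pool.size, p=probs)
        order.append(pool[pick])
        pool = np.delete(pool, pick)
    print(sizes[order][:10].round(1))   # the first ten "discoveries" skew large
    ```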

  6. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
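
    For the simplest member of the class, a Gaussian PAR(1), conditional (approximate) maximum likelihood reduces to a per-season regression, as the sketch below shows with hypothetical coefficients; the paper's algorithm maximizes an approximation to the exact likelihood and handles full PARMA structure.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Periodic AR(1): the autoregressive coefficient cycles with the season.
    phi = np.array([0.8, -0.3, 0.5, 0.1])   # hypothetical seasonal coefficients
    T, s = 400, 4
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi[t % s] * x[t - 1] + rng.standard_normal()

    # Conditional ML: per-season least squares of x_t on x_{t-1}.
    t = np.arange(1, T)
    est = [(x[t[t % s == k]] * x[t[t % s == k] - 1]).sum()
           / (x[t[t % s == k] - 1] ** 2).sum() for k in range(s)]
    print(np.round(est, 2))   # compare with phi
    ```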

  7. A maximum likelihood convolutional decoder model vs experimental data comparison

    NASA Technical Reports Server (NTRS)

    Chen, R. Y.

    1979-01-01

    This article describes the comparison of a maximum likelihood convolutional decoder (MCD) prediction model and the actual performance of the MCD at the Madrid Deep Space Station. The MCD prediction model is used to develop a subroutine that has been utilized by the Telemetry Analysis Program (TAP) to compute the MCD bit error rate for a given signal-to-noise ratio. The results indicate that the TAP predictions agree well with the experimental measurements. An optimal modulation index can also be found through TAP.
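
    A standard analytical ingredient for such predictions is the union bound on soft-decision Viterbi decoding error over an AWGN channel. The sketch below is not the TAP subroutine; it uses the commonly tabulated information-error weights of the NASA-standard constraint-length-7, rate-1/2 code, which should be treated as assumed values here.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Union bound: Pb <= sum_d c_d * Q(sqrt(2 * d * R * Eb/N0)), BPSK on AWGN.
    R, d_free = 0.5, 10
    c = [36, 0, 211, 0, 1404, 0, 11633]   # assumed weights for d = 10..16

    def ber_bound(ebn0_db):
        ebn0 = 10 ** (ebn0_db / 10)
        return sum(cd * norm.sf(np.sqrt(2 * (d_free + i) * R * ebn0))
                   for i, cd in enumerate(c))

    for snr_db in (2, 3, 4, 5):
        print(snr_db, ber_bound(snr_db))
    ```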

  8. Recompression Chamber Communication Systems Test and Evaluation.

    DTIC Science & Technology

    1984-04-01

    consisted of a selector switch and two banana jacks in order to use the same connecting cable on all of the systems tested. This modification would be... [The remainder of this excerpt is OCR residue from the rhyme-test word lists (e.g., "bark mark lark park") used to score speech intelligibility; no further abstract text is recoverable.]

  9. SWITCHING TRANSMITTER POSITIONING OF SYNCHROS

    DOEpatents

    Wolff, H.

    1962-03-13

    A transformer apparatus is designed for effecting the step positioning of synchro motors. The apparatus is provided with ganged switches and pre-selected contacts to permit the units and tens selection of the desired angular position for the synchro motor rotor with only the movement of two selector knobs required. With the selection thus made, the appropriate pre-selected signal is delivered to the synchro motor for positioning the rotor of the latter as selected. The transformer apparatus is divided into smaller arrangements to conform with computed trigonometric relations which will give the desired results. (AEC)

  10. Structural diversity in the dandelion (Taraxacum officinale) polyphenol oxidase family results in different responses to model substrates.

    PubMed

    Dirks-Hofmeister, Mareike E; Singh, Ratna; Leufken, Christine M; Inlow, Jennifer K; Moerschbacher, Bruno M

    2014-01-01

    Polyphenol oxidases (PPOs) are ubiquitous type-3 copper enzymes that catalyze the oxygen-dependent conversion of o-diphenols to the corresponding quinones. In most plants, PPOs are present as multiple isoenzymes that probably serve distinct functions, although the precise relationship between sequence, structure and function has not been addressed in detail. We therefore compared the characteristics and activities of recombinant dandelion PPOs to gain insight into the structure-function relationships within the plant PPO family. Phylogenetic analysis resolved the 11 isoenzymes of dandelion into two evolutionary groups. More detailed in silico and in vitro analyses of four representative PPOs covering both phylogenetic groups were performed. Molecular modeling and docking predicted differences in enzyme-substrate interactions, providing a structure-based explanation for grouping. One amino acid side chain positioned at the entrance to the active site (position HB2+1) potentially acts as a "selector" for substrate binding. In vitro activity measurements with the recombinant, purified enzymes also revealed group-specific differences in kinetic parameters when the selected PPOs were presented with five model substrates. The combination of our enzyme kinetic measurements and the in silico docking studies therefore indicate that the physiological functions of individual PPOs might be defined by their specific interactions with different natural substrates.

  11. Individual, team, and coach predictors of players' likelihood to aggress in youth soccer.

    PubMed

    Chow, Graig M; Murray, Kristen E; Feltz, Deborah L

    2009-08-01

    The purpose of this study was to examine personal and socioenvironmental factors of players' likelihood to aggress. Participants were youth soccer players (N = 258) and their coaches (N = 23) from high school and club teams. Players completed the Judgments About Moral Behavior in Youth Sports Questionnaire (JAMBYSQ; Stephens, Bredemeier, & Shields, 1997), which assessed athletes' stage of moral development, team norm for aggression, and self-described likelihood to aggress against an opponent. Coaches were administered the Coaching Efficacy Scale (CES; Feltz, Chase, Moritz, & Sullivan, 1999). Using multilevel modeling, results demonstrated that team norms for aggression at both the athlete and team levels were significant predictors of athletes' self-described likelihood-to-aggress scores. Further, coaches' game strategy efficacy emerged as a positive predictor of their players' self-described likelihood to aggress. The findings contribute to previous research examining the socioenvironmental predictors of athletic aggression in youth sport by demonstrating the importance of coaching efficacy beliefs.

  12. Posterior propriety for hierarchical models with log-likelihoods that have norm bounds

    DOE PAGES

    Michalak, Sarah E.; Morris, Carl N.

    2015-07-17

    Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).

  13. Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.

    PubMed

    Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R

    2018-05-26

    Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations, we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations, we show the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study the designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases spatial extents for statistical inferences. Our results suggest that ignoring the mechanism for how locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, in order to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information in addition to the data themselves must be available for analysts. Details for constructing the weights used in estimation and code for implementation are provided. This article is protected by copyright. All rights reserved.
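
    The weighting idea can be sketched for a bare-bones logistic occupancy-style model with perfect detection (the paper's site-occupancy models also include a detection submodel): each sampled site's log-likelihood contribution is weighted by the reciprocal of its known inclusion probability. The design and parameter values below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(6)

    # Unequal-probability design: sites with large covariate values are
    # oversampled, and each sampled site carries weight 1 / Pr(selected).
    n = 500
    x = rng.standard_normal(n)
    incl_prob = expit(0.5 + 1.0 * x)              # known from the design
    sampled = rng.binomial(1, incl_prob).astype(bool)
    z = rng.binomial(1, expit(-0.2 + 0.8 * x))    # true occupancy state

    X = np.column_stack([np.ones(n), x])[sampled]
    zo, w = z[sampled], 1.0 / incl_prob[sampled]

    def neg_pseudo_loglik(beta):
        p = expit(X @ beta)
        return -(w * (zo * np.log(p) + (1 - zo) * np.log(1 - p))).sum()

    fit = minimize(neg_pseudo_loglik, np.zeros(2), method="BFGS")
    print(fit.x)   # compare with (-0.2, 0.8)
    ```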

  14. On Muthen's Maximum Likelihood for Two-Level Covariance Structure Models

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro

    2005-01-01

    Data in social and behavioral sciences are often hierarchically organized. Special statistical procedures that take into account the dependence of such observations have been developed. Among procedures for 2-level covariance structure analysis, Muthen's maximum likelihood (MUML) has the advantage of easier computation and faster convergence. When…

  15. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  16. Likelihood-Ratio DIF Testing: Effects of Nonnormality

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    Differential item functioning (DIF) occurs when an item has different measurement properties for members of one group versus another. Likelihood-ratio (LR) tests for DIF based on item response theory (IRT) involve statistically comparing IRT models that vary with respect to their constraints. A simulation study evaluated how violation of the…

  17. A Study of Item Bias for Attitudinal Measurement Using Maximum Likelihood Factor Analysis.

    ERIC Educational Resources Information Center

    Mayberry, Paul W.

    A technique for detecting item bias that is responsive to attitudinal measurement considerations is a maximum likelihood factor analysis procedure comparing multivariate factor structures across various subpopulations, often referred to as SIFASP. The SIFASP technique allows for factorial model comparisons in the testing of various hypotheses…

  18. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  19. Robust Multipoint Water-Fat Separation Using Fat Likelihood Analysis

    PubMed Central

    Yu, Huanzhou; Reeder, Scott B.; Shimakawa, Ann; McKenzie, Charles A.; Brittain, Jean H.

    2016-01-01

    Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region growing based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. PMID:21842498
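
    The residual-comparison idea behind the fat likelihood map can be sketched as follows: under each candidate field-map value, the water and fat amplitudes are solved linearly, and the model whose fit residual is smaller indicates the more plausible species assignment. Echo times and the three-peak fat spectrum below are rough, assumed values, not the published protocol.

    ```python
    import numpy as np

    # Multiecho model: s(t_n) = (W + F * c(t_n)) * exp(2j*pi*psi*t_n), with a
    # multi-peak fat spectrum c(t); the spectrum breaks the water/fat symmetry.
    te = 1.2e-3 + np.arange(6) * 1.0e-3           # assumed echo times (s)
    freqs = np.array([-420.0, -318.0, 94.0])      # assumed fat peaks (Hz, 3T)
    amps = np.array([0.70, 0.20, 0.10])           # assumed relative amplitudes
    c = (amps * np.exp(2j * np.pi * freqs * te[:, None])).sum(axis=1)

    def fit_residual(signal, psi):
        s = signal * np.exp(-2j * np.pi * psi * te)   # remove the field map
        A = np.column_stack([np.ones_like(c), c])     # columns: water, fat
        coef, *_ = np.linalg.lstsq(A, s, rcond=None)
        return np.abs(s - A @ coef).sum(), coef

    # A pure-fat voxel: the true field map (0 Hz) fits far better than the
    # swapped candidate sitting near the dominant fat peak.
    voxel = 0.8 * c
    for psi in (0.0, -420.0):
        resid, (W, F) = fit_residual(voxel, psi)
        print(psi, round(resid, 4), round(abs(W), 3), round(abs(F), 3))
    ```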

  20. Validation of DNA-based identification software by computation of pedigree likelihood ratios.

    PubMed

    Slooten, K

    2011-08-01

    Disaster victim identification (DVI) can be aided by DNA-evidence, by comparing the DNA-profiles of unidentified individuals with those of surviving relatives. The DNA-evidence is used optimally when such a comparison is done by calculating the appropriate likelihood ratios. Though conceptually simple, the calculations can be quite involved, especially with large pedigrees, precise mutation models, etc. In this article we describe a series of test cases designed to check if software designed to calculate such likelihood ratios computes them correctly. The cases include both simple and more complicated pedigrees, including inbred ones. We show how to calculate the likelihood ratio numerically and algebraically, including a general mutation model and possibility of allelic dropout. In Appendix A we show how to derive such algebraic expressions mathematically. We have set up these cases to validate new software, called Bonaparte, which performs pedigree likelihood ratio calculations in a DVI context. Bonaparte has been developed by SNN Nijmegen (The Netherlands) for the Netherlands Forensic Institute (NFI). It is available free of charge for non-commercial purposes (see www.dnadvi.nl for details). Commercial licenses can also be obtained. The software uses Bayesian networks and the junction tree algorithm to perform its calculations. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
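
    The simplest such test case, a single-locus parent-child comparison with no mutations or dropout, has a short closed form that can be checked by hand; the sketch below (with hypothetical allele frequencies) computes the likelihood ratio of "alleged parent is the true parent" versus "unrelated".

    ```python
    # LR for H1: alleged parent is the true parent, vs H2: unrelated.
    # Genotypes are allele pairs; mutations and dropout are ignored here.
    def lr_parent_child(child, parent, freq):
        t = lambda allele: parent.count(allele) / 2.0   # transmission prob.
        a, b = child
        if a == b:
            return (t(a) * freq[a]) / (freq[a] ** 2)
        return (t(a) * freq[b] + t(b) * freq[a]) / (2 * freq[a] * freq[b])

    freq = {"A": 0.1, "B": 0.2}                     # hypothetical frequencies
    print(lr_parent_child(("A", "B"), ("A", "A"), freq))   # 1 / (2 * 0.1) = 5.0
    ```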
