Sample records for standard modeling techniques

  1. Simulations of motor unit number estimation techniques

    NASA Astrophysics Data System (ADS)

    Major, Lora A.; Jones, Kelvin E.

    2005-06-01

    Motor unit number estimation (MUNE) is an electrodiagnostic procedure used to evaluate the number of motor axons connected to a muscle. All MUNE techniques rely on assumptions that must be fulfilled to produce a valid estimate. As there is no gold standard to compare the MUNE techniques against, we have developed a model of the relevant neuromuscular physiology and have used this model to simulate various MUNE techniques. The model allows for a quantitative analysis of candidate MUNE techniques that will hopefully contribute to consensus regarding a standard procedure for performing MUNE.

  2. Standard surface-reflectance model and illuminant estimation

    NASA Technical Reports Server (NTRS)

    Tominaga, Shoji; Wandell, Brian A.

    1989-01-01

    A vector analysis technique was adopted to test the standard reflectance model. A computational model was developed to determine the components of the observed spectra and an estimate of the illuminant was obtained without using a reference white standard. The accuracy of the standard model is evaluated.

  3. Comparison of Preloaded Bougie versus Standard Bougie Technique for Endotracheal Intubation in a Cadaveric Model.

    PubMed

    Baker, Jay B; Maskell, Kevin F; Matlock, Aaron G; Walsh, Ryan M; Skinner, Carl G

    2015-07-01

    We compared intubation with a preloaded bougie (PB) against the standard bougie technique in terms of success rate, time to successful intubation, and provider preference on a cadaveric airway model. In this prospective, crossover study, healthcare providers intubated a cadaver using both the PB technique and the standard bougie technique. Participants were randomly assigned to start with either technique. Following standardized training and practice, procedural success and time for each technique were recorded for each participant. Subsequently, participants were asked to rate their perceived ease of intubation on a visual analogue scale of 1 to 10 (1=difficult and 10=easy) and to select which technique they preferred. Forty-seven participants with variable intubation experience were enrolled at an emergency medicine intern airway course. The success rate for both techniques was equal across all groups (95.7%). Times to completion ranged from 16.0 to 70.2 seconds for the standard bougie technique (mean 29.7 seconds) and from 15.7 to 110.9 seconds for the PB technique (mean 29.4 seconds), a non-significant difference of 0.3 seconds (95% confidence interval -2.8 to 3.4 seconds). Participants rated the relative ease of intubation as 7.3/10 for the standard technique and 7.6/10 for the preloaded technique (p=0.53, 95% confidence interval of the difference -0.97 to 0.50). Thirty of 47 participants preferred the PB technique (p=0.039). There was no significant difference in success or time to intubation between the standard bougie and PB techniques. The majority of participants in this study preferred the PB technique. Until a clear and clinically significant difference is found between these techniques, emergency airway operators should feel confident using the technique with which they are most comfortable.

  4. A comparison of linear and nonlinear statistical techniques in performance attribution.

    PubMed

    Chan, N H; Genovese, C R

    2001-01-01

    Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks, using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.

  5. NASA standard: Trend analysis techniques

    NASA Technical Reports Server (NTRS)

    1988-01-01

    This Standard presents descriptive and analytical techniques for NASA trend analysis applications. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. Use of this Standard is not mandatory; however, it should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend Analysis is neither a precise term nor a circumscribed methodology, but rather connotes, generally, quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this Standard. The document presents the basic ideas needed for qualitative and quantitative assessment of trends, together with relevant examples. A list of references provides additional sources of information.
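
    For illustration, the short Python sketch below fits linear, quadratic, and exponential models to a synthetic time series of the kind the Standard describes; the data and parameter values are hypothetical, not taken from the Standard.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical monthly anomaly counts (time-series data to be trended).
      t = np.arange(24, dtype=float)            # months since start
      y = 5.0 + 0.8 * t + np.random.normal(0.0, 2.0, t.size)

      # Linear and quadratic fits via ordinary least-squares polynomials.
      lin_coef = np.polyfit(t, y, deg=1)        # [slope, intercept]
      quad_coef = np.polyfit(t, y, deg=2)       # [a, b, c]

      # Exponential fit y = a * exp(b * t) via nonlinear least squares.
      def exp_model(t, a, b):
          return a * np.exp(b * t)

      exp_coef, _ = curve_fit(exp_model, t, y, p0=(5.0, 0.01), maxfev=10000)

      print("linear:", lin_coef)
      print("quadratic:", quad_coef)
      print("exponential:", exp_coef)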

  6. Visually guided tube thoracostomy insertion comparison to standard of care in a large animal model.

    PubMed

    Hernandez, Matthew C; Vogelsang, David; Anderson, Jeff R; Thiels, Cornelius A; Beilman, Gregory; Zielinski, Martin D; Aho, Johnathon M

    2017-04-01

    Tube thoracostomy (TT) is a lifesaving procedure for a variety of thoracic pathologies. The most commonly utilized method for placement involves open dissection and blind insertion. Image-guided placement is commonly utilized but is limited by an inability to see the distal placement location. Unfortunately, TT is not without complications. We aim to demonstrate the feasibility of a disposable device allowing visually directed TT placement, compared to the standard of care, in a large animal model. Three swine were sequentially orotracheally intubated and anesthetized. TT was performed using a novel visualization device, the tube thoracostomy visual trocar (TTVT), and the standard of care (open technique). The position of the TT in the chest cavity was recorded using direct thoracoscopic inspection and radiographic imaging, with the operator blinded to the results. Complications were evaluated using a validated complication grading system. Standard descriptive statistical analyses were performed. Thirty TTs were placed: 15 using the TTVT technique and 15 using the standard-of-care open technique. All of the TTs placed using TTVT were without complication and in optimal position. Conversely, 27% of TTs placed using the standard-of-care open technique resulted in complications. Necropsy revealed no injury to intrathoracic organs. Visually directed TT placement using TTVT is feasible and non-inferior to the standard of care in a large animal model. This improvement in instrumentation has the potential to greatly improve the safety of TT. Further study in humans is required. Therapeutic Level II. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Multi-technique comparison of troposphere zenith delays and gradients during CONT08

    NASA Astrophysics Data System (ADS)

    Teke, Kamil; Böhm, Johannes; Nilsson, Tobias; Schuh, Harald; Steigenberger, Peter; Dach, Rolf; Heinkelmann, Robert; Willis, Pascal; Haas, Rüdiger; García-Espada, Susana; Hobiger, Thomas; Ichikawa, Ryuichi; Shimizu, Shingo

    2011-07-01

    CONT08 was a 15-day campaign of continuous Very Long Baseline Interferometry (VLBI) sessions during the second half of August 2008 carried out by the International VLBI Service for Geodesy and Astrometry (IVS). In this study, VLBI estimates of troposphere zenith total delays (ZTD) and gradients during CONT08 were compared with those derived from observations with the Global Positioning System (GPS), Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), and water vapor radiometers (WVR) co-located with the VLBI radio telescopes. Similar geophysical models were used for the analysis of the space geodetic data, whereas the parameterization for the least-squares adjustment was optimized for each technique. In addition to the space geodetic techniques and WVR, ZTD and gradients from numerical weather models (NWM) were used from the European Centre for Medium-Range Weather Forecasts (ECMWF) (all sites), the Japan Meteorological Agency (JMA) and Cloud Resolving Storm Simulator (CReSS) (Tsukuba), and the High Resolution Limited Area Model (HIRLAM) (European sites). Biases, standard deviations, and correlation coefficients were computed between the troposphere estimates of the various techniques for all eleven CONT08 co-located sites. ZTD from the space geodetic techniques generally agree at the sub-centimetre level during CONT08, and, as expected, the best agreement is found for intra-technique comparisons: between the Vienna VLBI Software and the combined IVS solutions, as well as between the Center for Orbit Determination (CODE) solution and an IGS PPP time series; both intra-technique comparisons show standard deviations of about 3-6 mm. The best inter-technique agreement of ZTD among the space geodetic solutions during CONT08 is found between the combined IVS and the IGS solutions, with a mean standard deviation of about 6 mm over all sites, whereas the agreement with numerical weather models is between 6 and 20 mm. The standard deviations are generally larger at low-latitude sites because of higher humidity, which is also the reason why the standard deviations are larger at northern hemisphere stations during CONT08 in comparison to CONT02, which was observed in October 2002. The assessment of the troposphere gradients from the different techniques is not as clear-cut because of different time intervals, different estimation properties, and different observables. However, the best inter-technique agreement is found between the IVS combined gradients and the GPS solutions, with standard deviations between 0.2 and 0.7 mm.

  8. Advancement of Techniques for Modeling the Effects of Atmospheric Gravity-Wave-Induced Inhomogeneities on Infrasound Propagation

    DTIC Science & Technology

    2010-09-01

    A number of infrasound observations indicate that fine-scale atmospheric inhomogeneities contribute to infrasonic arrivals that are not predicted by standard modeling techniques. In particular, gravity waves, or buoyancy waves, are believed to contribute to the multipath nature of infrasound...

  9. A numerical projection technique for large-scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang

    2011-10-01

    We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique used in strongly correlated quantum many-body systems, where an effective approximate model of smaller complexity is first constructed by projecting out high-energy degrees of freedom, and the resulting model is then solved by a standard eigenvalue solver. Here we introduce a generalization of this idea in which both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not only to eigenvalue problems encountered in many-body systems but also to those arising in other areas of research that lead to large-scale eigenvalue problems for matrices which have, roughly speaking, a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.
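
    As a rough orientation (a standard Rayleigh-Ritz projection, not the authors' generalized algorithm), the sketch below keeps the degrees of freedom with the smallest diagonal entries, projects the matrix onto that subspace, and hands the reduced problem to an ordinary dense eigensolver.

      import numpy as np

      rng = np.random.default_rng(0)
      n, k = 500, 40                              # full size, retained low-energy states

      # Matrix with a pronounced dominant diagonal plus weak couplings.
      H = np.diag(np.sort(rng.uniform(0.0, 50.0, n))) + 0.1 * rng.standard_normal((n, n))
      H = 0.5 * (H + H.T)                         # symmetrize

      # Project out high-energy degrees of freedom: keep the k smallest diagonal entries.
      keep = np.argsort(np.diag(H))[:k]
      P = np.zeros((n, k))
      P[keep, np.arange(k)] = 1.0                 # orthonormal projector columns

      H_eff = P.T @ H @ P                         # effective model of smaller complexity
      approx = np.linalg.eigvalsh(H_eff)[:5]      # standard eigensolver on the reduced problem
      exact = np.linalg.eigvalsh(H)[:5]
      print("approximate lowest eigenvalues:", approx)
      print("exact lowest eigenvalues:      ", exact)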

  10. Strain Gauge Balance Calibration and Data Reduction at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Ferris, A. T. Judy

    1999-01-01

    This paper will cover the standard force balance calibration and data reduction techniques used at Langley Research Center. It will cover balance axes definition, balance type, calibration instrumentation, traceability of standards to NIST, calibration loading procedures, balance calibration mathematical model, calibration data reduction techniques, balance accuracy reporting, and calibration frequency.

  11. Asymptotically Safe Standard Model via Vectorlike Fermions.

    PubMed

    Mann, R B; Meffe, J R; Sannino, F; Steele, T G; Wang, Z W; Zhang, C

    2017-12-29

    We construct asymptotically safe extensions of the standard model by adding gauged vectorlike fermions. Using large number-of-flavor techniques we argue that all gauge couplings, including the hypercharge and, under certain conditions, the Higgs coupling, can achieve an interacting ultraviolet fixed point.

  12. Asymptotically Safe Standard Model via Vectorlike Fermions

    NASA Astrophysics Data System (ADS)

    Mann, R. B.; Meffe, J. R.; Sannino, F.; Steele, T. G.; Wang, Z. W.; Zhang, C.

    2017-12-01

    We construct asymptotically safe extensions of the standard model by adding gauged vectorlike fermions. Using large number-of-flavor techniques we argue that all gauge couplings, including the hypercharge and, under certain conditions, the Higgs coupling, can achieve an interacting ultraviolet fixed point.

  13. Laparoscopic lens fogging: solving a common surgical problem in standard and robotic laparoscopes via a scientific model.

    PubMed

    Manning, Todd G; Papa, Nathan; Perera, Marlon; McGrath, Shannon; Christidis, Daniel; Khan, Munad; O'Beirne, Richard; Campbell, Nicholas; Bolton, Damien; Lawrentschuk, Nathan

    2018-03-01

    Laparoscopic lens fogging (LLF) hampers vision and impedes operative efficiency. Attempts to reduce LLF have led to the development of various anti-fogging fluids and warming devices, but limited literature exists directly comparing these techniques. We constructed a model peritoneum to simulate LLF and to compare the efficacy of various anti-fogging techniques. The intraperitoneal space was simulated using a suction bag suspended within an 8 L container of water. LLF was induced by varying the temperature and humidity within the model peritoneum. Various anti-fogging techniques were assessed, including scope warmers, FRED™, Resoclear™, chlorhexidine, betadine and immersion in heated saline. These products were trialled with and without the use of a disposable scope warmer. Vision scores were evaluated by the same investigator for all tests and rated according to a predetermined scale. Fogging was assessed for each product or technique 30 times and a mean vision rating was recorded. All products tested imparted some benefit, but FRED™ performed better than all other techniques. Betadine and Resoclear™ performed no better than the use of a scope warmer alone. Immersion in saline prior to insertion resulted in decreased vision ratings. The robotic scope did not result in LLF within the model. In standard laparoscopes, the most effective preventative measure was FRED™ used on a pre-warmed scope; despite improvements in LLF with other products, FRED™ was better than all other techniques. The robotic laparoscope performed superiorly regarding LLF compared to the standard laparoscope.

  14. A Procedure for Estimating a Criterion-Referenced Standard to Identify Educationally Deprived Children for Title I Services. Final Report.

    ERIC Educational Resources Information Center

    Ziomek, Robert L.; Wright, Benjamin D.

    Techniques such as the norm-referenced and average score techniques, commonly used in the identification of educationally disadvantaged students, are critiqued. This study applied latent trait theory, specifically the Rasch Model, along with teacher judgments relative to the mastery of instructional/test decisions, to derive a standard setting…

  15. Search for the Standard Model Higgs Boson Decaying to Bottom Quarks in Proton-Proton Collisions at 8 TeV

    NASA Astrophysics Data System (ADS)

    Silkworth, Inga

    A search for the standard model Higgs boson (H) decaying to bottom quarks and produced in association with a Z boson is presented. The search uses 8 TeV center-of-mass energy proton-proton collision data recorded by the Compact Muon Solenoid experiment at the Large Hadron Collider, corresponding to an integrated luminosity of 19.0 inverse femtobarns. The Z boson is reconstructed using two oppositely charged leptons, either electrons or muons. Two techniques for reconstructing the Higgs candidate are discussed: the standard method using two jets reconstructed with the anti-kt algorithm, and a second technique using jet substructure that was developed for highly boosted massive particles. Upper limits, at the 95% confidence level, on the production cross section times the branching ratio, with respect to the standard model expectations, are derived for a Higgs boson in the mass range 110-135 GeV. The results from the ZH channel are combined with five other channels, and an excess of events is observed consistent with the standard model Higgs boson, with a local significance of 2.1 standard deviations at 125 GeV.

  16. Weighted least squares techniques for improved received signal strength based localization.

    PubMed

    Tarrío, Paula; Bernardos, Ana M; Casar, José R

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.

  17. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

    PubMed Central

    Tarrío, Paula; Bernardos, Ana M.; Casar, José R.

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling. PMID:22164092
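
    A minimal sketch of the circular (lateration) variant with per-measurement weighting is given below; it assumes a log-distance path-loss model with hypothetical parameters and is not the authors' implementation.

      import numpy as np

      # Hypothetical anchor positions (m) and RSS readings (dBm) at the unknown node.
      anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      rss = np.array([-52.0, -61.0, -60.0, -66.0])
      sigma = np.array([2.0, 3.0, 3.0, 4.0])       # per-link RSS std dev (dB), assumed known

      # Log-distance path-loss model: rss = P0 - 10*n*log10(d), with assumed P0 and n.
      P0, n_exp = -40.0, 2.0
      d = 10.0 ** ((P0 - rss) / (10.0 * n_exp))    # range estimates (m)

      # Circular positioning linearized against the first anchor:
      # 2(x_i - x_1)x + 2(y_i - y_1)y = (x_i^2 + y_i^2 - d_i^2) - (x_1^2 + y_1^2 - d_1^2)
      x, y = anchors[:, 0], anchors[:, 1]
      A = 2.0 * (anchors[1:] - anchors[0])
      b = (x[1:]**2 + y[1:]**2 - d[1:]**2) - (x[0]**2 + y[0]**2 - d[0]**2)

      # Weight each equation by the inverse of its per-link noise variance
      # (a simple proxy for the accuracy of the corresponding range estimate).
      W = np.diag(1.0 / sigma[1:]**2)
      pos = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
      print("estimated position:", pos)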

  18. A critical comparison of several low Reynolds number k-epsilon turbulence models for flow over a backward facing step

    NASA Technical Reports Server (NTRS)

    Steffen, C. J., Jr.

    1993-01-01

    Turbulent backward-facing step flow was examined using four low turbulent Reynolds number k-epsilon models and one standard high Reynolds number technique. A tunnel configuration of 1:9 (step height: exit tunnel height) was used. The models tested include: the original Jones and Launder; Chien; Launder and Sharma; and the recent Shih and Lumley formulation. The experimental reference of Driver and Seegmiller was used to make detailed comparisons between reattachment length, velocity, pressure, turbulent kinetic energy, Reynolds shear stress, and skin friction predictions. The results indicated that the use of a wall function for the standard k-epsilon technique did not reduce the calculation accuracy for this separated flow when compared to the low turbulent Reynolds number techniques.

  19. AN IMPROVED SOCKING TECHNIQUE FOR MASTER SLAVES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, T.C.; Deckard, L.E.; Howe, P.W.

    1962-10-29

    A technique for socking a pair of standard Model 8 master-slave manipulators is described. The technique is primarily concerned with the fabrication of the bellows section, which provides for Z motion as well as wrist movement and rotation. (N.W.R.)

  20. Simulation and Modeling Capability for Standard Modular Hydropower Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Kevin M.; Smith, Brennan T.; Witt, Adam M.

    Grounded in the stakeholder-validated framework established in Oak Ridge National Laboratory’s SMH Exemplary Design Envelope Specification, this report on Simulation and Modeling Capability for Standard Modular Hydropower (SMH) Technology provides insight into the concepts, use cases, needs, gaps, and challenges associated with modeling and simulating SMH technologies. The SMH concept envisions a network of generation, passage, and foundation modules that achieve environmentally compatible, cost-optimized hydropower using standardization and modularity. The development of standardized modeling approaches and simulation techniques for SMH (as described in this report) will pave the way for reliable, cost-effective methods for technology evaluation, optimization, and verification.

  1. Advances in a distributed approach for ocean model data interoperability

    USGS Publications Warehouse

    Signell, Richard P.; Snowden, Derrick P.

    2014-01-01

    An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze and visualize ocean data using MATLAB® and Python®. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain and bathymetric data.
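
    As a rough illustration of the aggregation step (not the IOOS tooling itself), the sketch below combines a collection of CF-style NetCDF files into one logical dataset with xarray (dask assumed installed); the file pattern and variable names are hypothetical.

      import xarray as xr

      # Aggregate a hypothetical collection of model output files along their
      # shared coordinates (time, in the usual forecast-output case).
      ds = xr.open_mfdataset("ocean_his_*.nc", combine="by_coords")

      # Downstream tools can then address one logical, CF-described dataset.
      sea_surface = ds["zeta"].isel(ocean_time=-1)   # hypothetical variable/dimension names
      print(ds)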

  2. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
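
    A minimal sketch of the general recipe, with hypothetical feature and proximity data: fit the positivity-constrained regression by non-negative least squares and obtain empirical standard errors from a bootstrap, one simple Monte Carlo variant.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(1)

      # Hypothetical design matrix (feature incidence) and observed proximities.
      X = rng.integers(0, 2, size=(60, 5)).astype(float)
      beta_true = np.array([1.0, 0.5, 0.0, 2.0, 0.8])
      y = X @ beta_true + rng.normal(0.0, 0.3, 60)

      beta_hat, _ = nnls(X, y)                     # constrained (non-negative) estimates

      # Bootstrap resampling of observations to get empirical standard errors.
      boot = np.empty((500, X.shape[1]))
      for b in range(500):
          idx = rng.integers(0, len(y), len(y))
          boot[b], _ = nnls(X[idx], y[idx])

      print("estimates:", beta_hat)
      print("empirical standard errors:", boot.std(axis=0, ddof=1))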

  3. NASA standard: Trend analysis techniques

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Descriptive and analytical techniques for NASA trend analysis applications are presented in this standard. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. This document should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend analysis is neither a precise term nor a circumscribed methodology: it generally connotes quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this document. The basic ideas needed for qualitative and quantitative assessment of trends along with relevant examples are presented.

  4. The Timeseries Toolbox - A Web Application to Enable Accessible, Reproducible Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Veatch, W.; Friedman, D.; Baker, B.; Mueller, C.

    2017-12-01

    The vast majority of data analyzed by climate researchers are repeated observations of a physical process, i.e., time series data. Such data lend themselves to a common set of statistical techniques and models designed to determine trends and variability (e.g., seasonality) of these repeated observations. Often, these same techniques and models can be applied to a wide variety of different time series data. The Timeseries Toolbox is a web application designed to standardize and streamline these common approaches to time series analysis and modeling, with particular attention to hydrologic time series used in climate preparedness and resilience planning and design by the U.S. Army Corps of Engineers. The application performs much of the pre-processing of time series data necessary for more complex techniques (e.g., interpolation, aggregation). With this tool, users can upload any dataset that conforms to a standard template and immediately begin applying these techniques to analyze their time series data.
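
    The kind of pre-processing described above (interpolating gaps, aggregating to a coarser step) can be sketched in a few lines of pandas; the series, frequencies and gap below are hypothetical, not the Toolbox's template.

      import numpy as np
      import pandas as pd

      # Hypothetical daily stage record with some missing observations.
      idx = pd.date_range("2015-01-01", periods=730, freq="D")
      stage = pd.Series(np.random.default_rng(2).normal(3.0, 0.5, idx.size), index=idx)
      stage.iloc[100:110] = np.nan                 # simulated gap

      clean = stage.interpolate(method="time")     # fill gaps by time-weighted interpolation
      monthly = clean.resample("MS").mean()        # aggregate to monthly means
      print(monthly.head())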

  5. Comparison of Quadrapolar™ radiofrequency lesions produced by standard versus modified technique: an experimental model.

    PubMed

    Safakish, Ramin

    2017-01-01

    Lower back pain (LBP) is a global public health issue and is associated with substantial financial costs and loss of quality of life. Over the years, the literature has provided varying statistics regarding the causes of back pain; the estimate closest to our patient population is that sacroiliac (SI) joint pain is responsible for LBP in 18%-30% of individuals with LBP. Quadrapolar™ radiofrequency ablation, which involves ablation of the nerves of the SI joint using heat, is a commonly used treatment for SI joint pain. However, the standard Quadrapolar radiofrequency procedure is not always effective at ablating all the sensory nerves that cause the pain in the SI joint. One of the major limitations of the standard Quadrapolar radiofrequency procedure is that it produces small lesions of ~4 mm in diameter. Smaller lesions increase the likelihood of failure to ablate all nociceptive input. In this study, we compare the standard Quadrapolar radiofrequency ablation technique to a modified Quadrapolar ablation technique that has produced improved patient outcomes in our clinic. The methodologies of the two techniques are compared, and results from an experimental model comparing the lesion sizes produced by the two techniques are presented. Taken together, the findings from this study suggest that the modified Quadrapolar technique provides longer-lasting relief for back pain caused by SI joint dysfunction. A randomized controlled clinical trial is the next step required to quantify the difference in symptom relief and quality of life produced by the two techniques.

  6. Distributed geospatial model sharing based on open interoperability standards

    USGS Publications Warehouse

    Feng, Min; Liu, Shuguang; Euliss, Ned H.; Fang, Yin

    2009-01-01

    Numerous geospatial computational models have been developed based on sound principles and published in journals or presented at conferences. However, modelers have made few advances in the development of computable modules that facilitate sharing during model development or utilization. Constraints hampering the development of model sharing technology include limitations on computing, storage, and connectivity; traditional stand-alone and closed network systems cannot fully support sharing and integrating geospatial models. To address this need, we have identified methods for sharing geospatial computational models using Service Oriented Architecture (SOA) techniques and open geospatial standards. The service-oriented model sharing service is accessible using any tools or systems compliant with open geospatial standards, making it possible to utilize vast scientific resources available from around the world to solve highly sophisticated application problems. The methods also allow model services to be empowered by diverse computational devices and technologies, such as portable devices and GRID computing infrastructures. Based on the generic and abstract operations and data structures required by the Web Processing Service (WPS) standard, we developed an interactive interface for model sharing to help reduce interoperability problems in model use. Geospatial computational models are shared as model services, where the computational processes provided by models can be accessed through tools and systems compliant with WPS. We developed a platform to help modelers publish individual models in a simplified and efficient way. Finally, we illustrate our technique using wetland hydrological models we developed for the prairie pothole region of North America.

  7. Modeling of tool path for the CNC sheet cutting machines

    NASA Astrophysics Data System (ADS)

    Petunin, Aleksandr A.

    2015-11-01

    In this paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of cutting techniques is offered, and a new classification of tool path problems is also proposed. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that these optimization tasks can be interpreted as a discrete optimization problem (a generalized traveling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. To solve the GTSP, we propose using the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.
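
    As a toy illustration of treating a tool-path ordering task as a discrete optimization problem solved by dynamic programming (a plain travelling-salesman stand-in, not the megalopolis model of Prof. Chentsov), consider the sketch below.

      import itertools
      import numpy as np

      # Hypothetical piercing points for a small nest of contours.
      pts = np.array([[0, 0], [4, 1], [2, 5], [6, 4], [1, 3]], dtype=float)
      n = len(pts)
      dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

      # Held-Karp dynamic programming over subsets, starting from point 0.
      best = {(1 << 0, 0): (0.0, None)}
      for size in range(2, n + 1):
          for subset in itertools.combinations(range(n), size):
              if 0 not in subset:
                  continue
              mask = sum(1 << i for i in subset)
              for last in subset:
                  if last == 0:
                      continue
                  prev_mask = mask ^ (1 << last)
                  cands = [(best[(prev_mask, j)][0] + dist[j, last], j)
                           for j in subset if j != last and (prev_mask, j) in best]
                  best[(mask, last)] = min(cands)

      full = (1 << n) - 1
      length, end = min((best[(full, j)][0], j) for j in range(1, n))
      print("optimal open-path length from point 0:", length)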

  8. Novel conformal technique to reduce staircasing artifacts at material boundaries for FDTD modeling of the bioheat equation.

    PubMed

    Neufeld, E; Chavannes, N; Samaras, T; Kuster, N

    2007-08-07

    The modeling of thermal effects, often based on the Pennes Bioheat Equation, is becoming increasingly popular. The FDTD technique commonly used in this context suffers considerably from staircasing errors at boundaries. A new conformal technique is proposed that can easily be integrated into existing implementations without requiring a special update scheme. It scales fluxes at interfaces with factors derived from the local surface normal. The new scheme is validated using an analytical solution, and an error analysis is performed to understand its behavior. The new scheme behaves considerably better than the standard scheme. Furthermore, in contrast to the standard scheme, it is possible to obtain with it more accurate solutions by increasing the grid resolution.
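
    For orientation only, a bare-bones explicit 1-D finite-difference update of the Pennes bioheat equation with a per-interface flux factor is sketched below; the coefficients and the unit flux factors are placeholders, not the authors' conformal scheme.

      import numpy as np

      # Pennes bioheat: rho*c*dT/dt = d/dx(k*dT/dx) + rho_b*c_b*w*(T_a - T) + Q
      nx, dx, dt = 200, 1e-3, 0.01                 # grid points, spacing (m), step (s)
      rho_c, k = 3.6e6, 0.5                        # J/(m^3 K), W/(m K), placeholder tissue values
      perf, T_a, Q = 2.0e3, 37.0, 5.0e3            # W/(m^3 K), deg C, W/m^3
      T = np.full(nx, 37.0)
      T[90:110] = 45.0                             # hypothetical heated region

      # Per-interface flux scaling factors (all 1.0 here; a conformal scheme would
      # derive these from the local surface normal at material boundaries).
      s = np.ones(nx - 1)

      for _ in range(1000):
          flux = s * k * np.diff(T) / dx           # heat flux across each interface
          div = np.zeros(nx)
          div[1:-1] = np.diff(flux) / dx           # divergence at interior nodes
          T[1:-1] += dt / rho_c * (div[1:-1] + perf * (T_a - T[1:-1]) + Q)

      print("peak temperature after 10 s:", T.max())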

  9. DSN system performance test Doppler noise models; noncoherent configuration

    NASA Technical Reports Server (NTRS)

    Bunce, R.

    1977-01-01

    The newer model for variance, the Allan technique, now adopted for testing, is analyzed in the subject mode. A model is generated (including a considerable contribution from the station secondary frequency standard) and rationalized with existing data. The variance model is definitely sound; the Allan technique mates theory and measurement. The mean-frequency model is an estimate; this problem is yet to be rigorously resolved. The unaltered defining expressions are nonconvergent, and the observed mean is quite erratic.
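
    The Allan (two-sample) variance referred to above has a simple defining form; a small sketch follows, with simulated white frequency noise standing in for the Doppler residuals.

      import numpy as np

      def allan_variance(y, m=1):
          """Non-overlapping Allan variance of fractional-frequency data y
          at averaging factor m (tau = m * tau0)."""
          n = (len(y) // m) * m
          ybar = y[:n].reshape(-1, m).mean(axis=1)      # averages over each tau interval
          return 0.5 * np.mean(np.diff(ybar) ** 2)      # (1/2) <(ybar_{i+1} - ybar_i)^2>

      rng = np.random.default_rng(3)
      y = rng.normal(0.0, 1e-12, 100000)                # simulated white frequency noise
      for m in (1, 10, 100):
          print(m, np.sqrt(allan_variance(y, m)))       # Allan deviation falls as 1/sqrt(tau)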

  10. The Objective Borderline Method: A Probabilistic Method for Standard Setting

    ERIC Educational Resources Information Center

    Shulruf, Boaz; Poole, Phillippa; Jones, Philip; Wilkinson, Tim

    2015-01-01

    A new probability-based standard setting technique, the Objective Borderline Method (OBM), was introduced recently. This was based on a mathematical model of how test scores relate to student ability. The present study refined the model and tested it using 2500 simulated data-sets. The OBM was feasible to use. On average, the OBM performed well…

  11. Computer-Aided Geometry Modeling

    NASA Technical Reports Server (NTRS)

    Shoosmith, J. N. (Compiler); Fulton, R. E. (Compiler)

    1984-01-01

    Techniques in computer-aided geometry modeling and their application are addressed. Mathematical modeling, solid geometry models, management of geometric data, development of geometry standards, and interactive and graphic procedures are discussed. The applications include aeronautical and aerospace structures design, fluid flow modeling, and gas turbine design.

  12. Cost minimizing of cutting process for CNC thermal and water-jet machines

    NASA Astrophysics Data System (ADS)

    Tavaeva, Anastasia; Kurennov, Dmitry

    2015-11-01

    This paper deals with optimization problem of cutting process for CNC thermal and water-jet machines. The accuracy of objective function parameters calculation for optimization problem is investigated. This paper shows that working tool path speed is not constant value. One depends on some parameters that are described in this paper. The relations of working tool path speed depending on the numbers of NC programs frames, length of straight cut, configuration part are presented. Based on received results the correction coefficients for working tool speed are defined. Additionally the optimization problem may be solved by using mathematical model. Model takes into account the additional restrictions of thermal cutting (choice of piercing and output tool point, precedence condition, thermal deformations). At the second part of paper the non-standard cutting techniques are considered. Ones may lead to minimizing of cutting cost and time compared with standard cutting techniques. This paper considers the effectiveness of non-standard cutting techniques application. At the end of the paper the future research works are indicated.

  13. Novel hybrid linear stochastic with non-linear extreme learning machine methods for forecasting monthly rainfall in a tropical climate.

    PubMed

    Zeynoddin, Mohammad; Bonakdari, Hossein; Azari, Arash; Ebtehaj, Isa; Gharabaghi, Bahram; Riahi Madavar, Hossein

    2018-09-15

    A novel hybrid approach is presented that can more accurately predict monthly rainfall in a tropical climate by integrating a linear stochastic model with a powerful non-linear extreme learning machine method. This new hybrid method was evaluated under four general scenarios. In the first scenario, the modeling process is initiated without preprocessing the input data, as a base case, while in the other three scenarios one-step and two-step procedures are utilized to make the model predictions more precise. These scenarios are based on combinations of stationarization techniques (i.e., differencing, seasonal and non-seasonal standardization, and spectral analysis) and normality transforms (i.e., Box-Cox, John and Draper, Yeo and Johnson, Johnson, Box-Cox-Mod, log, log standard, and Manly). In scenario 2, a one-step scenario, the stationarization methods are employed as preprocessing approaches. In scenarios 3 and 4, different combinations of normality transforms and stationarization methods are considered as preprocessing techniques. In total, 61 sub-scenarios are evaluated, resulting in 11,013 models (10,785 linear methods, 4 nonlinear models, and 224 hybrid models). The uncertainty of the linear, nonlinear and hybrid models is examined by the Monte Carlo technique. The best preprocessing technique is the use of the Johnson normality transform followed by seasonal standardization (R² = 0.99; RMSE = 0.6; MAE = 0.38; RMSRE = 0.1; MARE = 0.06; UI = 0.03; UII = 0.05). The results of the uncertainty analysis indicated the good performance of the proposed technique (d-factor = 0.27; 95PPU = 83.57). Moreover, the results of the proposed methodology were compared with an evolutionary hybrid of an adaptive neuro-fuzzy inference system with the firefly algorithm (ANFIS-FFA), demonstrating that the new hybrid methods outperformed the ANFIS-FFA method. Copyright © 2018 Elsevier Ltd. All rights reserved.
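
    The best-performing preprocessing pair (a normality transform followed by seasonal standardization) can be sketched roughly as below; a Box-Cox transform stands in for the Johnson transform used in the paper, and the rainfall series is synthetic.

      import numpy as np
      import pandas as pd
      from scipy import stats

      # Synthetic monthly rainfall (strictly positive, with a seasonal cycle).
      idx = pd.date_range("2000-01", periods=240, freq="MS")
      rng = np.random.default_rng(4)
      rain = pd.Series(50 + 30 * np.sin(2 * np.pi * idx.month / 12) +
                       rng.gamma(2.0, 10.0, idx.size), index=idx)

      # Step 1: normality transform (Box-Cox as a stand-in; requires positive data).
      transformed, lam = stats.boxcox(rain.values)
      ts = pd.Series(transformed, index=idx)

      # Step 2: seasonal standardization, i.e., remove each calendar month's mean and
      # scale by its standard deviation before handing the series to the forecaster.
      monthly = ts.groupby(ts.index.month)
      stationary = (ts - monthly.transform("mean")) / monthly.transform("std")
      print(stationary.describe())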

  14. Evaluation of a simplified gross thrust calculation technique using two prototype F100 turbofan engines in an altitude facility

    NASA Technical Reports Server (NTRS)

    Kurtenbach, F. J.

    1979-01-01

    The technique, which relies on afterburner duct pressure measurements and empirical corrections to an ideal one-dimensional flow analysis to determine thrust, is presented. A comparison of the calculated and facility-measured thrust values is reported, and the simplified model is compared with the engine manufacturer's gas generator model. The evaluation was conducted over a range of Mach numbers from 0.80 to 2.00 and at altitudes from 4,020 meters to 15,240 meters. The effects of variations in inlet total temperature from standard day conditions were explored, and engine conditions were varied from those normally scheduled for flight. The technique was found to be accurate to within a two-standard-deviation uncertainty of 2.89 percent, with accuracy a strong function of afterburner duct pressure difference.

  15. Statistical Evaluation of Time Series Analysis Techniques

    NASA Technical Reports Server (NTRS)

    Benignus, V. A.

    1973-01-01

    The performance of a modified version of NASA's multivariate spectrum analysis program is discussed. A multiple regression model was used to make the revisions. Performance improvements were documented and compared to the standard fast Fourier transform by Monte Carlo techniques.

  16. System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.

    2011-01-01

    Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of the unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The vehicle used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
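
    A stripped-down illustration of the regression step, ordinary least squares with parameter standard errors of the kind compared in the paper, follows; the regressors and data are synthetic, not the wind tunnel or CFD data.

      import numpy as np

      rng = np.random.default_rng(5)

      # Synthetic regression problem: response vs. two oscillatory-motion regressors.
      N = 400
      X = np.column_stack([np.ones(N), rng.normal(0, 1, N), rng.normal(0, 1, N)])
      theta_true = np.array([0.02, -0.35, 0.10])
      y = X @ theta_true + rng.normal(0.0, 0.05, N)

      # Ordinary least-squares estimates and their standard errors.
      theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
      dof = N - X.shape[1]
      sigma2 = np.sum((y - X @ theta_hat) ** 2) / dof           # residual variance
      cov = sigma2 * np.linalg.inv(X.T @ X)                     # parameter covariance
      std_err = np.sqrt(np.diag(cov))

      print("estimates:      ", theta_hat)
      print("standard errors:", std_err)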

  17. HMM-based lexicon-driven and lexicon-free word recognition for online handwritten Indic scripts.

    PubMed

    Bharath, A; Madhvanath, Sriganesh

    2012-04-01

    Research on recognizing online handwritten words in Indic scripts is at an early stage compared to that for Latin and Oriental scripts. In this paper, we address this problem specifically for two major Indic scripts: Devanagari and Tamil. In contrast to previous approaches, the techniques we propose are largely data driven and script independent. We propose two different techniques for word recognition based on Hidden Markov Models (HMM): lexicon-driven and lexicon-free. The lexicon-driven technique models each word in the lexicon as a sequence of symbol HMMs according to a standard symbol writing order derived from the phonetic representation. The lexicon-free technique uses a novel Bag-of-Symbols representation of the handwritten word that is independent of symbol order and allows rapid pruning of the lexicon. On handwritten Devanagari word samples featuring both standard and nonstandard symbol writing orders, a combination of lexicon-driven and lexicon-free recognizers significantly outperforms either of them used in isolation. In contrast, most Tamil word samples feature the standard symbol order, and the lexicon-driven recognizer outperforms the lexicon-free one as well as their combination. The best recognition accuracies obtained for 20,000-word lexicons are 87.13 percent for Devanagari when the two recognizers are combined, and 91.8 percent for Tamil using the lexicon-driven technique.

  18. Data Mining of Macromolecular Structures.

    PubMed

    van Beusekom, Bart; Perrakis, Anastassis; Joosten, Robbie P

    2016-01-01

    The use of macromolecular structures is widespread in a variety of applications, from teaching protein structure principles all the way to ligand optimization in drug development. Applying data mining techniques to these experimentally determined structures requires a highly uniform, standardized structural data source. The Protein Data Bank (PDB) has evolved over the years toward becoming the standard resource for macromolecular structures. However, the process of selecting the data most suitable for specific applications is still very much based on personal preferences and understanding of the experimental techniques used to obtain these models. In this chapter, we first explain the challenges with data standardization, annotation, and uniformity in the PDB entries determined by X-ray crystallography. We then discuss the specific effect that crystallographic data quality and model optimization methods have on structural models and how validation tools can be used to make informed choices. We also discuss specific advantages of using the PDB_REDO databank as a resource for structural data. Finally, we provide guidelines on how to select the most suitable protein structure models for detailed analysis and how to select a set of structure models suitable for data mining.

  19. Retinal Information Processing for Minimum Laser Lesion Detection and Cumulative Damage

    DTIC Science & Technology

    Wolbarsht, Myron L.

    1992-09-17

    ...possible beneficial visual function of the small retinal image movements. ... Prior models of visual system information processing have ... against standard secondary sources whose calibrations can be traced to the National Bureau of Standards. (Sections include Visual System Models and Electrophysiological Techniques.)

  20. TU-G-204-03: Dynamic CT Myocardial Perfusion Measurement Using First Pass Analysis and Maximum Slope Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hubbard, L; Ziemer, B; Sadeghi, B

    Purpose: To evaluate the accuracy of dynamic CT myocardial perfusion measurement using first pass analysis (FPA) and maximum slope models. Methods: A swine animal model was prepared by percutaneous advancement of an angioplasty balloon into the proximal left anterior descending (LAD) coronary artery to induce varying degrees of stenosis. Maximal hyperaemia was achieved in the LAD with an intracoronary adenosine drip (240 µg/min). Serial microsphere and contrast (370 mg/mL iodine, 30 mL, 5 mL/s) injections were made over a range of induced stenoses, and dynamic imaging was performed using a 320-row CT scanner at 100 kVp and 200 mA. The FPA CT perfusion technique was used to make vessel-specific myocardial perfusion measurements. CT perfusion measurements using the FPA and maximum slope models were validated using colored microspheres as the reference gold standard. Results: Perfusion measurements using the FPA technique (P-FPA) showed good correlation with minimal offset when compared to perfusion measurements using microspheres (P-Micro) as the reference standard (P-FPA = 0.96 P-Micro + 0.05, R² = 0.97, RMSE = 0.19 mL/min/g). In contrast, the maximum slope model technique (P-MS) was shown to underestimate perfusion when compared to microsphere perfusion measurements (P-MS = 0.42 P-Micro − 0.48, R² = 0.94, RMSE = 3.3 mL/min/g). Conclusion: The results indicate the potential for significant improvements in accuracy of dynamic CT myocardial perfusion measurement using the first pass analysis technique as compared with the standard maximum slope model.
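
    For context, the maximum slope model reduces to a one-line estimate, perfusion proportional to max(dC_myo/dt) / max(C_art); the toy numerical sketch below uses synthetic time-attenuation curves and is illustrative only, not the FPA technique evaluated here.

      import numpy as np

      # Synthetic time-attenuation curves (HU above baseline) sampled every second.
      t = np.arange(0, 30.0, 1.0)
      c_art = 300.0 * np.exp(-0.5 * (t - 10.0) ** 2 / 9.0)     # arterial input
      c_myo = 25.0 / (1.0 + np.exp(-(t - 12.0)))               # tissue enhancement

      # Maximum slope model: perfusion ~ max tissue upslope / peak arterial enhancement.
      upslope = np.gradient(c_myo, t)                          # HU per second
      flow_per_100g = upslope.max() / c_art.max() * 60.0 * 100.0   # mL/min/100g, unit density
      print("maximum-slope perfusion estimate:", round(flow_per_100g, 1), "mL/min/100g (toy numbers)")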

  1. A generative tool for building health applications driven by ISO 13606 archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Martínez-Costa, Catalina; Fernández-Breis, Jesualdo Tomás

    2012-10-01

    The use of Electronic Healthcare Record (EHR) standards in the development of healthcare applications is crucial for achieving the semantic interoperability of clinical information. Advanced EHR standards make use of the dual model architecture, which provides a solution for clinical interoperability based on the separation of information and knowledge. However, the impact of such standards is hampered by the limited availability of tools that facilitate their usage and practical implementation. In this paper, we present an approach for the automatic generation of clinical applications for the ISO 13606 EHR standard, which is based on the dual model architecture. The generator has been designed generically, so it can be easily adapted to other dual model standards and can generate applications for multiple technological platforms. These properties are based on the combination of standards for the representation of generic user interfaces with model-driven engineering techniques.

  2. Size reduction techniques for vital compliant VHDL simulation models

    DOEpatents

    Rich, Marvin J.; Misra, Ashutosh

    2006-08-01

    A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. The system then collects all the delay values of the selected instance and builds super generics for the rise time and the fall time of that instance. The system repeats this process for every delay value in the standard delay file (310) that corresponds to every instance of every logic gate in the logic model. The system then outputs a reduced-size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.

  3. Ar+ and CuBr laser-assisted chemical bleaching of teeth: estimation of whiteness degree

    NASA Astrophysics Data System (ADS)

    Dimitrov, S.; Todorovska, Roumyana; Gizbrecht, Alexander I.; Raychev, L.; Petrov, Lyubomir P.

    2003-11-01

    In this work, results of adapting objective methods of color determination are presented, aimed at developing techniques for estimating the whiteness degree of human teeth that are convenient enough for common use in clinical practice. To validate and illustrate the techniques, tooth color standards were used, as well as model and naturally discolored human teeth treated with two chemical bleaching compositions, each activated by three light sources: Ar+ and CuBr lasers and a standard halogen photopolymerization lamp. Typical reflection and fluorescence spectra of some samples are presented; the sample colors were estimated by standard computer processing in RGB and B coordinates. The results of the applied spectral and colorimetric techniques are in good agreement with those of the standard computer processing of the corresponding digital photographs and comply with the visually estimated degree of tooth whiteness judged according to the standard reference scale commonly used in aesthetic dentistry.

  4. A new technique for measuring aerosols with moonlight observations and a sky background model

    NASA Astrophysics Data System (ADS)

    Jones, Amy; Noll, Stefan; Kausch, Wolfgang; Kimeswenger, Stefan; Szyszka, Ceszary; Unterguggenberger, Stefanie

    2014-05-01

    There have been an ample number of studies on aerosols in urban, daylight conditions, but few for remote, nocturnal aerosols. We have developed a new technique for investigating such aerosols using our sky background model and astronomical observations. With a dedicated observing proposal we have successfully tested this technique for nocturnal, remote aerosol studies. This technique relies on three requirements: (a) sky background model, (b) observations taken with scattered moonlight, and (c) spectrophotometric standard star observations for flux calibrations. The sky background model was developed for the European Southern Observatory and is optimized for the Very Large Telescope at Cerro Paranal in the Atacama desert in Chile. This is a remote location with almost no urban aerosols. It is well suited for studying remote background aerosols that are normally difficult to detect. Our sky background model has an uncertainty of around 20 percent and the scattered moonlight portion is even more accurate. The last two requirements are having astronomical observations with moonlight and of standard stars at different airmasses, all during the same night. We had a dedicated observing proposal at Cerro Paranal with the instrument X-Shooter to use as a case study for this method. X-Shooter is a medium resolution, echelle spectrograph which covers the wavelengths from 0.3 to 2.5 micrometers. We observed plain sky at six different distances (7, 13, 20, 45, 90, and 110 degrees) to the Moon for three different Moon phases (between full and half). Also direct observations of spectrophotometric standard stars were taken at two different airmasses for each night to measure the extinction curve via the Langley method. This is an ideal data set for testing this technique. The underlying assumption is that all components, other than the atmospheric conditions (specifically aerosols and airglow), can be calculated with the model for the given observing parameters. The scattered moonlight model is designed for the average atmospheric conditions at Cerro Paranal. The Mie scattering is calculated for the average distribution of aerosol particles, but this input can be modified. We can avoid the airglow emission lines, and near full Moon the airglow continuum can be ignored. In the case study, by comparing the scattered moonlight for the various angles and wavelengths along with the extinction curve from the standard stars, we can iteratively find the optimal aerosol size distribution for the time of observation. We will present this new technique, the results from this case study, and how it can be implemented for investigating aerosols using the X-Shooter archive and other astronomical archives.

  5. Les Houches 2017: Physics at TeV Colliders Standard Model Working Group Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, J.R.; et al.

    This Report summarizes the proceedings of the 2017 Les Houches workshop on Physics at TeV Colliders. Session 1 dealt with (I) new developments relevant for high precision Standard Model calculations, (II) theoretical uncertainties and dataset dependence of parton distribution functions, (III) new developments in jet substructure techniques, (IV) issues in the theoretical description of the production of Standard Model Higgs bosons and how to relate experimental measurements, (V) phenomenological studies essential for comparing LHC data from Run II with theoretical predictions and projections for future measurements, and (VI) new developments in Monte Carlo event generators.

  6. Introduction to Information Visualization (InfoVis) Techniques for Model-Based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Sindiy, Oleg; Litomisky, Krystof; Davidoff, Scott; Dekens, Frank

    2013-01-01

    This paper presents insights that conform to numerous system modeling languages/representation standards. The insights are drawn from best practices of Information Visualization as applied to aerospace-based applications.

  7. SELECTION AND CALIBRATION OF SUBSURFACE REACTIVE TRANSPORT MODELS USING A SURROGATE-MODEL APPROACH

    EPA Science Inventory

    While standard techniques for uncertainty analysis have been successfully applied to groundwater flow models, extension to reactive transport is frustrated by numerous difficulties, including excessive computational burden and parameter non-uniqueness. This research introduces a...

  8. Electromagnetic navigation reduces surgical time and radiation exposure for proximal interlocking in retrograde femoral nailing.

    PubMed

    Somerson, Jeremy S; Rowley, David; Kennedy, Chad; Buttacavoli, Frank; Agarwal, Animesh

    2014-07-01

    To compare the time required for proximal locking screw placement between a standard freehand technique and the navigated technique, and to quantify the reduction in ionizing radiation exposure. A fresh frozen cadaver model was used for 48 proximal interlocking screw procedures. Each procedure consisted of insertion of 2 anteroposterior locking screws. Standard fluoroscopic technique was used for 24 procedures, and an electromagnetic navigation system was used for the remaining 24 procedures. Procedure duration was recorded using an electronic timer and radiation doses were documented. Mean total insertion time for both proximal interlocking screws was 405 ± 165.7 seconds with the freehand technique and 311 ± 78.3 seconds in the navigation group (P = 0.002). All procedures resulted in successful locking screw placement. Mean ionizing radiation exposure time for proximal locking was 29.5 ± 12.8 seconds. Proximal locking screw insertion using the navigation technique evaluated in this work was significantly faster than the standard fluoroscopic method. The navigated technique is effective and has the potential to prevent ionizing radiation exposure.

  9. On the implications of the classical ergodic theorems: analysis of developmental processes has to focus on intra-individual variation.

    PubMed

    Molenaar, Peter C M

    2008-01-01

    It is argued that general mathematical-statistical theorems imply that standard statistical analysis techniques of inter-individual variation are invalid to investigate developmental processes. Developmental processes have to be analyzed at the level of individual subjects, using time series data characterizing the patterns of intra-individual variation. It is shown that standard statistical techniques based on the analysis of inter-individual variation appear to be insensitive to the presence of arbitrarily large degrees of inter-individual heterogeneity in the population. An important class of nonlinear epigenetic models of neural growth is described which can explain the occurrence of such heterogeneity in brain structures and behavior. Links with models of developmental instability are discussed. A simulation study based on a chaotic growth model illustrates the invalidity of standard analysis of inter-individual variation, whereas time series analysis of intra-individual variation is able to recover the true state of affairs. (c) 2007 Wiley Periodicals, Inc.

  10. Development of a standardized laparoscopic caecum resection model to simulate laparoscopic appendectomy in rats

    PubMed Central

    2014-01-01

    Background Laparoscopic appendectomy (LA) has become one of the most common surgical procedures to date. To improve and standardize this technique further, cost-effective and reliable animal models are needed. Methods In a pilot study, 30 Wistar rats underwent laparoscopic caecum resection (as rats do not have an appendix vermiformis), to optimize the instrumental and surgical parameters. A subsequent test study was performed in another 30 rats to compare three different techniques for caecum resection and bowel closure. Results Bipolar coagulation led to an insufficiency of caecal stump closure in all operated rats (Group 1, n = 10). Endoloop ligation followed by bipolar coagulation and resection (Group 2, n = 10) or resection with a LigaSure™ device (Group 3, n = 10) resulted in sufficient caecal stump closure. Conclusions We developed a LA model enabling us to compare three different caecum resection techniques in rats. In conclusion, only endoloop closure followed by bipolar coagulation proved to be a secure and cost-effective surgical approach. PMID:24934381

  11. Deriving Global Convection Maps From SuperDARN Measurements

    NASA Astrophysics Data System (ADS)

    Gjerloev, J. W.; Waters, C. L.; Barnes, R. J.

    2018-04-01

    A new statistical modeling technique for determining the global ionospheric convection is described. The principal component regression (PCR)-based technique is based on Super Dual Auroral Radar Network (SuperDARN) observations and is an advanced version of the PCR technique that Waters et al. (https://doi.org/10.1002/2015JA021596) used for the SuperMAG data. While SuperMAG ground magnetic field perturbations are vector measurements, SuperDARN provides line-of-sight measurements of the ionospheric convection flow. Each line-of-sight flow has a known azimuth (or direction) and must be converted into the actual vector flow; however, the component perpendicular to the azimuth direction is unknown. Our method uses historical data from the SuperDARN database and PCR to determine a fill-in model convection distribution for any given universal time. The fill-in process is driven by a list of state descriptors (magnetic indices and the solar zenith angle). The final solution is then derived from a spherical cap harmonic fit to the SuperDARN measurements and the fill-in model. When compared with the standard SuperDARN fill-in model, we find that our fill-in model provides improved solutions, and the final solutions are in better agreement with the SuperDARN measurements. Our solutions are far less dynamic than the standard SuperDARN solutions, which we interpret as a consequence of the standard SuperDARN technique neglecting magnetosphere-ionosphere inertia and communication delays, whereas these are inherently included in our approach. We argue that the magnetosphere-ionosphere system has inertia that prevents the global convection from changing abruptly in response to an interplanetary magnetic field change.
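
    As a hedged sketch of the principal component regression step (PCA for dimensionality reduction followed by linear regression on the leading components): the predictor matrix and target below are random placeholders rather than SuperDARN line-of-sight data, and the pipeline is illustrative, not the authors' implementation.

    ```python
    # Illustrative principal component regression (PCR).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))       # 500 samples, 20 state descriptors (placeholder)
    y = X[:, :3] @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.normal(size=500)

    # PCA keeps the leading components; the regression is fit in that reduced space.
    pcr = make_pipeline(PCA(n_components=5), LinearRegression())
    pcr.fit(X, y)
    print("training R^2:", pcr.score(X, y))
    ```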

  12. Marginal and internal fit of cobalt-chromium copings fabricated using the conventional and the direct metal laser sintering techniques: A comparative in vitro study.

    PubMed

    Ullattuthodi, Sujana; Cherian, Kandathil Phillip; Anandkumar, R; Nambiar, M Sreedevi

    2017-01-01

    This in vitro study seeks to evaluate and compare the marginal and internal fit of cobalt-chromium copings fabricated using the conventional and direct metal laser sintering (DMLS) techniques. A master model of a prepared molar tooth was made using cobalt-chromium alloy. A silicone impression of the master model was made, and thirty standardized working models were then produced: twenty working models for the conventional lost-wax technique and ten working models for the DMLS technique. A total of twenty metal copings were fabricated using the two production techniques, conventional lost-wax casting and DMLS, with ten samples in each group. The conventional and DMLS copings were cemented to the working models using glass ionomer cement. The marginal gap of each coping was measured at four predetermined points. The dies with the cemented copings were sectioned in a standardized manner with a heavy-duty lathe. Each sectioned sample was then analyzed for the internal gap between the die and the metal coping using a metallurgical microscope. Digital photographs were taken at ×50 magnification and analyzed using measurement software. Statistical analysis was done by unpaired t-test and analysis of variance (ANOVA). The results of this study reveal no significant difference in the marginal gap of conventional and DMLS copings (P > 0.05) by ANOVA. The mean internal gap of the DMLS copings was significantly greater than that of the conventional copings (P < 0.05). Within the limitations of this in vitro study, it was concluded that the internal fit of conventional copings was superior to that of the DMLS copings, while the marginal fit of the copings fabricated by the two techniques showed no significant difference.

  13. Cartographic mapping study

    NASA Technical Reports Server (NTRS)

    Wilson, C.; Dye, R.; Reed, L.

    1982-01-01

    The errors associated with planimetric mapping of the United States using satellite remote sensing techniques are analyzed. Assumptions concerning the state of the art achievable for satellite mapping systems and platforms in the 1995 time frame are made. An analysis of these performance parameters is made using an interactive cartographic satellite computer model, after first validating the model using LANDSAT 1 through 3 performance parameters. An investigation of current large scale (1:24,000) US National mapping techniques is made. Using the results of this investigation, and current national mapping accuracy standards, the 1995 satellite mapping system is evaluated for its ability to meet US mapping standards for planimetric and topographic mapping at scales of 1:24,000 and smaller.

  14. 40 CFR Appendix K to Part 50 - Interpretation of the National Ambient Air Quality Standards for Particulate Matter

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., other techniques, such as the use of statistical models or the use of historical data could be..., mathematical techniques should be applied to account for the trends to ensure that the expected annual values... emission patterns, either the most recent representative year(s) could be used or statistical techniques or...

  15. Search for standard model Higgs boson production in association with a W boson using a matrix element technique at CDF in pp̄ collisions at √s=1.96 TeV

    DOE PAGES

    Aaltonen, T.; Álvarez González, B.; Amerio, S.; ...

    2012-04-02

    This paper presents a search for standard model Higgs boson production in association with a W boson using events recorded by the CDF experiment in a data set corresponding to an integrated luminosity of 5.6 fb⁻¹. The search is performed using a matrix element technique in which the signal and background hypotheses are used to create a powerful discriminator. The discriminant output distributions for signal and background are fit to the observed events using a binned likelihood approach to search for the Higgs boson signal. We find no evidence for a Higgs boson, and 95% confidence level (C.L.) upper limits are set on σ(pp̄→WH)×B(H→bb̄). The observed limits range from 3.5 to 37.6 relative to the standard model expectation for Higgs boson masses between mH=100 GeV/c² and mH=150 GeV/c². The 95% C.L. expected limit is estimated from the median of an ensemble of simulated experiments and varies between 2.9 and 32.7 relative to the production rate predicted by the standard model over the Higgs boson mass range studied.

  16. Effective Thermal Conductivity of High Porosity Open Cell Nickel Foam

    NASA Technical Reports Server (NTRS)

    Sullins, Alan D.; Daryabeigi, Kamran

    2001-01-01

    The effective thermal conductivity of high-porosity open cell nickel foam samples was measured over a wide range of temperatures and pressures using a standard steady-state technique. The samples, measuring 23.8 mm, 18.7 mm, and 13.6 mm in thickness, were constructed with layers of 1.7 mm thick foam with a porosity of 0.968. Tests were conducted with the specimens subjected to temperature differences of 100 to 1000 K across the thickness and at environmental pressures of 10⁻⁴ to 750 mm Hg. All tests were conducted in a gaseous nitrogen environment. A one-dimensional finite volume numerical model was developed to model combined radiation/conduction heat transfer in the foam. The radiation heat transfer was modeled using the two-flux approximation. Solid and gas conduction were modeled using standard techniques for high porosity media. A parameter estimation technique was used in conjunction with the measured and predicted thermal conductivities at pressures of 10⁻⁴ and 750 mm Hg to determine the extinction coefficient, albedo of scattering, and weighting factors for modeling the conduction thermal conductivity. The measured and predicted conductivities over the intermediate pressure values differed by 13%.
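
    A minimal sketch of the kind of parameter-estimation step described above: model parameters (here an extinction coefficient and a solid-conduction term in a simplified optically thick radiation model) are adjusted by least squares so that the predicted effective conductivity matches measurements. The model form and the numbers are illustrative assumptions, not the paper's two-flux formulation or data.

    ```python
    # Hedged least-squares parameter estimation sketch (toy model and data).
    import numpy as np
    from scipy.optimize import least_squares

    T = np.array([400.0, 600.0, 800.0, 1000.0])        # mean temperature, K
    k_meas = np.array([0.08, 0.15, 0.28, 0.50])        # "measured" k_eff, W/m/K (placeholder)

    def k_model(params, T):
        beta, k_solid = params                         # extinction coefficient, solid conduction
        sigma = 5.670e-8                               # Stefan-Boltzmann constant
        k_rad = 16.0 * sigma * T**3 / (3.0 * beta)     # optically thick (Rosseland) radiative term
        return k_rad + k_solid

    fit = least_squares(lambda p: k_model(p, T) - k_meas, x0=[1000.0, 0.05])
    print("estimated extinction coefficient and solid conductivity:", fit.x)
    ```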

  17. Modeling the human mental lexicon with self-organizing feature maps

    NASA Astrophysics Data System (ADS)

    Wittenburg, Peter; Frauenfelder, Uli H.

    1992-10-01

    Recent efforts to model the remarkable ability of humans to recognize speech and words are described. Different techniques are discussed, including the use of neural nets with self-organizing algorithms to represent phonological similarity between words in the lexicon. Simulations using the standard Kohonen algorithm are presented to illustrate some problems encountered with this technique in modeling similarity relations of form in the human mental lexicon. Alternative approaches that can potentially deal with some of these limitations are sketched.
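
    As a reference point, here is a minimal one-dimensional Kohonen self-organizing map in the spirit of the standard algorithm mentioned above; the "word" vectors are random placeholders rather than phonological feature encodings.

    ```python
    # Minimal 1-D Kohonen SOM with decaying learning rate and neighbourhood.
    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(size=(200, 8))          # 200 word-form feature vectors (placeholder)
    n_nodes, dim = 20, data.shape[1]
    weights = rng.normal(size=(n_nodes, dim))

    for epoch in range(50):
        lr = 0.5 * (1 - epoch / 50)                        # decaying learning rate
        radius = max(1.0, (n_nodes / 2) * (1 - epoch / 50))  # decaying neighbourhood width
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            dist = np.abs(np.arange(n_nodes) - bmu)                # grid distance to the BMU
            h = np.exp(-(dist ** 2) / (2 * radius ** 2))           # neighbourhood function
            weights += lr * h[:, None] * (x - weights)             # Kohonen update rule
    ```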

  18. Anticipatory Neurofuzzy Control

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1994-01-01

    Technique of feedback control, called "anticipatory neurofuzzy control," developed for use in controlling flexible structures and other dynamic systems for which mathematical models of dynamics poorly known or unknown. Superior ability to act during operation to compensate for, and adapt to, errors in mathematical model of dynamics, changes in dynamics, and noise. Also offers advantage of reduced computing time. Hybrid of two older fuzzy-logic control techniques: standard fuzzy control and predictive fuzzy control.

  19. Rewriting Modulo SMT

    NASA Technical Reports Server (NTRS)

    Rocha, Camilo; Meseguer, Jose; Munoz, Cesar A.

    2013-01-01

    Combining symbolic techniques such as: (i) SMT solving, (ii) rewriting modulo theories, and (iii) model checking can enable the analysis of infinite-state systems outside the scope of each such technique. This paper proposes rewriting modulo SMT as a new technique combining the powers of (i)-(iii) and ideally suited to model and analyze infinite-state open systems; that is, systems that interact with a non-deterministic environment. Such systems exhibit both internal non-determinism due to the system, and external non-determinism due to the environment. They are not amenable to finite-state model checking analysis because they typically are infinite-state. By being reducible to standard rewriting using reflective techniques, rewriting modulo SMT can both naturally model and analyze open systems without requiring any changes to rewriting-based reachability analysis techniques for closed systems. This is illustrated by the analysis of a real-time system beyond the scope of timed automata methods.

  20. Reconstruction of dynamic image series from undersampled MRI data using data-driven model consistency condition (MOCCO).

    PubMed

    Velikina, Julia V; Samsonov, Alexey A

    2015-11-01

    To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models preestimated from training data. We introduce the model consistency condition (MOCCO) technique, which utilizes temporal models to regularize reconstruction without constraining the solution to be low-rank, as is performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Our method was compared with a standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE-MRA) and cardiac CINE imaging. We studied the sensitivity of all methods to rank reduction and temporal subspace modeling errors. MOCCO demonstrated reduced sensitivity to modeling errors compared with the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE-MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. © 2014 Wiley Periodicals, Inc.

  1. RECONSTRUCTION OF DYNAMIC IMAGE SERIES FROM UNDERSAMPLED MRI DATA USING DATA-DRIVEN MODEL CONSISTENCY CONDITION (MOCCO)

    PubMed Central

    Velikina, Julia V.; Samsonov, Alexey A.

    2014-01-01

    Purpose To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models pre-estimated from training data. Theory We introduce the MOdel Consistency COndition (MOCCO) technique that utilizes temporal models to regularize the reconstruction without constraining the solution to be low-rank as performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Methods Our method was compared to standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE MRA) and cardiac CINE imaging. We studied sensitivity of all methods to rank-reduction and temporal subspace modeling errors. Results MOCCO demonstrated reduced sensitivity to modeling errors compared to the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. Conclusions MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. PMID:25399724

  2. Performance of preproduction model cesium beam frequency standards for spacecraft applications

    NASA Technical Reports Server (NTRS)

    Levine, M. W.

    1978-01-01

    A cesium beam frequency standard for spaceflight application on Navigation Development Satellites was designed and fabricated, and preliminary testing was completed. The cesium standard evolved from an earlier prototype model launched aboard NTS-2 and the engineering development model to be launched aboard NTS satellites during 1979. A number of design innovations, including a hybrid analog/digital integrator and the replacement of analog filters and phase detectors by clocked digital sampling techniques, are discussed. Thermal and thermal-vacuum testing was concluded and test data are presented. Stability data for 10 to 10,000 seconds averaging interval, measured under laboratory conditions, are shown.

  3. Prospecting for new physics in the Higgs and flavor sectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bishara, Fady

    We explore two directions in beyond the standard model physics: dark matter model building and probing new sources of CP violation. In dark matter model building, we consider two scenarios where the stability of dark matter derives from the flavor symmetries of the standard model. The first model contains a flavor singlet dark matter candidate whose couplings to the visible sector are proportional to the flavor breaking parameters. This leads to a metastable dark matter with TeV scale mediators. In the second model, we consider a fully gauged SU(3)³ flavor model with a flavor triplet dark matter. Consequently, the dark matter multiplet is charged while the standard model fields are neutral under a remnant Z₃ which ensures dark matter stability. We show that a Dirac fermion dark matter with radiative splitting in the multiplet must have a mass in the range [0.5, 5] TeV in order to satisfy all experimental constraints. We then turn our attention to Higgs portal dark matter and investigate the possibility of obtaining bounds on the up, down, and strange quark Yukawa couplings. If Higgs portal dark matter is discovered, we find that direct detection rates are insensitive to vanishing light quark Yukawa couplings. We then review flavor models and give the expected enhancement or suppression of the Yukawa couplings in those models. Finally, in the last two chapters, we develop techniques for probing CP violation in the Higgs coupling to photons and in rare radiative decays of B mesons. While theoretically clean, we find that these methods are not practical with current and planned detectors. However, these techniques can be useful with a dedicated detector (e.g., a gaseous TPC). In the case of the radiative B meson decay B⁰ → (K* → Kππ) γ, the techniques we develop also allow the extraction of the photon polarization fraction, which is sensitive to new physics contributions since, in the standard model, the right(left)-handed polarization fraction is of O(Λ_QCD/m_b) for B̄⁰ (B⁰) meson decays.

  4. The Impact of Spatial Correlation and Incommensurability on Model Evaluation

    EPA Science Inventory

    Standard evaluations of air quality models rely heavily on a direct comparison of monitoring data matched with the model output for the grid cell containing the monitor’s location. While such techniques may be adequate for some applications, conclusions are limited by such facto...

  5. Toward a Standardized ODH Analysis Technique

    DOE PAGES

    Degraff, Brian D.

    2016-12-01

    Standardization of ODH analysis and mitigation policy represents an opportunity for the cryogenic community. There are several benefits for industry and government facilities in developing an applicable unified standard for ODH. The number of reviewers would increase, and reviewing projects across different facilities would be simpler. It would also present an opportunity for the community to broaden the development of expertise in modeling complicated flow geometries.

  6. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

    We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualise in colour and quantify temperature changes in the skin surface. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Our technique is based on the combination of a visual 3D imaging technique and a thermal imaging technique, which maps the 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

  7. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    NASA Astrophysics Data System (ADS)

    Caballero Morales, Santiago Omar; Cox, Stephen J.

    2009-12-01

    Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.
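
    As a toy illustration of the error-modelling idea (not the authors' metamodels or WFST cascade), the sketch below scores candidate words from a small lexicon against a recognized phone string using a phoneme confusion matrix; the phone set, lexicon, and probabilities are all made up.

    ```python
    # Toy confusion-matrix-based correction of a recognized phone string.
    import numpy as np

    phones = ["p", "b", "t", "d"]
    idx = {p: i for i, p in enumerate(phones)}
    # conf[i, j] = P(recognized phone j | intended phone i), rows sum to 1 (toy values)
    conf = np.array([[0.7, 0.2, 0.1, 0.0],
                     [0.3, 0.6, 0.0, 0.1],
                     [0.1, 0.0, 0.7, 0.2],
                     [0.0, 0.1, 0.3, 0.6]])
    lexicon = {"pat": "p t", "bad": "b d", "tab": "t b"}   # word -> intended phones (toy)

    def score(intended, recognized):
        # product of per-phone confusion probabilities (assumes equal-length strings)
        return np.prod([conf[idx[i], idx[r]]
                        for i, r in zip(intended.split(), recognized.split())])

    recognized = "b t"                                     # what the recognizer produced
    best = max(lexicon, key=lambda w: score(lexicon[w], recognized))
    print("most likely intended word:", best)
    ```

    A language model, as used in the paper, would additionally weight each candidate word by its prior probability in context.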

  8. Use of Tc-99m-galactosyl-neoglycoalbumin (Tc-NGA) to determine hepatic blood flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stadalnik, R.C.; Vera, D.R.; Woodle, E.S.

    1984-01-01

    Tc-NGA is a new liver radiopharmaceutical which binds to a hepatocyte-specific membrane receptor. Three characteristics of Tc-NGA can be exploited in the measurement of hepatic blood flow (HBF): 1) ability to alter the affinity of Tc-NGA for its receptor by changing the galactose:albumin ratio; 2) ability to achieve a high specific activity with Tc-99m labeling; and 3) ability to administer a high molar dose of Tc-NGA without physiologic side effects. In addition, kinetic modeling of Tc-NGA dynamic data can provide estimates of hepatic receptor concentration. In experimental studies in young pigs, HBF was determined using two techniques: 1) kinetic modeling of dynamic data using moderate affinity, low specific activity Tc-NGA (Group A, n=12); and 2) clearance (CL) technique using high affinity, high specific activity Tc-NGA (Group B, n=4). In both groups, HBF was determined simultaneously by continuous infusion of indocyanine green (CI-ICG) with hepatic vein sampling. Regression analysis of HBF measurements obtained with the Tc-NGA kinetic modeling technique and the CI-ICG technique (Group A) revealed good correlation between the two techniques (r=0.802, p=0.02). Similarly, HBF determination by the clearance technique (Group B) provided highly accurate measurements when compared to the CI-ICG technique. Hepatic blood flow measurements by the clearance technique (CL-NGA) fell within one standard deviation of the error associated with each CI-ICG HBF measurement (all CI-ICG standard deviations were less than 10%).

  9. Image-Based 3d Reconstruction and Analysis for Orthodontia

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2012-08-01

    Among the main tasks of orthodontia are the analysis of teeth arches and treatment planning to provide the correct position for every tooth. The treatment plan is based on measurement of teeth parameters and on designing the ideal dental arch curve which the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets, which are put on the teeth, and a wire of given shape, which is clamped by these brackets to produce the forces needed to move each tooth in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and the difficulty of applying a standard approach to the wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation aimed at overcoming these disadvantages is proposed. The proposed approach enables accurate measurement of the teeth parameters needed for adequate planning, design of the correct teeth position, and monitoring of the treatment process. The developed technique applies photogrammetric means to dental arch 3D model generation, bracket position determination and teeth shifting analysis.

  10. A novel CT acquisition and analysis technique for breathing motion modeling

    NASA Astrophysics Data System (ADS)

    Low, Daniel A.; White, Benjamin M.; Lee, Percy P.; Thomas, David H.; Gaudio, Sergio; Jani, Shyam S.; Wu, Xiao; Lamb, James M.

    2013-06-01

    To report on a novel technique for providing artifact-free quantitative four-dimensional computed tomography (4DCT) image datasets for breathing motion modeling. Commercial clinical 4DCT methods have difficulty managing irregular breathing. The resulting images contain motion-induced artifacts that can distort structures and inaccurately characterize breathing motion. We have developed a novel scanning and analysis method for motion-correlated CT that utilizes standard repeated fast helical acquisitions, a simultaneous breathing surrogate measurement, deformable image registration, and a published breathing motion model. The motion model differs from the CT-measured motion by an average of 0.65 mm, indicating the precision of the motion model. The integral of the divergence of one of the motion model parameters is predicted to be a constant 1.11 and is found in this case to be 1.09, indicating the accuracy of the motion model. The proposed technique shows promise for providing motion-artifact free images at user-selected breathing phases, accurate Hounsfield units, and noise characteristics similar to non-4D CT techniques, at a patient dose similar to or less than current 4DCT techniques.

  11. Error modelling of quantum Hall array resistance standards

    NASA Astrophysics Data System (ADS)

    Marzano, Martina; Oe, Takehiko; Ortolano, Massimo; Callegaro, Luca; Kaneko, Nobu-Hisa

    2018-04-01

    Quantum Hall array resistance standards (QHARSs) are integrated circuits composed of interconnected quantum Hall effect elements that allow the realization of virtually arbitrary resistance values. In recent years, techniques were presented to efficiently design QHARS networks. An open problem is that of the evaluation of the accuracy of a QHARS, which is affected by contact and wire resistances. In this work, we present a general and systematic procedure for the error modelling of QHARSs, which is based on modern circuit analysis techniques and Monte Carlo evaluation of the uncertainty. As a practical example, this method of analysis is applied to the characterization of a 1 MΩ QHARS developed by the National Metrology Institute of Japan. Software tools are provided to apply the procedure to other arrays.
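
    To illustrate the Monte Carlo part of such an error model, here is a hedged sketch: a plain series chain of quantum Hall elements perturbed by random wire and contact resistances. The topology, the number of elements, and the parasitic-resistance distribution are assumptions for illustration; the actual NMIJ 1 MΩ array uses a more elaborate interconnection scheme precisely to suppress these effects.

    ```python
    # Hedged Monte Carlo sketch of parasitic-resistance effects in a series chain.
    import numpy as np

    R_K = 25812.8074593045          # von Klitzing constant, ohm
    R_H = R_K / 2                   # i = 2 plateau resistance of one element
    n_elements = 78                 # chosen only so the nominal value is near 1 MOhm
    rng = np.random.default_rng(42)

    trials = 100_000
    # assumed contact/wire resistances: uniform in [0, 1] mOhm per interconnection
    parasitic = rng.uniform(0.0, 1e-3, size=(trials, n_elements + 1))
    R_total = n_elements * R_H + parasitic.sum(axis=1)

    rel_err = (R_total - n_elements * R_H) / (n_elements * R_H)
    print(f"mean relative error {rel_err.mean():.2e}, std {rel_err.std():.2e}")
    ```

    The paper's procedure applies this idea to the full network topology via circuit analysis rather than to a bare series chain.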

  12. Model-based RSA of a femoral hip stem using surface and geometrical shape models.

    PubMed

    Kaptein, Bart L; Valstar, Edward R; Spoor, Cees W; Stoel, Berend C; Rozing, Piet M

    2006-07-01

    Roentgen stereophotogrammetry (RSA) is a highly accurate three-dimensional measuring technique for assessing micromotion of orthopaedic implants. A drawback is that markers have to be attached to the implant. Model-based techniques have been developed to avoid the use of specially marked implants. We compared two model-based RSA methods with standard marker-based RSA techniques. The first model-based RSA method used surface models, and the second method used elementary geometrical shape (EGS) models. We used a commercially available stem to perform experiments with a phantom as well as reanalysis of patient RSA radiographs. The data from the phantom experiment indicated that the accuracy and precision of the elementary geometrical shape model-based RSA method are equal to those of marker-based RSA. For model-based RSA using surface models, the accuracy is equal to that of marker-based RSA, but its precision is worse. We found no difference in accuracy and precision between the two model-based RSA techniques in clinical data. For this particular hip stem, EGS model-based RSA is a good alternative to marker-based RSA.

  13. Formulation of consumables management models. Development approach for the mission planning processor working model

    NASA Technical Reports Server (NTRS)

    Connelly, L. C.

    1977-01-01

    The mission planning processor is a user oriented tool for consumables management and is part of the total consumables subsystem management concept. The approach to be used in developing a working model of the mission planning processor is documented. The approach includes top-down design, structured programming techniques, and application of NASA approved software development standards. This development approach: (1) promotes cost effective software development, (2) enhances the quality and reliability of the working model, (3) encourages the sharing of the working model through a standard approach, and (4) promotes portability of the working model to other computer systems.

  14. On the Estimation of Standard Errors in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Philipp, Michel; Strobl, Carolin; de la Torre, Jimmy; Zeileis, Achim

    2018-01-01

    Cognitive diagnosis models (CDMs) are an increasingly popular method to assess mastery or nonmastery of a set of fine-grained abilities in educational or psychological assessments. Several inference techniques are available to quantify the uncertainty of model parameter estimates, to compare different versions of CDMs, or to check model…

  15. Incorporating principal component analysis into air quality model evaluation

    EPA Science Inventory

    The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Princi...

  16. Prediction models for clustered data: comparison of a random intercept and standard regression model

    PubMed Central

    2013-01-01

    Background When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. Conclusion The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436

  17. Prediction models for clustered data: comparison of a random intercept and standard regression model.

    PubMed

    Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne

    2013-02-15

    When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.
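
    In symbols, with notation assumed here (patient i within anesthesiologist/cluster j), the two model families being compared are:

    ```latex
    % Standard logistic regression (ignores clustering):
    \operatorname{logit}\Pr(y_{ij}=1) = \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_{ij}

    % Random intercept logistic regression (one intercept offset per cluster j):
    \operatorname{logit}\Pr(y_{ij}=1) = \beta_0 + b_j + \boldsymbol{\beta}^{\top}\mathbf{x}_{ij},
    \qquad b_j \sim \mathcal{N}(0,\tau^2)
    ```

    On the logit scale, the intra-class correlation used in the simulations corresponds to ICC = τ²/(τ² + π²/3), so larger τ² means stronger clustering by anesthesiologist.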

  18. Application of neural networks and sensitivity analysis to improved prediction of trauma survival.

    PubMed

    Hunter, A; Kennedy, L; Henry, J; Ferguson, I

    2000-05-01

    The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
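
    A hedged sketch of one simple form of input sensitivity analysis consistent with the idea described above: perturb one input at a time and observe the change in the predicted survival probability of a trained classifier. The data, model, and perturbation size are illustrative stand-ins, not the authors' method or the TRISS variables.

    ```python
    # One-at-a-time input sensitivity analysis of a trained classifier (toy data).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))                   # 5 candidate predictors (placeholder)
    logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * X[:, 4]
    y = (rng.uniform(size=1000) < 1 / (1 + np.exp(-logit))).astype(int)

    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)

    base = clf.predict_proba(X)[:, 1]
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += X[:, j].std()                    # one-standard-deviation perturbation
        delta = np.abs(clf.predict_proba(Xp)[:, 1] - base).mean()
        print(f"input {j}: mean |change in predicted probability| = {delta:.3f}")
    ```

    Inputs with near-zero mean change are candidates for removal from the scoring scheme; the paper's contribution is a sensitivity measure that also handles nominal variables.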

  19. Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick; Klein, Vladislav

    2011-01-01

    Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results and model adequacy is inferred by corroborating results. An extension is offered to this conventional approach where more general model parameter estimates and their standard errors are compared.

  20. Intrathoracic airway measurement: ex-vivo validation

    NASA Astrophysics Data System (ADS)

    Reinhardt, Joseph M.; Raab, Stephen A.; D'Souza, Neil D.; Hoffman, Eric A.

    1997-05-01

    High-resolution x-ray CT (HRCT) provides detailed images of the lungs and bronchial tree. HRCT-based imaging and quantitation of peripheral bronchial airway geometry provides a valuable tool for assessing regional airway physiology. Such measurements have been used to address physiological questions related to the mechanics of airway collapse in sleep apnea and the measurement of airway response to bronchoconstriction agents, and to evaluate and track the progression of diseases affecting the airways, such as asthma and cystic fibrosis. Significant attention has been paid to the measurement of extra- and intra-thoracic airways in 2D sections from volumetric x-ray CT. A variety of manual and semi-automatic techniques have been proposed for airway geometry measurement, including the use of standardized display window and level settings for caliper measurements, methods based on manual or semi-automatic border tracing, and more objective, quantitative approaches such as the use of the 'half-max' criterion. A recently proposed measurement technique uses a model-based deconvolution to estimate the location of the inner and outer airway walls. Validation using a plexiglass phantom indicates that the model-based method is more accurate than the half-max approach for thin-walled structures. In vivo validation of these airway measurement techniques is difficult because of the problems in identifying a reliable measurement 'gold standard.' In this paper we report on ex vivo validation of the half-max and model-based methods using an excised pig lung. The lung is sliced into thin sections of tissue and scanned using an electron beam CT scanner. Airways of interest are measured from the CT images, and also measured using a microscope and micrometer to obtain a measurement gold standard. The results show no significant difference between the model-based measurements and the gold standard, while the half-max estimates exhibited a measurement bias and were significantly different from the gold standard.
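
    For reference, here is a minimal sketch of the 'half-max' criterion applied to a one-dimensional intensity profile across an airway wall: the wall edges are placed where the intensity crosses halfway between the local minimum and maximum values. The synthetic profile and threshold convention are assumptions for illustration, not the paper's implementation.

    ```python
    # Half-max (FWHM) edge placement on a synthetic 1-D wall profile.
    import numpy as np

    x = np.linspace(0.0, 10.0, 1001)                               # position, mm
    profile = 100.0 + 900.0 * np.exp(-((x - 5.0) / 0.6) ** 2)      # bright wall, dark surround

    half = profile.min() + 0.5 * (profile.max() - profile.min())   # half-max threshold
    above = profile >= half
    inner = x[above][0]                                            # first crossing (inner edge)
    outer = x[above][-1]                                           # last crossing (outer edge)
    print(f"half-max wall thickness estimate: {outer - inner:.2f} mm")
    ```

    The bias reported for thin walls arises because blurring by the scanner point-spread function shifts these half-max crossings, which is what the model-based deconvolution method tries to account for.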

  1. The Effects of Two Types of Assertion Training on Self-Assertion, Anxiety and Self Actualization.

    ERIC Educational Resources Information Center

    Langelier, Regis

    The standard assertion training package includes a selection of techniques from behavior therapy such as modeling, behavior rehearsal, and role-playing along with lectures and discussion, bibliotherapy, and audiovisual feedback. The effects of a standard assertion training package with and without videotape feedback on self-report measures of…

  2. Extending enterprise architecture modelling with business goals and requirements

    NASA Astrophysics Data System (ADS)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten

    2011-02-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. Little support is available however for modelling the underlying motivation of EAs in terms of stakeholder concerns and the high-level goals that address these concerns. This article describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the article illustrates how EA can benefit from analysis techniques from the requirements engineering domain.

  3. Search for a standard model Higgs boson in WH --> lvbb in pp collisions at square root s = 1.96 TeV.

    PubMed

    Aaltonen, T; Adelman, J; Akimoto, T; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burke, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Chwalek, T; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cordelli, M; Cortiana, G; Cox, C A; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hays, C; Heck, M; Heijboer, A; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Hussein, M; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, H W; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; 
Lazzizzera, I; Lecompte, T; Lee, E; Lee, H S; Lee, S W; Leone, S; Lewis, J D; Lin, C-S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lucchesi, D; Luci, C; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mathis, M; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Nett, J; Neu, C; Neubauer, M S; Neubauer, S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Peiffer, T; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Renz, M; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sforza, F; Sfyrla, A; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Ttito-Guzmán, P; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Trovato, M; Tsai, S-Y; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Wagner, P; Wagner, R G; Wagner, R L; Wagner, W; Wagner-Kuhr, J; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Weinelt, J; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Würthwein, F; Xie, S; Yagil, A; Yamamoto, K; Yamaoka, J; Yang, U 
K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zhang, X; Zheng, Y; Zucchelli, S

    2009-09-04

    We present a search for a standard model Higgs boson produced in association with a W boson using 2.7 fb⁻¹ of integrated luminosity of pp̄ collision data taken at √s = 1.96 TeV. Limits on the Higgs boson production rate are obtained for masses between 100 and 150 GeV/c². Through the use of multivariate techniques, the analysis achieves an observed (expected) 95% confidence level upper limit of 5.6 (4.8) times the theoretically expected production cross section for a standard model Higgs boson with a mass of 115 GeV/c².

  4. Quantitative photoacoustic imaging in the acoustic regime using SPIM

    NASA Astrophysics Data System (ADS)

    Beigl, Alexander; Elbau, Peter; Sadiq, Kamran; Scherzer, Otmar

    2018-05-01

    While in standard photoacoustic imaging the propagation of sound waves is modeled by the standard wave equation, our approach is based on a generalized wave equation with variable sound speed and material density. In this paper we present an approach for photoacoustic imaging which, in addition to recovering the absorption density parameter, the imaging parameter of standard photoacoustics, also allows us to reconstruct the spatially varying sound speed and density of the medium. We provide analytical reconstruction formulas for all three parameters in a linearized model based on single plane illumination microscopy (SPIM) techniques.
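
    For orientation, the contrast between the two acoustic models can be written as follows; this is an assumed textbook formulation, and the exact variable-coefficient form used by the authors may differ.

    ```latex
    % Standard photoacoustic model (constant sound speed c_0, initial pressure h):
    \partial_t^2 p(x,t) - c_0^2\,\Delta p(x,t) = 0, \qquad
    p(x,0) = h(x), \quad \partial_t p(x,0) = 0

    % Generalized model with variable sound speed c(x) and density \rho(x):
    \frac{1}{c(x)^2}\,\partial_t^2 p(x,t)
      - \rho(x)\,\nabla\cdot\Big(\frac{1}{\rho(x)}\,\nabla p(x,t)\Big) = 0
    ```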

  5. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    NASA Technical Reports Server (NTRS)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing necessary controller interventions for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from operations of a major airline for a passenger transport aircraft type to the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed error prediction by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model's errors represents a reduction of over 5% compared to the RSE model errors, and of at least 10% compared to the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state-of-the-art.

  6. Predicting Flavonoid UGT Regioselectivity

    PubMed Central

    Jackson, Rhydon; Knisley, Debra; McIntosh, Cecilia; Pfeiffer, Phillip

    2011-01-01

    Machine learning was applied to a challenging and biologically significant protein classification problem: the prediction of flavonoid UGT acceptor regioselectivity from primary sequence. Novel indices characterizing graphical models of residues were proposed and found to be widely distributed among existing amino acid indices and to cluster residues appropriately. UGT subsequences biochemically linked to regioselectivity were modeled as sets of index sequences. Several learning techniques incorporating these UGT models were compared with classifications based on standard sequence alignment scores. These techniques included an application of time series distance functions to protein classification. Time series distances defined on the index sequences were used in nearest neighbor and support vector machine classifiers. Additionally, Bayesian neural network classifiers were applied to the index sequences. The experiments identified improvements over the nearest neighbor and support vector machine classifications relying on standard alignment similarity scores, as well as strong correlations between specific subsequences and regioselectivities. PMID:21747849

  7. Biodegradable Magnesium Stent Treatment of Saccular Aneurysms in a Rat Model - Introduction of the Surgical Technique.

    PubMed

    Nevzati, Edin; Rey, Jeannine; Coluccia, Daniel; D'Alonzo, Donato; Grüter, Basil; Remonda, Luca; Fandino, Javier; Marbacher, Serge

    2017-10-01

    The steady progress in the armamentarium of techniques available for endovascular treatment of intracranial aneurysms requires affordable and reproducible experimental animal models to test novel embolization materials such as stents and flow diverters. The aim of the present project was to design a safe, fast, and standardized surgical technique for stent-assisted embolization of saccular aneurysms in a rat animal model. Saccular aneurysms were created from an arterial graft from the descending aorta. The aneurysms were microsurgically transplanted through end-to-side anastomosis to the infrarenal abdominal aorta of a syngeneic male Wistar rat weighing >500 g. Following aneurysm anastomosis, aneurysm embolization was performed using balloon-expandable magnesium stents (2.5 mm x 6 mm). The stent system was introduced retrograde from the lower abdominal aorta using a modified Seldinger technique. Following a pilot series of 6 animals, a total of 67 rats were operated on according to established standard operating procedures. Mean surgery time, mean anastomosis time, and mean suturing time of the artery puncture site were 167 ± 22 min, 26 ± 6 min and 11 ± 5 min, respectively. The mortality rate was 6% (n=4). The morbidity rate was 7.5% (n=5), and in-stent thrombosis was found in 4 cases (n=2 early, n=2 late in-stent thrombosis). The results demonstrate the feasibility of standardized stent occlusion of saccular sidewall aneurysms in rats, with low rates of morbidity and mortality. This stent embolization procedure provides the opportunity to study novel concepts of stent- or flow diverter-based devices as well as the molecular aspects of healing.

  8. Manufacturing implant supported auricular prostheses by rapid prototyping techniques.

    PubMed

    Karatas, Meltem Ozdemir; Cifter, Ebru Demet; Ozenen, Didem Ozdemir; Balik, Ali; Tuncer, Erman Bulent

    2011-08-01

    Maxillofacial prostheses are usually fabricated on models obtained following impression procedures. Disadvantages of the conventional impression techniques used in the production of facial prostheses are deformation of soft tissues caused by the impression material and discomfort to the patient during the procedure. Additionally, production of a prosthesis by conventional methods takes longer. Recently, rapid prototyping techniques have been developed for extraoral prostheses in order to reduce these disadvantages of conventional methods. Rapid prototyping has the potential to simplify the procedure and decrease the laboratory work required. It eliminates the need for impression procedures and preparation of a wax model to be performed by the prosthodontists themselves. In the near future this technology will become a standard for fabricating maxillofacial prostheses.

  9. Machine learning modelling for predicting soil liquefaction susceptibility

    NASA Astrophysics Data System (ADS)

    Samui, P.; Sitharam, T. G.

    2011-01-01

    This study describes two machine learning techniques applied to predict the liquefaction susceptibility of soil based on standard penetration test (SPT) data from the 1999 Chi-Chi, Taiwan earthquake. The first technique uses an Artificial Neural Network (ANN) based on multi-layer perceptrons (MLP) trained with the Levenberg-Marquardt backpropagation algorithm. The second uses a Support Vector Machine (SVM), a classification technique firmly grounded in statistical learning theory. ANN and SVM models have been developed to predict liquefaction susceptibility using the corrected SPT blow count [(N1)60] and cyclic stress ratio (CSR). Further, an attempt has been made to simplify the models, requiring only two parameters [(N1)60 and peak ground acceleration (amax/g)], for the prediction of liquefaction susceptibility. The developed ANN and SVM models have also been applied to different case histories available globally. The paper also highlights the capability of the SVM over the ANN models.
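
    A minimal sketch of the SVM variant described above, assuming scikit-learn and a handful of invented [(N1)60, CSR] training points rather than the Chi-Chi data set:

      # Sketch: SVM classification of liquefaction susceptibility from (N1)60 and CSR.
      # Training points are invented for illustration.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      X = np.array([[5, 0.30], [8, 0.25], [12, 0.35], [15, 0.15],
                    [22, 0.20], [28, 0.30], [30, 0.10], [35, 0.25]])  # [(N1)60, CSR]
      y = np.array([1, 1, 1, 0, 0, 0, 0, 0])  # 1 = liquefied, 0 = not liquefied

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
      clf.fit(X, y)
      # Low blow count / high CSR vs. high blow count / low CSR query points.
      print(clf.predict([[10, 0.28], [32, 0.12]]))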

  10. Technical Report Series on Global Modeling and Data Assimilation. Volume 16; Filtering Techniques on a Stretched Grid General Circulation Model

    NASA Technical Reports Server (NTRS)

    Takacs, Lawrence L.; Sawyer, William; Suarez, Max J. (Editor); Fox-Rabinowitz, Michael S.

    1999-01-01

    This report documents the techniques used to filter quantities on a stretched grid general circulation model. Standard high-latitude filtering techniques (e.g., using an FFT (Fast Fourier Transform) to decompose and filter unstable harmonics at selected latitudes) applied on a stretched grid are shown to produce significant distortions of the prognostic state when used to control instabilities near the pole. A new filtering technique is developed which accurately accounts for the non-uniform grid by computing the eigenvectors and eigenfrequencies associated with the stretching. A filter function, constructed to selectively damp those modes whose associated eigenfrequencies exceed some critical value, is used to construct a set of grid-spaced weights which are shown to effectively filter without distortion. Both offline and GCM (General Circulation Model) experiments are shown using the new filtering technique. Finally, a brief examination is also made on the impact of applying the Shapiro filter on the stretched grid.
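
    The following Python sketch illustrates the filtering idea described above on a toy one-dimensional stretched grid: a difference operator is diagonalized, and only the modes whose eigenfrequencies exceed a critical value are damped. The grid stretching, the operator symmetrization, and the damping function are illustrative choices, not the GCM implementation.

      # Sketch: eigenmode-based selective damping on a non-uniform grid.
      import numpy as np

      x = np.cumsum(np.linspace(1.0, 3.0, 64))          # hypothetical stretched grid
      f = np.sin(x / 5.0) + 0.3 * np.random.default_rng(1).normal(size=x.size)

      # Second-difference (Laplacian-like) operator on the non-uniform grid.
      n = x.size
      L = np.zeros((n, n))
      for i in range(1, n - 1):
          hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
          L[i, i - 1] = 2.0 / (hl * (hl + hr))
          L[i, i + 1] = 2.0 / (hr * (hl + hr))
          L[i, i] = -(L[i, i - 1] + L[i, i + 1])

      w2, V = np.linalg.eigh(-(L + L.T) / 2.0)          # symmetrized for this sketch
      omega = np.sqrt(np.clip(w2, 0.0, None))           # eigenfrequencies
      omega_crit = 0.5 * omega.max()
      damp = np.where(omega > omega_crit, (omega_crit / np.maximum(omega, 1e-12)) ** 2, 1.0)

      f_filtered = V @ (damp * (V.T @ f))               # damp fast modes, keep slow ones
      print("max fast-mode amplitude before:", float(np.abs(V.T @ f)[omega > omega_crit].max()))
      print("max fast-mode amplitude after: ", float(np.abs(V.T @ f_filtered)[omega > omega_crit].max()))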

  11. Wavelet-Bayesian inference of cosmic strings embedded in the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    McEwen, J. D.; Feeney, S. M.; Peiris, H. V.; Wiaux, Y.; Ringeval, C.; Bouchet, F. R.

    2017-12-01

    Cosmic strings are a well-motivated extension to the standard cosmological model and could induce a subdominant component in the anisotropies of the cosmic microwave background (CMB), in addition to the standard inflationary component. The detection of strings, while observationally challenging, would provide a direct probe of physics at very high-energy scales. We develop a framework for cosmic string inference from observations of the CMB made over the celestial sphere, performing a Bayesian analysis in wavelet space where the string-induced CMB component has distinct statistical properties to the standard inflationary component. Our wavelet-Bayesian framework provides a principled approach to compute the posterior distribution of the string tension Gμ and the Bayesian evidence ratio comparing the string model to the standard inflationary model. Furthermore, we present a technique to recover an estimate of any string-induced CMB map embedded in observational data. Using Planck-like simulations, we demonstrate the application of our framework and evaluate its performance. The method is sensitive to Gμ ∼ 5 × 10^-7 for Nambu-Goto string simulations that include an integrated Sachs-Wolfe contribution only and do not include any recombination effects, before any parameters of the analysis are optimized. The sensitivity of the method compares favourably with other techniques applied to the same simulations.

  12. Using Multilevel Factor Analysis with Clustered Data: Investigating the Factor Structure of the Positive Values Scale

    ERIC Educational Resources Information Center

    Huang, Francis L.; Cornell, Dewey G.

    2016-01-01

    Advances in multilevel modeling techniques now make it possible to investigate the psychometric properties of instruments using clustered data. Factor models that overlook the clustering effect can lead to underestimated standard errors, incorrect parameter estimates, and model fit indices. In addition, factor structures may differ depending on…

  13. Standards for data acquisition and software-based analysis of in vivo electroencephalography recordings from animals. A TASK1-WG5 report of the AES/ILAE Translational Task Force of the ILAE.

    PubMed

    Moyer, Jason T; Gnatkovsky, Vadym; Ono, Tomonori; Otáhal, Jakub; Wagenaar, Joost; Stacey, William C; Noebels, Jeffrey; Ikeda, Akio; Staley, Kevin; de Curtis, Marco; Litt, Brian; Galanopoulou, Aristea S

    2017-11-01

    Electroencephalography (EEG)-the direct recording of the electrical activity of populations of neurons-is a tremendously important tool for diagnosing, treating, and researching epilepsy. Although standard procedures for recording and analyzing human EEG exist and are broadly accepted, there are no such standards for research in animal models of seizures and epilepsy-recording montages, acquisition systems, and processing algorithms may differ substantially among investigators and laboratories. The lack of standard procedures for acquiring and analyzing EEG from animal models of epilepsy hinders the interpretation of experimental results and reduces the ability of the scientific community to efficiently translate new experimental findings into clinical practice. Accordingly, the intention of this report is twofold: (1) to review current techniques for the collection and software-based analysis of neural field recordings in animal models of epilepsy, and (2) to offer pertinent standards and reporting guidelines for this research. Specifically, we review current techniques for signal acquisition, signal conditioning, signal processing, data storage, and data sharing, and include applicable recommendations to standardize collection and reporting. We close with a discussion of challenges and future opportunities, and include a supplemental report of currently available acquisition systems and analysis tools. This work represents a collaboration on behalf of the American Epilepsy Society/International League Against Epilepsy (AES/ILAE) Translational Task Force (TASK1-Workgroup 5), and is part of a larger effort to harmonize video-EEG interpretation and analysis methods across studies using in vivo and in vitro seizure and epilepsy models. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  14. Insertion device calculations with mathematica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carr, R.; Lidia, S.

    1995-02-01

    The design of accelerator insertion devices such as wigglers and undulators has usually been aided by numerical modeling on digital computers, using code in high level languages like Fortran. In the present era, there are higher level programming environments like IDL®, MatLab®, and Mathematica® in which these calculations may be performed by writing much less code, and in which standard mathematical techniques are very easily used. The authors present a suite of standard insertion device modeling routines in Mathematica to illustrate the new techniques. These routines include a simple way to generate magnetic fields using blocks of CSEM materials, trajectory solutions from the Lorentz force equations for given magnetic fields, Bessel function calculations of radiation for wigglers and undulators, and general radiation calculations for undulators.

  15. A Review of Surface Water Quality Models

    PubMed Central

    Li, Shibei; Jia, Peng; Qi, Changjun; Ding, Feng

    2013-01-01

    Surface water quality models can be useful tools to simulate and predict the levels, distributions, and risks of chemical pollutants in a given water body. The modeling results from these models under different pollution scenarios are very important components of environmental impact assessment and can provide a basis and technical support for environmental management agencies to make sound decisions. Whether the model results are right or not can affect the reasonableness and scientific validity of approved construction projects and the effectiveness of pollution control measures. We reviewed the development of surface water quality models at three stages and analyzed the suitability, precision, and methods of different models. Standardization of water quality models can help environmental management agencies guarantee consistency in the application of water quality models for regulatory purposes. We summarized the status of standardization of these models in developed countries and put forward feasible measures for the standardization of surface water quality models, especially in developing countries. PMID:23853533

  16. Improvements to the YbF electron electric dipole moment experiment

    NASA Astrophysics Data System (ADS)

    Sauer, B. E.; Rabey, I. M.; Devlin, J. A.; Tarbutt, M. R.; Ho, C. J.; Hinds, E. A.

    2017-04-01

    The standard model of particle physics predicts that the permanent electric dipole moment (EDM) of the electron is very nearly zero. Many extensions to the standard model predict an electron EDM just below current experimental limits. We are currently working to improve the sensitivity of the Imperial College YbF experiment. We have implemented combined laser-radiofrequency pumping techniques which both increase the number of molecules that participate in the EDM experiment and increase the probability of detection. Combined, these techniques give nearly two orders of magnitude increase in the experimental sensitivity. At this enhanced sensitivity, magnetic effects which were previously negligible become important. We have developed a new way to construct the electrodes for the electric field plates which minimizes the effect of magnetic Johnson noise. The new YbF experiment is expected to be comparable in sensitivity to the most sensitive measurements of the electron EDM to date. We will also discuss laser cooling techniques which promise an even larger increase in sensitivity.

  17. Reduced-order modeling for hyperthermia: an extended balanced-realization-based approach.

    PubMed

    Mattingly, M; Bailey, E A; Dutton, A W; Roemer, R B; Devasia, S

    1998-09-01

    Accurate thermal models are needed in hyperthermia cancer treatments for such tasks as actuator and sensor placement design, parameter estimation, and feedback temperature control. The complexity of the human body produces full-order models which are too large for effective execution of these tasks, making use of reduced-order models necessary. However, standard balanced-realization (SBR)-based model reduction techniques require a priori knowledge of the particular placement of actuators and sensors for model reduction. Since placement design is intractable (computationally) on the full-order models, SBR techniques must use ad hoc placements. To alleviate this problem, an extended balanced-realization (EBR)-based model-order reduction approach is presented. The new technique allows model order reduction to be performed over all possible placement designs and does not require ad hoc placement designs. It is shown that models obtained using the EBR method are more robust to intratreatment changes in the placement of the applied power field than those models obtained using the SBR method.
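
    For reference, the sketch below implements standard square-root balanced truncation (the SBR baseline the abstract compares against) on a small random stable system using SciPy; the extended (EBR) formulation of the paper is not reproduced.

      # Sketch: standard balanced truncation (SBR baseline) on a random stable system.
      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

      rng = np.random.default_rng(0)
      n, r = 8, 3                                     # full and reduced orders
      A = rng.normal(size=(n, n))
      A = A - (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(n)  # shift to make A stable
      B = rng.normal(size=(n, 1))
      C = rng.normal(size=(1, n))

      Wc = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
      Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian

      eps = 1e-10 * np.eye(n)                         # tiny jitter for numerical safety
      Lc = cholesky(Wc + eps, lower=True)
      Lo = cholesky(Wo + eps, lower=True)
      U, s, Vt = svd(Lo.T @ Lc)                       # s = Hankel singular values
      T = Lc @ Vt.T @ np.diag(s ** -0.5)              # balancing transformation
      Tinv = np.diag(s ** -0.5) @ U.T @ Lo.T

      Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T      # balanced realization
      Ar, Br, Cr = Ab[:r, :r], Bb[:r], Cb[:, :r]      # keep r dominant states
      print("Hankel singular values:", np.round(s, 4))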

  18. Muon (g-2) Technical Design Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grange, J.

    The Muon (g-2) Experiment, E989 at Fermilab, will measure the muon anomalous magnetic moment a factor-of-four more precisely than was done in E821 at the Brookhaven National Laboratory AGS. The E821 result appears to be greater than the Standard-Model prediction by more than three standard deviations. When combined with expected improvement in the Standard-Model hadronic contributions, E989 should be able to determine definitively whether or not the E821 result is evidence for physics beyond the Standard Model. After a review of the physics motivation and the basic technique, which will use the muon storage ring built at BNL and now relocated to Fermilab, the design of the new experiment is presented. This document was created in partial fulfillment of the requirements necessary to obtain DOE CD-2/3 approval.

  19. Nrf2: A Novel Biomarker of Disease Severity and Target for Therapeutic Intervention in Multiple Sclerosis

    DTIC Science & Technology

    2014-10-01

    imaging technique used to capture T cell/APC interaction and infiltration in CNS during the disease course of EAE; and finally 3) characterize the...period, we aim to understand the mechanism of APC/T cell interaction by standardizing the available mouse model and imaging techniques in our lab...resulted in the development of new triterpenoids, mouse imaging techniques and biochemistry and chemical library construction. For example, work

  20. How Does One Assess the Accuracy of Academic Success Predictors? ROC Analysis Applied to University Entrance Factors

    ERIC Educational Resources Information Center

    Vivo, Juana-Maria; Franco, Manuel

    2008-01-01

    This article attempts to present a novel application of a method of measuring accuracy for academic success predictors that could be used as a standard. This procedure is known as the receiver operating characteristic (ROC) curve, which comes from statistical decision techniques. The statistical prediction techniques provide predictor models and…
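
    A minimal sketch of the ROC procedure with scikit-learn, using invented entrance scores and pass/fail outcomes rather than the study's data:

      # Sketch: ROC analysis of an academic-success predictor (invented data).
      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      entrance_score = np.array([4.1, 5.0, 5.6, 6.2, 6.8, 7.0, 7.5, 8.1, 8.7, 9.3])
      success = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])   # 1 = succeeded at university

      fpr, tpr, thresholds = roc_curve(success, entrance_score)
      print(f"AUC = {roc_auc_score(success, entrance_score):.2f}")

      # A cut-off balancing sensitivity and specificity (Youden's J statistic).
      best = np.argmax(tpr - fpr)
      print(f"suggested cut-off: {thresholds[best]:.1f}  (TPR={tpr[best]:.2f}, FPR={fpr[best]:.2f})")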

  1. Meta-Analysis of Free-Response Studies, 1992-2008: Assessing the Noise Reduction Model in Parapsychology

    ERIC Educational Resources Information Center

    Storm, Lance; Tressoldi, Patrizio E.; Di Risio, Lorenzo

    2010-01-01

    We report the results of meta-analyses on 3 types of free-response study: (a) ganzfeld (a technique that enhances a communication anomaly referred to as "psi"); (b) nonganzfeld noise reduction using alleged psi-enhancing techniques such as dream psi, meditation, relaxation, or hypnosis; and (c) standard free response (nonganzfeld, no noise…

  2. Thermal sensing of cryogenic wind tunnel model surfaces Evaluation of silicon diodes

    NASA Technical Reports Server (NTRS)

    Daryabeigi, K.; Ash, R. L.; Dillon-Townes, L. A.

    1986-01-01

    Different sensors and installation techniques for surface temperature measurement of cryogenic wind tunnel models were investigated. Silicon diodes were selected for further consideration because of their good inherent accuracy. Their average absolute temperature deviation in comparison tests with standard platinum resistance thermometers was found to be 0.2 K in the range from 125 to 273 K. Subsurface temperature measurement was selected as the installation technique in order to minimize aerodynamic interference. Temperature distortion caused by an embedded silicon diode was studied numerically.

  3. Thermal sensing of cryogenic wind tunnel model surfaces - Evaluation of silicon diodes

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Ash, Robert L.; Dillon-Townes, Lawrence A.

    1986-01-01

    Different sensors and installation techniques for surface temperature measurement of cryogenic wind tunnel models were investigated. Silicon diodes were selected for further consideration because of their good inherent accuracy. Their average absolute temperature deviation in comparison tests with standard platinum resistance thermometers was found to be 0.2 K in the range from 125 to 273 K. Subsurface temperature measurement was selected as the installation technique in order to minimize aerodynamic interference. Temperature distortion caused by an embedded silicon diode was studied numerically.

  4. Error analysis of Dobson spectrophotometer measurements of the total ozone content

    NASA Technical Reports Server (NTRS)

    Holland, A. C.; Thomas, R. W. L.

    1975-01-01

    A study of techniques for measuring atmospheric ozone is reported. This study represents the second phase of a program designed to improve techniques for the measurement of atmospheric ozone. This phase of the program studied the sensitivity of Dobson direct sun measurements and the ozone amounts inferred from those measurements to variation in the atmospheric temperature profile. The study used the plane-parallel Monte Carlo model developed and tested under the initial phase of this program, and a series of standard model atmospheres.

  5. Dynamic model inversion techniques for breath-by-breath measurement of carbon dioxide from low bandwidth sensors.

    PubMed

    Sivaramakrishnan, Shyam; Rajamani, Rajesh; Johnson, Bruce D

    2009-01-01

    Respiratory CO2 measurement (capnography) is an important diagnostic tool that lacks inexpensive and wearable sensors. This paper develops techniques to enable the use of inexpensive but slow CO2 sensors for breath-by-breath tracking of CO2 concentration. This is achieved by mathematically modeling the dynamic response and using model-inversion techniques to predict the input CO2 concentration from the slowly varying output. Experiments are designed to identify the model dynamics and extract the relevant model parameters for a solid-state room-monitoring CO2 sensor. A second-order model that accounts for flow through the sensor's filter and casing is found to be accurate in describing the sensor's slow response. The resulting estimate is compared with a standard-of-care respiratory CO2 analyzer and shown to effectively track variation in breath-by-breath CO2 concentration. This methodology is potentially useful for measuring fast-varying inputs to any slow sensor.
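
    The sketch below illustrates the general idea of model inversion for a slow sensor: a second-order response (two cascaded first-order lags) to a breath-like CO2 waveform is simulated, and the input is recovered by regularized deconvolution of the discretized model. The time constants, noise level, and regularization weight are invented, not the identified parameters of the paper.

      # Sketch: recover a fast input from a slow second-order sensor by
      # regularized deconvolution. All parameters are illustrative.
      import numpy as np
      from scipy.linalg import toeplitz

      dt, T = 0.05, 30.0
      t = np.arange(0.0, T, dt)
      u_true = 0.5 + 4.5 * (np.sin(2 * np.pi * t / 4.0) > 0)       # square "breaths" (% CO2)

      tau1, tau2 = 2.0, 0.7                                        # hypothetical sensor lags [s]
      a1, a2 = np.exp(-dt / tau1), np.exp(-dt / tau2)

      def sensor(u):
          """Discrete cascade of two first-order low-pass stages."""
          x1 = y = 0.0
          out = np.empty(len(u))
          for k, uk in enumerate(u):
              x1 = a1 * x1 + (1.0 - a1) * uk
              y = a2 * y + (1.0 - a2) * x1
              out[k] = y
          return out

      h = sensor(np.r_[1.0, np.zeros(len(t) - 1)])                 # discrete impulse response
      H = toeplitz(h, np.r_[h[0], np.zeros(len(t) - 1)])           # lower-triangular convolution matrix
      y_meas = sensor(u_true) + np.random.default_rng(0).normal(0, 0.02, len(t))

      lam = 1e-2                                                   # Tikhonov regularization weight
      u_hat = np.linalg.solve(H.T @ H + lam * np.eye(len(t)), H.T @ y_meas)
      print("reconstruction RMS error (% CO2):", round(float(np.sqrt(np.mean((u_hat - u_true) ** 2))), 3))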

  6. Standardized Mean Differences in Two-Level Cross-Classified Random Effects Models

    ERIC Educational Resources Information Center

    Lai, Mark H. C.; Kwok, Oi-Man

    2014-01-01

    Multilevel modeling techniques are becoming more popular in handling data with multilevel structure in educational and behavioral research. Recently, researchers have paid more attention to cross-classified data structure that naturally arises in educational settings. However, unlike traditional single-level research, methodological studies about…

  7. The Consolidation/Transition Model in Moral Reasoning Development.

    ERIC Educational Resources Information Center

    Walker, Lawrence J.; Gustafson, Paul; Hennig, Karl H.

    2001-01-01

    This longitudinal study with 62 children and adolescents examined the validity of the consolidation/transition model in the context of moral reasoning development. Results of standard statistical and Bayesian techniques supported the hypotheses regarding cyclical patterns of change and predictors of stage transition, and demonstrated the utility…

  8. Fluoroscopic removal of retrievable self-expandable metal stents in patients with malignant oesophageal strictures: Experience with a non-endoscopic removal system.

    PubMed

    Kim, Pyeong Hwa; Song, Ho-Young; Park, Jung-Hoon; Zhou, Wei-Zhong; Na, Han Kyu; Cho, Young Chul; Jun, Eun Jung; Kim, Jun Ki; Kim, Guk Bae

    2017-03-01

    To evaluate clinical outcomes of fluoroscopic removal of retrievable self-expandable metal stents (SEMSs) for malignant oesophageal strictures, to compare clinical outcomes of three different removal techniques, and to identify predictive factors of successful removal by the standard technique (primary technical success). A total of 137 stents were removed from 128 patients with malignant oesophageal strictures. Primary overall technical success and removal-related complications were evaluated. Logistic regression models were constructed to identify predictive factors of primary technical success. Primary technical success rate was 78.8 % (108/137). Complications occurred in six (4.4 %) cases. Stent location in the upper oesophagus (P=0.004), stricture length over 8 cm (P=0.030), and proximal granulation tissue (P<0.001) were negative predictive factors of primary technical success. If granulation tissue was present at the proximal end, eversion technique was more frequently required (P=0.002). Fluoroscopic removal of retrievable SEMSs for malignant oesophageal strictures using three different removal techniques appeared to be safe and easy. The standard technique is safe and effective in the majority of patients. The presence of proximal granulation tissue, stent location in the upper oesophagus, and stricture length over 8 cm were negative predictive factors for primary technical success by standard extraction and may require a modified removal technique. • Fluoroscopic retrievable SEMS removal is safe and effective. • Standard removal technique by traction is effective in the majority of patients. • Three negative predictive factors of primary technical success were identified. • Caution should be exercised during the removal in those situations. • Eversion technique is effective in cases of proximal granulation tissue.

  9. Proceedings of the 21st DOE/NRC Nuclear Air Cleaning Conference; Sessions 1--8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    First, M.W.

    1991-02-01

    Separate abstracts have been prepared for the papers presented at the meeting on nuclear facility air cleaning technology in the following specific areas of interest: air cleaning technologies for the management and disposal of radioactive wastes; Canadian waste management program; radiological health effects models for nuclear power plant accident consequence analysis; filter testing; US standard codes on nuclear air and gas treatment; European community nuclear codes and standards; chemical processing off-gas cleaning; incineration and vitrification; adsorbents; nuclear codes and standards; mathematical modeling techniques; filter technology; safety; containment system venting; and nuclear air cleaning programs around the world. (MB)

  10. Statistical shape model-based reconstruction of a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng Guoyan

    2010-04-15

    Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark-based initialization. Depending on the surface-based matching techniques, the reconstruction errors were slightly different. When a surface-based iterative affine registration was used, an average reconstruction error of 1.6 mm was observed. This error increased to 1.9 mm when a surface-based iterative scaled rigid registration was used. Conclusions: It is feasible to reconstruct a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph using the present approach. The unknown scale of the reconstructed model can be estimated by performing a surface-based 3D/3D matching.

  11. Modeling Payload Stowage Impacts on Fire Risks On-Board the International Space Station

    NASA Technical Reports Server (NTRS)

    Anton, Kellie E.; Brown, Patrick F.

    2010-01-01

    The purpose of this presentation is to determine the risks of fire on-board the ISS due to non-standard stowage. ISS stowage is constantly being reexamined for optimality. Non-standard stowage involves stowing items outside of rack drawers, and fire risk is a key concern that is heavily mitigated. A methodology is needed to capture the fire risk due to non-standard stowage. The contents include: 1) Fire Risk Background; 2) General Assumptions; 3) Modeling Techniques; 4) Event Sequence Diagram (ESD); 5) Qualitative Fire Analysis; 6) Sample Qualitative Results for Fire Risk; 7) Qualitative Stowage Analysis; 8) Sample Qualitative Results for Non-Standard Stowage; and 9) Quantitative Analysis Basic Event Data.

  12. A general diagnostic model applied to language testing data.

    PubMed

    von Davier, Matthias

    2008-11-01

    Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.

  13. A new simplified volume-loaded heterotopic rabbit heart transplant model with improved techniques and a standard operating procedure.

    PubMed

    Lu, Wei; Zheng, Jun; Pan, Xu-Dong; Li, Bing; Zhang, Jin-Wei; Wang, Long-Fei; Sun, Li-Zhong

    2015-04-01

    The classic non-working (NW) heterotopic heart transplant (HTX) model in rodents has been widely used for research related to immunology, graft rejection, evaluation of immunosuppressive therapies, and organ preservation. However, unloaded models are considered unsuitable for some studies. Accordingly, we have constructed a volume-loaded (VL) model using a new and simple technique. Thirty male New Zealand White rabbits were randomly divided into two groups, group NW with 14 rabbits and group VL with 16 rabbits, which served as donors and recipients. We created a large and nonrestrictive shunt to provide the left heart with a sufficient preload. The donor superior vena cava and ascending aorta (AO) were anastomosed to the recipient abdominal aorta (AAO) and inferior vena cava (IVC), respectively. No animals suffered from paralysis, pneumonia, or lethal bleeding. Recipient mortality and morbidity were 6.7% (1/15) and 13.3% (2/15), respectively. The cold ischemia time in group VL was slightly longer than that in group NW. The maximal aortic velocity (MAV) of the donor heart was approximately equivalent to half that of the native heart in group VL. Moreover, a similar result was obtained for the late diastolic mitral inflow velocity between the donor heart and the native heart in group VL. Echocardiography (ECHO) showed a bidirectional flow in the donor SVC of the VL model, inflow during diastole and outflow during systole. PET-CT imaging showed that the standardized uptake value (SUV) of the allograft was equal to that of the native heart in both groups on postoperative day 3. We have developed a new VL model in rabbits, which imitates a native heart hemodynamically while requiring only a minor additional procedure. The surgical technique is simple compared with currently used HTX models. We also developed a standard operating procedure that significantly improved graft and recipient survival rates. This study may be useful for investigations in transplantation in which a working model is required.

  14. Towards Application of NASA Standard for Models and Simulations in Aeronautical Design Process

    NASA Astrophysics Data System (ADS)

    Vincent, Luc; Dunyach, Jean-Claude; Huet, Sandrine; Pelissier, Guillaume; Merlet, Joseph

    2012-08-01

    Even powerful computational techniques like simulation have limitations in their validity domain. Consequently, using simulation models requires caution to avoid making biased design decisions for new aeronautical products on the basis of inadequate simulation results. The fidelity, accuracy, and validity of simulation models must therefore be monitored in context throughout the design phases to build confidence in achieving the goals of modelling and simulation. In the CRESCENDO project, we adapt the Credibility Assessment Scale method from the NASA standard for models and simulations, developed for the space programme, to aircraft design in order to assess the quality of simulations. The proposed eight quality assurance metrics aggregate information to indicate the level of confidence in results. They are displayed in a management dashboard and can secure design trade-off decisions at programme milestones. The application of this technique is illustrated in an aircraft design context with a specific thermal finite element analysis. This use case shows how to judge the fitness-for-purpose of simulation as a virtual testing means and then green-light the continuation of the Simulation Lifecycle Management (SLM) process.

  15. Improving automation standards via semantic modelling: Application to ISA88.

    PubMed

    Dombayci, Canan; Farreres, Javier; Rodríguez, Horacio; Espuña, Antonio; Graells, Moisès

    2017-03-01

    Standardization is essential for automation. Extensibility, scalability, and reusability are important features for automation software and rely on the efficient modelling of the addressed systems. The work presented here is part of the ongoing development of a methodology for semi-automatic ontology construction from technical documents. The main aim of this work is to systematically check the consistency of technical documents and to support the improvement of technical document consistency. The formalization of conceptual models and the subsequent writing of technical standards are analyzed simultaneously, and guidelines are proposed for application to future technical standards. Three paradigms are discussed for the development of domain ontologies from technical documents, starting from the current state of the art, continuing with the intermediate method presented and used in this paper, and ending with the suggested paradigm for the future. The ISA88 Standard is taken as a representative case study. Linguistic techniques from the semi-automatic ontology construction methodology are applied to the ISA88 Standard, and different modelling and standardization aspects that are worth sharing with the automation community are addressed. This study discusses different paradigms for developing and sharing conceptual models for the subsequent development of automation software, along with presenting the systematic consistency checking method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Manufacturing Implant Supported Auricular Prostheses by Rapid Prototyping Techniques

    PubMed Central

    Karatas, Meltem Ozdemir; Cifter, Ebru Demet; Ozenen, Didem Ozdemir; Balik, Ali; Tuncer, Erman Bulent

    2011-01-01

    Maxillofacial prostheses are usually fabricated on models obtained following impression procedures. Disadvantages of the conventional impression techniques used in the production of facial prostheses are deformation of soft tissues caused by the impression material and discomfort to the patient during the procedure. Additionally, production of a prosthesis by conventional methods takes longer. Recently, rapid prototyping techniques have been developed for extraoral prostheses in order to reduce these disadvantages of conventional methods. Rapid prototyping has the potential to simplify the procedure and decrease the laboratory work required. It eliminates the need for impression procedures and preparation of a wax model to be performed by the prosthodontists themselves. In the near future this technology will become a standard for fabricating maxillofacial prostheses. PMID:21912504

  17. Explicit treatment for Dirichlet, Neumann and Cauchy boundary conditions in POD-based reduction of groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2018-05-01

    In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields sufficiently accurate for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D-groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
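
    As background, the sketch below shows plain snapshot-based POD (SVD of a snapshot matrix, truncation to the dominant modes, and projection) on synthetic head fields; the boundary-condition splitting proposed in the paper is not reproduced here.

      # Sketch: snapshot-based POD reduction of synthetic transient "head fields".
      import numpy as np

      rng = np.random.default_rng(0)
      nx, nt = 200, 60
      x = np.linspace(0.0, 1.0, nx)
      # Synthetic snapshots: a few smooth spatial patterns with time-varying coefficients.
      snapshots = (np.outer(np.sin(np.pi * x), np.exp(-0.05 * np.arange(nt)))
                   + 0.3 * np.outer(np.sin(3 * np.pi * x), np.cos(0.2 * np.arange(nt)))
                   + 0.01 * rng.normal(size=(nx, nt)))

      U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s ** 2) / np.sum(s ** 2)
      r = int(np.searchsorted(energy, 0.999) + 1)        # modes capturing 99.9% of the energy
      Phi = U[:, :r]                                     # POD basis

      h_full = snapshots[:, -1]                          # one full-order state
      h_reduced = Phi.T @ h_full                         # r coefficients instead of nx values
      h_approx = Phi @ h_reduced
      print(f"kept {r} POD modes, reconstruction error = {np.linalg.norm(h_full - h_approx):.2e}")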

  18. Simultaneous measurement of the Young's modulus and the Poisson ratio of thin elastic layers.

    PubMed

    Gross, Wolfgang; Kress, Holger

    2017-02-07

    The behavior of cells and tissue is greatly influenced by the mechanical properties of their environment. For studies on the interactions between cells and soft matrices, especially those applying traction force microscopy, the characterization of the mechanical properties of thin substrate layers is essential. Various techniques to measure the elastic modulus are available. Methods to accurately measure the Poisson ratio of such substrates are rare and often imply either a combination of multiple techniques or additional equipment which is not needed for the actual biological studies. Here we describe a novel technique to measure both parameters, the Young's modulus and the Poisson ratio, in a single experiment. The technique requires only a standard inverted epifluorescence microscope. As a model system, we chose cross-linked polyacrylamide and poly-N-isopropylacrylamide hydrogels which are known to obey Hooke's law. We place millimeter-sized steel spheres on the substrates which indent the surface. The data are evaluated using a previously published model which takes finite thickness effects of the substrate layer into account. We demonstrate experimentally for the first time that the application of the model allows the simultaneous determination of both the Young's modulus and the Poisson ratio. Since the method is easy to adapt and comes without the need for special equipment, we envision the technique becoming a standard tool for the characterization of substrates for a wide range of investigations of cell and tissue behavior in various mechanical environments, as well as other samples, including biological materials.

  19. How to compare cross-lagged associations in a multilevel autoregressive model.

    PubMed

    Schuurman, Noémi K; Ferrer, Emilio; de Boer-Sonnenschein, Mieke; Hamaker, Ellen L

    2016-06-01

    By modeling variables over time it is possible to investigate the Granger-causal cross-lagged associations between variables. By comparing the standardized cross-lagged coefficients, the relative strength of these associations can be evaluated in order to determine important driving forces in the dynamic system. The aim of this study was twofold: first, to illustrate the added value of a multilevel multivariate autoregressive modeling approach for investigating these associations over more traditional techniques; and second, to discuss how the coefficients of the multilevel autoregressive model should be standardized for comparing the strength of the cross-lagged associations. The hierarchical structure of multilevel multivariate autoregressive models complicates standardization, because subject-based statistics or group-based statistics can be used to standardize the coefficients, and each method may result in different conclusions. We argue that in order to make a meaningful comparison of the strength of the cross-lagged associations, the coefficients should be standardized within persons. We further illustrate the bivariate multilevel autoregressive model and the standardization of the coefficients, and we show that disregarding individual differences in dynamics can prove misleading, by means of an empirical example on experienced competence and exhaustion in persons diagnosed with burnout. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
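
    The following sketch illustrates the within-person standardization argued for above: each person's raw cross-lagged coefficient is rescaled by the ratio of that person's own standard deviations of predictor and outcome. The simulated time series and coefficients are illustrative only.

      # Sketch: within-person standardization of a cross-lagged coefficient.
      import numpy as np

      rng = np.random.default_rng(3)
      n_persons, n_time = 4, 100
      phi_true = np.array([[0.4, 0.0],
                           [0.3, 0.5]])   # entry (2,1): true cross-lagged effect of x on y

      for person in range(n_persons):
          # Simulate a simple bivariate first-order autoregressive series for this person.
          z = np.zeros((n_time, 2))
          for t in range(1, n_time):
              z[t] = phi_true @ z[t - 1] + rng.normal(0.0, 1.0, 2)
          x, y = z[:, 0], z[:, 1]

          # Raw cross-lagged estimate: least squares of y[t] on (x[t-1], y[t-1]).
          X = np.column_stack([x[:-1], y[:-1]])
          b_xy = np.linalg.lstsq(X, y[1:], rcond=None)[0][0]

          # Within-person standardization: rescale by this person's own sd(x)/sd(y).
          b_xy_std = b_xy * x.std(ddof=1) / y.std(ddof=1)
          print(f"person {person}: raw = {b_xy:.2f}, within-person standardized = {b_xy_std:.2f}")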

  20. 2D Flood Modelling Using Advanced Terrain Analysis Techniques And A Fully Continuous DEM-Based Rainfall-Runoff Algorithm

    NASA Astrophysics Data System (ADS)

    Nardi, F.; Grimaldi, S.; Petroselli, A.

    2012-12-01

    Remotely sensed Digital Elevation Models (DEMs), largely available at high resolution, and advanced terrain analysis techniques built in Geographic Information Systems (GIS), provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins paving the way for flood mapping at the global scale. This research is based on the implementation of a fully continuous hydrologic-hydraulic modelling optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model producing flow time series that are routed along the channel using a bidimensional hydraulic model for the detailed representation of the inundation process. The main advantage of the proposed approach is the characterization of the entire physical process during hydrologic extreme events of channel runoff generation, propagation, and overland flow within the floodplain domain. This physically-based model neglects the need for synthetic design hyetograph and hydrograph estimation that constitute the main source of subjective analysis and uncertainty of standard methods for flood mapping. Selected case studies show results and performances of the proposed procedure as respect to standard event-based approaches.

  1. Evaluating uses of data mining techniques in propensity score estimation: a simulation study.

    PubMed

    Setoguchi, Soko; Schneeweiss, Sebastian; Brookhart, M Alan; Glynn, Robert J; Cook, E Francis

    2008-06-01

    In propensity score modeling, it is standard practice to optimize the prediction of exposure status based on the covariate information. In a simulation study, we examined in which situations analyses based on various types of exposure propensity score (EPS) models using data mining techniques such as recursive partitioning (RP) and neural networks (NN) produce unbiased and/or efficient results. We simulated data for a hypothetical cohort study (n = 2000) with a binary exposure/outcome and 10 binary/continuous covariates, with seven scenarios differing by non-linear and/or non-additive associations between exposure and covariates. EPS models used logistic regression (LR) (all possible main effects), RP1 (without pruning), RP2 (with pruning), and NN. We calculated c-statistics (C), standard errors (SE), and bias of exposure-effect estimates from outcome models for the PS-matched dataset. Data mining techniques yielded higher C than LR (mean: NN, 0.86; RP1, 0.79; RP2, 0.72; and LR, 0.76). SE tended to be greater in models with higher C. Overall bias was small for each strategy, although NN estimates tended to be the least biased. C was not correlated with the magnitude of bias (correlation coefficient [COR] = -0.3, p = 0.1) but was correlated with increased SE (COR = 0.7, p < 0.001). Effect estimates from EPS models by simple LR were generally robust. NN models generally provided the least numerically biased estimates. C was not associated with the magnitude of bias but was associated with increased SE.
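
    A minimal sketch of the logistic-regression EPS strategy (fit exposure on covariates, report the c-statistic, and form a crude 1:1 nearest-neighbour match on the score), using a simulated cohort rather than the paper's simulation design:

      # Sketch: exposure propensity score by logistic regression, c-statistic,
      # and greedy 1:1 nearest-neighbour matching. Simulated cohort for illustration.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 2000
      X = rng.normal(size=(n, 10))                                    # 10 covariates
      p_exposure = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
      exposure = rng.binomial(1, p_exposure)

      ps_model = LogisticRegression(max_iter=1000).fit(X, exposure)   # main-effects EPS model
      ps = ps_model.predict_proba(X)[:, 1]
      print("c-statistic:", round(roc_auc_score(exposure, ps), 3))

      # Greedy 1:1 nearest-neighbour matching of exposed to unexposed on the score.
      exposed, unexposed = np.where(exposure == 1)[0], list(np.where(exposure == 0)[0])
      pairs = []
      for i in exposed:
          if not unexposed:
              break
          j = min(unexposed, key=lambda k: abs(ps[i] - ps[k]))
          pairs.append((i, j))
          unexposed.remove(j)
      print("matched pairs:", len(pairs))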

  2. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
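
    A toy Python illustration of the data-reduction idea (not the RGA/PCGA implementation in MADS, which is written in Julia): a tall linear system is multiplied by a short Gaussian sketching matrix and the much smaller least-squares problem is solved instead.

      # Sketch: randomized data reduction ("sketching") for a tall linear inverse problem.
      import numpy as np

      rng = np.random.default_rng(0)
      n_obs, n_par, k = 20_000, 50, 200            # many observations, few parameters, sketch size

      G = rng.normal(size=(n_obs, n_par))          # synthetic forward operator
      m_true = rng.normal(size=n_par)
      d = G @ m_true + 0.01 * rng.normal(size=n_obs)

      S = rng.normal(size=(k, n_obs)) / np.sqrt(k) # Gaussian sketching matrix
      m_sketch = np.linalg.lstsq(S @ G, S @ d, rcond=None)[0]
      m_full = np.linalg.lstsq(G, d, rcond=None)[0]

      print("relative error (sketched):", np.linalg.norm(m_sketch - m_true) / np.linalg.norm(m_true))
      print("relative error (full):    ", np.linalg.norm(m_full - m_true) / np.linalg.norm(m_true))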

  3. Study on fast measurement of sugar content of yogurt using Vis/NIR spectroscopy techniques

    NASA Astrophysics Data System (ADS)

    He, Yong; Feng, Shuijuan; Wu, Di; Li, Xiaoli

    2006-09-01

    In order to measure the sugar content of yogurt rapidly, a fast measurement method based on Vis/NIR spectroscopy was established. Twenty-five samples selected from five different brands of yogurt were measured by Vis/NIR spectroscopy. The sugar content at the positions scanned by the spectrometer was measured with a sugar content meter. A mathematical model relating sugar content to the Vis/NIR spectral measurements was established and developed based on partial least squares (PLS). The correlation coefficient of sugar content based on the PLS model is more than 0.894, the standard error of calibration (SEC) is 0.356, and the standard error of prediction (SEP) is 0.389. When the sugar content of 35 yogurt samples from 5 different brands was predicted quantitatively, the correlation coefficient between predicted and measured values was more than 0.934. The results show good to excellent prediction performance. The Vis/NIR spectroscopy technique had significantly greater accuracy for determining the sugar content. It was concluded that the Vis/NIR measurement technique is reliable for the fast measurement of the sugar content of yogurt, and a new method for this measurement was thus established.
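
    A minimal sketch of a PLS calibration between spectra and sugar content, assuming scikit-learn and synthetic spectra; the number of latent variables and the reported error metrics are illustrative only.

      # Sketch: PLS calibration of sugar content from synthetic Vis/NIR spectra.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n_samples, n_wavelengths = 60, 200
      sugar = rng.uniform(8.0, 16.0, n_samples)                      # % sugar (synthetic)
      basis = rng.normal(size=(3, n_wavelengths))                    # latent spectral shapes
      spectra = (np.outer(sugar, basis[0])
                 + np.outer(rng.normal(size=n_samples), basis[1])
                 + np.outer(rng.normal(size=n_samples), basis[2])
                 + 0.05 * rng.normal(size=(n_samples, n_wavelengths)))

      X_cal, X_val, y_cal, y_val = train_test_split(spectra, sugar, random_state=0)
      pls = PLSRegression(n_components=3).fit(X_cal, y_cal)

      sec = np.sqrt(np.mean((pls.predict(X_cal).ravel() - y_cal) ** 2))   # calibration error
      sep = np.sqrt(np.mean((pls.predict(X_val).ravel() - y_val) ** 2))   # prediction error
      r = np.corrcoef(pls.predict(X_val).ravel(), y_val)[0, 1]
      print(f"SEC = {sec:.3f}, SEP = {sep:.3f}, r = {r:.3f}")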

  4. Towards Effective Clustering Techniques for the Analysis of Electric Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Emilie A.; Cotilla Sanchez, Jose E.; Halappanavar, Mahantesh

    2013-11-30

    Clustering is an important data analysis technique with numerous applications in the analysis of electric power grids. Standard clustering techniques are oblivious to the rich structural and dynamic information available for power grids. Therefore, by exploiting the inherent topological and electrical structure in the power grid data, we propose new methods for clustering with applications to model reduction, locational marginal pricing, phasor measurement unit (PMU or synchrophasor) placement, and power system protection. We focus our attention on model reduction for analysis based on time-series information from synchrophasor measurement devices, and spectral techniques for clustering. By comparing different clustering techniques on two instances of realistic power grids we show that the solutions are related and therefore one could leverage that relationship for a computational advantage. Thus, by contrasting different clustering techniques we make a case for exploiting structure inherent in the data with implications for several domains including power systems.

  5. Opponent Classification in Poker

    NASA Astrophysics Data System (ADS)

    Ahmad, Muhammad Aurangzeb; Elidrisi, Mohamed

    Modeling games has a long history in the Artificial Intelligence community. Most of the games that have been considered solved in AI are perfect information games. Imperfect information games like Poker and Bridge represent a domain where there is a great deal of uncertainty involved and additional challenges with respect to modeling the behavior of the opponent. Techniques developed for playing imperfect information games also have many real-world applications, such as repeated online auctions, human-computer interaction, and opponent modeling for military applications. In this paper we explore different techniques for playing poker; the core of these techniques is opponent modeling via classifying the behavior of the opponent according to classes provided by domain experts. We utilize windows of full observation in the game to classify the opponent. In Poker, the behavior of an opponent is classified into four standard poker-playing styles based on a subjective function.

  6. Standards in Modeling and Simulation: The Next Ten Years MODSIM World Paper 2010

    NASA Technical Reports Server (NTRS)

    Collins, Andrew J.; Diallo, Saikou; Sherfey, Solomon R.; Tolk, Andreas; Turnitsa, Charles D.; Petty, Mikel; Wiesel, Eric

    2011-01-01

    The world has moved on since the introduction of the Distributed Interactive Simulation (DIS) standard in the early 1980s. The cold war may be over, but there is still a requirement to train for and analyze the next generation of threats that face the free world. The emergence of new and more powerful computer technology and techniques means that modeling and simulation (M&S) has become an important and growing part of satisfying this requirement. As an industry grows, the benefits from standardization within that industry grow with it. For example, it is difficult to imagine what the USA would be like without the 110 volt standard for domestic electricity supply. This paper contains an overview of the outcomes from a recent workshop to investigate the possible future of M&S standards within the federal government.

  7. Preparation of swine for the laboratory.

    PubMed

    Smith, Alison C; Swindle, M Michael

    2006-01-01

    Swine are an important model in many areas of biomedical research. These animals have been used predominantly as preclinical models involving surgical and interventional protocols. The systems most commonly studied include cardiovascular, integumentary, digestive, and urological. Swine are intelligent social animals and require species-specific socialization and handling techniques. It is important to acclimate the animals to the facility and to personnel before they are placed on chronic protocols. Gentle handling techniques instead of forceful procedures are essential to their socialization. They require sturdy caging with specific construction standards, and toys for environmental enrichment. Because the species is covered by both the Animal Welfare Act and the US Department of Agriculture, interstate transport requires a health certificate with destination state-specific disease screening standards. This manuscript provides an overview of best practices that have been utilized in the authors' facility.

  8. Demographic management in a federated healthcare environment.

    PubMed

    Román, I; Roa, L M; Reina-Tosina, J; Madinabeitia, G

    2006-09-01

    The purpose of this paper is to provide a further step toward the decentralization of identification and demographic information about persons by solving issues related to the integration of demographic agents in a federated healthcare environment. The aim is to identify a particular person in every system of a federation and to obtain a unified view of his/her demographic information stored in different locations. This work is based on semantic models and techniques, and pursues the reconciliation of several current standardization works including ITU-T's Open Distributed Processing, CEN's prEN 12967, OpenEHR's dual and reference models, CEN's General Purpose Information Components and CORBAmed's PID service. We propose a new paradigm for the management of person identification and demographic data, based on the development of an open architecture of specialized distributed components together with the incorporation of techniques for the efficient management of domain ontologies, in order to have a federated demographic service. This new service enhances previous correlation solutions, sharing ideas with different standards and domains like semantic techniques and database systems. The federation philosophy requires us to devise solutions to the semantic, functional and instance incompatibilities in our approach. Although this work is based on several models and standards, we have improved them by combining their contributions and developing a federated architecture that does not require the centralization of demographic information. The solution is thus a good approach to face integration problems and the applied methodology can be easily extended to other tasks involved in the healthcare organization.

  9. A Method to Test Model Calibration Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  10. A Method to Test Model Calibration Techniques: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  11. Invited commentary: G-computation--lost in translation?

    PubMed

    Vansteelandt, Stijn; Keiding, Niels

    2011-04-01

    In this issue of the Journal, Snowden et al. (Am J Epidemiol. 2011;173(7):731-738) give a didactic explanation of G-computation as an approach for estimating the causal effect of a point exposure. The authors of the present commentary reinforce the idea that their use of G-computation is equivalent to a particular form of model-based standardization, whereby reference is made to the observed study population, a technique that epidemiologists have been applying for several decades. They comment on the use of standardized versus conditional effect measures and on the relative predominance of the inverse probability-of-treatment weighting approach as opposed to G-computation. They further propose a compromise approach, doubly robust standardization, that combines the benefits of both of these causal inference techniques and is not more difficult to implement.
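
    As a concrete illustration of the standardization idea discussed above, the sketch below performs G-computation on simulated data: fit an outcome regression, predict each subject's outcome with exposure set to 1 and to 0 while keeping the observed confounders, and average over the observed study population. The data, model form, and variable names are hypothetical, not taken from the cited papers.

      # Minimal sketch of G-computation (model-based standardization to the
      # observed population) on simulated data; all names are hypothetical.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 5000
      confounder = rng.normal(size=n)
      exposure = rng.binomial(1, 1 / (1 + np.exp(-confounder)))
      outcome = rng.binomial(1, 1 / (1 + np.exp(-(-1 + exposure + confounder))))

      # Step 1: fit an outcome regression on exposure and confounders.
      X = np.column_stack([exposure, confounder])
      model = LogisticRegression().fit(X, outcome)

      # Step 2: predict everyone's outcome under exposure=1 and exposure=0,
      # keeping each subject's observed confounder values, then average
      # over the observed study population (standardization).
      X1 = np.column_stack([np.ones(n), confounder])
      X0 = np.column_stack([np.zeros(n), confounder])
      risk1 = model.predict_proba(X1)[:, 1].mean()
      risk0 = model.predict_proba(X0)[:, 1].mean()

      print("standardized risk difference:", risk1 - risk0)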

  12. A modeling analysis of alternative primary and secondary US ozone standards in urban and rural areas

    NASA Astrophysics Data System (ADS)

    Nopmongcol, Uarporn; Emery, Chris; Sakulyanontvittaya, Tanarit; Jung, Jaegun; Knipping, Eladio; Yarwood, Greg

    2014-12-01

    This study employed the High-Order Decoupled Direct Method (HDDM) of sensitivity analysis in a photochemical grid model to determine US anthropogenic emissions reductions required from 2006 levels to meet alternative US primary (health-based) and secondary (welfare-based) ozone (O3) standards. Applying the modeling techniques developed by Yarwood et al. (2013), we specifically evaluated sector-wide emission reductions needed to meet primary standards in the range of 60-75 ppb, and secondary standards in the range of 7-15 ppm-h, in 22 cities and at 20 rural sites across the US for NOx-only, combined NOx and VOC, and VOC-only scenarios. Site-specific model biases were taken into account by applying adjustment factors separately for the primary and secondary standard metrics, analogous to the US Environmental Protection Agency's (EPA) relative response factor technique. Both bias-adjusted and unadjusted results are presented and analyzed. We found that the secondary metric does not necessarily respond to emission reductions the same way the primary metric does, indicating sensitivity to their different forms. Combined NOx and VOC reductions are most effective for cities, whereas NOx-only reductions are sufficient at rural sites. Most cities we examined require more than 50% US anthropogenic emission reductions from 2006 levels to meet the current primary 75 ppb US standard and secondary 15 ppm-h target. Most rural sites require less than 20% reductions to meet the primary 75 ppb standard and less than 40% reductions to meet the secondary 15 ppm-h target. Whether the primary standard is protective of the secondary standard depends on the combination of alternative standard levels. Our modeling suggests that the current 75 ppb standard achieves a 15 ppm-h secondary target in most (17 of 22) cities, but only half of the rural sites; the inability for several western cities and rural areas to achieve the seasonally-summed secondary 15 ppm-h target while meeting the 75 ppb primary target is likely driven by higher background O3 that is commonly reported in the western US. However, a 70 ppb primary standard is protective of a 15 ppm-h secondary standard in all cities and 18 of 20 rural sites we examined, and a 60 ppb primary standard is protective of a 7 ppm-h secondary standard in all cities and 19 of 20 rural sites. If EPA promulgates separate primary and secondary standards, exceedance areas will need to develop and demonstrate control strategies to achieve both. This HDDM analysis provides an illustrative screening assessment by which to estimate emissions reductions necessary to satisfy both standards.

  13. A Standard for RF Modulation Factor,

    DTIC Science & Technology

    1979-09-01

    The primary limitation on the quadratic technique is the linearity and bandwidth of the analog multiplier; a high speed (5 MHz) ... (the remainder of the extracted abstract consists of reference and table-of-contents fragments: Graybill, An Introduction to Linear ...; 7.2.1 Nonlinearity Model; 7.2.2 Model Parameters).

  14. Radiative Transfer Model for Operational Retrieval of Cloud Parameters from DSCOVR-EPIC Measurements

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Molina Garcia, V.; Doicu, A.; Loyola, D. G.

    2016-12-01

    The Earth Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR) measures the radiance in the backscattering region. To make sure that all details in the backward glory are covered, a large number of streams is required by a standard radiative transfer model based on the discrete ordinates method. Even the use of the delta-M scaling and the TMS correction does not substantially reduce the number of streams. The aim of this work is to analyze the capability of a fast radiative transfer model to operationally retrieve cloud parameters from EPIC measurements. The radiative transfer model combines the discrete ordinates method with the matrix exponential for the computation of radiances and the matrix operator method for the calculation of the reflection and transmission matrices. Standard acceleration techniques, such as the use of the normalized right and left eigenvectors, the telescoping technique, the Pade approximation and the successive-order-of-scattering approximation, are implemented. In addition, the model may compute the reflection matrix of the cloud by means of the asymptotic theory, and may use the equivalent Lambertian cloud model. The various approximations are analyzed from the point of view of efficiency and accuracy.

  15. New mechanistic insights in the NH 3-SCR reactions at low temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruggeri, Maria Pia; Selleri, Tomasso; Nova, Isabella

    2016-05-06

    The present study is focused on the investigation of the low temperature Standard SCR reaction mechanism over Fe- and Cu-promoted zeolites. Different techniques are employed, including in situ DRIFTS, transient reaction analysis and chemical trapping techniques. The results present strong evidence of nitrite formation in the oxidative activation of NO and of their role in SCR reactions. These elements lead to a deeper understanding of the standard SCR chemistry at low temperature and can potentially improve the consistency of mechanistic mathematical models. Furthermore, comprehension of the mechanism on a fundamental level can contribute to the development of improved SCR catalysts.

  16. Calibrating and training of neutron based NSA techniques with less SNM standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, William H; Swinhoe, Martyn T; Bracken, David S

    2010-01-01

    Accessing special nuclear material (SNM) standards for the calibration of and training on nondestructive assay (NDA) instruments has become increasingly difficult in light of enhanced safeguards and security regulations. Limited or nonexistent access to SNM has affected neutron-based NDA techniques more than gamma-ray techniques because the effects of multiplication require a range of masses to accurately measure the detector response. Neutron-based NDA techniques can also be greatly affected by the matrix and impurity characteristics of the item. The safeguards community has been developing techniques for calibrating instrumentation and training personnel with dwindling numbers of SNM standards. Monte Carlo methods have become increasingly important for design and calibration of instrumentation. Monte Carlo techniques have the ability to accurately predict the detector response for passive techniques. The Monte Carlo results are usually benchmarked to neutron source measurements such as californium. For active techniques, the modeling becomes more difficult because of the interaction of the interrogation source with the detector and nuclear material; and the results cannot be simply benchmarked with neutron sources. A Monte Carlo calculated calibration curve for a training course in Indonesia of material test reactor (MTR) fuel elements assayed with an active well coincidence counter (AWCC) will be presented as an example. Performing training activities with reduced amounts of nuclear material makes it difficult to demonstrate how the multiplication and matrix properties of the item affect the detector response and limits the knowledge that can be obtained with hands-on training. A neutron pulse simulator (NPS) has been developed that can produce a pulse stream representative of a real pulse stream output from a detector measuring SNM. The NPS has been used by the International Atomic Energy Agency (IAEA) for detector testing and training applications at the Agency due to the lack of appropriate SNM standards. This paper will address the effect of reduced access to SNM for calibration and training of neutron NDA applications along with the advantages and disadvantages of some solutions that do not use standards, such as the Monte Carlo techniques and the NPS.

  17. A new noninvasive controlled intra-articular ankle distraction technique on a cadaver model.

    PubMed

    Aydin, Ahmet T; Ozcanli, Haluk; Soyuncu, Yetkin; Dabak, Tayyar K

    2006-08-01

    Effective joint distraction is crucial in arthroscopic ankle surgery. We describe an effective and controlled intra-articular ankle distraction technique that we have studied by means of a fresh-frozen cadaver model. Using a kyphoplasty balloon, which is currently used in spine surgery, we tried to achieve a controlled distraction. After the fixation of the cadaver model, standard anteromedial and anterolateral portals were used for ankle arthroscopy. From the same portals, the kyphoplasty balloon was inserted and placed in an appropriate position intra-articularly. The necessary amount of distraction was achieved by inflating the kyphoplasty balloon with a pressure regulation pump. All anatomic sites of the ankle joint were easily visualized with the arthroscope during surgery by changing the pressure and the intra-articular position of the kyphoplasty balloon. Ankle distraction was clearly seen on the arthroscopic and image intensifier view. The kyphoplasty balloon is simple to place through the standard portals and the advantage is that it allows easy manipulation of the arthroscopic instruments from the same portal.

  18. Discrete disorder models for many-body localization

    NASA Astrophysics Data System (ADS)

    Janarek, Jakub; Delande, Dominique; Zakrzewski, Jakub

    2018-04-01

    Using the exact diagonalization technique, we investigate the many-body localization phenomenon in the 1D Heisenberg chain, comparing several disorder models. In particular we consider a family of discrete distributions of disorder strengths and compare the results with the standard uniform distribution. Both statistical properties of energy levels and the long time nonergodic behavior are discussed. The results for different discrete distributions are essentially identical to those obtained for the continuous distribution, provided the disorder strength is rescaled by the standard deviation of the random distribution. Significant deviations are observed only for the binary distribution.
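
    The rescaling described above can be illustrated with a small sketch (values hypothetical): a discrete binary disorder is drawn with an amplitude chosen so that its standard deviation matches that of the uniform box distribution of half-width W.

      # Sketch of the rescaling described above: a discrete (binary) disorder
      # distribution matched to the uniform box [-W, W] by standard deviation.
      import numpy as np

      def uniform_disorder(W, L, rng):
          return rng.uniform(-W, W, size=L)          # std = W / sqrt(3)

      def binary_disorder(W, L, rng):
          # Discrete +/-h values with h chosen so the std equals W / sqrt(3).
          h = W / np.sqrt(3)
          return rng.choice([-h, h], size=L)

      rng = np.random.default_rng(1)
      u = uniform_disorder(5.0, 100000, rng)
      b = binary_disorder(5.0, 100000, rng)
      print(u.std(), b.std())   # comparable disorder strength after rescaling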

  19. A study of trends and techniques for space base electronics

    NASA Technical Reports Server (NTRS)

    Trotter, J. D.; Wade, T. E.; Gassaway, J. D.

    1978-01-01

    Furnaces and photolithography-related equipment were applied to experiments on double-layer metal. The double-layer metal activity emphasized wet chemistry techniques. By incorporating the following techniques: (1) ultrasonic etching of the vias; (2) a premetal clean using a modified buffered hydrogen fluoride; (3) phosphorus-doped vapor; and (4) extended sintering, yields of 98 percent were obtained using the standard test pattern. The two-dimensional modeling problems have stemmed, alternately, from instability and from excessive computation time to achieve convergence.

  20. Associating clinical archetypes through UMLS Metathesaurus term clusters.

    PubMed

    Lezcano, Leonardo; Sánchez-Alonso, Salvador; Sicilia, Miguel-Angel

    2012-06-01

    Clinical archetypes are modular definitions of clinical data, expressed using standard or open constraint-based data models as the CEN EN13606 and openEHR. There is an increasing archetype specification activity that raises the need for techniques to associate archetypes to support better management and user navigation in archetype repositories. This paper reports on a computational technique to generate tentative archetype associations by mapping them through term clusters obtained from the UMLS Metathesaurus. The terms are used to build a bipartite graph model and graph connectivity measures can be used for deriving associations.

  1. Single-phase power distribution system power flow and fault analysis

    NASA Technical Reports Server (NTRS)

    Halpin, S. M.; Grigsby, L. L.

    1992-01-01

    Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
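
    As a simplified illustration of admittance-matrix assembly from a branch list (not the paper's generalized formulation, which avoids a common reference node and uses generalized line and transformer components), the sketch below builds Y = A diag(y) A^T from a hypothetical three-branch network.

      # Illustrative sketch only: assembling an admittance matrix from a
      # branch list via the incidence matrix, Y = A * diag(y) * A^T.
      import numpy as np

      branches = [            # (from_node, to_node, admittance), hypothetical
          (0, 1, 1 / 0.1j),
          (1, 2, 1 / (0.05 + 0.2j)),
          (2, 0, 1 / (0.02 + 0.1j)),
      ]
      n_nodes = 3
      A = np.zeros((n_nodes, len(branches)), dtype=complex)   # incidence matrix
      y = np.zeros(len(branches), dtype=complex)
      for k, (i, j, yk) in enumerate(branches):
          A[i, k], A[j, k], y[k] = 1, -1, yk

      Y = A @ np.diag(y) @ A.T          # network admittance matrix
      print(np.round(Y, 3))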

  2. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    PubMed Central

    Jamshidy, Ladan; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of the one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model of the first molar was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically in the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique. PMID:28003824
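
    The group comparison reported above can be reproduced in outline with the sketch below, which assumes the 'independent test' is an independent-samples t test and uses simulated marginal-gap values (all numbers hypothetical).

      # Hypothetical sketch of the group comparison: marginal-gap measurements
      # from the two impression techniques compared with an independent-samples
      # t test (assumed interpretation of the abstract's "independent test").
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      gap_one_stage = rng.normal(120, 15, size=20)   # microns, 20 impressions
      gap_two_stage = rng.normal(100, 15, size=20)

      t, p = stats.ttest_ind(gap_one_stage, gap_two_stage)
      print(f"mean one-stage={gap_one_stage.mean():.1f}, "
            f"two-stage={gap_two_stage.mean():.1f}, t={t:.2f}, p={p:.3f}")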

  3. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis.

    PubMed

    Jamshidy, Ladan; Mozaffari, Hamid Reza; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of the one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model of the first molar was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically in the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique.

  4. Regression analysis of current-status data: an application to breast-feeding.

    PubMed

    Grummer-strawn, L M

    1993-09-01

    "Although techniques for calculating mean survival time from current-status data are well known, their use in multiple regression models is somewhat troublesome. Using data on current breast-feeding behavior, this article considers a number of techniques that have been suggested in the literature, including parametric, nonparametric, and semiparametric models as well as the application of standard schedules. Models are tested in both proportional-odds and proportional-hazards frameworks....I fit [the] models to current status data on breast-feeding from the Demographic and Health Survey (DHS) in six countries: two African (Mali and Ondo State, Nigeria), two Asian (Indonesia and Sri Lanka), and two Latin American (Colombia and Peru)." excerpt

  5. Non-thermal plasma destruction of allyl alcohol in waste gas: kinetics and modelling

    NASA Astrophysics Data System (ADS)

    DeVisscher, A.; Dewulf, J.; Van Durme, J.; Leys, C.; Morent, R.; Van Langenhove, H.

    2008-02-01

    Non-thermal plasma treatment is a promising technique for the destruction of volatile organic compounds in waste gas. A relatively unexplored technique is the atmospheric negative dc multi-pin-to-plate glow discharge. This paper reports experimental results of allyl alcohol degradation and ozone production in this type of plasma. A new model was developed to describe these processes quantitatively. The model contains a detailed chemical degradation scheme, and describes the physics of the plasma by assuming that the fraction of electrons that takes part in chemical reactions is an exponential function of the reduced field. The model captured the experimental kinetic data to less than 2 ppm standard deviation.

  6. Second-Language Learning through Imaginative Theory

    ERIC Educational Resources Information Center

    Broom, Catherine

    2011-01-01

    This article explores how Egan's (1997) work on imagination can enrich our understanding of teaching English as a second language (ESL). Much has been written on ESL teaching techniques; however, some of this work has been expounded in a standard educational framework, which is what Egan calls an assembly-line model. This model can easily underlie…

  7. General squark flavour mixing: constraints, phenomenology and benchmarks

    DOE PAGES

    De Causmaecker, Karen; Fuks, Benjamin; Herrmann, Bjorn; ...

    2015-11-19

    Here, we present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.
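
    The scanning strategy described above can be illustrated with a generic random-walk Metropolis sketch; the log-likelihood and constraint function below are toy stand-ins, not the MSSM observables or flavour constraints used in the paper.

      # Generic Metropolis-Hastings random-walk scan, illustrating the MCMC
      # strategy only.  Likelihood and constraints are toy stand-ins.
      import numpy as np

      def log_like(theta):                 # toy likelihood over two parameters
          return -0.5 * np.sum((theta / np.array([1.0, 0.3])) ** 2)

      def allowed(theta):                  # toy theoretical/experimental cut
          return np.all(np.abs(theta) < 3.0)

      rng = np.random.default_rng(3)
      theta = np.zeros(2)
      chain = []
      for _ in range(20000):
          prop = theta + rng.normal(scale=0.2, size=theta.shape)
          if allowed(prop) and np.log(rng.uniform()) < log_like(prop) - log_like(theta):
              theta = prop                 # accept the proposed point
          chain.append(theta.copy())
      chain = np.array(chain)
      print("posterior means:", chain[5000:].mean(axis=0))   # drop burn-in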

  8. Requirements analysis, domain knowledge, and design

    NASA Technical Reports Server (NTRS)

    Potts, Colin

    1988-01-01

    Two improvements to current requirements analysis practices are suggested: domain modeling, and the systematic application of analysis heuristics. Domain modeling is the representation of relevant application knowledge prior to requirements specification. Artificial intelligence techniques may eventually be applicable for domain modeling. In the short term, however, restricted domain modeling techniques, such as that in JSD, will still be of practical benefit. Analysis heuristics are standard patterns of reasoning about the requirements. They usually generate questions of clarification or issues relating to completeness. Analysis heuristics can be represented and therefore systematically applied in an issue-based framework. This is illustrated by an issue-based analysis of JSD's domain modeling and functional specification heuristics. They are discussed in the context of the preliminary design of simple embedded systems.

  9. Interband coding extension of the new lossless JPEG standard

    NASA Astrophysics Data System (ADS)

    Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.

    1997-01-01

    Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experimentation performed by the authors seems to indicate that an overall improvement of more than 10 percent in compression performance will be difficult to obtain even at the cost of great complexity; at least not with traditional approaches to lossless image compression. However, if we allow inter-band decorrelation and modeling in the baseline algorithm, nearly 30 percent improvement in compression gains for specific images in the test set becomes possible with a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to the basic architecture of the baseline, retaining its essential simplicity.
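
    The inter-band idea can be illustrated with a minimal sketch: predict each pixel of a band from the co-located pixel of a reference band and retain only the residual, which is cheaper to entropy-code. This shows the principle only, not the proposed JPEG-LS extension; all data are synthetic.

      # Sketch of inter-band decorrelation: predict one band from a reference
      # band pixel-by-pixel and keep only the residual.  Synthetic data.
      import numpy as np

      rng = np.random.default_rng(4)
      band_ref = rng.integers(0, 256, size=(64, 64)).astype(np.int32)
      band_cur = np.clip(band_ref + rng.integers(-8, 9, size=(64, 64)), 0, 255)

      residual = band_cur - band_ref            # inter-band prediction residual
      reconstructed = band_ref + residual       # lossless reconstruction
      assert np.array_equal(reconstructed, band_cur)

      # Residuals cluster near zero, so their spread (and coded size) is smaller.
      print("band spread vs residual spread:", band_cur.std(), residual.std())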

  10. Mathematical and Statistical Techniques for Systems Medicine: The Wnt Signaling Pathway as a Case Study.

    PubMed

    MacLean, Adam L; Harrington, Heather A; Stumpf, Michael P H; Byrne, Helen M

    2016-01-01

    The last decade has seen an explosion in models that describe phenomena in systems medicine. Such models are especially useful for studying signaling pathways, such as the Wnt pathway. In this chapter we use the Wnt pathway to showcase current mathematical and statistical techniques that enable modelers to gain insight into (models of) gene regulation and generate testable predictions. We introduce a range of modeling frameworks, but focus on ordinary differential equation (ODE) models since they remain the most widely used approach in systems biology and medicine and continue to offer great potential. We present methods for the analysis of a single model, comprising applications of standard dynamical systems approaches such as nondimensionalization, steady state, asymptotic and sensitivity analysis, and more recent statistical and algebraic approaches to compare models with data. We present parameter estimation and model comparison techniques, focusing on Bayesian analysis and coplanarity via algebraic geometry. Our intention is that this (non-exhaustive) review may serve as a useful starting point for the analysis of models in systems medicine.
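
    As a small illustration of the 'standard dynamical systems approaches' listed above, the sketch below simulates a toy two-species ODE model (not the Wnt pathway equations) and computes normalized local sensitivities of a steady-state output by finite differences; all parameter values are hypothetical.

      # Minimal sketch of ODE simulation plus local sensitivity analysis by
      # finite differences, on a toy two-species model.
      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, x, k1, k2):
          a, b = x
          return [k1 - k2 * a, k2 * a - 0.1 * b]

      def steady_b(k1, k2):
          sol = solve_ivp(rhs, (0, 200), [0.0, 0.0], args=(k1, k2), rtol=1e-8)
          return sol.y[1, -1]

      k1, k2, eps = 1.0, 0.5, 1e-4
      base = steady_b(k1, k2)
      sens_k1 = (steady_b(k1 + eps, k2) - base) / eps * (k1 / base)  # normalized
      sens_k2 = (steady_b(k1, k2 + eps) - base) / eps * (k2 / base)
      print("relative sensitivities:", sens_k1, sens_k2)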

  11. Stochastic models for atomic clocks

    NASA Technical Reports Server (NTRS)

    Barnes, J. A.; Jones, R. H.; Tryon, P. V.; Allan, D. W.

    1983-01-01

    For the atomic clocks used in the National Bureau of Standards Time Scales, an adequate model is the superposition of white FM, random walk FM, and linear frequency drift for times longer than about one minute. The model was tested on several clocks using maximum likelihood techniques for parameter estimation and the residuals were acceptably random. Conventional diagnostics indicate that additional model elements contribute no significant improvement to the model even at the expense of the added model complexity.
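
    A minimal sketch of the clock model named above (white FM plus random-walk FM plus linear frequency drift) is given below; the noise amplitudes, drift rate, and sampling interval are hypothetical.

      # Sketch of the clock model: fractional-frequency samples generated as
      # white FM + random-walk FM + linear frequency drift (amplitudes assumed).
      import numpy as np

      rng = np.random.default_rng(5)
      n, tau = 10000, 60.0                    # samples and sampling interval (s)
      t = np.arange(n) * tau

      white_fm = 1e-12 * rng.normal(size=n)                    # white FM
      random_walk_fm = 1e-14 * np.cumsum(rng.normal(size=n))   # random-walk FM
      drift = 1e-16 * t                                        # linear drift

      y = white_fm + random_walk_fm + drift   # fractional frequency y(t)
      phase = np.cumsum(y) * tau              # integrate to get time error x(t)
      print(phase[-1])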

  12. Hybrid numerical method for solution of the radiative transfer equation in one, two, or three dimensions.

    PubMed

    Reinersman, Phillip N; Carder, Kendall L

    2004-05-01

    A hybrid method is presented by which Monte Carlo (MC) techniques are combined with an iterative relaxation algorithm to solve the radiative transfer equation in arbitrary one-, two-, or three-dimensional optical environments. The optical environments are first divided into contiguous subregions, or elements. MC techniques are employed to determine the optical response function of each type of element. The elements are combined, and relaxation techniques are used to determine simultaneously the radiance field on the boundary and throughout the interior of the modeled environment. One-dimensional results compare well with a standard radiative transfer model. The light field beneath and adjacent to a long barge is modeled in two dimensions and displayed. Ramifications for underwater video imaging are discussed. The hybrid model is currently capable of providing estimates of the underwater light field needed to expedite inspection of ship hulls and port facilities.

  13. Towards developing standard operating procedures for pre-clinical testing in the mdx mouse model of Duchenne muscular dystrophy

    PubMed Central

    Grounds, Miranda D.; Radley, Hannah G.; Lynch, Gordon S.; Nagaraju, Kanneboyina; De Luca, Annamaria

    2008-01-01

    This review discusses various issues to consider when developing standard operating procedures for pre-clinical studies in the mdx mouse model of Duchenne muscular dystrophy (DMD). The review describes and evaluates a wide range of techniques used to measure parameters of muscle pathology in mdx mice and identifies some basic techniques that might comprise standardised approaches for evaluation. While the central aim is to provide a basis for the development of standardised procedures to evaluate efficacy of a drug or a therapeutic strategy, a further aim is to gain insight into pathophysiological mechanisms in order to identify other therapeutic targets. The desired outcome is to enable easier and more rigorous comparison of pre-clinical data from different laboratories around the world, in order to accelerate identification of the best pre-clinical therapies in the mdx mouse that will fast-track translation into effective clinical treatments for DMD. PMID:18499465

  14. Scaling and kinematics optimisation of the scapula and thorax in upper limb musculoskeletal models

    PubMed Central

    Prinold, Joe A.I.; Bull, Anthony M.J.

    2014-01-01

    Accurate representation of individual scapula kinematics and subject geometries is vital in musculoskeletal models applied to upper limb pathology and performance. In applying individual kinematics to a model's cadaveric geometry, model constraints are commonly prescriptive. These rely on thorax scaling to effectively define the scapula's path but do not consider the area underneath the scapula in scaling, and assume a fixed conoid ligament length. These constraints may not allow continuous solutions or close agreement with directly measured kinematics. A novel method is presented to scale the thorax based on palpated scapula landmarks. The scapula and clavicle kinematics are optimised with the constraint that the scapula medial border does not penetrate the thorax. Conoid ligament length is not used as a constraint. This method is simulated in the UK National Shoulder Model and compared to four other methods, including the standard technique, during three pull-up techniques (n=11). These are high-performance activities covering a large range of motion. Model solutions without substantial jumps in the joint kinematics data were improved from 23% of trials with the standard method to 100% of trials with the new method. Agreement with measured kinematics was significantly improved (more than 10° closer at p<0.001) when compared to standard methods. The removal of the conoid ligament constraint and the novel thorax scaling correction factor were shown to be key. Separation of the medial border of the scapula from the thorax was large, although this may be physiologically correct due to the high loads and high arm elevation angles. PMID:25011621

  15. Acoustic thermometry for detecting quenches in superconducting coils and conductor stacks

    NASA Astrophysics Data System (ADS)

    Marchevsky, M.; Gourlay, S. A.

    2017-01-01

    Quench detection capability is essential for reliable operation and protection of superconducting magnets, coils, cables, and machinery. We propose a quench detection technique based on sensing local temperature variations in the bulk of a superconducting winding by monitoring its transient acoustic response. Our approach is primarily aimed at coils and devices built with high-temperature superconductor materials where quench detection using standard voltage-based techniques may be inefficient due to the slow velocity of quench propagation. The acoustic sensing technique is non-invasive, fast, and capable of detecting temperature variations of less than 1 K in the interior of the superconductor cable stack in a 77 K cryogenic environment. We show results of finite element modeling and experiments conducted on a model superconductor stack demonstrating viability of the technique for practical quench detection, discuss sensitivity limits of the technique, and its various applications.

  16. A methodology for model-based development and automated verification of software for aerospace systems

    NASA Astrophysics Data System (ADS)

    Martin, L.; Schatalov, M.; Hagner, M.; Goltz, U.; Maibaum, O.

    Today's software for aerospace systems typically is very complex. This is due to the increasing number of features as well as the high demand for safety, reliability, and quality. This complexity also leads to significantly higher software development costs. To handle the software complexity, a structured development process is necessary. Additionally, compliance with relevant standards for quality assurance is a mandatory concern. To assure high software quality, techniques for verification are necessary. Besides traditional techniques like testing, automated verification techniques like model checking are becoming more popular. The latter examine the whole state space and, consequently, result in full test coverage. Nevertheless, despite the obvious advantages, this technique is still rarely used for the development of aerospace systems. In this paper, we propose a tool-supported methodology for the development and formal verification of safety-critical software in the aerospace domain. The methodology relies on the V-Model and defines a comprehensive work flow for model-based software development as well as automated verification in compliance with the European standard series ECSS-E-ST-40C. Furthermore, our methodology supports the generation and deployment of code. For tool support we use the tool SCADE Suite (Esterel Technologies), an integrated design environment that covers all the requirements for our methodology. The SCADE Suite is well established in the avionics and defense, rail transportation, energy and heavy equipment industries. For evaluation purposes, we apply our approach to an up-to-date case study of the TET-1 satellite bus. In particular, the attitude and orbit control software is considered. The behavioral models for the subsystem are developed, formally verified, and optimized.

  17. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current, amplifier mismatch, and the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher order polynomial NUC algorithms feasible. This study comprehensively tests higher order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity, in post-corrected data, when using a third order polynomial correction algorithm rather than a second order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next-generation model validation and performance benchmarks for higher order polynomial NUC methods.
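
    A per-pixel third-order polynomial correction of the kind tested above can be sketched as follows: at several calibration flux levels, fit each pixel's raw response to the frame-mean response and apply the fitted polynomial to subsequent frames. The array sizes, detector response model, and data below are hypothetical.

      # Sketch of a per-pixel third-order polynomial non-uniformity correction
      # using synthetic calibration frames.
      import numpy as np

      rng = np.random.default_rng(6)
      levels, h, w = 8, 32, 32
      flux = np.linspace(0.1, 1.0, levels)

      gain = 1 + 0.05 * rng.normal(size=(h, w))
      offset = 0.02 * rng.normal(size=(h, w))
      raw = gain[None] * flux[:, None, None] ** 1.1 + offset[None]  # nonlinear FPA

      target = raw.mean(axis=(1, 2))            # reference response per level
      coeffs = np.empty((4, h, w))
      for i in range(h):
          for j in range(w):
              coeffs[:, i, j] = np.polyfit(raw[:, i, j], target, deg=3)

      def correct(frame):
          p3, p2, p1, p0 = coeffs               # highest-order term first
          return ((p3 * frame + p2) * frame + p1) * frame + p0   # Horner, per pixel

      corrected = correct(raw[4])
      print("residual non-uniformity:", corrected.std() / corrected.mean())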

  18. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    NASA Technical Reports Server (NTRS)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, which is a leading cause of cost model brittleness or instability.

  19. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
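
    The version-comparison step described above reduces to comparing the same named outputs across implementations and flagging differences beyond the ±5% threshold; the sketch below uses hypothetical output names and values, with the consensus taken as the median across versions.

      # Sketch of the parallel-version check: compare the same projections
      # across independently implemented versions and flag differences of
      # more than +/-5% from the consensus.  Names and values are hypothetical.
      import statistics

      versions = {
          "named_single_cells": {"in_care": 1050.0, "on_treatment": 870.0},
          "column_row_refs":    {"in_care": 1320.0, "on_treatment": 870.0},
          "named_matrices":     {"in_care": 1050.0, "on_treatment": 870.0},
      }

      for metric in versions["named_matrices"]:
          consensus = statistics.median(v[metric] for v in versions.values())
          for name, outputs in versions.items():
              pct = 100.0 * (outputs[metric] - consensus) / consensus
              if abs(pct) > 5.0:
                  print(f"{name}/{metric}: differs from consensus by {pct:+.1f}%")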

  20. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    PubMed

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.

  1. Finite Volume Numerical Methods for Aeroheating Rate Calculations from Infrared Thermographic Data

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Berry, Scott A.; Horvath, Thomas J.; Nowak, Robert J.

    2006-01-01

    The use of multi-dimensional finite volume heat conduction techniques for calculating aeroheating rates from measured global surface temperatures on hypersonic wind tunnel models was investigated. Both direct and inverse finite volume techniques were investigated and compared with the standard one-dimensional semi-infinite technique. Global transient surface temperatures were measured using an infrared thermographic technique on a 0.333-scale model of the Hyper-X forebody in the NASA Langley Research Center 20-Inch Mach 6 Air tunnel. In these tests, the effectiveness of vortices generated via gas injection for initiating hypersonic transition on the Hyper-X forebody was investigated. An array of streamwise-orientated heating striations was generated and visualized downstream of the gas injection sites. In regions without significant spatial temperature gradients, one-dimensional techniques provided accurate aeroheating rates. In regions with sharp temperature gradients caused by striation patterns, multi-dimensional heat transfer techniques were necessary to obtain more accurate heating rates. The use of the one-dimensional technique resulted in differences of 20% in the calculated heating rates compared to the 2-D analysis because it did not account for lateral heat conduction in the model.

  2. Integration of measurements with atmospheric dispersion models: Source term estimation for dispersal of (239)Pu due to non-nuclear detonation of high explosive

    NASA Astrophysics Data System (ADS)

    Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.

    1992-10-01

    The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, in cases where the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates the radiological measurements with atmospheric dispersion modeling. This results in a more accurate particle-size distribution and particle injection height estimation when compared with measurements of high explosive dispersal of (239)Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimation of ADPIC model input parameters such as the ADPIC particle size mean aerodynamic diameter, the geometric standard deviation, and largest size. Additionally we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainties. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, which is very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation on the particle size as well as a slightly weaker particle-to-cloud coupling than previously reported.
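
    The estimation scheme described above can be sketched with a toy forward model standing in for the ARAC/ADPIC dispersion codes: nonlinear least squares adjusts the model input parameters within bounds to minimize the misfit to measured concentrations. The forward model, parameter names, and data below are all hypothetical.

      # Sketch of parameter estimation by nonlinear least squares against
      # measured concentrations, with a toy plume-like forward model.
      import numpy as np
      from scipy.optimize import least_squares

      x = np.linspace(500, 5000, 12)                 # sampler distances (m)

      def forward(params, x):
          release_height, sigma_scale = params       # hypothetical parameters
          sigma = sigma_scale * x ** 0.9
          return 1e6 * np.exp(-release_height**2 / (2 * sigma**2)) / sigma**2

      true = np.array([300.0, 0.12])
      noise = 1 + 0.1 * np.random.default_rng(7).normal(size=x.size)
      measured = forward(true, x) * noise            # synthetic "measurements"

      def residuals(params):
          # Log residuals keep the fit balanced across orders of magnitude.
          return np.log(forward(params, x)) - np.log(measured)

      fit = least_squares(residuals, x0=[100.0, 0.3],
                          bounds=([10.0, 0.01], [1000.0, 1.0]))
      print("estimated parameters:", fit.x)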

  3. How Much Can Non-industry Standard Measurement Methodologies Benefit Methane Reduction Programs?

    NASA Astrophysics Data System (ADS)

    Risk, D. A.; O'Connell, L.; Atherton, E.

    2017-12-01

    In recent years, energy sector methane emissions have been recorded in large part by applying modern non-industry-standard techniques. Industry may lack the regulatory flexibility to use such techniques, or in some cases may not understand the possible associated economic advantage. As progressive jurisdictions move from estimation towards routine measurement, the research community should provide guidance to help regulators and companies measure more effectively, and economically if possible. In this study, we outline a modelling experiment in which we explore the integration of non-industry-standard measurement techniques as part of a generalized compliance measurement program. The study was not intended to be exhaustive, or to recommend particular combinations, but instead to explore the inter-relationships between methodologies, development type, and compliance practice. We first defined the role, applicable scale, detection limits, working distances, and approximate deployment cost of several measurement methodologies. We then considered a variety of development types differing mainly in footprint, density, and emissions "profile". Using a Monte Carlo approach, we evaluated the effect of these various factors on the cost and confidence of the compliance measurement program. We found that when added individually, some of the research techniques were indeed able to deliver an improvement in cost and/or confidence when used alongside industry-standard Optical Gas Imaging. When applied in combination, the ideal fraction of each measurement technique depended on development type, emission profile, and whether confidence or cost was more important. Results suggest that measurement cost and confidence could be improved if energy companies exploited a wider range of measurement techniques, in a manner tailored to each development. In the short term, combining clear scientific guidance with economic information could benefit immediate mitigation efforts more than developing new super sensors would.

  4. Semantic similarity-based alignment between clinical archetypes and SNOMED CT: an application to observations.

    PubMed

    Meizoso García, María; Iglesias Allones, José Luis; Martínez Hernández, Diego; Taboada Iglesias, María Jesús

    2012-08-01

    One of the main challenges of eHealth is semantic interoperability of health systems. However, this will only be possible if the capture, representation and access of patient data is standardized. Clinical data models, such as OpenEHR Archetypes, define data structures that are agreed by experts to ensure the accuracy of health information. In addition, they provide an option to normalize clinical data by means of binding terms used in the model definition to standard medical vocabularies. Nevertheless, the effort needed to establish the association between archetype terms and standard terminology concepts is considerable. Therefore, the purpose of this study is to provide an automated approach to bind OpenEHR archetype terms to the external terminology SNOMED CT, with the capability to do it at a semantic level. This research uses lexical techniques and external terminological tools in combination with context-based techniques, which use information about structural and semantic proximity to identify similarities between terms and thus find alignments between them. The proposed approach exploits both the structural context of archetypes and the terminology context, in which concepts are logically defined through their (hierarchical and definitional) relationships to other concepts. A set of 25 OBSERVATION archetypes with 477 bound terms was used to test the method. Of these, 342 terms (74.6%) were linked with 96.1% precision, 71.7% recall and 1.23 SNOMED CT concepts on average for each mapping. It has been detected that about one third of the archetype clinical information is grouped logically. Context-based techniques take advantage of this to increase the recall and to validate 30.4% of the bindings produced by lexical techniques. This research shows that it is possible to automatically map archetype terms to a standard terminology with high precision and recall, with the help of appropriate contextual and semantic information from both models. Moreover, the semantic-based methods provide a means of validating and disambiguating the resulting bindings. Therefore, this work is a step forward in reducing human participation in the mapping process.

  5. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    PubMed Central

    Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network, which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and with backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making a bad decision in the decision-making process. PMID:26977450
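
    The hybrid idea summarized above (an RBF network whose residual errors are smoothed by a moving average and fed back into the forecast) can be sketched on synthetic data as below; the genetic-algorithm optimization of the network parameters is omitted, and all settings, widths, and data are hypothetical.

      # Sketch of an RBF network (K-means centres, least-squares output
      # weights) with an in-sample moving-average error correction.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(8)
      t = np.arange(500)
      rate = 1.3 + 0.05 * np.sin(t / 25) + 0.01 * rng.normal(size=t.size)
      X = np.column_stack([rate[i:i - 5] for i in range(5)])   # 5 lagged inputs
      y = rate[5:]                                             # next-day value

      centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
      width = 0.1
      Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2
                   / (2 * width ** 2))
      w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # output-layer weights

      rbf_pred = Phi @ w
      error = y - rbf_pred
      ma_error = np.convolve(error, np.ones(10) / 10, mode="same")  # moving avg
      hybrid_pred = rbf_pred + ma_error               # in-sample illustration
      print("RMSE plain vs hybrid:",
            np.sqrt(np.mean(error ** 2)),
            np.sqrt(np.mean((y - hybrid_pred) ** 2)))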

  6. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network.

    PubMed

    Falat, Lukas; Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network, which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and with backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making a bad decision in the decision-making process.

  7. Human Language Technology: Opportunities and Challenges

    DTIC Science & Technology

    2005-01-01

    ... because of the connections to and reliance on signal processing. Audio diarization critically includes indexing of speakers [12], since speaker ... to reduce inter-speaker variability in training. Standard techniques include vocal-tract length normalization, adaptation of acoustic models using maximum likelihood linear regression (MLLR), and speaker-adaptive training based on MLLR. The acoustic models are mixtures of Gaussians, typically with ...

  8. A best-fit model for concept vectors in biomedical research grants.

    PubMed

    Johnson, Calvin; Lau, William; Bhandari, Archna; Hays, Timothy

    2008-11-06

    The Research, Condition, and Disease Categorization (RCDC) project was created to standardize budget reporting by research topic. Text mining techniques have been implemented to classify NIH grant applications into proper research and disease categories. A best-fit model is shown to achieve classification performance rivaling that of concept vectors produced by human experts.

  9. Prewhitening of Colored Noise Fields for Detection of Threshold Sources

    DTIC Science & Technology

    1993-11-07

    ... determines the noise covariance matrix, prewhitening techniques allow detection of threshold sources. The multiple signal classification (MUSIC) ... Subject terms: AR model, colored noise field, mixed spectra model, MUSIC, noise field, prewhitening, SNR, standardized test.

  10. Effective Report Preparation: Streamlining the Reporting Process. AIR 1999 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Dalrymple, Margaret; Wang, Mindy; Frost, Jacquelyn

    This paper describes the processes and techniques used to improve and streamline the standard student reports used at Purdue University (Indiana). Various models for analyzing reporting processes are described, especially the model used in the study, the Shewart or Deming Cycle, a method that aids in continuous analysis and improvement through a…

  11. A Semantic-Oriented Approach for Organizing and Developing Annotation for E-Learning

    ERIC Educational Resources Information Center

    Brut, Mihaela M.; Sedes, Florence; Dumitrescu, Stefan D.

    2011-01-01

    This paper presents a solution to extend the IEEE LOM standard with ontology-based semantic annotations for efficient use of learning objects outside Learning Management Systems. The data model corresponding to this approach is first presented. The proposed indexing technique for this model development in order to acquire a better annotation of…

  12. Does gang ripping hold the potential for higher clear cutting yields

    Treesearch

    Hiram Hallock; Pamela Giese

    1980-01-01

    Cutting yields from gang ripping hardwood lumber graded by the National Hardwood Lumber Association standard grades are determined using the technique of mathematical modeling. The lumber used is the same as that in an earlier mathematically modeled determination of cutting yields from traditional rough mill procedures. Mechanical cutting factors such as kerf, cutting...

  13. Optimization Based Efficiencies in First Order Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Peck, Jeffrey A.; Mahadevan, Sankaran

    2003-01-01

    This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank-one updating technique. In problems that use a commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's Quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer functional evaluations than FORM to converge to the same answer.
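
    The gradient-update step can be sketched as below: after one full finite-difference gradient, subsequent iterations refresh the gradient with Broyden's rank-one formula so that it satisfies the secant condition, avoiding repeated black-box evaluations. The limit-state function here is a toy stand-in, not a commercial code.

      # Sketch of Broyden's rank-one update applied to the gradient of a
      # scalar limit-state function g(u), as an alternative to repeated
      # finite-difference gradients.
      import numpy as np

      def g(u):                                   # toy limit-state function
          return 3.0 - u[0] ** 2 - 0.5 * u[1]

      def fd_gradient(fun, u, h=1e-6):            # finite differences (costly)
          return np.array([(fun(u + h * e) - fun(u)) / h for e in np.eye(u.size)])

      def broyden_update(grad, du, dg):
          # Rank-one correction so the updated gradient satisfies the secant
          # condition grad_new . du = dg.
          return grad + (dg - grad @ du) * du / (du @ du)

      u_old = np.array([0.5, 1.0])
      grad = fd_gradient(g, u_old)                # one full gradient to start
      u_new = np.array([0.8, 0.7])
      grad = broyden_update(grad, u_new - u_old, g(u_new) - g(u_old))
      print("updated gradient estimate:", grad)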

  14. Standardized Photometric Calibrations for Panchromatic SSA Sensors

    NASA Astrophysics Data System (ADS)

    Castro, P.; Payne, T.; Battle, A.; Cole, Z.; Moody, J.; Gregory, S.; Dao, P.

    2016-09-01

    Panchromatic sensors used for Space Situational Awareness (SSA) have no standardized method for transforming the net flux detected by a CCD without a spectral filter into an exo-atmospheric magnitude in a standard magnitude system. Each SSA data provider appears to have their own method for computing the visual magnitude based on panchromatic brightness making cross-comparisons impossible. We provide a procedure in order to standardize the calibration of panchromatic sensors for the purposes of SSA. A technique based on theoretical modeling is presented that derives standard panchromatic magnitudes from the Johnson-Cousins photometric system defined by Arlo Landolt. We verify this technique using observations of Landolt standard stars and a Vega-like star to determine empirical panchromatic magnitudes and compare these to synthetically derived panchromatic magnitudes. We also investigate color terms caused by differences in the quantum efficiency (QE) between the Landolt standard system and panchromatic systems. We evaluate calibrated panchromatic satellite photometry by observing several GEO satellites and standard stars using three different sensors. We explore the effect of satellite color terms by comparing the satellite signatures. In order to remove other variables affecting the satellite photometry, two of the sensors are at the same site using different CCDs. The third sensor is geographically separate from the first two allowing for a definitive test of calibrated panchromatic satellite photometry.

  15. A hybrid SEA/modal technique for modeling structural-acoustic interior noise in rotorcraft.

    PubMed

    Jayachandran, V; Bonilha, M W

    2003-03-01

    This paper describes a hybrid technique that combines Statistical Energy Analysis (SEA) predictions for structural vibration with acoustic modal summation techniques to predict interior noise levels in rotorcraft. The method was applied for predicting the sound field inside a mock-up of the interior panel system of the Sikorsky S-92 helicopter. The vibration amplitudes of the frame and panel systems were predicted using a detailed SEA model and these were used as inputs to the model of the interior acoustic space. The spatial distribution of the vibration field on individual panels, and their coupling to the acoustic space were modeled using stochastic techniques. Leakage and nonresonant transmission components were accounted for using space-averaged values obtained from a SEA model of the complete structural-acoustic system. Since the cabin geometry was quite simple, the modeling of the interior acoustic space was performed using a standard modal summation technique. Sound pressure levels predicted by this approach at specific microphone locations were compared with measured data. Agreement within 3 dB in one-third octave bands above 40 Hz was observed. A large discrepancy in the one-third octave band in which the first acoustic mode is resonant (31.5 Hz) was observed. Reasons for such a discrepancy are discussed in the paper. The developed technique provides a method for modeling helicopter cabin interior noise in the frequency mid-range where neither FEA nor SEA is individually effective or accurate.
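
    As a hedged illustration of the interior-acoustics half of such a hybrid approach (generic textbook form, not the authors' exact formulation), a standard modal summation for the cabin pressure field reads

        p(\mathbf{x},\omega) \approx \sum_{n} \frac{\phi_n(\mathbf{x})\, F_n(\omega)}{\Lambda_n \big(\omega_n^2 - \omega^2 + 2 j \zeta_n \omega_n \omega\big)}

    where the phi_n are rigid-wall acoustic modes of the cabin, omega_n and zeta_n their natural frequencies and damping ratios, Lambda_n a modal normalization, and F_n(omega) the generalized forcing obtained by projecting the SEA-predicted panel vibration (with its assumed spatial distribution) onto mode n. Summing a modest number of modes is practical only while the cabin's acoustic modal density stays low, which is the mid-frequency niche the abstract describes.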

  16. Nonlinear relaxation algorithms for circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, R.A.

    Circuit simulation is an important Computer-Aided Design (CAD) tool in the design of Integrated Circuits (IC). However, the standard techniques used in programs such as SPICE result in very long computer-run times when applied to large problems. In order to reduce the overall run time, a number of new approaches to circuit simulation were developed and are described. These methods are based on nonlinear relaxation techniques and exploit the relative inactivity of large circuits. Simple waveform-processing techniques are described to determine the maximum possible speed improvement that can be obtained by exploiting this property of large circuits. Three simulation algorithms are described, two of which are based on the Iterated Timing Analysis (ITA) method and a third based on the Waveform-Relaxation Newton (WRN) method. New programs that incorporate these techniques were developed and used to simulate a variety of industrial circuits. The results from these simulations are provided. The techniques are shown to be much faster than the standard approach. In addition, a number of parallel aspects of these algorithms are described, and a general space-time model of parallel-task scheduling is developed.

  17. Brian: a simulator for spiking neural networks in python.

    PubMed

    Goodman, Dan; Brette, Romain

    2008-01-01

    "Brian" is a new simulator for spiking neural networks, written in Python (http://brian. di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.

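    Brian's own syntax is not reproduced here; the NumPy sketch below only illustrates the vectorisation idea the abstract refers to, namely updating the state variables of all neurons with whole-array operations at each time step instead of looping over neurons (all parameter values are arbitrary assumptions).

        import numpy as np

        # Leaky integrate-and-fire population updated with array operations (illustrative values).
        N, dt, tau, v_th, v_reset = 1000, 1e-4, 0.02, 1.0, 0.0
        v = np.random.rand(N)                  # membrane potentials
        drive = 1.1 + 0.2 * np.random.rand(N)  # constant external drive per neuron
        spike_counts = np.zeros(N, dtype=int)

        for step in range(int(0.5 / dt)):      # 500 ms of simulated time
            v += dt * (drive - v) / tau        # one vectorised Euler step for all neurons
            fired = v >= v_th                  # boolean mask of spiking neurons
            spike_counts += fired
            v[fired] = v_reset                 # reset only the neurons that spiked

        print("mean firing rate (Hz):", spike_counts.mean() / 0.5)
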
  18. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.

  19. Information loss and reconstruction in diffuse fluorescence tomography

    PubMed Central

    Bonfert-Taylor, Petra; Leblond, Frederic; Holt, Robert W.; Tichauer, Kenneth; Pogue, Brian W.; Taylor, Edward C.

    2012-01-01

    This paper is a theoretical exploration of spatial resolution in diffuse fluorescence tomography. It is demonstrated that, given a fixed imaging geometry, one cannot—relative to standard techniques such as Tikhonov regularization and truncated singular value decomposition—improve the spatial resolution of the optical reconstructions via increasing the node density of the mesh considered for modeling light transport. Using techniques from linear algebra, it is shown that, as one increases the number of nodes beyond the number of measurements, information is lost by the forward model. It is demonstrated that this information cannot be recovered using various common reconstruction techniques. Evidence is provided showing that this phenomenon is related to the smoothing properties of the elliptic forward model that is used in the diffusion approximation to light transport in tissue. This argues for reconstruction techniques that are sensitive to boundaries, such as L1-reconstruction and the use of priors, as well as the natural approach of building a measurement geometry that reflects the desired image resolution. PMID:22472763
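
    A minimal NumPy sketch of the linear-algebra point, with a random matrix standing in for the smoothing diffuse-optical forward model and all sizes assumed, is:

        import numpy as np

        rng = np.random.default_rng(0)
        n_meas, n_nodes = 64, 4000                   # far more unknowns than measurements (assumption)
        A = rng.standard_normal((n_meas, n_nodes))   # stand-in for the forward model matrix
        x_true = np.zeros(n_nodes)
        x_true[1800:1850] = 1.0                      # a localized fluorophore distribution
        y = A @ x_true

        # The rank of the forward operator is capped by the number of measurements,
        # so at most n_meas independent pieces of information survive the forward model.
        print("rank(A) =", np.linalg.matrix_rank(A))   # 64, no matter how dense the mesh

        # Truncated-SVD reconstruction: refining the mesh only adds columns to A,
        # but the reconstruction still lives in a subspace of dimension <= n_meas.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        k = 50                                         # truncation level (assumption)
        x_rec = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])
        print("relative reconstruction error:",
              np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))

    The example mirrors the abstract's argument: adding nodes beyond the number of measurements changes the size of the inverse problem but not the amount of information available to any minimum-norm style reconstruction.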

  20. Next generation initiation techniques

    NASA Technical Reports Server (NTRS)

    Warner, Tom; Derber, John; Zupanski, Milija; Cohn, Steve; Verlinde, Hans

    1993-01-01

    Four-dimensional data assimilation strategies can generally be classified as either current or next generation, depending upon whether they are used operationally or not. Current-generation data-assimilation techniques are those that are presently used routinely in operational-forecasting or research applications. They can be classified into the following categories: intermittent assimilation, Newtonian relaxation, and physical initialization. It should be noted that these techniques are the subject of continued research, and their improvement will parallel the development of next generation techniques described by the other speakers. Next generation assimilation techniques are those that are under development but are not yet used operationally. Most of these procedures are derived from control theory or variational methods and primarily represent continuous assimilation approaches, in which the data and model dynamics are 'fitted' to each other in an optimal way. Another 'next generation' category is the initialization of convective-scale models. Intermittent assimilation systems use an objective analysis to combine all observations within a time window that is centered on the analysis time. Continuous first-generation assimilation systems are usually based on the Newtonian-relaxation or 'nudging' techniques. Physical initialization procedures generally involve the use of standard or nonstandard data to force some physical process in the model during an assimilation period. Under the topic of next-generation assimilation techniques, variational approaches are currently being actively developed. Variational approaches seek to minimize a cost or penalty function which measures a model's fit to observations, background fields and other imposed constraints. Alternatively, the Kalman filter technique, which is also under investigation as a data assimilation procedure for numerical weather prediction, can yield acceptable initial conditions for mesoscale models. The third kind of next-generation technique involves strategies to initialize convective scale (non-hydrostatic) models.
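
    For concreteness, a generic 3D-Var style cost function of the kind minimized by these variational approaches can be written as follows; the notation is standard textbook usage rather than any specific operational system:

        J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}\big(H(\mathbf{x})-\mathbf{y}\big)^{\mathsf T}\mathbf{R}^{-1}\big(H(\mathbf{x})-\mathbf{y}\big)

    where x_b is the background (prior model) state, B and R the background- and observation-error covariances, H the observation operator, and y the observations. 4D-Var extends the observation term to a sum over times within the assimilation window with the model dynamics acting as a constraint, while the Kalman filter mentioned above instead propagates the error covariance explicitly between analysis times.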

  1. Tracking Continental Scale Background Ozone with CMAQ

    EPA Science Inventory

    As the National Ambient Air Quality Standards (NAAQS) for ozone become more stringent, there has been growing attention on characterizing the contributions and the uncertainties in ozone from outside the US to the ozone concentrations within the US. Modeling techniques readily av...

  2. From innovation to standard practice: Developing and disseminating behavioral procedures

    PubMed Central

    Paine, Stan C.; Bellamy, G. Thomas

    1982-01-01

    This paper proposes a three-stage continuum for discussing the development and dissemination of behavioral technology. At the level of behavioral techniques, researchers need only establish a functional relationship between technologically defined intervention procedures and socially significant target behaviors. Dissemination is conducted for informational purposes only, and the purposes and details surrounding subsequent use of the technique are left to the discretion of the user. At the level of behavioral demonstration, a collection of socially acceptable intervention procedures is refined and standardized and must be shown to produce behavior changes across a number of subjects. Here dissemination is conducted, in large part, to generate support for provision of services. At the level of behavioral models, procedural descriptions must be user-oriented. Additionally, model effects must be obtainable by agents not associated with their development and must compare favorably with other treatment or service alternatives. The purpose of dissemination at this level is to obtain adoptions and replications of the model. Details of development and dissemination of behavioral technology at each of these three levels are discussed. PMID:22478555

  3. Modeling the cost-effectiveness of insect rearing on artificial diets: A test with a tephritid fly used in the sterile insect technique.

    PubMed

    Pascacio-Villafán, Carlos; Birke, Andrea; Williams, Trevor; Aluja, Martín

    2017-01-01

    We modeled the cost-effectiveness of rearing Anastrepha ludens, a major fruit fly pest currently mass reared for sterilization and release in pest control programs implementing the sterile insect technique (SIT). An optimization model was generated by combining response surface models of artificial diet cost savings with models of A. ludens pupation, pupal weight, larval development time and adult emergence as a function of mixtures of yeast, a costly ingredient, with corn flour and corncob fractions in the diet. Our model revealed several yeast-reduced mixtures that could be used to prepare diets that were considerably cheaper than a standard diet used for mass rearing. Models predicted a similar production of insects (pupation and adult emergence), with statistically similar pupal weights and larval development times between yeast-reduced diets and the standard mass rearing diet formulation. Annual savings from using the modified diets could be up to 5.9% of the annual cost of yeast, corn flour and corncob fractions used in the standard diet, representing a potential saving of US $27.45 per ton of diet (US $47,496 in the case of the mean annual production of 1,730.29 tons of artificial diet in the Moscafrut mass rearing facility at Metapa, Chiapas, Mexico). Implementation of the yeast-reduced diet on an experimental scale at mass rearing facilities is still required to confirm the suitability of new mixtures of artificial diet for rearing A. ludens for use in SIT. This should include the examination of critical quality control parameters of flies such as adult flight ability, starvation resistance and male sexual competitiveness across various generations. The method used here could be useful for improving the cost-effectiveness of invertebrate or vertebrate mass rearing diets worldwide.

  4. Modeling the cost-effectiveness of insect rearing on artificial diets: A test with a tephritid fly used in the sterile insect technique

    PubMed Central

    Birke, Andrea; Williams, Trevor; Aluja, Martín

    2017-01-01

    We modeled the cost-effectiveness of rearing Anastrepha ludens, a major fruit fly pest currently mass reared for sterilization and release in pest control programs implementing the sterile insect technique (SIT). An optimization model was generated by combining response surface models of artificial diet cost savings with models of A. ludens pupation, pupal weight, larval development time and adult emergence as a function of mixtures of yeast, a costly ingredient, with corn flour and corncob fractions in the diet. Our model revealed several yeast-reduced mixtures that could be used to prepare diets that were considerably cheaper than a standard diet used for mass rearing. Models predicted a similar production of insects (pupation and adult emergence), with statistically similar pupal weights and larval development times between yeast-reduced diets and the standard mass rearing diet formulation. Annual savings from using the modified diets could be up to 5.9% of the annual cost of yeast, corn flour and corncob fractions used in the standard diet, representing a potential saving of US $27.45 per ton of diet (US $47,496 in the case of the mean annual production of 1,730.29 tons of artificial diet in the Moscafrut mass rearing facility at Metapa, Chiapas, Mexico). Implementation of the yeast-reduced diet on an experimental scale at mass rearing facilities is still required to confirm the suitability of new mixtures of artificial diet for rearing A. ludens for use in SIT. This should include the examination of critical quality control parameters of flies such as adult flight ability, starvation resistance and male sexual competitiveness across various generations. The method used here could be useful for improving the cost-effectiveness of invertebrate or vertebrate mass rearing diets worldwide. PMID:28257496

  5. Comparison of breathing gated CT images generated using a 5DCT technique and a commercial clinical protocol in a porcine model

    PubMed Central

    O’Connell, Dylan P.; Thomas, David H.; Dou, Tai H.; Lamb, James M.; Feingold, Franklin; Low, Daniel A.; Fuld, Matthew K.; Sieren, Jered P.; Sloan, Chelsea M.; Shirk, Melissa A.; Hoffman, Eric A.; Hofmann, Christian

    2015-01-01

    Purpose: To demonstrate that a “5DCT” technique which utilizes fast helical acquisition yields the same respiratory-gated images as a commercial technique for regular, mechanically produced breathing cycles. Methods: Respiratory-gated images of an anesthetized, mechanically ventilated pig were generated using a Siemens low-pitch helical protocol and 5DCT for a range of breathing rates and amplitudes and with standard and low dose imaging protocols. 5DCT reconstructions were independently evaluated by measuring the distances between tissue positions predicted by a 5D motion model and those measured using deformable registration, as well by reconstructing the originally acquired scans. Discrepancies between the 5DCT and commercial reconstructions were measured using landmark correspondences. Results: The mean distance between model predicted tissue positions and deformably registered tissue positions over the nine datasets was 0.65 ± 0.28 mm. Reconstructions of the original scans were on average accurate to 0.78 ± 0.57 mm. Mean landmark displacement between the commercial and 5DCT images was 1.76 ± 1.25 mm while the maximum lung tissue motion over the breathing cycle had a mean value of 27.2 ± 4.6 mm. An image composed of the average of 30 deformably registered images acquired with a low dose protocol had 6 HU image noise (single standard deviation) in the heart versus 31 HU for the commercial images. Conclusions: An end to end evaluation of the 5DCT technique was conducted through landmark based comparison to breathing gated images acquired with a commercial protocol under highly regular ventilation. The techniques were found to agree to within 2 mm for most respiratory phases and most points in the lung. PMID:26133604

  6. What Does CALL Have to Offer Computer Science and What Does Computer Science Have to Offer CALL?

    ERIC Educational Resources Information Center

    Cushion, Steve

    2006-01-01

    We will argue that CALL can usefully be viewed as a subset of computer software engineering and can profit from adopting some of the recent progress in software development theory. The unified modelling language has become the industry standard modelling technique and the accompanying unified process is rapidly gaining acceptance. The manner in…

  7. Space Shuttle and Space Station Radio Frequency (RF) Exposure Analysis

    NASA Technical Reports Server (NTRS)

    Hwu, Shian U.; Loh, Yin-Chung; Sham, Catherine C.; Kroll, Quin D.

    2005-01-01

    This paper outlines the modeling techniques and important parameters to define a rigorous but practical procedure that can verify the compliance of RF exposure to the NASA standards for astronauts and electronic equipment. The electromagnetic modeling techniques are applied to analyze RF exposure in Space Shuttle and Space Station environments with reasonable computing time and resources. The modeling techniques are capable of taking into account the field interactions with Space Shuttle and Space Station structures. The obtained results illustrate the multipath effects due to the presence of the space vehicle structures. It's necessary to include the field interactions with the space vehicle in the analysis for an accurate assessment of the RF exposure. Based on the obtained results, the RF keep out zones are identified for appropriate operational scenarios, flight rules and necessary RF transmitter constraints to ensure a safe operating environment and mission success.

  8. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
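
    The following Python sketch shows only the generic re-weighted minimum-norm (FOCUSS-style) core that the abstract builds on; the lead field, regularization, and weighting scheme are assumptions for illustration, and the standardization and source-space shrinking steps that distinguish SSLOFO are not reproduced.

        import numpy as np

        def reweighted_min_norm(L, y, n_iter=10, lam=1e-3):
            # Generic FOCUSS-style iteration: each step solves a weighted minimum-norm
            # problem, with weights taken from the previous estimate so that strong
            # sources are reinforced and weak ones shrink toward zero.
            n_src = L.shape[1]
            w = np.ones(n_src)                  # start from an unweighted (smooth) estimate
            for _ in range(n_iter):
                W = np.diag(w)
                G = L @ W
                # Tikhonov-regularized weighted minimum-norm solution
                x = W @ G.T @ np.linalg.solve(G @ G.T + lam * np.eye(len(y)), y)
                w = np.abs(x)                   # re-weight by current source amplitudes
            return x

        # Toy example: 32 sensors, 500 candidate sources, 3 of them active (all values assumed).
        rng = np.random.default_rng(1)
        L = rng.standard_normal((32, 500))
        x_true = np.zeros(500)
        x_true[[40, 41, 300]] = [2.0, 1.5, -1.0]
        y = L @ x_true + 0.01 * rng.standard_normal(32)
        x_hat = reweighted_min_norm(L, y)
        print("largest recovered sources:", np.argsort(np.abs(x_hat))[-3:])
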

  9. A Study on Active Disaster Management System for Standardized Emergency Action Plan using BIM and Flood Damage Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Jeong, C.; Om, J.; Hwang, J.; Joo, K.; Heo, J.

    2013-12-01

    In recent years, the frequency of extreme floods has been increasing due to climate change and global warming. Severe flood damage is mainly caused by the collapse of flood control structures such as dams and dikes. To reduce these disasters, disaster management systems (DMS) built on flood forecasting, inundation mapping, and Emergency Action Plans (EAPs) have been studied. The estimation of inundation damage and a practical EAP are especially crucial to a DMS. However, it is difficult to predict inundation and take proper action through a DMS in a real emergency because the techniques for inundation damage estimation are not integrated and, in Korea, the EAP is supplied only as a paper document. In this study, an integrated simulation system including rainfall frequency analysis, rainfall-runoff modeling, inundation prediction, surface runoff analysis, and inland flood analysis was developed. Using this system coupled with standard GIS data, inundation damage can be estimated comprehensively and automatically. A standardized EAP based on BIM (Building Information Modeling) was also established in this system. It is therefore expected that inundation damage over the entire area, including buildings, can be predicted and managed.

  10. Redefining the Practice of Peer Review Through Intelligent Automation Part 1: Creation of a Standardized Methodology and Referenceable Database.

    PubMed

    Reiner, Bruce I

    2017-10-01

    Conventional peer review practice is compromised by a number of well-documented biases, which in turn limit standard of care analysis, which is fundamental to determination of medical malpractice. In addition to these intrinsic biases, other existing deficiencies exist in current peer review including the lack of standardization, objectivity, retrospective practice, and automation. An alternative model to address these deficiencies would be one which is completely blinded to the peer reviewer, requires independent reporting from both parties, utilizes automated data mining techniques for neutral and objective report analysis, and provides data reconciliation for resolution of finding-specific report differences. If properly implemented, this peer review model could result in creation of a standardized referenceable peer review database which could further assist in customizable education, technology refinement, and implementation of real-time context and user-specific decision support.

  11. Development of a Decision Model for Selection of Appropriate Timely Delivery Techniques for Highway Projects

    DOT National Transportation Integrated Search

    2009-04-01

    "The primary umbrella method used by the Oregon Department of Transportation (ODOT) to ensure on-time performance in standard construction contracting is liquidated damages. The assessment value is usually a matter of some judgment. In practice...

  12. Low-derivative operators of the Standard Model effective field theory via Hilbert series methods

    NASA Astrophysics Data System (ADS)

    Lehman, Landon; Martin, Adam

    2016-02-01

    In this work, we explore an extension of Hilbert series techniques to count operators that include derivatives. For sufficiently low-derivative operators, we conjecture an algorithm that gives the number of invariant operators, properly accounting for redundancies due to the equations of motion and integration by parts. Specifically, the conjectured technique can be applied whenever there is only one Lorentz invariant for a given partitioning of derivatives among the fields. At higher numbers of derivatives, equation of motion redundancies can be removed, but the increased number of Lorentz contractions spoils the subtraction of integration by parts redundancies. While restricted, this technique is sufficient to automatically recreate the complete set of invariant operators of the Standard Model effective field theory for dimensions 6 and 7 (for arbitrary numbers of flavors). At dimension 8, the algorithm does not automatically generate the complete operator set; however, it suffices for all but five classes of operators. For these remaining classes, there is a well-defined procedure to manually determine the number of invariants. Assuming our method is correct, we derive a set of 535 dimension-8 N_f = 1 operators.
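
    As a hedged, schematic reminder of the underlying machinery (generic Hilbert series counting, not the paper's derivative-handling refinement), the number of group invariants is generated by a Molien-Weyl type integral over the symmetry group built from a plethystic exponential:

        H(t) = \int_G d\mu_G(g)\; \mathrm{PE}\Big[\textstyle\sum_i t\,\chi_{R_i}(g)\Big], \qquad \mathrm{PE}\big[f(t,g)\big] = \exp\!\Big(\sum_{n\ge 1} \tfrac{1}{n} f(t^n, g^n)\Big)

    Here d\mu_G is the Haar measure, \chi_{R_i} the character of the representation carried by field i, and the grading variable t tracks operator dimension; expanding H(t) term by term counts independent invariants at each order (fermionic fields enter the plethystic exponential with alternating signs). Handling derivatives modulo equations of motion and integration by parts is exactly the refinement the abstract addresses.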

  13. New Methods in Tissue Engineering: Improved Models for Viral Infection.

    PubMed

    Ramanan, Vyas; Scull, Margaret A; Sheahan, Timothy P; Rice, Charles M; Bhatia, Sangeeta N

    2014-11-01

    New insights in the study of virus and host biology in the context of viral infection are made possible by the development of model systems that faithfully recapitulate the in vivo viral life cycle. Standard tissue culture models lack critical emergent properties driven by cellular organization and in vivo-like function, whereas animal models suffer from limited susceptibility to relevant human viruses and make it difficult to perform detailed molecular manipulation and analysis. Tissue engineering techniques may enable virologists to create infection models that combine the facile manipulation and readouts of tissue culture with the virus-relevant complexity of animal models. Here, we review the state of the art in tissue engineering and describe how tissue engineering techniques may alleviate some common shortcomings of existing models of viral infection, with a particular emphasis on hepatotropic viruses. We then discuss possible future applications of tissue engineering to virology, including current challenges and potential solutions.

  14. Dose to the contralateral breast: a comparison of two techniques using the enhanced dynamic wedge versus a standard wedge.

    PubMed

    Warlick, W B; O'Rear, J H; Earley, L; Moeller, J H; Gaffney, D K; Leavitt, D D

    1997-01-01

    The dose to the contralateral breast has been associated with an increased risk of developing a second breast malignancy. Varying techniques have been devised and described in the literature to minimize this dose. Metal beam modifiers such as standard wedges are used to improve the dose distribution in the treated breast, but unfortunately introduce an increased scatter dose outside the treatment field, in particular to the contralateral breast. The enhanced dynamic wedge is a means of remote wedging created by independently moving one collimator jaw through the treatment field during dose delivery. This study is an analysis of differing doses to the contralateral breast using two common clinical set-up techniques with the enhanced dynamic wedge versus the standard metal wedge. A tissue equivalent block (solid water), modeled to represent a typical breast outline, was designed as an insert in a Rando phantom to simulate a standard patient being treated for breast conservation. Tissue equivalent material was then used to complete the natural contour of the breast and to reproduce appropriate build-up and internal scatter. Thermoluminescent dosimeter (TLD) rods were placed at predetermined distances from the geometric beam's edge to measure the dose to the contralateral breast. A total of 35 locations were used with five TLDs in each location to verify the accuracy of the measured dose. The radiation techniques used were an isocentric set-up with co-planar, non divergent posterior borders and an isocentric set-up with a half beam block technique utilizing the asymmetric collimator jaw. Each technique used compensating wedges to optimize the dose distribution. A comparison of the dose to the contralateral breast was then made with the enhanced dynamic wedge vs. the standard metal wedge. The measurements revealed a significant reduction in the contralateral breast dose with the enhanced dynamic wedge compared to the standard metal wedge in both set-up techniques. The dose was measured at varying distances from the geometric field edge, ranging from 2 to 8 cm. The average dose with the enhanced dynamic wedge was 2.7-2.8%. The average dose with the standard wedge was 4.0-4.7%. Thermoluminescent dosimeter measurements suggest an increase in both scattered electrons and photons with metal wedges. The enhanced dynamic wedge is a practical clinical advance which improves the dose distribution in patients undergoing breast conservation while at the same time minimizing dose to the contralateral breast, thereby reducing the potential carcinogenic effects.

  15. Implementation of the US EPA (United States Environmental Protection Agency) Regional Oxidant Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novak, J.H.

    1984-05-01

    Model design, implementation and quality assurance procedures can have a significant impact on the effectiveness of long term utility of any modeling approach. The Regional Oxidant Modeling System (ROMS) is exceptionally complex because it treats all chemical and physical processes thought to affect ozone concentration on a regional scale. Thus, to effectively illustrate useful design and implementation techniques, this paper describes the general modeling framework which forms the basis of the ROMS. This framework is flexible enough to allow straightforward update or replacement of the chemical kinetics mechanism and/or any theoretical formulations of the physical processes. Use of the Jackson Structured Programming (JSP) method to implement this modeling framework has not only increased programmer productivity and quality of the resulting programs, but also has provided standardized program design, dynamic documentation, and easily maintainable and transportable code. A summary of the JSP method is presented to encourage modelers to pursue this technique in their own model development efforts. In addition, since data preparation is such an integral part of a successful modeling system, the ROMS processor network is described with emphasis on the internal quality control techniques.

  16. Artificial Intelligence Techniques for Predicting and Mapping Daily Pan Evaporation

    NASA Astrophysics Data System (ADS)

    Arunkumar, R.; Jothiprakash, V.; Sharma, Kirty

    2017-09-01

    In this study, Artificial Intelligence techniques such as Artificial Neural Network (ANN), Model Tree (MT) and Genetic Programming (GP) are used to develop daily pan evaporation time-series (TS) prediction and cause-effect (CE) mapping models. Ten years of observed daily meteorological data such as maximum temperature, minimum temperature, relative humidity, sunshine hours, dew point temperature and pan evaporation are used for developing the models. For each technique, several models are developed by changing the number of inputs and other model parameters. The performance of each model is evaluated using standard statistical measures such as Mean Square Error, Mean Absolute Error, Normalized Mean Square Error and correlation coefficient (R). The results showed that the daily TS-GP(4) model predicted better than the other TS models, with a correlation coefficient of 0.959. Among the CE models, CE-ANN (6-10-1) performed better than the MT and GP models, with a correlation coefficient of 0.881. Because of the complex non-linear inter-relationship among various meteorological variables, CE mapping models could not achieve the performance of TS models. From this study, it was found that GP performs better for recognizing a single pattern (time series modelling), whereas ANN is better for modelling multiple patterns (cause-effect modelling) in the data.
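
    Since the comparison rests on these standard statistical measures, a minimal Python sketch of how they might be computed is given below; the NMSE convention (normalization by the variance of the observations) and the sample values are assumptions, not data from the study.

        import numpy as np

        def evaluation_metrics(obs, pred):
            # Standard goodness-of-fit measures used to compare evaporation models.
            # NMSE is normalized here by the variance of the observations (one common convention).
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            err = pred - obs
            mse = np.mean(err ** 2)
            mae = np.mean(np.abs(err))
            nmse = mse / np.var(obs)
            r = np.corrcoef(obs, pred)[0, 1]
            return {"MSE": mse, "MAE": mae, "NMSE": nmse, "R": r}

        # Illustrative use with made-up daily pan evaporation values (mm/day).
        observed  = [4.1, 5.0, 3.8, 6.2, 5.5, 4.9]
        predicted = [4.3, 4.8, 4.0, 6.0, 5.9, 4.6]
        print(evaluation_metrics(observed, predicted))
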

  17. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise for the clusters obtained. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model.
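
    As a hedged sketch of the model class being compared (generic textbook notation, not necessarily the authors'), a two-component Poisson mixture regression with concomitant variables can be written

        P(y_i \mid \mathbf{x}_i, \mathbf{w}_i) = \sum_{k=1}^{2} \pi_k(\mathbf{w}_i)\,\frac{e^{-\lambda_{ik}}\,\lambda_{ik}^{\,y_i}}{y_i!}, \qquad \log\lambda_{ik} = \mathbf{x}_i^{\mathsf T}\boldsymbol\beta_k, \qquad \pi_k(\mathbf{w}_i) = \frac{e^{\mathbf{w}_i^{\mathsf T}\boldsymbol\gamma_k}}{\sum_m e^{\mathbf{w}_i^{\mathsf T}\boldsymbol\gamma_m}}

    where the concomitant variables w_i drive the component (cluster) membership probabilities and each component carries its own Poisson rate regression. The standard mixture fixes the \pi_k as constants, the zero-inflated variant replaces one component by a point mass at zero, and models fitted in this family are typically estimated by EM and compared through BIC, as in the abstract.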

  18. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise for the clusters obtained. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611

  19. Microfabricated multijunction thermal converters

    NASA Astrophysics Data System (ADS)

    Wunsch, Thomas Franzen

    2001-12-01

    In order to develop improved standards for the measurement of ac voltages and currents, a new thin-film fabrication technique for the multijunction thermal converter has been developed. The ability of a thermal converter to relate an rms ac voltage or current to a dc value is characterized by a quantity called 'ac-dc difference' that is ideally zero. The best devices produced using the new techniques have ac-dc differences below 1 × 10⁻⁶ in the range of frequencies from 20 Hz to 10 kHz and below 7.5 × 10⁻⁶ in the range of frequencies from 20 kHz to 300 kHz. This is a reduction of two orders of magnitude in the lower frequency range and one order of magnitude in the higher frequency range over devices produced at the National Institute of Standards and Technology in 1996. The performance achieved is competitive with the best techniques in the world for ac measurements and additional evaluation is therefore warranted to determine the suitability of the devices for use as national standards that form the legal basis for traceable rms voltage measurements of time-varying waveforms in the United States. The construction of the new devices is based on thin-film fabrication of a heated wire supported by a thermally isolated thin-film membrane. The membrane is produced utilizing a reactive ion plasma etch. A photoresist lift-off technique is used to pattern the metal thin-film layers that form the heater and the multijunction thermocouple circuit. The etching and lift-off allow the device to be produced without wet chemical etches that are time-consuming and impede the investigation of structures with differing materials. These techniques result in an approach to fabrication that is simple, inexpensive, and free from the manual construction techniques used in the fabrication of conventional single and multijunction thermoelements. Thermal, thermoelectric, and electrical models have been developed to facilitate designs that reduce the low-frequency error. At high frequencies, from 300 kHz to 1 MHz, the performance of the device is degraded by a capacitive coupling effect that produces an ac-dc difference of approximately -90 × 10⁻⁶ at 1 MHz. A model is developed that explains this behavior. The model shows that an improvement in performance in the high-frequency range is possible through the use of very high or very low resistivity silicon substrates.
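
    For reference, the figure of merit quoted throughout is the ac-dc transfer difference, conventionally defined (in the notation below, an assumption) as

        \delta_{\mathrm{ac\text{-}dc}} = \left.\frac{X_{\mathrm{ac}} - X_{\mathrm{dc}}}{X_{\mathrm{dc}}}\right|_{E_{\mathrm{out,ac}} = E_{\mathrm{out,dc}}}

    that is, the fractional difference between the rms ac input X_ac and the dc input X_dc that produce the same thermocouple output voltage. An ideal thermal converter has \delta = 0 at all frequencies, so the quoted values of order 10⁻⁶ express how closely the new devices approach that ideal.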

  20. Tensor-GMRES method for large sparse systems of nonlinear equations

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1994-01-01

    This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
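
    In the generic notation of the tensor-method literature (a hedged sketch, not the paper's exact statement), the local model at the current iterate x_c is

        M_T(x_c + d) = F(x_c) + F'(x_c)\,d + \tfrac{1}{2}\,T_c\, d\, d

    where F'(x_c) is the Jacobian and T_c is a low-rank third-order tensor chosen so that the model interpolates F at one or more previous iterates, supplying curvature information at little extra cost. The contribution of the tensor-GMRES method is to minimize this model with a Krylov (GMRES-based) projection, so that F'(x_c) never has to be factored or formed explicitly.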

  1. Use of a real-size 3D-printed model as a preoperative and intraoperative tool for minimally invasive plating of comminuted midshaft clavicle fractures.

    PubMed

    Kim, Hyong Nyun; Liu, Xiao Ning; Noh, Kyu Cheol

    2015-06-10

    Open reduction and plate fixation is the standard operative treatment for displaced midshaft clavicle fracture. However, sometimes it is difficult to achieve anatomic reduction by open reduction technique in cases with comminution. We describe a novel technique using a real-size three dimensionally (3D)-printed clavicle model as a preoperative and intraoperative tool for minimally invasive plating of displaced comminuted midshaft clavicle fractures. A computed tomography (CT) scan is taken of both clavicles in patients with a unilateral displaced comminuted midshaft clavicle fracture. Both clavicles are 3D printed into a real-size clavicle model. Using the mirror imaging technique, the uninjured side clavicle is 3D printed into the opposite side model to produce a suitable replica of the fractured side clavicle pre-injury. The 3D-printed fractured clavicle model allows the surgeon to observe and manipulate accurate anatomical replicas of the fractured bone to assist in fracture reduction prior to surgery. The 3D-printed uninjured clavicle model can be utilized as a template to select the anatomically precontoured locking plate which best fits the model. The plate can be inserted through a small incision and fixed with locking screws without exposing the fracture site. Seven comminuted clavicle fractures treated with this technique achieved good bone union. This technique can be used for a unilateral displaced comminuted midshaft clavicle fracture when it is difficult to achieve anatomic reduction by open reduction technique. Level of evidence V.

  2. The development of advanced manufacturing systems

    NASA Astrophysics Data System (ADS)

    Doumeingts, Guy; Vallespir, Bruno; Darricau, Didier; Roboam, Michel

    Various methods for the design of advanced manufacturing systems (AMSs) are reviewed. The specifications for AMSs and problems inherent in their development are first discussed. Three models, the Computer Aided Manufacturing-International model, the National Bureau of Standards model, and the GRAI model, are considered in detail. Hierarchical modeling tools such as structured analysis and design techniques, Petri nets, and the Icam definition method are used in the development of integrated manufacturing models. Finally, the GRAI method is demonstrated in the design of specifications for the production management system of the Snecma AMS.

  3. Topographic gravity modeling for global Bouguer maps to degree 2160: Validation of spectral and spatial domain forward modeling techniques at the 10 microGal level

    NASA Astrophysics Data System (ADS)

    Hirt, Christian; Reußner, Elisabeth; Rexer, Moritz; Kuhn, Michael

    2016-09-01

    Over the past years, spectral techniques have become a standard to model Earth's global gravity field to 10 km scales, with the EGM2008 geopotential model being a prominent example. For some geophysical applications of EGM2008, particularly Bouguer gravity computation with spectral techniques, a topographic potential model of adequate resolution is required. However, current topographic potential models have not yet been successfully validated to degree 2160, and notable discrepancies between spectral modeling and Newtonian (numerical) integration well beyond the 10 mGal level have been reported. Here we accurately compute and validate gravity implied by a degree 2160 model of Earth's topographic masses. Our experiments are based on two key strategies, both of which require advanced computational resources. First, we construct a spectrally complete model of the gravity field which is generated by the degree 2160 Earth topography model. This involves expansion of the topographic potential to the 15th integer power of the topography and modeling of short-scale gravity signals to an ultrahigh degree of 21,600, translating into unprecedented fine scales of 1 km. Second, we apply Newtonian integration in the space domain with high spatial resolution to reduce discretization errors. Our numerical study demonstrates excellent agreement (8 μGal RMS) between gravity from both forward modeling techniques and provides insight into the convergence process associated with spectral modeling of gravity signals at very short scales (few km). As a key conclusion, our work successfully validates the spectral domain forward modeling technique for degree 2160 topography and increases the confidence in new high-resolution global Bouguer gravity maps.

  4. Frequency domain model for analysis of paralleled, series-output-connected Mapham inverters

    NASA Technical Reports Server (NTRS)

    Brush, Andrew S.; Sundberg, Richard C.; Button, Robert M.

    1989-01-01

    The Mapham resonant inverter is characterized as a two-port network driven by a selected periodic voltage. The two-port model is then used to model a pair of Mapham inverters connected in series and employing phasor voltage regulation. It is shown that the model is useful for predicting power output in paralleled inverter units, and for predicting harmonic current output of inverter pairs, using standard power flow techniques. Some sample results are compared to data obtained from testing hardware inverters.

  5. Frequency domain model for analysis of paralleled, series-output-connected Mapham inverters

    NASA Technical Reports Server (NTRS)

    Brush, Andrew S.; Sundberg, Richard C.; Button, Robert M.

    1989-01-01

    The Mapham resonant inverter is characterized as a two-port network driven by a selected periodic voltage. The two-port model is then used to model a pair of Mapham inverters connected in series and employing phasor voltage regulation. It is shown that the model is useful for predicting power output in paralleled inverter units, and for predicting harmonic current output of inverter pairs, using standard power flow techniques. Some examples are compared to data obtained from testing hardware inverters.

  6. Standardized pivot shift test improves measurement accuracy.

    PubMed

    Hoshino, Yuichi; Araujo, Paulo; Ahlden, Mattias; Moore, Charity G; Kuroda, Ryosuke; Zaffagnini, Stefano; Karlsson, Jon; Fu, Freddie H; Musahl, Volker

    2012-04-01

    The variability of the pivot shift test techniques greatly interferes with achieving a quantitative and generally comparable measurement. The purpose of this study was to compare the variation of the quantitative pivot shift measurements with different surgeons' preferred techniques to a standardized technique. The hypothesis was that standardizing the pivot shift test would improve consistency in the quantitative evaluation when compared with surgeon-specific techniques. A whole lower body cadaveric specimen was prepared to have a low-grade pivot shift on one side and high-grade pivot shift on the other side. Twelve expert surgeons performed the pivot shift test using (1) their preferred technique and (2) a standardized technique. Electromagnetic tracking was utilized to measure anterior tibial translation and acceleration of the reduction during the pivot shift test. The variation of the measurement was compared between the surgeons' preferred technique and the standardized technique. The anterior tibial translation during pivot shift test was similar between using surgeons' preferred technique (left 24.0 ± 4.3 mm; right 15.5 ± 3.8 mm) and using standardized technique (left 25.1 ± 3.2 mm; right 15.6 ± 4.0 mm; n.s.). However, the variation in acceleration was significantly smaller with the standardized technique (left 3.0 ± 1.3 mm/s²; right 2.5 ± 0.7 mm/s²) compared with the surgeons' preferred technique (left 4.3 ± 3.3 mm/s²; right 3.4 ± 2.3 mm/s²; both P < 0.01). Standardizing the pivot shift test maneuver provides a more consistent quantitative evaluation and may be helpful in designing future multicenter clinical outcome trials. Diagnostic study, Level I.

  7. Study of Vis/NIR spectroscopy measurement on acidity of yogurt

    NASA Astrophysics Data System (ADS)

    He, Yong; Feng, Shuijuan; Wu, Di; Li, Xiaoli

    2006-09-01

    A fast method for measuring the pH of yogurt using Vis/NIR spectroscopy was established in order to assess yogurt acidity rapidly. Twenty-seven samples drawn from five different brands of yogurt were measured by Vis/NIR spectroscopy, and the pH at the positions scanned by the spectrometer was measured with a pH meter. A calibration model relating pH to the Vis/NIR spectra was developed with partial least squares (PLS) regression using Unscrambler V9.2, and 25 unknown samples from the five brands were then predicted with this model. The correlation coefficient of the PLS calibration was greater than 0.890, with a standard error of calibration (SEC) of 0.037 and a standard error of prediction (SEP) of 0.043. For the 25 predicted samples, the correlation coefficient between predicted and measured pH exceeded 0.918, indicating good to excellent prediction performance. It was concluded that the Vis/NIR spectroscopy technique can measure the pH of yogurt quickly and accurately, establishing a new method for the measurement of yogurt acidity.
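
    The paper's calibration was built in Unscrambler, but the same PLS calibration/validation workflow can be sketched in Python with scikit-learn; the spectra below are synthetic placeholders and the number of latent variables is an assumption.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Illustrative stand-ins for the measured data: rows are yogurt samples,
        # columns are absorbance values across Vis/NIR wavelengths (all values synthetic).
        rng = np.random.default_rng(0)
        X_cal, y_cal = rng.random((27, 200)), 3.9 + 0.4 * rng.random(27)   # calibration set
        X_val, y_val = rng.random((25, 200)), 3.9 + 0.4 * rng.random(25)   # prediction set

        pls = PLSRegression(n_components=8)     # number of latent variables is an assumption
        pls.fit(X_cal, y_cal)

        def rmse(y, y_hat):
            # SEC/SEP computed here simply as root-mean-square errors.
            return float(np.sqrt(np.mean((np.ravel(y_hat) - y) ** 2)))

        sec = rmse(y_cal, pls.predict(X_cal))   # standard error of calibration
        sep = rmse(y_val, pls.predict(X_val))   # standard error of prediction
        r = np.corrcoef(y_val, np.ravel(pls.predict(X_val)))[0, 1]
        print(f"SEC={sec:.3f}  SEP={sep:.3f}  R={r:.3f}")
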

  8. Science and Technology Highlights | NREL

    Science.gov Websites

    Website highlight excerpts: NREL's efforts to standardize techniques for bio-oil analysis inform enhanced modeling capability and affordable methods to increase energy efficiency (leads to enhanced upgrading methods). December 2012: NREL meets performance demands of advanced lithium-ion batteries; novel surface modification methods are…

  9. Robotic and endoscopic transaxillary thyroidectomies may be cost prohibitive when compared to standard cervical thyroidectomy: a cost analysis.

    PubMed

    Cabot, Jennifer C; Lee, Cho Rok; Brunaud, Laurent; Kleiman, David A; Chung, Woong Youn; Fahey, Thomas J; Zarnegar, Rasa

    2012-12-01

    This study presents a cost analysis of the standard cervical, gasless transaxillary endoscopic, and gasless transaxillary robotic thyroidectomy approaches based on medical costs in the United States. A retrospective review of 140 patients who underwent standard cervical, transaxillary endoscopic, or transaxillary robotic thyroidectomy at 2 tertiary centers was conducted. The cost model included operating room charges, anesthesia fee, consumables cost, equipment depreciation, and maintenance cost. Sensitivity analyses assessed individual cost variables. The mean operative times for the standard cervical, transaxillary endoscopic, and transaxillary robotic approaches were 121 ± 18.9, 185 ± 26.0, and 166 ± 29.4 minutes, respectively. The total cost for the standard cervical, transaxillary endoscopic, and transaxillary robotic approaches were $9,028 ± $891, $12,505 ± $1,222, and $13,670 ± $1,384, respectively. Transaxillary approaches were significantly more expensive than the standard cervical technique (standard cervical/transaxillary endoscopic, P < .0001; standard cervical/transaxillary robotic, P < .0001; and transaxillary endoscopic/transaxillary robotic, P = .001). The transaxillary and standard cervical techniques became equivalent in cost when transaxillary endoscopic operative time decreased to 111 minutes and transaxillary robotic operative time decreased to 68 minutes. Increasing the case load did not resolve the cost difference. Transaxillary endoscopic and transaxillary robotic thyroidectomies are significantly more expensive than the standard cervical approach. Decreasing operative times reduces this cost difference. The greater expense may be prohibitive in countries with a flat reimbursement schedule. Copyright © 2012 Mosby, Inc. All rights reserved.

  10. A low-cost three-dimensional laser surface scanning approach for defining body segment parameters.

    PubMed

    Pandis, Petros; Bull, Anthony Mj

    2017-11-01

    Body segment parameters are used in many different applications in ergonomics as well as in dynamic modelling of the musculoskeletal system. Body segment parameters can be defined using different methods, including techniques that involve time-consuming manual measurements of the human body, used in conjunction with models or equations. In this study, a scanning technique for measuring subject-specific body segment parameters in an easy, fast, accurate and low-cost way was developed and validated. The scanner can obtain the body segment parameters in a single scanning operation, which takes between 8 and 10 s. The results obtained with the system show a standard deviation of 2.5% in volumetric measurements of the upper limb of a mannequin and 3.1% difference between scanning volume and actual volume. Finally, the maximum mean error for the moment of inertia by scanning a standard-sized homogeneous object was 2.2%. This study shows that a low-cost system can provide quick and accurate subject-specific body segment parameter estimates.

  11. Numerical simulation of steady cavitating flow of viscous fluid in a Francis hydroturbine

    NASA Astrophysics Data System (ADS)

    Panov, L. V.; Chirkov, D. V.; Cherny, S. G.; Pylev, I. M.; Sotnikov, A. A.

    2012-09-01

    A numerical technique was developed for simulating cavitating flows through the flow passage of a hydraulic turbine. The technique is based on the solution of the steady 3D Navier-Stokes equations with a liquid phase transfer equation. An approach for setting boundary conditions that meets the requirements of the cavitation testing standard was suggested. Four different models of evaporation and condensation were compared. Numerical simulations for turbines of different specific speeds were compared with experiment.

  12. A technique for estimating dry deposition velocities based on similarity with latent heat flux

    NASA Astrophysics Data System (ADS)

    Pleim, Jonathan E.; Finkelstein, Peter L.; Clarke, John F.; Ellestad, Thomas G.

    Field measurements of chemical dry deposition are needed to assess impacts and trends of airborne contaminants on the exposure of crops and unmanaged ecosystems as well as for the development and evaluation of air quality models. However, accurate measurements of dry deposition velocities require expensive eddy correlation measurements and can only be practically made for a few chemical species such as O3 and CO2. On the other hand, operational dry deposition measurements such as those used in large area networks involve relatively inexpensive standard meteorological and chemical measurements but rely on less accurate deposition velocity models. This paper describes an intermediate technique which can give accurate estimates of dry deposition velocity for chemical species which are dominated by stomatal uptake such as O3 and SO2. This method can give results that are nearly the quality of eddy correlation measurements of trace gas fluxes at much lower cost. The concept is that bulk stomatal conductance can be accurately estimated from measurements of latent heat flux combined with standard meteorological measurements of humidity, temperature, and wind speed. The technique is tested using data from a field experiment where high quality eddy correlation measurements were made over soybeans. Over a four month period, which covered the entire growth cycle, this technique showed very good agreement with eddy correlation measurements for O3 deposition velocity.
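
    In the resistance-analogy framework that such inferential estimates typically rest on (a schematic sketch; the paper's exact parameterization may differ), the deposition velocity and the latent-heat-derived surface resistance can be written

        V_d = \frac{1}{R_a + R_b + R_c}, \qquad R_{st} \approx \frac{\rho\, c_p}{\gamma}\,\frac{e_s(T_s) - e_a}{LE} - R_a - R_b

    where R_a is the aerodynamic resistance, R_b the quasi-laminar boundary-layer resistance, and R_c the bulk surface resistance, approximated by the bulk stomatal resistance R_st for gases dominated by stomatal uptake. R_st is inferred by inverting a Penman-Monteith-style expression for the measured latent heat flux LE (with e_s(T_s) - e_a the surface-to-air vapor pressure deficit and γ the psychrometric constant), and is then rescaled by the ratio of molecular diffusivities of water vapor and the gas of interest before being inserted into V_d.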

  13. Comparison of a new noncoplanar intensity-modulated radiation therapy technique for craniospinal irradiation with 3 coplanar techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Anders T., E-mail: andehans@rm.dk; Lukacova, Slavka; Lassen-Ramshad, Yasmin

    2015-01-01

    When standard conformal x-ray technique for craniospinal irradiation is used, it is a challenge to achieve satisfactory dose coverage of the target, including the area of the cribriform plate, while sparing organs at risk. We present a new intensity-modulated radiation therapy (IMRT), noncoplanar technique, for delivering irradiation to the cranial part and compare it with 3 other techniques and previously published results. A total of 13 patients who had previously received craniospinal irradiation with standard conformal x-ray technique were reviewed. New treatment plans were generated for each patient using the noncoplanar IMRT-based technique, a coplanar IMRT-based technique, and a coplanar volumetric-modulated arc therapy (VMAT) technique. Dosimetry data for all patients were compared with the corresponding data from the conventional treatment plans. The new noncoplanar IMRT technique substantially reduced the mean dose to organs at risk compared with the standard radiation technique. The 2 other coplanar techniques also reduced the mean dose to some of the critical organs. However, this reduction was not as substantial as the reduction obtained by the noncoplanar technique. Furthermore, compared with the standard technique, the IMRT techniques reduced the total calculated radiation dose that was delivered to the normal tissue, whereas the VMAT technique increased this dose. Additionally, the coverage of the target was significantly improved by the noncoplanar IMRT technique. Compared with the standard technique, the coplanar IMRT and the VMAT technique did not improve the coverage of the target significantly. All the new planning techniques increased the number of monitor units (MU) used, the noncoplanar IMRT technique by 99%, the coplanar IMRT technique by 122%, and the VMAT technique by 26%, causing concern for leak radiation. The noncoplanar IMRT technique covered the target better and decreased doses to organs at risk compared with the other techniques. All the new techniques increased the number of MU compared with the standard technique.

  14. Broadband moth-eye antireflection coatings on silicon

    NASA Astrophysics Data System (ADS)

    Sun, Chih-Hung; Jiang, Peng; Jiang, Bin

    2008-02-01

    We report a bioinspired templating technique for fabricating broadband antireflection coatings that mimic antireflective moth eyes. Wafer-scale, subwavelength-structured nipple arrays are directly patterned on silicon using spin-coated silica colloidal monolayers as etching masks. The templated gratings exhibit excellent broadband antireflection properties, and the normal-incidence specular reflection matches the theoretical prediction using a rigorous coupled-wave analysis (RCWA) model. We further demonstrate that two common simulation methods, RCWA and thin-film multilayer models, generate almost identical predictions for the templated nipple arrays. This simple bottom-up technique is compatible with standard microfabrication, promising for reducing the manufacturing cost of crystalline silicon solar cells.
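
    The thin-film multilayer model mentioned above can be sketched with the standard normal-incidence characteristic-matrix (transfer-matrix) method. The graded-index profile below is only a crude stand-in for a moth-eye layer, and the index values, thicknesses and wavelengths are illustrative assumptions, not parameters from the paper.

      import numpy as np

      def reflectance(n_layers, d_layers, n_substrate, wavelengths, n_ambient=1.0):
          """Normal-incidence reflectance of a thin-film stack (characteristic-matrix method)."""
          R = []
          for lam in wavelengths:
              M = np.eye(2, dtype=complex)
              for n, d in zip(n_layers, d_layers):
                  delta = 2 * np.pi * n * d / lam        # phase thickness of the layer
                  M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                    [1j * n * np.sin(delta), np.cos(delta)]])
              B, C = M @ np.array([1.0, n_substrate])
              r = (n_ambient * B - C) / (n_ambient * B + C)
              R.append(abs(r) ** 2)
          return np.array(R)

      # crude graded-index stand-in for a moth-eye layer: index ramps from ~1 to ~3.5 (silicon)
      n_profile = np.linspace(1.05, 3.4, 20)
      d_profile = np.full(20, 300.0 / 20)          # 300 nm total thickness, in nm
      wl = np.linspace(400, 1000, 7)               # wavelengths in nm
      print(reflectance(n_profile, d_profile, 3.5, wl))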

  15. An introduction to chaotic and random time series analysis

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.

    1989-01-01

    The origin of chaotic behavior and the relation of chaos to randomness are explained. Two mathematical results are described: (1) a representation theorem guarantees the existence of a specific time-domain model for chaos and addresses the relation between chaotic, random, and strictly deterministic processes; (2) a theorem assures that information on the behavior of a physical system in its complete state space can be extracted from time-series data on a single observable. Focus is placed on an important connection between the dynamical state space and an observable time series. These two results lead to a practical deconvolution technique combining standard random process modeling methods with new embedding techniques.
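
    The second result is the basis of delay embedding, which reconstructs state-space vectors from a single observable. A minimal sketch follows; the logistic map is used as a stand-in chaotic observable, and the embedding dimension and delay are illustrative choices.

      import numpy as np

      def delay_embed(x, dim, tau):
          """Reconstruct state-space vectors from a scalar time series (delay embedding)."""
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

      # scalar observable from a simple chaotic system: the logistic map
      x = np.empty(2000)
      x[0] = 0.4
      for t in range(1999):
          x[t + 1] = 3.99 * x[t] * (1.0 - x[t])

      vectors = delay_embed(x, dim=3, tau=1)
      print(vectors.shape)        # (1998, 3) reconstructed state vectors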

  16. Learning in Stochastic Bit Stream Neural Networks.

    PubMed

    van Daalen, Max; Shawe-Taylor, John; Zhao, Jieyu

    1996-08-01

    This paper presents learning techniques for a novel feedforward stochastic neural network. The model uses stochastic weights and the "bit stream" data representation. It has a clean, analysable functionality and is very attractive with its great potential to be implemented in hardware using standard digital VLSI technology. The design allows simulation at three different levels, and learning techniques are described for each level. The lowest level corresponds to on-chip learning. Simulation results on three benchmark MONK's problems and on handwritten digit recognition with a clean set of 500 16 x 16 pixel digits demonstrate that the new model is powerful enough for real-world applications.
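
    For readers unfamiliar with the bit-stream representation, the sketch below shows the basic idea of stochastic computing that such hardware exploits: values are encoded as Bernoulli bit streams, an AND gate multiplies the encoded values, and a multiplexer performs scaled addition. This illustrates the representation only, not the published learning algorithm, and the stream length and probabilities are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 100_000                       # stream length controls precision

      def to_stream(p):
          """Encode a probability p in [0, 1] as a Bernoulli bit stream."""
          return rng.random(N) < p

      a, b = to_stream(0.8), to_stream(0.3)

      product = a & b                       # AND gate multiplies the encoded values
      select = to_stream(0.5)
      scaled_sum = np.where(select, a, b)   # multiplexer computes (a + b) / 2

      print(product.mean())                 # ~ 0.24
      print(scaled_sum.mean())              # ~ 0.55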

  17. A proposed configuration for a stepped specimen to be used in the systematic evaluation of factors influencing warpage in metallic alloys being used for cryogenic wind tunnel models

    NASA Technical Reports Server (NTRS)

    Wigley, D. A.

    1982-01-01

    A proposed configuration for a stepped specimen to be used in the systematic evaluation of mechanisms that can introduce warpage or dimensional changes in metallic alloys used for cryogenic wind tunnel models is described. Considerations for selecting a standard specimen are presented along with results obtained from an investigation carried out for VASCOMAX 200 maraging steel. Details of the machining and measurement techniques utilized in the investigation are presented. Initial results from the sample of VASCOMAX 200 show that the configuration and measuring techniques are capable of giving quantitative results.

  18. New Methods in Tissue Engineering

    PubMed Central

    Sheahan, Timothy P.; Rice, Charles M.; Bhatia, Sangeeta N.

    2015-01-01

    New insights in the study of virus and host biology in the context of viral infection are made possible by the development of model systems that faithfully recapitulate the in vivo viral life cycle. Standard tissue culture models lack critical emergent properties driven by cellular organization and in vivo–like function, whereas animal models suffer from limited susceptibility to relevant human viruses and make it difficult to perform detailed molecular manipulation and analysis. Tissue engineering techniques may enable virologists to create infection models that combine the facile manipulation and readouts of tissue culture with the virus-relevant complexity of animal models. Here, we review the state of the art in tissue engineering and describe how tissue engineering techniques may alleviate some common shortcomings of existing models of viral infection, with a particular emphasis on hepatotropic viruses. We then discuss possible future applications of tissue engineering to virology, including current challenges and potential solutions. PMID:25893203

  19. Wang-Landau method for calculating Rényi entropies in finite-temperature quantum Monte Carlo simulations.

    PubMed

    Inglis, Stephen; Melko, Roger G

    2013-01-01

    We implement a Wang-Landau sampling technique in quantum Monte Carlo (QMC) simulations for the purpose of calculating the Rényi entanglement entropies and associated mutual information. The algorithm converges to an estimate of an analog of the density of states for stochastic series expansion QMC, allowing a direct calculation of Rényi entropies without explicit thermodynamic integration. We benchmark results for the mutual information on two-dimensional (2D) isotropic and anisotropic Heisenberg models, a 2D transverse field Ising model, and a three-dimensional Heisenberg model, confirming a critical scaling of the mutual information in cases with a finite-temperature transition. We discuss the benefits and limitations of broad sampling techniques compared to standard importance sampling methods.
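
    As a point of reference for the broad-sampling idea, the sketch below implements classical Wang-Landau sampling of the density of states for a small 2D Ising model. This is the textbook classical algorithm, not the finite-temperature QMC Rényi-entropy variant of the paper, and the lattice size, flatness criterion and stopping threshold are illustrative choices.

      import numpy as np

      rng = np.random.default_rng(0)
      L = 4                      # small lattice so the energy range stays tiny
      N = L * L
      spins = rng.choice([-1, 1], size=(L, L))

      def total_energy(s):
          # nearest-neighbour Ising energy with periodic boundaries, J = 1
          return -np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))

      # possible energies are -2N, -2N+4, ..., 2N; index them by (E + 2N) // 4
      n_bins = N + 1
      def bin_of(E):
          return (E + 2 * N) // 4

      ln_g = np.zeros(n_bins)    # running estimate of ln g(E)
      hist = np.zeros(n_bins)
      ln_f = 1.0                 # modification factor, reduced until small
      E = total_energy(spins)

      while ln_f > 1e-3:
          for _ in range(5000):
              i, j = rng.integers(0, L, size=2)
              dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
              new_E = E + dE
              # Wang-Landau acceptance: favour rarely visited energies
              if np.log(rng.random()) < ln_g[bin_of(E)] - ln_g[bin_of(new_E)]:
                  spins[i, j] *= -1
                  E = new_E
              ln_g[bin_of(E)] += ln_f
              hist[bin_of(E)] += 1
          # crude flatness check on the bins visited so far
          visited = hist > 0
          if hist[visited].min() > 0.8 * hist[visited].mean():
              hist[:] = 0
              ln_f /= 2.0

      print("estimated ln g(E), shifted so the ground state has ln g = log(2):")
      print(ln_g[visited] - ln_g[0] + np.log(2))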

  20. Microscopic Shell Model Calculations for sd-Shell Nuclei

    NASA Astrophysics Data System (ADS)

    Barrett, Bruce R.; Dikmen, Erdal; Maris, Pieter; Shirokov, Andrey M.; Smirnova, Nadya A.; Vary, James P.

    Several techniques now exist for performing detailed and accurate calculations of the structure of light nuclei, i.e., A ≤ 16. Going to heavier nuclei requires new techniques or extensions of old ones. One of these is the so-called No Core Shell Model (NCSM) with a Core approach, which involves an Okubo-Lee-Suzuki (OLS) transformation of a converged NCSM result into a single major shell, such as the sd-shell. The obtained effective two-body matrix elements can be separated into core and single-particle (s.p.) energies plus residual two-body interactions, which can be used for performing standard shell-model (SSM) calculations. As an example, an application of this procedure will be given for nuclei at the beginning of the sd-shell.

  1. Multi-sensor Improved Sea-Surface Temperature (MISST) for IOOS - Navy Component

    DTIC Science & Technology

    2013-09-30

    application and data fusion techniques. 2. Parameterization of IR and MW retrieval differences, with consideration of diurnal warming and cool-skin effects...associated retrieval confidence, standard deviation (STD), and diurnal warming estimates to the application user community in the new GDS 2.0 GHRSST...including coral reefs, ocean modeling in the Gulf of Mexico, improved lake temperatures, numerical data assimilation by ocean models, numerical

  2. Assessment of normal tissue complications following prostate cancer irradiation: Comparison of radiation treatment modalities using NTCP models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takam, Rungdham; Bezak, Eva; Yeoh, Eric E.

    2010-09-15

    Purpose: Normal tissue complication probability (NTCP) of the rectum, bladder, urethra, and femoral heads following several techniques for radiation treatment of prostate cancer were evaluated applying the relative seriality and Lyman models. Methods: Model parameters from literature were used in this evaluation. The treatment techniques included external (standard fractionated, hypofractionated, and dose-escalated) three-dimensional conformal radiotherapy (3D-CRT), low-dose-rate (LDR) brachytherapy (I-125 seeds), and high-dose-rate (HDR) brachytherapy (Ir-192 source). Dose-volume histograms (DVHs) of the rectum, bladder, and urethra retrieved from corresponding treatment planning systems were converted to biological effective dose-based and equivalent dose-based DVHs, respectively, in order to account for differences in radiation treatment modality and fractionation schedule. Results: Results indicated that with hypofractionated 3D-CRT (20 fractions of 2.75 Gy/fraction delivered five times/week to total dose of 55 Gy), NTCP of the rectum, bladder, and urethra were less than those for standard fractionated 3D-CRT using a four-field technique (32 fractions of 2 Gy/fraction delivered five times/week to total dose of 64 Gy) and dose-escalated 3D-CRT. Rectal and bladder NTCPs (5.2% and 6.6%, respectively) following the dose-escalated four-field 3D-CRT (2 Gy/fraction to total dose of 74 Gy) were the highest among analyzed treatment techniques. The average NTCP for the rectum and urethra were 0.6% and 24.7% for LDR-BT and 0.5% and 11.2% for HDR-BT. Conclusions: Although brachytherapy techniques resulted in delivering larger equivalent doses to normal tissues, the corresponding NTCPs were, except for the urethra, lower than those of the external beam techniques because much smaller volumes were irradiated to higher doses. Among analyzed normal tissues, the femoral heads were found to have the lowest probability of complications, as most of their volume was irradiated to lower equivalent doses compared to other tissues.
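
    For reference, the Lyman model used in such comparisons is commonly written (with the usual power-law volume dependence) as:

      \mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^{2}/2}\, dx,
      \qquad t = \frac{D - TD_{50}(v)}{m\, TD_{50}(v)},
      \qquad TD_{50}(v) = TD_{50}(1)\, v^{-n}

    Here D is the (uniform equivalent) dose, v the partial volume, and m, n and TD50(1) are tissue-specific parameters taken from the literature.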

  3. Search for the Standard Model Higgs boson decaying into bb̄ produced in association with top quarks decaying hadronically in pp collisions at √s = 8 TeV with the ATLAS detector

    NASA Astrophysics Data System (ADS)

    Aad, G.; Abbott, B.; Abdallah, J.; Abdinov, O.; Abeloos, B.; Aben, R.; Abolins, M.; AbouZeid, O. S.; Abraham, N. L.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Adelman, J.; Adomeit, S.; Adye, T.; Affolder, A. A.; Agatonovic-Jovin, T.; Agricola, J.; Aguilar-Saavedra, J. A.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akerstedt, H.; Åkesson, T. P. A.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albrand, S.; Alconada Verzini, M. J.; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexopoulos, T.; Alhroob, M.; Aliev, M.; Alimonti, G.; Alison, J.; Alkire, S. P.; Allbrooke, B. M. M.; Allen, B. W.; Allport, P. P.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Alstaty, M.; Alvarez Gonzalez, B.; Álvarez Piqueras, D.; Alviggi, M. G.; Amadio, B. T.; Amako, K.; Amaral Coutinho, Y.; Amelung, C.; Amidei, D.; Amor Dos Santos, S. P.; Amorim, A.; Amoroso, S.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anders, J. K.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Angelidakis, S.; Angelozzi, I.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antonelli, M.; Antonov, A.; Antos, J.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Arabidze, G.; Aracena, I.; Arai, Y.; Araque, J. P.; Arce, A. T. H.; Arduh, F. A.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Armitage, L. J.; Arnaez, O.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Artz, S.; Asai, S.; Asbah, N.; Ashkenazi, A.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Augsten, K.; Avolio, G.; Axen, B.; Ayoub, M. K.; Azuelos, G.; Baak, M. A.; Baas, A. E.; Baca, M. J.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Baines, J. T.; Baker, O. K.; Baldin, E. M.; Balek, P.; Balestri, T.; Balli, F.; Balunas, W. K.; Banas, E.; Banerjee, Sw.; Bannoura, A. A. E.; Barak, L.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barklow, T.; Barlow, N.; Barnes, S. L.; Barnett, B. M.; Barnett, R. M.; Barnovska, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barranco Navarro, L.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Basalaev, A.; Bassalat, A.; Bates, R. L.; Batista, S. J.; Batley, J. R.; Battaglia, M.; Bauce, M.; Bauer, F.; Bawa, H. S.; Beacham, J. B.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, M.; Beckingham, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bednyakov, V. A.; Bedognetti, M.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, J. K.; Belanger-Champagne, C.; Bell, A. S.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Belyaev, N. L.; Benary, O.; Benchekroun, D.; Bender, M.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez, J.; Benitez Garcia, J. A.; Benjamin, D. P.; Bensinger, J. R.; Bentvelsen, S.; Beresford, L.; Beretta, M.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Beringer, J.; Berlendis, S.; Bernard, N. R.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertolucci, F.; Bertram, I. A.; Bertsche, C.; Bertsche, D.; Besjes, G. J.; Bessidskaia Bylund, O.; Bessner, M.; Besson, N.; Betancourt, C.; Bethke, S.; Bevan, A. J.; Bhimji, W.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Biedermann, D.; Bielski, R.; Biesuz, N. 
V.; Biglietti, M.; Bilbao De Mendizabal, J.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Biondi, S.; Bjergaard, D. M.; Black, C. W.; Black, J. E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J.-B.; Blanco, J. E.; Blazek, T.; Bloch, I.; Blocker, C.; Blum, W.; Blumenschein, U.; Blunier, S.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boehler, M.; Boerner, D.; Bogaerts, J. A.; Bogavac, D.; Bogdanchikov, A. G.; Bohm, C.; Boisvert, V.; Bold, T.; Boldea, V.; Boldyrev, A. S.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Bortfeldt, J.; Bortoletto, D.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Bossio Sola, J. D.; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Boutle, S. K.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Breaden Madden, W. D.; Brendlinger, K.; Brennan, A. J.; Brenner, L.; Brenner, R.; Bressler, S.; Bristow, T. M.; Britton, D.; Britzger, D.; Brochu, F. M.; Brock, I.; Brock, R.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Broughton, J. H.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Bruni, A.; Bruni, G.; Brunt, BH; Bruschi, M.; Bruscino, N.; Bryant, P.; Bryngemark, L.; Buanes, T.; Buat, Q.; Buchholz, P.; Buckley, A. G.; Budagov, I. A.; Buehrer, F.; Bugge, M. K.; Bulekov, O.; Bullock, D.; Burckhart, H.; Burdin, S.; Burgard, C. D.; Burghgrave, B.; Burka, K.; Burke, S.; Burmeister, I.; Busato, E.; Büscher, D.; Büscher, V.; Bussey, P.; Butler, J. M.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Buzykaev, A. R.; Cabrera Urbán, S.; Caforio, D.; Cairo, V. M.; Cakir, O.; Calace, N.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Caloba, L. P.; Calvet, D.; Calvet, S.; Calvet, T. P.; Camacho Toro, R.; Camarda, S.; Camarri, P.; Cameron, D.; Caminal Armadans, R.; Camincher, C.; Campana, S.; Campanelli, M.; Camplani, A.; Campoverde, A.; Canale, V.; Canepa, A.; Cano Bret, M.; Cantero, J.; Cantrill, R.; Cao, T.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Carbone, R. M.; Cardarelli, R.; Cardillo, F.; Carli, I.; Carli, T.; Carlino, G.; Carminati, L.; Caron, S.; Carquin, E.; Carrillo-Montoya, G. D.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Casolino, M.; Casper, D. W.; Castaneda-Miranda, E.; Castelli, A.; Castillo Gimenez, V.; Castro, N. F.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Caudron, J.; Cavaliere, V.; Cavallaro, E.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerda Alberich, L.; Cerio, B. C.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cerv, M.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chan, S. K.; Chan, Y. L.; Chang, P.; Chapman, J. D.; Charlton, D. G.; Chatterjee, A.; Chau, C. C.; Chavez Barajas, C. A.; Che, S.; Cheatham, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, H.; Chen, K.; Chen, S.; Chen, S.; Chen, X.; Chen, Y.; Cheng, H. C.; Cheng, H. J.; Cheng, Y.; Cheplakov, A.; Cheremushkina, E.; Cherkaoui El Moursli, R.; Chernyatin, V.; Cheu, E.; Chevalier, L.; Chiarella, V.; Chiarelli, G.; Chiodini, G.; Chisholm, A. S.; Chitan, A.; Chizhov, M. V.; Choi, K.; Chomont, A. R.; Chouridou, S.; Chow, B. K. B.; Christodoulou, V.; Chromek-Burckhart, D.; Chudoba, J.; Chuinard, A. J.; Chwastowski, J. J.; Chytka, L.; Ciapetti, G.; Ciftci, A. 
K.; Cinca, D.; Cindro, V.; Cioara, I. A.; Ciocio, A.; Cirotto, F.; Citron, Z. H.; Ciubancan, M.; Clark, A.; Clark, B. L.; Clark, M. R.; Clark, P. J.; Clarke, R. N.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coffey, L.; Colasurdo, L.; Cole, B.; Cole, S.; Colijn, A. P.; Collot, J.; Colombo, T.; Compostella, G.; Conde Muiño, P.; Coniavitis, E.; Connell, S. H.; Connelly, I. A.; Consorti, V.; Constantinescu, S.; Conta, C.; Conti, G.; Conventi, F.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cormier, K. J. R.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Crawley, S. J.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cribbs, W. A.; Crispin Ortuzar, M.; Cristinziani, M.; Croft, V.; Crosetti, G.; Cuhadar Donszelmann, T.; Cummings, J.; Curatolo, M.; Cúth, J.; Cuthbert, C.; Czirr, H.; Czodrowski, P.; D'Auria, S.; D'Onofrio, M.; Da Cunha Sargedas De Sousa, M. J.; Da Via, C.; Dabrowski, W.; Dado, T.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Dandoy, J. R.; Dang, N. P.; Daniells, A. C.; Dann, N. S.; Danninger, M.; Dano Hoffmann, M.; Dao, V.; Darbo, G.; Darmora, S.; Dassoulas, J.; Dattagupta, A.; Davey, W.; David, C.; Davidek, T.; Davies, M.; Davison, P.; Dawe, E.; Dawson, I.; Daya-Ishmukhametova, R. K.; De, K.; de Asmundis, R.; De Benedetti, A.; De Castro, S.; De Cecco, S.; De Groot, N.; de Jong, P.; De la Torre, H.; De Lorenzi, F.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vivie De Regie, J. B.; Dearnaley, W. J.; Debbe, R.; Debenedetti, C.; Dedovich, D. V.; Deigaard, I.; Del Peso, J.; Del Prete, T.; Delgove, D.; Deliot, F.; Delitzsch, C. M.; Deliyergiyev, M.; Dell'Acqua, A.; Dell'Asta, L.; Dell'Orso, M.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delsart, P. A.; Deluca, C.; DeMarco, D. A.; Demers, S.; Demichev, M.; Demilly, A.; Denisov, S. P.; Denysiuk, D.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Dette, K.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; Di Ciaccio, A.; Di Ciaccio, L.; Di Clemente, W. K.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Micco, B.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Diaconu, C.; Diamond, M.; Dias, F. A.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Diglio, S.; Dimitrievska, A.; Dingfelder, J.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; Djuvsland, J. I.; do Vale, M. A. B.; Dobos, D.; Dobre, M.; Doglioni, C.; Dohmae, T.; Dolejsi, J.; Dolezal, Z.; Dolgoshein, B. A.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Drechsler, E.; Dris, M.; Du, Y.; Duarte-Campderros, J.; Duchovni, E.; Duckeck, G.; Ducu, O. A.; Duda, D.; Dudarev, A.; Duflot, L.; Duguid, L.; Dührssen, M.; Dumancic, M.; Dunford, M.; Duran Yildiz, H.; Düren, M.; Durglishvili, A.; Duschinger, D.; Dutta, B.; Dyndal, M.; Eckardt, C.; Ecker, K. M.; Edgar, R. C.; Edwards, N. C.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; El Kacimi, M.; Ellajosyula, V.; Ellert, M.; Elles, S.; Ellinghaus, F.; Elliot, A. A.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. C.; Endo, M.; Ennis, J. S.; Erdmann, J.; Ereditato, A.; Ernis, G.; Ernst, J.; Ernst, M.; Errede, S.; Ertel, E.; Escalier, M.; Esch, H.; Escobar, C.; Esposito, B.; Etienvre, A. I.; Etzion, E.; Evans, H.; Ezhilov, A.; Fabbri, F.; Fabbri, L.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Falla, R. 
J.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farina, C.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Faucci Giannelli, M.; Favareto, A.; Fawcett, W. J.; Fayard, L.; Fedin, O. L.; Fedorko, W.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. J.; Feng, H.; Fenyuk, A. B.; Feremenga, L.; Fernandez Martinez, P.; Fernandez Perez, S.; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferreira de Lima, D. E.; Ferrer, A.; Ferrere, D.; Ferretti, C.; Ferretto Parodi, A.; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, A.; Fischer, C.; Fischer, J.; Fisher, W. C.; Flaschel, N.; Fleck, I.; Fleischmann, P.; Fletcher, G. T.; Fletcher, R. R. M.; Flick, T.; Floderus, A.; Flores Castillo, L. R.; Flowerdew, M. J.; Forcolin, G. T.; Formica, A.; Forti, A.; Foster, A. G.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Francis, D.; Franconi, L.; Franklin, M.; Frate, M.; Fraternali, M.; Freeborn, D.; Fressard-Batraneanu, S. M.; Friedrich, F.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fullana Torregrosa, E.; Fusayasu, T.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gach, G. P.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, L. G.; Gagnon, P.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Gao, J.; Gao, Y.; Gao, Y. S.; Garay Walls, F. M.; García, C.; García Navarro, J. E.; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Gascon Bravo, A.; Gatti, C.; Gaudiello, A.; Gaudio, G.; Gaur, B.; Gauthier, L.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Gecse, Z.; Gee, C. N. P.; Geich-Gimbel, Ch.; Geisler, M. P.; Gemme, C.; Genest, M. H.; Geng, C.; Gentile, S.; George, S.; Gerbaudo, D.; Gershon, A.; Ghasemi, S.; Ghazlane, H.; Ghneimat, M.; Giacobbe, B.; Giagu, S.; Giannetti, P.; Gibbard, B.; Gibson, S. M.; Gignac, M.; Gilchriese, M.; Gillam, T. P. S.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giokaris, N.; Giordani, M. P.; Giorgi, F. M.; Giorgi, F. M.; Giraud, P. F.; Giromini, P.; Giugni, D.; Giuli, F.; Giuliani, C.; Giulini, M.; Gjelsten, B. K.; Gkaitatzis, S.; Gkialas, I.; Gkougkousis, E. L.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Goblirsch-Kolb, M.; Godlewski, J.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gonçalo, R.; Goncalves Pinto Firmino Da Costa, J.; Gonella, L.; Gongadze, A.; González de la Hoz, S.; Gonzalez Parra, G.; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Goudet, C. R.; Goujdami, D.; Goussiou, A. G.; Govender, N.; Gozani, E.; Graber, L.; Grabowska-Bold, I.; Gradin, P. O. J.; Grafström, P.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Gratchev, V.; Gray, H. M.; Graziani, E.; Greenwood, Z. D.; Grefe, C.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Grevtsov, K.; Griffiths, J.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grivaz, J.-F.; Groh, S.; Grohs, J. P.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Grout, Z. J.; Guan, L.; Guan, W.; Guenther, J.; Guescini, F.; Guest, D.; Gueta, O.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Guo, J.; Guo, Y.; Gupta, S.; Gustavino, G.; Gutierrez, P.; Gutierrez Ortiz, N. G.; Gutschow, C.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. 
K.; Haddad, N.; Hadef, A.; Haefner, P.; Hageböck, S.; Hajduk, Z.; Hakobyan, H.; Haleem, M.; Haley, J.; Halladjian, G.; Hallewell, G. D.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamilton, A.; Hamity, G. N.; Hamnett, P. G.; Han, L.; Hanagaki, K.; Hanawa, K.; Hance, M.; Haney, B.; Hanke, P.; Hanna, R.; Hansen, J. B.; Hansen, J. D.; Hansen, M. C.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Hariri, F.; Harkusha, S.; Harrington, R. D.; Harrison, P. F.; Hartjes, F.; Hasegawa, M.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauser, R.; Hauswald, L.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hawkins, A. D.; Hayden, D.; Hays, C. P.; Hays, J. M.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, J. J.; Heinrich, L.; Heinz, C.; Hejbal, J.; Helary, L.; Hellman, S.; Helsens, C.; Henderson, J.; Henderson, R. C. W.; Heng, Y.; Henkelmann, S.; Henriques Correia, A. M.; Henrot-Versille, S.; Herbert, G. H.; Hernández Jiménez, Y.; Herten, G.; Hertenberger, R.; Hervas, L.; Hesketh, G. G.; Hessey, N. P.; Hetherly, J. W.; Hickling, R.; Higón-Rodriguez, E.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hinman, R. R.; Hirose, M.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoenig, F.; Hohlfeld, M.; Hohn, D.; Holmes, T. R.; Homann, M.; Hong, T. M.; Hooberman, B. H.; Hopkins, W. H.; Horii, Y.; Horton, A. J.; Hostachy, J.-Y.; Hou, S.; Hoummada, A.; Howarth, J.; Hrabovsky, M.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hrynevich, A.; Hsu, C.; Hsu, P. J.; Hsu, S.-C.; Hu, D.; Hu, Q.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Hülsing, T. A.; Huo, P.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Ideal, E.; Idrissi, Z.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilic, N.; Ince, T.; Introzzi, G.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Ishino, M.; Ishitsuka, M.; Ishmukhametov, R.; Issever, C.; Istin, S.; Ito, F.; Iturbe Ponce, J. M.; Iuppa, R.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jabbar, S.; Jackson, B.; Jackson, M.; Jackson, P.; Jain, V.; Jakobi, K. B.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jamin, D. O.; Jana, D. K.; Jansen, E.; Jansky, R.; Janssen, J.; Janus, M.; Jarlskog, G.; Javadov, N.; Javůrek, T.; Jeanneau, F.; Jeanty, L.; Jejelava, J.; Jeng, G.-Y.; Jennens, D.; Jenni, P.; Jentzsch, J.; Jeske, C.; Jézéquel, S.; Ji, H.; Jia, J.; Jiang, H.; Jiang, Y.; Jiggins, S.; Jimenez Pena, J.; Jin, S.; Jinaru, A.; Jinnouchi, O.; Johansson, P.; Johns, K. A.; Johnson, W. J.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, S.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Jovicevic, J.; Ju, X.; Juste Rozas, A.; Köhler, M. K.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kahn, S. J.; Kajomovitz, E.; Kalderon, C. W.; Kaluza, A.; Kama, S.; Kamenshchikov, A.; Kanaya, N.; Kaneti, S.; Kanjir, L.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kaplan, L. S.; Kapliy, A.; Kar, D.; Karakostas, K.; Karamaoun, A.; Karastathis, N.; Kareem, M. J.; Karentzos, E.; Karnevskiy, M.; Karpov, S. N.; Karpova, Z. M.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kasahara, K.; Kashif, L.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Kato, C.; Katre, A.; Katzy, J.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kazama, S.; Kazanin, V. F.; Keeler, R.; Kehoe, R.; Keller, J. S.; Kempster, J. 
J.; Kentaro, K.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Keyes, R. A.; Khalil-zada, F.; Khanov, A.; Kharlamov, A. G.; Khoo, T. J.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kido, S.; Kim, H. Y.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kind, O. M.; King, B. T.; King, M.; King, S. B.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kiss, F.; Kiuchi, K.; Kivernyk, O.; Kladiva, E.; Klein, M. H.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klinger, J. A.; Klioutchnikova, T.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Knapik, J.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koffas, T.; Koffeman, E.; Koi, T.; Kolanoski, H.; Kolb, M.; Koletsou, I.; Komar, A. A.; Komori, Y.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Köpke, L.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Kortner, O.; Kortner, S.; Kosek, T.; Kostyukhin, V. V.; Kotwal, A.; Kourkoumeli-Charalampidi, A.; Kourkoumelis, C.; Kouskoura, V.; Kowalewska, A. B.; Kowalewski, R.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Kraus, J. K.; Kravchenko, A.; Kretz, M.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Krizka, K.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Krumnack, N.; Kruse, A.; Kruse, M. C.; Kruskal, M.; Kubota, T.; Kucuk, H.; Kuday, S.; Kuechler, J. T.; Kuehn, S.; Kugel, A.; Kuger, F.; Kuhl, A.; Kuhl, T.; Kukhtin, V.; Kukla, R.; Kulchitsky, Y.; Kuleshov, S.; Kuna, M.; Kunigo, T.; Kupco, A.; Kurashige, H.; Kurochkin, Y. A.; Kus, V.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; Kwan, T.; Kyriazopoulos, D.; La Rosa, A.; La Rosa Navarro, J. L.; La Rotonda, L.; Lacasta, C.; Lacava, F.; Lacey, J.; Lacker, H.; Lacour, D.; Lacuesta, V. R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lammers, S.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lang, V. S.; Lange, J. C.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Lasagni Manghi, F.; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Law, A. T.; Laycock, P.; Lazovich, T.; Lazzaroni, M.; Le Dortz, O.; Le Guirriec, E.; Le Menedeu, E.; Le Quilleuc, E. P.; LeBlanc, M.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, S. C.; Lee, L.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehan, A.; Lehmann Miotto, G.; Lei, X.; Leight, W. A.; Leisos, A.; Leister, A. G.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzi, B.; Leone, R.; Leone, S.; Leonidopoulos, C.; Leontsinis, S.; Lerner, G.; Leroy, C.; Lesage, A. A. J.; Lester, C. G.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. J.; Levy, M.; Leyko, A. M.; Leyton, M.; Li, B.; Li, H.; Li, H. L.; Li, L.; Li, L.; Li, Q.; Li, S.; Li, X.; Li, Y.; Liang, Z.; Liberti, B.; Liblong, A.; Lichard, P.; Lie, K.; Liebal, J.; Liebig, W.; Limosani, A.; Lin, S. C.; Lin, T. H.; Lindquist, B. E.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lissauer, D.; Lister, A.; Litke, A. M.; Liu, B.; Liu, D.; Liu, H.; Liu, H.; Liu, J.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, M.; Liu, Y. L.; Liu, Y.; Livan, M.; Lleres, A.; Llorente Merino, J.; Lloyd, S. L.; Lo Sterzo, F.; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Loebinger, F. K.; Loevschall-Jensen, A. 
E.; Loew, K. M.; Loginov, A.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Long, B. A.; Long, J. D.; Long, R. E.; Longo, L.; Looper, K. A.; Lopes, L.; Lopez Mateos, D.; Lopez Paredes, B.; Lopez Paz, I.; Lopez Solis, A.; Lorenz, J.; Lorenzo Martinez, N.; Losada, M.; Lösel, P. J.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lu, H.; Lu, N.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Luedtke, C.; Luehring, F.; Lukas, W.; Luminari, L.; Lundberg, O.; Lund-Jensen, B.; Lynn, D.; Lysak, R.; Lytken, E.; Lyubushkin, V.; Ma, H.; Ma, L. L.; Ma, Y.; Maccarrone, G.; Macchiolo, A.; Macdonald, C. M.; Maček, B.; Miguens, J. Machado; Madaffari, D.; Madar, R.; Maddocks, H. J.; Mader, W. F.; Madsen, A.; Maeda, J.; Maeland, S.; Maeno, T.; Maevskiy, A.; Magradze, E.; Mahlstedt, J.; Maiani, C.; Maidantchik, C.; Maier, A. A.; Maier, T.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyukov, S.; Mamuzic, J.; Mancini, G.; Mandelli, B.; Mandelli, L.; Mandić, I.; Maneira, J.; Manhaes de Andrade Filho, L.; Manjarres Ramos, J.; Mann, A.; Mansoulie, B.; Mantifel, R.; Mantoani, M.; Manzoni, S.; Mapelli, L.; Marceca, G.; March, L.; Marchiori, G.; Marcisovsky, M.; Marjanovic, M.; Marley, D. E.; Marroquim, F.; Marsden, S. P.; Marshall, Z.; Marti-Garcia, S.; Martin, B.; Martin, T. A.; Martin, V. J.; Martin dit Latour, B.; Martinez, M.; Martin-Haugh, S.; Martoiu, V. S.; Martyniuk, A. C.; Marx, M.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massa, I.; Massa, L.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Mattmann, J.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Mazza, S. M.; Mc Fadden, N. C.; Mc Goldrick, G.; Mc Kee, S. P.; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McClymont, L. I.; McFarlane, K. W.; Mcfayden, J. A.; Mchedlidze, G.; McMahon, S. J.; McPherson, R. A.; Medinnis, M.; Meehan, S.; Mehlhase, S.; Mehta, A.; Meier, K.; Meineck, C.; Meirose, B.; Mellado Garcia, B. R.; Melo, M.; Meloni, F.; Mengarelli, A.; Menke, S.; Meoni, E.; Mergelmeyer, S.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Meyer Zu Theenhausen, H.; Middleton, R. P.; Miglioranzi, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Milesi, M.; Milic, A.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Minaenko, A. A.; Minami, Y.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mistry, K. P.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Miucci, A.; Miyagawa, P. S.; Mjörnmark, J. U.; Moa, T.; Mochizuki, K.; Mohapatra, S.; Mohr, W.; Molander, S.; Moles-Valls, R.; Monden, R.; Mondragon, M. C.; Mönig, K.; Monk, J.; Monnier, E.; Montalbano, A.; Montejo Berlingen, J.; Monticelli, F.; Monzani, S.; Moore, R. W.; Morange, N.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Mori, D.; Mori, T.; Morii, M.; Morinaga, M.; Morisbak, V.; Moritz, S.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Mortensen, S. S.; Morvaj, L.; Mosidze, M.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Muanza, S.; Mudd, R. D.; Mueller, F.; Mueller, J.; Mueller, R. S. P.; Mueller, T.; Muenstermann, D.; Mullen, P.; Mullier, G. A.; Munoz Sanchez, F. J.; Murillo Quijada, J. A.; Murray, W. J.; Musheghyan, H.; Muškinja, M.; Myagkov, A. G.; Myska, M.; Nachman, B. 
P.; Nackenhorst, O.; Nadal, J.; Nagai, K.; Nagai, R.; Nagano, K.; Nagasaka, Y.; Nagata, K.; Nagel, M.; Nagy, E.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Namasivayam, H.; Naranjo Garcia, R. F.; Narayan, R.; Narrias Villar, D. I.; Naryshkin, I.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Nef, P. D.; Negri, A.; Negrini, M.; Nektarijevic, S.; Nellist, C.; Nelson, A.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Neves, R. M.; Nevski, P.; Newman, P. R.; Nguyen, D. H.; Nguyen Manh, T.; Nickerson, R. B.; Nicolaidou, R.; Nielsen, J.; Nikiforov, A.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, J. K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nisius, R.; Nobe, T.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Nooney, T.; Norberg, S.; Nordberg, M.; Norjoharuddeen, N.; Novgorodova, O.; Nowak, S.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nurse, E.; Nuti, F.; O'grady, F.; O'Neil, D. C.; O'Rourke, A. A.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, I.; Ochoa-Ricoux, J. P.; Oda, S.; Odaka, S.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Oide, H.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Oleiro Seabra, L. F.; Olivares Pino, S. A.; Oliveira Damazio, D.; Olszewski, A.; Olszowska, J.; Onofre, A.; Onogi, K.; Onyisi, P. U. E.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Orr, R. S.; Osculati, B.; Ospanov, R.; Otero y Garzon, G.; Otono, H.; Ouchrif, M.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Owen, M.; Owen, R. E.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pacheco Pages, A.; Padilla Aranda, C.; Pagáčová, M.; Pagan Griso, S.; Paige, F.; Pais, P.; Pajchel, K.; Palacino, G.; Palestini, S.; Palka, M.; Pallin, D.; Palma, A.; Panagiotopoulou, E. St.; Pandini, C. E.; Panduro Vazquez, J. G.; Pani, P.; Panitkin, S.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Paredes Hernandez, D.; Parker, A. J.; Parker, M. A.; Parker, K. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pascuzzi, V. R.; Pasqualucci, E.; Passaggio, S.; Pastore, F.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Pater, J. R.; Pauly, T.; Pearce, J.; Pearson, B.; Pedersen, L. E.; Pedersen, M.; Pedraza Lopez, S.; Pedro, R.; Peleganchuk, S. V.; Pelikan, D.; Penc, O.; Peng, C.; Peng, H.; Penwell, J.; Peralva, B. S.; Perego, M. M.; Perepelitsa, D. V.; Perez Codina, E.; Perini, L.; Pernegger, H.; Perrella, S.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petroff, P.; Petrolo, E.; Petrov, M.; Petrucci, F.; Pettersson, N. E.; Peyaud, A.; Pezoa, R.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Piccinini, M.; Pickering, M. A.; Piegaia, R.; Pilcher, J. E.; Pilkington, A. D.; Pin, A. W. J.; Pinamonti, M.; Pinfold, J. L.; Pingel, A.; Pires, S.; Pirumov, H.; Pitt, M.; Plazak, L.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Plucinski, P.; Pluth, D.; Poettgen, R.; Poggioli, L.; Pohl, D.; Polesello, G.; Poley, A.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Popovic, D. S.; Poppleton, A.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potter, C. T.; Poulard, G.; Poveda, J.; Pozdnyakov, V.; Pozo Astigarraga, M. E.; Pralavorio, P.; Pranko, A.; Prell, S.; Price, D.; Price, L. 
E.; Primavera, M.; Prince, S.; Proissl, M.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Puddu, D.; Puldon, D.; Purohit, M.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Quayle, W. B.; Queitsch-Maitland, M.; Quilty, D.; Raddum, S.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Raine, J. A.; Rajagopalan, S.; Rammensee, M.; Rangel-Smith, C.; Ratti, M. G.; Rauscher, F.; Rave, S.; Ravenscroft, T.; Raymond, M.; Read, A. L.; Readioff, N. P.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Rehnisch, L.; Reichert, J.; Reisin, H.; Rembser, C.; Ren, H.; Rescigno, M.; Resconi, S.; Rezanova, O. L.; Reznicek, P.; Rezvani, R.; Richter, R.; Richter, S.; Richter-Was, E.; Ricken, O.; Ridel, M.; Rieck, P.; Riegel, C. J.; Rieger, J.; Rifki, O.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Ristić, B.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Rizzi, C.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Roda, C.; Rodina, Y.; Rodriguez Perez, A.; Rodriguez Rodriguez, D.; Roe, S.; Rogan, C. S.; Røhne, O.; Romaniouk, A.; Romano, M.; Romano Saez, S. M.; Romero Adam, E.; Rompotis, N.; Ronzani, M.; Roos, L.; Ros, E.; Rosati, S.; Rosbach, K.; Rose, P.; Rosenthal, O.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rosten, J. H. N.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rud, V. I.; Rudolph, M. S.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Ruschke, A.; Russell, H. L.; Rutherfoord, J. P.; Ruthmann, N.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryu, S.; Ryzhov, A.; Rzehorz, G. F.; Saavedra, A. F.; Sabato, G.; Sacerdoti, S.; Sadrozinski, H. F.-W.; Sadykov, R.; Safai Tehrani, F.; Saha, P.; Sahinsoy, M.; Saimpert, M.; Saito, T.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salamon, A.; Salazar Loyola, J. E.; Salek, D.; Sales De Bruin, P. H.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sammel, D.; Sampsonidis, D.; Sanchez, A.; Sánchez, J.; Sanchez Martinez, V.; Sandaker, H.; Sandbach, R. L.; Sander, H. G.; Sandhoff, M.; Sandoval, C.; Sandstroem, R.; Sankey, D. P. C.; Sannino, M.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Santoyo Castillo, I.; Sapp, K.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sasaki, O.; Sasaki, Y.; Sato, K.; Sauvage, G.; Sauvan, E.; Savage, G.; Savard, P.; Sawyer, C.; Sawyer, L.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Scarfone, V.; Schaarschmidt, J.; Schacht, P.; Schaefer, D.; Schaefer, R.; Schaeffer, J.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Schiavi, C.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schmitz, S.; Schneider, B.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schopf, E.; Schorlemmer, A. L. S.; Schott, M.; Schovancova, J.; Schramm, S.; Schreyer, M.; Schuh, N.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwarz, T. A.; Schwegler, Ph.; Schweiger, H.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Schwindt, T.; Sciolla, G.; Scuri, F.; Scutti, F.; Searcy, J.; Seema, P.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Sekhon, K.; Sekula, S. J.; Seliverstov, D. 
M.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Serkin, L.; Sessa, M.; Seuster, R.; Severini, H.; Sfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shaikh, N. W.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shaw, S. M.; Shcherbakova, A.; Shehu, C. Y.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shiyakova, M.; Shmeleva, A.; Shoaleh Saadi, D.; Shochet, M. J.; Shojaii, S.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Sicho, P.; Sidebo, P. E.; Sidiropoulou, O.; Sidorov, D.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silverstein, S. B.; Simak, V.; Simard, O.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simon, D.; Simon, M.; Sinervo, P.; Sinev, N. B.; Sioli, M.; Siragusa, G.; Sivoklokov, S. Yu.; Sjölin, J.; Sjursen, T. B.; Skinner, M. B.; Skottowe, H. P.; Skubic, P.; Slater, M.; Slavicek, T.; Slawinska, M.; Sliwa, K.; Slovak, R.; Smakhtin, V.; Smart, B. H.; Smestad, L.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, M. N. K.; Smith, R. W.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Soh, D. A.; Sokhrannyi, G.; Solans Sanchez, C. A.; Solar, M.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Son, H.; Song, H. Y.; Sood, A.; Sopczak, A.; Sopko, V.; Sorin, V.; Sosa, D.; Sotiropoulou, C. L.; Soualah, R.; Soukharev, A. M.; South, D.; Sowden, B. C.; Spagnolo, S.; Spalla, M.; Spangenberg, M.; Spanò, F.; Sperlich, D.; Spettel, F.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; Denis, R. D. St.; Stabile, A.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanescu-Bellu, M.; Stanitzki, M. M.; Stapnes, S.; Starchenko, E. A.; Stark, G. H.; Stark, J.; Staroba, P.; Starovoitov, P.; Stärz, S.; Staszewski, R.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stewart, G. A.; Stillings, J. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strubig, A.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Subramaniam, R.; Suchek, S.; Sugaya, Y.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Susinno, G.; Sutton, M. R.; Suzuki, S.; Svatos, M.; Swiatlowski, M.; Sykora, I.; Sykora, T.; Ta, D.; Taccini, C.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takai, H.; Takashima, R.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tam, J. Y. C.; Tan, K. G.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tannenwald, B. B.; Tannoury, N.; Tapia Araya, S.; Tapprogge, S.; Tarem, S.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Tavares Delgado, A.; Tayalati, Y.; Taylor, A. C.; Taylor, G. N.; Taylor, P. T. E.; Taylor, W.; Teischinger, F. A.; Teixeira-Dias, P.; Temming, K. K.; Temple, D.; Ten Kate, H.; Teng, P. K.; Teoh, J. J.; Tepel, F.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Theveneaux-Pelzer, T.; Thomas, J. P.; Thomas-Wilsker, J.; Thompson, E. N.; Thompson, P. D.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Thomson, M.; Tibbetts, M. J.; Ticse Torres, R. E.; Tikhomirov, V. O.; Tikhonov, Yu. 
A.; Timoshenko, S.; Tipton, P.; Tisserant, S.; Todome, K.; Todorov, T.; Todorova-Nova, S.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tolley, E.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Tong, B.; Torrence, E.; Torres, H.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Trefzger, T.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Trofymov, A.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; Truong, L.; Trzebinski, M.; Trzupek, A.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsui, K. M.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tudorache, A.; Tudorache, V.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turecek, D.; Turgeman, D.; Turra, R.; Turvey, A. J.; Tuts, P. M.; Tyndel, M.; Ucchielli, G.; Ueda, I.; Ueno, R.; Ughetto, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Unverdorben, C.; Urban, J.; Urquijo, P.; Urrejola, P.; Usai, G.; Usanova, A.; Vacavant, L.; Vacek, V.; Vachon, B.; Valderanis, C.; Valdes Santurio, E.; Valencic, N.; Valentinetti, S.; Valero, A.; Valery, L.; Valkar, S.; Vallecorsa, S.; Valls Ferrer, J. A.; Van Den Wollenberg, W.; Van Der Deijl, P. C.; van der Geer, R.; van der Graaf, H.; van Eldik, N.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vanguri, R.; Vaniachine, A.; Vankov, P.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vasquez, J. G.; Vazeille, F.; Vazquez Schroeder, T.; Veatch, J.; Veloce, L. M.; Veloso, F.; Veneziano, S.; Ventura, A.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. C.; Viazlo, O.; Vichou, I.; Vickey, T.; Vickey Boeriu, O. E.; Viehhauser, G. H. A.; Viel, S.; Vigani, L.; Vigne, R.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Vittori, C.; Vivarelli, I.; Vlachos, S.; Vlasak, M.; Vogel, M.; Vokac, P.; Volpi, G.; Volpi, M.; von der Schmitt, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vuillermet, R.; Vukotic, I.; Vykydal, Z.; Wagner, P.; Wagner, W.; Wahlberg, H.; Wahrmund, S.; Wakabayashi, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wallangen, V.; Wang, C.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, K.; Wang, R.; Wang, S. M.; Wang, T.; Wang, T.; Wang, X.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Washbrook, A.; Watkins, P. M.; Watson, A. T.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, S.; Weber, M. S.; Weber, S. W.; Webster, J. S.; Weidberg, A. R.; Weinert, B.; Weingarten, J.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Wessels, M.; Wetter, J.; Whalen, K.; Whallon, N. L.; Wharton, A. M.; White, A.; White, M. J.; White, R.; White, S.; Whiteson, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wildauer, A.; Wilk, F.; Wilkens, H. G.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, J. A.; Wingerter-Seez, I.; Winklmeier, F.; Winston, O. J.; Winter, B. T.; Wittgen, M.; Wittkowski, J.; Wollstadt, S. J.; Wolter, M. W.; Wolters, H.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wozniak, K. W.; Wu, M.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wyatt, T. R.; Wynne, B. 
M.; Xella, S.; Xu, D.; Xu, L.; Yabsley, B.; Yacoob, S.; Yakabe, R.; Yamaguchi, D.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, S.; Yamanaka, T.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, Y.; Yang, Z.; Yao, W.-M.; Yap, Y. C.; Yasu, Y.; Yatsenko, E.; Yau Wong, K. H.; Ye, J.; Ye, S.; Yeletskikh, I.; Yen, A. L.; Yildirim, E.; Yorita, K.; Yoshida, R.; Yoshihara, K.; Young, C.; Young, C. J. S.; Youssef, S.; Yu, D. R.; Yu, J.; Yu, J. M.; Yu, J.; Yuan, L.; Yuen, S. P. Y.; Yusuff, I.; Zabinski, B.; Zaidan, R.; Zaitsev, A. M.; Zakharchuk, N.; Zalieckas, J.; Zaman, A.; Zambito, S.; Zanello, L.; Zanzi, D.; Zeitnitz, C.; Zeman, M.; Zemla, A.; Zeng, J. C.; Zeng, Q.; Zengel, K.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zhang, D.; Zhang, F.; Zhang, G.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, R.; Zhang, R.; Zhang, X.; Zhang, Z.; Zhao, X.; Zhao, Y.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, C.; Zhou, L.; Zhou, L.; Zhou, M.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, S.; Zinonos, Z.; Zinser, M.; Ziolkowski, M.; Živković, L.; Zobernig, G.; Zoccoli, A.; zur Nedden, M.; Zurzolo, G.; Zwalinski, L.

    2016-05-01

    A search for Higgs boson production in association with a pair of top quarks (tt̄H) is performed, where the Higgs boson decays to bb̄, and both top quarks decay hadronically. The data used correspond to an integrated luminosity of 20.3 fb⁻¹ of pp collisions at √s = 8 TeV collected with the ATLAS detector at the Large Hadron Collider. The search selects events with at least six energetic jets and uses a boosted decision tree algorithm to discriminate between signal and Standard Model background. The dominant multijet background is estimated using a dedicated data-driven technique. For a Higgs boson mass of 125 GeV, an upper limit of 6.4 (5.4) times the Standard Model cross section is observed (expected) at 95% confidence level. The best-fit value for the signal strength is μ = 1.6 ± 2.6 times the Standard Model expectation for mH = 125 GeV. Combining all tt̄H searches carried out by ATLAS at √s = 8 and 7 TeV, an observed (expected) upper limit of 3.1 (1.4) times the Standard Model expectation is obtained at 95% confidence level, with a signal strength μ = 1.7 ± 0.8.
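
    As a generic illustration of using a boosted decision tree to separate signal from background (this is not the ATLAS analysis itself; the dataset, features and hyperparameters below are purely synthetic assumptions):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      # toy stand-in for kinematic input variables (jet masses, angular separations, ...)
      X, y = make_classification(n_samples=20000, n_features=10, n_informative=6,
                                 weights=[0.95, 0.05], random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
      bdt.fit(X_tr, y_tr)

      scores = bdt.predict_proba(X_te)[:, 1]        # BDT output used as the discriminant
      print("ROC AUC:", roc_auc_score(y_te, scores))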

  4. Competing risks models and time-dependent covariates

    PubMed Central

    Barnett, Adrian; Graves, Nick

    2008-01-01

    New statistical models for analysing survival data in an intensive care unit context have recently been developed. Two models that offer significant advantages over standard survival analyses are competing risks models and multistate models. Wolkewitz and colleagues used a competing risks model to examine survival times for nosocomial pneumonia and mortality. Their model was able to incorporate time-dependent covariates and so examine how risk factors that changed with time affected the chances of infection or death. We briefly explain how an alternative modelling technique (using logistic regression) can more fully exploit time-dependent covariates for this type of data. PMID:18423067
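
    A minimal sketch of the logistic-regression alternative mentioned above is a discrete-time hazard model fitted to person-period data, in which covariates are free to change over time. The variable names and simulated ICU data below are hypothetical, for illustration only.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      # hypothetical person-day data: one row per patient per ICU day
      rng = np.random.default_rng(1)
      rows = []
      for pid in range(200):
          ventilated = 0
          for day in range(1, 15):
              ventilated = max(ventilated, int(rng.random() < 0.1))   # time-dependent covariate
              p_event = 0.02 + 0.05 * ventilated
              event = int(rng.random() < p_event)                     # e.g. nosocomial pneumonia
              rows.append({"id": pid, "day": day, "ventilated": ventilated, "event": event})
              if event:
                  break
      df = pd.DataFrame(rows)

      # discrete-time hazard model: logistic regression on the person-period data,
      # so covariates are allowed to change from day to day
      X = sm.add_constant(df[["day", "ventilated"]])
      fit = sm.Logit(df["event"], X).fit(disp=0)
      print(fit.summary())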

  5. A Protocol for Using Gene Set Enrichment Analysis to Identify the Appropriate Animal Model for Translational Research.

    PubMed

    Weidner, Christopher; Steinfath, Matthias; Wistorf, Elisa; Oelgeschläger, Michael; Schneider, Marlon R; Schönfelder, Gilbert

    2017-08-16

    Recent studies that compared transcriptomic datasets of human diseases with datasets from mouse models using traditional gene-to-gene comparison techniques resulted in contradictory conclusions regarding the relevance of animal models for translational research. A major reason for the discrepancies between different gene expression analyses is the arbitrary filtering of differentially expressed genes. Furthermore, the comparison of single genes between different species and platforms is often limited by technical variance, leading to misinterpretation of the con/discordance between data from human and animal models. Thus, standardized approaches for systematic data analysis are needed. To overcome subjective gene filtering and ineffective gene-to-gene comparisons, we recently demonstrated that gene set enrichment analysis (GSEA) has the potential to avoid these problems. Therefore, we developed a standardized protocol for the use of GSEA to distinguish between appropriate and inappropriate animal models for translational research. This protocol is not suitable for predicting a priori how to design new model systems, as it requires existing experimental omics data. However, the protocol describes how to interpret existing data in a standardized manner in order to select the most suitable animal model, thus avoiding unnecessary animal experiments and misleading translational studies.
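
    At the core of GSEA is a weighted Kolmogorov-Smirnov running-sum statistic. The sketch below computes that enrichment score for one gene set on a pre-ranked list; permutation testing and normalization, which the full protocol requires, are omitted, and the toy gene names and set are assumptions for illustration.

      import numpy as np

      def enrichment_score(ranked_genes, scores, gene_set, p=1.0):
          """Weighted Kolmogorov-Smirnov running-sum statistic used in GSEA."""
          in_set = np.isin(ranked_genes, list(gene_set))
          weights = np.abs(scores) ** p
          hit = np.where(in_set, weights, 0.0)
          hit = hit / hit.sum()                       # step up at set members
          miss = np.where(~in_set, 1.0, 0.0)
          miss = miss / miss.sum()                    # step down elsewhere
          running = np.cumsum(hit - miss)
          return running[np.argmax(np.abs(running))]  # signed maximum deviation

      # toy example: genes ranked by a differential-expression statistic
      genes = np.array([f"g{i}" for i in range(100)])
      stats = np.linspace(3, -3, 100)                 # already sorted, high to low
      my_set = {"g1", "g2", "g3", "g5", "g8"}         # concentrated near the top of the ranking
      print(enrichment_score(genes, stats, my_set))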

  6. A Search for the Standard Model Higgs Boson Produced in Association with a $W$ Boson

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, Martin Johannes

    2011-05-01

    We present a search for a standard model Higgs boson produced in association with a W boson using data collected with the CDF II detector from pp̄ collisions at √s = 1.96 TeV. The search is performed in the WH → ℓνbb̄ channel. The two quarks usually fragment into two jets, but sometimes a third jet can be produced via gluon radiation, so we have increased the standard two-jet sample by including events that contain three jets. We reconstruct the Higgs boson using two or three jets depending on the kinematics of the event. We find an improvement in our search sensitivity using the larger sample together with this multijet reconstruction technique. Our data show no evidence of a Higgs boson, so we set 95% confidence level upper limits on the WH production rate. We set limits between 3.36 and 28.7 times the standard model prediction for Higgs boson masses ranging from 100 to 150 GeV/c².

  7. Development of a decision model for selection of appropriate timely delivery techniques for highway projects : final report, April 2009.

    DOT National Transportation Integrated Search

    2009-04-01

    The primary umbrella method used by the Oregon Department of Transportation (ODOT) to ensure on-time performance in standard construction contracting is liquidated damages. The assessment value is usually a matter of some judgment. In practice,...

  8. maxwell brown | NREL

    Science.gov Websites

Research interests: optimization and modeling techniques; economic impacts of energy sector transformation. Selected publication: Caron, J., S. Cohen, J. Reilly, M. Brown. 2018. Economic and GHG Impacts of a National Low Carbon Fuel Standard. Transportation Research Record.

  9. Mathematical Modeling of Chemical Stoichiometry

    ERIC Educational Resources Information Center

Croteau, Joshua; Fox, William P.; Varazo, Kristof

    2007-01-01

    In beginning chemistry classes, students are taught a variety of techniques for balancing chemical equations. The most common method is inspection. This paper addresses using a system of linear mathematical equations to solve for the stoichiometric coefficients. Many linear algebra books carry the standard balancing of chemical equations as an…
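
    The linear-algebra formulation mentioned above amounts to finding the null space of an element-by-species matrix. A small sketch, using propane combustion as our own worked example (not one taken from the paper):

        # Balance C3H8 + O2 -> CO2 + H2O by solving A @ x = 0 for the coefficient vector x.
        import numpy as np

        # Columns: C3H8, O2, CO2, H2O (product columns carry negative signs).
        # Rows:    C, H, O
        A = np.array([[3, 0, -1,  0],
                      [8, 0,  0, -2],
                      [0, 2, -2, -1]], dtype=float)

        _, _, vt = np.linalg.svd(A)
        x = vt[-1]                                # basis vector of the one-dimensional null space
        if x.sum() < 0:
            x = -x                                # make all coefficients positive
        x = x / np.min(np.abs(x[np.nonzero(x)])) # scale so the smallest coefficient is 1
        print(np.round(x).astype(int))            # -> [1 5 3 4]: C3H8 + 5 O2 -> 3 CO2 + 4 H2O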

  10. Student Effort and Performance over the Semester

    ERIC Educational Resources Information Center

    Krohn, Gregory A.; O'Connor, Catherine M.

    2005-01-01

    The authors extend the standard education production function and student time allocation analysis to focus on the interactions between student effort and performance over the semester. The purged instrumental variable technique is used to obtain consistent estimators of the structural parameters of the model using data from intermediate…

  11. Visiting the Gödel universe.

    PubMed

    Grave, Frank; Buser, Michael

    2008-01-01

    Visualization of general relativity illustrates aspects of Einstein's insights into the curved nature of space and time to the expert as well as the layperson. One of the most interesting models which came up with Einstein's theory was developed by Kurt Gödel in 1949. The Gödel universe is a valid solution of Einstein's field equations, making it a possible physical description of our universe. It offers remarkable features like the existence of an optical horizon beyond which time travel is possible. Although we know that our universe is not a Gödel universe, it is interesting to visualize physical aspects of a world model resulting from a theory which is highly confirmed in scientific history. Standard techniques to adopt an egocentric point of view in a relativistic world model have shortcomings with respect to the time needed to render an image as well as difficulties in applying a direct illumination model. In this paper we want to face both issues to reduce the gap between common visualization standards and relativistic visualization. We will introduce two techniques to speed up recalculation of images by means of preprocessing and lookup tables and to increase image quality through a special optimization applicable to the Gödel universe. The first technique allows the physicist to understand the different effects of general relativity faster and better by generating images from existing datasets interactively. By using the intrinsic symmetries of Gödel's spacetime which are expressed by the Killing vector field, we are able to reduce the necessary calculations to simple cases using the second technique. This even makes it feasible to account for a direct illumination model during the rendering process. Although the presented methods are applied to Gödel's universe, they can also be extended to other manifolds, for example light propagation in moving dielectric media. Therefore, other areas of research can benefit from these generic improvements.

  12. Reconstructing extreme AMOC events through nudging of the ocean surface: a perfect model approach

    NASA Astrophysics Data System (ADS)

    Ortega, Pablo; Guilyardi, Eric; Swingedouw, Didier; Mignot, Juliette; Nguyen, Sébastien

    2017-11-01

    While the Atlantic Meridional Overturning Circulation (AMOC) is thought to be a crucial component of the North Atlantic climate, past changes in its strength are challenging to quantify, and only limited information is available. In this study, we use a perfect model approach with the IPSL-CM5A-LR model to assess the performance of several surface nudging techniques in reconstructing the variability of the AMOC. Special attention is given to the reproducibility of an extreme positive AMOC peak from a preindustrial control simulation. Nudging includes standard relaxation techniques towards the sea surface temperature and salinity anomalies of this target control simulation, and/or the prescription of the wind-stress fields. Surface nudging approaches using standard fixed restoring terms succeed in reproducing most of the target AMOC variability, including the timing of the extreme event, but systematically underestimate its amplitude. A detailed analysis of the AMOC variability mechanisms reveals that the underestimation of the extreme AMOC maximum comes from a deficit in the formation of the dense water masses in the main convection region, located south of Iceland in the model. This issue is largely corrected after introducing a novel surface nudging approach, which uses a varying restoring coefficient that is proportional to the simulated mixed layer depth, which, in essence, keeps the restoring time scale constant. This new technique substantially improves water mass transformation in the regions of convection, and in particular, the formation of the densest waters, which are key for the representation of the AMOC extreme. It is therefore a promising strategy that may help to better constrain the AMOC variability and other ocean features in the models. As this restoring technique only uses surface data, for which better and longer observations are available, it opens up opportunities for improved reconstructions of the AMOC over the last few decades.
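
    A schematic way to write the restoring term discussed above (the symbols here are ours, not the paper's): standard nudging adds a surface heat flux

        Q_{restore} = -\gamma_0 \, (SST - SST_{target}), \qquad \tau = \rho_0 c_p h / \gamma_0,

    so with a fixed coefficient \gamma_0 the effective restoring time scale \tau varies with the mixed layer depth h. Choosing instead a coefficient proportional to the mixed layer depth,

        \gamma(h) = \rho_0 c_p h / \tau_0,

    keeps \tau = \tau_0 constant, which is the idea behind the varying restoring coefficient introduced in the study.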

  13. Reconstructing extreme AMOC events through nudging of the ocean surface: A perfect model approach

    NASA Astrophysics Data System (ADS)

    Ortega, Pablo; Guilyardi, Eric; Swingedouw, Didier; Mignot, Juliette; Nguyen, Sebastien

    2017-04-01

    While the Atlantic Meridional Overturning Circulation (AMOC) is thought to be a crucial component of the North Atlantic climate and its predictability, past changes in its strength are challenging to quantify, and only limited information is available. In this study, we use a perfect model approach with the IPSL-CM5A-LR model to assess the performance of several surface nudging techniques in reconstructing the variability of the AMOC. Special attention is given to the reproducibility of an extreme positive AMOC peak from a preindustrial control simulation. Nudging includes standard relaxation techniques towards the sea surface temperature and salinity anomalies of this target control simulation, and/or the prescription of the wind-stress fields. Surface nudging approaches using standard fixed restoring terms succeed in reproducing most of the target AMOC variability, including the timing of the extreme event, but systematically underestimate its amplitude. A detailed analysis of the AMOC variability mechanisms reveals that the underestimation of the extreme AMOC maximum comes from a deficit in the formation of the dense water masses in the main convection region, located south of Iceland in the model. This issue is largely corrected after introducing a novel surface nudging approach, which uses a varying restoring coefficient that is proportional to the simulated mixed layer depth, which, in essence, keeps the restoring time scale constant. This new technique substantially improves water mass transformation in the regions of convection, and in particular, the formation of the densest waters, which are key for the representation of the AMOC extreme. It is therefore a promising strategy that may help to better initialize the AMOC variability and other ocean features in the models, and thus improve decadal climate predictions. As this restoring technique only uses surface data, for which better and longer observations are available, it opens up opportunities for improved reconstructions of the AMOC over the last few decades.

  14. Application of a parameter-estimation technique to modeling the regional aquifer underlying the eastern Snake River plain, Idaho

    USGS Publications Warehouse

    Garabedian, Stephen P.

    1986-01-01

A nonlinear, least-squares regression technique for the estimation of ground-water flow model parameters was applied to the regional aquifer underlying the eastern Snake River Plain, Idaho. The technique uses a computer program to simulate two-dimensional, steady-state ground-water flow. Hydrologic data for the 1980 water year were used to calculate recharge rates, boundary fluxes, and spring discharges. Ground-water use was estimated from irrigated land maps and crop consumptive-use figures. These estimates of ground-water withdrawal, recharge rates, and boundary flux, along with leakance, were used as known values in the model calibration of transmissivity. Leakance values were adjusted between regression solutions by comparing model-calculated to measured spring discharges. In other simulations, recharge and leakance also were calibrated as prior-information regression parameters, which limits the variation of these parameters using a normalized standard error of estimate. Results from a best-fit model indicate a wide areal range in transmissivity from about 0.05 to 44 feet squared per second and in leakance from about 2.2×10⁻⁹ to 6.0×10⁻⁸ feet per second per foot. Along with parameter values, model statistics also were calculated, including the coefficient of correlation between calculated and observed head (0.996), the standard error of the estimates for head (40 feet), and the parameter coefficients of variation (about 10-40 percent). Additional boundary flux was added in some areas during calibration to achieve proper fit to ground-water flow directions. Model fit improved significantly when areas that violated model assumptions were removed. It also improved slightly when y-direction (northwest-southeast) transmissivity values were larger than x-direction (northeast-southwest) transmissivity values. The model was most sensitive to changes in recharge, and in some areas, to changes in transmissivity, particularly near the spring discharge area from Milner Dam to King Hill.
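
    For readers unfamiliar with regression-based calibration, the sketch below shows the basic mechanics on a textbook one-dimensional confined aquifer with uniform recharge: heads are simulated from a trial transmissivity and a nonlinear least-squares solver adjusts it to fit noisy observations. The aquifer geometry and numbers are invented stand-ins, not the Snake River Plain model.

        # Nonlinear least-squares estimation of transmissivity from observed heads.
        import numpy as np
        from scipy.optimize import least_squares

        L = 10_000.0                   # half-width of the toy aquifer (ft)
        N = 3e-8                       # uniform recharge rate (ft/s), treated as known
        h0 = 3000.0                    # fixed-head boundaries at x = +/- L (ft)
        x_obs = np.linspace(-9000.0, 9000.0, 15)        # observation wells

        def simulated_heads(logT, x):
            T = 10.0 ** logT           # transmissivity (ft^2/s), estimated in log space
            return h0 + N * (L**2 - x**2) / (2.0 * T)

        rng = np.random.default_rng(0)  # synthetic "observations" from T = 0.05 ft^2/s
        h_obs = simulated_heads(np.log10(0.05), x_obs) + rng.normal(0.0, 0.5, x_obs.size)

        def residuals(params):
            return simulated_heads(params[0], x_obs) - h_obs

        fit = least_squares(residuals, x0=[0.0])        # start from T = 1 ft^2/s
        print("estimated transmissivity:", round(10.0 ** fit.x[0], 4), "ft^2/s")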

  15. Robust Hidden Markov Model based intelligent blood vessel detection of fundus images.

    PubMed

    Hassan, Mehdi; Amin, Muhammad; Murtza, Iqbal; Khan, Asifullah; Chaudhry, Asmatullah

    2017-11-01

In this paper, we consider the challenging problem of detecting retinal vessel networks. Precise detection of retinal vessel networks is vital for accurate eye disease diagnosis. Most blood vessel tracking techniques may not properly track vessels in the presence of occlusion. Owing to limitations in sensor resolution or in the acquisition of fundus images, part of a vessel may be occluded. In this scenario, it becomes a challenging task to accurately trace these vital vessels. For this purpose, we have proposed a new, robust, and intelligent retinal vessel detection technique based on a Hidden Markov Model. The proposed model is able to successfully track vessels in the presence of occlusion. The effectiveness of the proposed technique is evaluated on the publicly available standard DRIVE dataset of fundus images. The experiments show that the proposed technique not only outperforms other state-of-the-art retinal blood vessel segmentation methodologies, but is also capable of accurate occlusion handling in retinal vessel networks. The proposed technique offers better average classification accuracy, sensitivity, specificity, and area under the curve (AUC) of 95.7%, 81.0%, 97.0%, and 90.0%, respectively, which shows its usefulness. Copyright © 2017 Elsevier B.V. All rights reserved.
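
    To make the tracking idea concrete, here is a toy Viterbi-decoding sketch: the hidden state is the vessel's row in each image column, emissions come from pixel intensity, and transitions penalise large jumps, so the decoded path bridges an occluded stretch. The synthetic image and all parameters are our own illustration, not the authors' model or the DRIVE data.

        # HMM-style centerline tracking across an occlusion with Viterbi decoding.
        import numpy as np

        rows, cols = 40, 60
        img = np.zeros((rows, cols))
        true_row = (20 + 6 * np.sin(np.linspace(0, 3, cols))).astype(int)
        img[true_row, np.arange(cols)] = 1.0     # bright vessel centerline
        img[:, 25:35] = 0.0                      # occluded columns: signal missing
        img += 0.05 * np.random.default_rng(0).random((rows, cols))

        log_emit = np.log(img + 1e-3)            # emission log-likelihood per (row, column)
        jump = np.arange(rows)[:, None] - np.arange(rows)[None, :]
        log_trans = -0.5 * (jump / 2.0) ** 2     # Gaussian penalty on row-to-row jumps

        score = log_emit[:, 0].copy()            # Viterbi forward pass over columns
        back = np.zeros((rows, cols), dtype=int)
        for c in range(1, cols):
            cand = score[None, :] + log_trans    # cand[new_row, old_row]
            back[:, c] = np.argmax(cand, axis=1)
            score = cand[np.arange(rows), back[:, c]] + log_emit[:, c]

        path = np.empty(cols, dtype=int)         # backtrack the best path
        path[-1] = int(np.argmax(score))
        for c in range(cols - 1, 0, -1):
            path[c - 1] = back[path[c], c]
        print("max tracking error (rows):", int(np.max(np.abs(path - true_row))))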

  16. An efficient Cellular Potts Model algorithm that forbids cell fragmentation

    NASA Astrophysics Data System (ADS)

    Durand, Marc; Guesnet, Etienne

    2016-11-01

The Cellular Potts Model (CPM) is a lattice-based modeling technique which is widely used for simulating cellular patterns such as foams or biological tissues. Despite its realism and generality, the standard Monte Carlo algorithm used in the scientific literature to evolve this model preserves the connectivity of cells only over a limited range of simulation temperatures. We present a new algorithm in which cell fragmentation is forbidden at all simulation temperatures. This significantly enhances the realism of the simulated patterns. It also increases computational efficiency compared with the standard CPM algorithm, even at the same simulation temperature, thanks to the time saved by not attempting unrealistic moves. Moreover, our algorithm restores the detailed balance equation, ensuring that the long-term state is independent of the chosen acceptance rate and the chosen path in temperature space.
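
    For orientation, the sketch below implements the standard Metropolis spin-copy dynamics being discussed, with boundary and area-constraint energies and a simple local connectivity test on the pixel being overwritten. The connectivity test is a common heuristic, not the authors' fragmentation-free algorithm, and all parameters are arbitrary.

        # Toy 2-D Cellular Potts Model sweep with a local connectivity heuristic.
        import numpy as np

        rng = np.random.default_rng(0)
        L, J, lam, T = 30, 1.0, 0.5, 4.0     # lattice size, boundary energy, area stiffness, temperature
        target_area = 100

        grid = np.zeros((L, L), dtype=int)   # two square cells (labels 1, 2) on background 0
        grid[5:15, 5:15] = 1
        grid[15:25, 15:25] = 2
        areas = np.bincount(grid.ravel(), minlength=3)
        NEIGH4 = ((1, 0), (-1, 0), (0, 1), (0, -1))

        def boundary_energy(g, i, j):
            s = g[i, j]                      # energy of the four bonds touching (i, j)
            return J * sum(s != g[(i + di) % L, (j + dj) % L] for di, dj in NEIGH4)

        def keeps_connected(g, i, j):
            # Heuristic: same-label neighbours of (i, j) must form one arc in the
            # 8-neighbourhood, otherwise overwriting (i, j) could split the cell.
            s = g[i, j]
            if s == 0:
                return True
            ring = [g[(i + di) % L, (j + dj) % L] == s
                    for di, dj in ((-1, -1), (-1, 0), (-1, 1), (0, 1),
                                   (1, 1), (1, 0), (1, -1), (0, -1))]
            return sum(ring[k] != ring[k - 1] for k in range(8)) <= 2

        def area_energy(label, delta=0):
            return lam * (areas[label] + delta - target_area) ** 2 if label else 0.0

        for _ in range(100_000):             # Monte Carlo copy attempts
            i, j = rng.integers(0, L, size=2)
            di, dj = NEIGH4[rng.integers(0, 4)]
            new, old = grid[(i + di) % L, (j + dj) % L], grid[i, j]
            if new == old or not keeps_connected(grid, i, j):
                continue
            before = boundary_energy(grid, i, j) + area_energy(old) + area_energy(new)
            grid[i, j] = new
            after = boundary_energy(grid, i, j) + area_energy(old, -1) + area_energy(new, +1)
            dE = after - before
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                areas[old] -= 1              # accept the copy and update cell areas
                areas[new] += 1
            else:
                grid[i, j] = old             # reject: restore the old label

        print("final areas (background, cell 1, cell 2):", areas)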

  17. Soldering to a single atomic layer

    NASA Astrophysics Data System (ADS)

Girit, Çağlar Ö.; Zettl, A.

    2007-11-01

    The standard technique to make electrical contact to nanostructures is electron beam lithography. This method has several drawbacks including complexity, cost, and sample contamination. We present a simple technique to cleanly solder submicron sized, Ohmic contacts to nanostructures. To demonstrate, we contact graphene, a single atomic layer of carbon, and investigate low- and high-bias electronic transport. We set lower bounds on the current carrying capacity of graphene. A simple model allows us to obtain device characteristics such as mobility, minimum conductance, and contact resistance.

  18. Soldering to a single atomic layer

    NASA Astrophysics Data System (ADS)

    Girit, Caglar; Zettl, Alex

    2008-03-01

    The standard technique to make electrical contact to nanostructures is electron beam lithography. This method has several drawbacks including complexity, cost, and sample contamination. We present a simple technique to cleanly solder submicron sized, Ohmic contacts to nanostructures. To demonstrate, we contact graphene, a single atomic layer of carbon, and investigate low- and high-bias electronic transport. We set lower bounds on the current carrying capacity of graphene. A simple model allows us to obtain device characteristics such as mobility, minimum conductance, and contact resistance.

  19. Analyzing Phylogenetic Trees with Timed and Probabilistic Model Checking: The Lactose Persistence Case Study.

    PubMed

    Requeno, José Ignacio; Colom, José Manuel

    2014-12-01

    Model checking is a generic verification technique that allows the phylogeneticist to focus on models and specifications instead of on implementation issues. Phylogenetic trees are considered as transition systems over which we interrogate phylogenetic questions written as formulas of temporal logic. Nonetheless, standard logics become insufficient for certain practices of phylogenetic analysis since they do not allow the inclusion of explicit time and probabilities. The aim of this paper is to extend the application of model checking techniques beyond qualitative phylogenetic properties and adapt the existing logical extensions and tools to the field of phylogeny. The introduction of time and probabilities in phylogenetic specifications is motivated by the study of a real example: the analysis of the ratio of lactose intolerance in some populations and the date of appearance of this phenotype.

  20. Analyzing phylogenetic trees with timed and probabilistic model checking: the lactose persistence case study.

    PubMed

    Requeno, José Ignacio; Colom, José Manuel

    2014-10-23

    Model checking is a generic verification technique that allows the phylogeneticist to focus on models and specifications instead of on implementation issues. Phylogenetic trees are considered as transition systems over which we interrogate phylogenetic questions written as formulas of temporal logic. Nonetheless, standard logics become insufficient for certain practices of phylogenetic analysis since they do not allow the inclusion of explicit time and probabilities. The aim of this paper is to extend the application of model checking techniques beyond qualitative phylogenetic properties and adapt the existing logical extensions and tools to the field of phylogeny. The introduction of time and probabilities in phylogenetic specifications is motivated by the study of a real example: the analysis of the ratio of lactose intolerance in some populations and the date of appearance of this phenotype.

  1. Probabilistic registration of an unbiased statistical shape model to ultrasound images of the spine

    NASA Astrophysics Data System (ADS)

    Rasoulian, Abtin; Rohling, Robert N.; Abolmaesumi, Purang

    2012-02-01

    The placement of an epidural needle is among the most difficult regional anesthetic techniques. Ultrasound has been proposed to improve success of placement. However, it has not become the standard-of-care because of limitations in the depictions and interpretation of the key anatomical features. We propose to augment the ultrasound images with a registered statistical shape model of the spine to aid interpretation. The model is created with a novel deformable group-wise registration method which utilizes a probabilistic approach to register groups of point sets. The method is compared to a volume-based model building technique and it demonstrates better generalization and compactness. We instantiate and register the shape model to a spine surface probability map extracted from the ultrasound images. Validation is performed on human subjects. The achieved registration accuracy (2-4 mm) is sufficient to guide the choice of puncture site and trajectory of an epidural needle.

  2. Rescuing the Clinical Breast Examination: Advances in Classifying Technique and Assessing Physician Competency.

    PubMed

    Laufer, Shlomi; D'Angelo, Anne-Lise D; Kwan, Calvin; Ray, Rebbeca D; Yudkowsky, Rachel; Boulet, John R; McGaghie, William C; Pugh, Carla M

    2017-12-01

Develop new performance evaluation standards for the clinical breast examination (CBE). There are several technical aspects of a proper CBE. Our recent work discovered a significant, linear relationship between palpation force and CBE accuracy. This article investigates the relationship between other technical aspects of the CBE and accuracy. This performance assessment study involved data collection from physicians (n = 553) attending 3 different clinical meetings between 2013 and 2014: American Society of Breast Surgeons, American Academy of Family Physicians, and American College of Obstetricians and Gynecologists. Four previously validated, sensor-enabled breast models were used for clinical skills assessment. Models A and B had solitary, superficial, 2 cm and 1 cm soft masses, respectively. Models C and D had solitary, deep, 2 cm hard and moderately firm masses, respectively. Finger movements (search technique) from 1137 CBE video recordings were independently classified by 2 observers. Final classifications were compared with CBE accuracy. Accuracy rates were model A = 99.6%, model B = 89.7%, model C = 75%, and model D = 60%. Final classification categories for search technique included rubbing movement, vertical movement, piano fingers, and other. Interrater reliability was k = 0.79. Rubbing movement was 4 times more likely to yield an accurate assessment (odds ratio 3.81, P < 0.001) compared with vertical movement and piano fingers. Piano fingers had the highest failure rate (36.5%). Regression analysis of search pattern, search technique, palpation force, examination time, and 6 demographic variables revealed that search technique independently and significantly affected CBE accuracy (P < 0.001). Our results support measurement and classification of CBE techniques and provide the foundation for a new paradigm in teaching and assessing hands-on clinical skills. The newly described piano fingers palpation technique was noted to have unusually high failure rates. Medical educators should be aware of the potential differences in effectiveness for various CBE techniques.

  3. Certification of a hybrid parameter model of the fully flexible Shuttle Remote Manipulator System

    NASA Technical Reports Server (NTRS)

    Barhorst, Alan A.

    1995-01-01

    The development of high fidelity models of mechanical systems with flexible components is in flux. Many working models of these devices assume the elastic motion is small and can be superimposed on the overall rigid body motion. A drawback associated with this type of modeling technique is that it is required to regenerate the linear modal model of the device if the elastic motion is sufficiently far from the base rigid motion. An advantage to this type of modeling is that it uses NASTRAN modal data which is the NASA standard means of modal information exchange. A disadvantage to the linear modeling is that it fails to accurately represent large motion of the system, unless constant modal updates are performed. In this study, which is a continuation of a project started last year, the drawback of the currently used modal snapshot modeling technique is addressed in a rigorous fashion by novel and easily applied means.

  4. Summary of Echoes Across the Pond: Understanding EU-US Defense Industrial Relationships

    DTIC Science & Technology

    2008-04-23

The analysis draws on standard models of corporate strategy: the Five Forces framework (Porter, 1980) and "co-opetition" (Brandenburger & Nalebuff, 1996), together with offsets (Udis & Maskus, 1991), transaction cost economics (Williamson), and governmental politics. Section IV provides narratives. References cited include Brandenburger, A.M., & Nalebuff, B.J. (1996). Co-opetition. New York: Doubleday; and Porter, M.E. (1980). Competitive strategy: Techniques for...

  5. On the numerical treatment of Coulomb forces in scattering problems

    NASA Astrophysics Data System (ADS)

    Randazzo, J. M.; Ancarani, L. U.; Colavecchia, F. D.; Gasaneo, G.; Frapiccini, A. L.

    2012-11-01

    We investigate the limiting procedures to obtain Coulomb interactions from short-range potentials. The application of standard techniques used for the two-body case (exponential and sharp cutoff) to the three-body break-up problem is illustrated numerically by considering the Temkin-Poet (TP) model of e-H processes.

  6. 10 CFR 503.34 - Inability to comply with applicable environmental requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... environmental compliance of the facility, including an analysis of its ability to meet applicable standards and... will be based solely on an analysis of the petitioner's capacity to physically achieve applicable... exemption. All such analysis must be based on accepted analytical techniques, such as air quality modeling...

  7. 10 CFR 503.34 - Inability to comply with applicable environmental requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... environmental compliance of the facility, including an analysis of its ability to meet applicable standards and... will be based solely on an analysis of the petitioner's capacity to physically achieve applicable... exemption. All such analysis must be based on accepted analytical techniques, such as air quality modeling...

  8. Modeling of geoelectric parameters for assessing groundwater potentiality in a multifaceted geologic terrain, Ipinsa Southwest, Nigeria - A GIS-based GODT approach

    NASA Astrophysics Data System (ADS)

    Mogaji, Kehinde Anthony; Omobude, Osayande Bright

    2017-12-01

Modeling of groundwater potentiality zones is a vital scheme for effective management of groundwater resources. This study developed a new multi-criteria decision-making algorithm for groundwater potentiality modeling by modifying the standard GOD model. The developed model, christened the GODT model, was applied to assess groundwater potential in a multifaceted crystalline geologic terrain in southwestern Nigeria, using four groundwater potential conditioning factors derived from the interpreted geophysical data acquired in the area, namely: groundwater hydraulic confinement (G), aquifer overlying strata resistivity (O), depth to water table (D), and thickness of aquifer (T). With the developed model algorithm, the GIS-based G, O, D, and T maps were synthesized to estimate groundwater potential index (GWPI) values for the area. The estimated GWPI values were processed in a GIS environment to produce a groundwater potential prediction index (GPPI) map, which demarcates the area into four potential zones. The produced GODT model-based GPPI map was validated through the application of both a correlation technique and a spatial attribute comparative scheme (SACS). The performance of the GODT model was compared with that of the standard analytic hierarchy process (AHP) model. The correlation technique established regression coefficients of 89% for the GODT modeling algorithm compared with 84% for the AHP model. The SACS validation results for the GODT and AHP models are 72.5% and 65%, respectively. The overall results indicate that both models have good capability for predicting groundwater potential zones, with the GIS-based GODT model a good alternative. The GPPI maps produced in this study can form part of a decision-making model for environmental planning and groundwater management in the area.
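
    The core of such a multi-criteria scheme is a weighted overlay of rated layers followed by classification of the resulting index, as in the sketch below. The ratings, weights and random rasters are invented; this is not the authors' exact GODT algorithm.

        # GIS-style weighted overlay producing a groundwater potential index (GWPI).
        import numpy as np

        rng = np.random.default_rng(0)
        shape = (100, 100)                    # toy raster grid over a study area

        G = rng.integers(1, 5, shape)         # hydraulic confinement rating (1-4)
        O = rng.integers(1, 5, shape)         # overlying strata resistivity rating
        D = rng.integers(1, 5, shape)         # depth-to-water-table rating
        T = rng.integers(1, 5, shape)         # aquifer thickness rating

        w = {"G": 0.3, "O": 0.2, "D": 0.2, "T": 0.3}        # hypothetical weights
        gwpi = w["G"] * G + w["O"] * O + w["D"] * D + w["T"] * T

        edges = np.quantile(gwpi, [0.25, 0.5, 0.75])        # split into four classes
        zones = np.digitize(gwpi, edges)      # 0 = low ... 3 = very high potential
        print("cells per potential class:", np.bincount(zones.ravel(), minlength=4))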

  9. Domain Modeling and Application Development of an Archetype- and XML-based EHRS. Practical Experiences and Lessons Learnt.

    PubMed

    Kropf, Stefan; Chalopin, Claire; Lindner, Dirk; Denecke, Kerstin

    2017-06-28

Access to patient data within a hospital or between hospitals is still problematic, since a variety of information systems are in use, applying different vendor-specific terminologies and underlying knowledge models. Beyond that, the development of electronic health record systems (EHRSs) is time and resource consuming. Thus, there is a substantial need for a development strategy for standardized EHRSs. We apply a reuse-oriented process model and demonstrate its feasibility and realization on a practical medical use case: an EHRS holding all relevant data arising in the context of treatment of tumors of the sella region. In this paper, we describe the development process and our practical experiences. Requirements towards the development of the EHRS were collected through interviews with a neurosurgeon and analysis of patient data. For modelling of patient data, we selected openEHR as the standard and exploited the software tools provided by the openEHR foundation. The patient information model forms the core of the development process, which comprises the EHR generation and the implementation of an EHRS architecture. Moreover, a reuse-oriented process model from the business domain was adapted to the development of the EHRS. The reuse-oriented process model provides a suitable abstraction of both the modeling and the development of an EHR-centered EHRS. The information modeling process resulted in 18 archetypes that were aggregated in a template and formed the boilerplate of the model-driven development. The EHRs and the EHRS were developed using openEHR and W3C standards, tightly supported by well-established XML techniques. The GUI of the final EHRS integrates and visualizes information from various examinations, medical reports, findings, and laboratory test results. We conclude that the development of a standardized overarching EHR and an EHRS is feasible using openEHR and W3C standards, enabling a high degree of semantic interoperability. The standardized representation visualizes data and can in this way support the decision process of clinicians.

  10. Comparison of Fast Neutron Detector Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stange, Sy; Mckigney, Edward Allen

    2015-02-09

This report documents the work performed for the Department of Homeland Security Domestic Nuclear Detection Office as the project Fast Neutron Detection Evaluation under contract HSHQDC-14-X-00022. This study was performed as a follow-on to the project Study of Fast Neutron Signatures and Measurement Techniques for SNM Detection - DNDO CFP11-100 STA-01. That work compared various detector technologies in a portal monitor configuration, focusing on a comparison between a number of fast neutron detection techniques and two standard thermal neutron detection technologies. The conclusions of the earlier work are contained in the report Comparison of Fast Neutron Detector Technologies. This work is designed to address questions raised about assumptions underlying the models built for the earlier project. To that end, liquid scintillators of two different sizes, one a commercial, off-the-shelf (COTS) model of standard dimensions and the other a large, planar module, were characterized at Los Alamos National Laboratory. The results of those measurements were combined with the results of the earlier models to gain a more complete picture of the performance of liquid scintillator as a portal monitor technology.

  11. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is a developed one with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural both in the problem of identifying the source location and in that of identifying the time profile of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques which are used for finding sparse solutions and propose their modifications to handle selected constraints such as nonnegativity constraints and simple linear constraints, for example on the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of one nonconvex problem. On simple examples, we explain these techniques and compare them in terms of implementation simplicity, approximation capability and convergence properties. Finally, these methods are applied to the European Tracer Experiment (ETEX) data and the results are compared with current state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results show the surprisingly good performance of these techniques. This research is supported by the EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
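
    A minimal sketch of the sparse, nonnegative formulation described above: minimise 0.5*||A x - y||^2 + lam * sum(x) subject to x >= 0 with a projected ISTA iteration. The source-receptor matrix A is a random stand-in here, not an atmospheric transport model, and the release history is invented.

        # Sparse, nonnegative source-term recovery via projected ISTA.
        import numpy as np

        rng = np.random.default_rng(0)
        n_obs, n_times = 40, 200              # few observations, long release time window
        A = rng.random((n_obs, n_times))      # stand-in source-receptor sensitivities
        x_true = np.zeros(n_times)
        x_true[[30, 31, 32, 120]] = [5.0, 8.0, 4.0, 6.0]    # short, sparse release
        y = A @ x_true + 0.05 * rng.normal(size=n_obs)

        lam = 0.5
        step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz constant
        x = np.zeros(n_times)
        for _ in range(5000):                 # gradient step, L1 shrinkage, clip at zero
            grad = A.T @ (A @ x - y)
            x = np.maximum(0.0, x - step * (grad + lam))

        print("recovered nonzero release times:", np.flatnonzero(x > 0.1))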

  12. Diminishing-cues retrieval practice: A memory-enhancing technique that works when regular testing doesn't.

    PubMed

    Fiechter, Joshua L; Benjamin, Aaron S

    2017-08-28

    Retrieval practice has been shown to be a highly effective tool for enhancing memory, a fact that has led to major changes to educational practice and technology. However, when initial learning is poor, initial retrieval practice is unlikely to be successful and long-term benefits of retrieval practice are compromised or nonexistent. Here, we investigate the benefit of a scaffolded retrieval technique called diminishing-cues retrieval practice (Finley, Benjamin, Hays, Bjork, & Kornell, Journal of Memory and Language, 64, 289-298, 2011). Under learning conditions that favored a strong testing effect, diminishing cues and standard retrieval practice both enhanced memory performance relative to restudy. Critically, under learning conditions where standard retrieval practice was not helpful, diminishing cues enhanced memory performance substantially. These experiments demonstrate that diminishing-cues retrieval practice can widen the range of conditions under which testing can benefit memory, and so can serve as a model for the broader application of testing-based techniques for enhancing learning.

  13. Correction of stain variations in nuclear refractive index of clinical histology specimens

    PubMed Central

    Uttam, Shikhar; Bista, Rajan K.; Hartman, Douglas J.; Brand, Randall E.; Liu, Yang

    2011-01-01

    For any technique to be adopted into a clinical setting, it is imperative that it seamlessly integrates with well-established clinical diagnostic workflow. We recently developed an optical microscopy technique—spatial-domain low-coherence quantitative phase microscopy (SL-QPM) that can extract the refractive index of the cell nucleus from the standard histology specimens on glass slides prepared via standard clinical protocols. This technique has shown great potential in detecting cancer with a better sensitivity than conventional pathology. A major hurdle in the clinical translation of this technique is the intrinsic variation among staining agents used in histology specimens, which limits the accuracy of refractive index measurements of clinical samples. In this paper, we present a simple and easily generalizable method to remove the effect of variations in staining levels on nuclear refractive index obtained with SL-QPM. We illustrate the efficacy of our correction method by applying it to variously stained histology samples from animal model and clinical specimens. PMID:22112118

  14. Trends in modeling Biomedical Complex Systems

    PubMed Central

Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Pietro

    2009-01-01

In this paper we provide an introduction to techniques for multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, informatics tools and technologies suitable for biological and biomedical research, which is becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts in mathematical modeling methodologies and statistical inference, bioinformatics and standards tools to investigate complex biomedical systems are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented. PMID:19828068

  15. An information theory approach to the density of the earth

    NASA Technical Reports Server (NTRS)

    Graber, M. A.

    1977-01-01

    Information theory can develop a technique which takes experimentally determined numbers and produces a uniquely specified best density model satisfying those numbers. A model was generated using five numerical parameters: the mass of the earth, its moment of inertia, three zero-node torsional normal modes (L = 2, 8, 26). In order to determine the stability of the solution, six additional densities were generated, in each of which the period of one of the three normal modes was increased or decreased by one standard deviation. The superposition of the seven models is shown. It indicates that current knowledge of the torsional modes is sufficient to specify the density in the upper mantle but that the lower mantle and core will require smaller standard deviations before they can be accurately specified.

  16. Measurement of ultra-low power oscillators using adaptive drift cancellation with applications to nano-magnetic spin torque oscillators.

    PubMed

    Tamaru, S; Ricketts, D S

    2013-05-01

    This work presents a technique for measuring ultra-low power oscillator signals using an adaptive drift cancellation method. We demonstrate this technique through spectrum measurements of a sub-pW nano-magnet spin torque oscillator (STO). We first present a detailed noise analysis of the standard STO characterization apparatus to estimate the background noise level, then compare these results to the noise level of three measurement configurations. The first and second share the standard configuration but use different spectrum analyzers (SA), an older model and a state-of-the-art model, respectively. The third is the technique proposed in this work using the same old SA as for the first. Our results show that the first and second configurations suffer from a large drift that requires ~30 min to stabilize each time the SA changes the frequency band, even though the SA has been powered on for longer than 24 h. The third configuration introduced in this work, however, shows absolutely no drift as the SA changes frequency band, and nearly the same noise performance as with a state-of-the-art SA, thus providing a reliable method for measuring very low power signals for a wide variety of applications.

  17. Detecting and Locating Seismic Events Without Phase Picks or Velocity Models

    NASA Astrophysics Data System (ADS)

    Arrowsmith, S.; Young, C. J.; Ballard, S.; Slinkard, M.

    2015-12-01

    The standard paradigm for seismic event monitoring is to scan waveforms from a network of stations and identify the arrival time of various seismic phases. A signal association algorithm then groups the picks to form events, which are subsequently located by minimizing residuals between measured travel times and travel times predicted by an Earth model. Many of these steps are prone to significant errors which can lead to erroneous arrival associations and event locations. Here, we revisit a concept for event detection that does not require phase picks or travel time curves and fuses detection, association and location into a single algorithm. Our pickless event detector exploits existing catalog and waveform data to build an empirical stack of the full regional seismic wavefield, which is subsequently used to detect and locate events at a network level using correlation techniques. Because the technique uses more of the information content of the original waveforms, the concept is particularly powerful for detecting weak events that would be missed by conventional methods. We apply our detector to seismic data from the University of Utah Seismograph Stations network and compare our results with the earthquake catalog published by the University of Utah. We demonstrate that the pickless detector can detect and locate significant numbers of events previously missed by standard data processing techniques.
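
    As a small illustration of the correlation-based, pickless idea, the sketch below slides an empirical template over continuous data at several stations, shifts each station's correlation trace by its moveout, and stacks them so a coherent event stands out above a network-level threshold. Waveforms, moveouts and the threshold are synthetic choices of ours, not the authors' detector.

        # Network-stacked template correlation as a pickless detector (synthetic data).
        import numpy as np

        rng = np.random.default_rng(0)
        fs, n_sta, n_samp = 100, 5, 20_000
        t = np.arange(100) / fs
        template = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)    # empirical "stack" waveform

        data = 0.5 * rng.normal(size=(n_sta, n_samp))
        onset = 12_000
        for s in range(n_sta):                                   # bury an event with simple moveout
            lag = 20 * s
            data[s, onset + lag:onset + lag + template.size] += 1.5 * template

        def norm_xcorr(trace, tmpl):
            n = tmpl.size
            tz = (tmpl - tmpl.mean()) / tmpl.std()
            out = np.empty(trace.size - n + 1)
            for k in range(out.size):                            # normalised correlation coefficient
                win = trace[k:k + n]
                out[k] = np.dot(tz, win - win.mean()) / (n * win.std() + 1e-12)
            return out

        cc = np.zeros(n_samp - template.size + 1)
        for s in range(n_sta):                                   # undo the moveout, then stack
            c = norm_xcorr(data[s], template)
            cc[:c.size - 20 * s] += c[20 * s:] / n_sta

        detections = np.flatnonzero(cc > 0.4)
        print("detections near sample(s):", detections[:5], "| true onset:", onset)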

  18. Investigating the probability of detection of typical cavity shapes through modelling and comparison of geophysical techniques

    NASA Astrophysics Data System (ADS)

    James, P.

    2011-12-01

With a growing need for housing in the U.K., the government has proposed increased development of brownfield sites. However, old mine workings and natural cavities represent a potential hazard before, during and after construction on such sites, and add further complication to subsurface parameters. Cavities are hence a limitation on certain redevelopment, and their detection is an ever more important consideration. The current standard technique for cavity detection is a borehole grid, which is intrusive, non-continuous, slow and expensive. A new robust investigation standard for the detection of cavities is sought, and geophysical techniques offer an attractive alternative. Geophysical techniques have previously been utilised successfully in the detection of cavities in various geologies, but still have an uncertain reputation in the engineering industry. Engineers are unsure of the techniques and are inclined to rely on well-known techniques rather than utilise new technologies. Bad experiences with geophysics are commonly due to the indiscriminate choice of particular techniques. It is imperative that a geophysical survey is designed with the specific site and target in mind at all times, together with the ability and judgement to rule out some, or all, techniques. To this author's knowledge no comparative software exists to aid technique choice. Also, previous modelling software limits the shapes of bodies, and hence typical cavity shapes are not represented. Here, we introduce 3D modelling software (Matlab) which computes and compares the response to various cavity targets from a range of techniques (gravity, gravity gradient, magnetic, magnetic gradient and GPR). Typical near-surface cavity shapes are modelled, including shafts, bellpits, various lining and capping materials, and migrating voids. The probability of cavity detection is assessed in typical subsurface and noise conditions across a range of survey parameters. Techniques can be compared and the limits of detection distance assessed. The density of survey points required to achieve a required probability of detection can be calculated. The software aids the discriminate choice of technique, improves survey design, and increases the likelihood of survey success; all factors sought in the engineering industry. As a simple example, the response from magnetometry, gravimetry, and gravity gradient techniques above a 3 m deep, 1 m cube air cavity in limestone across a 15 m grid was calculated. The maximum responses above the cavity are small (amplitudes of 0.018 nT, 0.0013 mGal, and 8.3 eotvos, respectively), but at typical site noise levels the detection reliability is over 50% for the gradient gravity method on a single survey line. Increasing the number of survey points across the site increases the reliability of detection of the anomaly by the addition of probabilities. We can calculate the probability of detection at different profile spacings to assess the best possible survey design. At 1 m spacing the overall probability of detection by the gradient gravity method is over 90%, and over 60% for magnetometry (at 3 m spacing the probability drops to 32%). The use of modelling in near-surface surveys is a useful tool to assess the feasibility of a range of techniques to detect subtle signals. Future work will integrate this work with borehole-measured parameters.
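
    The kind of forward calculation described above can be illustrated with a point-mass approximation of the 1 m air cube: compute the vertical gravity anomaly along a survey line and compare it with survey noise. The rock density, noise level and the simple 2-sigma detection criterion are our assumptions, not the paper's.

        # Gravity anomaly of a small buried cavity and a crude detection probability.
        import numpy as np
        from scipy.stats import norm

        G = 6.674e-11                      # m^3 kg^-1 s^-2
        rho_rock = 2500.0                  # assumed limestone density (kg/m^3)
        depth, volume = 3.0, 1.0           # 1 m^3 void at 3 m depth
        dm = -rho_rock * volume            # mass deficit of the air-filled cavity

        x = np.linspace(-7.5, 7.5, 151)    # survey line through the cavity centre (m)
        gz = G * dm * depth / (x**2 + depth**2) ** 1.5      # vertical anomaly (m/s^2)
        gz_mgal = gz * 1e5                                  # 1 mGal = 1e-5 m/s^2
        print("peak anomaly: %.4f mGal" % gz_mgal.min())

        sigma = 0.005                      # assumed gravimeter repeatability (mGal)
        threshold = 2 * sigma              # flag a reading that departs by more than 2 sigma
        p_detect = norm.sf((threshold - np.abs(gz_mgal)) / sigma)
        print("best single-station detection probability: %.2f" % p_detect.max())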

  19. Skin Friction and Transition Location Measurement on Supersonic Transport Models

    NASA Technical Reports Server (NTRS)

    Kennelly, Robert A., Jr.; Goodsell, Aga M.; Olsen, Lawrence E. (Technical Monitor)

    2000-01-01

    Flow visualization techniques were used to obtain both qualitative and quantitative skin friction and transition location data in wind tunnel tests performed on two supersonic transport models at Mach 2.40. Oil-film interferometry was useful for verifying boundary layer transition, but careful monitoring of model surface temperatures and systematic examination of the effects of tunnel start-up and shutdown transients will be required to achieve high levels of accuracy for skin friction measurements. A more common technique, use of a subliming solid to reveal transition location, was employed to correct drag measurements to a standard condition of all-turbulent flow on the wing. These corrected data were then analyzed to determine the additional correction required to account for the effect of the boundary layer trip devices.

  20. Volterra model of the parametric array loudspeaker operating at ultrasonic frequencies.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2016-11-01

    The parametric array loudspeaker (PAL) is an application of the parametric acoustic array in air, which can be applied to transmit a narrow audio beam from an ultrasonic emitter. However, nonlinear distortion is very perceptible in the audio beam. Modulation methods to reduce the nonlinear distortion are available for on-axis far-field applications. For other applications, preprocessing techniques are wanting. In order to develop a preprocessing technique with general applicability to a wide range of operating conditions, the Volterra filter is investigated as a nonlinear model of the PAL in this paper. Limitations of the standard audio-to-audio Volterra filter are elaborated. An improved ultrasound-to-ultrasound Volterra filter is proposed and empirically demonstrated to be a more generic Volterra model of the PAL.
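
    For readers unfamiliar with the model class, a discrete second-order Volterra filter is simply a linear convolution term plus a quadratic kernel acting on past input samples, as in the sketch below. The kernels here are arbitrary illustrations, not identified PAL kernels; the distinction drawn in the paper (audio-to-audio versus ultrasound-to-ultrasound modelling) concerns what the input and output signals represent, not the filter structure itself.

        # Discrete second-order (linear + quadratic) Volterra model with memory M.
        import numpy as np

        M = 8
        rng = np.random.default_rng(0)
        h1 = rng.normal(scale=0.5, size=M)         # first-order (linear) kernel
        h2 = rng.normal(scale=0.05, size=(M, M))
        h2 = (h2 + h2.T) / 2                       # symmetric second-order kernel

        def volterra2(x, h1, h2):
            M = h1.size
            xp = np.concatenate([np.zeros(M - 1), x])       # zero initial conditions
            y = np.empty(x.size)
            for n in range(x.size):
                w = xp[n:n + M][::-1]                       # x[n], x[n-1], ..., x[n-M+1]
                y[n] = h1 @ w + w @ h2 @ w                  # linear term + quadratic term
            return y

        x = np.sin(2 * np.pi * 0.05 * np.arange(200))       # toy input
        print(volterra2(x, h1, h2)[:5])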

  1. NASA Handbook for Models and Simulations: An Implementation Guide for NASA-STD-7009

    NASA Technical Reports Server (NTRS)

    Steele, Martin J.

    2013-01-01

    The purpose of this Handbook is to provide technical information, clarification, examples, processes, and techniques to help institute good modeling and simulation practices in the National Aeronautics and Space Administration (NASA). As a companion guide to NASA-STD- 7009, Standard for Models and Simulations, this Handbook provides a broader scope of information than may be included in a Standard and promotes good practices in the production, use, and consumption of NASA modeling and simulation products. NASA-STD-7009 specifies what a modeling and simulation activity shall or should do (in the requirements) but does not prescribe how the requirements are to be met, which varies with the specific engineering discipline, or who is responsible for complying with the requirements, which depends on the size and type of project. A guidance document, which is not constrained by the requirements of a Standard, is better suited to address these additional aspects and provide necessary clarification. This Handbook stems from the Space Shuttle Columbia Accident Investigation (2003), which called for Agency-wide improvements in the "development, documentation, and operation of models and simulations"' that subsequently elicited additional guidance from the NASA Office of the Chief Engineer to include "a standard method to assess the credibility of the models and simulations."2 General methods applicable across the broad spectrum of model and simulation (M&S) disciplines were sought to help guide the modeling and simulation processes within NASA and to provide for consistent reporting ofM&S activities and analysis results. From this, the standardized process for the M&S activity was developed. The major contents of this Handbook are the implementation details of the general M&S requirements ofNASA-STD-7009, including explanations, examples, and suggestions for improving the credibility assessment of an M&S-based analysis.

  2. Business Model for the Security of a Large-Scale PACS, Compliance with ISO/27002:2013 Standard.

    PubMed

    Gutiérrez-Martínez, Josefina; Núñez-Gaona, Marco Antonio; Aguirre-Meneses, Heriberto

    2015-08-01

    Data security is a critical issue in an organization; a proper information security management (ISM) is an ongoing process that seeks to build and maintain programs, policies, and controls for protecting information. A hospital is one of the most complex organizations, where patient information has not only legal and economic implications but, more importantly, an impact on the patient's health. Imaging studies include medical images, patient identification data, and proprietary information of the study; these data are contained in the storage device of a PACS. This system must preserve the confidentiality, integrity, and availability of patient information. There are techniques such as firewalls, encryption, and data encapsulation that contribute to the protection of information. In addition, the Digital Imaging and Communications in Medicine (DICOM) standard and the requirements of the Health Insurance Portability and Accountability Act (HIPAA) regulations are also used to protect the patient clinical data. However, these techniques are not systematically applied to the picture and archiving and communication system (PACS) in most cases and are not sufficient to ensure the integrity of the images and associated data during transmission. The ISO/IEC 27001:2013 standard has been developed to improve the ISM. Currently, health institutions lack effective ISM processes that enable reliable interorganizational activities. In this paper, we present a business model that accomplishes the controls of ISO/IEC 27002:2013 standard and criteria of security and privacy from DICOM and HIPAA to improve the ISM of a large-scale PACS. The methodology associated with the model can monitor the flow of data in a PACS, facilitating the detection of unauthorized access to images and other abnormal activities.

  3. Advanced Atmospheric Ensemble Modeling Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckley, R.; Chiswell, S.; Kurzeja, R.

Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension to work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied, a coastal release (SF6) and an inland release (Freon) which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce required computing resources for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release where the spatial and temporal differences due to interior valley heating lead to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data is assimilated into the simulation and enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.

  4. Evaluation of liquefaction potential of soil based on standard penetration test using multi-gene genetic programming model

    NASA Astrophysics Data System (ADS)

    Muduli, Pradyut; Das, Sarat

    2014-06-01

This paper discusses the evaluation of liquefaction potential of soil based on a standard penetration test (SPT) dataset using an evolutionary artificial intelligence technique, multi-gene genetic programming (MGGP). The liquefaction classification accuracy (94.19%) of the developed liquefaction index (LI) model is found to be better than that of the available artificial neural network (ANN) model (88.37%) and on par with the available support vector machine (SVM) model (94.19%) on the basis of the testing data. Further, an empirical equation is presented using MGGP to approximate the unknown limit state function representing the cyclic resistance ratio (CRR) of soil based on the developed LI model. Using an independent database of 227 cases, the overall rates of successful prediction of the occurrence of liquefaction and non-liquefaction are found to be 87, 86, and 84% for the developed MGGP-based model, the available ANN model, and the statistical model, respectively, on the basis of the calculated factor of safety (Fs) against liquefaction occurrence.
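
    The factor-of-safety logic underlying such comparisons is sketched below: the cyclic stress ratio comes from the standard Seed-Idriss simplified expression, while the cyclic resistance ratio would come from a model such as the paper's MGGP equation, which is not reproduced here, so a hypothetical placeholder is used instead. All site numbers are invented.

        # Factor of safety against liquefaction, FS = CRR / CSR (toy example).
        def csr_seed_idriss(a_max_g, sigma_v, sigma_v_eff, depth_m):
            # Cyclic stress ratio from the Seed-Idriss simplified procedure.
            rd = 1.0 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m
            return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

        def crr_placeholder(n1_60):
            # Stand-in for a CRR model in terms of corrected SPT blow count (N1)60.
            return 0.03 + 0.01 * n1_60       # hypothetical, monotone in (N1)60

        csr = csr_seed_idriss(a_max_g=0.25, sigma_v=110.0, sigma_v_eff=65.0, depth_m=6.0)
        for n1_60 in (8, 15, 25):
            fs = crr_placeholder(n1_60) / csr
            verdict = "liquefaction predicted" if fs < 1.0 else "no liquefaction"
            print(f"(N1)60 = {n1_60:2d}: FS = {fs:.2f} -> {verdict}")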

  5. Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments

    NASA Technical Reports Server (NTRS)

    Manning, Ted A.; Lawrence, Scott L.

    2014-01-01

    As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.

  6. Encoding probabilistic brain atlases using Bayesian inference.

    PubMed

    Van Leemput, Koen

    2009-06-01

    This paper addresses the problem of creating probabilistic brain atlases from manually labeled training data. Probabilistic atlases are typically constructed by counting the relative frequency of occurrence of labels in corresponding locations across the training images. However, such an "averaging" approach generalizes poorly to unseen cases when the number of training images is limited, and provides no principled way of aligning the training datasets using deformable registration. In this paper, we generalize the generative image model implicitly underlying standard "average" atlases, using mesh-based representations endowed with an explicit deformation model. Bayesian inference is used to infer the optimal model parameters from the training data, leading to a simultaneous group-wise registration and atlas estimation scheme that encompasses standard averaging as a special case. We also use Bayesian inference to compare alternative atlas models in light of the training data, and show how this leads to a data compression problem that is intuitive to interpret and computationally feasible. Using this technique, we automatically determine the optimal amount of spatial blurring, the best deformation field flexibility, and the most compact mesh representation. We demonstrate, using 2-D training datasets, that the resulting models are better at capturing the structure in the training data than conventional probabilistic atlases. We also present experiments of the proposed atlas construction technique in 3-D, and show the resulting atlases' potential in fully-automated, pulse sequence-adaptive segmentation of 36 neuroanatomical structures in brain MRI scans.
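
    The "averaging" construction that the paper generalises can be written in a few lines: count the relative frequency of each label at every voxel across the aligned training label maps, optionally with a small pseudo-count so unseen labels do not receive exactly zero probability. The toy label maps below are random stand-ins for real segmentations.

        # Frequency-counting probabilistic atlas from aligned label maps.
        import numpy as np

        rng = np.random.default_rng(0)
        n_subjects, shape, n_labels = 10, (32, 32), 4
        train = rng.integers(0, n_labels, size=(n_subjects,) + shape)

        def frequency_atlas(label_maps, n_labels, pseudo_count=0.0):
            counts = np.zeros(label_maps.shape[1:] + (n_labels,))
            for lab in range(n_labels):
                counts[..., lab] = (label_maps == lab).sum(axis=0) + pseudo_count
            return counts / counts.sum(axis=-1, keepdims=True)   # per-voxel probabilities

        atlas = frequency_atlas(train, n_labels, pseudo_count=0.5)
        print(atlas.shape, atlas[0, 0])      # (32, 32, 4) and one voxel's label probabilities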

  7. A Component Approach to Collaborative Scientific Software Development: Tools and Techniques Utilized by the Quantum Chemistry Science Application Partnership

    DOE PAGES

    Kenny, Joseph P.; Janssen, Curtis L.; Gordon, Mark S.; ...

    2008-01-01

    Cutting-edge scientific computing software is complex, increasingly involving the coupling of multiple packages to combine advanced algorithms or simulations at multiple physical scales. Component-based software engineering (CBSE) has been advanced as a technique for managing this complexity, and complex component applications have been created in the quantum chemistry domain, as well as several other simulation areas, using the component model advocated by the Common Component Architecture (CCA) Forum. While programming models do indeed enable sound software engineering practices, the selection of a programming model is just one building block in a comprehensive approach to large-scale collaborative development, which must also address interface and data standardization, and language and package interoperability. We provide an overview of the development approach utilized within the Quantum Chemistry Science Application Partnership, identifying design challenges, describing the techniques which we have adopted to address these challenges and highlighting the advantages which the CCA approach offers for collaborative development.

  8. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clegg, Samuel M; Barefield, James E; Wiens, Roger C

    2008-01-01

    Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by the chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, in order for a series of calibration standards similar to the unknown to be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly-metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
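
    A hedged sketch of the kind of PLS calibration and PCA score computation named above, using scikit-learn on synthetic stand-in spectra; the channel count, component numbers and target concentrations are assumptions for illustration, not the LIBS dataset.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA

# Synthetic stand-ins for LIBS spectra: 18 samples x 500 spectral channels,
# with elemental concentrations as the regression targets.
rng = np.random.default_rng(1)
spectra = rng.normal(size=(18, 500))
concentrations = rng.uniform(0, 1, size=(18, 3))   # e.g. three oxide fractions

# PLS builds a calibration model mapping spectra -> composition.
pls = PLSRegression(n_components=5).fit(spectra, concentrations)
predicted = pls.predict(spectra)

# PCA reduces the spectra to a few scores that could feed a classifier
# (or a SIMCA-style per-class model) for rock-type prediction.
scores = PCA(n_components=3).fit_transform(spectra)
print(predicted.shape, scores.shape)   # (18, 3) (18, 3)
```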

  9. Towards large-scale FAME-based bacterial species identification using machine learning techniques.

    PubMed

    Slabbinck, Bram; De Baets, Bernard; Dawyndt, Peter; De Vos, Paul

    2009-05-01

    In the last decade, bacterial taxonomy witnessed a huge expansion. The swift pace of bacterial species (re-)definitions has a serious impact on the accuracy and completeness of first-line identification methods. Consequently, back-end identification libraries need to be synchronized with the List of Prokaryotic names with Standing in Nomenclature. In this study, we focus on bacterial fatty acid methyl ester (FAME) profiling as a broadly used first-line identification method. From the BAME@LMG database, we have selected FAME profiles of individual strains belonging to the genera Bacillus, Paenibacillus and Pseudomonas. Only those profiles resulting from standard growth conditions have been retained. The corresponding data set covers 74, 44 and 95 validly published bacterial species, respectively, represented by 961, 378 and 1673 standard FAME profiles. Through the application of machine learning techniques in a supervised strategy, different computational models have been built for genus and species identification. Three techniques have been considered: artificial neural networks, random forests and support vector machines. Nearly perfect identification has been achieved at genus level. Notwithstanding the known limited discriminative power of FAME analysis for species identification, the computational models have resulted in good species identification results for the three genera. For Bacillus, Paenibacillus and Pseudomonas, random forests have resulted in sensitivity values of 0.847, 0.901 and 0.708, respectively. The random forest models outperform those of the other machine learning techniques. Moreover, our machine learning approach also outperformed the Sherlock MIS (MIDI Inc., Newark, DE, USA). These results show that machine learning proves very useful for FAME-based bacterial species identification. Besides good bacterial identification at species level, speed and ease of taxonomic synchronization are major advantages of this computational species identification strategy.
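
    For orientation, the snippet below shows a generic random-forest species classifier and a macro-averaged recall ("sensitivity") computation with scikit-learn; the feature dimensions and labels are synthetic stand-ins for FAME profiles, not the BAME@LMG data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Stand-in for FAME profiles: each sample is a vector of fatty-acid percentages,
# each label a hypothetical species index.
rng = np.random.default_rng(2)
X = rng.uniform(0, 30, size=(600, 40))
y = rng.integers(0, 10, size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Macro-averaged recall is one way to report the "sensitivity" quoted above.
print(recall_score(y_te, clf.predict(X_te), average="macro"))
```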

  10. On statistical inference in time series analysis of the evolution of road safety.

    PubMed

    Commandeur, Jacques J F; Bijleveld, Frits D; Bergel-Hayat, Ruth; Antoniou, Constantinos; Yannis, George; Papadimitriou, Eleonora

    2013-11-01

    Data collected for building a road safety observatory usually include observations made sequentially through time. Examples of such data, called time series data, include annual (or monthly) number of road traffic accidents, traffic fatalities or vehicle kilometers driven in a country, as well as the corresponding values of safety performance indicators (e.g., data on speeding, seat belt use, alcohol use, etc.). Some commonly used statistical techniques imply assumptions that are often violated by the special properties of time series data, namely serial dependency among disturbances associated with the observations. The first objective of this paper is to demonstrate the impact of such violations on the applicability of standard methods of statistical inference, which leads to an under- or overestimation of the standard error and consequently may produce erroneous inferences. Moreover, having established the adverse consequences of ignoring serial dependency issues, the paper aims to describe rigorous statistical techniques used to overcome them. In particular, appropriate time series analysis techniques of varying complexity are employed to describe the development over time, relating the accident-occurrences to explanatory factors such as exposure measures or safety performance indicators, and forecasting the development into the near future. Traditional regression models (whether they are linear, generalized linear or nonlinear) are shown not to naturally capture the inherent dependencies in time series data. Dedicated time series analysis techniques, such as the ARMA-type and DRAG approaches are discussed next, followed by structural time series models, which are a subclass of state space methods. The paper concludes with general recommendations and practice guidelines for the use of time series models in road safety research. Copyright © 2012 Elsevier Ltd. All rights reserved.
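
    A small illustration of the core point, assuming statsmodels is available: a naive trend regression on serially correlated counts yields a suspicious Durbin-Watson statistic, while an ARMA-type fit models the dependency explicitly. The simulated series, coefficients and model orders are arbitrary.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.tsa.arima.model import ARIMA

# Simulated annual accident counts with an AR(1) disturbance, so an ordinary
# regression on a time trend violates the independence assumption.
rng = np.random.default_rng(3)
n = 40
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.7 * e[t - 1] + rng.normal(scale=10)
y = 500 - 5 * np.arange(n) + e

# Naive trend regression: a Durbin-Watson statistic far from 2 flags serial
# correlation, so the usual standard errors are not trustworthy.
X = sm.add_constant(np.arange(n))
ols = sm.OLS(y, X).fit()
print("Durbin-Watson:", durbin_watson(ols.resid))

# An ARMA-type model with the trend as an exogenous regressor accounts for it.
arma = ARIMA(y, exog=np.arange(n, dtype=float).reshape(-1, 1), order=(1, 0, 0)).fit()
print(arma.params)
```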

  11. External trial deep brain stimulation device for the application of desynchronizing stimulation techniques.

    PubMed

    Hauptmann, C; Roulet, J-C; Niederhauser, J J; Döll, W; Kirlangic, M E; Lysyansky, B; Krachkovskyi, V; Bhatti, M A; Barnikol, U B; Sasse, L; Bührle, C P; Speckmann, E-J; Götz, M; Sturm, V; Freund, H-J; Schnell, U; Tass, P A

    2009-12-01

    In the past decade deep brain stimulation (DBS)-the application of electrical stimulation to specific target structures via implanted depth electrodes-has become the standard treatment for medically refractory Parkinson's disease and essential tremor. These diseases are characterized by pathological synchronized neuronal activity in particular brain areas. We present an external trial DBS device capable of administering effectively desynchronizing stimulation techniques developed with methods from nonlinear dynamics and statistical physics according to a model-based approach. These techniques exploit either stochastic phase resetting principles or complex delayed-feedback mechanisms. We explain how these methods are implemented into a safe and user-friendly device.

  12. Combining dictionary techniques with extensible markup language (XML)--requirements to a new approach towards flexible and standardized documentation.

    PubMed Central

    Altmann, U.; Tafazzoli, A. G.; Noelle, G.; Huybrechts, T.; Schweiger, R.; Wächter, W.; Dudeck, J. W.

    1999-01-01

    In oncology various international and national standards exist for the documentation of different aspects of a disease. Since elements of these standards are repeated in different contexts, a common data dictionary could support consistent representation in any context. For the construction of such a dictionary existing documents have to be worked up in a complex procedure, that considers aspects of hierarchical decomposition of documents and of domain control as well as aspects of user presentation and models of the underlying model of patient data. In contrast to other thesauri, text chunks like definitions or explanations are very important and have to be preserved, since oncologic documentation often means coding and classification on an aggregate level and the safe use of coding systems is an important precondition for comparability of data. This paper discusses the potentials of the use of XML in combination with a dictionary for the promotion and development of standard conformable applications for tumor documentation. PMID:10566311

  13. Use of prototype two-channel endoscope with elevator enables larger lift-and-snare endoscopic mucosal resection in a porcine model

    PubMed Central

    Atkinson, Matthew; Chukwumah, Chike; Marks, Jeffrey; Chak, Amitabh

    2014-01-01

    Background: Flat and depressed lesions are becoming increasingly recognized in the esophagus, stomach, and colon. Various techniques have been described for endoscopic mucosal resection (EMR) of these lesions. Aims: To evaluate the efficacy of lift-grasp-cut EMR using a prototype dual-channel forward-viewing endoscope with an instrument elevator in one accessory channel (dual-channel elevator scope) as compared to standard dual-channel endoscopes. Methods: EMR was performed using a lift-grasp-cut technique on normal flat rectosigmoid or gastric mucosa in live porcine models after submucosal injection of 4 mL of saline using a dual-channel elevator scope or a standard dual-channel endoscope. With the dual-channel elevator scope, the elevator was used to attain further lifting of the mucosa. The primary endpoint was size of the EMR specimen and the secondary endpoint was number of complications. Results: Twelve experiments were performed (six gastric and six colonic). Mean specimen diameter was 2.27 cm with the dual-channel elevator scope and 1.34 cm with the dual-channel endoscope (P = 0.018). Two colonic perforations occurred with the dual-channel endoscope, vs no complications with the dual-channel elevator scope. Conclusions: The increased lift of the mucosal epithelium, through use of the dual-channel elevator scope, allows for larger EMR when using a lift-grasp-cut technique. Noting the thin nature of the porcine colonic wall, use of the elevator may also make this technique safer. PMID:24760237

  14. Precipitation interpolation in mountainous areas

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur

    2015-04-01

    Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence and around 0.15 for temporal. Despite largely violated assumptions, plain Kriging produces better results than simple inverse distance weighting. More surprisingly, the presumably 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one gauge per 433 km2, higher than the overall density of the Norwegian national network. Admittedly, the cross-validation technique reduces the effective gauge density; still, the results suggest that we are far from being able to provide hydrological models with adequate data for their main driving force.
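
    A minimal NumPy sketch of one of the benchmarked ideas, inverse distance weighting assessed by leave-one-out cross-validation; the gauge coordinates and precipitation values are hypothetical, and no undercatch correction or drift covariates are included.

```python
import numpy as np

def idw(xy_obs: np.ndarray, z_obs: np.ndarray, xy_new: np.ndarray, power: float = 2.0) -> np.ndarray:
    """Inverse-distance-weighted interpolation of daily precipitation."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                 # avoid division by zero at a gauge
    w = 1.0 / d**power
    return (w * z_obs[None, :]).sum(axis=1) / w.sum(axis=1)

# Leave-one-out cross-validation, the assessment used in the abstract:
# predict each gauge from the remaining ones and compare to the observation.
rng = np.random.default_rng(4)
xy = rng.uniform(0, 160, size=(60, 2))      # hypothetical gauge coordinates (km)
z = rng.gamma(2.0, 3.0, size=60)            # hypothetical daily precipitation (mm)

errors = []
for i in range(len(z)):
    mask = np.arange(len(z)) != i
    pred = idw(xy[mask], z[mask], xy[i:i + 1])[0]
    errors.append(pred - z[i])
print("LOO RMSE:", np.sqrt(np.mean(np.square(errors))))
```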

  15. Boosting drug named entity recognition using an aggregate classifier.

    PubMed

    Korkontzelos, Ioannis; Piliouras, Dimitrios; Dowsey, Andrew W; Ananiadou, Sophia

    2015-10-01

    Drug named entity recognition (NER) is a critical step for complex biomedical NLP tasks such as the extraction of pharmacogenomic, pharmacodynamic and pharmacokinetic parameters. Large quantities of high quality training data are almost always a prerequisite for employing supervised machine-learning techniques to achieve high classification performance. However, the human labour needed to produce and maintain such resources is a significant limitation. In this study, we improve the performance of drug NER without relying exclusively on manual annotations. We perform drug NER using either a small gold-standard corpus (120 abstracts) or no corpus at all. In our approach, we develop a voting system to combine a number of heterogeneous models, based on dictionary knowledge, gold-standard corpora and silver annotations, to enhance performance. To improve recall, we employed genetic programming to evolve 11 regular-expression patterns that capture common drug suffixes and used them as an extra means for recognition. Our approach uses a dictionary of drug names, i.e. DrugBank, a small manually annotated corpus, i.e. the pharmacokinetic corpus, and a part of the UKPMC database, as raw biomedical text. Gold-standard and silver annotated data are used to train maximum entropy and multinomial logistic regression classifiers. Aggregating drug NER methods, based on gold-standard annotations, dictionary knowledge and patterns, improved the performance of models trained on gold-standard annotations only, achieving a maximum F-score of 95%. In addition, combining models trained on silver annotations, dictionary knowledge and patterns is shown to achieve comparable performance to models trained exclusively on gold-standard data. The main reason appears to be the morphological similarities shared among drug names. We conclude that gold-standard data are not a hard requirement for drug NER. Combining heterogeneous models built on dictionary knowledge can achieve similar or comparable classification performance with that of the best performing model trained on gold-standard annotations. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
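
    The voting idea can be caricatured in a few lines; the dictionary entries, suffix patterns and classifier votes below are illustrative stand-ins, not the DrugBank lexicon or the evolved patterns from the paper.

```python
import re

# Toy components of a voting recognizer: a dictionary model, a suffix-pattern
# model (the kind of regular expressions evolved in the paper), and a stand-in
# for a statistical classifier. All names and patterns are illustrative.
DRUG_DICTIONARY = {"ibuprofen", "warfarin", "metformin"}
SUFFIX_PATTERNS = [re.compile(p) for p in (r"\w+mycin$", r"\w+olol$", r"\w+prazole$")]

def dictionary_vote(token: str) -> bool:
    return token.lower() in DRUG_DICTIONARY

def suffix_vote(token: str) -> bool:
    return any(p.search(token.lower()) for p in SUFFIX_PATTERNS)

def is_drug(token: str, classifier_vote: bool) -> bool:
    """Simple majority vote over three heterogeneous recognizers."""
    votes = [dictionary_vote(token), suffix_vote(token), classifier_vote]
    return sum(votes) >= 2

# Votes that a model trained on gold/silver annotations might produce.
fake_classifier = {"warfarin": True, "omeprazole": True}
tokens = "patient received Warfarin and omeprazole daily".split()
print([t for t in tokens if is_drug(t, fake_classifier.get(t.lower(), False))])
# -> ['Warfarin', 'omeprazole']
```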

  16. New paradigms in internal architecture design and freeform fabrication of tissue engineering porous scaffolds.

    PubMed

    Yoo, Dongjin

    2012-07-01

    Advanced additive manufacture (AM) techniques are now being developed to fabricate scaffolds with controlled internal pore architectures in the field of tissue engineering. In general, these techniques use a hybrid method which combines computer-aided design (CAD) with computer-aided manufacturing (CAM) tools to design and fabricate complicated three-dimensional (3D) scaffold models. The mathematical descriptions of micro-architectures along with the macro-structures of the 3D scaffold models are limited by current CAD technologies as well as by the difficulty of transferring the designed digital models to standard formats for fabrication. To overcome these difficulties, we have developed an efficient internal pore architecture design system based on triply periodic minimal surface (TPMS) unit cell libraries and associated computational methods to assemble TPMS unit cells into an entire scaffold model. In addition, we have developed a process planning technique based on TPMS internal architecture pattern of unit cells to generate tool paths for freeform fabrication of tissue engineering porous scaffolds. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
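
    As a concrete taste of a TPMS unit cell, the sketch below evaluates the common gyroid implicit surface on a voxel grid; the iso-level, grid size and the suggested marching-cubes step are assumptions, and the paper's full unit-cell library and tool-path planning are not reproduced.

```python
import numpy as np

def gyroid(x, y, z, iso=0.0):
    """Implicit gyroid TPMS: points with value <= iso are taken as the solid
    phase here. Shifting iso changes the solid fraction of the unit cell."""
    return (np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)) - iso

# Sample one unit cell (period 2*pi) on a voxel grid; a marching-cubes step
# (e.g. skimage.measure.marching_cubes) could turn this into a surface mesh.
n = 64
t = np.linspace(0, 2 * np.pi, n)
X, Y, Z = np.meshgrid(t, t, t, indexing="ij")
solid = gyroid(X, Y, Z, iso=0.0) <= 0.0
print("solid volume fraction of unit cell:", solid.mean())
```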

  17. A Comparison of Apical Root Resorption in Incisors after Fixed Orthodontic Treatment with Standard Edgewise and Straight Wire (MBT) Method

    PubMed Central

    Zahed Zahedani, SM; Oshagh, M; Momeni Danaei, Sh; Roeinpeikar, SMM

    2013-01-01

    Statement of Problem: One of the major outcomes of orthodontic treatment is the apical root resorption of teeth moved during the treatment. Identifying the possible risk factors is necessary for every orthodontist. Purpose: The aim of this study was to compare the rate of apical root resorption after fixed orthodontic treatment with the standard edgewise and straight wire (MBT) methods, and also to evaluate other factors affecting the rate of root resorption in orthodontic treatments. Materials and Method: In this study, parallel periapical radiographs of 127 patients, imaging a total of 737 individual teeth, were collected. A total of 76 patients were treated by the standard edgewise method and 51 patients by the straight wire method. The periapical radiographs were scanned and the percentage of root resorption was calculated using Photoshop software. The data were analyzed by paired-samples t-test and the Generalized Linear Model in SPSS 15.0. Results: In patients treated with the straight wire method (MBT), mean root resorption was 18.26% compared to 14.82% in patients treated with the standard edgewise technique (p < .05). Male patients had a statistically significantly higher rate of root resorption (p < .05). Age at onset of treatment, duration of treatment, type of dental occlusion, premolar extractions and the use of intermaxillary elastics had no significant effect on root resorption in this study. Conclusion: The greater root resorption observed with the straight wire method and the lesser resorption with the standard edgewise technique can be attributed to more root movement in the pre-adjusted MBT technique due to the brackets employed in this method. PMID:24724131

  18. A Comparison of Apical Root Resorption in Incisors after Fixed Orthodontic Treatment with Standard Edgewise and Straight Wire (MBT) Method.

    PubMed

    Zahed Zahedani, Sm; Oshagh, M; Momeni Danaei, Sh; Roeinpeikar, Smm

    2013-09-01

    One of the major outcomes of orthodontic treatment is the apical root resorption of teeth moved during the treatment. Identifying the possible risk factors is necessary for every orthodontist. The aim of this study was to compare the rate of apical root resorption after fixed orthodontic treatment with the standard edgewise and straight wire (MBT) methods, and also to evaluate other factors affecting the rate of root resorption in orthodontic treatments. In this study, parallel periapical radiographs of 127 patients, imaging a total of 737 individual teeth, were collected. A total of 76 patients were treated by the standard edgewise method and 51 patients by the straight wire method. The periapical radiographs were scanned and the percentage of root resorption was calculated using Photoshop software. The data were analyzed by paired-samples t-test and the Generalized Linear Model in SPSS 15.0. In patients treated with the straight wire method (MBT), mean root resorption was 18.26% compared to 14.82% in patients treated with the standard edgewise technique (p < .05). Male patients had a statistically significantly higher rate of root resorption (p < .05). Age at onset of treatment, duration of treatment, type of dental occlusion, premolar extractions and the use of intermaxillary elastics had no significant effect on root resorption in this study. The greater root resorption observed with the straight wire method and the lesser resorption with the standard edgewise technique can be attributed to more root movement in the pre-adjusted MBT technique due to the brackets employed in this method.

  19. Search for the Standard Model Higgs boson decaying into $$ b\\overline{b} $$ produced in association with top quarks decaying hadronically in pp collisions at $$ \\sqrt{s}=8 $$ TeV with the ATLAS detector

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2016-05-01

    In this paper, a search for Higgs boson production in association with a pair of top quarks ($$ t\\overline{t} $$H) is performed, where the Higgs boson decays to $$ b\\overline{b} $$ , and both top quarks decay hadronically. The data used correspond to an integrated luminosity of 20.3 fb⁻¹ of pp collisions at √s = 8 TeV collected with the ATLAS detector at the Large Hadron Collider. The search selects events with at least six energetic jets and uses a boosted decision tree algorithm to discriminate between signal and Standard Model background. The dominant multijet background is estimated using a dedicated data-driven technique. For a Higgs boson mass of 125 GeV, an upper limit of 6.4 (5.4) times the Standard Model cross section is observed (expected) at 95% confidence level. The best-fit value for the signal strength is μ = 1.6 ± 2.6 times the Standard Model expectation for m_H = 125 GeV. Combining all $$ t\\overline{t}$$H searches carried out by ATLAS at √s = 8 and 7 TeV, an observed (expected) upper limit of 3.1 (1.4) times the Standard Model expectation is obtained at 95% confidence level, with a signal strength μ = 1.7 ± 0.8.

  20. Can contrast-enhanced ultrasonography improve Zone III REBOA placement for prehospital care?

    PubMed

    Chaudery, Muzzafer; Clark, James; Morrison, Jonathan J; Wilson, Mark H; Bew, Duncan; Darzi, Ara

    2016-01-01

    Torso hemorrhage is the primary cause of potentially preventable mortality in trauma. Resuscitative endovascular balloon occlusion of the aorta (REBOA) has been advocated as an adjunct to bridge patients to definitive hemorrhage control. The primary aim of this study was to assess whether contrast-enhanced ultrasonography can improve the accuracy of REBOA placement in the infrarenal aorta (Zone III). A fluoroscopy-free "enhanced" Zone III REBOA technique was developed using a porcine cadaver model. A "standard" over-the-wire Seldinger technique was used, which was enhanced with the addition of a microbubble contrast medium to inflate the balloon, observed with ultrasonography. Following this, attending- and resident-level physicians were randomized into two groups. They were taught either the enhanced with ultrasonography guidance (Group A) or the standard measuring length of catheter insertion (Group B) technique as part of a human cadaver trauma skills course. Outcomes assessed included time (seconds) from insertion to inflation, accuracy, and missed targets. All results were benchmarked against three endovascular experts. There were 20 participants who performed REBOA with Group A (51 [31]) being significantly faster than Group B (90 [63]) (p = 0.003) and more accurate (p = 0.023) with no missed targets. Group B had five missed targets, the most common error being inflation within Zone II. For Zone III REBOA, contrast-enhanced ultrasonography technique is faster and more accurate than the standard technique. This may have value in time-critical and austere environments. Clinical studies are now required to evaluate this approach further.

  1. On prognostic models, artificial intelligence and censored observations.

    PubMed

    Anand, S S; Hamilton, P W; Hughes, J G; Bell, D A

    2001-03-01

    The development of prognostic models for assisting medical practitioners with decision making is not a trivial task. Models need to possess a number of desirable characteristics and few, if any, current modelling approaches based on statistical or artificial intelligence techniques can produce models that display all these characteristics. The inability of modelling techniques to provide truly useful models has led to these models being of purely academic interest. This in turn has resulted in only a very small percentage of models that have been developed being deployed in practice. On the other hand, new modelling paradigms are being proposed continuously within the machine learning and statistical community, and claims, often based on inadequate evaluation, are being made about their superiority over traditional modelling methods. We believe that for new modelling approaches to deliver true net benefits over traditional techniques, an evaluation-centric approach to their development is essential. In this paper we present such an evaluation-centric approach to developing extensions to the basic k-nearest neighbour (k-NN) paradigm. We use standard statistical techniques to enhance the distance metric used and a framework based on evidence theory to obtain a prediction for the target example from the outcome of the retrieved exemplars. We refer to this new k-NN algorithm as Censored k-NN (Ck-NN). This reflects the enhancements made to k-NN that are aimed at providing a means for handling censored observations within k-NN.

  2. EXPERIMENTAL MODELLING OF AORTIC ANEURYSMS

    PubMed Central

    Doyle, Barry J; Corbett, Timothy J; Cloonan, Aidan J; O’Donnell, Michael R; Walsh, Michael T; Vorp, David A; McGloughlin, Timothy M

    2009-01-01

    A range of silicone rubbers were created based on existing commercially available materials. These silicones were designed to be visually different from one another and have distinct material properties, in particular, ultimate tensile strengths and tear strengths. In total, eleven silicone rubbers were manufactured, with the materials designed to have a range of increasing tensile strengths from approximately 2-4 MPa, and increasing tear strengths from approximately 0.45-0.7 N/mm. The variations in silicones were detected using a standard colour analysis technique. Calibration curves were then created relating colour intensity to individual material properties. All eleven materials were characterised and a first-order Ogden strain energy function applied. Material coefficients were determined and examined for effectiveness. Six idealised abdominal aortic aneurysm models were also created using the two base materials of the study, with a further model created using a new mixing technique to create a rubber model with randomly assigned material properties. These models were then examined using videoextensometry and compared to numerical results. Colour analysis revealed a statistically significant linear relationship (p<0.0009) with both tensile strength and tear strength, allowing material strength to be determined using a non-destructive experimental technique. The effectiveness of this technique was assessed by comparing predicted material properties to experimentally measured values, with good agreement in the results. Videoextensometry and numerical modelling revealed minor percentage differences, with all results achieving significance (p<0.0009). This study has successfully designed and developed a range of silicone rubbers that have unique colour intensities and material strengths. Strengths can be readily determined using a non-destructive analysis technique with proven effectiveness. These silicones may further aid towards an improved understanding of the biomechanical behaviour of aneurysms using experimental techniques. PMID:19595622

  3. The generation and use of numerical shape models for irregular Solar System objects

    NASA Technical Reports Server (NTRS)

    Simonelli, Damon P.; Thomas, Peter C.; Carcich, Brian T.; Veverka, Joseph

    1993-01-01

    We describe a procedure that allows the efficient generation of numerical shape models for irregular Solar System objects, where a numerical model is simply a table of evenly spaced body-centered latitudes and longitudes and their associated radii. This modeling technique uses a combination of data from limbs, terminators, and control points, and produces shape models that have some important advantages over analytical shape models. Accurate numerical shape models make it feasible to study irregular objects with a wide range of standard scientific analysis techniques. These applications include the determination of moments of inertia and surface gravity, the mapping of surface locations and structural orientations, photometric measurement and analysis, the reprojection and mosaicking of digital images, and the generation of albedo maps. The capabilities of our modeling procedure are illustrated through the development of an accurate numerical shape model for Phobos and the production of a global, high-resolution, high-pass-filtered digital image mosaic of this Martian moon. Other irregular objects that have been modeled, or are being modeled, include the asteroid Gaspra and the satellites Deimos, Amalthea, Epimetheus, Janus, Hyperion, and Proteus.

  4. Studies on Instabilities in Long-Baseline Two-Way Satellite Time and Frequency Transfer (TWSTFT) Including a Troposphere Delay Model

    DTIC Science & Technology

    2007-11-01

    D. Piester, A. Bauch, Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100... Abstract: Two-way satellite time and frequency transfer (TWSTFT) is one of the leading techniques for remote comparisons of atomic frequency standards ... nanosecond level. These achievements are due to the fact that many delay variations of the transmitted signals cancel out in TWSTFT because of the

  5. a Speculative Study on Negative-Dimensional Potential and Wave Problems by Implicit Calculus Modeling Approach

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Wang, Fajie

    Based on the implicit calculus equation modeling approach, this paper proposes a speculative concept of the potential and wave operators on negative dimensionality. Unlike standard partial differential equation (PDE) modeling, the implicit calculus modeling approach does not require the explicit expression of the governing PDE. Instead, the fundamental solution of the physical problem is used to implicitly define the differential operator and to implement the simulation in conjunction with the appropriate boundary conditions. In this study, we conjecture an extension of the fundamental solution of the standard Laplace and Helmholtz equations to negative dimensionality. Then, using the singular boundary method, a recent boundary discretization technique, we investigate potential and wave problems using the fundamental solution on negative dimensionality. Numerical experiments reveal that the physical behaviors on negative dimensionality may differ from those on positive dimensionality. This speculative study might open an unexplored territory of research.

  6. Applications of non-standard maximum likelihood techniques in energy and resource economics

    NASA Astrophysics Data System (ADS)

    Moeltner, Klaus

    Two important types of non-standard maximum likelihood techniques, Simulated Maximum Likelihood (SML) and Pseudo-Maximum Likelihood (PML), have only recently found consideration in the applied economic literature. The objective of this thesis is to demonstrate how these methods can be successfully employed in the analysis of energy and resource models. Chapter I focuses on SML. It constitutes the first application of this technique in the field of energy economics. The framework is as follows: Surveys on the cost of power outages to commercial and industrial customers usually capture multiple observations on the dependent variable for a given firm. The resulting pooled data set is censored and exhibits cross-sectional heterogeneity. We propose a model that addresses these issues by allowing regression coefficients to vary randomly across respondents and by using the Geweke-Hajivassiliou-Keane simulator and Halton sequences to estimate high-order cumulative distribution terms. This adjustment requires the use of SML in the estimation process. Our framework allows for a more comprehensive analysis of outage costs than existing models, which rely on the assumptions of parameter constancy and cross-sectional homogeneity. Our results strongly reject both of these restrictions. The central topic of the second Chapter is the use of PML, a robust estimation technique, in count data analysis of visitor demand for a system of recreation sites. PML has been popular with researchers in this context, since it guards against many types of mis-specification errors. We demonstrate, however, that estimation results will generally be biased even if derived through PML if the recreation model is based on aggregate, or zonal data. To countervail this problem, we propose a zonal model of recreation that captures some of the underlying heterogeneity of individual visitors by incorporating distributional information on per-capita income into the aggregate demand function. This adjustment eliminates the unrealistic constraint of constant income across zonal residents, and thus reduces the risk of aggregation bias in estimated macro-parameters. The corrected aggregate specification reinstates the applicability of PML. It also increases model efficiency, and allows for the generation of welfare estimates for population subgroups.

  7. A predictive modeling approach to increasing the economic effectiveness of disease management programs.

    PubMed

    Bayerstadler, Andreas; Benstetter, Franz; Heumann, Christian; Winter, Fabian

    2014-09-01

    Predictive Modeling (PM) techniques are gaining importance in the worldwide health insurance business. Modern PM methods are used for customer relationship management, risk evaluation or medical management. This article illustrates a PM approach that enables the economic potential of (cost-) effective disease management programs (DMPs) to be fully exploited by optimized candidate selection as an example of successful data-driven business management. The approach is based on a Generalized Linear Model (GLM) that is easy to apply for health insurance companies. By means of a small portfolio from an emerging country, we show that our GLM approach is stable compared to more sophisticated regression techniques in spite of the difficult data environment. Additionally, we demonstrate for this example of a setting that our model can compete with the expensive solutions offered by professional PM vendors and outperforms non-predictive standard approaches for DMP selection commonly used in the market.
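
    A hedged sketch of a GLM-based candidate ranking in the spirit described above, using statsmodels; the member attributes, coefficients and outcomes are entirely hypothetical and only illustrate the mechanics of fitting a logistic GLM and scoring candidates.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical portfolio: age, prior-year cost and a chronic-condition flag as
# predictors of whether a member benefited from the disease management program.
rng = np.random.default_rng(5)
n = 500
age = rng.uniform(30, 80, n)
prior_cost = rng.gamma(2.0, 1500.0, n)
chronic = rng.integers(0, 2, n)
lin = -6 + 0.05 * age + 0.0004 * prior_cost + 1.0 * chronic
benefited = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = sm.add_constant(np.column_stack([age, prior_cost, chronic]))
glm = sm.GLM(benefited, X, family=sm.families.Binomial()).fit()

# Rank members by predicted benefit and enrol the top of the list first.
scores = glm.predict(X)
top_candidates = np.argsort(scores)[::-1][:50]
print(glm.params.round(4), top_candidates[:5])
```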

  8. A process-based standard for the Solar Energetic Particle Event Environment

    NASA Astrophysics Data System (ADS)

    Gabriel, Stephen

    For 10 years or more, there has been a lack of consensus on what the ISO standard model for the Solar Energetic Particle Event (SEPE) environment should be. Despite many technical discussions between the world experts in this field, it has been impossible to agree on which of the several available models should be selected as the standard. Most of these discussions at the ISO WG4 meetings and conferences have centred on the differences in modelling approach between the MSU model and the several remaining models from elsewhere worldwide (mainly the USA and Europe). The topic is considered timely given the inclusion of a session on reference data sets at the Space Weather Workshop in Boulder in April 2014. The original idea of a 'process-based' standard was conceived by Dr Kent Tobiska as a way of getting round the problems associated with the presence of different models, which could not only have quite distinct modelling approaches but could also be based on different data sets. In essence, a process-based standard approach overcomes these issues by allowing there to be more than one model rather than necessarily a single standard model; however, any such model has to be completely transparent, in that the data set and the modelling techniques used have not only to be clearly and unambiguously defined but must also be subject to peer review. If the model meets all of these requirements then it should be acceptable as a standard model. So how does this process-based approach resolve the differences between the existing modelling approaches for the SEPE environment and remove the impasse? In a sense, it does not remove all of the differences but only some of them; most importantly, however, it will allow something which has so far been impossible without ambiguities and disagreement: a comparison of the results of the various models. To date, one of the problems (if not the major one) in comparing the results of the various SEPE statistical models has been caused by two things: 1) the data set and 2) the definition of an event. Because unravelling the dependencies of the outputs of different statistical models on these two parameters is extremely difficult if not impossible, comparison of the results from the different models is currently also extremely difficult and can lead to controversies, especially over which model is the correct one. Hence, when it comes to using these models for engineering purposes to calculate, for example, the radiation dose for a particular mission, the user, who is in all likelihood not an expert in this field, could be given two (or even more) very different environments and find it impossible to know how to select one (or even how to compare them). What is proposed, then, is a process-based standard which, in common with nearly all of the current models, is composed of three elements: a standard data set, a standard event definition and a resulting standard event list. The standard event list is the output of this standard and can then be used with any of the existing (or indeed future) models that are based on events. This standard event list is completely traceable and transparent and represents a reference event list for the whole community. When coupled with a statistical model, the compared results will depend only on the statistical model and not on the data set or event definition.

  9. Robust tuning of robot control systems

    NASA Technical Reports Server (NTRS)

    Minis, I.; Uebel, M.

    1992-01-01

    The computed torque control problem is examined for a robot arm with flexible, geared, joint drive systems which are typical in many industrial robots. The standard computed torque algorithm is not directly applicable to this class of manipulators because of the dynamics introduced by the joint drive system. The proposed approach to computed torque control combines a computed torque algorithm with a torque controller at each joint. Three such control schemes are proposed. The first scheme uses the joint torque control system currently implemented on the robot arm and a novel form of the computed torque algorithm. The other two use the standard computed torque algorithm and a novel torque control system based on model-following techniques. Standard tasks and performance indices are used to evaluate the performance of the controllers. Both numerical simulations and experiments are used in evaluation. The study shows that all three proposed systems lead to improved tracking performance over a conventional PD controller.
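
    For readers unfamiliar with the baseline, a single-joint sketch of the standard computed torque law (around which the proposed torque-controller schemes are built) is given below; the inertia, damping and gravity terms are toy constants, and the joint drive flexibility discussed in the abstract is not modeled.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, G, Kp=100.0, Kd=20.0):
    """Standard computed torque law for one joint:
    tau = M*(qdd_des + Kd*(qd_des - qd) + Kp*(q_des - q)) + C*qd + G."""
    e, ed = q_des - q, qd_des - qd
    return M * (qdd_des + Kd * ed + Kp * e) + C * qd + G

# Toy single-link simulation with constant (assumed) inertia, damping, gravity.
M, C, G = 1.2, 0.3, 4.0
q, qd, dt = 0.0, 0.0, 0.001
for step in range(2000):
    t = step * dt
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
    tau = computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, G)
    qdd = (tau - C * qd - G) / M          # plant matches the model exactly here
    qd += qdd * dt
    q += qd * dt
print("final tracking error:", abs(np.sin(2.0) - q))
```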

  10. 48 CFR 9904.401-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    48 CFR 9904.401-50, Federal Acquisition Regulations System, Cost Accounting Standards Board, Office of Federal Procurement Policy, Office of Management and Budget; Procurement Practices and Cost Accounting Standards; Cost Accounting Standards 9904.401-50, Techniques for application. (a) The standard...

  11. Quality improvement prototype: Johnson Space Center, National Aeronautics and Space Administration

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Johnson Space Center was recognized by the Office of Management and Budget as a model for its high standards of quality. Included are an executive summary of the center's activities, an organizational overview, techniques for improving quality, the status of the quality effort and a listing of key personnel.

  12. Combining a Standard Fischer Esterification Experiment with Stereochemical and Molecular-Modeling Concepts

    ERIC Educational Resources Information Center

    Clausen, Thomas P.

    2011-01-01

    The Fischer esterification reaction is ideally suited for the undergraduate organic laboratory because it is easy to carry out and often involves a suitable introduction to basic laboratory techniques including extraction, distillation, and simple spectroscopic (IR and NMR) analyses. Here, a Fischer esterification reaction is described in which the…

  13. Crystalline cellulose elastic modulus predicted by atomistic models of uniform deformation and nanoscale indentation

    Treesearch

    Xiawa Wu; Robert J. Moon; Ashlie Martini

    2013-01-01

    The elastic modulus of cellulose Iβ in the axial and transverse directions was obtained from atomistic simulations using both the standard uniform deformation approach and a complementary approach based on nanoscale indentation. This allowed comparisons between the methods and closer connectivity to experimental measurement techniques. A reactive...

  14. Does the Use of Connective Words in Written Assessments Predict High School Students' Reading and Writing Achievement?

    ERIC Educational Resources Information Center

    Duggleby, Sandra J.; Tang, Wei; Kuo-Newhouse, Amy

    2016-01-01

    This study examined the relationship between ninth-grade students' use of connectives (temporal, causal, adversative, and additive) in functional writing and performance on standards-based/criterion-referenced measures of reading and writing. Specifically, structural equation modeling (SEM) techniques were used to examine the relationship between…

  15. Control Systems Lab Using a LEGO Mindstorms NXT Motor System

    ERIC Educational Resources Information Center

    Kim, Y.

    2011-01-01

    This paper introduces a low-cost LEGO Mindstorms NXT motor system for teaching classical and modern control theories in standard third-year undergraduate courses. The LEGO motor system can be used in conjunction with MATLAB, Simulink, and several necessary toolboxes to demonstrate: 1) a modeling technique; 2) proportional-integral-differential…

  16. Proceedings of the Sixteenth Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The effects of ionospheric and tropospheric propagation on time and frequency transfer, advances in the generation of precise time and frequency, time transfer techniques and filtering and modeling were among the topics emphasized. Rubidium and cesium frequency standards, crystal oscillators, masers, Kalman filters, and atomic clocks were discussed.

  17. 78 FR 9648 - Approval and Promulgation of Air Quality Implementation Plans; District of Columbia; Volatile...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-11

    ... Regulations (DCMR) for the Control of Volatile Organic Compounds (VOC) to meet the requirement to adopt reasonably available control technology (RACT) for sources as recommended by the Ozone Transport Commission (OTC) model rules and EPA's Control Techniques Guidelines (CTG) standards. On January 26, 2010 and...

  18. Teaching Anatomy of the Sheep Brain: A Laboratory Exercise with PlayDoh.

    ERIC Educational Resources Information Center

    Wilson, Christopher; Marcus, David K.

    1992-01-01

    Reports on the use of PlayDoh clay in a college neuroanatomy class. Describes how students constructed a PlayDoh model of a sheep's brain subsequent to performing a standard dissection procedure. Maintains that students learned from the procedure and recommended the use of the technique in future classes. (CFR)

  19. A Tale of Two Masses

    ERIC Educational Resources Information Center

    Bryan, Kurt

    2011-01-01

    This article presents an application of standard undergraduate ODE techniques to a modern engineering problem, that of using a tuned mass damper to control the vibration of a skyscraper. This material can be used in any ODE course in which the students have been familiarized with basic spring-mass models, resonance, and linear systems of ODEs.…

  20. MODEST: A Tool for Geodesy and Astronomy

    NASA Technical Reports Server (NTRS)

    Sovers, Ojars J.; Jacobs, Christopher S.; Lanyi, Gabor E.

    2004-01-01

    Features of the JPL VLBI modeling and estimation software "MODEST" are reviewed. Its main advantages include thoroughly documented model physics, portability, and detailed error modeling. Two unique models are included: modeling of source structure and modeling of both spatial and temporal correlations in tropospheric delay noise. History of the code parallels the development of the astrometric and geodetic VLBI technique and the software retains many of the models implemented during its advancement. The code has been traceably maintained since the early 1980s, and will continue to be updated with recent IERS standards. Scripts are being developed to facilitate user-friendly data processing in the era of e-VLBI.

  1. Forensic analysis of explosives using isotope ratio mass spectrometry (IRMS)--part 1: instrument validation of the DELTAplusXP IRMS for bulk nitrogen isotope ratio measurements.

    PubMed

    Benson, Sarah J; Lennard, Christopher J; Hill, David M; Maynard, Philip; Roux, Claude

    2010-01-01

    A significant amount of research has been conducted into the use of stable isotopes to assist in determining the origin of various materials. The research conducted in the forensic field shows the potential of isotope ratio mass spectrometry (IRMS) to provide a level of discrimination not achievable utilizing traditional forensic techniques. Despite the research there have been few, if any, publications addressing the validation and measurement uncertainty of the technique for forensic applications. This study, the first in a planned series, presents validation data for the measurement of bulk nitrogen isotope ratios in ammonium nitrate (AN) using the DELTA(plus)XP (Thermo Finnigan) IRMS instrument equipped with a ConFlo III interface and FlashEA 1112 elemental analyzer (EA). Appropriate laboratory standards, analytical methods and correction calculations were developed and evaluated. A validation protocol was developed in line with the guidelines provided by the National Association of Testing Authorities, Australia (NATA). Performance characteristics including: accuracy, precision/repeatability, reproducibility/ruggedness, robustness, linear range, and measurement uncertainty were evaluated for the measurement of nitrogen isotope ratios in AN. AN (99.5%) and ammonium thiocyanate (99.99+%) were determined to be the most suitable laboratory standards and were calibrated against international standards (certified reference materials). All performance characteristics were within an acceptable range when potential uncertainties, including the manufacturer's uncertainty of the technique and standards, were taken into account. The experiments described in this article could be used as a model for validation of other instruments for similar purposes. Later studies in this series will address the more general issue of demonstrating that the IRMS technique is scientifically sound and fit-for-purpose in the forensic explosives analysis field.

  2. The standard deviation of extracellular water/intracellular water is associated with all-cause mortality and technique failure in peritoneal dialysis patients.

    PubMed

    Tian, Jun-Ping; Wang, Hong; Du, Feng-He; Wang, Tao

    2016-09-01

    The mortality rate of peritoneal dialysis (PD) patients is still high, and the predicting factors for PD patient mortality remain to be determined. This study aimed to explore the relationship between the standard deviation (SD) of extracellular water/intracellular water (E/I) and all-cause mortality and technique failure in continuous ambulatory PD (CAPD) patients. All 152 patients came from the PD Center between January 1st 2006 and December 31st 2007. Clinical data and at least five-visit E/I ratio defined by bioelectrical impedance analysis were collected. The patients were followed up till December 31st 2010. The primary outcomes were death from any cause and technique failure. Kaplan-Meier analysis and Cox proportional hazards models were used to identify risk factors for mortality and technique failure in CAPD patients. All patients were followed up for 59.6 ± 23.0 months. The patients were divided into two groups according to their SD of E/I values: lower SD of E/I group (≤0.126) and higher SD of E/I group (>0.126). The patients with higher SD of E/I showed a higher all-cause mortality (log-rank χ² = 10.719, P = 0.001) and technique failure (log-rank χ² = 9.724, P = 0.002) than those with lower SD of E/I. Cox regression analysis found that SD of E/I independently predicted all-cause mortality (HR 3.551, 95% CI 1.442-8.746, P = 0.006) and technique failure (HR 2.487, 95% CI 1.093-5.659, P = 0.030) in CAPD patients after adjustment for confounders except when sensitive C-reactive protein was added into the model. The SD of E/I was a strong independent predictor of all-cause mortality and technique failure in CAPD patients.
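
    A small pandas sketch of the exposure definition: compute each patient's SD of E/I over repeated visits and apply the study's 0.126 cutoff. The visit data are invented, and the subsequent Cox model (e.g. with the lifelines package) is only indicated in a comment.

```python
import pandas as pd

# Hypothetical long-format data: one row per bioimpedance visit per patient.
visits = pd.DataFrame({
    "patient_id": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
    "e_over_i":   [0.80, 0.95, 0.78, 1.10, 0.85, 0.90, 0.92, 0.91, 0.89, 0.93],
})

# Per-patient SD of E/I over the (at least five) visits, then the dichotomy
# used in the abstract: SD(E/I) > 0.126 defines the higher-variability group.
sd_ei = visits.groupby("patient_id")["e_over_i"].std().rename("sd_ei")
groups = (sd_ei > 0.126).map({True: "higher SD of E/I", False: "lower SD of E/I"})
print(pd.concat([sd_ei, groups.rename("group")], axis=1))

# A Cox proportional hazards model (e.g. lifelines.CoxPHFitter) would then be
# fit with sd_ei (or the group indicator) plus the clinical confounders.
```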

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.; Abbott, B.; Abdallah, J.

    In this paper, a search for Higgs boson production in association with a pair of top quarks ($$ t\\overline{t} $$H) is performed, where the Higgs boson decays to $$ b\\overline{b} $$ , and both top quarks decay hadronically. The data used correspond to an integrated luminosity of 20.3 fb⁻¹ of pp collisions at √s = 8 TeV collected with the ATLAS detector at the Large Hadron Collider. The search selects events with at least six energetic jets and uses a boosted decision tree algorithm to discriminate between signal and Standard Model background. The dominant multijet background is estimated using a dedicated data-driven technique. For a Higgs boson mass of 125 GeV, an upper limit of 6.4 (5.4) times the Standard Model cross section is observed (expected) at 95% confidence level. The best-fit value for the signal strength is μ = 1.6 ± 2.6 times the Standard Model expectation for m_H = 125 GeV. Combining all $$ t\\overline{t}$$H searches carried out by ATLAS at √s = 8 and 7 TeV, an observed (expected) upper limit of 3.1 (1.4) times the Standard Model expectation is obtained at 95% confidence level, with a signal strength μ = 1.7 ± 0.8.

  4. The effects of different representations on static structure analysis of computer malware signatures.

    PubMed

    Narayanan, Ajit; Chen, Yi; Pang, Shaoning; Tao, Ban

    2013-01-01

    The continuous growth of malware presents a problem for internet computing due to increasingly sophisticated techniques for disguising malicious code through mutation and the time required to identify signatures for use by antiviral software systems (AVS). Malware modelling has focused primarily on semantics due to the intended actions and behaviours of viral and worm code. The aim of this paper is to evaluate a static structure approach to malware modelling using the growing malware signature databases now available. We show that, if malware signatures are represented as artificial protein sequences, it is possible to apply standard sequence alignment techniques in bioinformatics to improve accuracy of distinguishing between worm and virus signatures. Moreover, aligned signature sequences can be mined through traditional data mining techniques to extract metasignatures that help to distinguish between viral and worm signatures. All bioinformatics and data mining analysis were performed on publicly available tools and Weka.

  5. The Effects of Different Representations on Static Structure Analysis of Computer Malware Signatures

    PubMed Central

    Narayanan, Ajit; Chen, Yi; Pang, Shaoning; Tao, Ban

    2013-01-01

    The continuous growth of malware presents a problem for internet computing due to increasingly sophisticated techniques for disguising malicious code through mutation and the time required to identify signatures for use by antiviral software systems (AVS). Malware modelling has focused primarily on semantics due to the intended actions and behaviours of viral and worm code. The aim of this paper is to evaluate a static structure approach to malware modelling using the growing malware signature databases now available. We show that, if malware signatures are represented as artificial protein sequences, it is possible to apply standard sequence alignment techniques in bioinformatics to improve accuracy of distinguishing between worm and virus signatures. Moreover, aligned signature sequences can be mined through traditional data mining techniques to extract metasignatures that help to distinguish between viral and worm signatures. All bioinformatics and data mining analysis were performed on publicly available tools and Weka. PMID:23983644
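
    To make the representation concrete, the sketch below scores a global (Needleman-Wunsch) alignment between two toy "protein-encoded" signature strings; the scoring scheme and sequences are illustrative, not the standard bioinformatics tools or the signature databases used in the papers above.

```python
import numpy as np

def global_alignment_score(a: str, b: str, match=1, mismatch=-1, gap=-2) -> int:
    """Needleman-Wunsch global alignment score between two sequences.
    Signatures are assumed to be pre-encoded as amino-acid-like strings."""
    n, m = len(a), len(b)
    F = np.zeros((n + 1, m + 1), dtype=int)
    F[:, 0] = gap * np.arange(n + 1)
    F[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i, j] = max(F[i - 1, j - 1] + s, F[i - 1, j] + gap, F[i, j - 1] + gap)
    return int(F[n, m])

# Two toy "protein-encoded" signature fragments: a higher score suggests the
# same family (e.g. worm vs. virus) under this representation.
print(global_alignment_score("MKVLAAGH", "MKVLGAGH"))   # similar -> high score
print(global_alignment_score("MKVLAAGH", "QQWERTY"))    # dissimilar -> low score
```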

  6. Rewriting Modulo SMT and Open System Analysis

    NASA Technical Reports Server (NTRS)

    Rocha, Camilo; Meseguer, Jose; Munoz, Cesar

    2014-01-01

    This paper proposes rewriting modulo SMT, a new technique that combines the power of SMT solving, rewriting modulo theories, and model checking. Rewriting modulo SMT is ideally suited to model and analyze infinite-state open systems, i.e., systems that interact with a non-deterministic environment. Such systems exhibit both internal non-determinism, which is proper to the system, and external non-determinism, which is due to the environment. In a reflective formalism, such as rewriting logic, rewriting modulo SMT can be reduced to standard rewriting. Hence, rewriting modulo SMT naturally extends rewriting-based reachability analysis techniques, which are available for closed systems, to open systems. The proposed technique is illustrated with the formal analysis of: (i) a real-time system that is beyond the scope of timed-automata methods and (ii) automatic detection of reachability violations in a synchronous language developed to support autonomous spacecraft operations.

  7. IEEE 1988 International Symposium on Electromagnetic Compatibility, Seattle, WA, Aug. 2-4, 1988, Record

    NASA Astrophysics Data System (ADS)

    Various papers on electromagnetic compatibility are presented. Some of the topics considered include: field-to-wire coupling 1 to 18 GHz, SHF/EHF field-to-wire coupling model, numerical method for the analysis of coupling to thin wire structures, spread-spectrum system with an adaptive array for combating interference, technique to select the optimum modulation indices for suppression of undesired signals for simultaneous range and data operations, development of a MHz RF leak detector technique for aircraft harness surveillance, and performance of standard aperture shielding techniques at microwave frequencies. Also discussed are: spectrum efficiency of spread-spectrum systems, control of power supply ripple produced sidebands in microwave transistor amplifiers, an intership SATCOM versus radar electromagnetic interference prediction model, considerations in the design of a broadband E-field sensing system, unique bonding methods for spacecraft, and review of EMC practice for launch vehicle systems.

  8. Bringing Standardized Processes in Atom-Probe Tomography: I Establishing Standardized Terminology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Ian M; Danoix, F; Forbes, Richard

    2011-01-01

    Defining standardized methods requires careful consideration of the entire field and its applications. The International Field Emission Society (IFES) has elected a Standards Committee, whose task is to determine the needed steps to establish atom-probe tomography as an accepted metrology technique. Specific tasks include developing protocols or standards for: terminology and nomenclature; metrology and instrumentation, including specifications for reference materials; test methodologies; modeling and simulations; and science-based health, safety, and environmental practices. The Committee is currently working on defining terminology related to atom-probe tomography, with the goal of including terms in a document published by the International Organization for Standardization (ISO). Many terms also used in other disciplines have already been defined and will be discussed for adoption in the context of atom-probe tomography.

  9. Twist Model Development and Results from the Active Aeroelastic Wing F/A-18 Aircraft

    NASA Technical Reports Server (NTRS)

    Lizotte, Andrew M.; Allen, Michael J.

    2007-01-01

    Understanding the wing twist of the active aeroelastic wing (AAW) F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. Error in the original model was reduced in some cases by using a dynamic pressure-based assumption. This technique produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.
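
    For illustration only, a regression of this kind can be sketched in a few lines; the variable names, units, and coefficients below are synthetic assumptions, not the AAW flight data or the published model:

      # Minimal sketch: least-squares twist model from synthetic "flight" parameters,
      # including a dynamic-pressure-scaled surface-position term.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 500
      aileron = rng.uniform(-10, 10, n)          # surface position, deg (synthetic)
      qbar = rng.uniform(100, 800, n)            # dynamic pressure, psf (synthetic)
      alpha = rng.uniform(-2, 12, n)             # angle of attack, deg (synthetic)
      twist = 0.05 * aileron * qbar / 500 + 0.02 * alpha + rng.normal(0, 0.05, n)

      # Design matrix: intercept, raw parameters, and a qbar-scaled interaction term
      X = np.column_stack([np.ones(n), aileron, qbar, alpha, aileron * qbar])
      coef, *_ = np.linalg.lstsq(X, twist, rcond=None)
      pred = X @ coef
      print("RMS twist error (deg):", np.sqrt(np.mean((twist - pred) ** 2)))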

  10. A new polyvinyl alcohol hydrogel vascular model (KEZLEX) for microvascular anastomosis training

    PubMed Central

    Mutoh, Tatsushi; Ishikawa, Tatsuya; Ono, Hidenori; Yasui, Nobuyuki

    2010-01-01

    Background: Microvascular anastomosis is a challenging neurosurgical technique that requires extensive training to master. We developed a new vascular model (KEZLEX, Ono and Co., Ltd., Tokyo, Japan) as a non-animal tool for practicing microvascular anastomosis under realistic circumstances. Methods: The model was manufactured from polyvinyl alcohol hydrogel to provide tubes 1.0–3.0 mm in diameter (available in 0.5-mm increments) and 6–8 cm long that have surface characteristics, visibility, and stiffness qualitatively similar to those of human donor and recipient arteries for various bypass surgeries, based on reconstruction of three-dimensional computed tomography/magnetic resonance imaging data using the Visible Human data set and vessel casts. Results: Trainees can acquire basic microsuturing techniques for end-to-end, end-to-side, and side-to-side anastomoses with handling similar to that for real arteries. To practice standard deep bypass techniques under realistic circumstances, the substitute vessel can be fixed to specific locations of a commercially available brain model with pins. Conclusion: Our vascular prosthesis model is simple and easy to set up for repeated practice, and will help facilitate "off-the-job" training for trainees. PMID:21170365

  11. Isolation of Circulating Tumor Cells in an Orthotopic Mouse Model of Colorectal Cancer.

    PubMed

    Kochall, Susan; Thepkaysone, May-Linn; García, Sebastián A; Betzler, Alexander M; Weitz, Jürgen; Reissfelder, Christoph; Schölch, Sebastian

    2017-07-18

    Despite the advantages of easy applicability and cost-effectiveness, subcutaneous mouse models have severe limitations and do not accurately simulate tumor biology and tumor cell dissemination. Orthotopic mouse models have been introduced to overcome these limitations; however, such models are technically demanding, especially in hollow organs such as the large bowel. In order to produce uniform tumors which reliably grow and metastasize, standardized techniques of tumor cell preparation and injection are critical. We have developed an orthotopic mouse model of colorectal cancer (CRC) which develops highly uniform tumors and can be used for tumor biology studies as well as therapeutic trials. Tumor cells from either primary tumors, 2-dimensional (2D) cell lines, or 3-dimensional (3D) organoids are injected into the cecum and, depending on the metastatic potential of the injected tumor cells, form highly metastatic tumors. In addition, circulating tumor cells (CTCs) can be found regularly. Here we describe the techniques of tumor cell preparation from 2D cell lines, 3D organoids, and primary tumor tissue, the surgical and injection techniques, and the isolation of CTCs from tumor-bearing mice, and we present tips for troubleshooting.

  12. Twist Model Development and Results From the Active Aeroelastic Wing F/A-18 Aircraft

    NASA Technical Reports Server (NTRS)

    Lizotte, Andrew; Allen, Michael J.

    2005-01-01

    Understanding the wing twist of the active aeroelastic wing F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. Error in the original model was reduced in some cases by using a dynamic pressure-based assumption and by using neural networks. These techniques produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.

  13. Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates

    NASA Astrophysics Data System (ADS)

    Moore, Christopher J.; Gair, Jonathan R.

    2014-12-01

    Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications, these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this Letter, a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalized over using a prior distribution constructed by Gaussian process regression, which interpolates the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform extremely well when applied to a toy problem. While we use the application to gravitational wave data analysis to motivate and illustrate the technique, it can be applied in any context where model uncertainties exist.
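
    A toy sketch of the interpolation step (one-dimensional parameter, synthetic "waveform differences", scikit-learn in place of whatever implementation the authors used) might look like this; the resulting mean and standard deviation define the Gaussian prior that would then be marginalized over in the likelihood:

      # Minimal sketch: GP regression of the accurate-minus-approximate waveform
      # difference over a small training set of parameter values.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      theta_train = np.linspace(0.0, 1.0, 8)[:, None]        # training parameter values
      delta_train = 0.1 * np.sin(3 * theta_train).ravel()    # synthetic waveform differences

      gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=0.2) + WhiteKernel(1e-6),
                                    normalize_y=True)
      gp.fit(theta_train, delta_train)

      theta_query = np.linspace(0.0, 1.0, 50)[:, None]
      mean, std = gp.predict(theta_query, return_std=True)   # Gaussian prior on the difference
      print(mean[:3], std[:3])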

  14. Galerkin v. discrete-optimal projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir

    Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
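
    The distinction can be made concrete on a linear toy problem (a schematic numpy sketch under simplifying assumptions, not the GNAT solver or the paper's test cases): Galerkin projects the time-continuous reduced operator and then discretizes, while the discrete-optimal (least-squares) ROM minimizes the fully discrete backward-Euler residual at each step:

      # Minimal sketch: Galerkin vs. discrete-optimal (least-squares) projection
      # for backward Euler applied to a linear system du/dt = A u.
      import numpy as np

      rng = np.random.default_rng(1)
      n, r = 200, 10
      # Stable non-symmetric test operator (diagonal decay plus skew coupling)
      A = -np.diag(np.linspace(0.1, 5.0, n)) + 0.05 * (np.diag(np.ones(n - 1), 1)
                                                       - np.diag(np.ones(n - 1), -1))
      u0 = rng.standard_normal(n)

      # Reduced basis from snapshots of the full backward-Euler trajectory
      dt, steps = 0.05, 100
      U = [u0]
      for _ in range(steps):
          U.append(np.linalg.solve(np.eye(n) - dt * A, U[-1]))
      V = np.linalg.svd(np.array(U).T, full_matrices=False)[0][:, :r]

      def rom(mode):
          a = V.T @ u0
          Ar = V.T @ A @ V                      # Galerkin reduced operator
          W = (np.eye(n) - dt * A) @ V          # discrete residual Jacobian times basis
          for _ in range(steps):
              if mode == "galerkin":
                  a = np.linalg.solve(np.eye(r) - dt * Ar, a)
              else:                             # discrete-optimal: min ||(I - dt A)Va - Va_n||
                  a = np.linalg.lstsq(W, V @ a, rcond=None)[0]
          return V @ a

      u_full = U[-1]
      for mode in ("galerkin", "discrete-optimal"):
          err = np.linalg.norm(rom(mode) - u_full) / np.linalg.norm(u_full)
          print(mode, "relative error:", err)

    For this linear, fixed-time-step case the two ROMs behave similarly; the paper's findings concern where and why they diverge for nonlinear problems and varying time steps.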

  15. Non-linear homogenized and heterogeneous FE models for FRCM reinforced masonry walls in diagonal compression

    NASA Astrophysics Data System (ADS)

    Bertolesi, Elisa; Milani, Gabriele; Poggi, Carlo

    2016-12-01

    Two FE modeling techniques are presented and critically discussed for the non-linear analysis of tuff masonry panels reinforced with FRCM and subjected to standard diagonal compression tests. The specimens, tested at the University of Naples (Italy), are unreinforced and FRCM retrofitted walls. The extensive characterization of the constituent materials allowed very sophisticated numerical modeling techniques to be adopted here. In particular, the results obtained by means of a micro-modeling strategy and a homogenization approach are compared. The first modeling technique is a three-dimensional heterogeneous micro-model in which the constituent materials (bricks, joints, reinforcing mortar and reinforcing grid) are modeled separately. The second approach is based on a two-step homogenization procedure, previously developed by the authors, where the elementary cell is discretized by means of three-noded plane stress elements and non-linear interfaces. The non-linear structural analyses are performed replacing the homogenized orthotropic continuum with a rigid element and non-linear spring assemblage (RBSM). All the simulations presented here are performed using the commercial software Abaqus. Pros and cons of the two approaches are discussed with reference to their reliability in reproducing global force-displacement curves and crack patterns, as well as to the rather different computational effort required by the two strategies.

  16. Towards generating ECSS-compliant fault tree analysis results via ConcertoFLA

    NASA Astrophysics Data System (ADS)

    Gallina, B.; Haider, Z.; Carlsson, A.

    2018-05-01

    Attitude Control Systems (ACSs) maintain the orientation of the satellite in three-dimensional space. ACSs need to be engineered in compliance with ECSS standards and need to ensure a certain degree of dependability. Thus, dependability analysis is conducted at various levels and by using ECSS-compliant techniques. Fault Tree Analysis (FTA) is one of these techniques. FTA is being automated within various Model Driven Engineering (MDE)-based methodologies. The tool-supported CHESS-methodology is one of them. This methodology incorporates ConcertoFLA, a dependability analysis technique enabling failure behavior analysis and thus FTA-results generation. ConcertoFLA, however, like other such techniques, still belongs to the academic research niche. To promote this technique within the space industry, we apply it to an ACS and discuss its multifaceted potential in the context of ECSS-compliant engineering.

  17. Effect of brewing technique and particle size of the ground coffee on sensory profiling of brewed Dampit robusta coffee

    NASA Astrophysics Data System (ADS)

    Fibrianto, K.; Febryana, Y. R.; Wulandari, E. S.

    2018-03-01

    This study aimed to assess the effect of different brewing techniques used with the appropriate particle size standard of the Apresiocoffee cafe (Category 1), compared with the same set of brewing techniques used with a single coarse particle size (Category 2), on the sensory attributes of Dampit robusta coffee. The Rate-All-That-Apply (RATA) method was applied in this study, and the data were analysed by ANOVA with a General Linear Model (GLM) in Minitab-16. The influence of brewing technique (tubruk, French press, drip, syphon) and ground-coffee particle size (fine, medium, coarse) was evaluated sensorially. The results showed that only two attributes, bitter taste and astringent/rough mouthfeel, were affected by brewing technique (p-value < 0.05), as observed for the brewed coarse coffee powder.

  18. 48 CFR 9904.413-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... 9904.413-50 Section 9904.413-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.413-50 Techniques for application. (a) Assignment of actuarial gains and losses. (1) In accordance with the provisions of Cost Accounting Standard 9904.412...

  19. Formal Verification for a Next-Generation Space Shuttle

    NASA Technical Reports Server (NTRS)

    Nelson, Stacy D.; Pecheur, Charles; Koga, Dennis (Technical Monitor)

    2002-01-01

    This paper discusses the verification and validation (V&V) of advanced software used for integrated vehicle health monitoring (IVHM), in the context of NASA's next-generation space shuttle. We survey the current V&V practice and standards used in selected NASA projects, review applicable formal verification techniques, and discuss their integration into existing development practice and standards. We also describe two verification tools, JMPL2SMV and Livingstone PathFinder, that can be used to thoroughly verify diagnosis applications that use model-based reasoning, such as the Livingstone system.

  20. Cache-based error recovery for shared memory multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.

    1989-01-01

    A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.

  1. NO TIME FOR DEAD TIME: TIMING ANALYSIS OF BRIGHT BLACK HOLE BINARIES WITH NuSTAR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachetti, Matteo; Barret, Didier; Harrison, Fiona A.

    Timing of high-count-rate sources with the NuSTAR Small Explorer Mission requires specialized analysis techniques. NuSTAR was primarily designed for spectroscopic observations of sources with relatively low count rates rather than for timing analysis of bright objects. The instrumental dead time per event is relatively long (∼2.5 msec) and varies event-to-event by a few percent. The most obvious effect is a distortion of the white noise level in the power density spectrum (PDS) that cannot be easily modeled with standard techniques due to the variable nature of the dead time. In this paper, we show that it is possible to exploit the presence of two completely independent focal planes and use the cospectrum, the real part of the cross PDS, to obtain a good proxy of the white-noise-subtracted PDS. Thereafter, one can use a Monte Carlo approach to estimate the remaining effects of dead time, namely, a frequency-dependent modulation of the variance and a frequency-independent drop of the sensitivity to variability. In this way, most of the standard timing analysis can be performed, albeit with a sacrifice in signal-to-noise ratio relative to what would be achieved using more standard techniques. We apply this technique to NuSTAR observations of the black hole binaries GX 339–4, Cyg X-1, and GRS 1915+105.
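
    The cospectrum trick is easy to demonstrate on synthetic data (toy light curves and a made-up 2 Hz modulation; not NuSTAR data, and without the Monte Carlo dead-time correction): Poisson noise that is independent between the two modules averages out of the real part of the cross spectrum, while the common signal survives:

      # Minimal sketch: cospectrum (real part of the cross PDS) from two
      # independently noisy light curves of the same variable source.
      import numpy as np

      rng = np.random.default_rng(2)
      dt, nbins = 0.01, 4096
      t = np.arange(nbins) * dt
      signal = 50 * (1 + 0.3 * np.sin(2 * np.pi * 2.0 * t))      # common variable source
      lc_a = rng.poisson(signal * dt) / dt                        # module A counts/s
      lc_b = rng.poisson(signal * dt) / dt                        # module B counts/s

      fa = np.fft.rfft(lc_a - lc_a.mean())
      fb = np.fft.rfft(lc_b - lc_b.mean())
      freq = np.fft.rfftfreq(nbins, dt)
      cospectrum = (fa * np.conj(fb)).real                        # uncorrelated noise averages to zero
      pds_a = np.abs(fa) ** 2                                     # retains the white-noise level

      print("cospectrum near 2 Hz:", cospectrum[np.argmin(np.abs(freq - 2.0))])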

  2. Pullout strength of standard vs. cement-augmented rotator cuff repair anchors in cadaveric bone.

    PubMed

    Aziz, Keith T; Shi, Brendan Y; Okafor, Louis C; Smalley, Jeremy; Belkoff, Stephen M; Srikumaran, Uma

    2018-05-01

    We evaluate a novel method of rotator cuff repair that uses arthroscopic equipment to inject bone cement into placed suture anchors. A cadaver model was used to assess the pullout strength of this technique versus anchors without augmentation. Six fresh-frozen matched pairs of upper extremities were screened to exclude those with prior operative procedures, fractures, or neoplasms. One side from each pair was randomized to undergo standard anchor fixation with the contralateral side to undergo anchor fixation augmented with bone cement. After anchor fixation, specimens were mounted on a servohydraulic testing system and suture anchors were pulled at 90° to the insertion to simulate the anatomic pull of the rotator cuff. Sutures were pulled at 1 mm/s until failure. The mean pullout strength was 540 N (95% confidence interval, 389 to 690 N) for augmented anchors and 202 N (95% confidence interval, 100 to 305 N) for standard anchors. The difference in pullout strength was statistically significant (P < 0.05). This study shows superior pullout strength of a novel augmented rotator cuff anchor technique. The described technique, which is achieved by extruding polymethylmethacrylate cement through a cannulated in situ suture anchor with fenestrations, significantly increased the ultimate failure load in cadaveric human humeri. This novel augmented fixation technique was simple and can be implemented with existing instrumentation. In osteoporotic bone, it may substantially reduce the rate of anchor failure. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Application of the 226Ra–230Th–234U and 227Ac–231Pa–235U radiochronometers to uranium certified reference materials

    DOE PAGES

    Rolison, John M.; Treinen, Kerri C.; McHugh, Kelly C.; ...

    2017-11-06

    Uranium certified reference materials (CRM) issued by New Brunswick Laboratory were subjected to dating using four independent uranium-series radiochronometers. In all cases, there was acceptable agreement between the model ages calculated using the 231Pa–235U, 230Th–234U, 227Ac–235U or 226Ra–234U radiochronometers and either the certified 230Th–234U model date (CRM 125-A and CRM U630), or the known purification date (CRM U050 and CRM U100). Finally, the agreement between the four independent radiochronometers establishes these uranium certified reference materials as ideal informal standards for validating dating techniques utilized in nuclear forensic investigations in the absence of standards with certified model ages for multiple radiochronometers.

  4. Application of the 226Ra–230Th–234U and 227Ac–231Pa–235U radiochronometers to uranium certified reference materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rolison, John M.; Treinen, Kerri C.; McHugh, Kelly C.

    Uranium certified reference materials (CRM) issued by New Brunswick Laboratory were subjected to dating using four independent uranium-series radiochronometers. In all cases, there was acceptable agreement between the model ages calculated using the 231Pa–235U, 230Th–234U, 227Ac–235U or 226Ra–234U radiochronometers and either the certified 230Th–234U model date (CRM 125-A and CRM U630), or the known purification date (CRM U050 and CRM U100). Finally, the agreement between the four independent radiochronometers establishes these uranium certified reference materials as ideal informal standards for validating dating techniques utilized in nuclear forensic investigations in the absence of standards with certified model ages for multiple radiochronometers.

  5. Novel dark matter phenomenology at colliders

    NASA Astrophysics Data System (ADS)

    Wardlow, Kyle Patrick

    While a suitable candidate particle for dark matter (DM) has yet to be discovered, it is possible one will be found by experiments currently investigating physics on the weak scale. If discovered on that energy scale, the dark matter will likely be producible in significant quantities at colliders like the LHC, allowing the properties of, and the underlying physical model characterizing, the dark matter to be precisely determined. I assume that the dark matter will be produced as one of the decay products of a new massive resonance related to physics beyond the Standard Model, and, using the energy distributions of the associated visible decay products, develop techniques for determining the symmetry protecting these potential dark matter candidates from decaying into lighter Standard Model (SM) particles and for simultaneously measuring the masses of both the dark matter candidate and the particle from which it decays.

  6. Simplified estimation of age-specific reference intervals for skewed data.

    PubMed

    Wright, E M; Royston, P

    1997-12-30

    Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of a parametric method are that it necessarily produces smooth centile curves, the entire density is estimated, and an explicit formula is available for the centiles. The method proposed here is a simplified version of a recent approach proposed by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real data sets and assessing each model using goodness-of-fit techniques.
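
    A simplified numerical sketch of the idea (synthetic data; the log transformation, polynomial degrees, and scaling below are assumptions, not the authors' fitted model): regress the transformed measurement on age for the mean, regress scaled absolute residuals on age for the standard deviation, and combine them with a normal quantile to obtain age-specific centiles:

      # Minimal sketch: age-specific 95% reference interval from regressions
      # of the mean and standard deviation on age, assuming normality after a
      # log transform.
      import numpy as np

      rng = np.random.default_rng(3)
      age = rng.uniform(20, 80, 1000)
      y = np.log(1.0 + 0.02 * age + rng.normal(0, 0.05 + 0.001 * age, 1000))  # skewed marker

      # Stage 1: regress the transformed measurement on age to model the mean.
      mean_coef = np.polyfit(age, y, deg=2)
      resid = y - np.polyval(mean_coef, age)

      # Stage 2: regress scaled absolute residuals on age to model the SD
      # (|resid| * sqrt(pi/2) has expectation sigma under normality).
      sd_coef = np.polyfit(age, np.abs(resid) * np.sqrt(np.pi / 2), deg=1)

      z = 1.96
      ages = np.array([30.0, 50.0, 70.0])
      lower = np.polyval(mean_coef, ages) - z * np.polyval(sd_coef, ages)
      upper = np.polyval(mean_coef, ages) + z * np.polyval(sd_coef, ages)
      print(np.exp(lower), np.exp(upper))   # back-transform to the original scale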

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huo, Ya Ruth, E-mail: ruth.huo@gmail.com; Pillai, Krishna, E-mail: panthera6444@yahoo.com.au; Akhter, Javed, E-mail: s8603151@unsw.edu.au

    Background: The dual-electrode bipolar-RFA (B-RFA) is increasingly used to ablate large liver tumours (3–7 cm). However, the challenging aspect of B-RFA is the placement of the two electrodes around the tumour. Realignment often requires the electrodes to be extracted and reinserted. Aim: The aim of this study is to examine "Edgeboost", a novel technique to increase the lateral ablation dimension without requiring any realignment of the electrodes. Methods and Materials: An egg-white model and an ex vivo calf liver model were used to compare standard bipolar mode ablation to Edgeboost-1 (reaching full impedance in bipolar mode initially, then cycling in unipolar mode between left and right probes) and Edgeboost-2 (similar to Edgeboost-1 but not reaching full impedance initially in bipolar mode in order to minimize charring and, thus, to increase total ablation time). Results: A significantly larger outer lateral ablation dimension to the probe was achieved with Edgeboost-1 compared to the standard method in the liver model (1.14 cm, SD: 0.16 vs. 0.44 cm, SD: 0.24, p = 0.04). Edgeboost-2 achieved the largest outer lateral ablation dimension of 1.75 cm (SD: 0.35). A similar association was seen in the egg model. Edgeboost-2 almost doubled the mass ablated with standard bipolar alone (mass ratio: 1:1.94 in egg white and 1:1.84 in liver). Conclusion: This study demonstrates that the novel "Edgeboost" technique can increase the outer lateral ablation dimension without requiring the two inserted electrodes to be reinserted. This would be beneficial for interventionists who use dual B-RFA.

  8. A novel computer-aided method to fabricate a custom one-piece glass fiber dowel-and-core based on digitized impression and crown preparation data.

    PubMed

    Chen, Zhiyu; Li, Ya; Deng, Xuliang; Wang, Xinzhi

    2014-06-01

    Fiber-reinforced composite dowels have been widely used for their superior biomechanical properties; however, their preformed shape cannot fit irregularly shaped root canals. This study aimed to describe a novel computer-aided method to create a custom-made one-piece dowel-and-core based on the digitization of impressions and clinical standard crown preparations. A standard maxillary die stone model containing three prepared teeth (a maxillary lateral incisor, a canine, and a premolar), each requiring a dowel restoration, was made. It was then mounted on an average value articulator with the mandibular stone model to simulate natural occlusion. Impressions for each tooth were obtained using vinylpolysiloxane with a sectional dual-arch tray and digitized with an optical scanner. The dowel-and-core virtual model was created by slicing 3D dowel data from impression digitization with core data selected from a standard crown preparation database of 107 records collected from clinics and digitized. The position of the chosen digital core was manually regulated to coordinate with the adjacent teeth to fulfill the crown restorative requirements. Based on virtual models, one-piece custom dowel-and-cores for the three experimental teeth were milled from a glass fiber block with computer-aided manufacturing techniques. Furthermore, two patients were treated to evaluate the practicality of this new method. The one-piece glass fiber dowel-and-core made for the experimental teeth fulfilled the clinical requirements for dowel restorations. Moreover, two patients were treated to validate the technique. This novel computer-aided method to create a custom one-piece glass fiber dowel-and-core proved to be practical and efficient. © 2013 by the American College of Prosthodontists.

  9. Anatomical reconstructions of pediatric airways from endoscopic images: a pilot study of the accuracy of quantitative endoscopy.

    PubMed

    Meisner, Eric M; Hager, Gregory D; Ishman, Stacey L; Brown, David; Tunkel, David E; Ishii, Masaru

    2013-11-01

    To evaluate the accuracy of three-dimensional (3D) airway reconstructions obtained using quantitative endoscopy (QE). We developed this novel technique to reconstruct precise 3D representations of airway geometries from endoscopic video streams. This method, based on machine vision methodologies, uses a post-processing step of the standard videos obtained during routine laryngoscopy and bronchoscopy. We hypothesize that this method is precise and will generate assessments of airway size and shape similar to those obtained using computed tomography (CT). This study was approved by the institutional review board (IRB). We analyzed video sequences from pediatric patients receiving rigid bronchoscopy. We generated 3D scaled airway models of the subglottis, trachea, and carina using QE. These models were compared to 3D airway models generated from CT. We used the CT data as the gold standard measure of airway size, and used a mixed linear model to estimate the average error in cross-sectional area and effective diameter for QE. The average error in cross-sectional area (area sliced perpendicular to the long axis of the airway) was 7.7 mm² (variance 33.447 mm⁴). The average error in effective diameter was 0.38775 mm (variance 2.45 mm²), approximately 9% error. Our pilot study suggests that QE can be used to generate precise 3D reconstructions of airways. This technique is atraumatic, does not require ionizing radiation, and integrates easily into standard airway assessment protocols. We conjecture that this technology will be useful for staging airway disease and assessing surgical outcomes. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  10. Windowed and Wavelet Analysis of Marine Stratocumulus Cloud Inhomogeneity

    NASA Technical Reports Server (NTRS)

    Gollmer, Steven M.; Harshvardhan; Cahalan, Robert F.; Snider, Jack B.

    1995-01-01

    To improve radiative transfer calculations for inhomogeneous clouds, a consistent means of modeling inhomogeneity is needed. One current method of modeling cloud inhomogeneity is through the use of fractal parameters. This method is based on the supposition that cloud inhomogeneity over a large range of scales is related. An analysis technique named wavelet analysis provides a means of studying the multiscale nature of cloud inhomogeneity. In this paper, the authors discuss the analysis and modeling of cloud inhomogeneity through the use of wavelet analysis. Wavelet analysis as well as other windowed analysis techniques are used to study liquid water path (LWP) measurements obtained during the marine stratocumulus phase of the First ISCCP (International Satellite Cloud Climatology Project) Regional Experiment. Statistics obtained using analysis windows, which are translated to span the LWP dataset, are used to study the local (small scale) properties of the cloud field as well as their time dependence. The LWP data are transformed onto an orthogonal wavelet basis that represents the data as a number of time series. Each of these time series lies within a frequency band and has a mean frequency that is half the frequency of the previous band. Wavelet analysis combined with translated analysis windows reveals that the local standard deviation of each frequency band is correlated with the local standard deviation of the other frequency bands. The ratio between the standard deviation of adjacent frequency bands is 0.9 and remains constant with respect to time. This ratio, defined as the variance coupling parameter, is applicable to all of the frequency bands studied and appears to be related to the slope of the data's power spectrum. Similar analyses are performed on two cloud inhomogeneity models, which use fractal-based concepts to introduce inhomogeneity into a uniform cloud field. The bounded cascade model does this by iteratively redistributing LWP at each scale using the value of the local mean. This model is reformulated into a wavelet multiresolution framework, thereby presenting a number of variants of the bounded cascade model. One variant introduced in this paper is the 'variance coupled model,' which redistributes LWP using the local standard deviation and the variance coupling parameter. While the bounded cascade model provides an elegant two-parameter model for generating cloud inhomogeneity, the multiresolution framework provides more flexibility at the expense of model complexity. Comparisons are made with the results from the LWP data analysis to demonstrate both the strengths and weaknesses of these models.
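
    As a rough illustration of the band-by-band statistics (a synthetic power-law series stands in for the LWP record, and PyWavelets with a Haar basis is an assumption, not the authors' choice of wavelet), one can compute the standard deviation in each detail band of an orthogonal decomposition and the ratio between adjacent bands:

      # Minimal sketch: per-band standard deviations and adjacent-band ratios
      # (a stand-in for the "variance coupling parameter") for a synthetic series.
      import numpy as np
      import pywt

      rng = np.random.default_rng(4)
      n = 2 ** 12
      # Synthetic liquid-water-path record with an approximate power-law spectrum
      freqs = np.fft.rfftfreq(n, d=1.0)
      amp = np.where(freqs > 0, freqs ** (-5.0 / 6.0), 0.0)
      lwp = np.fft.irfft(amp * np.exp(2j * np.pi * rng.random(freqs.size)), n=n)

      coeffs = pywt.wavedec(lwp, "haar")           # orthogonal wavelet decomposition
      band_std = [np.std(c) for c in coeffs[1:]]   # detail bands, coarsest to finest

      ratios = [band_std[i + 1] / band_std[i] for i in range(len(band_std) - 1)]
      print("std of each band relative to the next-coarser band:", np.round(ratios, 2))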

  11. Technology and Technique Standards for Camera-Acquired Digital Dermatologic Images: A Systematic Review.

    PubMed

    Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C

    2015-08-01

    Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed, validated, and adopted to date. Dermatologic imaging is evolving without defined standards for camera-acquired images, leading to variable image quality and limited exchangeability. The development and adoption of universal technology and technique standards may first emerge in scenarios when image use is most associated with a defined clinical benefit.

  12. Investigating the capabilities of semantic enrichment of 3D CityEngine data

    NASA Astrophysics Data System (ADS)

    Solou, Dimitra; Dimopoulou, Efi

    2016-08-01

    In recent years, the development of technology and the lifting of several technical limitations have brought the third dimension to the fore. The complexity of urban environments and the strong need for land administration intensify the need for a three-dimensional cadastral system. Despite the progress in the field of geographic information systems and 3D modeling techniques, there is no fully digital 3D cadastre. Existing geographic information systems and the different methods of three-dimensional modeling allow for better management, visualization and dissemination of information. Nevertheless, these opportunities cannot be fully exploited because of deficiencies in standardization and interoperability in these systems. Within this context, CityGML was developed as an international standard of the Open Geospatial Consortium (OGC) for the representation and exchange of 3D city models. CityGML defines geometry and topology for city modeling, also focusing on semantic aspects of 3D city information. The scope of CityGML is to establish common terminology, also addressing the imperative need for interoperability and data integration, given the number of available geographic information systems and modeling techniques. The aim of this paper is to develop an application for managing the semantic information of a model generated by procedural modeling. The model was initially implemented in ESRI's CityEngine software and then imported into the ArcGIS environment. The final goal was the semantic enrichment of the original model and its conversion to CityGML format. Semantic information management and interoperability proved feasible using the ESRI 3DCities Project tools, since their database structure supports adding semantic information to the CityEngine model and automatically converting it to CityGML for advanced analysis and visualization in different application areas.

  13. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    PubMed

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimens, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions about treatment variability and the pattern of cholesterol reduction over time. The considered endpoints are the last recorded cholesterol level, the difference from baseline, the average difference from baseline, and the level evolution. Specific validation criteria, based on a standardized distance in means and variances within plus or minus 10%, were used to compare the real and the simulated data. The validity criterion was met by all models for at least some of the considered endpoints; however, only two models met the validity criterion when all endpoints were considered. The model based on the assumption that within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance within plus or minus 1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
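
    A small sketch of this kind of check (hypothetical data; the exact standardization used in the paper may differ) compares the means and variances of real and simulated endpoint values and accepts the simulation when both standardized distances fall within plus or minus 10%:

      # Minimal sketch: standardized-distance validity check between a real and a
      # simulated endpoint sample.
      import numpy as np

      def standardized_distances(real, simulated):
          d_mean = (np.mean(simulated) - np.mean(real)) / np.std(real, ddof=1)
          d_var = (np.var(simulated, ddof=1) - np.var(real, ddof=1)) / np.var(real, ddof=1)
          return d_mean, d_var

      def is_valid(real, simulated, tol=0.10):
          d_mean, d_var = standardized_distances(real, simulated)
          return abs(d_mean) <= tol and abs(d_var) <= tol

      rng = np.random.default_rng(5)
      real = rng.normal(5.2, 1.0, 200)          # e.g. last recorded cholesterol level
      simulated = rng.normal(5.25, 1.05, 200)
      print(standardized_distances(real, simulated), is_valid(real, simulated))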

  14. Predicting ESI/MS Signal Change for Anions in Different Solvents.

    PubMed

    Kruve, Anneli; Kaupmees, Karl

    2017-05-02

    LC/ESI/MS is a technique widely used for qualitative and quantitative analysis in various fields. However, quantification is currently possible only for compounds for which standard substances are available, as the ionization efficiencies of different compounds in the ESI source differ by orders of magnitude. In this paper we present an approach for quantitative LC/ESI/MS analysis without standard substances. This approach relies on accurately predicting the ionization efficiencies in the ESI source based on a model that uses physicochemical parameters of the analytes. Furthermore, the model has been made transferable between different mobile phases and instrument setups by using a suitable set of calibration compounds. This approach has been validated both in flow injection and chromatographic mode with gradient elution.
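
    Schematically (entirely made-up descriptors, responses, and response model; the published approach uses its own descriptor set and calibration procedure), the idea is a regression from physicochemical parameters to log ionization efficiency, which then converts a measured signal into a concentration estimate without an analyte-specific standard:

      # Minimal sketch: calibrate a log-ionization-efficiency model on a set of
      # calibration compounds, then quantify an analyte without its own standard.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      # Calibration compounds: hypothetical descriptors and measured log IE values
      X_cal = np.array([[1.2, 0.9], [2.5, 0.8], [0.3, 1.0], [3.1, 0.6], [1.8, 0.7]])
      logIE_cal = np.array([2.1, 3.0, 1.4, 3.3, 2.5])

      model = LinearRegression().fit(X_cal, logIE_cal)

      # Unknown analyte: predict its ionization efficiency from descriptors and
      # convert its measured signal into a concentration estimate.
      x_analyte = np.array([[2.0, 0.85]])
      signal = 4.0e6                                   # arbitrary instrument response
      predicted_IE = 10 ** model.predict(x_analyte)[0]
      concentration = signal / predicted_IE            # response = IE * concentration (assumed)
      print(predicted_IE, concentration)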

  15. TWave: High-Order Analysis of Functional MRI

    PubMed Central

    Barnathan, Michael; Megalooikonomou, Vasileios; Faloutsos, Christos; Faro, Scott; Mohamed, Feroze B.

    2011-01-01

    The traditional approach to functional image analysis models images as matrices of raw voxel intensity values. Although such a representation is widely utilized and heavily entrenched both within neuroimaging and in the wider data mining community, the strong interactions among space, time, and categorical modes such as subject and experimental task inherent in functional imaging yield a dataset with “high-order” structure, which matrix models are incapable of exploiting. Reasoning across all of these modes of data concurrently requires a high-order model capable of representing relationships between all modes of the data in tandem. We thus propose to model functional MRI data using tensors, which are high-order generalizations of matrices equivalent to multidimensional arrays or data cubes. However, several unique challenges exist in the high-order analysis of functional medical data: naïve tensor models are incapable of exploiting spatiotemporal locality patterns, standard tensor analysis techniques exhibit poor efficiency, and mixtures of numeric and categorical modes of data are very often present in neuroimaging experiments. Formulating the problem of image clustering as a form of Latent Semantic Analysis and using the WaveCluster algorithm as a baseline, we propose a comprehensive hybrid tensor and wavelet framework for clustering, concept discovery, and compression of functional medical images which successfully addresses these challenges. Our approach reduced runtime and dataset size on a 9.3 GB finger opposition motor task fMRI dataset by up to 98% while exhibiting improved spatiotemporal coherence relative to standard tensor, wavelet, and voxel-based approaches. Our clustering technique was capable of automatically differentiating between the frontal areas of the brain responsible for task-related habituation and the motor regions responsible for executing the motor task, in contrast to a widely used fMRI analysis program, SPM, which only detected the latter region. Furthermore, our approach discovered latent concepts suggestive of subject handedness nearly 100x faster than standard approaches. These results suggest that a high-order model is an integral component to accurate scalable functional neuroimaging. PMID:21729758
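
    As a loose illustration of the wavelet-plus-clustering ingredient (synthetic 4-D data; this is not the authors' TWave or WaveCluster implementation, and it omits the tensor machinery entirely), one can compress each voxel time series to a few coarse wavelet coefficients and cluster voxels on those features:

      # Minimal sketch: wavelet-compressed voxel time series clustered with k-means.
      import numpy as np
      import pywt
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(6)
      nx, ny, nz, nt = 8, 8, 4, 64
      data = rng.normal(0, 1, (nx, ny, nz, nt))
      data[:4, :4, :, :] += np.sin(np.linspace(0, 8 * np.pi, nt))   # "task-active" corner

      voxels = data.reshape(-1, nt)
      features = np.array([np.concatenate(pywt.wavedec(ts, "db2", level=3)[:2])
                           for ts in voxels])                        # keep coarse coefficients only

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
      print(labels.reshape(nx, ny, nz)[..., 0])                      # one slice of the cluster map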

  16. Automated monitor and control for deep space network subsystems

    NASA Technical Reports Server (NTRS)

    Smyth, P.

    1989-01-01

    The problem of automating monitor and control loops for Deep Space Network (DSN) subsystems is considered and an overview of currently available automation techniques is given. The use of standard numerical models, knowledge-based systems, and neural networks is considered. It is argued that none of these techniques alone possess sufficient generality to deal with the demands imposed by the DSN environment. However, it is shown that schemes that integrate the better aspects of each approach and are referenced to a formal system model show considerable promise, although such an integrated technology is not yet available for implementation. Frequent reference is made to the receiver subsystem since this work was largely motivated by experience in developing an automated monitor and control loop for the advanced receiver.

  17. Model-Based Data Integration and Process Standardization Techniques for Fault Management: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Haste, Deepak; Ghoshal, Sudipto; Johnson, Stephen B.; Moore, Craig

    2018-01-01

    This paper describes the theory and considerations in the application of model-based techniques to assimilate information from disjoint knowledge sources for performing NASA's Fault Management (FM)-related activities using the TEAMS® toolset. FM consists of the operational mitigation of existing and impending spacecraft failures. NASA's FM directives have both design-phase and operational-phase goals. This paper highlights recent studies by QSI and DST of the capabilities required in the TEAMS® toolset for conducting FM activities with the aim of reducing operating costs, increasing autonomy, and conforming to time schedules. These studies use and extend the analytic capabilities of QSI's TEAMS® toolset to conduct a range of FM activities within a centralized platform.

  18. The Role of Inflation and Price Escalation Adjustments in Properly Estimating Program Costs: F-35 Case Study

    DTIC Science & Technology

    2016-03-01

    regression models that yield hedonic price indexes is closely related to standard techniques for developing cost estimating relationships (CERs ...October 2014). analysis) and derives a price index from the coefficients on variables reflecting the year of purchase. In CER development, the...index. The relevant cost metric in both cases is unit recurring flyaway (URF) costs. For the current project, we develop a "Baseline" CER model, taking

  19. Influence of Finite Element Software on Energy Release Rates Computed Using the Virtual Crack Closure Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Goetze, Dirk; Ransom, Jonathon (Technical Monitor)

    2006-01-01

    Strain energy release rates were computed along straight delamination fronts of Double Cantilever Beam, End-Notched Flexure and Single Leg Bending specimens using the Virtual Crack Closure Technique (VCCT). The results were based on finite element analyses using ABAQUS® and ANSYS® and were calculated from the finite element results using the same post-processing routine to assure a consistent procedure. Mixed-mode strain energy release rates obtained from post-processing finite element results were in good agreement for all element types used and all specimens modeled. Compared to previous studies, the models made of solid twenty-node hexahedral elements and solid eight-node incompatible mode elements yielded excellent results. For both codes, models made of standard brick elements and elements with reduced integration did not correctly capture the distribution of the energy release rate across the width of the specimens for the models chosen. The results suggested that element types with similar formulation yield matching results independent of the finite element software used. For comparison, mixed-mode strain energy release rates were also calculated within ABAQUS®/Standard using the VCCT for ABAQUS® add-on. For all specimens modeled, mixed-mode strain energy release rates obtained from ABAQUS® finite element results using post-processing were almost identical to results calculated using the VCCT for ABAQUS® add-on.
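
    For reference, a commonly cited two-dimensional form of the VCCT equations (a textbook-style summary under standard assumptions, not an excerpt from this report) computes the mode I and mode II energy release rates from the nodal forces at the crack tip and the relative displacements of the node pair immediately behind it:

      G_I  = -\frac{1}{2\,\Delta a}\; Z_i \left( w_{\ell} - w_{\ell^*} \right), \qquad
      G_{II} = -\frac{1}{2\,\Delta a}\; X_i \left( u_{\ell} - u_{\ell^*} \right)

    Here \Delta a is the length of the element at the crack tip, Z_i and X_i are the vertical and horizontal forces at the crack-tip node i, and (w_\ell - w_{\ell^*}) and (u_\ell - u_{\ell^*}) are the relative opening and sliding displacements of the node pair \ell, \ell^* behind the tip; the expressions are per unit thickness, and in three-dimensional models they are additionally divided by the width of the released element.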

  20. Covariate selection with group lasso and doubly robust estimation of causal effects

    PubMed Central

    Koch, Brandon; Vock, David M.; Wolfson, Julian

    2017-01-01

    Summary The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this paper, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. PMID:28636276
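
    For context, here is a bare-bones sketch of the downstream doubly robust (AIPW) estimator on synthetic data, using plain logistic and linear working models rather than the GLiDeR group-lasso fits described above:

      # Minimal sketch: augmented inverse-probability-weighted (AIPW) estimate of
      # the average causal effect with simple working models.
      import numpy as np
      from sklearn.linear_model import LinearRegression, LogisticRegression

      rng = np.random.default_rng(7)
      n, p = 2000, 5
      X = rng.normal(size=(n, p))
      propensity = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.5 * X[:, 1])))
      A = rng.binomial(1, propensity)
      Y = 2.0 * A + X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=n)      # true ACE = 2.0

      ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]       # treatment model
      mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)    # outcome model, treated
      mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)    # outcome model, control

      aipw = (np.mean(A * (Y - mu1) / ps + mu1)
              - np.mean((1 - A) * (Y - mu0) / (1 - ps) + mu0))
      print("AIPW estimate of the ACE:", aipw)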

  1. Covariate selection with group lasso and doubly robust estimation of causal effects.

    PubMed

    Koch, Brandon; Vock, David M; Wolfson, Julian

    2018-03-01

    The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this article, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. © 2017, The International Biometric Society.

  2. Modeling single event induced crosstalk in nanometer technologies

    NASA Astrophysics Data System (ADS)

    Boorla, Vijay K.

    Radiation effects become more important in combinational logic circuits with newer technologies. When a highly energetic particle strikes a sensitive region within a combinational logic circuit, a voltage pulse called a single event transient is created. Recently, researchers have reported single event crosstalk because of increasing coupling effects. In this work, a closed-form expression for SE crosstalk noise is formulated for the first time. The 4-pi model is used for all calculations in this work. The crosstalk model uses a reduced transfer function between the aggressor coupling node and the victim node to reduce information loss. The aggressor coupling node waveform is obtained and then applied to the transfer function between the coupling node and the victim output to obtain the victim noise voltage. This work includes both the effect of passive aggressor loading on the victim and the effect of victim loading on the aggressor by considering the resistive shielding effect. Noise peak expressions derived in this work show very good results in comparison to HSPICE results. Results show that the average error for the noise peak is 3.794% while allowing for very fast analysis. Once the SE crosstalk noise is calculated, one can apply mitigation techniques such as driver sizing. A standard DTMOS technique along with sizing is proposed in this work to mitigate SE crosstalk. This combined approach can save area in some cases compared to driver sizing alone. Key Words: Crosstalk Noise, Closed Form Modeling, Standard DTMOS

  3. Single isotope evaluation of pulmonary capillary protein leak (ARDS model) using computerized gamma scintigraphy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tatum, J.L.; Strash, A.M.; Sugerman, H.J.

    Using a canine oleic acid model, a computerized gamma scintigraphic technique was evaluated to determine 1) ability to detect pulmonary capillary protein leak in a model temporally consistent with clinical adult respiratory distress syndrome (ARDS), 2) the possibility of providing a quantitative index of leak, and 3) the feasibility of closely spaced repeat evaluations. Study animals received oleic acid (controls, n = 10; 0.05 ml/kg, n = 10; 0.10 ml/kg, n = 12; 0.15 ml/kg, n = 6) 3 hours prior to a tracer dose of technetium-99m (99mTc) HSA. One animal in each dose group also received two repeat tracer injections spaced a minimum of 45 minutes apart. Digital images were obtained with a conventional gamma camera interfaced to a dedicated medical computer. Lung:heart ratio versus time curves were generated, and a slope index was calculated for each curve. Slope index values for all doses were significantly greater than control values (P(t) less than 0.0001). Each incremental dose increase was also significantly greater than the previous dose level. Oleic acid dose versus slope index fitted a linear regression model with r = 0.94. Repeat dosing produced index values with standard deviations less than the group sample standard deviations. We feel this technique may have application in the clinical study of pulmonary permeability edema.

  4. Accuracy of Noninvasive Estimation Techniques for the State of the Cochlear Amplifier

    NASA Astrophysics Data System (ADS)

    Dalhoff, Ernst; Gummer, Anthony W.

    2011-11-01

    Estimation of the function of the cochlea in humans is possible only by deduction from indirect measurements, which may be subjective or objective. Therefore, for basic research as well as diagnostic purposes, it is important to develop methods to deduce and analyse error sources of cochlear-state estimation techniques. Here, we present a model of the technical and physiologic error sources contributing to the estimation accuracy of hearing threshold and the state of the cochlear amplifier, and deduce from measurements in humans that the estimated standard deviation can be considerably below 6 dB. Experimental evidence is drawn from two partly independent objective estimation techniques for the auditory signal chain based on measurements of otoacoustic emissions.

  5. Model-based engineering for medical-device software.

    PubMed

    Ray, Arnab; Jetley, Raoul; Jones, Paul L; Zhang, Yi

    2010-01-01

    This paper demonstrates the benefits of adopting model-based design techniques for engineering medical device software. By using a patient-controlled analgesic (PCA) infusion pump as a candidate medical device, the authors show how using models to capture design information allows for i) fast and efficient construction of executable device prototypes, ii) creation of a standard, reusable baseline software architecture for a particular device family, iii) formal verification of the design against safety requirements, and iv) creation of a safety framework that reduces verification costs for future versions of the device software.

  6. Alternating Direction Implicit (ADI) schemes for a PDE-based image osmosis model

    NASA Astrophysics Data System (ADS)

    Calatroni, L.; Estatico, C.; Garibaldi, N.; Parisotto, S.

    2017-10-01

    We consider Alternating Direction Implicit (ADI) splitting schemes to compute efficiently the numerical solution of the PDE osmosis model considered by Weickert et al. in [10] for several imaging applications. The discretised scheme is shown to preserve analogous properties to the continuous model. The dimensional splitting strategy translates numerically into the solution of simple tridiagonal systems for which standard matrix factorisation techniques can be used to improve upon the performance of classical implicit methods, even for large time steps. Applications to the shadow removal problem are presented.
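
    To make the "tridiagonal systems only" point concrete, here is a schematic Peaceman-Rachford ADI step for a plain 2-D diffusion surrogate (homogeneous Dirichlet boundaries, unit diffusivity, square grid); the osmosis model additionally carries drift terms, so this illustrates the splitting structure rather than the scheme of [10]:

      # Minimal sketch: one Peaceman-Rachford ADI step for u_t = u_xx + u_yy.
      # Each half step is implicit in one direction only, so it reduces to a set
      # of tridiagonal solves (Thomas algorithm).
      import numpy as np

      def thomas(a, b, c, d):
          # Solve a tridiagonal system with sub-, main- and super-diagonals a, b, c.
          n = len(d)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      def explicit_half(u, r, axis):
          # Apply (I + r*D2) along 'axis' with zero Dirichlet boundaries.
          pad = np.zeros_like(np.take(u, [0], axis=axis))
          shifted_minus = np.concatenate([pad, np.delete(u, -1, axis=axis)], axis=axis)
          shifted_plus = np.concatenate([np.delete(u, 0, axis=axis), pad], axis=axis)
          return u + r * (shifted_minus - 2.0 * u + shifted_plus)

      def adi_step(u, r):
          n = u.shape[0]                      # square grid assumed
          a = np.full(n, -r); b = np.full(n, 1 + 2 * r); c = np.full(n, -r)
          a[0] = 0.0; c[-1] = 0.0
          rhs = explicit_half(u, r, axis=1)   # half step 1: implicit in x, explicit in y
          ustar = np.stack([thomas(a, b, c, rhs[:, j]) for j in range(n)], axis=1)
          rhs2 = explicit_half(ustar, r, axis=0)  # half step 2: implicit in y, explicit in x
          return np.stack([thomas(a, b, c, rhs2[i, :]) for i in range(n)], axis=0)

      n = 64
      h = 1.0 / (n + 1)
      r = 1.0e-3 / (2.0 * h * h)              # dt / (2 h^2)
      u = np.zeros((n, n)); u[24:40, 24:40] = 1.0   # stand-in "image"
      for _ in range(50):
          u = adi_step(u, r)
      print("mass after smoothing:", u.sum())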

  7. BF actions for the Husain-Kuchař model

    NASA Astrophysics Data System (ADS)

    Barbero G., J. Fernando; Villaseñor, Eduardo J.

    2001-04-01

    We show that the Husain-Kuchař model can be described in the framework of BF theories. This is a first step towards its quantization by standard perturbative quantum field theory techniques or the spin-foam formalism introduced in the space-time description of general relativity and other diff-invariant theories. The actions that we will consider are similar to the ones describing the BF-Yang-Mills model and some mass generating mechanisms for gauge fields. We will also discuss the role of diffeomorphisms in the new formulations that we propose.

  8. Parity oscillations and photon correlation functions in the Z2-U(1) Dicke model at a finite number of atoms or qubits

    NASA Astrophysics Data System (ADS)

    Yi-Xiang, Yu; Ye, Jinwu; Zhang, CunLin

    2016-08-01

    Four standard quantum optics models, that is, the Rabi, Dicke, Jaynes-Cummings, and Tavis-Cummings models, were proposed by physicists many decades ago. Despite their relatively simple forms and many previous theoretical works, their physics at finite N, especially inside the superradiant regime, remains unknown. In this work, by using the strong-coupling expansion and exact diagonalization (ED), we study the Z2-U(1) Dicke model with independent rotating-wave coupling g and counterrotating-wave coupling g' at a finite N. This model includes the four standard quantum optics models as its various special limits. We show that in the superradiant phase, the system's energy levels are grouped into doublets with even and odd parity. Any anisotropy β = g'/g ≠ 1 leads to the oscillation of parities in both the ground and excited doublets as the atom-photon coupling strength increases. The oscillations will be pushed to infinite coupling strength in the isotropic Z2 limit β = 1. We find nearly perfect agreement between the strong-coupling expansion and the ED in the superradiant regime when β is not too small. We also compute the photon correlation functions, squeezing spectrum, and number correlation functions that can be measured by various standard optical techniques.
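
    A compact exact-diagonalization sketch of an anisotropic (two-coupling) Dicke Hamiltonian, using QuTiP with assumed parameter values (frequencies, couplings, Fock cutoff, normalization), can expose the parity structure of the low-lying doublets discussed above; this is an illustrative reconstruction, not the authors' code or conventions:

      # Minimal sketch: ED of H = wc a†a + w0 Jz + (g/√N)(a†J- + aJ+) + (g'/√N)(a†J+ + aJ-)
      # and the Z2 parity of the lowest eigenstates.
      import numpy as np
      from qutip import destroy, qeye, tensor, jmat, expect

      Natoms = 3                      # number of two-level atoms, pseudo-spin j = N/2
      j = Natoms / 2
      nph = 30                        # photon Fock-space cutoff (assumed sufficient)
      wc, w0 = 1.0, 1.0               # cavity and atomic frequencies (assumed)
      g, beta = 0.8, 0.5              # rotating-wave coupling and anisotropy beta = g'/g
      gp = beta * g

      dim_spin = int(2 * j + 1)
      a = tensor(destroy(nph), qeye(dim_spin))
      Jz = tensor(qeye(nph), jmat(j, 'z'))
      Jp = tensor(qeye(nph), jmat(j, '+'))
      Jm = tensor(qeye(nph), jmat(j, '-'))
      ident = tensor(qeye(nph), qeye(dim_spin))

      H = (wc * a.dag() * a + w0 * Jz
           + (g / np.sqrt(Natoms)) * (a.dag() * Jm + a * Jp)      # rotating terms
           + (gp / np.sqrt(Natoms)) * (a.dag() * Jp + a * Jm))    # counter-rotating terms

      # Z2 parity operator Pi = exp[i*pi*(a†a + Jz + j)], which commutes with H
      parity = (1j * np.pi * (a.dag() * a + Jz + j * ident)).expm()

      evals, ekets = H.eigenstates(eigvals=6)
      for E, psi in zip(evals, ekets):
          print(f"E = {E: .4f}   parity = {expect(parity, psi).real:+.3f}")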

  9. Comparison of the Pullout Strength of Different Pedicle Screw Designs and Augmentation Techniques in an Osteoporotic Bone Model.

    PubMed

    Kiyak, Gorkem; Balikci, Tevfik; Heydar, Ahmed Majid; Bezer, Murat

    2018-02-01

    Mechanical study. To compare the pullout strength of different screw designs and augmentation techniques in an osteoporotic bone model. Adequate bone screw pullout strength is a common problem among osteoporotic patients. Various screw designs and augmentation techniques have been developed to improve the biomechanical characteristics of the bone-screw interface. Polyurethane blocks were used to mimic human osteoporotic cancellous bone, and six different screw designs were tested. Five standard and expandable screws without augmentation, eight expandable screws with polymethylmethacrylate (PMMA) or calcium phosphate augmentation, and distal cannulated screws with PMMA and calcium phosphate augmentation were tested. Mechanical tests were performed on 10 unused new screws of each group. Screws with or without augmentation were inserted in a block that was held in a fixture frame, and a longitudinal extraction force was applied to the screw head at a loading rate of 5 mm/min. Maximum load was recorded in a load displacement curve. The peak pullout force of all tested screws with or without augmentation was significantly greater than that of the standard pedicle screw. The greatest pullout force was observed with 40-mm expandable pedicle screws with four fins and PMMA augmentation. Augmented distal cannulated screws did not have a greater peak pullout force than nonaugmented expandable screws. PMMA augmentation provided a greater peak pullout force than calcium phosphate augmentation. Expandable pedicle screws had greater peak pullout forces than standard pedicle screws and had the advantage of augmentation with either PMMA or calcium phosphate cement. Although calcium phosphate cement is biodegradable, osteoconductive, and nonexothermic, PMMA provided a significantly greater peak pullout force. PMMA-augmented expandable 40-mm four-fin pedicle screws had the greatest peak pullout force.

  10. Current progress in patient-specific modeling

    PubMed Central

    2010-01-01

    We present a survey of recent advancements in the emerging field of patient-specific modeling (PSM). Researchers in this field are currently simulating a wide variety of tissue and organ dynamics to address challenges in various clinical domains. The majority of this research employs three-dimensional, image-based modeling techniques. Recent PSM publications mostly represent feasibility or preliminary validation studies on modeling technologies, and these systems will require further clinical validation and usability testing before they can become a standard of care. We anticipate that with further testing and research, PSM-derived technologies will eventually become valuable, versatile clinical tools. PMID:19955236

  11. Forecasting coconut production in the Philippines with ARIMA model

    NASA Astrophysics Data System (ADS)

    Lim, Cristina Teresa

    2015-02-01

    The study aimed to depict the future situation of the coconut industry in the Philippines by applying the Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the major industrial crops of the country, for the period 1990 to 2012 were analyzed using time-series methods. Autocorrelation (ACF) and partial autocorrelation functions (PACF) were calculated for the data, and an appropriate Box-Jenkins autoregressive moving average model was fitted. Validity of the model was tested using standard statistical techniques. The forecasting power of the fitted model was then used to project coconut production for the following eight years.
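    For readers unfamiliar with the Box-Jenkins workflow described above, a minimal sketch using statsmodels follows. The annual production figures are synthetic placeholders (the actual 1990-2012 series is not reproduced here), and the ARIMA(1,1,1) order is an assumption chosen for illustration rather than the order selected in the study.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.stattools import acf, pacf
        from statsmodels.tsa.arima.model import ARIMA

        # Placeholder annual production series (1990-2012); real figures would go here.
        rng = np.random.default_rng(1)
        production = pd.Series(
            12.0 + 0.05 * np.arange(23) + rng.normal(0.0, 0.3, 23),
            index=range(1990, 2013),
        )

        # Identification: inspect ACF/PACF of the differenced series (Box-Jenkins step 1).
        print(acf(production.diff().dropna(), nlags=5))
        print(pacf(production.diff().dropna(), nlags=5))

        # Estimation and diagnostics with an illustrative ARIMA(1,1,1) order.
        model = ARIMA(production, order=(1, 1, 1)).fit()
        print(model.summary())

        # Forecast the next eight years, as in the abstract.
        print(model.forecast(steps=8))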

  12. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    NASA Astrophysics Data System (ADS)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

    In this paper, we propose a hybrid model which combines a multiple linear regression model with the fuzzy c-means method. This research involved the relationship between 20 topsoil variates, analyzed prior to planting, and paddy yields at standard fertilizer rates. Data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally distributed without multicollinearity among the independent variables. The fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.
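    A minimal sketch of this kind of hybrid is given below. Since scikit-learn has no fuzzy c-means routine, a bare-bones FCM update is written directly in NumPy, the yield is clustered into two fuzzy groups as in the abstract, and one linear regression is fitted per hard-assigned cluster. The soil covariates and yields are synthetic stand-ins, and the in-sample mean-square-error comparison only illustrates the kind of evaluation reported above.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
            """Bare-bones fuzzy c-means: returns the membership matrix U (n x c) and centers."""
            rng = np.random.default_rng(seed)
            U = rng.dirichlet(np.ones(c), size=len(X))          # random initial memberships
            for _ in range(n_iter):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]    # weighted cluster centers
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / d ** (2.0 / (m - 1.0))
                U /= U.sum(axis=1, keepdims=True)               # rows sum to one
            return U, centers

        # Synthetic stand-ins for the soil covariates (X) and paddy yield (y).
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 5))
        y = 3.0 + X @ rng.normal(size=5) + np.where(X[:, 0] > 0, 1.5, -1.5) + rng.normal(0.0, 0.2, 200)

        # Cluster the yield into two fuzzy groups, then fit one regression per cluster.
        U, _ = fuzzy_c_means(y[:, None], c=2)
        labels = U.argmax(axis=1)
        models = {k: LinearRegression().fit(X[labels == k], y[labels == k]) for k in (0, 1)}

        y_hybrid = np.empty_like(y)
        for k in (0, 1):
            y_hybrid[labels == k] = models[k].predict(X[labels == k])
        y_single = LinearRegression().fit(X, y).predict(X)
        print("hybrid MSE:", np.mean((y - y_hybrid) ** 2), "single MSE:", np.mean((y - y_single) ** 2))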

  13. Experiments to Determine Whether Recursive Partitioning (CART) or an Artificial Neural Network Overcomes Theoretical Limitations of Cox Proportional Hazards Regression

    NASA Technical Reports Server (NTRS)

    Kattan, Michael W.; Hess, Kenneth R.

    1998-01-01

    New computationally intensive tools for medical survival analyses include recursive partitioning (also called CART) and artificial neural networks. A challenge that remains is to better understand the behavior of these techniques in an effort to know when they will be effective tools. Theoretically they may overcome limitations of the traditional multivariable survival technique, the Cox proportional hazards regression model. Experiments were designed to test whether the new tools would, in practice, overcome these limitations. Two datasets in which theory suggests CART and the neural network should outperform the Cox model were selected. The first was a published leukemia dataset manipulated to have a strong interaction that CART should detect. The second was a published cirrhosis dataset with pronounced nonlinear effects that a neural network should fit. Repeated sampling of 50 training and testing subsets was applied to each technique. The concordance index C was calculated as a measure of predictive accuracy by each technique on the testing dataset. In the interaction dataset, CART outperformed Cox (P less than 0.05) with a C improvement of 0.1 (95% CI, 0.08 to 0.12). In the nonlinear dataset, the neural network outperformed the Cox model (P less than 0.05), but by a very slight amount (0.015). As predicted by theory, CART and the neural network were able to overcome limitations of the Cox model. Experiments like these are important to increase our understanding of when one of these new techniques will outperform the standard Cox model. Further research is necessary to predict which technique will do best a priori and to assess the magnitude of superiority.
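    The comparison above hinges on the concordance index C, which is simple to compute once predicted risks and (possibly censored) survival times are in hand. The sketch below is a plain implementation of Harrell's C checked on synthetic data; the exponential survival model and all variable names are assumptions for illustration, not the leukemia or cirrhosis data of the study.

        import numpy as np

        def concordance_index(time, event, risk):
            """Harrell's C: the fraction of usable pairs in which the higher-risk subject
            fails earlier. `event` is 1 for an observed failure, 0 for a censored time."""
            concordant = tied = usable = 0
            n = len(time)
            for i in range(n):
                for j in range(n):
                    # A pair is usable when subject i is observed to fail before subject j's time.
                    if event[i] == 1 and time[i] < time[j]:
                        usable += 1
                        if risk[i] > risk[j]:
                            concordant += 1
                        elif risk[i] == risk[j]:
                            tied += 1
            return (concordant + 0.5 * tied) / usable

        # Synthetic check: risk scores taken from the true hazard should give C well above 0.5.
        rng = np.random.default_rng(3)
        x = rng.normal(size=300)
        fail_time = rng.exponential(scale=np.exp(-x))     # higher x -> higher hazard -> earlier failure
        censor_time = rng.exponential(scale=2.0, size=300)
        event = (fail_time <= censor_time).astype(int)
        observed = np.minimum(fail_time, censor_time)
        print(concordance_index(observed, event, risk=x))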

  14. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique.

    PubMed

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young

    2014-03-01

    This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between the two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology were acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.
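    The agreement statistics used above (mean differences and Bland-Altman limits of agreement) are straightforward to reproduce. The sketch below computes them for synthetic paired measurements standing in for the plaster versus PUT model distances; the sample size, offset, and noise level are assumptions, not the study's data.

        import numpy as np

        def bland_altman(a, b):
            """Mean difference (bias) and 95% limits of agreement for two paired methods."""
            diff = np.asarray(a, float) - np.asarray(b, float)
            bias = diff.mean()
            loa = 1.96 * diff.std(ddof=1)
            return bias, (bias - loa, bias + loa)

        # Synthetic paired arch measurements (mm): plaster vs a second method with a small offset.
        rng = np.random.default_rng(8)
        plaster = rng.normal(35.0, 3.0, 60)
        put = plaster + 0.2 + rng.normal(0.0, 0.15, 60)

        bias, (lower, upper) = bland_altman(plaster, put)
        print(f"bias = {bias:.2f} mm, 95% limits of agreement = [{lower:.2f}, {upper:.2f}] mm")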

  15. Is there any alternative to standard chest compression techniques in infants? A randomized manikin trial of the new "2-thumb-fist" option.

    PubMed

    Ladny, Jerzy R; Smereka, Jacek; Rodríguez-Núñez, Antonio; Leung, Steve; Ruetzler, Kurt; Szarpak, Lukasz

    2018-02-01

    Pediatric cardiac arrest is a fatal emergent condition that is associated with high mortality and permanent neurological injury, and it is a socioeconomic burden at both the individual and national levels. The aim of this study was to test in an infant manikin a new chest compression (CC) technique ("2-thumb-fist," or nTTT) in comparison with the standard 2-finger technique (TFT) and the 2-thumb-encircling hands technique (TTEHT). This was a prospective, randomized, crossover manikin study. Sixty-three nurses performed a randomized sequence of 2-minute continuous CC with the 3 techniques in random order. Simulated systolic (SBP), diastolic (DBP), mean arterial (MAP), and pulse pressures (PP, SBP-DBP) in mm Hg were measured. The nTTT resulted in a higher median SBP value (69 [IQR, 63-74] mm Hg) than TTEHT (41.5 [IQR, 39-42] mm Hg), (P < .001) and TFT (26.5 [IQR, 25.5-29] mm Hg), (P < .001). The simulated median value of DBP was 20 (IQR, 19-20) mm Hg with nTTT, 18 (IQR, 17-19) mm Hg with TTEHT and 23.5 (IQR, 22-25.5) mm Hg with TFT. DBP was significantly higher with TFT than with TTEHT (P < .001), as well as with TTEHT than with nTTT (P < .001). Median values of simulated MAP were 37 (IQR, 34.5-38) mm Hg with nTTT, 26 (IQR, 25-26) mm Hg with TTEHT and 24.5 (IQR, 23.5-26.5) mm Hg with TFT. A statistically significant difference was noted between nTTT and TFT (P < .001), nTTT and TTEHT (P < .001), and between TTEHT and TFT (P < .001). Sixty-one subjects (96.8%) preferred the nTTT over the 2 standard methods. The new nTTT technique achieved higher SBP and MAP compared with the standard CC techniques in our infant manikin model. nTTT appears to be a suitable alternative or complement to the TFT and TTEHT.

  16. Development of an Uncertainty Model for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walter, Joel A.; Lawrence, William R.; Elder, David W.; Treece, Michael D.

    2010-01-01

    This paper introduces an uncertainty model being developed for the National Transonic Facility (NTF). The model uses a Monte Carlo technique to propagate standard uncertainties of measured values through the NTF data reduction equations to calculate the combined uncertainties of the key aerodynamic force and moment coefficients and freestream properties. The uncertainty propagation approach to assessing data variability is compared with ongoing data quality assessment activities at the NTF, notably check standard testing using statistical process control (SPC) techniques. It is shown that the two approaches are complementary and both are necessary tools for data quality assessment and improvement activities. The SPC approach is the final arbiter of variability in a facility. Its result encompasses variation due to people, processes, test equipment, and test article. The uncertainty propagation approach is limited mainly to the data reduction process. However, it is useful because it helps to assess the causes of variability seen in the data and consequently provides a basis for improvement. For example, it is shown that Mach number random uncertainty is dominated by static pressure variation over most of the dynamic pressure range tested. However, the random uncertainty in the drag coefficient is generally dominated by axial and normal force uncertainty with much less contribution from freestream conditions.
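    As a concrete illustration of the propagation approach described above, the sketch below pushes assumed standard uncertainties in measured total and static pressure through the standard isentropic Mach-number relation. The NTF data reduction equations themselves are not reproduced in this abstract, so the reduction equation, nominal values, and uncertainty magnitudes are all placeholders; the point is the Monte Carlo mechanics, including the crude one-factor-at-a-time split used to see which input dominates.

        import numpy as np

        def mach_from_pressures(p0, p, gamma=1.4):
            """Standard isentropic Mach-number relation, used here only as a stand-in
            data-reduction equation; the actual NTF equations are not reproduced."""
            return np.sqrt(2.0 / (gamma - 1.0) * ((p0 / p) ** ((gamma - 1.0) / gamma) - 1.0))

        rng = np.random.default_rng(4)
        n = 100_000

        # Nominal measurements and assumed standard uncertainties (illustrative values, in Pa).
        p0_nom, u_p0 = 110_000.0, 40.0
        p_nom, u_p = 95_000.0, 40.0

        # Monte Carlo propagation: perturb the inputs and push every draw through the equation.
        p0 = rng.normal(p0_nom, u_p0, n)
        p = rng.normal(p_nom, u_p, n)
        mach = mach_from_pressures(p0, p)
        print(f"M = {mach.mean():.4f} +/- {mach.std(ddof=1):.4f} (combined standard uncertainty)")

        # Crude attribution: vary one input at a time to see which measurement dominates.
        print("static pressure only:", mach_from_pressures(p0_nom, p).std(ddof=1))
        print("total pressure only: ", mach_from_pressures(p0, p_nom).std(ddof=1))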

  17. Evaluation of marginal/internal fit of chrome-cobalt crowns: Direct laser metal sintering versus computer-aided design and computer-aided manufacturing.

    PubMed

    Gunsoy, S; Ulusoy, M

    2016-01-01

    The purpose of this study was to evaluate the internal and marginal fit of chrome-cobalt (Co-Cr) crowns fabricated with laser sintering, computer-aided design (CAD) and computer-aided manufacturing (CAM), and conventional methods. Polyamide master and working models were designed and fabricated. The models were initially designed with a software application for three-dimensional (3D) CAD (Maya, Autodesk Inc.), and all models were produced by a 3D printer (EOSINT P380 SLS, EOS). 128 1-unit Co-Cr fixed dental prostheses were fabricated with four different techniques: the conventional lost-wax method, milled wax with the lost-wax method (MWLW), direct laser metal sintering (DLMS), and milled Co-Cr (MCo-Cr). The cement film thickness of the marginal and internal gaps was measured by an observer using a stereomicroscope after taking digital photos at ×24 magnification. The best fit, according to the means and standard deviations of all measurements (in μm), was obtained with DLMS in both premolar (65.84) and molar (58.38) models. A significant difference was found between DLMS and the rest of the fabrication techniques (P < 0.05). No significant difference was found between MCo-Cr and MWLW in both premolar and molar models (P > 0.05). Based on the results, DLMS was the best-fitting fabrication technique for single crowns. The best fit was found at the margin, and the largest gap was found occlusally. All groups were within the clinically acceptable misfit range.

  18. Characterization of Deficiencies in the Frequency Domain Forced Response Analysis Technique for Turbine Bladed Disks

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Schmauch, Preston

    2012-01-01

    Turbine blades in rocket and jet engine turbomachinery experience enormous harmonic loading conditions. These loads result from the integer number of upstream and downstream stator vanes as well as the other turbine stages. The standard technique for forced response analysis to assess structural integrity is to decompose a CFD generated flow field into its harmonic components, and to then perform a frequency response analysis at the problematic natural frequencies. Recent CFD analysis and water-flow testing at NASA/MSFC, though, indicates that this technique may miss substantial harmonic and non-harmonic excitation sources that become present in complex flows. These complications suggest the question of whether frequency domain analysis is capable of capturing the excitation content sufficiently. Two studies comparing frequency response analysis with transient response analysis, therefore, have been performed. The first is of a bladed disk with each blade modeled by simple beam elements. It was hypothesized that the randomness and other variation from the standard harmonic excitation would reduce the blade structural response, but the results showed little reduction. The second study was of a realistic model of a bladed-disk excited by the same CFD used in the J2X engine program. The results showed that the transient analysis results were up to 10% higher for "clean" nodal diameter excitations and six times larger for "messy" excitations, where substantial Fourier content around the main harmonic exists.

  19. Bayesian analysis of anisotropic cosmologies: Bianchi VIIh and WMAP

    NASA Astrophysics Data System (ADS)

    McEwen, J. D.; Josset, T.; Feeney, S. M.; Peiris, H. V.; Lasenby, A. N.

    2013-12-01

    We perform a definitive analysis of Bianchi VIIh cosmologies with Wilkinson Microwave Anisotropy Probe (WMAP) observations of the cosmic microwave background (CMB) temperature anisotropies. Bayesian analysis techniques are developed to study anisotropic cosmologies using full-sky and partial-sky masked CMB temperature data. We apply these techniques to analyse the full-sky internal linear combination (ILC) map and a partial-sky masked W-band map of WMAP 9 yr observations. In addition to the physically motivated Bianchi VIIh model, we examine phenomenological models considered in previous studies, in which the Bianchi VIIh parameters are decoupled from the standard cosmological parameters. In the two phenomenological models considered, Bayes factors of 1.7 and 1.1 units of log-evidence favouring a Bianchi component are found in full-sky ILC data. The corresponding best-fitting Bianchi maps recovered are similar for both phenomenological models and are very close to those found in previous studies using earlier WMAP data releases. However, no evidence for a phenomenological Bianchi component is found in the partial-sky W-band data. In the physical Bianchi VIIh model, we find no evidence for a Bianchi component: WMAP data thus do not favour Bianchi VIIh cosmologies over the standard Λ cold dark matter (ΛCDM) cosmology. It is not possible to discount Bianchi VIIh cosmologies in favour of ΛCDM completely, but we are able to constrain the vorticity of physical Bianchi VIIh cosmologies at (ω/H)0 < 8.6 × 10-10 with 95 per cent confidence.

  20. Functional Data Analysis for Dynamical System Identification of Behavioral Processes

    PubMed Central

    Trail, Jessica B.; Collins, Linda M.; Rivera, Daniel E.; Li, Runze; Piper, Megan E.; Baker, Timothy B.

    2014-01-01

    Efficient new technology has made it straightforward for behavioral scientists to collect anywhere from several dozen to several thousand dense, repeated measurements on one or more time-varying variables. These intensive longitudinal data (ILD) are ideal for examining complex change over time, but present new challenges that illustrate the need for more advanced analytic methods. For example, in ILD the temporal spacing of observations may be irregular, and individuals may be sampled at different times. Also, it is important to assess both how the outcome changes over time and the variation between participants' time-varying processes to make inferences about a particular intervention's effectiveness within the population of interest. The methods presented in this article integrate two innovative ILD analytic techniques: functional data analysis and dynamical systems modeling. An empirical application is presented using data from a smoking cessation clinical trial. Study participants provided 42 daily assessments of pre-quit and post-quit withdrawal symptoms. Regression splines were used to approximate smooth functions of craving and negative affect and to estimate the variables' derivatives for each participant. We then modeled the dynamics of nicotine craving using standard input-output dynamical systems models. These models provide a more detailed characterization of the post-quit craving process than do traditional longitudinal models, including information regarding the type, magnitude, and speed of the response to an input. The results, in conjunction with standard engineering control theory techniques, could potentially be used by tobacco researchers to develop a more effective smoking intervention. PMID:24079929
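    The first stage of the approach described above (smoothing each participant's daily series and recovering its derivative for use in an input-output dynamical model) can be sketched in a few lines. The example below uses a cubic smoothing spline from SciPy on a synthetic 42-day craving trajectory; the trajectory, noise level, and smoothing parameter are assumptions for illustration, and the regression-spline basis used in the article may differ.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(5)
        days = np.arange(1, 43)                                  # 42 daily assessments

        # Synthetic craving trajectory: a post-quit spike that decays, plus measurement noise.
        true_craving = 2.0 + 3.0 * np.exp(-(days - 10.0) / 8.0) * (days >= 10)
        craving = true_craving + rng.normal(0.0, 0.3, days.size)

        # Regression-spline smoother; `s` controls smoothness and would be tuned per participant.
        spline = UnivariateSpline(days, craving, k=3, s=len(days) * 0.3**2)
        smooth = spline(days)
        velocity = spline.derivative()(days)                     # d(craving)/dt for the dynamical model

        # A first-order input-output model y' = (K*u - y)/tau could then be fit to (smooth, velocity, u).
        print(np.round(smooth[:5], 2), np.round(velocity[:5], 2))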

  1. Photoacoustic and luminescence spectroscopy of benzil crystals

    NASA Astrophysics Data System (ADS)

    Bonno, B.; Laporte, J. L.; Rousset, Y.

    1991-06-01

    In the present work, both photoacoustic and luminescence techniques were employed to study molecular crystals. This paper presents an extension of the standard Rosencwaig-Gersho photoacoustic model to molecular crystals, which includes finite-deexcitation-time effects and excited-state populations. In the temperature range 100-300 K, the phosphorescence quantum yield and thermal diffusivity of benzil crystals were determined.

  2. Photoelastic analysis of mandibular full-arch implant-supported fixed dentures made with different bar materials and manufacturing techniques.

    PubMed

    Zaparolli, Danilo; Peixoto, Raniel Fernandes; Pupim, Denise; Macedo, Ana Paula; Toniollo, Marcelo Bighetti; Mattos, Maria da Glória Chiarello de

    2017-12-01

    To compare the stress distribution of implant-supported mandibular full dentures according to the bar materials and manufacturing techniques using a qualitative photoelastic analysis. An acrylic master model simulating the mandibular arch was fabricated with four Morse taper implant analogs of 4.5×6 mm. Four different bars were manufactured according to different materials and techniques: fiber-reinforced resin (G1, Trinia, CAD/CAM), commercially pure titanium (G2, cpTi, CAD/CAM), cobalt-chromium (G3, Co-Cr, CAD/CAM) and cobalt-chromium (G4, Co-Cr, conventional cast). Standard clinical and laboratory procedures were used by an experienced dental technician to fabricate 4 mandibular implant-supported dentures. The photoelastic model was created based on the acrylic master model. A load simulation (150 N) was performed in total occlusion against the antagonist. Dentures with the fiber-reinforced resin bar (G1) exhibited the best stress distribution. Dentures with the machined Co-Cr bar (G3) exhibited the worst pattern of stress distribution, with an overload on the distal part of the posterior implants, followed by dentures with the cast Co-Cr bar (G4) and the machined cpTi bar (G2). The fiber-reinforced resin bar exhibited an adequate stress distribution and can serve as a viable alternative for oral rehabilitation with implant-supported mandibular full dentures. Moreover, the G1 bar offered advantages including reduced weight and less possible overload of the implant components, leading to preservation of the support structure. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated; often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.

  4. Computational fluid dynamics analysis of cyclist aerodynamics: performance of different turbulence-modelling and boundary-layer modelling approaches.

    PubMed

    Defraeye, Thijs; Blocken, Bert; Koninckx, Erwin; Hespel, Peter; Carmeliet, Jan

    2010-08-26

    This study aims at assessing the accuracy of computational fluid dynamics (CFD) for applications in sports aerodynamics, for example for drag predictions of swimmers, cyclists or skiers, by evaluating the applied numerical modelling techniques by means of detailed validation experiments. In this study, a wind-tunnel experiment on a scale model of a cyclist (scale 1:2) is presented. Apart from three-component forces and moments, also high-resolution surface pressure measurements on the scale model's surface, i.e. at 115 locations, are performed to provide detailed information on the flow field. These data are used to compare the performance of different turbulence-modelling techniques, such as steady Reynolds-averaged Navier-Stokes (RANS), with several k-epsilon and k-omega turbulence models, and unsteady large-eddy simulation (LES), and also boundary-layer modelling techniques, namely wall functions and low-Reynolds number modelling (LRNM). The commercial CFD code Fluent 6.3 is used for the simulations. The RANS shear-stress transport (SST) k-omega model shows the best overall performance, followed by the more computationally expensive LES. Furthermore, LRNM is clearly preferred over wall functions to model the boundary layer. This study showed that there are more accurate alternatives for evaluating flow around bluff bodies with CFD than the standard k-epsilon model combined with wall functions, which is often used in CFD studies in sports. 2010 Elsevier Ltd. All rights reserved.

  5. Novel secret key generation techniques using memristor devices

    NASA Astrophysics Data System (ADS)

    Abunahla, Heba; Shehada, Dina; Yeun, Chan Yeob; Mohammad, Baker; Jaoude, Maguy Abi

    2016-02-01

    This paper proposes novel secret key generation techniques using memristor devices. The approach depends on using the initial profile of a memristor as a master key; in addition, session keys are generated using the master key and other specified parameters. In contrast to existing memristor-based security approaches, the proposed development is cost effective and power efficient, since the operation can be achieved with a single device rather than a crossbar structure. An algorithm is suggested and demonstrated using a physics-based Matlab model. It is shown that the generated keys can have dynamic size, which provides perfect security. Moreover, the proposed encryption and decryption technique using the memristor-based generated keys outperforms the Triple Data Encryption Standard (3DES) and Advanced Encryption Standard (AES) in terms of processing time. This paper is enriched by providing characterization results of a fabricated microscale Al/TiO2/Al memristor prototype in order to prove the concept of the proposed approach and to study the impacts of process variations. The work proposed in this paper is a milestone towards System on Chip (SoC) memristor-based security.

  6. Characterizing and Modulating Brain Circuitry through Transcranial Magnetic Stimulation Combined with Electroencephalography.

    PubMed

    Farzan, Faranak; Vernet, Marine; Shafi, Mouhsin M D; Rotenberg, Alexander; Daskalakis, Zafiris J; Pascual-Leone, Alvaro

    2016-01-01

    The concurrent combination of transcranial magnetic stimulation (TMS) with electroencephalography (TMS-EEG) is a powerful technology for characterizing and modulating brain networks across developmental, behavioral, and disease states. Given the global initiatives in mapping the human brain, recognition of the utility of this technique is growing across neuroscience disciplines. Importantly, TMS-EEG offers translational biomarkers that can be applied in health and disease, across the lifespan, and in humans and animals, bridging the gap between animal models and human studies. However, to utilize the full potential of TMS-EEG methodology, standardization of TMS-EEG study protocols is needed. In this article, we review the principles of TMS-EEG methodology, factors impacting TMS-EEG outcome measures, and the techniques for preventing and correcting artifacts in TMS-EEG data. To promote the standardization of this technique, we provide comprehensive guides for designing TMS-EEG studies and conducting TMS-EEG experiments. We conclude by reviewing the application of TMS-EEG in basic, cognitive and clinical neurosciences, and evaluate the potential of this emerging technology in brain research.

  7. Characterizing and Modulating Brain Circuitry through Transcranial Magnetic Stimulation Combined with Electroencephalography

    PubMed Central

    Farzan, Faranak; Vernet, Marine; Shafi, Mouhsin M. D.; Rotenberg, Alexander; Daskalakis, Zafiris J.; Pascual-Leone, Alvaro

    2016-01-01

    The concurrent combination of transcranial magnetic stimulation (TMS) with electroencephalography (TMS-EEG) is a powerful technology for characterizing and modulating brain networks across developmental, behavioral, and disease states. Given the global initiatives in mapping the human brain, recognition of the utility of this technique is growing across neuroscience disciplines. Importantly, TMS-EEG offers translational biomarkers that can be applied in health and disease, across the lifespan, and in humans and animals, bridging the gap between animal models and human studies. However, to utilize the full potential of TMS-EEG methodology, standardization of TMS-EEG study protocols is needed. In this article, we review the principles of TMS-EEG methodology, factors impacting TMS-EEG outcome measures, and the techniques for preventing and correcting artifacts in TMS-EEG data. To promote the standardization of this technique, we provide comprehensive guides for designing TMS-EEG studies and conducting TMS-EEG experiments. We conclude by reviewing the application of TMS-EEG in basic, cognitive and clinical neurosciences, and evaluate the potential of this emerging technology in brain research. PMID:27713691

  8. State resolved vibrational relaxation modeling for strongly nonequilibrium flows

    NASA Astrophysics Data System (ADS)

    Boyd, Iain D.; Josyula, Eswar

    2011-05-01

    Vibrational relaxation is an important physical process in hypersonic flows. Activation of the vibrational mode affects the fundamental thermodynamic properties and finite rate relaxation can reduce the degree of dissociation of a gas. Low fidelity models of vibrational activation employ a relaxation time to capture the process at a macroscopic level. High fidelity, state-resolved models have been developed for use in continuum gas dynamics simulations based on computational fluid dynamics (CFD). By comparison, such models are not as common for use with the direct simulation Monte Carlo (DSMC) method. In this study, a high fidelity, state-resolved vibrational relaxation model is developed for the DSMC technique. The model is based on the forced harmonic oscillator approach in which multi-quantum transitions may become dominant at high temperature. Results obtained for integrated rate coefficients from the DSMC model are consistent with the corresponding CFD model. Comparison of relaxation results obtained with the high-fidelity DSMC model shows significantly less excitation of upper vibrational levels in comparison to the standard, lower fidelity DSMC vibrational relaxation model. Application of the new DSMC model to a Mach 7 normal shock wave in carbon monoxide provides better agreement with experimental measurements than the standard DSMC relaxation model.

  9. Cold dark matter confronts the cosmic microwave background - Large-angular-scale anisotropies in Ω0 + λ0 = 1 models

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.; Silk, Joseph; Vittorio, Nicola

    1992-01-01

    A new technique is used to compute the correlation function for large-angle cosmic microwave background anisotropies resulting from both the space and time variations in the gravitational potential in flat, vacuum-dominated, cold dark matter cosmological models. Such models, with Ω0 of about 0.2, fit the excess power, relative to the standard cold dark matter model, observed in the large-scale galaxy distribution and allow a high value for the Hubble constant. The low-order multipoles and quadrupole anisotropy that are potentially observable by COBE and other ongoing experiments should definitively test these models.

  10. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
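    The abstract compares the likelihood approach against moment estimation via the seasonal Yule-Walker equations. The sketch below implements only that simpler moment estimator, and only for a periodic AR(1) special case; it is not the approximate maximum-likelihood algorithm of the paper, and the simulated monthly series and coefficients are placeholders rather than the Rio Caroni data.

        import numpy as np

        def periodic_ar1_moments(x, period=12):
            """Season-by-season moment (Yule-Walker-type) estimates for a periodic AR(1):
            X_t = phi_s * X_{t-1} + e_t, with phi and the noise variance varying by season s."""
            x = np.asarray(x, dtype=float)
            phi = np.zeros(period)
            sigma2 = np.zeros(period)
            t = np.arange(len(x))
            for s in range(period):
                idx = t[(t % period == s) & (t > 0)]             # times falling in season s
                prev, curr = x[idx - 1], x[idx]
                prev_c, curr_c = prev - prev.mean(), curr - curr.mean()
                phi[s] = (prev_c @ curr_c) / (prev_c @ prev_c)
                sigma2[s] = np.mean((curr_c - phi[s] * prev_c) ** 2)
            return phi, sigma2

        # Simulate a periodic AR(1) monthly flow-like series and recover the seasonal coefficients.
        rng = np.random.default_rng(6)
        period, n_years = 12, 60
        phi_true = 0.3 + 0.5 * np.sin(2 * np.pi * np.arange(period) / period) ** 2
        x = np.zeros(period * n_years)
        for t in range(1, len(x)):
            x[t] = phi_true[t % period] * x[t - 1] + rng.normal()
        print(np.round(periodic_ar1_moments(x, period)[0], 2))
        print(np.round(phi_true, 2))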

  11. New methods and results for quantification of lightning-aircraft electrodynamics

    NASA Technical Reports Server (NTRS)

    Pitts, Felix L.; Lee, Larry D.; Perala, Rodney A.; Rudolph, Terence H.

    1987-01-01

    The NASA F-106 collected data on the rates of change of electromagnetic parameters on the aircraft surface during over 700 direct lightning strikes while penetrating thunderstorms at altitudes from 15,000 to 40,000 ft (4,570 to 12,190 m). These in situ measurements provided the basis for the first statistical quantification of the lightning electromagnetic threat to aircraft appropriate for determining indirect lightning effects on aircraft. These data are used to update previous lightning criteria and standards developed over the years from ground-based measurements. The proposed standards will be the first which reflect actual aircraft responses measured at flight altitudes. Nonparametric maximum likelihood estimates of the distribution of the peak electromagnetic rates of change for consideration in the new standards are obtained based on peak recorder data for multiple-strike flights. The linear and nonlinear modeling techniques developed provide means to interpret and understand the direct-strike electromagnetic data acquired on the F-106. The reasonable results obtained with the models, compared with measured responses, provide increased confidence that the models may be credibly applied to other aircraft.

  12. The Standard Model: how far can it go and how can we tell?

    PubMed

    Butterworth, J M

    2016-08-28

    The Standard Model of particle physics encapsulates our current best understanding of physics at the smallest distances and highest energies. It incorporates quantum electrodynamics (the quantized version of Maxwell's electromagnetism) and the weak and strong interactions, and has survived unmodified for decades, save for the inclusion of non-zero neutrino masses after the observation of neutrino oscillations in the late 1990s. It describes a vast array of data over a wide range of energy scales. I review a selection of these successes, including the remarkably successful prediction of a new scalar boson, a qualitatively new kind of object observed in 2012 at the Large Hadron Collider. New calculational techniques and experimental advances challenge the Standard Model across an ever-wider range of phenomena, now extending significantly above the electroweak symmetry breaking scale. I will outline some of the consequences of these new challenges, and briefly discuss what is still to be found.This article is part of the themed issue 'Unifying physics and technology in light of Maxwell's equations'. © 2016 The Author(s).

  13. Parametrisation D'effets Non-Standard EN Phenomenologie Electrofaible

    NASA Astrophysics Data System (ADS)

    Maksymyk, Ivan

    This thesis by articles concerns the parametrization of non-standard effects in electroweak phenomenology. In each analysis, we added several non-standard operators to the Lagrangian of the electroweak Standard Model. The non-standard operators describe new effects arising from an unspecified underlying model. In principle, the number of non-standard operators that can be included in such an analysis is unlimited, but for a specific class of underlying models the non-standard effects can be described by a reasonable number of operators. In each analysis we developed expressions for electroweak observables as functions of the coefficients of the new operators, and by performing a statistical fit to a set of precise experimental data we obtained phenomenological constraints on these coefficients. In "Model-Independent Global Constraints on New Physics", we adopted very weak assumptions about the underlying models. We truncated the effective Lagrangian at dimension five (inclusive). Aiming for the greatest possible generality, we admitted interactions that violate the discrete symmetries (C, P and CP) as well as interactions that do not conserve flavour. The effective Lagrangian contains some forty new operators. We determined that, for most of the coefficients of the new operators, the constraints are fairly tight (2 or 3%), but there are interesting exceptions. In "Bounding Anomalous Three-Gauge-Boson Couplings", we determined phenomenological constraints on deviations of the three-gauge-boson couplings from the interactions prescribed by the Standard Model. To do so, we computed the indirect contributions of the non-standard triple-gauge-boson couplings to low-energy observables. Since the effective Lagrangian is non-renormalizable, certain technical difficulties arise: to regularize the Feynman integrals, researchers have generally used the cutoff method, but this method can lead to incorrect results. We opted for an alternative technique: dimensional regularization with minimal subtraction and decoupling. In "Beyond S, T and U" we present the STUVWX formalism, an extension of the STU formalism of Peskin and Takeuchi. These formalisms are based on the hypothesis that the underlying theory manifests itself through gauge-boson self-energies; this type of effect is called 'oblique'. At the basis of the STU formalism lies the assumption that the scale of new physics, M, is much larger than q, the scale at which the measurements are made, so that the oblique effects are parametrized by the three variables S, T and U. In the STUVWX formalism, by contrast, we allowed for the possibility that M ~ q. In "A Global Fit to Extended Oblique Parameters", we performed two statistical fits to a set of high-precision electroweak measurements. In the first fit, we set V = W = X = 0, thereby obtaining constraints on the set {S, T, U}. In the second fit, we included all six parameters.

  14. Synthetic Biology Open Language (SBOL) Version 2.0.0.

    PubMed

    Bartley, Bryan; Beal, Jacob; Clancy, Kevin; Misirli, Goksel; Roehner, Nicholas; Oberortner, Ernst; Pocock, Matthew; Bissell, Michael; Madsen, Curtis; Nguyen, Tramy; Zhang, Zhen; Gennari, John H; Myers, Chris; Wipat, Anil; Sauro, Herbert

    2015-09-04

    Synthetic biology builds upon the techniques and successes of genetics, molecular biology, and metabolic engineering by applying engineering principles to the design of biological systems. The field still faces substantial challenges, including long development times, high rates of failure, and poor reproducibility. One method to ameliorate these problems would be to improve the exchange of information about designed systems between laboratories. The Synthetic Biology Open Language (SBOL) has been developed as a standard to support the specification and exchange of biological design information in synthetic biology, filling a need not satisfied by other pre-existing standards. This document details version 2.0 of SBOL, introducing a standardized format for the electronic exchange of information on the structural and functional aspects of biological designs. The standard has been designed to support the explicit and unambiguous description of biological designs by means of a well defined data model. The standard also includes rules and best practices on how to use this data model and populate it with relevant design details. The publication of this specification is intended to make these capabilities more widely accessible to potential developers and users in the synthetic biology community and beyond.

  15. Territories typification technique with use of statistical models

    NASA Astrophysics Data System (ADS)

    Galkin, V. I.; Rastegaev, A. V.; Seredin, V. V.; Andrianov, A. V.

    2018-05-01

    Typification of territories is required for the solution of many problems. The results of geological zoning obtained by means of various methods do not always agree. That is why the main goal of this research is to develop a technique for obtaining a multidimensional standard classified indicator for geological zoning. In the course of the research, a probabilistic approach was used. In order to increase the reliability of geological information classification, the authors suggest using the complex multidimensional probabilistic indicator PK as a criterion of the classification. The second criterion chosen is the multidimensional standard classified indicator Z. Both can serve as characteristics of classification in geological-engineering zoning. The above-mentioned indicators PK and Z are in good correlation: correlation coefficient values for the entire territory, regardless of structural solidity, equal r = 0.95, so each indicator can be used in geological-engineering zoning. The method suggested has been tested and a schematic map of zoning has been drawn.

  16. Measuring The Neutron Lifetime to One Second Using in Beam Techniques

    NASA Astrophysics Data System (ADS)

    Mulholland, Jonathan; NIST In Beam Lifetime Collaboration

    2013-10-01

    The decay of the free neutron is the simplest nuclear beta decay and is the prototype for charged current semi-leptonic weak interactions. A precise value for the neutron lifetime is required for consistency tests of the Standard Model and is an essential parameter in the theory of Big Bang Nucleosynthesis. A new measurement of the neutron lifetime using the in-beam method is planned at the National Institute of Standards and Technology Center for Neutron Research. The systematic effects associated with the in-beam method are markedly different than those found in storage experiments utilizing ultracold neutrons. Experimental improvements, specifically recent advances in the determination of absolute neutron fluence, should permit an overall uncertainty of 1 second on the neutron lifetime. The technical improvements in the in-beam technique, and the path toward improving the precision of the new measurement will be discussed.

  17. Krylov subspace methods on supercomputers

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1988-01-01

    A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
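    As a reference point for the Krylov subspace methods surveyed above, a plain unpreconditioned conjugate gradient iteration is sketched below; the incomplete-factorization and polynomial preconditioners, and the parallel triangular-solve issues the survey focuses on, are deliberately omitted, and the dense 1-D Laplacian test matrix is only a toy problem.

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            """Unpreconditioned CG for symmetric positive definite A (dense here for clarity)."""
            x = np.zeros_like(b)
            r = b - A @ x                 # residual
            p = r.copy()                  # search direction
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        # 1-D Laplacian test problem: tridiagonal, SPD, the classic model system for CG.
        n = 200
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x = conjugate_gradient(A, b)
        print(np.linalg.norm(A @ x - b))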

  18. Preliminary work toward the development of a dimensional tolerance standard for rapid prototyping

    NASA Technical Reports Server (NTRS)

    Kennedy, W. J.

    1996-01-01

    Rapid prototyping is a new technology for building parts quickly from CAD models. It works by slicing a CAD model into layers, then by building a model of the part one layer at a time. Since most parts can be sliced, most parts can be modeled using rapid prototyping. The layers themselves are created in a number of different ways - by using a laser to cure a layer of an epoxy or a resin, by depositing a layer of plastic or wax upon a surface, by using a laser to sinter a layer of powder, or by using a laser to cut a layer of paper. Rapid prototyping (RP) is new, and a standard part for use in comparing dimensional tolerances has not yet been chosen and accepted by ASTM (the American Society for Testing and Materials). Such a part is needed when RP is used to build parts for investment casting or for direct use. The objective of this project was to start the development of a standard part by using statistical techniques to choose the features of the part which show curl - the vertical deviation of a part from its intended horizontal plane.

  19. A Framework for the Optimization of Discrete-Event Simulation Models

    NASA Technical Reports Server (NTRS)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation, in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance constraint approach for problem formulation, together with standard statistical estimation and analyses techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle, through a simulation model.

  20. Segment-based acoustic models for continuous speech recognition

    NASA Astrophysics Data System (ADS)

    Ostendorf, Mari; Rohlicek, J. R.

    1993-07-01

    This research aims to develop new and more accurate stochastic models for speaker-independent continuous speech recognition, by extending previous work in segment-based modeling and by introducing a new hierarchical approach to representing intra-utterance statistical dependencies. These techniques, which are more costly than traditional approaches because of the large search space associated with higher order models, are made feasible through rescoring a set of HMM-generated N-best sentence hypotheses. We expect these different modeling techniques to result in improved recognition performance over that achieved by current systems, which handle only frame-based observations and assume that these observations are independent given an underlying state sequence. In the fourth quarter of the project, we have completed the following: (1) ported our recognition system to the Wall Street Journal task, a standard task in the ARPA community; (2) developed an initial dependency-tree model of intra-utterance observation correlation; and (3) implemented baseline language model estimation software. Our initial results on the Wall Street Journal task are quite good and represent significantly improved performance over most HMM systems reporting on the Nov. 1992 5k vocabulary test set.

  1. Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.

    2016-03-01

    Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We will present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo for countless applications, including studying laser tissue interactions in medical applications and light propagation through turbid media.
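    One simple way to realize the idea described above is to sample each photon's launch position from the incident beam profile at the lens plane and aim it at a point drawn from the desired Gaussian focal spot, so that the ballistic photon density reproduces the Gaussian-optics spot size at the focus. The sketch below does exactly that; it is an illustration under those assumptions, not the authors' algorithm, and it only enforces the correct spot in the focal plane rather than the full beam caustic.

        import numpy as np

        def launch_focused_gaussian(n, beam_radius, w0, focal_len, seed=0):
            """Sample initial positions (on the lens plane z=0) and unit directions so that,
            without scattering, the photon density at z=focal_len follows a Gaussian spot of
            1/e^2 waist w0. Intensity ~ exp(-2 r^2 / w^2) means the transverse coordinates
            are normal with sigma = w/2 in each plane."""
            rng = np.random.default_rng(seed)
            start = np.zeros((n, 3))
            start[:, :2] = rng.normal(0.0, beam_radius / 2.0, size=(n, 2))   # beam profile at the lens
            target = np.zeros((n, 3))
            target[:, :2] = rng.normal(0.0, w0 / 2.0, size=(n, 2))           # desired focal spot
            target[:, 2] = focal_len
            direction = target - start
            direction /= np.linalg.norm(direction, axis=1, keepdims=True)
            return start, direction

        # 10^5 photons: 2 mm beam at the lens focused to a 10 micron waist 20 mm deep (units: mm).
        pos, dirs = launch_focused_gaussian(100_000, beam_radius=2.0, w0=0.01, focal_len=20.0)
        # Propagate ballistically to the focal plane and check the spot a transport code would see.
        t = (20.0 - pos[:, 2]) / dirs[:, 2]
        spot = pos[:, :2] + t[:, None] * dirs[:, :2]
        print("focal-plane 1/e^2 radius ~", 2.0 * spot.std(axis=0).mean())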

  2. Model for selecting quality standards for a salad bar through identifying elements of customer satisfaction.

    PubMed

    Ouellet, D; Norback, J P

    1993-11-01

    Continuous quality improvement is the new requirement of the Joint Commission on Accreditation of Healthcare Organizations. This means that meeting quality standards will not be enough. Dietitians will need to improve those standards and the way they are selected. Because quality is defined in terms of the customers, all quality improvement projects must start by defining what customers want. Using a salad bar as an example, this article presents and illustrates a technique developed in Japan to identify which elements in a product or service will satisfy or dissatisfy consumers. Using a model and a questionnaire format developed by Kano and coworkers, 273 students were surveyed to classify six quality elements of a salad bar. Four elements showed a dominant "must-be" characteristic: food freshness, labeling of the dressings, no spills in the food, and no spills on the salad bar. The two other elements (food easy to reach and food variety) showed a dominant one-dimensional characteristic. By better understanding consumer perceptions of quality elements, foodservice managers can select quality standards that focus on what really matters to their consumers.

  3. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    PubMed

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  4. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT

    NASA Astrophysics Data System (ADS)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan

    2017-09-01

    Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. Also a third Technique C based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image was tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD, and CVRHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and virtual nodules. Percent differences between them were less than 3% for all insertion techniques and were not statistically significant in most cases. Correlation coefficient values were greater than 0.97. The deformation according to the Hausdorff distance was also similar between the CT-derived and virtual nodules with minimal statistical significance in the CVRHD for Techniques A, B, and C. This study shows that both projection-based and image-based nodule insertion techniques yield realistic nodule renderings with statistical similarity to the synthetic nodules with respect to nodule volume and deformation. These techniques could be used to create a database of hybrid CT images containing nodules of known size, location and morphology.

  5. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT

    PubMed Central

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan

    2017-01-01

    Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. Also a third Technique C based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image was tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD, and CVRHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and virtual nodules. Percent differences between them were less than 3% for all insertion techniques and were not statistically significant in most cases. Correlation coefficient values were greater than 0.97. The deformation according to the Hausdorff distance was also similar between the CT-derived and virtual nodules with minimal statistical significance in the CVRHD for Techniques A, B, and C. This study shows that both projection-based and image-based nodule insertion techniques yield realistic nodule renderings with statistical similarity to the synthetic nodules with respect to nodule volume and deformation. These techniques could be used to create a database of hybrid CT images containing nodules of known size, location and morphology. PMID:28786399

  6. Standardized surgical techniques for adult living donor liver transplantation using a modified right lobe graft: a video presentation from bench to reperfusion.

    PubMed

    Hwang, Shin; Ha, Tae-Yong; Ahn, Chul-Soo; Moon, Deok-Bog; Kim, Ki-Hun; Song, Gi-Won; Jung, Dong-Hwan; Park, Gil-Chun; Lee, Sung-Gyu

    2016-08-01

    After having experienced more than 2,000 cases of adult living donor liver transplantation (LDLT), we established the concepts of right liver graft standardization. Right liver graft standardization intends to provide hemodynamics-based and regeneration-compliant reconstruction of vascular inflow and outflow. Right liver graft standardization consists of the following components: Right hepatic vein reconstruction includes a combination of caudal-side deep incision and patch venoplasty of the graft right hepatic vein to remove the acute angle between the graft right hepatic vein and the inferior vena cava; middle hepatic vein reconstruction includes interposition of a uniform-shaped conduit with large-sized homologous or prosthetic grafts; if the inferior right hepatic vein is present, its reconstruction includes funneling and unification venoplasty for multiple short hepatic veins; if donor portal vein anomaly is present, its reconstruction includes conjoined unification venoplasty for two or more portal vein orifices. This video clip that shows the surgical technique from bench to reperfusion was a case presentation of adult LDLT using a modified right liver graft from the patient's son. Our intention behind proposing the concept of right liver graft standardization is that it can be universally applicable and may guarantee nearly the same outcomes regardless of the surgeon's experience. We believe that this reconstruction model would be primarily applied to a majority of adult LDLT cases.

  7. UIAGM Ropehandling Techniques.

    ERIC Educational Resources Information Center

    Cloutier, K. Ross

    The Union Internationale des Associations des Guides de Montagne's (UIAGM) rope handling techniques are intended to form the standard for guiding ropework worldwide. These techniques have become the legal standard for instructional institutions and commercial guiding organizations in UIAGM member countries: Austria, Canada, France, Germany, Great…

  8. Stochastic capture zone analysis of an arsenic-contaminated well using the generalized likelihood uncertainty estimator (GLUE) methodology

    NASA Astrophysics Data System (ADS)

    Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro

    2003-06-01

    In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were applied and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which yields models in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
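
    A minimal GLUE-style sketch (illustrative only, not the authors' groundwater model): Monte Carlo samples of a toy parameter are weighted by a likelihood measure based on fit to observed heads, and the weighted estimate is compared with the plain Monte Carlo mean. The toy model, prior range, and behavioral threshold are assumptions.

        import numpy as np

        rng = np.random.default_rng(42)
        obs_heads = np.array([10.2, 9.8, 9.5, 9.1])           # observed heads (toy data)

        def toy_model(log10_K):
            """Stand-in groundwater model: heads as a simple function of conductivity."""
            K = 10.0 ** log10_K
            x = np.arange(1, 5)
            return 10.5 - 0.4 * x / (K * 1e6)                 # arbitrary toy relation

        samples = rng.uniform(-7.5, -5.5, size=5000)          # prior on log10(K) [m/s]
        sim = np.array([toy_model(s) for s in samples])
        err_var = ((sim - obs_heads) ** 2).mean(axis=1)

        # GLUE likelihood measure: inverse error variance, zero below a behavioral cut
        likelihood = 1.0 / err_var
        behavioral = likelihood > np.percentile(likelihood, 90)
        weights = np.where(behavioral, likelihood, 0.0)
        weights /= weights.sum()

        # Weighted (GLUE) vs unweighted (standard Monte Carlo) parameter estimates
        print("GLUE-weighted log10(K):", np.sum(weights * samples))
        print("Plain Monte Carlo mean :", samples.mean())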

  9. Supervised machine learning for analysing spectra of exoplanetary atmospheres

    NASA Astrophysics Data System (ADS)

    Márquez-Neila, Pablo; Fisher, Chloe; Sznitman, Raphael; Heng, Kevin

    2018-06-01

    The use of machine learning is becoming ubiquitous in astronomy [1-3], but remains rare in the study of the atmospheres of exoplanets. Given the spectrum of an exoplanetary atmosphere, a multi-parameter space is swept through in real time to find the best-fit model [4-6]. Known as atmospheric retrieval, this technique originates in the Earth and planetary sciences [7]. Such methods are very time-consuming, and by necessity there is a compromise between physical and chemical realism and computational feasibility. Machine learning has previously been used to determine which molecules to include in the model, but the retrieval itself was still performed using standard methods [8]. Here, we report an adaptation of the 'random forest' method of supervised machine learning [9,10], trained on a precomputed grid of atmospheric models, which retrieves full posterior distributions of the abundances of molecules and the cloud opacity. The use of a precomputed grid allows a large part of the computational burden to be shifted offline. We demonstrate our technique on a transmission spectrum of the hot gas-giant exoplanet WASP-12b using a five-parameter model (temperature, a constant cloud opacity and the volume mixing ratios or relative abundances of molecules of water, ammonia and hydrogen cyanide) [11]. We obtain results consistent with the standard nested-sampling retrieval method. We also estimate the sensitivity of the measured spectrum to the model parameters, and we are able to quantify the information content of the spectrum. Our method can be straightforwardly applied using more sophisticated atmospheric models to interpret an ensemble of spectra without having to retrain the random forest.
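
    A hedged sketch of the general idea (a random forest trained on a precomputed grid of forward-model spectra to map spectra to atmospheric parameters), using a synthetic placeholder forward model rather than any real radiative-transfer code; all grid sizes and functional forms are assumptions.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n_models, n_bins = 5000, 13            # grid size and number of spectral bins

        # Placeholder "forward model": transit depth as a smooth function of
        # temperature, cloud opacity and a single abundance (purely illustrative).
        params = rng.uniform([500, -3, -9], [2500, 1, -3], size=(n_models, 3))
        wave = np.linspace(0.8, 1.7, n_bins)
        spectra = (1.40e-2
                   + 1e-5 * params[:, [0]] / 1000 * np.sin(3 * wave)
                   + 5e-5 * params[:, [1]]
                   + 2e-5 * params[:, [2]] * np.cos(2 * wave))
        spectra += rng.normal(0, 5e-5, spectra.shape)   # noise at roughly data level

        forest = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
        forest.fit(spectra, params)                     # train on the precomputed grid

        observed = spectra[0] + rng.normal(0, 5e-5, n_bins)   # pretend observation
        print("Retrieved (T, log cloud, log abundance):",
              forest.predict(observed.reshape(1, -1))[0])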

  10. Linear regression analysis and its application to multivariate chromatographic calibration for the quantitative analysis of two-component mixtures.

    PubMed

    Dinç, Erdal; Ozdemir, Abdil

    2005-01-01

    A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, which has a simple mathematical form, is briefly described. This approach is a powerful mathematical tool for optimal chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration reduces the multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by the classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than classical HPLC.
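
    The calibration idea can be sketched as per-wavelength regressions of peak area on concentration followed by a least-squares inversion for an unknown binary mixture. The wavelength set, sensitivities, and concentrations below are invented for demonstration and do not reproduce the paper's calibration.

        import numpy as np

        wavelengths = [215, 220, 225, 230, 235]               # nm (assumed set)
        conc_EA  = np.array([5, 10, 15, 20, 25], float)       # calibration levels, ug/mL
        conc_HCT = np.array([4,  8, 12, 16, 20], float)

        rng = np.random.default_rng(1)
        k_EA  = rng.uniform(0.8, 1.6, len(wavelengths))       # assumed EA sensitivity per wavelength
        k_HCT = rng.uniform(0.5, 1.2, len(wavelengths))       # assumed HCT sensitivity per wavelength

        # Calibration step: slope of peak area vs concentration at each wavelength
        slopes_EA  = np.array([np.polyfit(conc_EA,  k * conc_EA,  1)[0] for k in k_EA])
        slopes_HCT = np.array([np.polyfit(conc_HCT, k * conc_HCT, 1)[0] for k in k_HCT])

        # Prediction step: peak areas of an unknown mixture are modelled as the sum of
        # the two components' contributions; solve the overdetermined system.
        true = np.array([12.0, 9.0])                           # EA, HCT (ug/mL)
        areas = slopes_EA * true[0] + slopes_HCT * true[1]
        A = np.column_stack([slopes_EA, slopes_HCT])
        est, *_ = np.linalg.lstsq(A, areas, rcond=None)
        print("Estimated EA, HCT concentrations:", est)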

  11. Clinical data interoperability based on archetype transformation.

    PubMed

    Costa, Catalina Martínez; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2011-10-01

    The semantic interoperability between health information systems is a major challenge to improve the quality of clinical practice and patient safety. In recent years many projects have faced this problem and provided solutions based on specific standards and technologies in order to satisfy the needs of a particular scenario. Most of such solutions cannot be easily adapted to new scenarios, thus more global solutions are needed. In this work, we have focused on the semantic interoperability of electronic healthcare records standards based on the dual model architecture and we have developed a solution that has been applied to ISO 13606 and openEHR. The technological infrastructure combines reference models, archetypes and ontologies, with the support of Model-driven Engineering techniques. For this purpose, the interoperability infrastructure developed in previous work by our group has been reused and extended to cover the requirements of data transformation. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. An Ontology Based Approach to Information Security

    NASA Astrophysics Data System (ADS)

    Pereira, Teresa; Santos, Henrique

    The semantic structuring of knowledge based on ontology approaches has been increasingly adopted by experts from diverse domains. Recently, ontologies have moved from the philosophical and metaphysical disciplines into the construction of models that describe a specific theory of a domain. The development and the use of ontologies promote the creation of a unique standard to represent concepts within a specific knowledge domain. In the scope of information security systems, the use of an ontology to formalize and represent the concepts of security information challenges the mechanisms and techniques currently used. This paper intends to present a conceptual implementation model of an ontology defined in the security domain. The model presented contains the semantic concepts based on the information security standard ISO/IEC_JTC1, and their relationships to other concepts, defined in a subset of the information security domain.
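
    As a toy illustration of the kind of conceptual model described above, the following Python sketch (using the rdflib library) declares a few security concepts and relations and serializes them. The namespace, class names, and relations are illustrative assumptions, not the ontology proposed in the paper.

        from rdflib import Graph, Namespace
        from rdflib.namespace import RDF, RDFS

        SEC = Namespace("http://example.org/infosec#")   # hypothetical namespace
        g = Graph()
        g.bind("sec", SEC)

        # Core concepts of a toy information-security ontology
        for cls in ("Asset", "Threat", "Vulnerability", "Control"):
            g.add((SEC[cls], RDF.type, RDFS.Class))

        # Relationships between concepts
        g.add((SEC.exploits, RDF.type, RDF.Property))
        g.add((SEC.exploits, RDFS.domain, SEC.Threat))
        g.add((SEC.exploits, RDFS.range, SEC.Vulnerability))
        g.add((SEC.protects, RDF.type, RDF.Property))
        g.add((SEC.protects, RDFS.domain, SEC.Control))
        g.add((SEC.protects, RDFS.range, SEC.Asset))

        # An instance: phishing exploits weak authentication
        g.add((SEC.Phishing, RDF.type, SEC.Threat))
        g.add((SEC.WeakAuthentication, RDF.type, SEC.Vulnerability))
        g.add((SEC.Phishing, SEC.exploits, SEC.WeakAuthentication))

        print(g.serialize(format="turtle"))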

  13. A burnout prediction model based around char morphology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao Wu; Edward Lester; Michael Cloke

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model are based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  14. Fusion of multiscale wavelet-based fractal analysis on retina image for stroke prediction.

    PubMed

    Che Azemin, M Z; Kumar, Dinesh K; Wong, T Y; Wang, J J; Kawasaki, R; Mitchell, P; Arjunan, Sridhar P

    2010-01-01

    In this paper, we present a novel method of analyzing retinal vasculature using the Fourier fractal dimension to extract the complexity of the retinal vasculature enhanced at different wavelet scales. Logistic regression was used as a fusion method to model the classifier for 5-year stroke prediction. The efficacy of this technique has been tested using standard pattern recognition performance evaluation, namely receiver operating characteristic (ROC) analysis, and a standard medical prediction statistic, the odds ratio. A stroke prediction model was developed using the proposed system.
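
    A hedged sketch of the fusion step only: logistic regression combines multiscale complexity features into a risk score, which is then evaluated with ROC analysis and an odds ratio. The features and outcomes below are random placeholders, not retinal measurements.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(7)
        n = 400
        X = rng.normal(size=(n, 4))                    # fractal dimension at 4 wavelet scales (synthetic)
        true_w = np.array([0.9, 0.4, 0.0, -0.3])
        p = 1 / (1 + np.exp(-(X @ true_w - 1.0)))
        y = rng.binomial(1, p)                         # 5-year stroke outcome (synthetic)

        clf = LogisticRegression().fit(X, y)           # fusion of the multiscale features
        scores = clf.predict_proba(X)[:, 1]
        print("ROC AUC:", roc_auc_score(y, scores))
        print("Odds ratio per unit of scale-1 feature:", np.exp(clf.coef_[0][0]))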

  15. Beyond single-stream with the Schrödinger method

    NASA Astrophysics Data System (ADS)

    Uhlemann, Cora; Kopp, Michael

    2016-10-01

    We investigate large scale structure formation of collisionless dark matter in the phase space description based on the Vlasov-Poisson equation. We present the Schrödinger method, originally proposed in [WK93] as a numerical technique based on the Schrödinger-Poisson equation, as an analytical tool which is superior to the common standard pressureless fluid model. Whereas the dust model fails and develops singularities at shell crossing, the Schrödinger method encompasses multi-streaming and even virialization.

  16. Higher rank ABJM Wilson loops from matrix models

    DOE PAGES

    Cookmeyer, Jonathan; Liu, James T.; Pando Zayas, Leopoldo A.

    2016-11-21

    We compute the vacuum expectation values of 1/6 supersymmetric Wilson loops in higher dimensional representations of the gauge group in ABJM theory. We then present results for the m-symmetric and m-antisymmetric representations by exploiting standard matrix model techniques. At leading order, in the saddle point approximation, our expressions reproduce holographic results from both D6 and D2 branes corresponding to the antisymmetric and symmetric representations, respectively. We also compute 1/N corrections to the leading saddle point results.

  17. 3D Surface Temperature Measurement of Plant Canopies Using Photogrammetry Techniques From A UAV.

    NASA Astrophysics Data System (ADS)

    Irvine, M.; Lagouarde, J. P.

    2017-12-01

    Surface temperature of plant canopies and within canopies results from the coupling of the radiative and energy exchange processes which govern the fluxes at the soil-plant-atmosphere interface. As a key parameter, surface temperature permits the estimation of canopy exchanges using process-based modeling methods. However, detailed 3D surface temperature measurements, or even profile surface temperature measurements, are rarely made, as they present inherent difficulties. Such measurements would greatly improve multi-level canopy models such as NOAH (Chen and Dudhia 2001) or MuSICA (Ogée and Brunet 2002, Ogée et al 2003), where key surface temperature estimates are at present not tested. Additionally, at larger scales, canopy structure greatly influences satellite-based surface temperature measurements, as the structure impacts observations that are intrinsically made at varying satellite viewing angles and solar heights. In order to account for these differences, accurate modeling is again required, for example through the above-mentioned multi-layer models or with multi-source models such as SCOPE (Van der Tol 2009), in order to standardize observations. As before, in order to validate these models, detailed field observations are required. With the need for detailed surface temperature observations in mind, we have planned a series of experiments over non-dense plant canopies to investigate the use of photogrammetry techniques. Photogrammetry is normally applied to visible wavelengths to produce 3D images using point-cloud reconstruction of aerial images (for example Dandois and Ellis, 2010, 2013 over a forest). From these point-cloud models it should be possible to establish 3D plant surface temperature images when using thermal infrared array sensors. To do this, our experiments are based on the use of a thermal infrared camera mounted on a UAV. We adapt standard photogrammetry to account for the limits imposed by thermal imagery, especially the low image resolution compared with standard RGB sensors. In session B081, we intend to present the first results of our thermal photogrammetry experiments, with 3D surface temperature plots, in order to discuss and adapt our methods to the modelling community's needs.

  18. Measuring CAMD technique performance. 2. How "druglike" are drugs? Implications of Random test set selection exemplified using druglikeness classification models.

    PubMed

    Good, Andrew C; Hermsmeier, Mark A

    2007-01-01

    Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on the discipline of algorithm development. Such efforts are often pursued to the detriment of the data set selection and analysis used in said algorithm validation. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation and test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition, the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models utilizing large and supposedly heterogeneous databases are discussed.
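
    The contrast between random and ontologically separated test sets can be illustrated with a small, hedged Python sketch on synthetic clustered data (not the ACD data used in the paper): a random split shares "drug classes" between training and test sets and yields optimistic accuracy, while a group-wise split holds out whole classes.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split, GroupShuffleSplit
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)

        # Synthetic "compounds": 20 classes (clusters) of correlated descriptors,
        # with labels tied to class identity to mimic structured chemical space.
        n_classes, per_class = 20, 50
        centers = rng.normal(size=(n_classes, 8)) * 3
        X = np.vstack([c + rng.normal(size=(per_class, 8)) for c in centers])
        groups = np.repeat(np.arange(n_classes), per_class)
        y = (groups % 2 == 0).astype(int)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)

        # Random split: training and test sets share drug classes (optimistic)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        acc_random = accuracy_score(yte, clf.fit(Xtr, ytr).predict(Xte))

        # Group-wise split: whole classes held out (closer to ontological separation)
        tr, te = next(GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0).split(X, y, groups))
        acc_grouped = accuracy_score(y[te], clf.fit(X[tr], y[tr]).predict(X[te]))

        print("Random-split accuracy :", acc_random)
        print("Grouped-split accuracy:", acc_grouped)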

  19. Impact of Aquifer Heterogeneities on Autotrophic Denitrification.

    NASA Astrophysics Data System (ADS)

    McCarthy, A.; Roques, C.; Selker, J. S.; Istok, J. D.; Pett-Ridge, J. C.

    2015-12-01

    Nitrate contamination in groundwater is a major challenge that will need to be addressed by hydrogeologists throughout the world. With a drinking water standard of 10 mg/L of NO3-, innovative techniques will need to be pursued to ensure a decrease in drinking water nitrate concentrations. At the pumping site scale, the influence of and relationship between heterogeneous flow, mixing, and reactivity are not well understood. The purpose of this project is to incorporate both physical and chemical modeling techniques to better understand the effect of aquifer heterogeneities on autotrophic denitrification. We will investigate the link between heterogeneous hydraulic properties, transport, and the rate of autotrophic denitrification. Data collected in previous laboratory and pumping-site-scale studies will be used to validate the models. The ultimate objective of this project is to develop a model in which such coupled processes are better understood, resulting in best management practices for groundwater.

  20. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    DOE PAGES

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    2016-10-20

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.

  1. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
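
    To make the distinction concrete, here is a hedged toy sketch (not the authors' formulation) of the two projections for a linear full-order model advanced with backward Euler: the Galerkin ROM projects the residual with the transposed basis, while the LSPG ROM minimizes the full-order residual norm at each time step. The system, basis size, and time step are arbitrary placeholders, and the example only illustrates the mechanics of the two projections.

        import numpy as np

        rng = np.random.default_rng(3)
        n, k, dt, steps = 200, 10, 0.01, 100
        A = -np.diag(rng.uniform(0.5, 5.0, n))            # stable full-order operator
        x0 = rng.normal(size=n)

        # Snapshot-based (POD) reduced basis from a short full-order run
        S = np.linalg.inv(np.eye(n) - dt * A)             # backward-Euler step matrix
        snaps, x = [x0], x0
        for _ in range(29):
            x = S @ x
            snaps.append(x)
        V = np.linalg.svd(np.column_stack(snaps), full_matrices=False)[0][:, :k]
        M = (np.eye(n) - dt * A) @ V                      # residual operator applied to the basis

        def rom_step(q, mode):
            """One backward-Euler ROM step with residual r(q+) = (I - dt*A) V q+ - V q."""
            if mode == "galerkin":                        # project the residual with V^T
                return np.linalg.solve(V.T @ M, V.T @ (V @ q))
            return np.linalg.lstsq(M, V @ q, rcond=None)[0]   # LSPG: minimize ||r|| directly

        x_ref, qg, ql = x0.copy(), V.T @ x0, V.T @ x0
        for _ in range(steps):
            x_ref = S @ x_ref                             # full-order reference trajectory
            qg, ql = rom_step(qg, "galerkin"), rom_step(ql, "lspg")

        print("Galerkin ROM error:", np.linalg.norm(V @ qg - x_ref))
        print("LSPG ROM error    :", np.linalg.norm(V @ ql - x_ref))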

  2. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera.

    PubMed

    Clausner, Tommy; Dalal, Sarang S; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D . Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position.

  3. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera

    PubMed Central

    Clausner, Tommy; Dalal, Sarang S.; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position. PMID:28559791
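
    The surface-matching step can be illustrated with a standard SVD-based rigid registration (the Kabsch/Procrustes solution). The sketch below is a generic, hedged example with synthetic point sets, not the janus3D implementation; the point counts, noise level, and transform are assumptions.

        import numpy as np

        def rigid_fit(src, dst):
            """Least-squares rigid transform (R, t) mapping src points onto dst
            via the SVD-based Kabsch solution."""
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            return R, c_dst - R @ c_src

        rng = np.random.default_rng(0)
        photo_pts = rng.normal(size=(64, 3)) * 80.0            # photo-model facial points (mm)
        Q = np.linalg.qr(rng.normal(size=(3, 3)))[0]
        true_R = Q * np.sign(np.linalg.det(Q))                 # a proper rotation
        true_t = np.array([4.0, -2.0, 10.0])
        mri_pts = photo_pts @ true_R.T + true_t + rng.normal(0, 0.5, (64, 3))  # MRI-space points

        R, t = rigid_fit(photo_pts, mri_pts)
        aligned = photo_pts @ R.T + t
        print("RMS registration error (mm):", np.sqrt(((aligned - mri_pts) ** 2).sum(1)).mean())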

  4. Evaluating Trauma Sonography for Operational Use in the Microgravity Environment

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, Andrew W.; Jones, Jeffrey A.; Sargsyan, Ashot; Hamilton, Douglas; Melton, Shannon; Beck, George; Nicolaou, Savvas; Campbell, Mark; Dulchavsky, Scott

    2007-01-01

    Sonography is the only medical imaging modality aboard the ISS, and is likely to remain the leading imaging modality in future human space flight programs. While trauma sonography (TS) has been well recognized for terrestrial trauma settings, the technique had to be evaluated for suitability in space flight prior to adopting it as an operational capability. The authors found the following four-phased evaluative approach applicable to this task: 1) identifying standard or novel terrestrial techniques for potential use in space medicine; 2) developing and testing these techniques with suggested modifications on the ground (1g) either in clinical settings or in animal models, as appropriate; 3) evaluating and refining the techniques in parabolic flight (0g); and 4) validating and implementing for clinical use in space. In Phase I of the TS project, expert opinion and literature review suggested TS to be a potential screening tool for trauma in space. In Phase II, animal models were developed and tested in ground studies, and clinical studies were carried out in collaborating trauma centers. In Phase III, animal models were flight-tested in the NASA KC-135 Reduced Gravity Laboratory. Preliminary results of the first three phases demonstrated potential clinical utility of TS in microgravity. Phase IV studies have begun to address crew training issues, on-board imaging protocols, and data transfer procedures necessary to offer the modified TS technique for space use.

  5. The measurement of linear frequency drift in oscillators

    NASA Astrophysics Data System (ADS)

    Barnes, J. A.

    1985-04-01

    A linear drift in frequency is an important element in most stochastic models of oscillator performance. Quartz crystal oscillators often have drifts in excess of a part in 10^10 per day. Even commercial cesium beam devices often show drifts of a few parts in 10^13 per year. There are many ways to estimate the drift rates from data samples (e.g., regress the phase on a quadratic; regress the frequency on a linear; compute the simple mean of the first difference of frequency; use Kalman filters with a drift term as one element in the state vector; and others). Although most of these estimators are unbiased, they vary in efficiency (i.e., confidence intervals). Further, the estimation of confidence intervals using the standard analysis of variance (typically associated with the specific estimating technique) can give amazingly optimistic results. The source of these problems is not an error in, say, the regression techniques, but rather the problems arise from correlations within the residuals. That is, the oscillator model is often not consistent with constraints on the analysis technique or, in other words, some specific analysis techniques are often inappropriate for the task at hand. The appropriateness of a specific analysis technique is critically dependent on the oscillator model and can often be checked with a simple whiteness test on the residuals.
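
    A hedged numerical sketch of the point about estimator efficiency: simulate fractional-frequency data with a known linear drift plus white frequency noise, apply three of the unbiased estimators mentioned above (quadratic fit to phase, linear fit to frequency, mean first difference of frequency), and compare their scatter over repeated runs. The noise model and magnitudes are illustrative only.

        import numpy as np

        rng = np.random.default_rng(0)
        n, tau, drift = 1000, 1.0, 1e-13          # samples, sampling interval (s), true drift (per s)

        def estimate_drift(freq):
            """Three unbiased drift estimators applied to a fractional-frequency series."""
            t = np.arange(len(freq)) * tau
            phase = np.cumsum(freq) * tau
            quad = 2.0 * np.polyfit(t, phase, 2)[0]          # quadratic fit to phase
            lin = np.polyfit(t, freq, 1)[0]                  # linear fit to frequency
            diff = np.mean(np.diff(freq)) / tau              # mean first difference of frequency
            return quad, lin, diff

        estimates = np.array([estimate_drift(drift * np.arange(n) * tau
                                             + rng.normal(0, 1e-12, n))
                              for _ in range(500)])
        for name, col in zip(("phase quadratic", "frequency linear", "first difference"),
                             estimates.T):
            print(f"{name:18s} mean = {col.mean():.3e}  std = {col.std():.3e}")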

  6. A New Data Representation Based on Training Data Characteristics to Extract Drug Name Entity in Medical Text

    PubMed Central

    Basaruddin, T.

    2016-01-01

    One essential task in information extraction from the medical corpus is drug name recognition. Compared with text sources from other domains, medical text mining poses more challenges: for example, more unstructured text, the fast-growing addition of new terms, a wide range of name variations for the same drug, the lack of labeled datasets and external knowledge, and multiple token representations for a single drug name. Although many approaches have been proposed to address the task, some problems remain, with poor F-score performance (less than 0.75). This paper presents a new treatment of data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distribution and word similarities obtained from word embedding training. The first technique is evaluated with a standard NN model, that is, an MLP. The second technique involves two deep network classifiers, that is, DBN and SAE. The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, that is, an LSTM. In extracting drug name entities, the third technique gives the best F-score performance compared to the state of the art, with an average F-score of 0.8645. PMID:27843447

  7. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study.

    PubMed

    Brodén, Cyrus; Olivecrona, Henrik; Maguire, Gerald Q; Noz, Marilyn E; Zeleznik, Michael P; Sköldenberg, Olof

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting.

  8. Identification of an internal combustion engine model by nonlinear multi-input multi-output system identification. Ph.D. Thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luh, G.C.

    1994-01-01

    This thesis presents the application of advanced modeling techniques to construct nonlinear forward and inverse models of internal combustion engines for the detection and isolation of incipient faults. The NARMAX (Nonlinear Auto-Regressive Moving Average modeling with eXogenous inputs) technique of system identification proposed by Leontaritis and Billings was used to derive the nonlinear model of an internal combustion engine over operating conditions corresponding to the I/M240 cycle. The I/M240 cycle is a standard proposed by the United States Environmental Protection Agency to measure tailpipe emissions in inspection and maintenance programs and consists of a driving schedule developed for the purpose of testing compliance with federal vehicle emission standards for carbon monoxide, unburned hydrocarbons, and nitrogen oxides. The experimental work for model identification and validation was performed on a 3.0 liter V6 engine installed in an engine test cell at the Center for Automotive Research at The Ohio State University. In this thesis, different types of model structures were proposed to obtain multi-input multi-output (MIMO) nonlinear NARX models. A modification of the algorithm proposed by He and Asada was used to estimate the robust orders of the derived MIMO nonlinear models. A methodology for the analysis of inverse NARX models was developed. Two methods were proposed to derive the inverse NARX model: (1) inversion from the forward NARX model; and (2) direct identification of the inverse model from the output-input data set. In this thesis, invertibility, the minimum-phase characteristic of the zero dynamics, and stability analysis of the NARX forward model are also discussed. Stability in the sense of Lyapunov is also investigated to check the stability of the identified forward and inverse models. This application of the inverse problem leads to the estimation of unknown inputs and to actuator fault diagnosis.
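
    As a generic, hedged illustration of NARX-style identification (not the thesis code and not an engine model), the sketch below builds a regressor matrix from lagged inputs, lagged outputs, and a bilinear cross term, fits the parameters by least squares, and reports the one-step prediction error on synthetic data.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 2000
        u = rng.uniform(-1, 1, N)                        # exogenous input (e.g., throttle command)
        y = np.zeros(N)
        for k in range(2, N):                            # "true" system used to generate data
            y[k] = (0.5 * y[k - 1] - 0.1 * y[k - 2] + 0.8 * u[k - 1]
                    + 0.2 * y[k - 1] * u[k - 1] + 0.01 * rng.normal())

        def narx_regressors(y, u, k):
            """Candidate NARX terms: lagged outputs/inputs plus a bilinear cross term."""
            return np.array([y[k - 1], y[k - 2], u[k - 1], u[k - 2],
                             y[k - 1] * u[k - 1], y[k - 1] ** 2, 1.0])

        K = np.arange(2, N)
        Phi = np.array([narx_regressors(y, u, k) for k in K])
        theta, *_ = np.linalg.lstsq(Phi, y[K], rcond=None)
        pred = Phi @ theta
        print("Identified parameters:", np.round(theta, 3))
        print("One-step RMS error   :", np.sqrt(np.mean((pred - y[K]) ** 2)))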

  9. 48 CFR 9905.505-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    48 CFR 9905.505-50, Techniques for application. Federal Acquisition Regulations System; Cost Accounting Standards Board, Office of Federal Procurement Policy; Cost Accounting Standards for Educational Institutions. Excerpt: "... this cost accounting principle does not require that allocation of unallowable costs to final cost..."

  10. 48 CFR 9904.403-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    48 CFR 9904.403-50, Techniques for application. Federal Acquisition Regulations System; Cost Accounting Standards Board, Office of Federal Procurement Policy, Office of Management and Budget; Procurement Practices and Cost Accounting Standards. Excerpt: "(a)(1) Separate..."

  11. Body Composition of Bangladeshi Children: Comparison and Development of Leg-to-Leg Bioelectrical Impedance Equation

    PubMed Central

    Khan, I.; Hawlader, Sophie Mohammad Delwer Hossain; Arifeen, Shams El; Moore, Sophie; Hills, Andrew P.; Wells, Jonathan C.; Persson, Lars-Åke; Kabir, Iqbal

    2012-01-01

    The aim of this study was to investigate the validity of the Tanita TBF 300A leg-to-leg bioimpedance analyzer for estimating fat-free mass (FFM) in Bangladeshi children aged 4-10 years and to develop novel prediction equations for use in this population, using deuterium dilution as the reference method. Two hundred Bangladeshi children were enrolled. The isotope dilution technique with deuterium oxide was used for estimation of total body water (TBW). FFM estimated by the Tanita analyzer was compared with results of the deuterium oxide dilution technique. Novel prediction equations for estimating FFM were created using linear regression models, fitting the child's height and impedance as predictors. There was a significant difference in FFM and percentage of body fat (BF%) between methods (p<0.01), with the Tanita analyzer underestimating TBW in boys (p=0.001) and underestimating BF% in girls (p<0.001). A basic linear regression model with height and impedance explained 83% of the variance in FFM estimated by the deuterium oxide dilution technique. The best-fit equation to predict FFM from linear regression modelling was achieved by adding weight, sex, and age to the basic model, bringing the adjusted R2 to 89% (standard error=0.90, p<0.001). These data suggest that the Tanita analyzer may be a valid field-assessment technique in Bangladeshi children when using population-specific prediction equations, such as the ones developed here. PMID:23082630
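
    A hedged sketch of how such population-specific prediction equations are typically developed: regress reference FFM on height and impedance (the basic model), then add weight, sex, and age, and compare R^2. The data below are synthetic placeholders, not the study's measurements.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        n = 200
        height = rng.uniform(100, 140, n)                 # cm
        impedance = rng.uniform(550, 850, n)              # ohm
        weight = rng.uniform(14, 35, n)                   # kg
        sex = rng.integers(0, 2, n)                       # 0 = girl, 1 = boy
        age = rng.uniform(4, 10, n)                       # years

        # Synthetic "reference" FFM loosely built around the impedance index h^2/Z
        ffm_ref = (0.6 * height ** 2 / impedance + 0.25 * weight + 0.8 * sex
                   + 0.1 * age + 2.0 + rng.normal(0, 0.9, n))

        X_basic = np.column_stack([height, impedance])
        X_full = np.column_stack([height, impedance, weight, sex, age])
        for name, X in (("basic (height + impedance)", X_basic), ("full model", X_full)):
            fit = LinearRegression().fit(X, ffm_ref)
            print(f"{name:28s} R^2 = {fit.score(X, ffm_ref):.2f}")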

  12. Standardization of Laser Methods and Techniques for Vibration Measurements and Calibrations

    NASA Astrophysics Data System (ADS)

    von Martens, Hans-Jürgen

    2010-05-01

    The realization and dissemination of the SI units of motion quantities (vibration and shock) have been based on laser interferometer methods specified in international documentary standards. New and refined laser methods and techniques developed by national metrology institutes and by leading manufacturers in the past two decades have been swiftly specified as standard methods for inclusion in the ISO 16063 series of international documentary standards. A survey of ISO standards for the calibration of vibration and shock transducers demonstrates the extended ranges and improved accuracy (measurement uncertainty) of laser methods and techniques for vibration and shock measurements and calibrations. The first standard for the calibration of laser vibrometers by laser interferometry, or by a reference accelerometer calibrated by laser interferometry (ISO 16063-41), is at the Draft International Standard (DIS) stage and may be issued by the end of 2010. The standard methods with refined techniques proved to achieve wider measurement ranges and smaller measurement uncertainties than those specified in the ISO standards. The applicability of different standardized interferometer methods to vibrations at high frequencies was recently demonstrated up to 347 kHz (acceleration amplitudes up to 350 km/s2). The relative deviations between the amplitude measurement results of the different interferometer methods that were applied simultaneously differed by less than 1% in all cases.

  13. There aren't Non-Standard Solutions for the Braid Group Representations of the QYBE Associated with 10-D Representations of SU(4)

    NASA Technical Reports Server (NTRS)

    Yijun, Huang; Guochen, Yu; Hong, Sun

    1996-01-01

    It is well known that the quantum Yang-Baxter equations (QYBE) play an important role in various areas of theoretical and mathematical physics, such as completely integrable systems in (1+1) dimensions, exactly solvable models in statistical mechanics, the quantum inverse scattering method, and conformal field theories in two dimensions. Recently, considerable progress has been made in constructing solutions of the QYBE associated with representations of Lie algebras. It has been shown that in some cases new solutions exist in addition to the standard ones, while in other cases there are no non-standard solutions. In this paper, by employing weight conservation and diagrammatic techniques, we show that the solutions associated with the 10-dimensional representations of SU(4) are only the standard ones.

  14. Use of Standardized, Quantitative Digital Photography in a Multicenter Web-based Study

    PubMed Central

    Molnar, Joseph A.; Lew, Wesley K.; Rapp, Derek A.; Gordon, E. Stanley; Voignier, Denise; Rushing, Scott; Willner, William

    2009-01-01

    Objective: We developed a Web-based, blinded, prospective, randomized, multicenter trial, using standardized digital photography to clinically evaluate hand burn depth and accurately determine wound area with digital planimetry. Methods: Photos in each center were taken with identical digital cameras with standardized settings on a custom backdrop developed at Wake Forest University containing a gray, white, black, and centimeter scale. The images were downloaded, transferred via the Web, and stored on servers at the principal investigator's home institution. Color adjustments to each photo were made using Adobe Photoshop 6.0 (Adobe, San Jose, Calif). In an initial pilot study, model hands marked with circles of known areas were used to determine the accuracy of the planimetry technique. Two-dimensional digital planimetry using SigmaScan Pro 5.0 (SPSS Science, Chicago, Ill) was used to calculate wound area from the digital images. Results: Digital photography is a simple and cost-effective method for quantifying wound size when used in conjunction with digital planimetry (SigmaScan) and photo enhancement (Adobe Photoshop) programs. The accuracy of the SigmaScan program in calculating predetermined areas was within 4.7% (95% CI, 3.4%–5.9%). Dorsal hand burns of the initial 20 patients in a national study involving several centers were evaluated with this technique. Images obtained by individuals denying experience in photography proved reliable and useful for clinical evaluation and quantification of wound area. Conclusion: Standardized digital photography may be used quantitatively in a Web-based, multicenter trial of burn care. This technique could be modified for other medical studies with visual endpoints. PMID:19212431

  15. Use of standardized, quantitative digital photography in a multicenter Web-based study.

    PubMed

    Molnar, Joseph A; Lew, Wesley K; Rapp, Derek A; Gordon, E Stanley; Voignier, Denise; Rushing, Scott; Willner, William

    2009-01-01

    We developed a Web-based, blinded, prospective, randomized, multicenter trial, using standardized digital photography to clinically evaluate hand burn depth and accurately determine wound area with digital planimetry. Photos in each center were taken with identical digital cameras with standardized settings on a custom backdrop developed at Wake Forest University containing a gray, white, black, and centimeter scale. The images were downloaded, transferred via the Web, and stored on servers at the principal investigator's home institution. Color adjustments to each photo were made using Adobe Photoshop 6.0 (Adobe, San Jose, Calif). In an initial pilot study, model hands marked with circles of known areas were used to determine the accuracy of the planimetry technique. Two-dimensional digital planimetry using SigmaScan Pro 5.0 (SPSS Science, Chicago, Ill) was used to calculate wound area from the digital images. Digital photography is a simple and cost-effective method for quantifying wound size when used in conjunction with digital planimetry (SigmaScan) and photo enhancement (Adobe Photoshop) programs. The accuracy of the SigmaScan program in calculating predetermined areas was within 4.7% (95% CI, 3.4%-5.9%). Dorsal hand burns of the initial 20 patients in a national study involving several centers were evaluated with this technique. Images obtained by individuals denying experience in photography proved reliable and useful for clinical evaluation and quantification of wound area. Standardized digital photography may be used quantitatively in a Web-based, multicenter trial of burn care. This technique could be modified for other medical studies with visual endpoints.
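
    A minimal sketch of the planimetry principle (not the SigmaScan workflow): the centimeter scale in the backdrop gives a millimetres-per-pixel factor, and a segmented region's pixel count converts to area. All numbers are assumed for illustration.

        import numpy as np

        # Suppose the centimeter scale in the backdrop spans 472 pixels for 50 mm
        # (an assumed example), giving the image scale in mm per pixel.
        mm_per_px = 50.0 / 472.0

        # A toy "segmentation": boolean mask of wound pixels in the photo.
        yy, xx = np.mgrid[0:1000, 0:1000]
        wound_mask = (yy - 500) ** 2 + (xx - 480) ** 2 < 150 ** 2   # circular region

        area_mm2 = wound_mask.sum() * mm_per_px ** 2
        true_area = np.pi * (150 * mm_per_px) ** 2
        print(f"Planimetry area: {area_mm2:.1f} mm^2 (analytic circle: {true_area:.1f} mm^2)")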

  16. The design of a turboshaft speed governor using modern control techniques

    NASA Technical Reports Server (NTRS)

    Delosreyes, G.; Gouchoe, D. R.

    1986-01-01

    The objectives of this program were: to verify the model of off schedule compressor variable geometry in the T700 turboshaft engine nonlinear model; to evaluate the use of the pseudo-random binary noise (PRBN) technique for obtaining engine frequency response data; and to design a high performance power turbine speed governor using modern control methods. Reduction of T700 engine test data generated at NASA-Lewis indicated that the off schedule variable geometry effects were accurate as modeled. Analysis also showed that the PRBN technique combined with the maximum likelihood model identification method produced a Bode frequency response that was as accurate as the response obtained from standard sinewave testing methods. The frequency response verified the accuracy of linear models consisting of engine partial derivatives and used for design. A power turbine governor was designed using the Linear Quadratic Regulator (LQR) method of full state feedback control. A Kalman filter observer was used to estimate helicopter main rotor blade velocity. Compared to the baseline T700 power turbine speed governor, the LQR governor reduced droop up to 25 percent for a 490 shaft horsepower transient in 0.1 sec simulating a wind gust, and up to 85 percent for a 700 shaft horsepower transient in 0.5 sec simulating a large collective pitch angle transient.
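
    The LQR step itself can be sketched in a few lines. The toy two-state model, weights, and gains below are illustrative assumptions, not the T700 linear model used in the program; the sketch only shows how a state-feedback gain is obtained from the continuous-time algebraic Riccati equation.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Toy linearized plant: states = [power-turbine speed error, rotor speed error]
        A = np.array([[-0.5, 0.2],
                      [0.1, -0.8]])
        B = np.array([[1.0],
                      [0.0]])                    # control input: fuel flow command
        Q = np.diag([10.0, 1.0])                 # penalize speed droop most heavily
        R = np.array([[0.1]])                    # penalize control effort

        P = solve_continuous_are(A, B, Q, R)     # Riccati solution
        K = np.linalg.solve(R, B.T @ P)          # LQR gain: u = -K x
        closed_loop = A - B @ K
        print("LQR gain K:", K)
        print("Closed-loop eigenvalues:", np.linalg.eigvals(closed_loop))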

  17. Fully laparoscopic left hepatectomy - a technical reference proposed for standard practice compared to the open approach: a retrospective propensity score model.

    PubMed

    Valente, Roberto; Sutcliffe, Robert; Levesque, Eric; Costa, Mara; De' Angelis, Nicola; Tayar, Claude; Cherqui, Daniel; Laurent, Alexis

    2018-04-01

    Laparoscopic left hemihepatectomy (LLH) may be an alternative to the open approach (OLH). There are several original variations in the technical aspects of LLH, and no accepted standard. The aim of this study is to assess the safety and effectiveness of the technique developed at Henri Mondor Hospital since 1996. The technique of LLH was conceived for safety and for the training of two mature generations of lead surgeons. The technique includes full laparoscopy, ventral approach to the common trunk, extrahepatic pedicle dissection, CUSA® parenchymal transection, division of the left hilar plate laterally to the Arantius ligament, and ventral transection of the left hepatic vein. The outcomes of LLH and OLH were compared. Perioperative analysis included intraoperative, postoperative, and histology variables. Propensity score matching was undertaken on background covariates including age, ASA, BMI, fibrosis, steatosis, tumour size, and specimen weight. Seventeen LLH and 51 OLH procedures were performed from 1996 to 2014, with perioperative mortality rates of 0% and 6%, respectively. In the LLH group, two patients underwent conversion to open surgery. Propensity score matching selected 10 LLH/OLH pairs. The LLH group had a higher proportion of procedures for benign disease. LLH was associated with longer operating time and less blood loss. Perioperative complications occurred in 30% (LLH) and 10% (OLH) (p = 1). Mortality and ITU stay were similar. This technique is recommended as a possible technical reference for standard LLH. Copyright © 2017 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.
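
    As a generic, hedged illustration of the propensity score matching used in such comparisons (not the authors' analysis), the sketch below fits a logistic model of group membership on a few covariates and performs 1:1 nearest-neighbour matching on the estimated propensity scores, using synthetic data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(0)
        n = 68
        X = np.column_stack([rng.normal(60, 10, n),      # age
                             rng.integers(1, 4, n),      # ASA class
                             rng.normal(26, 4, n),       # BMI
                             rng.normal(40, 15, n)])     # tumour size (mm)
        treated = rng.binomial(1, 0.25, n)               # e.g., laparoscopic group

        # Propensity scores from a logistic model of treatment on covariates
        ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

        # 1:1 nearest-neighbour matching of each treated case to a control (with replacement)
        controls = np.flatnonzero(treated == 0)
        nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
        _, idx = nn.kneighbors(ps[treated == 1].reshape(-1, 1))
        matched_controls = controls[idx.ravel()]
        print("Matched control indices:", matched_controls)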

  18. An improved water-filled impedance tube.

    PubMed

    Wilson, Preston S; Roy, Ronald A; Carey, William M

    2003-06-01

    A water-filled impedance tube capable of improved measurement accuracy and precision is reported. The measurement instrument employs a variation of the standardized two-sensor transfer function technique. Performance improvements were achieved through minimization of elastic waveguide effects and through the use of sound-hard wall-mounted acoustic pressure sensors. Acoustic propagation inside the water-filled impedance tube was found to be well described by a plane wave model, which is a necessary condition for the technique. Measurements of the impedance of a pressure-release terminated transmission line, and the reflection coefficient from a water/air interface, were used to verify the system.

  19. Label-free evanescent microscopy for membrane nano-tomography in living cells.

    PubMed

    Bon, Pierre; Barroca, Thomas; Lévèque-Fort, Sandrine; Fort, Emmanuel

    2014-11-01

    We show that through-the-objective evanescent microscopy (epi-EM) is a powerful technique for imaging membranes in living cells. Readily implementable on a standard inverted microscope, this technique enables full-field and real-time tracking of membrane processes without labeling, and thus without signal fading. In addition, we demonstrate that the membrane/interface distance can be retrieved with 10 nm precision using a multilayer Fresnel model. We apply this nano-axial tomography of living cell membranes to retrieve quantitative information on membrane invagination dynamics. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
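
    For orientation, the evanescent-field decay behind such distance measurements can be computed from the standard penetration-depth expression d = λ / (4π·sqrt(n1²·sin²θ − n2²)); the wavelength, refractive indices, and angle below are typical illustrative values, not those of the paper.

        import numpy as np

        wavelength_nm = 488.0           # illumination wavelength (assumed)
        n1, n2 = 1.518, 1.37            # glass / cytoplasm refractive indices (typical values)
        theta = np.deg2rad(68.0)        # incidence angle beyond the critical angle

        theta_c = np.arcsin(n2 / n1)
        d = wavelength_nm / (4 * np.pi * np.sqrt((n1 * np.sin(theta)) ** 2 - n2 ** 2))
        print(f"Critical angle: {np.rad2deg(theta_c):.1f} deg")
        print(f"Penetration depth: {d:.0f} nm")
        # Relative evanescent intensity at a membrane/interface distance z:
        for z in (0, 50, 100, 200):
            print(f"I(z={z:3d} nm)/I0 = {np.exp(-z / d):.2f}")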

  20. Precise measurement of the half-life of the Fermi β decay of 26mAl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Rebecca J.; Thompson, Maxwell N.; Rassool, Roger P.

    2011-08-15

    State-of-the-art signal digitization and analysis techniques have been used to measure the half-life of the Fermi β decay of 26mAl. The half-life was determined to be 6347.8 ± 2.5 ms. This new datum contributes to the experimental testing of the conserved-vector-current hypothesis and the required unitarity of the Cabibbo-Kobayashi-Maskawa matrix: two essential components of the standard model. Detailed discussion of the experimental techniques and data analysis and a thorough investigation of the statistical and systematic uncertainties are presented.
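
    The core of such a half-life analysis can be sketched as an exponential-plus-background fit to a binned decay curve, with the decay constant converted to a half-life. The sketch below uses synthetic Poisson data generated with the quoted half-life, not the experiment's digitized signals.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(0)
        true_half_life = 6347.8e-3                       # s
        lam_true = np.log(2) / true_half_life

        t = np.linspace(0, 60, 600)                      # s, binned counting times
        expected = 5000 * np.exp(-lam_true * t) + 20     # decays per bin + flat background
        counts = rng.poisson(expected)

        def decay(t, n0, lam, bkg):
            return n0 * np.exp(-lam * t) + bkg

        popt, pcov = curve_fit(decay, t, counts, p0=(4000, 0.1, 10),
                               sigma=np.sqrt(np.maximum(counts, 1)))
        half_life = np.log(2) / popt[1]
        err = np.log(2) / popt[1] ** 2 * np.sqrt(pcov[1, 1])
        print(f"Fitted half-life: {half_life * 1e3:.1f} +/- {err * 1e3:.1f} ms")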

  1. Prototyping an online wetland ecosystem services model using open model sharing standards

    USGS Publications Warehouse

    Feng, M.; Liu, S.; Euliss, N.H.; Young, Caitlin; Mushet, D.M.

    2011-01-01

    Great interest currently exists for developing ecosystem models to forecast how ecosystem services may change under alternative land use and climate futures. Ecosystem services are diverse and include supporting services or functions (e.g., primary production, nutrient cycling), provisioning services (e.g., wildlife, groundwater), regulating services (e.g., water purification, floodwater retention), and even cultural services (e.g., ecotourism, cultural heritage). Hence, the knowledge base necessary to quantify ecosystem services is broad and derived from many diverse scientific disciplines. Building the required interdisciplinary models is especially challenging as modelers from different locations and times may develop the disciplinary models needed for ecosystem simulations, and these models must be identified and made accessible to the interdisciplinary simulation. Additional difficulties include inconsistent data structures, formats, and metadata required by geospatial models as well as limitations on computing, storage, and connectivity. Traditional standalone and closed network systems cannot fully support sharing and integrating interdisciplinary geospatial models from variant sources. To address this need, we developed an approach to openly share and access geospatial computational models using distributed Geographic Information System (GIS) techniques and open geospatial standards. We included a means to share computational models compliant with Open Geospatial Consortium (OGC) Web Processing Services (WPS) standard to ensure modelers have an efficient and simplified means to publish new models. To demonstrate our approach, we developed five disciplinary models that can be integrated and shared to simulate a few of the ecosystem services (e.g., water storage, waterfowl breeding) that are provided by wetlands in the Prairie Pothole Region (PPR) of North America.

  2. Evidence for the associated production of a W boson and a top quark at ATLAS

    NASA Astrophysics Data System (ADS)

    Koll, James

    This thesis discusses a search for the Standard Model single top Wt-channel process. An analysis has been performed searching for the Wt-channel process using 4.7 fb^-1 of integrated luminosity collected with the ATLAS detector at the Large Hadron Collider. A boosted decision tree is trained using machine learning techniques to increase the separation between signal and background. A profile likelihood fit is used to measure the cross-section of the Wt-channel process at σ(pp → Wt + X) = 16.8 ± 2.9 (stat) ± 4.9 (syst) pb, consistent with the Standard Model prediction. This fit is also used to generate pseudoexperiments to calculate the significance, finding an observed (expected) 3.3σ (3.4σ) excess over background.

  3. Exploring Flavor Physics with Lattice QCD

    NASA Astrophysics Data System (ADS)

    Du, Daping; Fermilab/MILC Collaborations Collaboration

    2016-03-01

    The Standard Model has been a very good description of subatomic particle physics. In the search for physics beyond the Standard Model in the context of flavor physics, it is important to sharpen our probes using some gold-plated processes (such as rare B decays), which requires knowledge of the input parameters, such as the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements and other nonperturbative quantities, with sufficient precision. Lattice QCD is so far the only first-principles method which can compute these quantities with competitive and systematically improvable precision using state-of-the-art simulation techniques. I will discuss the recent progress of lattice QCD calculations of some of these nonperturbative quantities and their applications in flavor physics. I will also discuss the implications and future perspectives of these calculations in flavor physics.

  4. Cloud Computing Security Model with Combination of Data Encryption Standard Algorithm (DES) and Least Significant Bit (LSB)

    NASA Astrophysics Data System (ADS)

    Basri, M.; Mawengkang, H.; Zamzami, E. M.

    2018-03-01

    Limited local storage resources are one reason to switch to cloud storage. The confidentiality and security of data stored in the cloud are very important. One way to maintain the confidentiality and security of such data is to use cryptographic techniques. The Data Encryption Standard (DES) is a block cipher used as a standard symmetric encryption algorithm. DES produces 8 cipher blocks that are combined into one ciphertext, but this ciphertext is weak against brute-force attacks. Therefore, the 8 cipher blocks are hidden in 8 random images using the Least Significant Bit (LSB) algorithm, from which the DES cipher results are later extracted and merged back into one.
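
    The LSB step can be sketched independently of DES (implementing DES itself is out of scope here, so a random byte block stands in for one DES cipher block): each cipher bit replaces the least significant bit of one pixel of a cover image and is later read back. Array sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        cipher_block = rng.integers(0, 256, 8, dtype=np.uint8)   # placeholder for one DES block
        cover = rng.integers(0, 256, (8, 8), dtype=np.uint8)     # a tiny random "image"

        # Embed: one cipher bit into the least significant bit of each pixel
        bits = np.unpackbits(cipher_block)                       # 64 bits for 64 pixels
        stego = (cover & 0xFE) | bits.reshape(cover.shape)

        # Extract: read back the LSBs and repack them into bytes
        recovered = np.packbits((stego & 1).ravel())
        assert np.array_equal(recovered, cipher_block)
        print("Recovered cipher block:", recovered)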

  5. MRI Proton Density Fat Fraction Is Robust Across the Biologically Plausible Range of Triglyceride Spectra in Adults With Nonalcoholic Steatohepatitis

    PubMed Central

    Hong, Cheng William; Mamidipalli, Adrija; Hooker, Jonathan C.; Hamilton, Gavin; Wolfson, Tanya; Chen, Dennis H.; Dehkordy, Soudabeh Fazeli; Middleton, Michael S.; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.

    2017-01-01

    Background: Proton density fat fraction (PDFF) estimation requires spectral modeling of the hepatic triglyceride (TG) signal. Deviations in the TG spectrum may occur, leading to bias in PDFF quantification. Purpose: To investigate the effects of varying six-peak TG spectral models on PDFF estimation bias. Study Type: Retrospective secondary analysis of prospectively acquired clinical research data. Population: Forty-four adults with biopsy-confirmed nonalcoholic steatohepatitis. Field Strength/Sequence: Confounder-corrected chemical-shift-encoded 3T MRI (using a 2D multiecho gradient-recalled echo technique with magnitude reconstruction) and MR spectroscopy. Assessment: In each patient, 61 pairs of colocalized MRI-PDFF and MRS-PDFF values were estimated: one pair used the standard six-peak spectral model, the other 60 were six-peak variants calculated by adjusting spectral model parameters over their biologically plausible ranges. MRI-PDFF values calculated using each variant model and the standard model were compared, and the agreement between MRI-PDFF and MRS-PDFF was assessed. Statistical Tests: MRS-PDFF and MRI-PDFF were summarized descriptively. Bland–Altman (BA) analyses were performed between PDFF values calculated using each variant model and the standard model. Linear regressions were performed between BA biases and mean PDFF values for each variant model, and between MRI-PDFF and MRS-PDFF. Results: Using the standard model, the mean MRS-PDFF of the study population was 17.9±8.0% (range: 4.1–34.3%). The difference between the highest and lowest mean variant MRI-PDFF values was 1.5%. Relative to the standard model, the model with the greatest absolute BA bias overestimated PDFF by 1.2%. Bias increased with increasing PDFF (P < 0.0001 for 59 of the 60 variant models). MRI-PDFF and MRS-PDFF agreed closely for all variant models (R2=0.980, P < 0.0001). Data Conclusion: Over a wide range of hepatic fat content, PDFF estimation is robust across the biologically plausible range of TG spectra. Although absolute estimation bias increased with higher PDFF, its magnitude was small and unlikely to be clinically meaningful. Level of Evidence: 3. Technical Efficacy: Stage 2. PMID:28851124
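
    A hedged sketch of the Bland-Altman portion of such an analysis: compute the bias and limits of agreement between a variant-model and standard-model PDFF estimate, and regress the bias on the mean to test whether it grows with PDFF. Random numbers stand in for the patient data.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 44
        pdff_standard = rng.uniform(4, 34, n)                        # % fat fraction (synthetic)
        pdff_variant = pdff_standard * 1.03 + rng.normal(0, 0.4, n)  # variant-model estimates

        diff = pdff_variant - pdff_standard
        mean = (pdff_variant + pdff_standard) / 2.0

        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        slope, intercept = np.polyfit(mean, diff, 1)                 # does bias grow with PDFF?

        print(f"Bland-Altman bias: {bias:.2f}%  (limits of agreement +/- {loa:.2f}%)")
        print(f"Bias vs mean PDFF: slope = {slope:.3f} per % PDFF")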

  6. Three-Dimensional Printing: An Aid to Epidural Access for Neuromodulation.

    PubMed

    Taverner, Murray G; Monagle, John P

    2017-08-01

    This case report details the use of three-dimensional (3D) printing as an aid to neuromodulation. A patient is described in whom previous attempts at spinal neuromodulation had failed due to lack of epidural or intrathecal access; the use of a 3D printed model allowed for improved planning and, ultimately, success. Successful spinal cord stimulation was achieved with a plan developed from a 3D model of the patient's spine. Neuromodulation can provide the optimal analgesic technique for individual patients, but at times it can fail due to lack of access to the site of intervention, in this case epidural access. 3D printing may provide additional information to improve the likelihood of access when anatomy is distorted and standard approaches prove difficult. © 2017 International Neuromodulation Society.

  7. Improvements to Wire Bundle Thermal Modeling for Ampacity Determination

    NASA Technical Reports Server (NTRS)

    Rickman, Steve L.; Iannello, Christopher J.; Shariff, Khadijah

    2017-01-01

    Determining current carrying capacity (ampacity) of wire bundles in aerospace vehicles is critical not only to safety but also to efficient design. Published standards provide guidance on determining wire bundle ampacity but offer little flexibility for configurations where wire bundles of mixed gauges and currents are employed with varying external insulation jacket surface properties. Thermal modeling has been employed to develop techniques that assist in ampacity determination for these complex configurations. Previous developments allowed analysis of wire bundle configurations but were constrained to configurations comprising fewer than 50 elements. Additionally, for vacuum analyses, configurations with very low emittance external jackets suffered from numerical instability in the solution. A new thermal modeler is presented that accommodates larger configurations and is not numerically constrained for low bundle infrared emissivity calculations. Formulation of key internal radiation and interface conductance parameters is discussed, including the effects of temperature and air pressure on wire-to-wire thermal conductance. Test cases comparing model-predicted ampacity with that calculated from standards documents are presented.
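
    As a rough illustration of the underlying thermal balance only (not the bundle model described here), a single wire's steady-state ampacity follows from equating I^2*R heating to the heat lost through an effective thermal resistance; all parameter values below are hypothetical.

```python
import math

def single_wire_ampacity(t_max_c, t_amb_c, r_elec_ohm_per_m, r_th_k_m_per_w):
    """Steady-state ampacity of one wire: I**2 * R_elec = (T_max - T_amb) / R_th."""
    delta_t = t_max_c - t_amb_c
    return math.sqrt(delta_t / (r_th_k_m_per_w * r_elec_ohm_per_m))

# Hypothetical values: 33 mOhm/m conductor resistance, 2 K*m/W effective thermal resistance
print(single_wire_ampacity(t_max_c=200.0, t_amb_c=70.0,
                           r_elec_ohm_per_m=0.033, r_th_k_m_per_w=2.0))
```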

  8. A Formal Model of Partitioning for Integrated Modular Avionics

    NASA Technical Reports Server (NTRS)

    DiVito, Ben L.

    1998-01-01

    The aviation industry is gradually moving toward the use of integrated modular avionics (IMA) for civilian transport aircraft. An important concern for IMA is ensuring that applications are safely partitioned so they cannot interfere with one another. We have investigated the problem of ensuring safe partitioning and logical non-interference among separate applications running on a shared Avionics Computer Resource (ACR). This research was performed in the context of ongoing standardization efforts, in particular, the work of RTCA committee SC-182, and the recently completed ARINC 653 application executive (APEX) interface standard. We have developed a formal model of partitioning suitable for evaluating the design of an ACR. The model draws from the mathematical modeling techniques developed by the computer security community. This report presents a formulation of partitioning requirements expressed first using conventional mathematical notation, then formalized using the language of SRI's Prototype Verification System (PVS). The approach is demonstrated on three candidate designs, each an abstraction of features found in real systems.

  9. 48 CFR 9904.409-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... 9904.409-50 Section 9904.409-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.409-50 Techniques for application. (a) Determination of... of consumption of services in the cost accounting periods included in such life. In selecting service...

  10. 48 CFR 9904.414-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the case of process cost accounting systems, the contracting parties may agree to substitute an.... 9904.414-50 Section 9904.414-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.414-50 Techniques for application. (a) The investment...

  11. 48 CFR 9904.404-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... 9904.404-50 Section 9904.404-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.404-50 Techniques for application. (a) The cost to...

  12. 48 CFR 9904.405-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... 9904.405-50 Section 9904.405-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.405-50 Techniques for application. (a) The detail and...

  13. 48 CFR 9904.406-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... 9904.406-50 Section 9904.406-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.406-50 Techniques for application. (a) The cost of an...

  14. Accounting for methodological, structural, and parameter uncertainty in decision-analytic models: a practical guide.

    PubMed

    Bilcke, Joke; Beutels, Philippe; Brisson, Marc; Jit, Mark

    2011-01-01

    Accounting for uncertainty is now a standard part of decision-analytic modeling and is recommended by many health technology agencies and published guidelines. However, the scope of such analyses is often limited, even though techniques have been developed for presenting the effects of methodological, structural, and parameter uncertainty on model results. To help bring these techniques into mainstream use, the authors present a step-by-step guide that offers an integrated approach to account for different kinds of uncertainty in the same model, along with a checklist for assessing the way in which uncertainty has been incorporated. The guide also addresses special situations such as when a source of uncertainty is difficult to parameterize, resources are limited for an ideal exploration of uncertainty, or evidence to inform the model is not available or not reliable. Techniques for identifying the sources of uncertainty that most influence results are also described. Besides guiding analysts, the guide and checklist may be useful to decision makers who need to assess how well uncertainty has been accounted for in a decision-analytic model before using the results to make a decision.
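
    A minimal sketch of the parameter-uncertainty part of such an analysis, propagating Monte Carlo draws through a toy two-strategy decision model; the distributions, payoffs, and willingness-to-pay threshold are hypothetical and do not come from the guide.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                        # Monte Carlo draws over parameter uncertainty

# Hypothetical parameter distributions
p_event_std = rng.beta(20, 80, n)                 # event risk under standard care
rel_risk_new = rng.lognormal(np.log(0.8), 0.1, n) # treatment effect of the new strategy
cost_event = rng.gamma(4.0, 2500.0, n)            # cost per event
cost_new_tx = 1_000.0                             # fixed incremental treatment cost

# Toy model: incremental cost and QALYs per patient for the new strategy
p_event_new = p_event_std * rel_risk_new
inc_cost = cost_new_tx + (p_event_new - p_event_std) * cost_event
inc_qaly = (p_event_std - p_event_new) * 0.5      # assume 0.5 QALYs lost per event

icer = inc_cost.mean() / inc_qaly.mean()
print(f"mean ICER = {icer:,.0f} per QALY")
print("P(cost-effective at 50k/QALY):", np.mean(inc_cost - 50_000 * inc_qaly < 0))
```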

  15. Accelerated testing of space mechanisms

    NASA Technical Reports Server (NTRS)

    Murray, S. Frank; Heshmat, Hooshang

    1995-01-01

    This report contains a review of various existing life prediction techniques used for a wide range of space mechanisms. Life prediction techniques utilized in other, non-space fields such as turbine engine design are also reviewed for applicability to many space mechanism issues. The development of new concepts on how various tribological processes are involved in the life of the complex mechanisms used for space applications is examined. A 'roadmap' for the complete implementation of a tribological prediction approach for complex mechanical systems, including standard procedures for test planning, analytical models for life prediction, and experimental verification of the life prediction and accelerated testing techniques, is discussed. A plan is presented to demonstrate a method for predicting the life and/or performance of a selected space mechanism mechanical component.

  16. Spin formalism and applications to new physics searches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haber, H.E.

    1994-12-01

    An introduction to spin techniques in particle physics is given. Among the topics covered are: helicity formalism and its applications to the decay and scattering of spin-1/2 and spin-1 particles, techniques for evaluating helicity amplitudes (including projection operator methods and the spinor helicity method), and density matrix techniques. The utility of polarization and spin correlations for untangling new physics beyond the Standard Model at future colliders such as the LHC and a high energy e+e- linear collider is then considered. A number of detailed examples are explored, including the search for low-energy supersymmetry, a non-minimal Higgs boson sector, and new gauge bosons beyond the W± and Z.

  17. Data needs for X-ray astronomy satellites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallman, T.

    I review the current status of atomic data for X-ray astronomy satellites. This includes some of the astrophysical issues which can be addressed, current modeling and analysis techniques, computational tools, the limitations imposed by currently available atomic data, and the validity of standard assumptions. I also discuss the future: challenges associated with future missions and goals for atomic data collection.

  18. The Development of Maritime English Learning Model Using Authentic Assessment Based Bridge Simulator in Merchant Marine Polytechnic, Makassar

    ERIC Educational Resources Information Center

    Fauzi, Ahmad; Bundu, Patta; Tahmir, Suradi

    2016-01-01

    A bridge simulator constitutes a fundamental and vital tool for ensuring that seamen and seafarers possess the required standardized competence. By using the bridge simulator technique, a reality-based study can be presented easily and delivered to students on an ongoing basis in their classroom or study place. Afterwards, the validity…

  19. Semilinear programming: applications and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, S.

    Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization, such as production smoothing, facility location, goal programming and L1 estimation, are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and as equivalent standard linear programs using a simple upper-bounded linear programming code SUBLP.
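
    For comparison with the semilinear formulation, the sketch below shows the "equivalent standard linear program" route for L1 estimation, the transformation that SLP is designed to avoid, using scipy.optimize.linprog on illustrative data.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.laplace(scale=0.3, size=n)

# L1 regression min sum|y - Xb| recast as a standard LP:
#   minimize 1'e+ + 1'e-   s.t.  Xb + e+ - e- = y,  e+, e- >= 0,  b free
c = np.concatenate([np.zeros(p), np.ones(2 * n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
bounds = [(None, None)] * p + [(0, None)] * (2 * n)
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
print("L1 estimates:", res.x[:p])
```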

  20. Blind multirigid retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2015-04-01

    Physiological nonrigid motion is inevitable when imaging, e.g., abdominal viscera, and can lead to serious deterioration of the image quality. Prospective techniques for motion correction can handle only special types of nonrigid motion, as they only allow global correction. Retrospective methods developed so far need guidance from navigator sequences or external sensors. We propose a fully retrospective nonrigid motion correction scheme that only needs raw data as an input. Our method is based on a forward model that describes the effects of nonrigid motion by partitioning the image into patches with locally rigid motion. Using this forward model, we construct an objective function that we can optimize with respect to both unknown motion parameters per patch and the underlying sharp image. We evaluate our method on both synthetic and real data in 2D and 3D. In vivo data were acquired using standard imaging sequences. The correction algorithm significantly improves the image quality. Our compute unified device architecture (CUDA)-enabled graphic processing unit implementation ensures feasible computation times. The presented technique is the first computationally feasible retrospective method that uses the raw data of standard imaging sequences and allows nonrigid motion to be corrected without guidance from external motion sensors. © 2014 Wiley Periodicals, Inc.

  1. Spacecraft thermal balance testing using infrared sources

    NASA Technical Reports Server (NTRS)

    Tan, G. B. T.; Walker, J. B.

    1982-01-01

    A thermal balance test (controlled flux intensity) on a simple black dummy spacecraft using IR lamps was performed and evaluated, the latter being aimed specifically at thermal mathematical model (TMM) verification. For reference purposes the model was also subjected to a solar simulation test (SST). The results show that the temperature distributions measured during IR testing for two different model attitudes under steady state conditions are reproducible with a TMM. The TMM test data correlation is not as accurate for IRT as for SST. Using the standard deviation of the temperature difference distribution (analysis minus test) the SST data correlation is better by a factor of 1.8 to 2.5. The lower figure applies to the measured and the higher to the computer-generated IR flux intensity distribution. Techniques of lamp power control are presented. A continuing work program is described which is aimed at quantifying the differences between solar simulation and infrared techniques for a model representing the thermal radiating surfaces of a large communications spacecraft.

  2. Multidomain proteins under force

    NASA Astrophysics Data System (ADS)

    Valle-Orero, Jessica; Andrés Rivas-Pardo, Jaime; Popa, Ionel

    2017-04-01

    Advancements in single-molecule force spectroscopy techniques such as atomic force microscopy and magnetic tweezers allow investigation of how domain folding under force can play a physiological role. Combining these techniques with protein engineering and HaloTag covalent attachment, we investigate similarities and differences between four model proteins: I10 and I91—two immunoglobulin-like domains from the muscle protein titin, and two α + β fold proteins—ubiquitin and protein L. These proteins show a different mechanical response and have unique extensions under force. Remarkably, when normalized to their contour length, the size of the unfolding and refolding steps as a function of force reduces to a single master curve. This curve can be described using standard models of polymer elasticity, explaining the entropic nature of the measured steps. We further validate our measurements with a simple energy landscape model, which combines protein folding with polymer physics and accounts for the complex nature of tandem domains under force. This model can become a useful tool to help in deciphering the complexity of multidomain proteins operating under force.
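
    A minimal sketch of the kind of polymer-elasticity description referred to above, assuming the worm-like chain interpolation formula; the persistence length and contour-length gain per unfolded domain are illustrative values, not those reported in the paper.

```python
import numpy as np
from scipy.optimize import brentq

kT = 4.11          # pN*nm at room temperature
p = 0.58           # persistence length in nm (illustrative)

def wlc_force(frac_ext):
    """Worm-like chain interpolation: force (pN) at fractional extension x/Lc."""
    return (kT / p) * (0.25 / (1.0 - frac_ext) ** 2 - 0.25 + frac_ext)

def step_size(force_pn, delta_lc_nm=18.0):
    """Expected unfolding step at a given force, for a contour-length gain delta_lc."""
    frac = brentq(lambda z: wlc_force(z) - force_pn, 1e-6, 1.0 - 1e-6)
    return delta_lc_nm * frac

for f in (10.0, 45.0, 100.0):
    print(f"{f:5.1f} pN -> step of about {step_size(f):.1f} nm")
```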

  3. Adaptive wall research with two- and three-dimensional models in low speed and transonic tunnels

    NASA Technical Reports Server (NTRS)

    Lewis, M. C.; Neal, G.; Goodyer, M. J.

    1988-01-01

    This paper summarises recent research at the University of Southampton into adaptive wall technology and outlines the direction of current efforts. The work is aimed at developing techniques for use in test sections where the top and bottom walls may be adjusted in single curvature. Wall streamlining eliminates, as far as experimentally possible, the top and bottom wall interference in low speed and transonic aerofoil testing. A streamlining technique has been developed for low speeds which allows the testing of swept wing panels in low interference environments. At higher speeds, a comparison of several two-dimensional transonic streamlining algorithms has been made and a technique for streamlining with a choked test section has also been developed. Three-dimensional work has mainly concentrated on tests of sidewall mounted half-wings and the development of the software packages required to assess interference and to adjust the flexible walls. It has been demonstrated that two-dimensional wall adaptation can significantly modify the level of wall interference around relatively large three-dimensional models. The residual interferences are small and are probably amenable to standard post-test correction methods. Tests on a calibrated wing-body model are planned in the near future to further validate the proposed streamlining technique.

  4. A novel cardiac MR chamber volume model for mechanical dyssynchrony assessment

    NASA Astrophysics Data System (ADS)

    Song, Ting; Fung, Maggie; Stainsby, Jeffrey A.; Hood, Maureen N.; Ho, Vincent B.

    2009-02-01

    A novel cardiac chamber volume model is proposed for the assessment of left ventricular mechanical dyssynchrony. The tool is potentially useful for assessment of regional cardiac function and identification of mechanical dyssynchrony on MRI. Dyssynchrony typically results from a contraction delay in one or more individual left ventricular segments, which in turn leads to inefficient ventricular function and ultimately heart failure. Cardiac resynchronization therapy has emerged as an electrical treatment of choice for heart failure patients with dyssynchrony. Prior MRI techniques have relied on assessments of actual cardiac wall changes using either standard cine MR images or specialized pulse sequences. In this abstract, we detail a semi-automated method that evaluates dyssynchrony based on segmental volumetric analysis of the left ventricular (LV) chamber as illustrated on standard cine MR images. Twelve sectors each were chosen for the basal and mid-ventricular slices and 8 sectors were chosen for apical slices, for a total of 32 sectors. For each slice (i.e., basal, mid and apical), a systolic dyssynchrony index (SDI) was measured. SDI, a parameter used for 3D echocardiographic analysis of dyssynchrony, was defined as the corrected standard deviation of the time at which minimal volume is reached in each sector. The SDI measurement of a healthy volunteer was 3.54%. In a patient with acute myocardial infarction, the SDI measurements were 10.98%, 16.57% and 1.41% for the basal, mid-ventricular and apical LV slices, respectively. Based on published 3D echocardiogram reference threshold values, the patient's SDI corresponds to moderate basal dysfunction, severe mid-ventricular dysfunction, and normal apical LV function, which were confirmed on echocardiography. The LV chamber segmental volume analysis model and SDI are feasible using standard cine MR data and may provide a more reliable assessment of patients with dyssynchrony, especially if the LV myocardium is thin or if the MR images have spatial resolution insufficient to properly resolve wall thickness, features that are problematic for dyssynchrony assessment using existing MR techniques.
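
    A minimal sketch of an SDI computation consistent with the description above, assuming SDI is the standard deviation of per-sector times to minimum volume expressed as a percentage of the cardiac cycle (the paper's exact correction may differ); the cine data below are synthetic.

```python
import numpy as np

def systolic_dyssynchrony_index(sector_volumes, frame_times, rr_interval):
    """
    sector_volumes: (n_sectors, n_frames) volume-vs-time curve per LV sector
    frame_times:    (n_frames,) acquisition time of each cine frame (ms)
    rr_interval:    cardiac cycle length (ms)
    Returns SDI as a percentage of the RR interval.
    """
    sector_volumes = np.asarray(sector_volumes, float)
    t_min = np.asarray(frame_times, float)[np.argmin(sector_volumes, axis=1)]
    return 100.0 * t_min.std(ddof=1) / rr_interval

# Synthetic example: 12 basal sectors, 20 frames over an 800 ms cycle
rng = np.random.default_rng(2)
times = np.linspace(0, 800, 20, endpoint=False)
vols = np.array([60 + 25 * np.cos(2 * np.pi * (times - d) / 800)
                 for d in rng.normal(300, 30, 12)])
print(f"SDI = {systolic_dyssynchrony_index(vols, times, 800.0):.2f}%")
```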

  5. A facial reconstruction and identification technique for seriously devastating head wounds.

    PubMed

    Joukal, Marek; Frišhons, Jan

    2015-07-01

    Many authors have focused on facial identification techniques, and facial reconstructions for cases in which skulls have been found are especially well known. However, a standardized facial identification technique for an unknown body with seriously devastating head injuries has not yet been developed. A reconstruction and identification technique was used in 7 cases of accidents involving trains striking pedestrians. This identification technique is based on the removal of skull bone fragments, subsequent fixation of soft tissue onto a universal commercial polystyrene head model, precise suture of dermatomuscular flaps, and definitive adjustment using cosmetic treatments. After reconstruction, identifying marks such as scars, eyebrows, facial lines, facial hair and, in part, hairstyle become evident. It is then possible to present a modified picture of the reconstructed face to relatives. When the results are compared with photographs of the person before death, this technique proves very useful for identifying unknown bodies when other identification techniques are not available. The technique is valuable both because it is relatively quick and, especially, because of its results. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. A radial transmission line material measurement apparatus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warne, L.K.; Moyer, R.D.; Koontz, T.E.

    1993-05-01

    A radial transmission line material measurement sample apparatus (sample holder, offset short standards, measurement software, and instrumentation) is described which has been proposed, analyzed, designed, constructed, and tested. The purpose of the apparatus is to obtain accurate surface impedance measurements of lossy, possibly anisotropic, samples at low and intermediate frequencies (VHF and low UHF). The samples typically take the form of sections of the material coatings on conducting objects. Such measurements thus provide the key input data for predictive numerical scattering codes. Prediction of the sample surface impedance from the coaxial input impedance measurement is carried out by two techniques. The first is an analytical model for the coaxial-to-radial transmission line junction. The second is an empirical determination of the bilinear transformation model of the junction by the measurement of three full standards. The standards take the form of three offset shorts (and an additional lossy Salisbury load), which have also been constructed. The accuracy achievable with the device appears to be near one percent.
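
    A minimal sketch of the second (empirical) technique: determining the bilinear transformation of the coaxial-to-radial junction from three measured standards and then de-embedding a sample measurement; the reflection-coefficient values are illustrative.

```python
import numpy as np

def fit_bilinear(gamma_std, gamma_meas):
    """
    Solve Gm = (a*G + b) / (c*G + 1) for (a, b, c) from three standards:
    a*G_i + b - c*G_i*Gm_i = Gm_i  (one linear equation per standard).
    """
    G, Gm = np.asarray(gamma_std, complex), np.asarray(gamma_meas, complex)
    A = np.column_stack([G, np.ones(3), -G * Gm])
    a, b, c = np.linalg.solve(A, Gm)
    return a, b, c

def de_embed(gamma_meas, a, b, c):
    """Invert the bilinear map to recover the sample's reflection coefficient."""
    return (gamma_meas - b) / (a - c * gamma_meas)

# Illustrative offset-short standards (known G) and their raw measurements (Gm)
G_known = [-1.0, -1.0 * np.exp(1j * 0.6), -1.0 * np.exp(1j * 1.2)]
a0, b0, c0 = 0.9 * np.exp(-1j * 0.2), 0.05, 0.02 + 0.01j   # "true" junction, for the demo
G_meas = [(a0 * g + b0) / (c0 * g + 1) for g in G_known]

a, b, c = fit_bilinear(G_known, G_meas)
raw_sample = (a0 * (0.3 + 0.2j) + b0) / (c0 * (0.3 + 0.2j) + 1)
print("recovered sample reflection coefficient:", de_embed(raw_sample, a, b, c))  # ~0.3+0.2j
```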

  7. Precision measurements of the RSA method using a phantom model of hip prosthesis.

    PubMed

    Mäkinen, Tatu J; Koort, Jyri K; Mattila, Kimmo T; Aro, Hannu T

    2004-04-01

    Radiostereometric analysis (RSA) has become one of the recommended techniques for pre-market evaluation of new joint implant designs. In this study we evaluated the effect of repositioning of X-ray tubes and phantom model on the precision of the RSA method. In precision measurements, we utilized mean error of rigid body fitting (ME) values as an internal control for examinations. ME value characterizes relative motion among the markers within each rigid body and is conventionally used to detect loosening of a bone marker. Three experiments, each consisting of 10 double examinations, were performed. In the first experiment, the X-ray tubes and the phantom model were not repositioned between one double examination. In experiments two and three, the X-ray tubes were repositioned between one double examination. In addition, the position of the phantom model was changed in experiment three. Results showed that significant differences could be found in 2 of 12 comparisons when evaluating the translation and rotation of the prosthetic components. Repositioning procedures increased ME values mimicking deformation of rigid body segments. Thus, ME value seemed to be a more sensitive parameter than migration values in this study design. These results confirmed the importance of standardized radiographic technique and accurate patient positioning for RSA measurements. Standardization and calibration procedures should be performed with phantom models in order to avoid unnecessary radiation dose of the patients. The present model gives the means to establish and to follow the intra-laboratory precision of the RSA method. The model is easily applicable in any research unit and allows the comparison of the precision values in different laboratories of multi-center trials.

  8. A new technique for measuring listening and reading literacy in developing countries

    NASA Astrophysics Data System (ADS)

    Greene, Barbara A.; Royer, James M.; Anzalone, Stephen

    1990-03-01

    One problem in evaluating educational interventions in developing countries is the absence of tests that adequately reflect the culture and curriculum. The Sentence Verification Technique is a new procedure for measuring reading and listening comprehension that allows for the development of tests based on materials indigenous to a given culture. The validity of using the Sentence Verification Technique to measure reading comprehension in Grenada was evaluated in the present study. The study involved 786 students at standards 3, 4 and 5. The tests for each standard consisted of passages that varied in difficulty. The students identified as high ability students in all three standards performed better than those identified as low ability. All students performed better with easier passages. Additionally, students in higher standards performed better than students in lower standards on a given passage. These results supported the claim that the Sentence Verification Technique is a valid measure of reading comprehension in Grenada.

  9. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and a conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between the two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6%, and intraclass correlation coefficients ranged from 0.93 to 0.96 when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models, used to evaluate the performance of the oral scanner and subtractive RP technology, were acceptable. Because of recent improvements in block materials and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823

  10. Improved Propulsion Modeling for Low-Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Knittel, Jeremy M.; Englander, Jacob A.; Ozimek, Martin T.; Atchison, Justin A.; Gould, Julian J.

    2017-01-01

    Low-thrust trajectory design is tightly coupled with spacecraft systems design. In particular, the propulsion and power characteristics of a low-thrust spacecraft are major drivers in the design of the optimal trajectory. Accurate modeling of the power and propulsion behavior is essential for meaningful low-thrust trajectory optimization. In this work, we discuss new techniques to improve the accuracy of propulsion modeling in low-thrust trajectory optimization while maintaining the smooth derivatives that are necessary for a gradient-based optimizer. The resulting model is significantly more realistic than the industry standard and performs well inside an optimizer. A variety of deep-space trajectory examples are presented.

  11. Adaptive Elastic Net for Generalized Methods of Moments.

    PubMed

    Caner, Mehmet; Zhang, Hao Helen

    2014-01-30

    Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least squares based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique because the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity as well as collinearity among a large number of variables, and the redundant parameters are set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit and the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.

  12. Kalman filter approach for uncertainty quantification in time-resolved laser-induced incandescence.

    PubMed

    Hadwin, Paul J; Sipkens, Timothy A; Thomson, Kevin A; Liu, Fengshan; Daun, Kyle J

    2018-03-01

    Time-resolved laser-induced incandescence (TiRe-LII) data can be used to infer spatially and temporally resolved volume fractions and primary particle size distributions of soot-laden aerosols, but these estimates are corrupted by measurement noise as well as uncertainties in the spectroscopic and heat transfer submodels used to interpret the data. Estimates of the temperature, concentration, and size distribution of soot primary particles within a sample aerosol are typically made by nonlinear regression of modeled spectral incandescence decay, or effective temperature decay, to experimental data. In this work, we employ nonstationary Bayesian estimation techniques to infer aerosol properties from simulated and experimental LII signals, specifically the extended Kalman filter and Schmidt-Kalman filter. These techniques exploit the time-varying nature of both the measurements and the models, and they reveal how uncertainty in the estimates computed from TiRe-LII data evolves over time. Both techniques perform better when compared with standard deterministic estimates; however, we demonstrate that the Schmidt-Kalman filter produces more realistic uncertainty estimates.
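
    A generic extended Kalman filter sketch of the kind described, using a simple exponential temperature-decay state (linear, so the EKF reduces to a standard Kalman filter here) as a stand-in for the full spectroscopic and heat-transfer submodels; all parameter values are illustrative.

```python
import numpy as np

def ekf_step(x, P, z, dt, tau=1.0e-6, Q=25.0, R=100.0, T_gas=1700.0):
    """
    One predict/update for a scalar peak-temperature state that relaxes toward
    the gas temperature: T_{k+1} = T_gas + (T_k - T_gas)*exp(-dt/tau).
    z is the (pyrometrically inferred) effective temperature measurement.
    """
    # --- predict ---
    F = np.exp(-dt / tau)                      # Jacobian of the (linear) decay model
    x_pred = T_gas + (x - T_gas) * F
    P_pred = F * P * F + Q
    # --- update (the measurement is the state itself, H = 1) ---
    K = P_pred / (P_pred + R)                  # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Illustrative run over a simulated decay corrupted by noise
rng = np.random.default_rng(3)
dt, tau = 20e-9, 1.0e-6
truth = 1700.0 + (3400.0 - 1700.0) * np.exp(-np.arange(200) * dt / tau)
meas = truth + rng.normal(0.0, 30.0, truth.size)

x, P = 3000.0, 1e4                             # deliberately poor initial guess
for z in meas:
    x, P = ekf_step(x, P, z, dt, tau=tau)
print(f"final estimate {x:.0f} K +/- {np.sqrt(P):.0f} K (truth {truth[-1]:.0f} K)")
```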

  13. Satellite and Ground-based Radiometers Reveal Much Lower Dust Absorption of Sunlight than Used in Climate Models

    NASA Technical Reports Server (NTRS)

    Kaufman, Y. J.; Tanre, D.; Dubovik, O.; Karnieli, A.; Remer, L. A.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The ability of dust to absorb solar radiation and heat the atmosphere is one of the main uncertainties in climate modeling and the prediction of climate change. Dust absorption is not well known due to limitations of in situ measurements, and new techniques to measure it are needed in order to assess the impact of dust on climate. Here we report two new independent remote sensing techniques that provide sensitive measurements of dust absorption: one uses satellite spectral measurements, the second uses ground-based sky measurements from the AERONET network. Both techniques demonstrate that Saharan dust absorption of solar radiation is several times smaller than the current international standards. Dust cooling of the Earth system in the solar spectrum is therefore significantly stronger than recent calculations indicate. We shall also address the effects of dust non-sphericity on the aerosol optical properties.

  14. Analytical Model of Large Data Transactions in CoAP Networks

    PubMed Central

    Ludovici, Alessandro; Di Marco, Piergiuseppe; Calveras, Anna; Johansson, Karl H.

    2014-01-01

    We propose a novel analytical model to study fragmentation methods in wireless sensor networks adopting the Constrained Application Protocol (CoAP) and the IEEE 802.15.4 standard for medium access control (MAC). The blockwise transfer technique proposed in CoAP and the 6LoWPAN fragmentation are included in the analysis. The two techniques are compared in terms of reliability and delay, depending on the traffic, the number of nodes and the parameters of the IEEE 802.15.4 MAC. The results are validated through Monte Carlo simulations. To the best of our knowledge, this is the first study that evaluates and compares analytically the performance of CoAP blockwise transfer and 6LoWPAN fragmentation. A major contribution is the possibility to understand the behavior of both techniques under different network conditions. Our results show that 6LoWPAN fragmentation is preferable for delay-constrained applications. For highly congested networks, the blockwise transfer slightly outperforms 6LoWPAN fragmentation in terms of reliability. PMID:25153143

  15. Applications of nonlinear systems theory to control design

    NASA Technical Reports Server (NTRS)

    Hunt, L. R.; Villarreal, Ramiro

    1988-01-01

    For most applications in the control area, the standard practice is to approximate a nonlinear mathematical model by a linear system. Since the feedback linearizable systems contain linear systems as a subclass, the procedure of approximating a nonlinear system by a feedback linearizable one is examined. Because many physical plants (e.g., aircraft at the NASA Ames Research Center) have mathematical models which are close to feedback linearizable systems, such approximations are certainly justified. Results and techniques are introduced for measuring the gap between the model and its truncated linearizable part. The topic of pure feedback systems is important to the study.

  16. 48 CFR 9904.417-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... capitalized, such as the method used for financial accounting and reporting, may be used, provided the.... 9904.417-50 Section 9904.417-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.417-50 Techniques for application. (a) The cost of money...

  17. 48 CFR 9904.412-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... 9904.412-50 Section 9904.412-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.412-50 Techniques for application. (a) Components of... identified part of the pension cost of a cost accounting period and shall be included in equal annual...

  18. 48 CFR 9904.415-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... 9904.415-50 Section 9904.415-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.415-50 Techniques for application. (a) The contractor... shall be assignable only to the cost accounting period or periods in which the compensation is paid to...

  19. 48 CFR 9904.408-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... 9904.408-50 Section 9904.408-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.408-50 Techniques for application. (a) Determinations... determination shall be made beginning with the first cost accounting period to which such new or changed plan or...

  20. 48 CFR 9904.416-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.416-50 Techniques for application. (a) Measurement of.... 9904.416-50 Section 9904.416-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD... be assigned pro rata among the cost accounting periods covered by the policy term, except as provided...

  1. 48 CFR 9904.410-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... 9904.410-50 Section 9904.410-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.410-50 Techniques for application. (a) G&A expenses of a...

  2. Detecting dark matter in the Milky Way with cosmic and gamma radiation

    NASA Astrophysics Data System (ADS)

    Carlson, Eric C.

    Over the last decade, experiments in high-energy astroparticle physics have reached unprecedented precision and sensitivity which span the electromagnetic and cosmic-ray spectra. These advances have opened a new window onto the universe for which little was previously known. Such dramatic increases in sensitivity lead naturally to claims of excess emission, which call for either revised astrophysical models or the existence of exotic new sources such as particle dark matter. Here we stand firmly with Occam, sharpening his razor by (i) developing new techniques for discriminating astrophysical signatures from those of dark matter, and (ii) developing detailed foreground models which can explain excess signals and shed light on the underlying astrophysical processes at hand. We concentrate most directly on observations of Galactic gamma and cosmic rays, factoring the discussion into three related parts which each contain significant advancements from our cumulative works. In Part I we introduce concepts which are fundamental to the indirect detection of particle dark matter, including motivations, targets, experiments, production of Standard Model particles, and a variety of statistical techniques. In Part II we introduce basic and advanced modelling techniques for propagation of cosmic rays through the Galaxy and describe astrophysical gamma-ray production, as well as presenting state-of-the-art propagation models of the Milky Way. Finally, in Part III, we employ these models and techniques in order to study several indirect detection signals, including the Fermi GeV excess at the Galactic center, the Fermi 135 GeV line, the 3.5 keV line, and the WMAP-Planck haze.

  3. A Novel Fast Helical 4D-CT Acquisition Technique to Generate Low-Noise Sorting Artifact–Free Images at User-Selected Breathing Phases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, David, E-mail: dhthomas@mednet.ucla.edu; Lamb, James; White, Benjamin

    2014-05-01

    Purpose: To develop a novel 4-dimensional computed tomography (4D-CT) technique that exploits standard fast helical acquisition, a simultaneous breathing surrogate measurement, deformable image registration, and a breathing motion model to remove sorting artifacts. Methods and Materials: Ten patients were imaged under free-breathing conditions 25 successive times in alternating directions with a 64-slice CT scanner using a low-dose fast helical protocol. An abdominal bellows was used as a breathing surrogate. Deformable registration was used to register the first image (defined as the reference image) to the subsequent 24 segmented images. Voxel-specific motion model parameters were determined using a breathing motion model. The tissue locations predicted by the motion model in the 25 images were compared against the deformably registered tissue locations, allowing a model prediction error to be evaluated. A low-noise image was created by averaging the 25 images deformed to the first image geometry, reducing statistical image noise by a factor of 5. The motion model was used to deform the low-noise reference image to any user-selected breathing phase. A voxel-specific correction was applied to correct the Hounsfield units for lung parenchyma density as a function of lung air filling. Results: Images produced using the model at user-selected breathing phases did not suffer from sorting artifacts common to conventional 4D-CT protocols. The mean prediction error across all patients between the breathing motion model predictions and the measured lung tissue positions was determined to be 1.19 ± 0.37 mm. Conclusions: The proposed technique can be used as a clinical 4D-CT technique. It is robust in the presence of irregular breathing and allows the entire imaging dose to contribute to the resulting image quality, providing sorting artifact–free images at a patient dose similar to or less than current 4D-CT techniques.

  4. A novel fast helical 4D-CT acquisition technique to generate low-noise sorting artifact-free images at user-selected breathing phases.

    PubMed

    Thomas, David; Lamb, James; White, Benjamin; Jani, Shyam; Gaudio, Sergio; Lee, Percy; Ruan, Dan; McNitt-Gray, Michael; Low, Daniel

    2014-05-01

    To develop a novel 4-dimensional computed tomography (4D-CT) technique that exploits standard fast helical acquisition, a simultaneous breathing surrogate measurement, deformable image registration, and a breathing motion model to remove sorting artifacts. Ten patients were imaged under free-breathing conditions 25 successive times in alternating directions with a 64-slice CT scanner using a low-dose fast helical protocol. An abdominal bellows was used as a breathing surrogate. Deformable registration was used to register the first image (defined as the reference image) to the subsequent 24 segmented images. Voxel-specific motion model parameters were determined using a breathing motion model. The tissue locations predicted by the motion model in the 25 images were compared against the deformably registered tissue locations, allowing a model prediction error to be evaluated. A low-noise image was created by averaging the 25 images deformed to the first image geometry, reducing statistical image noise by a factor of 5. The motion model was used to deform the low-noise reference image to any user-selected breathing phase. A voxel-specific correction was applied to correct the Hounsfield units for lung parenchyma density as a function of lung air filling. Images produced using the model at user-selected breathing phases did not suffer from sorting artifacts common to conventional 4D-CT protocols. The mean prediction error across all patients between the breathing motion model predictions and the measured lung tissue positions was determined to be 1.19 ± 0.37 mm. The proposed technique can be used as a clinical 4D-CT technique. It is robust in the presence of irregular breathing and allows the entire imaging dose to contribute to the resulting image quality, providing sorting artifact-free images at a patient dose similar to or less than current 4D-CT techniques. Copyright © 2014 Elsevier Inc. All rights reserved.
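
    A minimal sketch of a surrogate-driven voxel motion model of the general form used in this kind of work, with displacement modeled as a linear function of the bellows signal and its rate; the exact parameterization in the paper may differ, and the data below are synthetic.

```python
import numpy as np

def fit_motion_model(displacements, surrogate, surrogate_rate):
    """
    Fit per-voxel parameters of x(t) = x0 + a*s(t) + b*ds/dt(t) by least squares.
    displacements: (n_images,) registered displacement of one voxel (mm)
    surrogate, surrogate_rate: (n_images,) bellows signal and its time derivative
    """
    A = np.column_stack([np.ones_like(surrogate), surrogate, surrogate_rate])
    params, *_ = np.linalg.lstsq(A, displacements, rcond=None)
    residuals = displacements - A @ params
    return params, np.sqrt(np.mean(residuals ** 2))   # (x0, a, b), RMS prediction error

# Synthetic example for one voxel over 25 fast-helical acquisitions
rng = np.random.default_rng(4)
s = rng.uniform(0.0, 1.0, 25)                 # normalized bellows amplitude
sdot = rng.normal(0.0, 0.5, 25)               # bellows rate
x = 2.0 + 8.0 * s + 1.5 * sdot + rng.normal(0, 0.5, 25)
params, rms = fit_motion_model(x, s, sdot)
print("fitted (x0, a, b):", np.round(params, 2), " RMS error:", round(rms, 2), "mm")
```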

  5. Using normalization 3D model for automatic clinical brain quantative analysis and evaluation

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping

    2003-05-01

    Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain and has been broadly used for many years in diagnosing brain disorders through clinical quantitative analysis. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important for improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, a mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then, the standard 3D brain model, which shows well-defined brain regions, was used to replace the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in a practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation score with less than 3% error on average. In summary, the method automatically obtains precise VOI information from a well-defined standard 3D brain model, sparing the manual, slice-by-slice drawing of ROIs on structural medical images required by the traditional procedure. The method thus not only provides precise analysis results but also improves throughput for large volumes of clinical medical images.
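
    A minimal sketch of histogram-based mutual information, the similarity measure named for the registration step (the registration pipeline itself is not shown); the images are synthetic.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI between two equally shaped images via their joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist / hist.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

# Synthetic demo: MI drops when one image is shifted out of alignment
rng = np.random.default_rng(5)
mri = rng.normal(size=(64, 64))
spect = 0.7 * mri + 0.3 * rng.normal(size=(64, 64))     # correlated "functional" image
print("aligned MI   :", round(mutual_information(mri, spect), 3))
print("misaligned MI:", round(mutual_information(mri, np.roll(spect, 8, axis=0)), 3))
```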

  6. An in vitro comparison of photogrammetric and conventional complete-arch implant impression techniques.

    PubMed

    Bergin, Junping Ma; Rubenstein, Jeffrey E; Mancl, Lloyd; Brudvik, James S; Raigrodski, Ariel J

    2013-10-01

    Conventional impression techniques for recording the location and orientation of implant-supported, complete-arch prostheses are time consuming and prone to error. The direct optical recording of the location and orientation of implants, without the need for intermediate transfer steps, could reduce or eliminate those disadvantages. The objective of this study was to assess the feasibility of using a photogrammetric technique to record the location and orientation of multiple implants and to compare the results with those of a conventional complete-arch impression technique. A stone cast of an edentulous mandibular arch containing 5 implant analogs was fabricated to create a master model. The 3-dimensional (3D) spatial orientations of implant analogs on the master model were measured with a coordinate measuring machine (CMM) (control). Five definitive casts were made from the master model with a splinted impression technique. The positions of the implant analogs on the 5 casts were measured with a NobelProcera scanner (conventional method). Prototype optical targets were attached to the master model implant analogs, and 5 sets of images were recorded with a digital camera and a standardized image capture protocol. Dimensional data were imported into commercially available photogrammetry software (photogrammetric method). The precision and accuracy of the 2 methods were compared with a 2-sample t test (α=.05) and a 95% confidence interval. The location precision (standard error of measurement) for CMM was 3.9 µm (95% CI 2.7 to 7.1), for photogrammetry, 5.6 µm (95% CI 3.4 to 16.1), and for the conventional method, 17.2 µm (95% CI 10.3 to 49.4). The average measurement error was 26.2 µm (95% CI 15.9 to 36.6) for the conventional method and 28.8 µm (95% CI 24.8 to 32.9) for the photogrammetric method. The overall measurement accuracy was not significantly different when comparing the conventional to the photogrammetric method (mean difference = -2.6 µm, 95% CI -12.8 to 7.6). The precision of the photogrammetric method was similar to CMM, but lower for the conventional method as compared to CMM and the photogrammetric method. However, the overall measurement accuracy of the photogrammetric and conventional methods was similar. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  7. 48 CFR 9905.506-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... 9905.506-50 Section 9905.506-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS FOR EDUCATIONAL INSTITUTIONS 9905.506-50 Techniques for application. (a) The cost of an indirect function which exists for only a part of a cost accounting period may...

  8. Pure retroperitoneal natural orifice translumenal endoscopic surgery (NOTES) transvaginal nephrectomy using standard laparoscopic instruments: a safety and feasibility study in a porcine model.

    PubMed

    Wei, Dechao; Han, Yili; Li, Mingchuan; Wang, Yongxing; Chen, Yatong; Luo, Yong; Jiang, Yongguang

    2016-06-11

    Among the different organs used for the NOTES (natural orifice translumenal endoscopic surgery) technique, the transvaginal approach may be the optimal choice because of the simple and secure closure of the colpotomy site. Pure and hybrid NOTES transvaginal operations were routinely performed via transperitoneal access. In this study, we investigate the safety and feasibility of pure retroperitoneal NOTES transvaginal nephrectomy using conventional laparoscopic techniques in a porcine model. Six female pigs, weighing an average of 30 kg, were used in this study. Under general anesthesia, pure retroperitoneal NOTES transvaginal nephrectomy was conducted using standard laparoscopic instruments. A posterolateral colpotomy was performed, and the incision was enlarged laterally using blunt dissection and pneumatic dilation. A single-port device was inserted to construct the operative channel. The retroperitoneal space was created using sharp and blunt dissection under endoscopic guidance up to the level of the kidney. Dissection and removal of the kidney were performed according to standard surgical procedure, and the colpotomy site was closed using interrupted sutures. Survival and complications were observed 1 week postoperatively. Our results showed that two cases failed because of peritoneal rupture. One case was successful but required the assistance of an extra 5 mm laparoscopic trocar inserted in the flank. Three cases of pure retroperitoneal NOTES transvaginal nephrectomy were completed and survived 1 week after the operation. In these three cases, no intra- or postoperative complications were observed. All findings confirmed the safety and feasibility of pure retroperitoneal NOTES transvaginal nephrectomy using standard laparoscopic instruments, which suggests the possibility of clinical application in humans in the future.

  9. Functional data analysis for dynamical system identification of behavioral processes.

    PubMed

    Trail, Jessica B; Collins, Linda M; Rivera, Daniel E; Li, Runze; Piper, Megan E; Baker, Timothy B

    2014-06-01

    Efficient new technology has made it straightforward for behavioral scientists to collect anywhere from several dozen to several thousand dense, repeated measurements on one or more time-varying variables. These intensive longitudinal data (ILD) are ideal for examining complex change over time but present new challenges that illustrate the need for more advanced analytic methods. For example, in ILD the temporal spacing of observations may be irregular, and individuals may be sampled at different times. Also, it is important to assess both how the outcome changes over time and the variation between participants' time-varying processes to make inferences about a particular intervention's effectiveness within the population of interest. The methods presented in this article integrate 2 innovative ILD analytic techniques: functional data analysis and dynamical systems modeling. An empirical application is presented using data from a smoking cessation clinical trial. Study participants provided 42 daily assessments of pre-quit and post-quit withdrawal symptoms. Regression splines were used to approximate smooth functions of craving and negative affect and to estimate the variables' derivatives for each participant. We then modeled the dynamics of nicotine craving using standard input-output dynamical systems models. These models provide a more detailed characterization of the post-quit craving process than do traditional longitudinal models, including information regarding the type, magnitude, and speed of the response to an input. The results, in conjunction with standard engineering control theory techniques, could potentially be used by tobacco researchers to develop a more effective smoking intervention. PsycINFO Database Record (c) 2014 APA, all rights reserved.
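
    A minimal sketch combining the two steps described above, smoothing an intensively sampled craving series with a regression spline, estimating its derivative, and fitting a simple first-order input-output model dc/dt = a*c + b*u by least squares; the data and the specific model form are illustrative, not the study's.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(6)
t = np.arange(42, dtype=float)                         # 42 daily assessments
u = (t >= 14).astype(float)                            # input: post-quit indicator (assumed)
true_c = 5.0 * np.exp(-0.15 * np.clip(t - 14, 0, None)) * u + 1.0
craving = true_c + rng.normal(0, 0.3, t.size)          # noisy daily craving ratings

# Step 1: functional data analysis -- smooth the series and get its derivative
spline = UnivariateSpline(t, craving, s=len(t) * 0.3)
c_smooth = spline(t)
dc_dt = spline.derivative()(t)

# Step 2: dynamical system identification -- fit dc/dt = a*c + b*u
A = np.column_stack([c_smooth, u])
(a, b), *_ = np.linalg.lstsq(A, dc_dt, rcond=None)
print(f"estimated a = {a:.3f} (decay rate), b = {b:.3f} (input gain)")
```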

  10. Impact of airway gas exchange on the multiple inert gas elimination technique: theory.

    PubMed

    Anderson, Joseph C; Hlastala, Michael P

    2010-03-01

    The multiple inert gas elimination technique (MIGET) provides a method for estimating alveolar gas exchange efficiency. Six soluble inert gases are infused into a peripheral vein. Measurements of these gases in breath, arterial blood, and venous blood are interpreted using a mathematical model of alveolar gas exchange (MIGET model) that neglects airway gas exchange. A mathematical model describing airway and alveolar gas exchange predicts that two of these gases, ether and acetone, exchange primarily within the airways. To determine the effect of airway gas exchange on the MIGET, we selected two additional gases, toluene and m-dichlorobenzene, that have the same blood solubility as ether and acetone and minimize airway gas exchange via their low water solubility. The airway-alveolar gas exchange model simulated the exchange of toluene, m-dichlorobenzene, and the six MIGET gases under multiple conditions of alveolar ventilation-to-perfusion, VA/Q, heterogeneity. We increased the importance of airway gas exchange by changing bronchial blood flow, Qbr. From these simulations, we calculated the excretion and retention of the eight inert gases and divided the results into two groups: (1) the standard MIGET gases which included acetone and ether and (2) the modified MIGET gases which included toluene and m-dichlorobenzene. The MIGET mathematical model predicted distributions of ventilation and perfusion for each grouping of gases and multiple perturbations of VA/Q and Qbr. Using the modified MIGET gases, MIGET predicted a smaller dead space fraction, greater mean VA, greater log(SDVA), and more closely matched the imposed VA distribution than that using the standard MIGET gases. Perfusion distributions were relatively unaffected.
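
    A minimal sketch of the standard alveolar-only MIGET relations referred to above, in which a compartment with ratio VA/Q retains a gas with blood-gas partition coefficient lambda in proportion to lambda/(lambda + VA/Q); the partition coefficients and compartment values are illustrative.

```python
import numpy as np

def retention_excretion(partition_coeff, va, q):
    """
    Alveolar-only MIGET relations for one inert gas over a compartmental lung:
      R = sum_i (q_i/Q) * lam / (lam + VA_i/Q_i)   (arterial / mixed-venous)
      E = sum_i (va_i/VA) * lam / (lam + VA_i/Q_i) (expired / mixed-venous)
    """
    va, q = np.asarray(va, float), np.asarray(q, float)
    frac = partition_coeff / (partition_coeff + va / q)
    return np.sum(q / q.sum() * frac), np.sum(va / va.sum() * frac)

# Two-compartment illustration: a low and a high VA/Q region
va = np.array([1.0, 4.0])      # L/min
q = np.array([3.0, 2.0])       # L/min
for name, lam in [("SF6-like", 0.005), ("ether-like", 12.0), ("acetone-like", 300.0)]:
    R, E = retention_excretion(lam, va, q)
    print(f"{name:13s} retention {R:.3f}  excretion {E:.3f}")
```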

  11. Evaluation of a blocking ELISA for the detection of antibodies against Lawsonia intracellularis in pig sera.

    PubMed

    Jacobson, Magdalena; Wallgren, Per; Nordengrahn, Ann; Merza, Malik; Emanuelson, Ulf

    2011-04-01

    Lawsonia intracellularis is a common cause of chronic diarrhoea and poor performance in young growing pigs. Diagnosis of this obligate intracellular bacterium is based on the demonstration of the microbe or microbial DNA in tissue specimens or faecal samples, or the demonstration of L. intracellularis-specific antibodies in sera. The aim of the present study was to evaluate a blocking ELISA for the detection of serum antibodies to L. intracellularis, by comparison with the previously widely used immunofluorescent antibody test (IFAT). Sera were collected from 176 pigs aged 8-12 weeks originating from 24 herds with or without problems with diarrhoea and poor performance in young growing pigs. Sera were analyzed by the blocking ELISA and by IFAT. Bayesian modelling techniques were used to account for the absence of a gold standard test, and the results of the blocking ELISA were modelled against the IFAT with a "2 dependent tests, 2 populations, no gold standard" model. At the finally selected cut-off value of percent inhibition (PI) 35, the diagnostic sensitivity of the blocking ELISA was 72% and the diagnostic specificity was 93%. The positive predictive value was 0.82 and the negative predictive value was 0.89 at the observed prevalence of 33.5%. The sensitivity and specificity as evaluated by Bayesian statistical techniques differed from those previously reported. The properties of diagnostic tests may well vary between countries, laboratories and populations of animals. In the absence of a true gold standard, the importance of validating new methods with appropriate statistical methods and with respect to the target population must be emphasized.

  12. Intelligent Devices - Sensors and Actuators - A KSC Perspective

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.; Perotti, Jose M.

    2008-01-01

    The primary objective of this workshop is to identify areas of advancement in sensor measurements and technologies that will help define standard practices and procedures, better enabling the infusion into flight programs of sensors with improved capabilities but limited or no flight heritage. These standards would be crucial for demonstrating a methodology to validate current models, while also providing sufficient data either to update those models (e.g., in spatial or temporal resolution) or to develop new models based on the newly measured physical parameters. The workshop is also intended to narrow the gap between sensor measurements and techniques, data processing techniques, and the ability to make use of the resulting data by gathering experts in the field for a short workshop. This collaboration will unite NASA, other government agencies, and industry-wide contractor capabilities to prevent duplication, spawn synergistic growth in sensor technology, help analysts make sound engineering decisions, and help focus new sensor maturation efforts to better meet the needs of future flight program customers. This is the first such workshop designed specifically to address establishing a standardized protocol and methodology for demonstrating the technology readiness of sensor systems without flight heritage. While other similar workshops cover many areas of interest to the sensor development community, no other meeting is specific enough to address this vital but often overlooked topic. By encouraging cross-fertilization of ideas from instrument experts with many different backgrounds, it is hoped that this workshop will initiate innovative new ideas and concepts in sensor development, calibration, and validation. It is anticipated that this workshop will be repeated periodically as needed.

  13. TOMS and SBUV Data: Comparison to 3D Chemical-Transport Model Results

    NASA Technical Reports Server (NTRS)

    Stolarski, Richard S.; Douglass, Anne R.; Steenrod, Steve; Frith, Stacey

    2003-01-01

    We have updated our merged ozone data (MOD) set using the TOMS data from the new version 8 algorithm. We then analyzed these data for contributions from the solar cycle, volcanoes, the QBO, and halogens using a standard statistical time series model. We have recently completed a hindcast run of our 3D chemical-transport model for the same years. This model uses off-line winds from the finite-volume GCM, a full stratospheric photochemistry package, and time-varying forcing due to halogens, solar UV, and volcanic aerosols. We will report on a parallel analysis of these model results using the same statistical time series technique as used for the MOD data.
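    A standard statistical time series model of this kind is typically a multiple linear regression of the ozone record onto proxy series (a solar index, QBO winds, a volcanic aerosol index, and a halogen or linear-trend term). The sketch below only illustrates that fitting step; the proxy series and coefficients are synthetic placeholders, not the MOD analysis.

      import numpy as np

      rng = np.random.default_rng(0)
      n_months = 12 * 25                         # 25 years of monthly data (illustrative)
      t = np.arange(n_months)

      # Placeholder proxies (in practice: F10.7 solar flux, QBO winds, aerosol optical depth, EESC).
      solar = np.sin(2 * np.pi * t / (11 * 12))  # ~11-year cycle
      qbo = np.sin(2 * np.pi * t / 28)           # ~28-month oscillation
      aerosol = np.exp(-((t - 60) / 18.0) ** 2)  # one volcanic-like pulse
      trend = t / n_months                       # stand-in for halogen (EESC) forcing

      X = np.column_stack([np.ones(n_months), solar, qbo, aerosol, trend])
      ozone = X @ np.array([300.0, 2.0, 1.0, -8.0, -6.0]) + rng.normal(0, 2.0, n_months)

      # Ordinary least squares fit: contribution of each proxy to the ozone record.
      coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
      for name, c in zip(["const", "solar", "QBO", "volcanic", "trend/EESC"], coef):
          print(f"{name:10s} {c:+8.2f}")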

  14. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    NASA Astrophysics Data System (ADS)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques for producing maps of disease rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area; it has the greatest uncertainty when the disease is rare or the geographical area is small. Bayesian models, or statistical smoothing based on the log-normal model, have therefore been introduced to address these shortcomings of the SMR. This study estimates the relative risk of bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method and can overcome the SMR's problem when no bladder cancer is observed in an area.
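    A minimal sketch of the two estimators compared here: the SMR is simply observed over expected counts per area, and log-normal style smoothing can be illustrated by shrinking log-SMRs toward their overall mean (the study itself fits the full model in WinBUGS). The counts and shrinkage weight below are made-up placeholders.

      import numpy as np

      observed = np.array([0, 3, 12, 7, 1])          # hypothetical bladder cancer counts per district
      expected = np.array([1.2, 2.5, 9.8, 6.1, 0.9]) # expected counts from reference rates

      smr = observed / expected                      # classical relative-risk estimate; unstable for small areas
      print("SMR:", np.round(smr, 2))                # note the zero where no case was observed

      # Crude log-normal style smoothing: shrink log-SMRs toward the global mean.
      # (A stand-in for the full Bayesian fit; 0.5 is an arbitrary continuity correction.)
      log_smr = np.log((observed + 0.5) / expected)
      shrunk = 0.7 * log_smr + 0.3 * log_smr.mean()
      print("Smoothed RR:", np.round(np.exp(shrunk), 2))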

  15. In vitro comparison of intra-abdominal hypertension development after different temporary abdominal closure techniques.

    PubMed

    Benninger, Emanuel; Labler, Ludwig; Seifert, Burkhardt; Trentz, Otmar; Menger, Michael D; Meier, Christoph

    2008-01-01

    To compare volume reserve capacity (VRC) and the development of intra-abdominal hypertension after different in vitro temporary abdominal closure (TAC) techniques. A model of the abdomen was designed. The abdominal wall was simulated with polychloroprene, a synthetic rubber compound. A lentil-shaped defect of 150 cm(2) was cut into the anterior aspect of the abdominal wall. TAC of this defect was performed by a zipper system (ZS), a bag silo closure (BSC), or a vacuum assisted closure (VAC) with subatmospheric pressures ranging from 0 to 200 mmHg. The model with intact abdominal wall served as reference. The model was filled with water to baseline level. The intra-abdominal pressure was increased in 2 mmHg steps from baseline level (6 mmHg) to 40 mmHg by adding volume to the system according to a standardized protocol. VRC and the corresponding intra-abdominal pressures were analyzed and compared for the different TAC techniques. VRC was highest with BSC at all pressure levels studied (P < 0.05). VAC and ZS resulted in significantly lower VRC compared with BSC and the reference (P < 0.05). The magnitude of negative pressure on the VAC did not significantly influence the VRC. In the present in vitro model, BSC demonstrated the highest VRC of all evaluated TAC techniques. Different levels of subatmospheric pressure applied to the VAC did not affect VRC. The results for ZS and VAC indicate that these TAC techniques may increase the risk for recurrent intra-abdominal hypertension and should therefore not be used in high-risk patients during the initial phase after abdominal decompression.

  16. Comprehensive Assessment of Coronary Artery Disease by Using First-Pass Analysis Dynamic CT Perfusion: Validation in a Swine Model.

    PubMed

    Hubbard, Logan; Lipinski, Jerry; Ziemer, Benjamin; Malkasian, Shant; Sadeghi, Bahman; Javan, Hanna; Groves, Elliott M; Dertli, Brian; Molloi, Sabee

    2018-01-01

    Purpose: To retrospectively validate a first-pass analysis (FPA) technique that combines computed tomographic (CT) angiography and dynamic CT perfusion measurement into one low-dose examination. Materials and Methods: The study was approved by the animal care committee. The FPA technique was retrospectively validated in six swine (mean weight, 37.3 kg ± 7.5 [standard deviation]) between April 2015 and October 2016. Four to five intermediate-severity stenoses were generated in the left anterior descending artery (LAD), and 20 contrast material-enhanced volume scans were acquired per stenosis. All volume scans were used for maximum slope model (MSM) perfusion measurement, but only two volume scans were used for FPA perfusion measurement. Perfusion measurements in the LAD, left circumflex artery (LCx), right coronary artery, and all three coronary arteries combined were compared with microsphere perfusion measurements by using regression, root-mean-square error, root-mean-square deviation, Lin concordance correlation, and diagnostic outcomes analysis. The CT dose index and size-specific dose estimate per two-volume FPA perfusion measurement were also determined. Results: FPA and MSM perfusion measurements (P_FPA and P_MSM) in all three coronary arteries combined were related to reference standard microsphere perfusion measurements (P_MICRO) as follows: P_FPA_COMBINED = 1.02 P_MICRO_COMBINED + 0.11 (r = 0.96) and P_MSM_COMBINED = 0.28 P_MICRO_COMBINED + 0.23 (r = 0.89). The CT dose index and size-specific dose estimate per two-volume FPA perfusion measurement were 10.8 and 17.8 mGy, respectively. Conclusion: The FPA technique was retrospectively validated in a swine model and has the potential to be used for accurate, low-dose, vessel-specific morphologic and physiologic assessment of coronary artery disease. © RSNA, 2017.
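    The agreement statistics used to validate FPA against the microsphere reference can be reproduced in a few lines. The sketch below computes the regression slope, RMSE, and Lin's concordance correlation coefficient for two synthetic sets of perfusion measurements; the numbers are placeholders, not the study data.

      import numpy as np

      rng = np.random.default_rng(1)
      p_micro = rng.uniform(0.3, 3.0, 30)                      # reference microsphere perfusion (made up)
      p_fpa = 1.02 * p_micro + 0.11 + rng.normal(0, 0.15, 30)  # hypothetical FPA measurements

      slope, intercept = np.polyfit(p_micro, p_fpa, 1)
      rmse = np.sqrt(np.mean((p_fpa - p_micro) ** 2))

      # Lin's concordance correlation coefficient.
      sxy = np.cov(p_micro, p_fpa, bias=True)[0, 1]
      ccc = 2 * sxy / (p_micro.var() + p_fpa.var() + (p_micro.mean() - p_fpa.mean()) ** 2)

      print(f"P_FPA = {slope:.2f} * P_MICRO + {intercept:.2f}, RMSE = {rmse:.2f}, CCC = {ccc:.2f}")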

  17. Combination of five diagnostic tests to estimate the prevalence of hookworm infection among school-aged children from a rural area of Colombia.

    PubMed

    Barreto, Rafael E; Narváez, Javier; Sepúlveda, Natalia A; Velásquez, Fabián C; Díaz, Sandra C; López, Myriam Consuelo; Reyes, Patricia; Moncada, Ligia I

    2017-09-01

    Public health programs for the control of soil-transmitted helminthiases require valid diagnostic tests for surveillance and for the evaluation of parasite control. However, there is currently no agreement on which test should be used as a gold standard for the diagnosis of hookworm infection. Still, in the presence of concurrent data for multiple tests it is possible to use statistical models to estimate measures of test performance and prevalence. The aim of this study was to estimate the diagnostic accuracy of five parallel tests (direct microscopic examination, Kato-Katz, Harada-Mori, modified Ritchie-Frick, and culture in agar plate) to detect hookworm infections in a sample of school-aged children from a rural area in Colombia. We used both a frequentist approach and Bayesian latent class models to estimate the sensitivity and specificity of the five tests for hookworm detection, and to estimate the prevalence of hookworm infection in the absence of a gold standard. The Kato-Katz and agar plate methods had an overall agreement of 95% and a kappa coefficient of 0.76. Different models estimated a sensitivity between 76% and 92% for the agar plate technique, and 52% to 87% for the Kato-Katz technique. The other tests had lower sensitivity. All tests had specificity between 95% and 98%. The prevalence estimated by the Kato-Katz and agar plate methods for different subpopulations varied between 10% and 14%, and was consistent with the prevalence estimated from the combination of all tests. The Harada-Mori, Ritchie-Frick, and direct examination techniques resulted in lower and disparate prevalence estimates. Bayesian approaches assuming imperfect specificity resulted in lower prevalence estimates than the frequentist approach. Copyright © 2017 Elsevier B.V. All rights reserved.
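    The overall agreement and kappa coefficient quoted above follow from a standard 2x2 calculation. The sketch below shows the computation on a hypothetical agreement table with broadly similar marginals; it is not the study's data and will not reproduce kappa = 0.76 exactly.

      import numpy as np

      # Hypothetical 2x2 agreement table (rows: Kato-Katz +/-, columns: agar plate +/-).
      table = np.array([[18, 4],
                        [3, 115]], dtype=float)

      n = table.sum()
      po = np.trace(table) / n                               # observed agreement
      pe = (table.sum(1) * table.sum(0)).sum() / n ** 2      # chance agreement from the marginals
      kappa = (po - pe) / (1 - pe)
      print(f"observed agreement = {po:.2f}, kappa = {kappa:.2f}")   # roughly 0.95 and 0.8 for this made-up table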

  18. Decadal climate predictions improved by ocean ensemble dispersion filtering

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-06-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.
    Plain Language Summary: Decadal predictions aim to predict the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. The ocean's memory, due to its heat capacity, holds large potential skill. In recent years, more precise initialization techniques of coupled Earth system models (including atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect: starting from slightly perturbed initial states, which triggers the famous butterfly effect, produces an ensemble, and evaluating the whole ensemble and its average, rather than a single prediction, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Our study shows that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, which applies the ensemble average during the model run and is called the ensemble dispersion filter, yields more accurate results than the standard prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution.
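    The ensemble dispersion filter described here amounts to periodically nudging each member's ocean state toward the ensemble mean during the forecast run. Below is a minimal sketch of that idea on a toy state vector; the relaxation factor, interval, and placeholder "model step" are illustrative assumptions, not the actual Earth system model.

      import numpy as np

      rng = np.random.default_rng(2)
      n_members, n_state = 10, 5
      ensemble = rng.normal(0.0, 1.0, (n_members, n_state))    # toy ocean states for each member

      def step_model(state, rng):
          """Placeholder for one month of model integration (weak drift plus noise)."""
          return 0.98 * state + rng.normal(0.0, 0.1, state.shape)

      alpha = 0.5   # illustrative relaxation strength toward the ensemble mean
      for month in range(1, 61):                               # a 5-year forecast
          ensemble = step_model(ensemble, rng)
          if month % 3 == 0:                                   # apply the filter at "seasonal" intervals
              mean_state = ensemble.mean(axis=0)
              ensemble = (1 - alpha) * ensemble + alpha * mean_state

      print("final ensemble spread:", ensemble.std(axis=0).round(3))
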
  19. Minimum Information about a Genotyping Experiment (MIGEN)

    PubMed Central

    Huang, Jie; Mirel, Daniel; Pugh, Elizabeth; Xing, Chao; Robinson, Peter N.; Pertsemlidis, Alexander; Ding, LiangHao; Kozlitina, Julia; Maher, Joseph; Rios, Jonathan; Story, Michael; Marthandan, Nishanth; Scheuermann, Richard H.

    2011-01-01

    Genotyping experiments are widely used in clinical and basic research laboratories to identify associations between genetic variations and normal/abnormal phenotypes. Genotyping assay techniques vary from single genomic regions that are interrogated using PCR reactions to high throughput assays examining genome-wide sequence and structural variation. The resulting genotype data may include millions of markers of thousands of individuals, requiring various statistical, modeling or other data analysis methodologies to interpret the results. To date, there are no standards for reporting genotyping experiments. Here we present the Minimum Information about a Genotyping Experiment (MIGen) standard, defining the minimum information required for reporting genotyping experiments. The MIGen standard covers experimental design, subject description, genotyping procedure, quality control and data analysis. MIGen is a registered project under MIBBI (Minimum Information for Biological and Biomedical Investigations) and is being developed by an interdisciplinary group of experts in basic biomedical science, clinical science, biostatistics and bioinformatics. To accommodate the wide variety of techniques and methodologies applied in current and future genotyping experiments, MIGen leverages foundational concepts from the Ontology for Biomedical Investigations (OBI) for the description of the various types of planned processes and implements a hierarchical document structure. The adoption of MIGen by the research community will facilitate consistent genotyping data interpretation and independent data validation. MIGen can also serve as a framework for the development of data models for capturing and storing genotyping results and experiment metadata in a structured way, to facilitate the exchange of metadata. PMID: 22180825

  20. Repeated stool sampling and use of multiple techniques enhance the sensitivity of helminth diagnosis: a cross-sectional survey in southern Lao People's Democratic Republic.

    PubMed

    Sayasone, Somphou; Utzinger, Jürg; Akkhavong, Kongsap; Odermatt, Peter

    2015-01-01

    Intestinal parasitic infections are common in Lao People's Democratic Republic (Lao PDR).
    We investigated the accuracy of the Kato-Katz (KK) technique in relation to varying stool sampling efforts, and determined the effect of the concurrent use of a quantitative formalin-ethyl acetate concentration technique (FECT) for helminth diagnosis and appraisal of concomitant infections. The study was carried out between March and May 2006 in Champasack province, southern Lao PDR. Overall, 485 individuals aged ≥6 months who provided three stool samples were included in the final analysis. All stool samples were subjected to the KK technique. Additionally, one stool sample per individual was processed by FECT. Diagnosis was done under a light microscope by experienced laboratory technicians. Analysis of three stool samples with KK plus a single FECT was considered as the diagnostic 'gold' standard and resulted in prevalence estimates of hookworm, Opisthorchis viverrini, Ascaris lumbricoides, Trichuris trichiura and Schistosoma mekongi infection of 77.9%, 65.0%, 33.4%, 26.2% and 24.3%, respectively. As expected, a single KK and a single FECT missed a considerable number of infections. While our diagnostic 'gold' standard produced results similar to those obtained by a mathematical model for most helminth infections, the 'true' prevalence predicted by the model for S. mekongi (28.1%) was somewhat higher than after multiple KK plus a single FECT (24.3%). In the current setting, triplicate KK plus a single FECT diagnosed helminth infections with high sensitivity. Hence, such a diagnostic approach might be utilised for generating high-quality baseline data, assessing anthelminthic drug efficacy and rigorous monitoring of community interventions. Copyright © 2014 Elsevier B.V. All rights reserved.
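    The sensitivity gain from repeated sampling and combined techniques can be illustrated with the usual parallel-testing approximation, in which an infection is detected if at least one of k examinations is positive. This assumes independence between examinations, which is only approximately true in practice, and the per-test sensitivity below is a placeholder.

      # Apparent detection probability when an infection counts as positive
      # if at least one of k examinations is positive (independence assumed).
      def combined_sensitivity(per_test_sensitivity, k):
          return 1 - (1 - per_test_sensitivity) ** k

      for k in (1, 2, 3, 4):   # e.g., triplicate Kato-Katz plus one FECT would be k = 4
          print(k, round(combined_sensitivity(0.55, k), 3))   # 0.55 is an illustrative per-test sensitivity
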
  21. Modeling and Control for Microgrids

    NASA Astrophysics Data System (ADS)

    Steenis, Joel

    Traditional approaches to modeling microgrids include the behavior of each inverter operating in a particular network configuration and at a particular operating point. Such models quickly become computationally intensive for large systems. Similarly, traditional approaches to control do not use advanced methodologies and suffer from poor performance and limited operating range. In this document a linear model is derived for an inverter connected to the Thevenin equivalent of a microgrid. This model is then compared to a nonlinear simulation model and analyzed using the open and closed loop systems in both the time and frequency domains. The modeling error is quantified with emphasis on its use for controller design purposes. Control design examples are given using a Glover McFarlane controller, a gain scheduled Glover McFarlane controller, and a bumpless transfer controller, which are compared to the standard droop control approach. These examples serve as a guide to illustrate the use of multi-variable modeling techniques in the context of robust controller design and show that gain scheduled MIMO control techniques can extend the operating range of a microgrid. A hardware implementation is used to compare constant gain droop controllers with Glover McFarlane controllers and shows a clear advantage of the Glover McFarlane approach.

  22. Financial Impact of PEVAR Compared With Standard Endovascular Repair in Canadian Hospitals.

    PubMed

    Roche-Nagle, Graham; Hazel, Maureen; Rajan, Dheeraj K

    2018-05-01

    The percutaneous endovascular abdominal aortic repair (PEVAR) approach is a minimally invasive technique that has demonstrated clinical benefit over the traditional surgical cut-down associated with standard endovascular abdominal aortic aneurysm (AAA) repair (EVAR). The objective of our study was to evaluate the budget impact to a Canadian hospital of changing the technique for AAA repair from the EVAR approach to the PEVAR approach. We examined the budget impact of replacing the EVAR approach with the PEVAR approach in a Canadian hospital that performs 100 endovascular AAA repairs annually. The model incorporates the costs associated with surgery, length of stay, and postoperative complications occurring within 30 days. The use of PEVAR in AAA repair is associated with increased access device costs when compared with the EVAR approach (CAD$1000 vs CAD$400). However, AAA repair completed with the PEVAR approach demonstrates reduced operating time (101 minutes vs 133 minutes), length of stay (2.2 days vs 3.5 days), time in the recovery room (174 minutes vs 193 minutes), and postoperative complications (6% vs 30%), which offset the increased device costs. The model establishes that switching to the PEVAR approach in a Canadian hospital performing 100 AAA repairs annually would result in a potential cost avoidance of CAD$245,120. A change in AAA repair technique from EVAR to PEVAR can be a cost-effective solution for Canadian hospitals. Copyright © 2017 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
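    The structure of such a budget-impact model is simple to outline: the extra device cost per PEVAR case is weighed against savings from shorter operating time, shorter stay, less recovery-room time, and fewer complications. Only the device costs and resource differences below come from the abstract; the unit costs are hypothetical placeholders, so the output will not match the reported CAD$245,120.

      # Per-case differences taken from the abstract; unit costs are hypothetical.
      n_cases = 100
      extra_device_cost = 1000 - 400          # CAD per case (PEVAR minus EVAR devices)

      or_minutes_saved = 133 - 101
      los_days_saved = 3.5 - 2.2
      recovery_minutes_saved = 193 - 174
      complication_rate_diff = 0.30 - 0.06

      COST_PER_OR_MINUTE = 30.0               # hypothetical CAD
      COST_PER_BED_DAY = 1200.0               # hypothetical CAD
      COST_PER_RECOVERY_MINUTE = 5.0          # hypothetical CAD
      COST_PER_COMPLICATION = 4000.0          # hypothetical CAD

      savings_per_case = (or_minutes_saved * COST_PER_OR_MINUTE
                          + los_days_saved * COST_PER_BED_DAY
                          + recovery_minutes_saved * COST_PER_RECOVERY_MINUTE
                          + complication_rate_diff * COST_PER_COMPLICATION)

      net_impact = n_cases * (savings_per_case - extra_device_cost)
      print(f"estimated annual cost avoidance: CAD${net_impact:,.0f}")
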
  23. Signal processing for the detection of explosive residues on varying substrates using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Morton, Kenneth D., Jr.; Torrione, Peter A.; Collins, Leslie

    2011-05-01

    Laser induced breakdown spectroscopy (LIBS) can provide rapid, minimally destructive chemical analysis of substances with the benefit of little to no sample preparation. Therefore, LIBS is a viable technology for the detection of substances of interest in near real-time fielded remote sensing scenarios. Of particular interest to military and security operations is the detection of explosive residues on various surfaces. It has been demonstrated that LIBS is capable of detecting such residues; however, the surface or substrate on which the residue is present can alter the observed spectra. Standard chemometric techniques such as principal components analysis and partial least squares discriminant analysis have previously been applied to explosive residue detection; however, the classification techniques developed on such data perform best against residue/substrate pairs that were included in model training and do not perform well when the residue/substrate pairs are not in the training set. Specifically, residues in the training set may not be correctly detected if they are presented on a previously unseen substrate. In this work, we explicitly model LIBS spectra resulting from the residue and substrate to attempt to separate the response from each of the two components. This separation process is performed jointly with classifier design to ensure that the resulting classifier is able to detect residues of interest without being confused by variations in the substrates. We demonstrate that the proposed classification algorithm provides improved robustness to variations in substrate compared to standard chemometric techniques for residue detection.

  24. An Objective Evaluation of Mass Scaling Techniques Utilizing Computational Human Body Finite Element Models.

    PubMed

    Davis, Matthew L; Scott Gayzik, F

    2016-10-01

    Biofidelity response corridors developed from post-mortem human subjects are commonly used in the design and validation of anthropomorphic test devices and computational human body models (HBMs). Typically, corridors are derived from a diverse pool of biomechanical data and later normalized to a target body habitus. The objective of this study was to use morphed computational HBMs to compare the ability of various scaling techniques to scale response data from a reference to a target anthropometry. HBMs are ideally suited for this type of study since they uphold the assumptions of equal density and modulus that are implicit in scaling method development.
    In total, six scaling procedures were evaluated, four from the literature (equal-stress equal-velocity, ESEV, and three variations of impulse momentum) and two which are introduced in the paper (ESEV using a ratio of effective masses, ESEV-EffMass, and a kinetic energy approach). In total, 24 simulations were performed, representing both pendulum and full body impacts for three representative HBMs. These simulations were quantitatively compared using the International Organization for Standardization (ISO) TS18571 standard. Based on these results, ESEV-EffMass achieved the highest overall similarity score (indicating that it is most proficient at scaling a reference response to a target). Additionally, ESEV was found to perform poorly for two degree-of-freedom (DOF) systems. However, the results also indicated that no single technique was clearly the most appropriate for all scenarios.

  25. A new technique for quantitative analysis of hair loss in mice using grayscale analysis.

    PubMed

    Ponnapakkam, Tulasi; Katikaneni, Ranjitha; Gulati, Rohan; Gensure, Robert

    2015-03-09

    Alopecia is a common form of hair loss which can occur in many different conditions, including male-pattern hair loss, polycystic ovarian syndrome, and alopecia areata. Alopecia can also occur as a side effect of chemotherapy in cancer patients. In this study, our goal was to develop a consistent and reliable method to quantify hair loss in mice, which will allow investigators to accurately assess and compare new therapeutic approaches for these various forms of alopecia. The method utilizes a standard gel imager to obtain and process images of mice, measuring the light absorption, which occurs in rough proportion to the amount of black (or gray) hair on the mouse. Data that have been quantified in this fashion can then be analyzed using standard statistical techniques (e.g., ANOVA, t-test). This methodology was tested in mouse models of chemotherapy-induced alopecia, alopecia areata and alopecia from waxing. In this report, the detailed protocol is presented for performing these measurements, including validation data from the C57BL/6 and C3H/HeJ strains of mice. This new technique offers a number of advantages, including relative simplicity of application, reliance on equipment which is readily available in most research laboratories, and an objective, quantitative assessment which is more robust than subjective evaluations. Improvements in quantification of hair growth in mice will improve the study of alopecia models and facilitate the evaluation of promising new therapies in preclinical studies.
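    The grayscale method described in the preceding record reduces to averaging pixel darkness over a region of interest on each mouse image and comparing groups with standard statistics. The sketch below uses synthetic image arrays as stand-ins for gel-imager photographs; the image sizes, intensities, and ROI are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(3)

      def hair_score(image, roi_mask):
          """Mean darkness (0 = white, 1 = black) over the region of interest."""
          roi = image[roi_mask]
          return 1.0 - roi.mean() / 255.0

      # Synthetic 8-bit images standing in for imaged mice: darker ROI = more (black) hair.
      roi_mask = np.zeros((100, 100), dtype=bool)
      roi_mask[30:70, 30:70] = True

      control = [np.clip(rng.normal(60, 10, (100, 100)), 0, 255) for _ in range(6)]    # furry
      treated = [np.clip(rng.normal(180, 10, (100, 100)), 0, 255) for _ in range(6)]   # hair loss

      control_scores = [hair_score(img, roi_mask) for img in control]
      treated_scores = [hair_score(img, roi_mask) for img in treated]
      print("control mean:", round(np.mean(control_scores), 2),
            "treated mean:", round(np.mean(treated_scores), 2))
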
  26. Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model.

    PubMed

    Lee, Sangyeol; Reinhardt, Joseph M; Cattin, Philippe C; Abràmoff, Michael D

    2010-08-01

    Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image registration technique to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating simulated retinal images by modeling the geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation process that can be used for any retinal image registration method by tracing through the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. The proposed method can be used to perform an accuracy evaluation over the whole image, so that distortion in the non-overlapping regions of the montage components can be easily assessed. We demonstrate the technique by generating test image sets with a variety of overlap conditions and compare the accuracy of several retinal image registration models. Copyright 2010 Elsevier B.V. All rights reserved.

  27. 48 CFR 9904.402-50 - Techniques for application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.402-50 Techniques for application.
    (a) The Fundamental...

  28. Digital Mapping Techniques '05--Workshop Proceedings, Baton Rouge, Louisiana, April 24-27, 2005

    USGS Publications Warehouse

    Soller, David R.

    2005-01-01

    Introduction: The Digital Mapping Techniques '05 (DMT'05) workshop was attended by more than 100 technical experts from 47 agencies, universities, and private companies, including representatives from 25 state geological surveys (see Appendix A). This workshop was similar in nature to the previous eight meetings, held in Lawrence, Kansas (Soller, 1997), in Champaign, Illinois (Soller, 1998), in Madison, Wisconsin (Soller, 1999), in Lexington, Kentucky (Soller, 2000), in Tuscaloosa, Alabama (Soller, 2001), in Salt Lake City, Utah (Soller, 2002), in Millersville, Pennsylvania (Soller, 2003), and in Portland, Oregon (Soller, 2004). This year's meeting was hosted by the Louisiana Geological Survey, from April 24-27, 2005, on the Louisiana State University campus in Baton Rouge, Louisiana. As in the previous meetings, the objective was to foster informal discussion and exchange of technical information. It is with great pleasure I note that the objective was successfully met, as attendees continued to share and exchange knowledge and information, and to renew friendships and collegial work begun at past DMT workshops. Each DMT workshop has been coordinated by the Association of American State Geologists (AASG) and U.S. Geological Survey (USGS) Data Capture Working Group, which was formed in August 1996 to support the AASG and the USGS in their effort to build a National Geologic Map Database (see Soller and Berg, this volume, and http://ngmdb.usgs.gov/info/standards/datacapt/). The Working Group was formed because increased production efficiencies, standardization, and quality of digital map products were needed for the database, and for the State and Federal geological surveys, to provide more high-quality digital maps to the public. At the 2005 meeting, oral and poster presentations and special discussion sessions emphasized: 1) methods for creating and publishing map products (here, 'publishing' includes Web-based release); 2) field data capture software and techniques, including the use of LIDAR; 3) digital cartographic techniques; 4) migration of digital maps into ArcGIS Geodatabase format; 5) analytical GIS techniques; 6) continued development of the National Geologic Map Database; and 7) progress toward building and implementing a standard geologic map data model and standard science language for the U.S.
    and for North America.

  29. Neurosurgical endoscopic training via a realistic 3-dimensional model with pathology.

    PubMed

    Waran, Vicknes; Narayanan, Vairavan; Karuppiah, Ravindran; Thambynayagam, Hari Chandran; Muthusamy, Kalai Arasu; Rahman, Zainal Ariff Abdul; Kirollos, Ramez Wadie

    2015-02-01

    Training in intraventricular endoscopy is particularly challenging because the volume of cases is relatively small and the techniques involved are unlike those usually used in conventional neurosurgery. Present training models are inadequate for various reasons. Using 3-dimensional (3D) printing techniques, models with pathology can be created from an actual patient's imaging data. This technical article introduces a new training model based on a patient with hydrocephalus secondary to a pineal tumour, enabling the models to be used to simulate third ventriculostomies and pineal biopsies. Multiple models of the head of a patient with hydrocephalus were created using a 3D rapid prototyping technique. These models were modified to allow for a fluid-filled ventricular system under appropriate tension. The models were qualitatively assessed in the various steps involved in an endoscopic third ventriculostomy and intraventricular biopsy procedure, initially by 3 independent neurosurgeons and subsequently by 12 participants of an intraventricular endoscopy workshop. All 3 surgeons agreed on the ease and usefulness of these models in the teaching of endoscopic third ventriculostomy, the performance of endoscopic biopsies, and the integration of navigation with ventriculoscopy. Their overall score for the realism of the ventricular model was above average. The 12 participants of the intraventricular endoscopy workshop gave average scores between 4.0 and 4.6 out of 5 for every individual step of the procedure. Neurosurgical endoscopic training currently is a long process of stepwise training. These 3D printed models provide a realistic simulation environment for a neuroendoscopy procedure that allows safe and effective teaching of navigation and endoscopy in a standardized and repetitive fashion.

  30. Force coordination in static manipulation tasks performed using standard and non-standard grasping techniques.

    PubMed

    de Freitas, Paulo B; Jaric, Slobodan

    2009-04-01

    We evaluated the coordination of the hand grip force (GF; the normal component of the force acting at the hand-object contact area) and load force (LF; the tangential component) in a variety of grasping techniques and two LF directions. Thirteen participants exerted a continuous sinusoidal LF pattern against externally fixed handles applying both standard (i.e., using either the tips of the digits or the palms; the precision and palm grasps, respectively) and non-standard grasping techniques (using the wrists and the dorsal finger areas; the wrist and fist grasps).
    We hypothesized (1) that the non-standard grasping techniques would provide deteriorated indices of force coordination when compared with the standard ones, and (2) that the nervous system would be able to adjust GF to the differences in friction coefficients of the various skin areas used for grasping. However, most of the indices of force coordination remained similar across the tested grasping techniques, while the GF adjustments for the differences in friction coefficients (highest in the palm and lowest in the fist and wrist grasps) provided inconclusive results. As hypothesized, GF relative to the skin friction was lowest in the precision grasp, but highest in the palm grasp. Therefore, we conclude that (1) the elaborate coordination of GF and LF consistently seen across the standard grasping techniques can be generalized to the non-standard ones, while (2) the ability to adjust GF to the differences in friction of various objects using the same grasping technique cannot be fully generalized to the GF adjustment when different grasps (i.e., hand segments) are used to manipulate the same object. Due to the importance of the studied phenomena for understanding both the functional and neural control aspects of manipulation, future studies should extend the current research to transient and dynamic tasks, as well as to the general role of friction in our mechanical interactions with the environment.

  31. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information on the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
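    The two-stage idea of seeding a local, gradient-based search with a global evolutionary search can be sketched with off-the-shelf SciPy routines: a differential-evolution stage (playing the role of the genetic algorithm) followed by truncated-Newton (TNC) refinement. The quadratic "calibration" objective below is a stand-in for the groundwater model misfit, not the authors' code.

      import numpy as np
      from scipy.optimize import differential_evolution, minimize

      true_params = np.array([2.0, 0.5, 7.0])          # hypothetical transmissivity-like parameters

      def misfit(params):
          """Stand-in for the sum of squared hydraulic-head residuals of a groundwater model."""
          return float(np.sum((params - true_params) ** 2))

      bounds = [(0.1, 10.0)] * 3

      # Stage 1: global, population-based search.
      global_result = differential_evolution(misfit, bounds, seed=0, maxiter=50, tol=1e-6)

      # Stage 2: truncated-Newton refinement started from the best global candidate.
      local_result = minimize(misfit, global_result.x, method="TNC", bounds=bounds)

      print("global best:", np.round(global_result.x, 3))
      print("refined    :", np.round(local_result.x, 3), "misfit:", round(local_result.fun, 6))
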
  32. F-106 data summary and model results relative to threat criteria and protection design analysis

    NASA Technical Reports Server (NTRS)

    Pitts, F. L.; Finelli, G. B.; Perala, R. A.; Rudolph, T. H.

    1986-01-01

    The NASA F-106 has acquired considerable data on the rates-of-change of electromagnetic parameters on the aircraft surface during 690 direct lightning strikes while penetrating thunderstorms at altitudes ranging from 15,000 to 40,000 feet. These in-situ measurements have provided the basis for the first statistical quantification of the lightning electromagnetic threat to aircraft appropriate for determining lightning indirect effects on aircraft. The data are presently being used in updating previous lightning criteria and standards developed over the years from ground-based measurements. The new lightning standards will, therefore, be the first which reflect actual aircraft responses measured at flight altitudes. The modeling technique developed to interpret and understand the direct-strike electromagnetic data acquired on the F-106 provides a means to model the interaction of the lightning channel with the F-106.
    The reasonable results obtained with the model, compared to measured responses, yield confidence that the model may be credibly applied to other aircraft types and used in the prediction of internal coupling effects in the design of lightning protection for new aircraft.

  33. A random variance model for detection of differential gene expression in small microarray experiments.

    PubMed

    Wright, George W; Simon, Richard M

    2003-12-12

    Microarray techniques provide a valuable way of characterizing the molecular nature of disease. Unfortunately, expense and limited specimen availability often lead to studies with small sample sizes. This makes accurate estimation of variability difficult, since variance estimates made on a gene-by-gene basis will have few degrees of freedom, and the assumption that all genes share equal variance is unlikely to be true. We propose a model by which the within-gene variances are drawn from an inverse gamma distribution, whose parameters are estimated across all genes. This results in a test statistic that is a minor variation of those used in standard linear models. We demonstrate that the model assumptions are valid on experimental data, and that the model has more power than standard tests to pick up large changes in expression, while not increasing the rate of false positives. This method is incorporated into BRB-ArrayTools version 3.0 (http://linus.nci.nih.gov/BRB-ArrayTools.html). ftp://linus.nci.nih.gov/pub/techreport/RVM_supplement.pdf
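    The general idea of the preceding record, shrinking each gene's variance estimate toward a prior fitted across all genes before forming a t-like statistic, can be sketched as below. This is a moderated t-test in the spirit of the random variance model, not the paper's exact formula: the prior variance and prior degrees of freedom here are crude placeholders rather than fitted inverse-gamma parameters.

      import numpy as np

      rng = np.random.default_rng(4)
      n_genes, n1, n2 = 500, 4, 4
      group1 = rng.normal(0.0, 1.0, (n_genes, n1))
      group2 = rng.normal(0.0, 1.0, (n_genes, n2))
      group2[:20] += 1.5                                   # a few truly changed genes

      diff = group1.mean(1) - group2.mean(1)
      df = n1 + n2 - 2
      pooled_var = (group1.var(1, ddof=1) * (n1 - 1) + group2.var(1, ddof=1) * (n2 - 1)) / df

      # Shrink each gene's variance toward a common prior variance estimated across all genes.
      prior_var = pooled_var.mean()          # crude stand-in for the fitted inverse-gamma prior
      prior_df = 4.0                         # hypothetical prior degrees of freedom
      shrunk_var = (df * pooled_var + prior_df * prior_var) / (df + prior_df)

      t_ordinary = diff / np.sqrt(pooled_var * (1 / n1 + 1 / n2))
      t_moderated = diff / np.sqrt(shrunk_var * (1 / n1 + 1 / n2))
      print("largest ordinary |t| :", round(np.abs(t_ordinary).max(), 2))
      print("largest moderated |t|:", round(np.abs(t_moderated).max(), 2))
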
  34. Standard plane localization in ultrasound by radial component model and selective search.

    PubMed

    Ni, Dong; Yang, Xin; Chen, Xin; Chin, Chien-Ting; Chen, Siping; Heng, Pheng Ann; Li, Shengli; Qin, Jing; Wang, Tianfu

    2014-11-01

    Acquisition of the standard plane is crucial for medical ultrasound diagnosis. However, this process requires substantial experience and a thorough knowledge of human anatomy. Therefore it is very challenging for novices and even time consuming for experienced examiners. We proposed a hierarchical, supervised learning framework for automatically detecting the standard plane from consecutive 2-D ultrasound images. We tested this technique by developing a system that localizes the fetal abdominal standard plane from ultrasound video by detecting three key anatomical structures: the stomach bubble, umbilical vein and spine. We first proposed a novel radial component-based model to describe the geometric constraints of these key anatomical structures. We then introduced a novel selective search method which exploits the vessel probability algorithm to produce probable locations for the spine and umbilical vein. Next, using component classifiers trained by random forests, we detected the key anatomical structures at their probable locations within the regions constrained by the radial component-based model. Finally, a second-level classifier combined the results from the component detection to identify an ultrasound image as either a "fetal abdominal standard plane" or a "non-fetal abdominal standard plane." Experimental results on 223 fetal abdomen videos showed that the detection accuracy of our method was as high as 85.6% and significantly outperformed both the full abdomen and the separate anatomy detection methods without geometric constraints. The experimental results demonstrated that our system shows great promise for application to clinical practice. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  35. A global search inversion for earthquake kinematic rupture history: Application to the 2000 western Tottori, Japan earthquake

    USGS Publications Warehouse

    Piatanesi, A.; Cirella, A.; Spudich, P.; Cocco, M.

    2007-01-01

    We present a two-stage nonlinear technique to invert strong motion records and geodetic data to retrieve the rupture history of an earthquake on a finite fault. To account for the actual rupture complexity, the fault parameters are spatially variable peak slip velocity, slip direction, rupture time and risetime. The unknown parameters are given at the nodes of the subfaults, whereas the parameters within a subfault are allowed to vary through a bilinear interpolation of the nodal values. The forward modeling is performed with a discrete wave number technique, whose Green's functions include the complete response of the vertically varying Earth structure. During the first stage, an algorithm based on heat-bath simulated annealing generates an ensemble of models that efficiently samples the good data-fitting regions of parameter space. In the second stage (appraisal), the algorithm performs a statistical analysis of the model ensemble and computes a weighted mean model and its standard deviation. This technique, rather than simply looking at the best model, extracts the most stable features of the earthquake rupture that are consistent with the data and gives an estimate of the variability of each model parameter. We present some synthetic tests to show the effectiveness of the method and its robustness to uncertainty in the adopted crustal model. Finally, we apply this inverse technique to the well-recorded 2000 western Tottori, Japan, earthquake (Mw 6.6); we confirm that the rupture process is characterized by large slip (3-4 m) at very shallow depths but, in contrast to previous studies, we image a new slip patch (2-2.5 m) located deeper, between 14 and 18 km depth. Copyright 2007 by the American Geophysical Union.
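    The appraisal stage described in the preceding record, summarizing an ensemble of well-fitting rupture models by a misfit-weighted mean and standard deviation, can be sketched as follows. The ensemble, the misfit values, and the exponential weighting are placeholders for illustration rather than output of the annealing stage.

      import numpy as np

      rng = np.random.default_rng(5)
      n_models, n_params = 2000, 6                 # e.g., slip, rake, rupture time, rise time at fault nodes
      ensemble = rng.normal(1.0, 0.3, (n_models, n_params))    # placeholder sampled models
      misfit = rng.uniform(0.5, 2.0, n_models)                 # placeholder data misfit per model

      # Weight each model by exp(-misfit) so that better-fitting models dominate the average.
      weights = np.exp(-misfit)
      weights /= weights.sum()

      mean_model = weights @ ensemble
      std_model = np.sqrt(weights @ (ensemble - mean_model) ** 2)
      print("weighted mean model :", np.round(mean_model, 3))
      print("parameter std. dev. :", np.round(std_model, 3))
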
  36. Non-invasive assessment of bone quantity and quality in human trabeculae using scanning ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Xia, Yi

    Fractures and the associated bone fragility induced by osteoporosis and osteopenia are a widespread health threat in today's society. Early detection of the fracture risk associated with bone quantity and quality is important for both the prevention and the treatment of osteoporosis and its complications. Quantitative ultrasound (QUS) is an engineering technology for monitoring the bone quantity and quality of humans on earth and of astronauts subjected to long-duration microgravity. Factors currently limiting the acceptance of QUS technology involve precision, accuracy, reliance on a single index, and standardization. The objective of this study was to improve the accuracy and precision of an image-based QUS technique for non-invasive evaluation of trabecular bone quantity and quality by developing new techniques and understanding ultrasound/tissue interaction. Several new techniques were developed in this dissertation study, including the automatic identification of an irregular region of interest (iROI) in bone, surface topology mapping (STM), and mean scattering spacing (MSS) estimation for evaluating trabecular bone structure. In vitro results have shown that (1) the inter- and intra-observer errors in QUS measurement were reduced two- to five-fold by iROI compared to previous results; (2) the accuracy of a QUS parameter, e.g., ultrasound velocity (UV) through bone, was improved 16% by STM; and (3) the averaged trabecular spacing can be estimated by the MSS technique (r2 = 0.72, p < 0.01). The measurement errors of BUA and UV introduced by the soft tissue and cortical shells in vivo can be quantified by the developed foot model and a simplified cortical-trabecular-cortical sandwich model, which were verified by the experimental results. The mechanisms of the errors induced by the cortical and soft tissues were revealed by the model. With the developed new techniques and the understanding of sound-tissue interaction, an in vivo clinical trial and a bed rest study were performed to evaluate the performance of QUS in clinical applications. It has been demonstrated that QUS has similar performance for in vivo bone density measurement compared to the current gold-standard method (DXA), while additional information is obtained by QUS for predicting fracture risk by monitoring bone quality. The developed QUS imaging technique can be used to assess bone quantity and quality with improved accuracy and precision.
  37. Flight testing a V/STOL aircraft to identify a full-envelope aerodynamic model

    NASA Technical Reports Server (NTRS)

    Mcnally, B. David; Bach, Ralph E., Jr.

    1988-01-01

    Flight-test techniques are being used to generate a data base for identification of a full-envelope aerodynamic model of a V/STOL fighter aircraft, the YAV-8B Harrier. The flight envelope to be modeled includes hover, transition to conventional flight and back to hover, STOL operation, and normal cruise. Standard V/STOL procedures such as vertical takeoffs and landings, and short takeoffs and landings, are used to gather data in the powered-lift flight regime. Long (3 to 5 min) maneuvers which include a variety of input types are used to obtain large-amplitude control and response excitations. The aircraft is under continuous radar tracking; a laser tracker is used for V/STOL operations near the ground. Tracking data are used with state-estimation techniques to check data consistency and to derive unmeasured variables, for example, angular accelerations. A propulsion model of the YAV-8B's engine and reaction control system is used to isolate aerodynamic forces and moments for model identification. Representative V/STOL flight data are presented. The processing of a typical short takeoff and slow landing maneuver is illustrated.

  38. Mechanical Properties of Nanostructured Materials Determined Through Molecular Modeling Techniques

    NASA Technical Reports Server (NTRS)

    Clancy, Thomas C.; Gates, Thomas S.

    2005-01-01

    The potential for gains in material properties over conventional materials has motivated an effort to develop novel nanostructured materials for aerospace applications. These novel materials typically consist of a polymer matrix reinforced with particles on the nanometer length scale. In this study, molecular modeling is used to construct fully atomistic models of a carbon nanotube embedded in an epoxy polymer matrix. Functionalization of the nanotube, which consists of the introduction of direct chemical bonding between the polymer matrix and the nanotube and hence provides a load transfer mechanism, is systematically varied. The relative effectiveness of functionalization in a nanostructured material may depend on a variety of factors related to the details of the chemical bonding and the polymer structure at the nanotube-polymer interface. The objective of this modeling is to determine what influence the details of functionalization of the carbon nanotube with the polymer matrix have on the resulting mechanical properties. By considering a range of degrees of functionalization, the structure-property relationships of these materials are examined and the mechanical properties of these models are calculated using standard techniques.

  39. Exploration of MEMS G-Switches at 100-10,000 G-Levels with Redundancy

    DTIC Science & Technology

    2014-04-01
Mechanical Properties of Nanostructured Materials Determined Through Molecular Modeling Techniques

NASA Technical Reports Server (NTRS)

Clancy, Thomas C.; Gates, Thomas S.

2005-01-01

The potential for gains in material properties over conventional materials has motivated an effort to develop novel nanostructured materials for aerospace applications. These novel materials typically consist of a polymer matrix reinforced with particles on the nanometer length scale. In this study, molecular modeling is used to construct fully atomistic models of a carbon nanotube embedded in an epoxy polymer matrix. Functionalization of the nanotube, which consists of introducing direct chemical bonding between the polymer matrix and the nanotube and thereby provides a load-transfer mechanism, is systematically varied. The relative effectiveness of functionalization in a nanostructured material may depend on a variety of factors related to the details of the chemical bonding and the polymer structure at the nanotube-polymer interface. The objective of this modeling is to determine what influence the details of functionalization of the carbon nanotube with the polymer matrix have on the resulting mechanical properties. By considering a range of degrees of functionalization, the structure-property relationships of these materials are examined, and the mechanical properties of the models are calculated using standard techniques.

Exploration of MEMS G-Switches at 100-10,000 G-Levels with Redundancy

DTIC Science & Technology

2014-04-01

… The devices were fabricated on low-resistivity (<0.01 Ω-cm) silicon-on-insulator (SOI) wafers using standard micromachining techniques. …

The Importance of Practice in the Development of Statistics.

DTIC Science & Technology

1983-01-01

MRC Technical Summary Report #2471, The Importance of Practice in the Development of Statistics. The surviving fragment of the abstract lists the topics covered: component analysis, bioassay, limits for a ratio, quality control, sampling inspection, non-parametric tests, transformation theory, ARIMA time series models, sequential tests, cumulative sum charts, data-analysis plotting techniques, and a resolution of the Bayes-frequentist controversy. …
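Several of the topics listed above, cumulative sum charts in particular, are easy to make concrete. The sketch below is the textbook one-sided tabular CUSUM, not anything taken from the report itself; the reference value k = 0.5 and decision interval h = 5 (both in sigma units) are conventional illustrative defaults.

```python
import numpy as np

def tabular_cusum(x, target, k=0.5, h=5.0):
    """Textbook tabular CUSUM for detecting a shift in the process mean.

    x      : observations (assumed scaled so that sigma = 1)
    target : in-control mean
    k      : reference value (allowance), sigma units
    h      : decision interval; an alarm is raised when C+ or C- exceeds h
    Returns the upper/lower CUSUM paths and the indices of alarm points.
    """
    c_plus = np.zeros(len(x))
    c_minus = np.zeros(len(x))
    for i, xi in enumerate(x):
        prev_p = c_plus[i - 1] if i else 0.0
        prev_m = c_minus[i - 1] if i else 0.0
        c_plus[i] = max(0.0, xi - (target + k) + prev_p)    # accumulates upward deviations
        c_minus[i] = max(0.0, (target - k) - xi + prev_m)   # accumulates downward deviations
    alarms = np.where((c_plus > h) | (c_minus > h))[0]
    return c_plus, c_minus, alarms

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # In-control data followed by a 1-sigma upward shift in the mean.
    x = np.concatenate([rng.normal(10.0, 1.0, 30), rng.normal(11.0, 1.0, 30)])
    _, _, alarms = tabular_cusum(x, target=10.0)
    print("first alarm at observation:", alarms[0] if alarms.size else "none")
```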