Sample records for existing conventional methods

  1. Why conventional detection methods fail in identifying the existence of contamination events.

    PubMed

    Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han

    2016-04-15

    Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variation. This analysis revealed why the conventional MED and LPF methods failed to identify the existence of contamination events. The analysis also revealed that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nawrocki, G.J.; Seaver, C.L.; Kowalkowski, J.B.

    As controls needs at the Advanced Photon Source matured from an installation phase to an operational phase, the need to monitor the existing conventional facilities control system with the EPICS-based accelerator control system was realized. This existing conventional facilities control network is based on a proprietary system from Johnson Controls called Metasys. Initially read-only monitoring of the Metasys parameters will be provided; however, the ability for possible future expansion to full control is available. This paper describes a method of using commercially available hardware and existing EPICS software as a bridge between the Metasys and EPICS control systems.

  3. BAYESIAN META-ANALYSIS ON MEDICAL DEVICES: APPLICATION TO IMPLANTABLE CARDIOVERTER DEFIBRILLATORS

    PubMed Central

    Youn, Ji-Hee; Lord, Joanne; Hemming, Karla; Girling, Alan; Buxton, Martin

    2012-01-01

    Objectives: The aim of this study is to describe and illustrate a method to obtain early estimates of the effectiveness of a new version of a medical device. Methods: In the absence of empirical data, expert opinion may be elicited on the expected difference between the conventional and modified devices. Bayesian Mixed Treatment Comparison (MTC) meta-analysis can then be used to combine this expert opinion with existing trial data on earlier versions of the device. We illustrate this approach for a new four-pole implantable cardioverter defibrillator (ICD) compared with conventional ICDs, Class III anti-arrhythmic drugs, and conventional drug therapy for the prevention of sudden cardiac death in high risk patients. Existing RCTs were identified from a published systematic review, and we elicited opinion on the difference between four-pole and conventional ICDs from experts recruited at a cardiology conference. Results: Twelve randomized controlled trials were identified. Seven experts provided valid probability distributions for the new ICDs compared with current devices. The MTC model resulted in estimated relative risks of mortality of 0.74 (0.60–0.89) (predictive relative risk [RR] = 0.77 [0.41–1.26]) and 0.83 (0.70–0.97) (predictive RR = 0.84 [0.55–1.22]) with the new ICD therapy compared to Class III anti-arrhythmic drug therapy and conventional drug therapy, respectively. These results showed negligible differences from the preliminary results for the existing ICDs. Conclusions: The proposed method incorporating expert opinion to adjust for a modification made to an existing device may play a useful role in assisting decision makers to make early informed judgments on the effectiveness of frequently modified healthcare technologies. PMID:22559753

  4. Strain gage measurement errors in the transient heating of structural components

    NASA Technical Reports Server (NTRS)

    Richards, W. Lance

    1993-01-01

    Significant strain-gage errors may exist in measurements acquired in transient thermal environments if conventional correction methods are applied. Conventional correction theory was modified and a new experimental method was developed to correct indicated strain data for errors created in radiant heating environments ranging from 0.6 °C/sec (1 °F/sec) to over 56 °C/sec (100 °F/sec). In some cases the new and conventional methods differed by as much as 30 percent. Experimental and analytical results were compared to demonstrate the new technique. For heating conditions greater than 6 °C/sec (10 °F/sec), the indicated strain data corrected with the developed technique compared much better with analysis than the same data corrected with the conventional technique.

  5. Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method

    NASA Astrophysics Data System (ADS)

    Mehl, S.

    2012-12-01

    Jacobian Free Newton Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill is examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Comparisons of convergence and computer simulation time are made between conventional iteratively coupled methods based on Picard iteration and those formulated with JFNK to gain insights on the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
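
    The heart of JFNK is that the Krylov solver never needs the Jacobian explicitly, only its action on a vector, which can be approximated by perturbing the black-box residual function. A minimal sketch of that idea follows, assuming a SciPy-based solver; the function names are illustrative, not from the paper:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(residual, u):
    """One Newton step of a Jacobian-free Newton-Krylov iteration.

    `residual` is a black-box function F(u) -> R^n, e.g. the stacked
    residual vectors of two iteratively coupled flow models.
    """
    F0 = residual(u)
    norm_u = np.linalg.norm(u)

    def jv(v):
        # Frechet derivative approximated by a forward difference; picking
        # eps well is the non-trivial part the abstract mentions.
        eps = np.sqrt(np.finfo(float).eps) * (1.0 + norm_u) / max(np.linalg.norm(v), 1e-30)
        return (residual(u + eps * v) - F0) / eps

    J = LinearOperator((u.size, u.size), matvec=jv)
    du, info = gmres(J, -F0)  # in practice, precondition each process separately
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return u + du
```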

  6. Application of Coamplification at Lower Denaturation Temperature-PCR Sequencing for Early Detection of Antiviral Drug Resistance Mutations of Hepatitis B Virus

    PubMed Central

    Wong, Danny Ka-Ho; Tsoi, Ottilia; Huang, Fung-Yu; Seto, Wai-Kay; Fung, James; Lai, Ching-Lung

    2014-01-01

    Nucleoside/nucleotide analogue therapy for the treatment of chronic hepatitis B virus (HBV) infection is hampered by the emergence of drug resistance mutations. Conventional PCR sequencing cannot detect minor variants present at <20% of the viral population. We developed a modified co-amplification at lower denaturation temperature-PCR (COLD-PCR) method for the detection of HBV minority drug resistance mutations. The critical denaturation temperature for COLD-PCR was determined to be 78°C. The sensitivity of COLD-PCR sequencing was determined using serially diluted plasmids containing mixed proportions of HBV reverse transcriptase (rt) wild-type and mutant sequences. Conventional PCR sequencing detected mutations only if they existed in ≥25% of the viral population, whereas COLD-PCR sequencing detected mutations when they existed in 5 to 10%. The performance of COLD-PCR was compared to conventional PCR sequencing and a line probe assay (LiPA) using 215 samples obtained from 136 lamivudine- or telbivudine-treated patients with virological breakthrough. Among these 215 samples, drug resistance mutations were detected in 155 (72%), 148 (69%), and 113 samples (53%) by LiPA, COLD-PCR, and conventional PCR sequencing, respectively. Nineteen (9%) samples had mutations detectable by COLD-PCR but not LiPA, while 26 (12%) samples had mutations detectable by LiPA but not COLD-PCR, indicating that the two methods were comparable (P = 0.371). COLD-PCR was more sensitive than conventional PCR sequencing: thirty-five (16%) samples had mutations detectable by COLD-PCR but not conventional PCR sequencing, while none had mutations detected by conventional PCR sequencing but not COLD-PCR (P < 0.0001). COLD-PCR sequencing is a simple method which is comparable to LiPA and superior to conventional PCR sequencing in detecting minor lamivudine/telbivudine resistance mutations. PMID:24951803

  7. Evaluation of the marginal fit of single-unit, complete-coverage ceramic restorations fabricated after digital and conventional impressions: A systematic review and meta-analysis.

    PubMed

    Tsirogiannis, Panagiotis; Reissmann, Daniel R; Heydecke, Guido

    2016-09-01

    In existing published reports, some studies indicate the superiority of digital impression systems in terms of the marginal accuracy of ceramic restorations, whereas others show that the conventional method provides restorations with better marginal fit than fully digital fabrication. Which impression method provides the lowest mean values for marginal adaptation is therefore inconclusive. The findings from those studies cannot be easily generalized, and in vivo studies that could provide valid and meaningful information are scarce in the existing literature. The purpose of this study was to systematically review existing reports and evaluate the marginal fit of ceramic single-tooth restorations after either digital or conventional impression methods by combining the available evidence in a meta-analysis. The search strategy for this systematic review was based on a Population, Intervention, Comparison, and Outcome (PICO) framework. For the statistical analysis, the mean marginal fit values of each study were extracted and categorized according to the impression method to calculate the mean value, together with the 95% confidence interval (CI) of each category, and to evaluate the impact of each impression method on the marginal adaptation by comparing digital and conventional techniques separately for in vitro and in vivo studies. Twelve studies were included in the meta-analysis from the 63 records identified by the database search. For the in vitro studies, where ceramic restorations were fabricated after conventional impressions, the mean value of the marginal fit was 58.9 μm (95% CI: 41.1-76.7 μm), whereas after digital impressions, it was 63.3 μm (95% CI: 50.5-76.0 μm). In the in vivo studies, the mean marginal discrepancy of the restorations after digital impressions was 56.1 μm (95% CI: 46.3-65.8 μm), whereas after conventional impressions, it was 79.2 μm (95% CI: 59.6-98.9 μm). No significant difference was observed regarding the marginal discrepancy of single-unit ceramic restorations fabricated after digital or conventional impressions. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
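
    For context, pooled means with 95% CIs of the kind quoted above are conventionally obtained by inverse-variance weighting; the abstract does not state the exact meta-analytic model, so the following fixed-effect sketch is only an assumed illustration:

```python
import numpy as np

def pooled_mean(means, ci_half_widths):
    """Fixed-effect (inverse-variance) pooled mean with a 95% CI.

    `ci_half_widths` are the half-widths of each study's 95% CI, from
    which per-study standard errors are recovered (half-width / 1.96).
    """
    se = np.asarray(ci_half_widths) / 1.96
    w = 1.0 / se**2                                  # inverse-variance weights
    m = np.sum(w * np.asarray(means)) / np.sum(w)    # pooled mean
    se_pooled = np.sqrt(1.0 / np.sum(w))
    return m, (m - 1.96 * se_pooled, m + 1.96 * se_pooled)
```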

  8. Some Findings Concerning Requirements in Agile Methodologies

    NASA Astrophysics Data System (ADS)

    Rodríguez, Pilar; Yagüe, Agustín; Alarcón, Pedro P.; Garbajosa, Juan

    Agile methods have appeared as an attractive alternative to conventional methodologies. These methods try to reduce the time to market and, indirectly, the cost of the product through flexible development and deep customer involvement. The processes related to requirements have been extensively studied in the literature, in most cases within the frame of conventional methods. However, the conclusions drawn for conventional methodologies are not necessarily valid for Agile; on some issues, conventional and Agile processes are radically different. As recent surveys report, inadequate project requirements are one of the most problematic issues in agile approaches, and a better understanding of this is needed. This paper describes some findings concerning requirements activities in a project developed under an agile methodology. The project intended to evolve an existing product and, therefore, some background information was available. The major difficulties encountered were related to non-functional needs and the management of requirements dependencies.

  9. Measuring Teacher Classroom Management Skills: A Comparative Analysis of Distance Trained and Conventional Trained Teachers

    ERIC Educational Resources Information Center

    Henaku, Christina Bampo; Pobbi, Michael Asamani

    2017-01-01

    Many researchers and educationists remain skeptical about the effectiveness of distance learning programs and have termed them second to the conventional training method. This perception is largely due to several challenges which exist within the management of distance learning programs across the country. The general aim of the study is to compare the…

  10. Emergent surgical airway: comparison of the three-step method and conventional cricothyroidotomy utilizing high-fidelity simulation.

    PubMed

    Quick, Jacob A; MacIntyre, Allan D; Barnes, Stephen L

    2014-02-01

    Surgical airway creation has a high potential for disaster. Conventional methods can be cumbersome and require special instruments. A simple method utilizing three steps and readily available equipment exists, but has yet to be adequately tested. Our objective was to compare conventional cricothyroidotomy with the three-step method utilizing high-fidelity simulation. Utilizing a high-fidelity simulator, 12 experienced flight nurses and paramedics performed both methods after a didactic lecture, simulator briefing, and demonstration of each technique. Six participants performed the three-step method first, and the remaining 6 performed the conventional method first. Each participant was filmed and timed. We analyzed videos with respect to the number of hand repositions, number of airway instrumentations, and technical complications. Times to successful completion were measured from incision to balloon inflation. The three-step method was completed faster (52.1 s vs. 87.3 s; p = 0.007) as compared with conventional surgical cricothyroidotomy. The two methods did not differ statistically regarding number of hand movements (3.75 vs. 5.25; p = 0.12) or instrumentations of the airway (1.08 vs. 1.33; p = 0.07). The three-step method resulted in 100% successful airway placement on the first attempt, compared with 75% of the conventional method (p = 0.11). Technical complications occurred more with the conventional method (33% vs. 0%; p = 0.05). The three-step method, using an elastic bougie with an endotracheal tube, was shown to require fewer total hand movements, took less time to complete, resulted in more successful airway placement, and had fewer complications compared with traditional cricothyroidotomy. Published by Elsevier Inc.

  11. Fibre Optic Sensors for Selected Wastewater Characteristics

    PubMed Central

    Chong, Su Sin; Abdul Aziz, A. R.; Harun, Sulaiman W.

    2013-01-01

    Demand for online and real-time measurement techniques to meet environmental regulation and treatment compliance is increasing. However, the conventional techniques, which involve scheduled sampling and chemical analysis, can be expensive and time consuming. Cheaper and faster alternatives to conventional methods for monitoring wastewater characteristics are therefore required. This paper reviews existing conventional techniques and optical and fibre optic sensors (FOS) for determining selected wastewater characteristics, namely colour, Chemical Oxygen Demand (COD) and Biological Oxygen Demand (BOD). The review confirms that, with appropriate configuration, calibration and fibre features, these parameters can be determined with accuracy comparable to conventional methods. With more research in this area, the potential for using FOS for online and real-time measurement of more wastewater parameters for various types of industrial effluent is promising. PMID:23881131

  12. The possibility of applying spectral redundancy in DWDM systems on existing long-distance FOCLs for increasing the data transmission rate and decreasing nonlinear effects and double Rayleigh scattering without changes in the communication channel

    NASA Astrophysics Data System (ADS)

    Nekuchaev, A. O.; Shuteev, S. A.

    2014-04-01

    A new method of data transmission in DWDM systems along existing long-distance fiber-optic communication lines is proposed. The existing method, e.g., uses 32 wavelengths in the NRZ code with an average power of 16 conventional units (on average, 16 ones and 16 zeros) and a transmission rate of 32 bits/cycle. In the new method, one of 124 wavelengths with a duration of one cycle each (with no more than 16 distinct wavelengths present at any time instant) and a capacity of 4 bits is transmitted at every 1/16-cycle instant, giving an average power of 15 conventional units and a rate of 64 bits/cycle. Cross modulation and double Rayleigh scattering are significantly decreased owing to the uniform distribution of power over time at different wavelengths. The time redundancy (forward error correction (FEC)) is about 7% and allows one to achieve a coding gain of about 6 dB by detecting and removing erasures and errors simultaneously.
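
    The claimed doubling of the rate follows from simple arithmetic; this is my reading of the abstract's numbers, not the authors' notation:

```latex
% Conventional NRZ: 32 wavelengths, each carrying 1 bit per cycle.
R_{\mathrm{NRZ}} = 32 \times 1 = 32\ \text{bits/cycle}
% Proposed scheme: 16 slots of 1/16 cycle per cycle, each slot selecting a
% wavelength choice worth \log_2 16 = 4 bits.
R_{\mathrm{new}} = 16 \times 4 = 64\ \text{bits/cycle}
```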

  13. A Rapid Method for Measuring Strontium-90 Activity in Crops in China

    NASA Astrophysics Data System (ADS)

    Pan, Lingjing; Yu, Guobing; Wen, Deyun; Chen, Zhi; Sheng, Liusi; Liu, Chung-King; Xu, X. George

    2017-09-01

    A rapid method for measuring Sr-90 activity in crop ashes is presented. Liquid scintillation counting, combined with ion exchange columns of 4,4'(5')-di-t-butylcyclohexano-18-crown-6, is used to determine the activity of Sr-90 in crops. The yields of the chemical procedure are quantified using gravimetric analysis. The conventional method that uses ion-exchange resin with HDEHP cannot completely remove all the bismuth when comparatively large amounts of lead and bismuth exist in the samples. This is overcome by the rapid method. The chemical yield of this method is about 60% and the MDA for Sr-90 is found to be 2.32 Bq/kg. The whole procedure, together with using spectrum analysis to determine the activity, takes only about one day, which is a large improvement over the conventional method. A modified conventional method is also described here to verify the results of the rapid one. These two methods can meet the different needs of daily monitoring and emergency situations.
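
    The abstract does not give the expression behind the quoted MDA of 2.32 Bq/kg; the standard Currie-type formula, which such analyses usually apply, has the form:

```latex
% Standard Currie expression for minimum detectable activity (assumed, not
% quoted from the paper); N_b = background counts, \varepsilon = counting
% efficiency, Y = chemical yield (about 0.60 here), t = counting time (s),
% m = sample mass (kg).
\mathrm{MDA} = \frac{2.71 + 4.65 \sqrt{N_b}}{\varepsilon \, Y \, t \, m}
```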

  14. Droplet Microarray Based on Superhydrophobic-Superhydrophilic Patterns for Single Cell Analysis.

    PubMed

    Jogia, Gabriella E; Tronser, Tina; Popova, Anna A; Levkin, Pavel A

    2016-12-09

    Single-cell analysis provides fundamental information on individual cell response to different environmental cues and is of growing interest in cancer and stem cell research. However, existing methods still face challenges in performing such analysis in a high-throughput manner whilst being cost-effective. Here we established the Droplet Microarray (DMA) as a miniaturized screening platform for high-throughput single-cell analysis. Using the method of limiting dilution and varying cell density and seeding time, we optimized the distribution of single cells on the DMA. We established culturing conditions for single cells in individual droplets on DMA, obtaining survival of nearly 100% of single cells and a doubling time of single cells comparable with that of cells cultured in bulk population using conventional methods. Our results demonstrate that the DMA is a suitable platform for single-cell analysis, which carries a number of advantages compared with existing technologies, allowing for treatment, staining and spot-to-spot analysis of single cells over time using conventional analysis methods such as microscopy.

  15. Iterative methods for dose reduction and image enhancement in tomography

    DOEpatents

    Miao, Jianwei; Fahimian, Benjamin Pooya

    2012-09-18

    A system and method for creating a three-dimensional cross-sectional image of an object by the reconstruction of its projections that have been iteratively refined through modification in object space and Fourier space is disclosed. The invention provides systems and methods for use with any tomographic imaging system that reconstructs an object from its projections. In one embodiment, the invention presents a method to eliminate interpolations present in conventional tomography. The method has been experimentally shown to provide higher resolution and improved image quality parameters over existing approaches. A primary benefit of the method is radiation dose reduction, since the invention can produce an image of a desired quality with fewer projections than conventional methods require.
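
    A minimal sketch of the object-space/Fourier-space iteration the patent describes may help; variable names are mine and the patented resampling details are omitted:

```python
import numpy as np

def iterative_recon(measured_ft, mask, n_iter=200):
    """Sketch of alternating constraints in Fourier and object space.

    `measured_ft` holds Fourier samples of the object at the grid points
    where `mask` is True (e.g. lines derived from the projections).
    """
    obj = np.zeros(measured_ft.shape)
    for _ in range(n_iter):
        F = np.fft.fft2(obj)
        F[mask] = measured_ft[mask]   # Fourier-space constraint: match the data
        obj = np.real(np.fft.ifft2(F))
        obj[obj < 0] = 0.0            # object-space constraint: positivity
    return obj
```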

  16. Multi-layer coatings

    DOEpatents

    Maghsoodi, Sina; Brophy, Brenor L.; Abrams, Ze'ev R.; Gonsalves, Peter R.

    2016-06-28

    Disclosed herein are coating materials and methods for applying a top-layer coating that is durable, abrasion resistant, highly transparent, hydrophobic, low-friction, moisture-sealing, anti-soiling, and self-cleaning to an existing conventional high temperature anti-reflective coating. The top coat imparts superior durability performance and new properties to the underlying conventional high temperature anti-reflective coating without reducing the anti-reflectiveness of the coating. Methods and data for optimizing the relative thicknesses of the under-layer high temperature anti-reflective coating and the top layer for optical performance are also disclosed.

  17. Application of Electrical Resistivity Method (ERM) in Groundwater Exploration

    NASA Astrophysics Data System (ADS)

    Izzaty Riwayat, Akhtar; Nazri, Mohd Ariff Ahmad; Hazreek Zainal Abidin, Mohd

    2018-04-01

    Geophysical methods, traditionally the domain of geophysicists, have become some of the most popular methods applied by engineers in civil engineering fields. The Electrical Resistivity Method (ERM) is a geophysical tool that offers a very attractive technique for subsurface profile characterization over large areas. An applicable alternative technique in groundwater exploration such as ERM, used to complement existing conventional methods, may produce comprehensive and convincing output and thus be effective in terms of cost, time, data coverage and sustainability. ERM has been applied in a variety of groundwater exploration applications. Over the years, conventional methods such as excavation and test boring have been the tools used to obtain information on earth layers, especially during site investigation. There are several problems with the conventional techniques, as they provide information only at the actual drilling points. This review paper was carried out to present the application of ERM in groundwater exploration. Results from ERM can provide additional information to the respective experts for problem solving, such as information on groundwater pollution, leachate, and underground sources of water supply.

  18. Mapping Diffuse Seismicity Using Empirical Matched Field Processing Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J; Templeton, D C; Harris, D B

    The objective of this project is to detect and locate more microearthquakes using the empirical matched field processing (MFP) method than can be detected using only conventional earthquake detection techniques. We propose that empirical MFP can complement existing catalogs and techniques. We test our method on continuous seismic data collected at the Salton Sea Geothermal Field during November 2009 and January 2010. In the Southern California Earthquake Data Center (SCEDC) earthquake catalog, 619 events were identified in our study area during this time frame, and our MFP technique identified 1094 events. Therefore, we believe that the empirical MFP method combined with conventional methods significantly improves the network detection ability in an efficient manner.
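
    Empirical MFP is, at its core, detection by correlation against templates derived from calibration events. As a rough time-domain analogue (the real method works with frequency-domain array steering vectors), a sliding normalized correlation detector looks like this; all names are illustrative:

```python
import numpy as np

def template_detector(data, template, threshold=0.7):
    """Approximate normalized sliding cross-correlation of continuous data
    against a master-event waveform, returning candidate detection indices.
    """
    n = template.size
    t = (template - template.mean()) / (template.std() * n)
    d = data - data.mean()
    scores = np.correlate(d, t, mode="valid")
    # normalize by the sliding RMS of the data (global mean used for brevity)
    c = np.cumsum(np.concatenate(([0.0], d**2)))
    sigma = np.sqrt((c[n:] - c[:-n]) / n) + 1e-12
    return np.flatnonzero(scores / sigma > threshold)
```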

  19. Technology Assessment Report: Aqueous Sludge Gasification Technologies

    EPA Science Inventory

    The study reveals that sludge gasification is a potentially suitable alternative to conventional sludge handling and disposal methods. However, very few commercial operations are in existence. The limited pilot, demonstration or commercial application of gasification technology t...

  20. Matrix completion by deep matrix factorization.

    PubMed

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

    Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers have attempted to incorporate nonlinear techniques into matrix completion, but considerable limitations still exist. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods and that DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
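
    A minimal sketch of the DMF idea as the abstract states it, with latent inputs and network weights optimized jointly against the observed entries, might look as follows; PyTorch is used for convenience, and the architecture and hyperparameters are assumptions of mine:

```python
import torch

def deep_matrix_factorization(X, mask, d=10, hidden=64, steps=2000, lr=1e-2):
    """Fit latent variables Z and a nonlinear decoder so the network output
    matches X on observed entries; missing entries are read off the output.
    """
    n, m = X.shape
    Z = torch.randn(n, d, requires_grad=True)          # unknown latent variables
    net = torch.nn.Sequential(                         # nonlinear decoder
        torch.nn.Linear(d, hidden), torch.nn.Tanh(),
        torch.nn.Linear(hidden, m),
    )
    opt = torch.optim.Adam([Z] + list(net.parameters()), lr=lr)
    X_t = torch.as_tensor(X, dtype=torch.float32)
    mask_t = torch.as_tensor(mask)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(Z) - X_t)[mask_t] ** 2).mean()    # loss on observed entries only
        loss.backward()
        opt.step()
    return net(Z).detach().numpy()                     # completed matrix
```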

  1. Classification of hyperspectral imagery with neural networks: comparison to conventional tools

    NASA Astrophysics Data System (ADS)

    Merényi, Erzsébet; Farrand, William H.; Taranik, James V.; Minor, Timothy B.

    2014-12-01

    Efficient exploitation of hyperspectral imagery is of great importance in remote sensing. Artificial intelligence approaches have been receiving favorable reviews for classification of hyperspectral data because the complexity of such data challenges the limitations of many conventional methods. Artificial neural networks (ANNs) were shown to outperform traditional classifiers in many situations. However, studies that use the full spectral dimensionality of hyperspectral images to classify a large number of surface covers are scarce, if not non-existent. We advocate the need for methods that can handle the full dimensionality and a large number of classes to retain the discovery potential and the ability to discriminate classes with subtle spectral differences. We demonstrate that such a method exists in the family of ANNs. We compare the maximum likelihood, Mahalanobis distance, minimum distance, spectral angle mapper, and a hybrid ANN classifier for real hyperspectral AVIRIS data, using the full spectral resolution to map 23 cover types and using a small training set. Rigorous evaluation of the classification accuracies shows that the ANN outperforms the other methods and achieves ≈90% accuracy on test data.
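
    Of the conventional classifiers listed, the spectral angle mapper is compact enough to sketch; this is the generic form, not necessarily the study's exact implementation:

```python
import numpy as np

def spectral_angle_mapper(pixels, references):
    """Assign each pixel the class whose reference spectrum subtends the
    smallest angle with it.

    pixels: (n_pixels, n_bands); references: (n_classes, n_bands)
    """
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = references / np.linalg.norm(references, axis=1, keepdims=True)
    angles = np.arccos(np.clip(p @ r.T, -1.0, 1.0))  # (n_pixels, n_classes)
    return angles.argmin(axis=1)
```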

  2. A New Correction Technique for Strain-Gage Measurements Acquired in Transient-Temperature Environments

    NASA Technical Reports Server (NTRS)

    Richards, W. Lance

    1996-01-01

    Significant strain-gage errors may exist in measurements acquired in transient-temperature environments if conventional correction methods are applied. As heating or cooling rates increase, temperature gradients between the strain-gage sensor and substrate surface increase proportionally. These temperature gradients introduce strain-measurement errors that are currently neglected in both conventional strain-correction theory and practice. Therefore, the conventional correction theory has been modified to account for these errors. A new experimental method has been developed to correct strain-gage measurements acquired in environments experiencing significant temperature transients. The new correction technique has been demonstrated through a series of tests in which strain measurements were acquired for temperature-rise rates ranging from 1 to greater than 100 degrees F/sec. Strain-gage data from these tests have been corrected with both the new and conventional methods and then compared with an analysis. Results show that, for temperature-rise rates greater than 10 degrees F/sec, the strain measurements corrected with the conventional technique produced strain errors that deviated from analysis by as much as 45 percent, whereas results corrected with the new technique were in good agreement with analytical results.

  3. Precision enhancement of pavement roughness localization with connected vehicles

    NASA Astrophysics Data System (ADS)

    Bridgelall, R.; Huang, Y.; Zhang, Z.; Deng, F.

    2016-02-01

    Transportation agencies rely on the accurate localization and reporting of roadway anomalies that could pose serious hazards to the traveling public. However, the cost and technical limitations of present methods prevent their scaling to all roadways. Connected vehicles with on-board accelerometers and conventional geospatial position receivers offer an attractive alternative because of their potential to monitor all roadways in real-time. The conventional global positioning system is ubiquitous and essentially free to use but it produces impractically large position errors. This study evaluated the improvement in precision achievable by augmenting the conventional geo-fence system with a standard speed bump or an existing anomaly at a pre-determined position to establish a reference inertial marker. The speed sensor subsequently generates position tags for the remaining inertial samples by computing their path distances relative to the reference position. The error model and a case study using smartphones to emulate connected vehicles revealed that the precision in localization improves from tens of metres to sub-centimetre levels, and the accuracy of measuring localized roughness more than doubles. The research results demonstrate that transportation agencies will benefit from using the connected vehicle method to achieve precision and accuracy levels that are comparable to existing laser-based inertial profilers.
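
    The re-tagging step described, computing each sample's path distance relative to the reference inertial marker, reduces to integrating the speed signal; a sketch with hypothetical names:

```python
import numpy as np

def tag_positions(speed, t, ref_index, ref_position):
    """Re-tag inertial samples by path distance from a reference marker.

    `ref_index` is the sample where the known bump/anomaly was detected in
    the accelerometer signal; `ref_position` is its surveyed position (m).
    `speed` (m/s) and `t` (s) come from the vehicle's speed sensor.
    """
    # cumulative path distance via trapezoidal integration of speed
    ds = np.concatenate(([0.0], np.cumsum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))))
    return ref_position + (ds - ds[ref_index])
```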

  4. Theory and practice of conventional adventitious virus testing.

    PubMed

    Gregersen, Jens-Peter

    2011-01-01

    CONFERENCE PROCEEDING: Proceedings of the PDA/FDA Adventitious Viruses in Biologics: Detection and Mitigation Strategies Workshop in Bethesda, MD, USA; December 1-3, 2010. Guest Editors: Arifa Khan (Bethesda, MD), Patricia Hughes (Bethesda, MD) and Michael Wiebe (San Francisco, CA). For decades, conventional tests in cell cultures and in laboratory animals have served as standard methods for broad-spectrum screening for adventitious viruses. New virus detection methods based on molecular biology have broadened and improved our knowledge about potential contaminating viruses and about the suitability of the conventional test methods. This paper summarizes and discusses practical aspects of conventional test schemes, such as detectability of various viruses, questionable or false-positive results, animal numbers needed, time and cost of testing, and applicability for rapidly changing starting materials. Strategies to improve the virus safety of biological medicinal products are proposed. The strategies should be based upon a flexible application of existing and new methods along with a scientifically based risk assessment. However, testing alone does not guarantee the absence of adventitious agents and must be accompanied by virus removing or virus inactivating process steps for critical starting materials, raw materials, and for the drug product.

  5. A novel method linking neural connectivity to behavioral fluctuations: Behavior-regressed connectivity.

    PubMed

    Passaro, Antony D; Vettel, Jean M; McDaniel, Jonathan; Lawhern, Vernon; Franaszczuk, Piotr J; Gordon, Stephen M

    2017-03-01

    During an experimental session, behavioral performance fluctuates, yet most neuroimaging analyses of functional connectivity derive a single connectivity pattern. These conventional connectivity approaches assume that since the underlying behavior of the task remains constant, the connectivity pattern is also constant. We introduce a novel method, behavior-regressed connectivity (BRC), to directly examine behavioral fluctuations within an experimental session and capture their relationship to changes in functional connectivity. This method employs the weighted phase lag index (WPLI) applied to a window of trials with a weighting function. Using two datasets, the BRC results are compared to conventional connectivity results during two time windows: the one second before stimulus onset to identify predictive relationships, and the one second after onset to capture task-dependent relationships. In both tasks, we replicate the expected results for the conventional connectivity analysis, and extend our understanding of the brain-behavior relationship using the BRC analysis, demonstrating subject-specific BRC maps that correspond to both positive and negative relationships with behavior. Comparison with Existing Method(s): Conventional connectivity analyses assume a consistent relationship between behaviors and functional connectivity, but the BRC method examines performance variability within an experimental session to understand dynamic connectivity and transient behavior. The BRC approach examines connectivity as it covaries with behavior to complement the knowledge of underlying neural activity derived from conventional connectivity analyses. Within this framework, BRC may be implemented for the purpose of understanding performance variability both within and between participants. Published by Elsevier B.V.
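
    The WPLI at the core of BRC has a standard definition (Vinck et al., 2011): the magnitude of the mean imaginary cross-spectrum, normalized by the mean of its magnitude. A sketch without the behavioral weighting function that the paper layers on top; names and windowing are my own:

```python
import numpy as np

def wpli(x, y, fs):
    """Weighted phase lag index across trials for two channels.

    x, y: arrays of shape (n_trials, n_samples); returns (freqs, wpli).
    """
    win = np.hanning(x.shape[1])
    X = np.fft.rfft(x * win, axis=1)
    Y = np.fft.rfft(y * win, axis=1)
    im = np.imag(X * np.conj(Y))                 # imaginary cross-spectrum per trial
    freqs = np.fft.rfftfreq(x.shape[1], 1.0 / fs)
    return freqs, np.abs(im.mean(axis=0)) / (np.abs(im).mean(axis=0) + 1e-30)
```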

  6. Site dependent factors affecting the economic feasibility of solar powered absorption cooling

    NASA Technical Reports Server (NTRS)

    Bartlett, J. C.

    1978-01-01

    A procedure was developed to evaluate the cost effectiveness of combining an absorption cycle chiller with a solar energy system. A basic assumption of the procedure is that a solar energy system exists for meeting the heating load of the building, and that the building must be cooled. The decision to be made is to either cool the building with a conventional vapor compression cycle chiller or to use the existing solar energy system to provide a heat input to the absorption chiller. Two methods of meeting the cooling load not supplied by solar energy were considered. In the first method, heat is supplied to the absorption chiller by a boiler using fossil fuel. In the second method, the load not met by solar energy is met by a conventional vapor compression chiller. In addition, the procedure can consider waste heat as another form of auxiliary energy. Commercial applications of solar cooling with an absorption chiller were found to be more cost effective than the residential applications. In general, it was found that the larger the chiller, the more economically feasible it would be. Also, it was found that a conventional vapor compression chiller is a viable alternative for the auxiliary cooling source, especially for the larger chillers. The results of the analysis give a relative rating of the sites considered as to their economic feasibility for solar cooling.

  7. Bad data detection in two stage estimation using phasor measurements

    NASA Astrophysics Data System (ADS)

    Tarali, Aditya

    The ability of the Phasor Measurement Unit (PMU) to directly measure the system state has led to a steady increase in the use of PMUs in the past decade. However, in spite of their high accuracy and ability to measure the states directly, PMUs cannot completely replace the conventional measurement units due to high cost. Hence it is necessary for modern estimators to use both conventional and phasor measurements together. This thesis presents an alternative method to incorporate the new PMU measurements into the existing state estimator in a systematic manner such that no major modification is necessary to the existing algorithm. It is also shown that if PMUs are placed appropriately, the phasor measurements can be used with this model to detect and identify the bad data associated with critical measurements, which cannot be detected by the conventional state estimation algorithm. The developed model is tested on the IEEE 14-, IEEE 30- and IEEE 118-bus systems under various conditions.
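
    The underlying estimator is conventional weighted least squares; appending the (linear) phasor measurements as extra rows of the measurement model is what lets the existing algorithm stay untouched. A minimal sketch, not the thesis's exact formulation:

```python
import numpy as np

def wls_estimate(H, z, weights):
    """Weighted least-squares state estimate x = (H^T W H)^{-1} H^T W z.

    Conventional measurements fill the first rows of H and z; phasor
    measurements, being linear in the state, are simply appended as
    additional rows before the solve.
    """
    W = np.diag(weights)
    G = H.T @ W @ H                       # gain matrix
    x = np.linalg.solve(G, H.T @ W @ z)
    residuals = z - H @ x                 # basis for chi-square bad-data tests
    return x, residuals
```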

  8. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of an existing finite element structural analysis program and the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.

  9. CAFFEINE SPECIFICITY OF VARIOUS NON-IMPRINTED POLYMERS IN AQUEOUS MEDIA

    EPA Science Inventory

    Limitations exist in applying the conventional microbial methods to the detection of human fecal contamination in water. Certain organic compounds such as caffeine, have been reported by the U.S. Geological Survey as a more suitable tracer. The employment of caffeine has been h...

  10. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  11. Struvite scale formation and control.

    PubMed

    Parsons, S A; Doyle, J D

    2004-01-01

    Struvite scale formation is a major operational issue at both conventional and biological nutrient removal wastewater treatment plants. Factors affecting the formation of struvite scales were investigated including supersaturation, pH and pipe material and roughness. A range of control methods have been investigated including low fouling materials, pH control, inhibitor and chemical dosing. Control methods exist to reduce scale formation although each has its advantages and disadvantages.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Wu, C. F. Jeff

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend their study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  13. Minamata Convention on Mercury. Reporting obligations of the Parties to the Convention and the sources of data existing in Poland

    NASA Astrophysics Data System (ADS)

    Strzelecka-Jastrząb, Ewa

    2018-01-01

    More than 60 years after the mass poisoning of residents of the Japanese city of Minamata by seafood contaminated with mercury, the Minamata Convention on Mercury came into force on August 16, 2017. To date, the Convention has been signed by 128 States (the signatories of the Convention) and ratified by 83 States (the Parties to the Convention). The Convention imposes a number of obligations on the Parties, including a reporting obligation. The paper analyses the reporting obligations of the Parties to the Convention that apply after its entry into force, pursuant to the provisions contained therein. In addition, the existing sources of quantitative data on mercury in Poland are characterized.

  14. Interpretation of ERTS-MSS images of a Savanna area in eastern Colombia

    NASA Technical Reports Server (NTRS)

    Elberson, G. W. W.

    1973-01-01

    The application of ERTS-1 imagery for extrapolating existing soil maps into unmapped areas of the Llanos Orientales of Colombia, South America is discussed. Interpretations of ERTS-1 data were made according to conventional photointerpretation techniques. Most units delineated in the existing reconnaissance soil map at a scale of 1:250,000 could be recognized and delineated in the ERTS image. The methods of interpretation are described and the results obtained for specific areas are analyzed.

  15. Defense Small Business Innovation Research Program (SBIR) FY 1984.

    DTIC Science & Technology

    1984-01-12

    nuclear submarine non-metallic, light weight, high strength piping. Includes the development of adequate fabrication procedures for attaching pipe ... waste heat economizer methods require development. Improved conventional and hybrid heat pipes and/or two-phase transport devices are required ... DESCRIPTION: A need exists to conceive, design, fabricate and test a method of adjusting the length of the individual legs of nylon or Kevlar rope slings

  16. Embedded WENO: A design strategy to improve existing WENO schemes

    NASA Astrophysics Data System (ADS)

    van Lith, Bart S.; ten Thije Boonkkamp, Jan H. M.; IJzerman, Wilbert L.

    2017-02-01

    Embedded WENO methods utilise all adjacent smooth substencils to construct a desirable interpolation. Conventional WENO schemes under-use this possibility close to large gradients or discontinuities. We develop a general approach for constructing embedded versions of existing WENO schemes. Embedded methods based on the WENO schemes of Jiang and Shu [1] and on the WENO-Z scheme of Borges et al. [2] are explicitly constructed. Several possible choices are presented that result in either better spectral properties or a higher order of convergence for sufficiently smooth solutions. Moreover, these improvements carry over to discontinuous solutions. The embedded methods are demonstrated to be indeed improvements over their standard counterparts by several numerical examples. All the embedded methods presented incur no added computational effort compared to their standard counterparts.
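
    For reference, the nonlinear weights of the WENO-Z scheme that the embedded construction modifies take the standard fifth-order form (from Borges et al. [2], not restated in the abstract):

```latex
% d_k: ideal linear weights, \beta_k: smoothness indicators of the three
% substencils, \epsilon: small constant preventing division by zero.
\omega_k = \frac{\alpha_k}{\sum_{j=0}^{2} \alpha_j}, \qquad
\alpha_k = d_k \left( 1 + \frac{\tau_5}{\beta_k + \epsilon} \right), \qquad
\tau_5 = \lvert \beta_0 - \beta_2 \rvert .
```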

  17. Research on Swivel Construction Technology of 22,400 Tons in Zoucheng Thirty Meter Bridge

    NASA Astrophysics Data System (ADS)

    Han, Jun; Benlin, Xiao

    2018-05-01

    In recent years, with the rapid development of highways and railways in China, many new bridges have needed to cross existing routes. If conventional construction methods are used, the existing traffic is affected and the bridge must be built above busy traffic lines, posing a significant safety risk; the construction methods must therefore be improved and innovated. This paper researches and develops some key technologies of swivel construction. In accordance with the construction features, the finite element method is used to analyse the swivel cable-stayed bridge. The swivel construction process is examined to solve the technical problems and difficulties in the construction.

  18. The Effects of Kolb's Experiential Learning Model on Successful Intelligence in Secondary Agriculture Students

    ERIC Educational Resources Information Center

    Baker, Marshall A.; Robinson, J. Shane

    2016-01-01

    Experiential learning is an important pedagogical approach used in secondary agricultural education. Though anecdotal evidence supports the use of experiential learning, a paucity of empirical research exists supporting the effects of this approach when compared to a more conventional teaching method, such as direct instruction. Therefore, the…

  19. Robust backstepping control of an interlink converter in a hybrid AC/DC microgrid based on feedback linearisation method

    NASA Astrophysics Data System (ADS)

    Dehkordi, N. Mahdian; Sadati, N.; Hamzeh, M.

    2017-09-01

    This paper presents a robust dc-link voltage as well as a current control strategy for a bidirectional interlink converter (BIC) in a hybrid ac/dc microgrid. To enhance the dc-bus voltage control, conventional methods strive to measure and feedforward the load or source power in the dc-bus control scheme. However, the conventional feedforward-based approaches require remote measurement with communications. Moreover, conventional methods suffer from stability and performance issues, mainly due to the use of the small-signal-based control design method. To overcome these issues, in this paper, the power from DG units of the dc subgrid imposed on the BIC is considered an unmeasurable disturbance signal. In the proposed method, in contrast to existing methods, using the nonlinear model of BIC, a robust controller that does not need the remote measurement with communications effectively rejects the impact of the disturbance signal imposed on the BIC's dc-link voltage. To avoid communication links, the robust controller has a plug-and-play feature that makes it possible to add a DG/load to or remove it from the dc subgrid without distorting the hybrid microgrid stability. Finally, Monte Carlo simulations are conducted to confirm the effectiveness of the proposed control strategy in MATLAB/SimPowerSystems software environment.

  20. Method and apparatus for semi-solid material processing

    DOEpatents

    Han, Qingyou [Knoxville, TN; Jian, Xiaogang [Knoxville, TN; Xu, Hanbing [Knoxville, TN; Meek, Thomas T [Knoxville, TN

    2009-02-24

    A method of forming a material includes the steps of: vibrating a molten material at an ultrasonic frequency while cooling the material to a semi-solid state to form non-dendritic grains therein; forming the semi-solid material into a desired shape; and cooling the material to a solid state. The method makes semi-solid castings directly from molten materials (usually a metal), produces grain sizes usually smaller than 50 μm, and can be easily retrofitted into existing conventional forming machines.

  1. Method and apparatus for semi-solid material processing

    DOEpatents

    Han, Qingyou [Knoxville, TN; Jian, Xiaogang [Knoxville, TN; Xu, Hanbing [Knoxville, TN; Meek, Thomas T [Knoxville, TN

    2009-11-24

    A method of forming a material includes the steps of: vibrating a molten material at an ultrasonic frequency while cooling the material to a semi-solid state to form non-dendritic grains therein; forming the semi-solid material into a desired shape; and cooling the material to a solid state. The method makes semi-solid castings directly from molten materials (usually a metal), produces grain sizes usually smaller than 50 μm, and can be easily retrofitted into existing conventional forming machines.

  2. Method and apparatus for semi-solid material processing

    DOEpatents

    Han, Qingyou [Knoxville, TN; Jian, Xiaogang [Knoxville, TN; Xu, Hanbing [Knoxville, TN; Meek, Thomas T [Knoxville, TN

    2007-05-15

    A method of forming a material includes the steps of: vibrating a molten material at an ultrasonic frequency while cooling the material to a semi-solid state to form non-dendritic grains therein; forming the semi-solid material into a desired shape; and cooling the material to a solid state. The method makes semi-solid castings directly from molten materials (usually a metal), produces grain sizes usually smaller than 50 μm, and can be easily retrofitted into existing conventional forming machines.

  3. Furfural Synthesis from d-Xylose in the Presence of Sodium Chloride: Microwave versus Conventional Heating.

    PubMed

    Xiouras, Christos; Radacsi, Norbert; Sturm, Guido; Stefanidis, Georgios D

    2016-08-23

    We investigate the existence of specific/nonthermal microwave effects for the dehydration reaction of xylose to furfural in the presence of NaCl. Such effects are reported for sugar dehydration reactions in several literature reports. To this end, we adopted three approaches that compare microwave-assisted experiments with a) conventional heating experiments from the literature; b) simulated conventional heating experiments using microwave-irradiated silicon carbide (SiC) vials; and c) experiments at different power levels but the same temperature, using forced cooling. No significant differences in the reaction kinetics are observed using any of these methods. However, microwave heating still proves advantageous, as it requires 30% less forward power compared to conventional heating (SiC vial) to achieve the same furfural yield at a laboratory scale. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend their study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
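
    A sketch of the distinction being drawn, for a scalar calibration parameter: ordinary least squares fits the computer model to the data points directly, whereas L2 calibration matches it to a nonparametric estimate of the true response in the L2 norm. Names and discretization below are mine, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def l2_calibrate(eta, x_grid, y_hat):
    """Choose theta minimizing the (discretized) L2 distance between a
    nonparametric estimate y_hat(x) of the physical response and the
    computer model eta(x, theta), rather than the sum of squared
    residuals at the data points.
    """
    def l2_dist(theta):
        diff = y_hat(x_grid) - eta(x_grid, theta)
        return np.trapz(diff**2, x_grid)   # discretized L2 norm on a grid
    return minimize_scalar(l2_dist).x
```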

  5. Modeling the Relative GHG Emissions of Conventional and Shale Gas Production

    PubMed Central

    2011-01-01

    Recent reports show growing reserves of unconventional gas are available and that there is an appetite from policy makers, industry, and others to better understand the GHG impact of exploiting reserves such as shale gas. There is little publicly available data comparing unconventional and conventional gas production. Existing studies rely on national inventories, but it is not generally possible to separate emissions from unconventional and conventional sources within these totals. Even if unconventional and conventional sites had been listed separately, it would not be possible to eliminate site-specific factors to compare gas production methods on an equal footing. To address this difficulty, the emissions of gas production have instead been modeled. In this way, parameters common to both methods of production can be held constant, while allowing those parameters which differentiate unconventional gas and conventional gas production to vary. The results are placed into the context of power generation, to give a "well-to-wire" (WtW) intensity. It was estimated that shale gas typically has a WtW emissions intensity about 1.8–2.4% higher than conventional gas, arising mainly from higher methane releases in well completion. Even using extreme assumptions, it was found that WtW emissions from shale gas need be no more than 15% higher than conventional gas if flaring or recovery measures are used. In all cases considered, the WtW emissions of shale gas powergen are significantly lower than those of coal. PMID:22085088

  6. Modeling the relative GHG emissions of conventional and shale gas production.

    PubMed

    Stephenson, Trevor; Valle, Jose Eduardo; Riera-Palou, Xavier

    2011-12-15

    Recent reports show growing reserves of unconventional gas are available and that there is an appetite from policy makers, industry, and others to better understand the GHG impact of exploiting reserves such as shale gas. There is little publicly available data comparing unconventional and conventional gas production. Existing studies rely on national inventories, but it is not generally possible to separate emissions from unconventional and conventional sources within these totals. Even if unconventional and conventional sites had been listed separately, it would not be possible to eliminate site-specific factors to compare gas production methods on an equal footing. To address this difficulty, the emissions of gas production have instead been modeled. In this way, parameters common to both methods of production can be held constant, while allowing those parameters which differentiate unconventional gas and conventional gas production to vary. The results are placed into the context of power generation, to give a "well-to-wire" (WtW) intensity. It was estimated that shale gas typically has a WtW emissions intensity about 1.8-2.4% higher than conventional gas, arising mainly from higher methane releases in well completion. Even using extreme assumptions, it was found that WtW emissions from shale gas need be no more than 15% higher than conventional gas if flaring or recovery measures are used. In all cases considered, the WtW emissions of shale gas powergen are significantly lower than those of coal.

  7. A study of methods to estimate debris flow velocity

    USGS Publications Warehouse

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.

    2008-01-01

    Debris flow velocities are commonly back-calculated from superelevation events, which require subjective estimates of the radii of curvature of bends in the debris flow channel, or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.
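
    For reference, the superelevation back-calculation referred to is usually the forced-vortex relation (the general textbook form, not quoted from the paper):

```latex
% R_c: radius of curvature of the bend, \Delta h: superelevation of the flow
% surface across the bend, B: flow width, g: gravity, k: empirical correction
% factor. The abstract's point is that R_c itself is a subjective estimate.
v = \sqrt{ \frac{g \, R_c \, \Delta h}{k \, B} }
```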

  8. OPTiM: Optical projection tomography integrated microscope using open-source hardware and software

    PubMed Central

    Andrews, Natalie; Davis, Samuel; Bugeon, Laurence; Dallman, Margaret D.; McGinty, James

    2017-01-01

    We describe the implementation of an OPT plate to perform optical projection tomography (OPT) on a commercial wide-field inverted microscope, using our open-source hardware and software. The OPT plate includes a tilt adjustment for alignment and a stepper motor for sample rotation as required by standard projection tomography. Depending on magnification requirements, three methods of performing OPT are detailed using this adaptor plate: a conventional direct OPT method requiring only the addition of a limiting aperture behind the objective lens; an external optical-relay method allowing conventional OPT to be performed at magnifications >4x; a remote focal scanning and region-of-interest method for improved spatial resolution OPT (up to ~1.6 μm). All three methods use the microscope’s existing incoherent light source (i.e. arc-lamp) and all of its inherent functionality is maintained for day-to-day use. OPT acquisitions are performed on in vivo zebrafish embryos to demonstrate the implementations’ viability. PMID:28700724

  9. [Clinical and radiographic outcomes of navigation-assisted versus conventional total knee arthroplasty].

    PubMed

    Li, Xiaohui; Yu, Jianhua; Gong, Yuekun; Ren, Kaijing; Liu, Jun

    2015-04-21

    To assess the early postoperative clinical and radiographic outcomes after navigation-assisted or standard instrumentation total knee arthroplasty (TKA). From August 2007 to May 2008, 60 KSS-A type patients underwent 67 primary TKA operations by the same surgical team. Twenty-two operations were performed with an image-free navigation system (average patient age 64.5 years), while the remaining 45 were performed with conventional manual procedures (average patient age 66 years). The preoperative demographic and functional data of the two groups had no statistical differences (P>0.05). Operative duration, blood loss volume and hospitalization days were compared between the two groups. Radiographic data included the coronal femoral component angle, coronal tibial component angle, sagittal femoral component angle, sagittal tibial component angle and coronal tibiofemoral angle at one month. Functional assessment scores were evaluated at 1, 3 and 6 months postoperatively. Operative duration was significantly longer for computer navigation (P<0.05). The average blood loss volume was 555.26 ml in the computer navigation group and 647.56 ml in the conventional manual method group (P<0.05). Hospitalization stay was shorter in the computer navigation group than in the conventional method group (7.74 vs 8.68 days, P=0.04). Alignment deviation was smaller in the computer-assisted group than in the conventional manual method group (P<0.05). The percentage of patients with a coronal tibiofemoral angle within ±3° of the ideal value was 95.45% in the computer-assisted mini-invasive TKA group and 80% in the conventional TKA group (P=0.003). The Knee Society Clinical Rating Score was higher in the computer-assisted group than in the conventional manual method group at 1 and 3 months post-operation; however, no statistical inter-group difference existed at 6 months post-operation. Navigation allows a surgeon to implant the components for TKA precisely, and it offers faster functional recovery and a shorter hospitalization stay. At 6 months post-operation, there is no statistical inter-group difference in KSS scores.

  10. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
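
    A minimal sketch of the cluster-bootstrap step described above, assuming a pandas DataFrame with hypothetical columns time, event, cluster and a single covariate x, and using the lifelines implementation of Cox's model rather than the authors' own software:

    ```python
    # Cluster bootstrap for Cox SEs: resample whole clusters with
    # replacement, refit, and take the SD of the bootstrap coefficients.
    # Column names ("time", "event", "cluster", "x") are hypothetical.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    def cluster_bootstrap_se(df, n_boot=200, seed=0):
        rng = np.random.default_rng(seed)
        clusters = df["cluster"].unique()
        coefs = []
        for _ in range(n_boot):
            picked = rng.choice(clusters, size=len(clusters), replace=True)
            # (The two-step variant would also resample rows within each
            # selected cluster before fitting.)
            boot = pd.concat([df[df["cluster"] == c] for c in picked],
                             ignore_index=True)
            cph = CoxPHFitter()
            cph.fit(boot[["time", "event", "x"]],
                    duration_col="time", event_col="event")
            coefs.append(cph.params_["x"])
        return np.std(coefs, ddof=1)  # bootstrap SE of the log hazard ratio
    ```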

  11. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.

    PubMed

    Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang

    2017-01-01

    Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection in noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and considers the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements from a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.

  12. Biological Control of Introduced Weeds of Native Hawaiian Forests

    Treesearch

    George P. Markin; Roddy F. Nagata; Donald E. Gardner

    1992-01-01

    Among the many threats to the continued existence of the remaining native forests and other native ecosystems of the Hawaiian Islands, the most severe and the most difficult to control are the invasion and replacement by introduced species of plants. Because conventional methods of plant management have failed to control this invasion, a multiagency, state and federal...

  13. Conventional Reduced Risk Pesticide Program

    EPA Pesticide Factsheets

    Find out about the Conventional Reduced Risk Pesticide Program, which expedites the review and regulatory decision-making process of conventional pesticides that pose less risk to human health and the environment than existing conventional alternatives.

  14. Neck pain assessment in a virtual environment.

    PubMed

    Sarig-Bahat, Hilla; Weiss, Patrice L Tamar; Laufer, Yocheved

    2010-02-15

    Neck-pain and control group comparative analysis of conventional and virtual reality (VR)-based assessment of cervical range of motion (CROM). To use a tracker-based VR system to compare CROM of individuals suffering from chronic neck pain with CROM of asymptomatic individuals; to compare VR system results with those obtained during conventional assessment; to present the diagnostic value of CROM measures obtained by both assessments; and to demonstrate the effect of a single VR session on CROM. Neck pain is a common musculoskeletal complaint with a reported annual prevalence of 30% to 50%. In the absence of a gold standard for CROM assessment, a variety of assessment devices and methodologies exist. Common to these methodologies, CROM is assessed by instructing subjects to move their head as far as possible; however, these elicited movements do not necessarily replicate functional movements, which occur spontaneously in response to multiple stimuli. To achieve a more functional approach to cervical motion assessment, we have recently developed a VR environment in which electromagnetic tracking is used to monitor cervical motion while participants are involved in a simple yet engaging gaming scenario. CROM measures were collected from 25 symptomatic and 42 asymptomatic individuals using VR and conventional assessments. Analysis of variance was used to determine differences between groups and assessment methods. Logistic regression analysis, using a single predictor, compared the diagnostic ability of both methods. Results obtained by both methods demonstrated significant CROM limitations in the symptomatic group. The VR measures showed greater CROM and sensitivity while conventional measures showed greater specificity. A single session exposure to VR resulted in a significant increase in CROM. Neck pain is significantly associated with reduced CROM as demonstrated by both VR and conventional assessment methods. The VR method provides assessment of functional CROM and can be used for CROM enhancement. Assessment by VR has greater sensitivity than conventional assessment and can be used for the detection of true symptomatic individuals.

  15. Use of a modified GreenScreen tool to conduct a screening-level comparative hazard assessment of conventional silver and two forms of nanosilver.

    PubMed

    Sass, Jennifer; Heine, Lauren; Hwang, Nina

    2016-11-08

    Increased concern for potential health and environmental impacts of chemicals, including nanomaterials, in consumer products is driving demand for greater transparency regarding potential risks. Chemical hazard assessment is a powerful tool to inform product design, development and procurement and has been integrated into alternative assessment frameworks. The extent to which assessment methods originally designed for conventionally-sized materials can be used for nanomaterials, which have size-dependent physical and chemical properties, has not been well established. We contracted with a certified GreenScreen profiler to conduct three GreenScreen hazard assessments, for conventional silver and two forms of nanosilver. The contractor summarized publicly available literature, and used defined GreenScreen hazard criteria and expert judgment to assign and report hazard classification levels, along with indications of confidence in those assignments. Where data were not available, a data gap (DG) was assigned. Using the individual endpoint scores, an aggregated benchmark score (BM) was applied. Conventional silver and low-soluble nanosilver were assigned the highest possible hazard score, and a silica-silver nanocomposite called AGS-20 could not be scored due to data gaps. AGS-20 is approved for use as an antimicrobial by the US Environmental Protection Agency. An existing method for chemical hazard assessment and communication can be used, with minor adaptations, to compare hazards across conventional and nano forms of a substance. The differences in data gaps and in hazard profiles support the argument that each silver form should be considered unique and subjected to hazard assessment to inform regulatory decisions and decisions about product design and development. A critical limitation of hazard assessments for nanomaterials is the lack of nano-specific hazard data; where data are available, we demonstrate that existing hazard assessment systems can work. The work is relevant for risk assessors and regulators. We recommend that regulatory agencies and others require more robust data sets on each novel nanomaterial before granting market approval.

  16. Genetic improvement of olive (Olea europaea L.) by conventional and in vitro biotechnology methods.

    PubMed

    Rugini, E; Cristofori, V; Silvestri, C

    2016-01-01

    In olive (Olea europaea L.), traditional methods of genetic improvement have up to now produced limited results. Intensification of olive growing requires appropriate new cultivars for fully mechanized groves, but among the large number of traditional varieties very few are suitable. High-density and super-high-density hedgerow orchards require genotypes with reduced size, reduced apical dominance and a semi-erect growth habit that are easy to propagate, resistant to abiotic and biotic stresses, and reliably high in productivity and in quality of both fruits and oil. Innovative strategies supported by molecular and biotechnological techniques are required to speed up novel hybridisation methods. Among traditional approaches the Gene Pool Method seems a reasonable option, but it requires availability of widely diverse germplasm from both cultivated and wild genotypes, supported by a detailed knowledge of their genetic relationships. The practice of "gene therapy" for the most important existing cultivars, combined with conventional methods, could accelerate achievement of the main goals, but efforts to overcome some technical and ideological obstacles are needed. The present review describes the benefits that olive and its products may obtain from genetic improvement using state-of-the-art conventional and unconventional methods, and includes progress made in the field of in vitro techniques. The uses of both traditional and modern technologies are discussed with recommendations. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. DNA-conjugated gold nanoparticles based colorimetric assay to assess helicase activity: a novel route to screen potential helicase inhibitors

    NASA Astrophysics Data System (ADS)

    Deka, Jashmini; Mojumdar, Aditya; Parisse, Pietro; Onesti, Silvia; Casalis, Loredana

    2017-03-01

    Helicases are essential enzymes which are widespread in all life-forms. Due to their central role in nucleic acid metabolism, they are emerging as important targets for anti-viral, antibacterial and anti-cancer drugs. The development of easy, cheap, fast and robust biochemical assays to measure helicase activity, overcoming the limitations of the current methods, is a pre-requisite for the discovery of helicase inhibitors through high-throughput screenings. We have developed a method which exploits the optical properties of DNA-conjugated gold nanoparticles (AuNP) and meets the required criteria. The method was tested with the catalytic domain of the human RecQ4 helicase and compared with a conventional FRET-based assay. The AuNP-based assay produced similar results but is simpler, more robust and cheaper than FRET. Therefore, our nanotechnology-based platform shows the potential to provide a useful alternative to the existing conventional methods for following helicase activity and to screen small-molecule libraries as potential helicase inhibitors.

  18. Shadow-angle method for anisotropic and weakly absorbing films.

    PubMed

    Surdutovich, G; Vitlina, R; Baranauskas, V

    1999-07-01

    A method for determining the optical properties of a film on an isotropic substrate is proposed. The method is based on the existence of two specific incidence angles in the angular interference pattern of the p-polarized light where oscillations of the reflection coefficient cease. The first of these angles, theta(B1), is the well-known Abelès angle, i.e., the ambient-film Brewster angle, and the second angle theta(B2) is the film-substrate Brewster angle. In the conventional planar geometry and in a vacuum ambient there is a rigorous constraint epsilon(1) + epsilon > epsilon(1)epsilon on the film and the substrate dielectric permittivities epsilon(1) and epsilon, respectively, for the existence of the second angle theta(B2). The limitation may be removed in an experiment by use of a cylindrical lens as an ambient with epsilon(0) > 1, so that both angles become observable. This, contrary to general belief, allows one to adopt the conventional Abelès method not only for films with epsilon(1) close to the substrate's value epsilon but also for any value of epsilon(1). The method, when applied to a wedge-shaped film or to any film of unknown variable thickness, permits one to determine (i) the refractive index of a film on an unknown substrate, (ii) the vertical and the horizontal optical anisotropies of a film on an isotropic substrate, (iii) the weak absorption of a moderately thick film on a transparent or an absorbing isotropic substrate.

  19. Reconstructing metastatic seeding patterns of human cancers

    PubMed Central

    Reiter, Johannes G.; Makohon-Moore, Alvin P.; Gerold, Jeffrey M.; Bozic, Ivana; Chatterjee, Krishnendu; Iacobuzio-Donahue, Christine A.; Vogelstein, Bert; Nowak, Martin A.

    2017-01-01

    Reconstructing the evolutionary history of metastases is critical for understanding their basic biological principles and has profound clinical implications. Genome-wide sequencing data has enabled modern phylogenomic methods to accurately dissect subclones and their phylogenies from noisy and impure bulk tumour samples at unprecedented depth. However, existing methods are not designed to infer metastatic seeding patterns. Here we develop a tool, called Treeomics, to reconstruct the phylogeny of metastases and map subclones to their anatomic locations. Treeomics infers comprehensive seeding patterns for pancreatic, ovarian, and prostate cancers. Moreover, Treeomics correctly disambiguates true seeding patterns from sequencing artifacts; 7% of variants were misclassified by conventional statistical methods. These artifacts can skew phylogenies by creating illusory tumour heterogeneity among distinct samples. In silico benchmarking on simulated tumour phylogenies across a wide range of sample purities (15–95%) and sequencing depths (25–800×) demonstrates the accuracy of Treeomics compared with existing methods. PMID:28139641

  20. Fast reconstruction of off-axis digital holograms based on digital spatial multiplexing.

    PubMed

    Sha, Bei; Liu, Xuan; Ge, Xiao-Lu; Guo, Cheng-Shan

    2014-09-22

    A method for fast reconstruction of off-axis digital holograms based on a digital multiplexing algorithm is proposed. Instead of the existing angular multiplexing (AM), the new method utilizes a spatial multiplexing (SM) algorithm, in which four off-axis holograms recorded in sequence are synthesized into one SM function by multiplying each hologram with a tilted plane wave and then adding them up. In comparison with conventional methods, the SM algorithm simplifies the two-dimensional (2-D) Fourier transforms (FTs) of four N×N arrays into a 1.25-D FT of one N×N array. Experimental results demonstrate that, using the SM algorithm, the computational efficiency can be improved while the reconstructed wavefronts keep the same quality as those retrieved with the existing AM method. This algorithm may be useful in the design of a fast preview system for dynamic wavefront imaging in digital holography.
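
    The SM synthesis step lends itself to a short sketch: multiplying a hologram by a tilted plane wave shifts its spectrum, so four holograms can be packed into distinct regions of a single Fourier plane and transformed once. The carrier frequencies below are illustrative choices, not the paper's values:

    ```python
    # Spatial multiplexing sketch: each hologram is modulated by its own
    # tilted plane wave (a pure spectral shift), the four are summed, and
    # a single FFT places each spectrum in its own Fourier-plane region.
    import numpy as np

    def synthesize_sm(holograms, shifts):
        """holograms: four (N, N) arrays; shifts: (u, v) in cycles/pixel."""
        N = holograms[0].shape[0]
        y, x = np.mgrid[0:N, 0:N]
        sm = np.zeros((N, N), dtype=complex)
        for h, (u, v) in zip(holograms, shifts):
            sm += h * np.exp(2j * np.pi * (u * x + v * y))  # tilt = shift
        return np.fft.fftshift(np.fft.fft2(sm))  # one FFT for all four

    N = 256
    holos = [np.random.rand(N, N) for _ in range(4)]  # stand-in frames
    quadrants = [(0.25, 0.25), (-0.25, 0.25), (0.25, -0.25), (-0.25, -0.25)]
    spectrum = synthesize_sm(holos, quadrants)
    ```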

  1. Channel estimation based on quantized MMP for FDD massive MIMO downlink

    NASA Astrophysics Data System (ADS)

    Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie

    2016-10-01

    In this paper, we consider channel estimation for massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in the massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which can reduce the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods including the LS, LMMSE, CoSaMP and conventional MMP estimators.
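
    The quantized-MMP estimator itself is not specified in the abstract; the sketch below shows the plain greedy matching-pursuit idea it generalizes, namely orthogonal matching pursuit (OMP) recovering a sparse channel from a small number of pilot measurements y = A h + n. All dimensions are illustrative:

    ```python
    # Orthogonal matching pursuit for a sparse channel: greedily pick the
    # dictionary column most correlated with the residual, then refit by
    # least squares on the support found so far.
    import numpy as np

    def omp(A, y, sparsity):
        support, residual = [], y.copy()
        h_hat = np.zeros(A.shape[1], dtype=complex)
        for _ in range(sparsity):
            idx = int(np.argmax(np.abs(A.conj().T @ residual)))
            support.append(idx)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        h_hat[support] = coef
        return h_hat

    rng = np.random.default_rng(1)
    n_pilot, n_taps, k = 32, 128, 4        # few pilots, k-sparse channel
    A = (rng.standard_normal((n_pilot, n_taps))
         + 1j * rng.standard_normal((n_pilot, n_taps))) / np.sqrt(2 * n_pilot)
    h = np.zeros(n_taps, dtype=complex)
    h[rng.choice(n_taps, k, replace=False)] = rng.standard_normal(k)
    y = A @ h + 0.01 * (rng.standard_normal(n_pilot)
                        + 1j * rng.standard_normal(n_pilot))
    h_est = omp(A, y, k)
    ```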

  2. Exploring the notion of space coupling propulsion

    NASA Technical Reports Server (NTRS)

    Millis, Marc G.

    1990-01-01

    All existing methods of space propulsion are based on expelling a reaction mass (propellant) to induce motion. Alternatively, 'space coupling propulsion' refers to speculations about reacting with space-time itself to generate propulsive forces. Conceivably, the resulting increases in payload, range, and velocity would constitute a breakthrough in space propulsion. Such speculations are still considered science fiction for a number of reasons: (1) it appears to violate conservation of momentum; (2) no reactive media appear to exist in space; (3) no 'grand unified theories' exist to link gravity, an acceleration field, to other phenomena of nature such as electrodynamics. The rationale behind these objections is the focus of interest. Various methods to either satisfy or explore these issues are presented along with secondary considerations. It is found that it may be useful to consider alternative conventions of science to further explore speculations about space coupling propulsion.

  3. Enhanced sampling simulations to construct free-energy landscape of protein-partner substrate interaction.

    PubMed

    Ikebe, Jinzen; Umezawa, Koji; Higo, Junichi

    2016-03-01

    Molecular dynamics (MD) simulations using all-atom and explicit solvent models provide valuable information on the detailed behavior of protein-partner substrate binding at the atomic level. As the power of computational resources increases, MD simulations are being used more widely and easily. However, it is still difficult to investigate the thermodynamic properties of protein-partner substrate binding and protein folding with conventional MD simulations. Enhanced sampling methods have been developed to sample conformations that reflect equilibrium conditions in a more efficient manner than conventional MD simulations, thereby allowing the construction of accurate free-energy landscapes. In this review, we discuss these enhanced sampling methods using a series of case-by-case examples. In particular, we review enhanced sampling methods conforming to trivial trajectory parallelization, virtual-system coupled multicanonical MD, and adaptive lambda square dynamics. These methods have been recently developed based on the existing method of multicanonical MD simulation. Their applications are reviewed with an emphasis on describing their practical implementation. In our concluding remarks we explore extensions of the enhanced sampling methods that may allow for even more efficient sampling.

  4. Modeling misidentification errors in capture-recapture studies using photographic identification of evolving marks

    USGS Publications Warehouse

    Yoshizaki, J.; Pollock, K.H.; Brownie, C.; Webster, R.A.

    2009-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) are used to identify individual animals in a capture-recapture study. Photographic identification (photoID) typically uses photographic images of animals' naturally existing features as tags (photographic tags) and is subject to two main causes of identification errors: those related to quality of photographs (non-evolving natural tags) and those related to changes in natural marks (evolving natural tags). The conventional methods for analysis of capture-recapture data do not account for identification errors, and to do so requires a detailed understanding of the misidentification mechanism. Focusing on the situation where errors are due to evolving natural tags, we propose a misidentification mechanism and outline a framework for modeling the effect of misidentification in closed population studies. We introduce methods for estimating population size based on this model. Using a simulation study, we show that conventional estimators can seriously overestimate population size when errors due to misidentification are ignored, and that, in comparison, our new estimators have better properties except in cases with low capture probabilities (<0.2) or low misidentification rates (<2.5%). © 2009 by the Ecological Society of America.
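
    A toy two-occasion simulation, not the authors' model, illustrates the overestimation claim: if an evolving mark causes a recapture to be misread as a new individual, the number of matches m shrinks and the Lincoln-Petersen estimate n1·n2/m inflates. All rates below are illustrative:

    ```python
    # Toy two-occasion capture-recapture with evolving marks: a misread
    # recapture is counted as a new animal, so matches m drop and the
    # Lincoln-Petersen estimate n1*n2/m overshoots the true N.
    import numpy as np

    rng = np.random.default_rng(0)
    N, p, alpha, reps = 500, 0.3, 0.1, 2000   # all values illustrative
    estimates = []
    for _ in range(reps):
        c1 = rng.random(N) < p                # caught on occasion 1
        c2 = rng.random(N) < p                # caught on occasion 2
        recaptured = c1 & c2
        misread = recaptured & (rng.random(N) < alpha)  # mark evolved
        m = (recaptured & ~misread).sum()     # recognized matches only
        n1, n2 = c1.sum(), c2.sum()           # misread animals still in n2
        if m > 0:
            estimates.append(n1 * n2 / m)
    print(np.mean(estimates))                 # noticeably above N = 500
    ```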

  5. A hybrid frame concealment algorithm for H.264/AVC.

    PubMed

    Yan, Bo; Gharavi, Hamid

    2010-01-01

    In packet-based video transmissions, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed to combat channel errors; however, most existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. To resolve this problem, in this paper we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame; it is able to provide more accurate estimation of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.

  6. Digital redesign of anti-wind-up controller for cascaded analog system.

    PubMed

    Chen, Y S; Tsai, J S H; Shieh, L S; Moussighi, M M

    2003-01-01

    The cascaded conventional anti-wind-up (CAW) design method for the integral controller is discussed. Then, a prediction-based digital redesign methodology is utilized to find a new pulse-amplitude-modulated (PAM) digital controller for effective digital control of an analog plant with an input saturation constraint. The desired digital controller is determined from an existing or pre-designed CAW analog controller. The proposed method provides a novel methodology for indirect digital design of a continuous-time unity output-feedback system with a cascaded analog controller, as in the case of PID controllers for industrial control processes in the presence of actuator saturation. It enables an existing or pre-designed cascaded CAW analog controller to be implemented effectively via a digital controller.

  7. Bayesian evaluation of clinical diagnostic test characteristics of visual observations and remote monitoring to diagnose bovine respiratory disease in beef calves.

    PubMed

    White, Brad J; Goehl, Dan R; Amrine, David E; Booker, Calvin; Wildman, Brian; Perrett, Tye

    2016-04-01

    Accurate diagnosis of bovine respiratory disease (BRD) in beef cattle is a critical facet of therapeutic programs through promotion of prompt treatment of diseased calves in concert with judicious use of antimicrobials. Despite the known inaccuracies, visual observation (VO) of clinical signs is the conventional diagnostic modality for BRD diagnosis. Objective methods of remotely monitoring cattle wellness could improve diagnostic accuracy; however, little information exists describing the accuracy of this method compared to traditional techniques. The objective of this research is to employ Bayesian methodology to elicit diagnostic characteristics of conventional VO compared to remote early disease identification (REDI) to diagnose BRD. Data from previous literature on the accuracy of VO were combined with trial data consisting of direct comparison between VO and REDI for BRD in two populations. No true gold standard diagnostic test exists for BRD; therefore, estimates of diagnostic characteristics of each test were generated using Bayesian latent class analysis. Results indicate a 90.0% probability that the sensitivity of REDI (median 81.3%; 95% probability interval [PI]: 55.5, 95.8) was higher than VO sensitivity (64.5%; PI: 57.9, 70.8). The specificity of REDI (median 92.9%; PI: 88.2, 96.9) was also higher compared to VO (median 69.1%; PI: 66.3, 71.8). The differences in sensitivity and specificity resulted in REDI exhibiting higher positive and negative predictive values in both high (41.3%) and low (2.6%) prevalence situations. This research illustrates the potential of remote cattle monitoring to augment conventional methods of BRD diagnosis resulting in more accurate identification of diseased cattle. Copyright © 2016 Elsevier B.V. All rights reserved.
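
    The predictive-value comparison quoted above follows from Bayes' rule applied to the posterior median sensitivities and specificities, treating them as point estimates:

    ```python
    # Predictive values by Bayes' rule from the posterior medians above.
    def predictive_values(sens, spec, prev):
        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
        return ppv, npv

    for name, sens, spec in [("REDI", 0.813, 0.929), ("VO", 0.645, 0.691)]:
        for prev in (0.413, 0.026):           # high and low prevalence
            ppv, npv = predictive_values(sens, spec, prev)
            print(f"{name} prev={prev:.3f}: PPV={ppv:.2f} NPV={npv:.2f}")
    ```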

  8. Radiation shielding design of a new tomotherapy facility.

    PubMed

    Zacarias, Albert; Balog, John; Mills, Michael

    2006-10-01

    It is expected that intensity modulated radiation therapy (IMRT) and image guided radiation therapy (IGRT) will replace a large portion of radiation therapy treatments currently performed with conventional MLC-based 3D conformal techniques. IGRT may become the standard of treatment in the future for prostate and head and neck cancer. Many established facilities may convert existing vaults to perform this treatment method using new or upgraded equipment. In the future, more facilities undoubtedly will be considering de novo designs for their treatment vaults. A reevaluation of the design principles used in conventional vault design is of benefit to those considering this approach with a new tomotherapy facility. This is made more imperative as the design of the TomoTherapy system is unique in several aspects and does not fit well into the formalism of NCRP 49 for a conventional linear accelerator.

  9. Kinematic Determination of an Unmodeled Serial Manipulator by Means of an IMU

    NASA Astrophysics Data System (ADS)

    Ciarleglio, Constance A.

    Kinematic determination for an unmodeled manipulator is usually done through a priori knowledge of the manipulator's physical characteristics or external sensor information. The mathematics of the kinematic estimation, often based on the Denavit-Hartenberg convention, are complex and have high computation requirements, in addition to being unique to the manipulator for which the method is developed. Analytical methods that can compute kinematics on the fly have the potential to be highly beneficial in dynamic environments where different configurations and variable manipulator types are often required. This thesis derives a new screw-theory-based method of kinematic determination, using a single inertial measurement unit (IMU), for use with any serial, revolute manipulator. The method allows the expansion of reconfigurable manipulator design and simplifies the kinematic process for existing manipulators. A simulation is presented where the theory of the method is verified and characterized with error. The method is then implemented on an existing manipulator as a verification of functionality.

  10. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    PubMed

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

    Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three-ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data with low dispersion, an isotropic distribution, and second and third angle parameters below 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
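
    The matrix-based alternative is easy to sketch: average the rotation matrices and project the result back onto SO(3) (the chordal, i.e. Euclidean, mean); a Riemannian mean would instead iterate log/exp maps on the manifold. The sketch below, using SciPy, contrasts this with naive per-angle Euler averaging; recent SciPy versions also expose the chordal mean directly as Rotation.mean:

    ```python
    # Naive per-angle Euler averaging vs a matrix-based (chordal) mean:
    # average the rotation matrices, then project back onto SO(3) by SVD.
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    rots = R.random(20, random_state=3)             # sample orientations

    naive = rots.as_euler("xyz").mean(axis=0)       # flawed: the three
                                                    # angles are coupled
    M = rots.as_matrix().mean(axis=0)               # leaves SO(3)...
    U, _, Vt = np.linalg.svd(M)
    proj = U @ np.diag([1, 1, np.linalg.det(U @ Vt)]) @ Vt  # ...project back
    chordal = R.from_matrix(proj).as_euler("xyz")

    print(naive, chordal, rots.mean().as_euler("xyz"))  # last two agree
    ```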

  11. Aligning precisely polarization maintaining photonic crystal fiber and conventional single-mode fiber by online spectrum monitoring

    NASA Astrophysics Data System (ADS)

    Jiang, Ying; Zeng, Jie; Liang, Dakai; Ni, Xiaoyu; Luo, Wenyong

    2013-06-01

    Fiber alignment is very important in the fusion splicing process. The core of a polarization maintaining photonic crystal fiber (PM-PCF) cannot be seen in the splicer due to the microhole structure of its cross-section, so it is difficult to precisely align a PM-PCF and a conventional single-mode fiber (SMF). We demonstrate a novel method for precisely aligning a PM-PCF and a conventional SMF by online spectrum monitoring. Firstly, a halogen lamp light source is connected to one end face of the conventional SMF. One end face of the PM-PCF and the other end face of the conventional SMF are then roughly aligned by observing visible light at the far end face of the PM-PCF: if visible light is present, they are considered roughly aligned. The other end face of the PM-PCF and one end face of the other conventional SMF are precisely aligned in the other splicer by online spectrum monitoring. The halogen lamp is now replaced by a broadband light source with a 52 nm wavelength range, and the far end face of the other conventional SMF is connected to an optical spectrum analyzer. The fibers are translationally and rotationally adjusted in the splicer while the spectrum is monitored; when the transmitted spectral power is at its maximum, the alignment is precise.

  12. Evaluation of a conventional chip seal under an overlay to mitigate reflective cracking (informal).

    DOT National Transportation Integrated Search

    2015-03-01

    The Billings District initiated an experimental project in placing a conventional chip seal (as an interlayer) on an existing pavement prior to an overlay (composed of a 0.25 PMS thickness). The intent of the chip seal (CS) was to seal exist...

  13. Conventional and improved cytotoxicity test methods of newly developed biodegradable magnesium alloys

    NASA Astrophysics Data System (ADS)

    Han, Hyung-Seop; Kim, Hee-Kyoung; Kim, Yu-Chan; Seok, Hyun-Kwang; Kim, Young-Yul

    2015-11-01

    The unique biodegradable property of magnesium has spawned countless studies over the last decade aiming to develop ideal biodegradable orthopedic implant materials. However, due to the rapid pH change and the extensive amount of hydrogen gas generated during biocorrosion, it is extremely difficult to determine the accurate cytotoxicity of newly developed magnesium alloys using existing methods. Herein, we report a new method to accurately determine the cytotoxicity of magnesium alloys with varying corrosion rates while taking the in-vivo condition into consideration. For the conventional method, the quantities of each metal ion in the extracts were determined using ICP-MS, and the results showed that cytotoxicity due to the pH change caused by corrosion, rather than the intrinsic cytotoxicity of the magnesium alloy, affected cell viability. In the physiological environment, pH is regulated and adjusted within the normal range (~7.4) by homeostasis. Two new methods using pH-buffered extracts were proposed and performed to show that the environmental buffering of pH, dilution of the extract, and regulation of the eluate surface area must be taken into consideration for accurate cytotoxicity measurement of biodegradable magnesium alloys.

  14. On existence of the σ(600): Its physical implications and related problems

    NASA Astrophysics Data System (ADS)

    Ishida, Shin

    1998-05-01

    We make a re-analysis of the I=0 ππ scattering phase shift δ00 through a new method of S-matrix parametrization (IA; interfering amplitude method), and show a result strongly suggesting the existence of the σ-particle, the long-sought chiral partner of the π-meson. Furthermore, through phenomenological analyses of typical production processes of the 2π-system, the pp-central collision and the J/Ψ→ωππ decay, applying an intuitive formula given as a sum of Breit-Wigner amplitudes (VMW; variant mass and width method), further evidence for the σ-existence is given. The validity of the methods used in the above analyses is investigated, using a simple field theoretical model, from the general viewpoint of unitarity and the applicability of the final state interaction (FSI) theorem, especially in relation to the "universality" argument. It is shown that IA and VMW are obtained as the physical state representations of scattering and production amplitudes, respectively. The VMW is shown to be an effective method to obtain resonance properties from production processes, which generally have unknown strong phases. The conventional analyses based on the "universality" seem to be powerless for this purpose.
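
    As a purely schematic illustration of the VMW form, not the paper's fit, a production amplitude can be written as a coherent sum of Breit-Wigner terms with process-dependent complex couplings; all resonance parameters below are illustrative:

    ```python
    # Schematic VMW-type amplitude: a coherent sum of Breit-Wigner terms
    # with process-dependent couplings r_j * exp(i*theta_j). Parameters
    # below are illustrative only, not fitted values.
    import numpy as np

    def vmw_amplitude(s, resonances):
        """s in GeV^2; resonances as (mass, width, r, theta) tuples."""
        amp = np.zeros_like(s, dtype=complex)
        for m, g, r, th in resonances:
            amp += r * np.exp(1j * th) * (m * g) / (m**2 - s - 1j * m * g)
        return amp

    sqrt_s = np.linspace(0.3, 1.2, 400)                  # GeV
    res = [(0.60, 0.40, 1.0, 0.0),   # broad sigma-like state (illustrative)
           (0.98, 0.07, 0.4, 1.0)]   # narrow f0(980)-like state
    intensity = np.abs(vmw_amplitude(sqrt_s**2, res))**2  # spectrum shape
    ```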

  15. RECOVERY ACT: MULTIMODAL IMAGING FOR SOLAR CELL MICROCRACK DETECTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janice Hudgings; Lawrence Domash

    2012-02-08

    Undetected microcracks in solar cells are a principal cause of failure in service due to subsequent weather exposure, mechanical flexing or diurnal temperature cycles. Existing methods have not been able to detect cracks early enough in the production cycle to prevent inadvertent shipment to customers. This program, sponsored under the DOE Photovoltaic Supply Chain and Cross-Cutting Technologies program, studied the feasibility of quantifying surface micro-discontinuities by use of a novel technique, thermoreflectance imaging, to detect surface temperature gradients with very high spatial resolution, in combination with a suite of conventional imaging methods such as electroluminescence. The project carried out laboratory tests together with computational image analyses using sample solar cells with known defects supplied by industry sources or DOE National Labs. Quantitative comparisons between the effectiveness of the new technique and conventional methods were determined in terms of the smallest detectable crack. The robustness of the new technique for reliable microcrack detection was also determined at various stages of processing, such as before and after antireflectance treatments. An overall assessment is that the new technique compares favorably with existing methods such as lock-in thermography or ultrasonics. The project was 100% completed in September 2010. A detailed report of key findings from this program was published as: Q. Zhou, X. Hu, K. Al-Hemyari, K. McCarthy, L. Domash and J. Hudgings, High spatial resolution characterization of silicon solar cells using thermoreflectance imaging, J. Appl. Phys. 110, 053108 (2011).

  16. System Identification of Mistuned Bladed Disks from Traveling Wave Response Measurements

    NASA Technical Reports Server (NTRS)

    Feiner, D. M.; Griffin, J. H.; Jones, K. W.; Kenyon, J. A.; Mehmed, O.; Kurkov, A. P.

    2003-01-01

    A new approach to modal analysis is presented. By applying this technique to bladed disk system identification methods, one can determine the mistuning in a rotor based on its response to a traveling wave excitation. This allows system identification to be performed under rotating conditions, and thus expands the applicability of existing mistuning identification techniques from integrally bladed rotors to conventional bladed disks.

  17. Thermal discharges and their role in pending power plant regulatory decisions

    NASA Technical Reports Server (NTRS)

    Miller, M. H.

    1978-01-01

    Federal and state laws require the imminent retrofit of offstream condenser cooling to newer steam electric stations. A waiver can be granted based on sound experimental data demonstrating that existing once-through cooling will not adversely affect aquatic ecosystems. Conventional methods for monitoring thermal plumes, and some remote sensing alternatives, are reviewed, using ongoing work at one Maryland power plant for illustration.

  18. Modelling the monetary value of a QALY: a new approach based on UK data.

    PubMed

    Mason, Helen; Jones-Lee, Michael; Donaldson, Cam

    2009-08-01

    Debate about the monetary value of a quality-adjusted life year (QALY) has existed in the health economics literature for some time. More recently, concern about such a value has arisen in UK health policy. This paper reports on an attempt to 'model' a willingness-to-pay-based value of a QALY from the existing value of preventing a statistical fatality (VPF) currently used in UK public sector decision making. Two methods of deriving the value of a QALY from the existing UK VPF are outlined: one conventional and one new. The advantages and disadvantages of each of the approaches are discussed as well as the implications of the results for policy and health economic evaluation methodology.
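
    The conventional approach mentioned above can be reduced to a one-line calculation: divide the VPF by the discounted stream of QALYs lost in a statistical fatality. Every input in this sketch is a placeholder, not the paper's UK figure:

    ```python
    # "Conventional" VPF-to-QALY conversion: VPF divided by the present
    # value of the QALY stream lost. All numbers are placeholders.
    def value_of_qaly(vpf, years_lost, qaly_per_year, discount):
        pv_qalys = sum(qaly_per_year / (1 + discount) ** t
                       for t in range(1, years_lost + 1))
        return vpf / pv_qalys

    # e.g. a 1.5m VPF, 40 years lost at 0.8 QALY/year, 3.5% discount rate
    print(value_of_qaly(vpf=1.5e6, years_lost=40,
                        qaly_per_year=0.8, discount=0.035))
    ```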

  19. Virtual reality treatment and assessments for post-stroke unilateral spatial neglect: A systematic literature review.

    PubMed

    Ogourtsova, Tatiana; Souza Silva, Wagner; Archambault, Philippe S; Lamontagne, Anouk

    2017-04-01

    Unilateral spatial neglect (USN) is a highly prevalent post-stroke deficit. Currently, there is no gold standard USN assessment which encompasses the heterogeneity of this disorder and that is sensitive to detect mild deficits. Similarly, there is a limited number of high quality studies suggesting that conventional USN treatments are effective in improving functional outcomes and reducing disability. Virtual reality (VR) provides enhanced methods for USN assessment and treatment. To establish best-practice recommendations with respect to its use, it is necessary to appraise the existing evidence. This systematic review aimed to identify and appraise existing VR-based USN assessments; and to determine whether VR is more effective than conventional therapy. Assessment tools were critically appraised using standard criteria. The methodological quality of the treatment trials was rated by two authors. The level of evidence according to stage of recovery was determined. Findings were compiled into a VR-based USN Assessment and Treatment Toolkit (VR-ATT). Twenty-three studies were identified. The proposed VR tools augmented the conventional assessment strategies. However, most studies lacked analysis of psychometric properties. There is limited evidence that VR is more effective than conventional therapy in improving USN symptoms in patients with stroke. It was concluded that VR-ATT could facilitate identification and decision-making as to the appropriateness of VR-based USN assessments and treatments across the continuum of stroke care, but more evidence is required on treatment effectiveness.

  20. Co-existence of GM, conventional and organic crops in developing countries: Main debates and concerns.

    PubMed

    Azadi, Hossein; Taube, Friedhelm; Taheri, Fatemeh

    2017-06-05

    The co-existence approach of GM crops with conventional agriculture and organic farming as a feasible agricultural farming system has recently been placed at the center of hot debates at the EU level and has become a source of anxiety in developing countries. The main promise of this approach is to ensure "food security" and "food safety" on the one hand, and to avoid the adventitious presence of GM crops in conventional and organic farming on the other; implementing the approach in developing countries also raises concerns in many debates. Here, we discuss the main debates on applying this approach in developing countries ("what," "why," "who," "where," "which," and "how") and review the main considerations and tradeoffs in this regard. The paper concludes that a peaceful co-existence between GM, conventional, and organic farming is not easy but is still possible. The goal should be to implement rules that are well-established, proportionate, efficient and cost-effective, that are crop-specific, farming-system-based and biodiversity-focused, ending up with "codes of good agricultural practice" for co-existence.

  1. Utilization of design data on conventional system to building information modeling (BIM)

    NASA Astrophysics Data System (ADS)

    Akbar, Boyke M.; Z. R., Dewi Larasati

    2017-11-01

    Nowadays, infrastructure development has become one of the main priorities in developing countries such as Indonesia. The use of a conventional design system is considered to no longer effectively support infrastructure projects, especially for highly complex building designs, due to its fragmented-system issues. BIM comes as one of the solutions for managing projects in an integrated manner. Despite all the known BIM benefits, there are some obstacles in the migration process to BIM. The two main obstacles are the unpreparedness of some project parties for BIM implementation, and concerns about leaving behind the existing database and creating a new one in the BIM system. This paper discusses the possibilities of utilizing existing CAD data from the conventional design system in a BIM system. The existing conventional CAD data and the BIM design system output were studied to examine compatibility issues between the two, followed by possible utilization scheme strategies. The goal of this study is to increase project parties' eagerness to migrate to BIM by maximizing the utilization of existing data, and hopefully also to increase the quality of BIM-based project workflows.

  2. Decision Tree based Prediction and Rule Induction for Groundwater Trichloroethene (TCE) Pollution Vulnerability

    NASA Astrophysics Data System (ADS)

    Park, J.; Yoo, K.

    2013-12-01

    For groundwater resource conservation, it is important to accurately assess groundwater pollution sensitivity or vulnerability. In this work, we attempted to use a data mining approach to assess groundwater pollution vulnerability at a TCE (trichloroethylene) contaminated Korean industrial site. The conventional DRASTIC method failed to describe the TCE sensitivity data, with a poor correlation with hydrogeological properties. Among the different data mining methods, such as Artificial Neural Network (ANN), Multiple Logistic Regression (MLR), Case Base Reasoning (CBR), and Decision Tree (DT), the accuracy and consistency of the Decision Tree (DT) were the best. According to the subsequent tree analyses with the optimal DT model, the failure of the conventional DRASTIC method to fit the TCE sensitivity data may be due to the use of inaccurate weight values of hydrogeological parameters for the study site. These findings provide a proof of concept that a DT-based data mining approach can be used for prediction and rule induction of groundwater TCE sensitivity without pre-existing information on the weights of hydrogeological properties.
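
    A sketch of the DT prediction and rule-induction step using scikit-learn; the feature names are hypothetical DRASTIC-style hydrogeological inputs and the data are synthetic stand-ins:

    ```python
    # Decision-tree prediction and rule induction with scikit-learn.
    # Feature names are hypothetical DRASTIC-style inputs; data synthetic.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    features = ["depth_to_water", "recharge", "aquifer_media",
                "soil_media", "topography", "conductivity"]
    X = rng.random((200, len(features)))      # stand-in site descriptors
    y = (X[:, 0] < 0.4).astype(int)           # stand-in TCE-detected label

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=features))  # induced if-then rules
    ```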

  3. Relation between scattering and production amplitude--Case of intermediate {sigma}-particle in {pi}{pi}-system--

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishida, Muneyuki; Ishida, Shin; Ishida, Taku

    1998-05-29

    The relation between scattering and production amplitudes is investigated, using a simple field theoretical model, from the general viewpoint of unitarity and the applicability of the final state interaction (FSI) theorem. The IA-method and VMW-method, which are applied in our phenomenological analyses [2,3] suggesting the σ-existence, are obtained as the physical state representations of scattering and production amplitudes, respectively. Moreover, the VMW-method is shown to be an effective method to obtain resonance properties from general production processes, while the conventional analyses based on the 'universality' of the ππ-scattering amplitude are powerless for this purpose.

  4. Relation between scattering and production amplitude—Case of intermediate σ-particle in ππ-system—

    NASA Astrophysics Data System (ADS)

    Ishida, Muneyuki; Ishida, Shin; Ishida, Taku

    1998-05-01

    The relation between scattering and production amplitudes is investigated, using a simple field theoretical model, from the general viewpoint of unitarity and the applicability of the final state interaction (FSI) theorem. The IA-method and VMW-method, which are applied in our phenomenological analyses [2,3] suggesting the σ-existence, are obtained as the physical state representations of scattering and production amplitudes, respectively. Moreover, the VMW-method is shown to be an effective method to obtain resonance properties from general production processes, while the conventional analyses based on the "universality" of the ππ-scattering amplitude are powerless for this purpose.

  5. A novel finite element analysis of three-dimensional circular crack

    NASA Astrophysics Data System (ADS)

    Ping, X. C.; Wang, C. G.; Cheng, L. P.

    2018-06-01

    A novel singular element containing a part of the circular crack front is established to solve the singular stress fields of circular cracks by using the numerical series eigensolutions of singular stress fields. The element is derived from the Hellinger-Reissner variational principle and can be directly incorporated into existing 3D brick elements. The singular stress fields are determined as system unknowns appearing as displacement nodal values. Numerical studies are conducted to demonstrate the simplicity of the proposed technique in handling fracture problems of circular cracks. Use of the novel singular element can avoid mesh refinement near the crack front without loss of accuracy or speed of convergence. Compared with conventional finite element methods and existing analytical methods, the present method is more suitable for dealing with complicated structures with a large number of elements.

  6. An analysis method for two-dimensional transonic viscous flow

    NASA Technical Reports Server (NTRS)

    Bavitz, P. C.

    1975-01-01

    A method for the approximate calculation of transonic flow over airfoils, including shock waves and viscous effects, is described. Numerical solutions are obtained by use of a computer program which is discussed in the appendix. The importance of including the boundary layer in the analysis is clearly demonstrated, as well as the need to improve on existing procedures near the trailing edge. Comparisons between calculations and experimental data are presented for both conventional and supercritical airfoils, emphasis being on the surface pressure distribution, and good agreement is indicated.

  7. Surface-region context in optimal multi-object graph-based segmentation: robust delineation of pulmonary tumors.

    PubMed

    Song, Qi; Chen, Mingqing; Bai, Junjie; Sonka, Milan; Wu, Xiaodong

    2011-01-01

    Multi-object segmentation with mutual interaction is a challenging task in medical image analysis. We report a novel solution to a segmentation problem, in which target objects of arbitrary shape mutually interact with terrain-like surfaces, which widely exists in the medical imaging field. The approach incorporates context information used during simultaneous segmentation of multiple objects. The object-surface interaction information is encoded by adding weighted inter-graph arcs to our graph model. A globally optimal solution is achieved by solving a single maximum flow problem in a low-order polynomial time. The performance of the method was evaluated in robust delineation of lung tumors in megavoltage cone-beam CT images in comparison with an expert-defined independent standard. The evaluation showed that our method generated highly accurate tumor segmentations. Compared with the conventional graph-cut method, our new approach provided significantly better results (p < 0.001). The Dice coefficient obtained by the conventional graph-cut approach (0.76 ± 0.10) was improved to 0.84 ± 0.05 when employing our new method for pulmonary tumor segmentation.
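
    The evaluation metric quoted above is the Dice coefficient, which is straightforward to compute for a binary segmentation against the reference standard:

    ```python
    # Dice coefficient between a binary segmentation and the reference.
    import numpy as np

    def dice(seg, ref):
        seg, ref = seg.astype(bool), ref.astype(bool)
        inter = np.logical_and(seg, ref).sum()
        return 2.0 * inter / (seg.sum() + ref.sum())
    ```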

  8. Mapping urban environmental noise: a land use regression method.

    PubMed

    Xie, Dan; Liu, Yi; Chen, Jining

    2011-09-01

    Forecasting and preventing urban noise pollution are major challenges in urban environmental management. Most existing efforts, including experiment-based models, statistical models, and noise mapping, however, have limited capacity to explain the association between urban growth and corresponding noise change. Therefore, these conventional methods can hardly forecast urban noise for a given development layout. This paper, for the first time, introduces a land use regression method, which has been applied to simulating urban air quality for a decade, to construct an urban noise model (LUNOS) in Dalian Municipality, Northwest China. The LUNOS model describes noise as a dependent variable of surrounding land-use areas via a regressive function. The results suggest that a linear model performs better in fitting monitoring data, and that there is no significant difference in the LUNOS outputs when applied at different spatial scales. As LUNOS facilitates a better understanding of the association between land use and urban environmental noise than conventional methods, it can be regarded as a promising tool for noise prediction for planning purposes and an aid to smart decision-making.
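
    A minimal sketch of the linear land-use regression form reported to fit best: noise levels at monitoring sites regressed on the areas of surrounding land-use classes. The predictor set and buffer notion here are hypothetical, not the LUNOS specification:

    ```python
    # Linear land-use regression sketch: noise at each monitoring site
    # regressed on surrounding land-use areas (hypothetical classes).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(42)
    # Columns: area (ha) of [road, commercial, industrial, green] around
    # each of 60 monitoring sites -- synthetic stand-in data.
    X = rng.random((60, 4)) * 10
    noise_db = (50 + 1.5 * X[:, 0] + 0.8 * X[:, 2] - 0.5 * X[:, 3]
                + rng.normal(0, 1, 60))

    model = LinearRegression().fit(X, noise_db)
    print(model.intercept_, model.coef_)      # dB change per hectare
    ```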

  9. Comparative study on deposition of fluorine-doped tin dioxide thin films by conventional and ultrasonic spray pyrolysis methods for dye-sensitized solar modules

    NASA Astrophysics Data System (ADS)

    Icli, Kerem Cagatay; Kocaoglu, Bahadir Can; Ozenbas, Macit

    2018-01-01

    Fluorine-doped tin dioxide (FTO) thin films were produced via conventional spray pyrolysis and ultrasonic spray pyrolysis (USP) methods using alcohol-based solutions. The prepared films were compared in terms of crystal structure, morphology, surface roughness, visible light transmittance, and electronic properties. Investigation of the grain structures and morphologies showed that the films prepared using the ultrasonic spray method had relatively larger grains, and consequently their carrier mobilities exhibited slightly higher values. Dye-sensitized solar cells and 10×10 cm modules were prepared using commercially available and USP-deposited FTO/glass substrates, and solar performances were compared. No remarkable efficiency difference is observed for either cells or modules, with the module efficiency of the USP-deposited FTO glass substrates at 3.06% compared to 2.85% for the commercial substrate under identical conditions. We demonstrated that USP deposition is a low-cost and versatile method of depositing commercial-quality FTO thin films on large substrates employed in large-area dye-sensitized solar modules or other thin film technologies.

  10. Multi-domain boundary element method for axi-symmetric layered linear acoustic systems

    NASA Astrophysics Data System (ADS)

    Reiter, Paul; Ziegelwanger, Harald

    2017-12-01

    Homogeneous porous materials like rock wool or synthetic foam are the main tool for acoustic absorption. The conventional absorbing structure for sound-proofing consists of one or multiple absorbers placed in front of a rigid wall, with or without air-gaps in between. Various models exist to describe these so-called multi-layered acoustic systems mathematically for incoming plane waves. However, there is no efficient method to calculate the sound field in a half space above a multi-layered acoustic system for an incoming spherical wave. In this work, an axi-symmetric multi-domain boundary element method (BEM) for absorbing multi-layered acoustic systems and incoming spherical waves is introduced. In the proposed BEM formulation, a complex wave number is used to model absorbing materials as a fluid, and a coordinate transformation is introduced which simplifies singular integrals of the conventional BEM to non-singular radial and angular integrals. The radial and angular parts are integrated analytically and numerically, respectively. The output of the method can be interpreted as a numerical half-space Green's function for grounds consisting of layered materials.

  11. Modified ADALINE algorithm for harmonic estimation and selective harmonic elimination in inverters

    NASA Astrophysics Data System (ADS)

    Vasumathi, B.; Moorthi, S.

    2011-11-01

    In digital signal processing, algorithms for the estimation of harmonic components are very well developed. In power electronic applications, an objective like fast system response is of primary importance. An effective method for the estimation of instantaneous harmonic components, along with a conventional harmonic elimination technique, is presented in this article. The primary function is to eliminate undesirable higher harmonic components from the selected signal (current or voltage), and it requires only the knowledge of the frequency of the component to be eliminated. A signal processing technique using a modified ADALINE algorithm is proposed for harmonic estimation. The proposed method remains effective, as it converges to a minimum error and yields a finer estimation. A conventional control based on pulse width modulation for selective harmonic elimination is used to eliminate harmonic components after their estimation. This method can be applied to a wide range of equipment. The validity of the proposed method in estimating and eliminating voltage harmonics is demonstrated with a dc/ac inverter as a simulation example. The results are then compared with the existing ADALINE algorithm to illustrate its effectiveness.
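
    An ADALINE harmonic estimator adapts the weights of a sine/cosine basis with a least-mean-squares rule, so that each harmonic's amplitude and phase can be read off the converged weights. A minimal sketch of the baseline scheme (the paper's specific modification is not reproduced; the test signal, harmonic set and learning rate are invented):

```python
# Baseline ADALINE-style harmonic estimation via the LMS (Widrow-Hoff) rule.
import numpy as np

f0, fs, N = 50.0, 10000.0, 4000          # fundamental (Hz), sample rate, samples
t = np.arange(N) / fs
# Test signal: fundamental plus 5th and 7th harmonics, with a little noise.
y = (1.00 * np.sin(2 * np.pi * f0 * t)
     + 0.20 * np.sin(2 * np.pi * 5 * f0 * t + 0.3)
     + 0.14 * np.sin(2 * np.pi * 7 * f0 * t - 0.5)
     + 0.01 * np.random.randn(N))

harmonics = [1, 5, 7]
w = np.zeros(2 * len(harmonics))          # one [sin, cos] weight pair per harmonic
mu = 0.05                                 # learning rate

for k in range(N):
    x = np.concatenate([[np.sin(2 * np.pi * h * f0 * t[k]),
                         np.cos(2 * np.pi * h * f0 * t[k])] for h in harmonics])
    e = y[k] - w @ x                      # instantaneous estimation error
    w += 2 * mu * e * x                   # LMS weight update

for i, h in enumerate(harmonics):
    amp = np.hypot(w[2 * i], w[2 * i + 1])   # amplitude from sin/cos weights
    print(f"harmonic {h}: estimated amplitude {amp:.3f}")
```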

  12. An orientation measurement method based on Hall-effect sensors for permanent magnet spherical actuators with 3D magnet array.

    PubMed

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-10-24

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment present in conventional approaches. A curved-surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space; comparison with the conventional modeling method shows that it improves the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, so that the rotor orientation can be computed from the measured results and the analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the measured magnetic flux density. The experimental results show that the proposed method measures the rotor orientation precisely, and that the measurement accuracy is improved by the novel 3D magnet array. The results could be used for real-time motion control of PM spherical actuators.
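
    The measurement chain reduces to inverting an analytical flux-density model: given readings from several Hall sensors, solve for the orientation that best reproduces them. A generic nonlinear least-squares sketch, using a stand-in dipole-like field model rather than the paper's exponential-approximation surface fit:

```python
# Generic sketch of the inversion step: recover rotor orientation from Hall
# readings by nonlinear least squares. The field model is a stand-in.
import numpy as np
from scipy.optimize import least_squares

sensor_dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                        [0.577, 0.577, 0.577]])       # unit vectors to sensors

def model_flux(angles, dirs):
    """Stand-in analytical model: dipole-like flux seen by each sensor."""
    a, b = angles                                     # rotor tilt/azimuth (rad)
    m = np.array([np.sin(a) * np.cos(b), np.sin(a) * np.sin(b), np.cos(a)])
    return dirs @ m                                   # one reading per sensor

true_angles = np.array([0.4, 1.1])
readings = model_flux(true_angles, sensor_dirs) + 1e-3 * np.random.randn(4)

sol = least_squares(lambda q: model_flux(q, sensor_dirs) - readings,
                    x0=np.array([0.5, 0.5]))
print("recovered orientation (rad):", sol.x)
```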

  13. Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.

    2015-12-01

    Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume-finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.
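
    Galerkin projection chooses the target-mesh field whose weighted residual against the source field vanishes; for piecewise-constant control-volume fields this reduces to overlap-weighted averaging, which is exactly conservative. A 1-D sketch with invented meshes and values:

```python
# 1-D illustration of conservative (Galerkin/L2) projection of a piecewise-
# constant control-volume field onto a new mesh: each target value is the
# overlap-weighted average of source values, so the integral is preserved.
import numpy as np

def project_p0(src_edges, src_vals, tgt_edges):
    tgt_vals = np.zeros(len(tgt_edges) - 1)
    for j in range(len(tgt_edges) - 1):
        lo, hi = tgt_edges[j], tgt_edges[j + 1]
        acc = 0.0
        for i in range(len(src_edges) - 1):
            overlap = max(0.0, min(hi, src_edges[i + 1]) - max(lo, src_edges[i]))
            acc += overlap * src_vals[i]
        tgt_vals[j] = acc / (hi - lo)
    return tgt_vals

src_edges = np.array([0.0, 0.25, 0.5, 1.0])
src_vals = np.array([1.0, 0.2, 0.6])             # e.g. saturation per source CV
tgt_edges = np.linspace(0.0, 1.0, 6)             # adapted (finer) mesh
tgt_vals = project_p0(src_edges, src_vals, tgt_edges)

# Conservation check: the total integral is unchanged by the projection.
print(np.sum(np.diff(src_edges) * src_vals), np.sum(np.diff(tgt_edges) * tgt_vals))
```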

  14. Applications of rapid prototyping technology in maxillofacial prosthetics.

    PubMed

    Sykes, Leanne M; Parrott, Andrew M; Owen, C Peter; Snaddon, Donald R

    2004-01-01

    The purpose of this study was to compare the accuracy, required time, and potential advantages of rapid prototyping technology with traditional methods in the manufacture of wax patterns for two facial prostheses. Two clinical situations were investigated: the production of an auricular prosthesis and the duplication of an existing maxillary prosthesis, using a conventional and a rapid prototyping method for each. Conventional wax patterns were created from impressions taken of a patient's remaining ear and an oral prosthesis. For the rapid prototyping method, a cast of the ear and the original maxillary prosthesis were scanned, and rapid prototyping was used to construct the wax patterns. For the auricular prosthesis, both patterns were refined clinically and then flasked and processed in silicone using routine procedures. Twenty-six independent observers evaluated these patterns by comparing them to the cast of the patient's remaining ear. For the duplication procedure, both wax patterns were scanned and compared to scans of the original prosthesis by generating color error maps to highlight volumetric changes. There was a significant difference in opinions for the two auricular prostheses with regard to shape and esthetic appeal, where the hand-carved prosthesis was found to be of poorer quality. The color error maps showed higher errors with the conventional duplication process compared with the rapid prototyping method. The main advantage of rapid prototyping is the ability to produce physical models using digital methods instead of traditional impression techniques. The disadvantage of equipment costs could be overcome by establishing a centralized service.

  15. Polymer/Silicate Nanocomposites Developed for Improved Thermal Stability and Barrier Properties

    NASA Technical Reports Server (NTRS)

    Campbell, Sandi G.

    2001-01-01

    The nanoscale reinforcement of polymers is becoming an attractive means of improving the properties and stability of polymers. Polymer-silicate nanocomposites are a relatively new class of materials with phase dimensions typically on the order of a few nanometers. Because of their nanometer-size features, nanocomposites possess unique properties typically not shared by more conventional composites. Polymer-layered silicate nanocomposites can attain a certain degree of stiffness, strength, and barrier properties with far less ceramic content than comparable glass- or mineral-reinforced polymers. Reinforcement of existing and new polyimides by this method offers an opportunity to greatly improve existing polymer properties without altering current synthetic or processing procedures.

  16. Patch-based generation of a pseudo CT from conventional MRI sequences for MRI-only radiotherapy of the brain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreasen, Daniel, E-mail: dana@dtu.dk; Van Leemput, Koen; Hansen, Rasmus H.

    Purpose: In radiotherapy (RT) based on magnetic resonance imaging (MRI) as the only modality, the information on electron density must be derived from the MRI scan by creating a so-called pseudo computed tomography (pCT). This is a nontrivial task, since the voxel-intensities in an MRI scan are not uniquely related to electron density. To solve the task, voxel-based or atlas-based models have typically been used. The voxel-based models require a specialized dual ultrashort echo time MRI sequence for bone visualization and the atlas-based models require deformable registrations of conventional MRI scans. In this study, we investigate the potential of a patch-based method for creating a pCT based on conventional T1-weighted MRI scans without using deformable registrations. We compare this method against two state-of-the-art methods within the voxel-based and atlas-based categories. Methods: The data consisted of CT and MRI scans of five cranial RT patients. To compare the performance of the different methods, a nested cross validation was done to find optimal model parameters for all the methods. Voxel-wise and geometric evaluations of the pCTs were done. Furthermore, a radiologic evaluation based on water equivalent path lengths was carried out, comparing the upper hemisphere of the head in the pCT and the real CT. Finally, the dosimetric accuracy was tested and compared for a photon treatment plan. Results: The pCTs produced with the patch-based method had the best voxel-wise, geometric, and radiologic agreement with the real CT, closely followed by the atlas-based method. In terms of the dosimetric accuracy, the patch-based method had average deviations of less than 0.5% in measures related to target coverage. Conclusions: We showed that a patch-based method could generate an accurate pCT based on conventional T1-weighted MRI sequences and without deformable registrations. In our evaluations, the method performed better than existing voxel-based and atlas-based methods and showed a promising potential for RT of the brain based only on MRI.
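
    The patch-based idea can be caricatured as nearest-neighbour regression on image patches: for each MRI patch, look up the most similar patches in a training set with known CT and predict the CT value at the patch centre. A 2-D toy sketch (patch size, k and the synthetic data are illustrative choices, not the paper's settings):

```python
# Toy sketch of patch-based pseudo-CT prediction via k-nearest-neighbour
# regression on MRI patches. All data here are synthetic stand-ins.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
mri_train = rng.random((64, 64))                 # co-registered training pair
ct_train = 1000.0 * mri_train + 20.0 * rng.standard_normal((64, 64))

patch = (5, 5)
X = extract_patches_2d(mri_train, patch).reshape(-1, 25)
c = patch[0] // 2
y = extract_patches_2d(ct_train, patch)[:, c, c]  # CT value at each patch centre

knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)

mri_new = rng.random((64, 64))                    # "MRI-only" patient
X_new = extract_patches_2d(mri_new, patch).reshape(-1, 25)
pct_centres = knn.predict(X_new)                  # pseudo-CT at patch centres
print(pct_centres.shape)                          # one prediction per patch
```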

  17. On existence of the σ(600): Its physical implications and related problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishida, Shin

    1998-05-29

    We make a re-analysis of the I = 0 ππ scattering phase shift δ₀⁰ through a new method of S-matrix parametrization (IA: interfering amplitude method), and show a result strongly suggesting the existence of the σ particle, the long-sought chiral partner of the π meson. Furthermore, through phenomenological analyses of typical production processes of the 2π system, the pp central collision and the J/ψ → ωππ decay, applying an intuitive formula written as a sum of Breit-Wigner amplitudes (VMW: variant mass and width method), further evidence for the σ existence is given. The validity of the methods used in the above analyses is investigated, using a simple field theoretical model, from the general viewpoint of unitarity and the applicability of the final state interaction (FSI) theorem, especially in relation to the "universality" argument. It is shown that the IA and VMW are obtained as the physical state representations of scattering and production amplitudes, respectively. The VMW is shown to be an effective method for obtaining resonance properties from production processes, which generally have unknown strong phases. The conventional analyses based on "universality" seem to be powerless for this purpose.
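
    The VMW formula referred to above is, schematically, a coherent sum of Breit-Wigner terms with free production strengths and phases; the standard shape is sketched below (the paper's specific "variant mass and width" parametrization is not reproduced):

```latex
% Schematic production amplitude as a coherent sum of Breit-Wigner terms:
F(s) = \sum_i \frac{r_i\, e^{i\theta_i}\, m_i \Gamma_i}{m_i^2 - s - i\, m_i \Gamma_i}
% r_i, \theta_i : production strength and phase of resonance i
% m_i, \Gamma_i : mass and width of resonance i
```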

  18. Counseling the Adult Student. Adult Student Personnel Association Inc. Convention Proceedings. Sixth Annual Convention.

    ERIC Educational Resources Information Center

    Adult Student Personnel Association, Inc.

    The theme of this convention was counseling the adult student. Jerrold I. Hirsch, the convention chairman, introduced the theme, and presented briefly a report of a six-year study on higher adult education calling for further expansion of existing educational opportunities for adults. Robert Moseley summarized the extent of student personnel…

  19. Evaluation of direct and indirect additive manufacture of maxillofacial prostheses.

    PubMed

    Eggbeer, Dominic; Bibb, Richard; Evans, Peter; Ji, Lu

    2012-09-01

    The efficacy of computer-aided technologies in the design and manufacture of maxillofacial prostheses has not been fully proven. This paper presents research into the evaluation of direct and indirect additive manufacture of a maxillofacial prosthesis against conventional laboratory-based techniques. An implant/magnet-retained nasal prosthesis case from a UK maxillofacial unit was selected as a case study. A benchmark prosthesis was fabricated using conventional laboratory-based techniques for comparison against additive manufactured prostheses. For the computer-aided workflow, photogrammetry, computer-aided design and additive manufacture (AM) methods were evaluated in direct prosthesis body fabrication and indirect production using an additively manufactured mould. Qualitative analysis of position, shape, colour and edge quality was undertaken. Mechanical testing to ISO standards was also used to compare the silicone rubber used in the conventional prosthesis with the AM material. Critical evaluation has shown that utilising a computer-aided workflow can produce a prosthesis body that is comparable to that produced using existing best practice. Technical limitations currently prevent the direct fabrication method demonstrated in this paper from being clinically viable. This research helps prosthesis providers understand the application of a computer-aided approach and guides technology developers and researchers to address the limitations identified.

  20. A review of statistical issues with progression-free survival as an interval-censored time-to-event endpoint.

    PubMed

    Sun, Xing; Li, Xiaoyun; Chen, Cong; Song, Yang

    2013-01-01

    The frequent occurrence of interval-censored time-to-event data in randomized clinical trials (e.g., progression-free survival [PFS] in oncology) challenges statistical researchers in the pharmaceutical industry in various ways. These challenges exist in both trial design and data analysis. Conventional statistical methods treating intervals as fixed points, as generally practiced by the pharmaceutical industry, sometimes yield inferior or even flawed analysis results for interval-censored data in extreme cases. In this article, we examine the limitations of these standard methods under typical clinical trial settings and further review and compare several existing nonparametric likelihood-based methods for interval-censored data, which are more sophisticated but robust. Trial design issues involved with interval-censored data comprise another topic explored in this article. Unlike right-censored survival data, the expected sample size or power for a trial with interval-censored data relies heavily on the parametric distribution of the baseline survival function as well as the frequency of assessments. There can be substantial power loss in trials with interval-censored data if the assessments are very infrequent. Such an additional dependency controverts many fundamental assumptions and principles in conventional survival trial designs, especially the group sequential design (e.g., the concept of information fraction). In this article, we discuss these fundamental changes and the available tools to work around their impacts. Although progression-free survival is often used as a discussion point in the article, the general conclusions are equally applicable to other interval-censored time-to-event endpoints.
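
    The nonparametric likelihood-based alternatives alluded to here are usually variants of Turnbull's estimator, which maximizes the likelihood by a self-consistency (EM) iteration. A compact sketch over a fixed grid of interval endpoints (proper implementations restrict the mass to the maximal-intersection intervals):

```python
# Self-consistency (EM) sketch for the nonparametric MLE with interval-censored
# data, in the spirit of Turnbull's estimator. Intervals are invented.
import numpy as np

intervals = [(2.0, 4.0), (3.0, 6.0), (0.0, 2.0), (5.0, 7.0), (4.0, 6.0)]
support = np.unique([e for iv in intervals for e in iv])   # candidate points
# alpha[i, j] = 1 if support point j is consistent with interval i (L < s <= R)
alpha = np.array([[(L < s <= R) for s in support] for (L, R) in intervals],
                 dtype=float)

p = np.full(len(support), 1.0 / len(support))              # initial mass
for _ in range(500):                                       # EM iterations
    denom = alpha @ p                                      # P(observation i | p)
    p_new = (alpha * p).T @ (1.0 / denom) / len(intervals) # expected counts
    if np.max(np.abs(p_new - p)) < 1e-10:
        p = p_new
        break
    p = p_new

for s, mass in zip(support, p):
    if mass > 1e-6:
        print(f"t = {s:.1f}: probability mass {mass:.3f}")
```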

  1. The Testing Methods and Gender Differences in Multiple-Choice Assessment

    NASA Astrophysics Data System (ADS)

    Ng, Annie W. Y.; Chan, Alan H. S.

    2009-10-01

    This paper provides a comprehensive review of multiple-choice assessment over the past two decades, to facilitate effective testing in various subject areas. It was revealed that a variety of multiple-choice test methods, viz. conventional multiple-choice, liberal multiple-choice, elimination testing, confidence marking, probability testing, and order-of-preference schemes, are available for assessing subjects' knowledge and decision ability. However, the best multiple-choice test method has not yet been identified. The review also indicated that the existence of gender differences in multiple-choice task performance might be due to the test area, instruction/scoring condition, and item difficulty.

  2. Programmable DNA-Mediated Multitasking Processor.

    PubMed

    Shu, Jian-Jun; Wang, Qi-Wen; Yong, Kian-Yan; Shao, Fangwei; Lee, Kee Jin

    2015-04-30

    Because of DNA's appealing features as a material, including minuscule size, defined structural repeat and rigidity, programmable DNA-mediated processing is a promising computing paradigm, which employs DNA as an information storage and processing substrate to tackle computational problems. The massive parallelism of DNA hybridization exhibits exceptional potential to improve multitasking capabilities and yield a tremendous speed-up over conventional electronic processors with stepwise signal cascades. As an example of this multitasking capability, we present an in vitro programmable DNA-mediated optimal route planning processor as a functional unit embedded in contemporary navigation systems. The novel programmable DNA-mediated processor has several advantages over existing silicon-mediated methods, such as massive data storage and simultaneous processing using far less material than conventional silicon devices.

  3. Web-based surveys as an alternative to traditional mail methods.

    PubMed

    Fleming, Christopher M; Bowden, Mark

    2009-01-01

    Environmental economists have long used surveys to gather information about people's preferences. A recent innovation in survey methodology has been the advent of web-based surveys. While the Internet appears to offer a promising alternative to conventional survey administration modes, concerns exist over potential sampling biases associated with web-based surveys and the effect these may have on valuation estimates. This paper compares results obtained from a travel cost questionnaire of visitors to Fraser Island, Australia, that was conducted using two alternate survey administration modes; conventional mail and web-based. It is found that response rates and the socio-demographic make-up of respondents to the two survey modes are not statistically different. Moreover, both modes yield similar consumer surplus estimates.

  4. The Safety Course Design and Operations of Composite Overwrapped Pressure Vessels (COPV)

    NASA Technical Reports Server (NTRS)

    Saulsberry, Regor; Prosser, William

    2015-01-01

    Following a commercial launch vehicle on-pad COPV (Composite Overwrapped Pressure Vessel) failure, a request was received by the NESC (NASA Engineering and Safety Center) on June 14, 2014. An assessment was approved on July 10, 2014, to develop and assess the capability of scanning eddy current (EC) nondestructive evaluation (NDE) methods for mapping thickness and inspecting for flaws. Current methods could not identify thickness reduction from necking, and critical flaw detection was not possible with conventional dye penetrant (PT) methods, so sensitive EC scanning techniques were needed. Developmental methods existed, but had not been fully developed, nor had the requisite capability assessment (i.e., a POD (Probability of Detection) study) been performed.

  5. Real-Space Analysis of Scanning Tunneling Microscopy Topography Datasets Using Sparse Modeling Approach

    NASA Astrophysics Data System (ADS)

    Miyama, Masamichi J.; Hukushima, Koji

    2018-04-01

    A sparse modeling approach is proposed for analyzing scanning tunneling microscopy topography data, which contain numerous peaks originating from the electron density of surface atoms and/or impurities. The method, based on the relevance vector machine with L1 regularization and k-means clustering, enables separation of the peaks and peak center positioning with accuracy beyond the resolution of the measurement grid. The validity and efficiency of the proposed method are demonstrated using synthetic data in comparison with the conventional least-squares method. An application of the proposed method to experimental data of a metallic oxide thin film clearly indicates the existence of defects and corresponding local lattice distortions.
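
    A simplified 1-D analogue of the pipeline: represent the data as a sparse sum of Gaussian atoms on a grid via L1-regularized regression, then cluster the surviving atoms with k-means to read off peak centres. Grid, atom width and regularization strength are invented; the paper applies a relevance vector machine to 2-D topographs.

```python
# 1-D sparse-modelling analogue: L1-regularized fit over a Gaussian dictionary,
# then k-means clustering of the active atoms to locate peak centres.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import KMeans

x = np.linspace(0.0, 10.0, 400)
signal = (np.exp(-(x - 3.1)**2 / 0.08) + 0.8 * np.exp(-(x - 6.7)**2 / 0.08)
          + 0.02 * np.random.randn(x.size))

centres = np.linspace(0.0, 10.0, 200)                      # dictionary grid
D = np.exp(-(x[:, None] - centres[None, :])**2 / 0.08)     # Gaussian atoms

coefs = Lasso(alpha=0.01, max_iter=10000).fit(D, signal).coef_
active = centres[coefs > 1e-3]                             # surviving atoms

km = KMeans(n_clusters=2, n_init=10).fit(active.reshape(-1, 1))
print("estimated peak centres:", np.sort(km.cluster_centers_.ravel()))
```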

  6. Invited review: organic and conventionally produced milk-an evaluation of factors influencing milk composition.

    PubMed

    Schwendel, B H; Wester, T J; Morel, P C H; Tavendale, M H; Deadman, C; Shadbolt, N M; Otter, D E

    2015-02-01

    Consumer perception of organic cow milk is associated with the assumption that organic milk differs from conventionally produced milk. The value associated with this difference justifies the premium retail price for organic milk. It includes the perceptions that organic dairy farming is kinder to the environment, animals, and people; that organic milk products are produced without the use of antibiotics, added hormones, synthetic chemicals, and genetic modification; and that they may have potential benefits for human health. Controlled studies investigating whether differences exist between organic and conventionally produced milk have so far been largely equivocal due principally to the complexity of the research question and the number of factors that can influence milk composition. A main complication is that farming practices and their effects differ depending on country, region, year, and season between and within organic and conventional systems. Factors influencing milk composition (e.g., diet, breed, and stage of lactation) have been studied individually, whereas interactions between multiple factors have been largely ignored. Studies that fail to consider that factors other than the farming system (organic vs. conventional) could have caused or contributed to the reported differences in milk composition make it impossible to determine whether a system-related difference exists between organic and conventional milk. Milk fatty acid composition has been a central research area when comparing organic and conventional milk largely because the milk fatty acid profile responds rapidly and is very sensitive to changes in diet. Consequently, the effect of farming practices (high input vs. low input) rather than farming system (organic vs. conventional) determines milk fatty acid profile, and similar results are seen between low-input organic and low-input conventional milks. This confounds our ability to develop an analytical method to distinguish organic from conventionally produced milk and provide product verification. Lack of research on interactions between several influential factors and differences in trial complexity and consistency between studies (e.g., sampling period, sample size, reporting of experimental conditions) complicate data interpretation and prevent us from making unequivocal conclusions. The first part of this review provides a detailed summary of individual factors known to influence milk composition. The second part presents an overview of studies that have compared organic and conventional milk and discusses their findings within the framework of the various factors presented in part one. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  7. Principles and limitations of stable isotopes in differentiating organic and conventional foodstuffs: 2. Animal products.

    PubMed

    Inácio, Caio T; Chalk, Phillip M

    2017-01-02

    In this review, we examine the variation in stable isotope signatures of the lighter elements (δ²H, δ¹³C, δ¹⁵N, δ¹⁸O, and δ³⁴S) of tissues and excreta of domesticated animals, the factors affecting the isotopic composition of animal tissues, and whether stable isotopes may be used to differentiate organic and conventional modes of animal husbandry. The main factors affecting the δ¹³C signatures of livestock are the C3/C4 composition of the diet, the relative digestibility of the diet components, metabolic turnover, tissue and compound specificity, growth rate, and animal age. δ¹⁵N signatures of sheep and cattle products have been related mainly to diet signatures, which are quite variable among farms and between years. Although few data exist, a minor influence on δ¹⁵N signatures of animal products was attributed to N losses at the farm level, whereas stocking rate showed divergent findings. Correlations between mode of production and δ²H and δ¹⁸O have not been established, and only in one case of an animal product was δ³⁴S a satisfactory marker for mode of production. While many data exist on diet-tissue isotopic discrimination values among domesticated animals, there is a paucity of data that allow a direct and statistically verifiable comparison of the differences in the isotopic signatures of organically and conventionally grown animal products. The few comparisons are confined to beef, milk, and egg yolk, with no data for swine or lamb products. δ¹³C appears to be the most promising isotopic marker to differentiate organic and conventional production systems when maize (C4) is present in the conventional animal diet. However, δ¹³C may be unsuitable under tropical conditions, where C4 grasses are abundant, and where grass-based husbandry is predominant in both conventional and organic systems. Presently, there is no universal analytical method that can be applied to differentiate organic and conventional animal products.

  8. Automatic coronary artery segmentation based on multi-domains remapping and quantile regression in angiographies.

    PubMed

    Li, Zhixun; Zhang, Yingtao; Gong, Huiling; Li, Weimin; Tang, Xianglong

    2016-12-01

    Coronary artery disease has become one of the most dangerous diseases to human life, and coronary artery segmentation is the basis of computer-aided diagnosis and analysis. Existing segmentation methods have difficulty handling the complex vascular texture that results from the projective nature of conventional coronary angiography. Because of the large amount of data and the complex vascular shapes, manual annotation has become increasingly unrealistic, and a fully automatic segmentation method is necessary in clinical practice. In this work, we study a method for automatic coronary artery segmentation of angiography images based on reliable boundaries via multi-domains remapping and robust discrepancy correction via distance balance and quantile regression. The proposed method can not only segment overlapping vascular structures robustly, but also achieves good performance in low-contrast regions. The effectiveness of our approach is demonstrated on a variety of coronary blood vessels in comparison with existing methods. The overall segmentation performances si, fnvf, fvpf and tpvf were 95.135%, 3.733%, 6.113%, and 96.268%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. End of the chain? Rugosity and fine-scale bathymetry from existing underwater digital imagery using structure-from-motion (SfM) technology

    USGS Publications Warehouse

    Storlazzi, Curt; Dartnell, Peter; Hatcher, Gerry; Gibbs, Ann E.

    2016-01-01

    The rugosity or complexity of the seafloor has been shown to be an important ecological parameter for fish, algae, and corals. Historically, rugosity has been measured either using simple and subjective manual methods such as ‘chain-and-tape’ or complicated and expensive geophysical methods. Here, we demonstrate the application of structure-from-motion (SfM) photogrammetry to generate high-resolution, three-dimensional bathymetric models of a fringing reef from existing underwater video collected to characterize the seafloor. SfM techniques are capable of achieving spatial resolution that can be orders of magnitude greater than large-scale lidar and sonar mapping of coral reef ecosystems. The resulting data provide finer-scale measurements of bathymetry and rugosity that are more applicable to ecological studies of coral reefs than provided by the more expensive and time-consuming geophysical methods. Utilizing SfM techniques for characterizing the benthic habitat proved to be more effective and quantitatively powerful than conventional methods and thus might portend the end of the ‘chain-and-tape’ method for measuring benthic complexity.
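
    Once SfM yields a gridded bathymetric surface, rugosity is simply the ratio of 3-D surface area to planar area. A sketch that triangulates each grid cell (the toy relief and grid spacing are invented):

```python
# Rugosity from a gridded bathymetric model: ratio of 3-D surface area to
# planar area, splitting each grid cell into two triangles.
import numpy as np

def rugosity(z, dx, dy):
    """Surface-area / planar-area ratio for a depth grid z (metres)."""
    area3d = 0.0
    for i in range(z.shape[0] - 1):
        for j in range(z.shape[1] - 1):
            p00 = np.array([0.0, 0.0, z[i, j]])
            p10 = np.array([dx, 0.0, z[i, j + 1]])
            p01 = np.array([0.0, dy, z[i + 1, j]])
            p11 = np.array([dx, dy, z[i + 1, j + 1]])
            area3d += 0.5 * np.linalg.norm(np.cross(p10 - p00, p01 - p00))
            area3d += 0.5 * np.linalg.norm(np.cross(p10 - p11, p01 - p11))
    planar = dx * dy * (z.shape[0] - 1) * (z.shape[1] - 1)
    return area3d / planar

yy, xx = np.mgrid[0:50, 0:50] * 0.02                # 2 cm grid spacing
z = 0.05 * np.sin(10 * xx) * np.cos(12 * yy)        # toy reef-like relief
print("rugosity:", rugosity(z, 0.02, 0.02))         # 1.0 means perfectly flat
```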

  10. NOTE: Solving the ECG forward problem by means of a meshless finite element method

    NASA Astrophysics Data System (ADS)

    Li, Z. S.; Zhu, S. A.; He, Bin

    2007-07-01

    Conventional numerical computational techniques such as the finite element method (FEM) and the boundary element method (BEM) require laborious and time-consuming model meshing. The new meshless FEM uses only the boundary description and the node distribution, and no meshing of the model is required. This paper presents the fundamentals and implementation of the meshless FEM, and the method is adapted to solve the electrocardiography (ECG) forward problem. The method is evaluated on a single-layer torso model, for which an analytical solution exists, and tested in a realistic-geometry homogeneous torso model, with satisfactory results being obtained. The present results suggest that the meshless FEM may provide an alternative for ECG forward solutions.

  11. Creating objects and object categories for studying perception and perceptual learning.

    PubMed

    Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay

    2012-11-02

    In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties. Many innovative and useful methods currently exist for creating novel objects and object categories (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.

  12. Protocol for Determining Ultraviolet Light Emitting Diode (UV-LED) Fluence for Microbial Inactivation Studies.

    PubMed

    Kheyrandish, Ataollah; Mohseni, Madjid; Taghipour, Fariborz

    2018-06-15

    Determining fluence is essential for deriving the inactivation kinetics of microorganisms and for designing ultraviolet (UV) reactors for water disinfection. UV light-emitting diodes (UV-LEDs) are emerging UV sources with various advantages over conventional UV lamps. Unlike for conventional mercury lamps, no standard method is available to determine the average fluence delivered by UV-LEDs, and the conventional methods used to determine the fluence of UV mercury lamps are not applicable to UV-LEDs because of their relatively low power output, polychromatic output, and specific radiation profile. In this study, a method was developed to determine the average fluence inside a water suspension in a UV-LED experimental setup. In this method, the average fluence is estimated by measuring the irradiance at a few points for a collimated and uniform radiation field on a Petri dish surface. New correction parameters were defined and proposed, and several of the existing parameters for determining the fluence of UV mercury lamp apparatus were revised to measure and quantify the collimation and uniformity of the radiation. To study the effect of the polychromatic output and radiation profile of UV-LEDs, two UV-LEDs with peak wavelengths of 262 and 275 nm and different radiation profiles were selected as representatives of typical UV-LEDs applied to microbial inactivation. The proper setup configuration for microorganism inactivation studies was also determined based on the defined correction factors.
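
    For orientation, the conventional collimated-beam calculation for mercury lamps multiplies the centre irradiance by a chain of correction factors (Petri, reflection, divergence and water factors); the paper revises several of these for UV-LEDs. A sketch of the conventional chain with illustrative numbers (the LED-specific corrections introduced by the paper are not reproduced):

```python
# Conventional collimated-beam average-fluence calculation (Bolton-Linden
# style). All numerical values below are illustrative, not measured.
import numpy as np

E0 = 0.10        # irradiance at Petri-dish centre, mW/cm^2 (radiometer reading)
petri = 0.92     # Petri factor: mean/centre irradiance over the dish surface
refl = 0.975     # reflection factor at the air-water interface (~2.5% loss)
L, depth = 30.0, 1.0          # lamp-to-surface distance and water depth, cm
a254 = 0.12                   # decadic absorption coefficient of the water, 1/cm

div = L / (L + depth)                                            # divergence factor
water = (1 - 10**(-a254 * depth)) / (a254 * depth * np.log(10))  # water factor

avg_fluence_rate = E0 * petri * refl * div * water               # mW/cm^2
t = 60.0                                                         # exposure, s
print("average fluence:", avg_fluence_rate * t, "mJ/cm^2")
```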

  13. Finite-time synchronization control of a class of memristor-based recurrent neural networks.

    PubMed

    Jiang, Minghui; Wang, Shuangtao; Mei, Jun; Shen, Yanjun

    2015-03-01

    This paper presents global and local finite-time synchronization control laws for memristor neural networks. By utilizing the drive-response concept, differential inclusion theory, and the Lyapunov functional method, we establish several sufficient conditions for finite-time synchronization between the master and the corresponding slave memristor-based neural network with the designed controller. In comparison with existing results, the proposed stability conditions are new, and the obtained results extend some previous works on conventional recurrent neural networks. Two numerical examples are provided to illustrate the effectiveness of the design method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Structural design using equilibrium programming formulations

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1995-01-01

    Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.

  15. Reinforcement learning for resource allocation in LEO satellite networks.

    PubMed

    Usaha, Wipawee; Barria, Javier A

    2007-06-01

    In this paper, we develop and assess online decision-making algorithms for call admission and routing for low Earth orbit (LEO) satellite networks. It has been shown in a recent paper that, in a LEO satellite system, a semi-Markov decision process formulation of the call admission and routing problem can achieve better performance in terms of an average revenue function than existing routing methods. However, the conventional dynamic programming (DP) numerical solution becomes prohibitive as the problem size increases. In this paper, two solution methods based on reinforcement learning (RL) are proposed to circumvent the computational burden of DP. The first method is based on an actor-critic method with temporal-difference (TD) learning. The second is based on a critic-only method, called optimistic TD learning. The algorithms enhance performance in terms of storage requirements, computational complexity and computational time, and in terms of an overall long-term average revenue function that penalizes blocked calls. Numerical studies are carried out, and the results obtained show that the RL framework can achieve up to 56% higher average revenue than existing routing methods used in LEO satellite networks, with reasonable storage and computational requirements.
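
    Both proposed solutions are built on temporal-difference learning, in which a critic incrementally corrects its value estimates using the TD error. A toy TD(0) sketch, with invented states and rewards standing in for the call-admission and routing problem:

```python
# Minimal TD(0) critic: update a value estimate from observed rewards.
# States, rewards and transitions are toy stand-ins, not the paper's model.
import numpy as np

n_states, gamma, lr = 5, 0.95, 0.1
V = np.zeros(n_states)                       # critic's value estimates
rng = np.random.default_rng(1)

s = 0
for step in range(10000):
    reward = rng.normal(loc=s, scale=0.5)    # toy revenue for serving state s
    s_next = rng.integers(n_states)          # toy state transition
    td_error = reward + gamma * V[s_next] - V[s]   # TD(0) error
    V[s] += lr * td_error                    # critic update
    s = s_next

print("learned state values:", np.round(V, 2))
```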

  16. An Orientation Measurement Method Based on Hall-effect Sensors for Permanent Magnet Spherical Actuators with 3D Magnet Array

    PubMed Central

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-01-01

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment present in conventional approaches. A curved-surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space; comparison with the conventional modeling method shows that it improves the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, so that the rotor orientation can be computed from the measured results and the analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the measured magnetic flux density. The experimental results show that the proposed method measures the rotor orientation precisely, and that the measurement accuracy is improved by the novel 3D magnet array. The results could be used for real-time motion control of PM spherical actuators. PMID:25342000

  17. Measuring carbon in forests: current status and future challenges.

    PubMed

    Brown, Sandra

    2002-01-01

    Accurate and precise measurement of carbon in forests is gaining global attention as countries seek to comply with agreements under the UN Framework Convention on Climate Change. Established methods for measuring carbon in forests exist, and are best based on permanent sample plots laid out in a statistically sound design. Measurements on trees in these plots can be readily converted to aboveground biomass using either biomass expansion factors or allometric regression equations. A compilation of existing root biomass data for upland forests of the world generated a significant regression equation that can be used to predict root biomass from aboveground biomass alone. Methods for measuring coarse dead wood have been tested in many forest types, but they could be improved if a non-destructive tool for measuring the density of dead wood were developed. Future measurements of carbon storage in forests may rely more on remote sensing data, and new remote data collection technologies are in development.

  18. Complementing or Conflicting Human Rights Conventions? Realising an Inclusive Approach to Families with a Young Person with a Disability and Challenging Behaviour

    ERIC Educational Resources Information Center

    Muir, Kristy; Goldblatt, Beth

    2011-01-01

    United Nations conventions exist to help facilitate and protect vulnerable people's human rights, including those of people with disabilities (Convention on the Rights of Persons with Disabilities, 2006) and children (Convention on the Rights of the Child, 1989). However, for some families where a family member has a disability, there may be inherent…

  19. CNA/The Institute for Statecraft Meeting on Evaluating and Countering the Threat from the Islamic State

    DTIC Science & Technology

    2015-08-10

    Charlie Hebdo. Mastery of social media enables IS to exist as a virtual global caliphate. Islamic State: From Jihadi Movement to Potential Global...military skills but also in classic terror methods, in propaganda techniques, and in deception. As a conventional military, IS has demonstrated...Salafism, a strict constructionist, originalist interpretation of the core texts of Islam. Their intention is to reform Islam and return to what they

  20. A conformal mapping based fractional order approach for sub-optimal tuning of PID controllers with guaranteed dominant pole placement

    NASA Astrophysics Data System (ADS)

    Saha, Suman; Das, Saptarshi; Das, Shantanu; Gupta, Amitava

    2012-09-01

    A novel conformal mapping based fractional order (FO) methodology is developed in this paper for tuning existing classical (integer order) Proportional Integral Derivative (PID) controllers, especially for sluggish and oscillatory second order systems. The conventional pole placement tuning via the Linear Quadratic Regulator (LQR) method is extended to open loop oscillatory systems as well. The locations of the open loop zeros of a fractional order PID (FOPID or PIλDμ) controller are approximated in this paper vis-à-vis an LQR-tuned conventional integer order PID controller, to achieve an equivalent integer order PID control system. This approach eases the analog/digital realization of a FOPID controller via its integer order counterpart while preserving the advantages of the fractional order controller. It is shown that a decrease in the integro-differential operators of the FOPID/PIλDμ controller pushes the open loop zeros of the equivalent PID controller towards regions of greater damping, which gives a trajectory of the controller zeros and dominant closed loop poles. This trajectory is termed the "M-curve". This phenomenon is used to design a two-stage tuning algorithm which significantly reduces the existing PID controller's effort compared with a single-stage LQR-based pole placement method at a desired closed loop damping and frequency.

  1. Benzene construction via organocatalytic formal [3+3] cycloaddition reaction.

    PubMed

    Zhu, Tingshun; Zheng, Pengcheng; Mou, Chengli; Yang, Song; Song, Bao-An; Chi, Yonggui Robin

    2014-09-25

    The benzene unit, in its substituted forms, is one of the most common scaffolds in natural products, bioactive molecules and polymer materials. Nearly 80% of the 200 best-selling small molecule drugs contain at least one benzene moiety. Not surprisingly, the synthesis of substituted benzenes receives constant attention. At present, the dominant methods use a pre-existing benzene framework and install substituents by conventional functional group manipulations or transition metal-catalyzed carbon-hydrogen bond activations. These otherwise impressive approaches require multiple synthetic steps and are ineffective from both economic and environmental perspectives. Here we report an efficient method for the synthesis of substituted benzene molecules. Instead of relying on pre-existing aromatic rings, we construct the benzene core through a carbene-catalyzed formal [3+3] reaction. Given its simplicity and high efficiency, we expect this strategy to be of wide use, especially for the large-scale preparation of biomedicals and functional materials.

  2. Application of polymerase chain reaction for detection of Vibrio parahaemolyticus associated with tropical seafoods and coastal environment.

    PubMed

    Dileep, V; Kumar, H S; Kumar, Y; Nishibuchi, M; Karunasagar, Indrani; Karunasagar, Iddya

    2003-01-01

    The aim was to study the incidence of Vibrio parahaemolyticus in seafoods, water and sediment by molecular techniques versus conventional microbiological methods. Of 86 samples analysed, 28 were positive for V. parahaemolyticus by the conventional microbiological method, while 53 were positive by the toxR-targeted PCR performed directly on enrichment broth lysates. One sample of molluscan shellfish was positive for the tdh gene, while the trh gene was detected in three enrichment broths of molluscan shellfish. Direct application of PCR to enrichment broths will be useful for the rapid and sensitive detection of potentially pathogenic strains of V. parahaemolyticus in seafoods. Vibrio parahaemolyticus is an important human pathogen responsible for food-borne gastroenteritis worldwide. As both pathogenic and non-pathogenic strains of V. parahaemolyticus exist in seafood, application of PCR specific for the virulence genes (tdh and trh) will help in the detection of pathogenic strains of V. parahaemolyticus and consequently reduce the risk of food-borne illness.

  3. Development of fuzzy air quality index using soft computing approach.

    PubMed

    Mandal, T; Gorai, A K; Pathak, G

    2012-10-01

    Proper assessment of air quality status in an atmosphere based on limited observations is an essential task for meeting the goals of environmental management. A number of classification methods are available for estimating the changing status of air quality. However, discrepancies frequently arise from the air quality criteria employed and from the vagueness or fuzziness embedded in decision-making outputs. Owing to this inherent imprecision, difficulties always exist with conventional methodologies like the air quality index when describing integrated air quality conditions with respect to various pollutant parameters and times of exposure. In recent years, fuzzy logic-based methods have been demonstrated to be appropriate for addressing uncertainty and subjectivity in environmental issues. In the present study, a methodology based on fuzzy inference systems (FIS) to assess air quality is proposed. This paper presents a comparative study assessing the status of air quality using the fuzzy logic technique and the conventional technique. The findings clearly indicate that the FIS may successfully harmonize inherent discrepancies and interpret complex conditions.
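
    A fuzzy inference system maps crisp inputs through membership functions and a rule base, then defuzzifies the aggregated output. A deliberately tiny Mamdani-style sketch with one pollutant input and an invented rule base (real FIS designs use several inputs and many rules):

```python
# Tiny Mamdani-style fuzzy inference: fuzzify, apply rules (min), aggregate
# (max), then defuzzify by centroid. Membership functions are invented.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

pm10 = 85.0                                   # input concentration, ug/m^3
low = tri(pm10, -60.0, 0.0, 60.0)             # membership in "low"
med = tri(pm10, 30.0, 75.0, 120.0)            # membership in "moderate"
high = tri(pm10, 80.0, 150.0, 220.0)          # membership in "high"

u = np.linspace(0.0, 100.0, 501)              # output universe: quality score
# Rules: low -> good, moderate -> fair, high -> poor (clip the output sets).
agg = np.maximum.reduce([np.minimum(low, tri(u, -40.0, 0.0, 40.0)),
                         np.minimum(med, tri(u, 20.0, 50.0, 80.0)),
                         np.minimum(high, tri(u, 60.0, 100.0, 140.0))])

score = (agg * u).sum() / agg.sum()           # centroid defuzzification
print("fuzzy air quality score:", round(score, 1))
```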

  4. Improved design of constrained model predictive tracking control for batch processes against unknown uncertainties.

    PubMed

    Wu, Sheng; Jin, Qibing; Zhang, Ridong; Zhang, Junfeng; Gao, Furong

    2017-07-01

    In this paper, an improved constrained tracking control design is proposed for batch processes under uncertainties. A new process model is first proposed that facilitates process state and tracking error augmentation and allows further additional tuning. A subsequent controller design is then formulated using robust stable constrained MPC optimization. Unlike conventional robust model predictive control (MPC), the proposed method gives the controller design more degrees of tuning freedom, so that improved tracking control can be acquired; this is important since uncertainties inevitably exist in practice and cause model/plant mismatches. An injection molding process is introduced to illustrate the effectiveness of the proposed MPC approach in comparison with conventional robust MPC. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  5. An efficient temporal database design method based on EER

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Huang, Jiping; Miao, Hua

    2007-12-01

    Many existing methods of modeling temporal information are based on the logical model, which makes relational schema optimization more difficult and more complicated. In this paper, based on the conventional EER model, the author analyses and abstracts temporal information in the conceptual modelling phase, according to the concrete requirements for historical information. A temporal data model named BTEER is then presented. BTEER not only retains all the design ideas and methods of EER, which gives it good upward compatibility, but also supports effective modelling of both valid time and transaction time. In addition, BTEER can be transformed to EER easily and automatically. Practice shows that this method models temporal information well.

  6. Density scaling for multiplets

    NASA Astrophysics Data System (ADS)

    Nagy, Á.

    2011-02-01

    Generalized Kohn-Sham equations are presented for lowest-lying multiplets. The way of treating non-integer particle numbers is coupled with an earlier method of the author. The fundamental quantity of the theory is the subspace density. The Kohn-Sham equations are similar to the conventional Kohn-Sham equations. The difference is that the subspace density is used instead of the density and the Kohn-Sham potential is different for different subspaces. The exchange-correlation functional is studied using density scaling. It is shown that there exists a value of the scaling factor ζ for which the correlation energy disappears. Generalized OPM and Krieger-Li-Iafrate (KLI) methods incorporating correlation are presented. The ζKLI method, being as simple as the original KLI method, is proposed for multiplets.

  7. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

    NASA Astrophysics Data System (ADS)

    Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

    2018-04-01

    In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.
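
    Radon-domain processing starts from the slant-stack (linear Radon, or tau-p) transform, which sums the gather along lines of constant slowness. A bare-bones forward transform on a toy gather with a near-offset gap (sampling, slowness range and the test event are illustrative only):

```python
# Bare-bones linear Radon (slant-stack) transform: m(tau, p) = sum over offsets
# of d(tau + p*x, x). The toy gather contains a single linear event.
import numpy as np

def slant_stack(d, t, x, p):
    """Forward tau-p transform of gather d(t, x) by shift-and-sum."""
    m = np.zeros((len(t), len(p)))
    for j, pj in enumerate(p):
        for k, xk in enumerate(x):
            shifted = np.interp(t + pj * xk, t, d[:, k], left=0.0, right=0.0)
            m[:, j] += shifted
    return m

dt = 0.004
t = np.arange(0, 1.0, dt)
x = np.arange(100.0, 1100.0, 50.0)               # offsets with a near-offset gap
p = np.linspace(-0.0008, 0.0008, 41)             # slownesses, s/m

d = np.zeros((len(t), len(x)))                   # toy gather: one linear event
for k, xk in enumerate(x):
    d[np.searchsorted(t, 0.2 + 0.0004 * xk), k] = 1.0

m = slant_stack(d, t, x, p)
print("energy focuses near p =", p[np.argmax(np.abs(m).max(axis=0))], "s/m")
```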

  8. A New Method for Single-Epoch Ambiguity Resolution with Indoor Pseudolite Positioning.

    PubMed

    Li, Xin; Zhang, Peng; Guo, Jiming; Wang, Jinling; Qiu, Weining

    2017-04-21

    Ambiguity resolution (AR) is crucial for high-precision indoor pseudolite positioning. Owing to the characteristics of the pseudolite positioning system (the geometry of the stationary pseudolites is invariant, the indoor signal is easily interrupted, and the first-order linear truncation error cannot be ignored), a new AR method based on the idea of the ambiguity function method (AFM) is proposed in this paper. The proposed method is a single-epoch, nonlinear method that is especially well suited to indoor pseudolite positioning. Considering the very low computational efficiency of the conventional AFM, we adopt an improved particle swarm optimization (IPSO) algorithm to search for the best solution in the coordinate domain, and the variance of a least-squares adjustment is examined to ensure the reliability of the resolved ambiguities. Several experiments, including static and kinematic tests, are conducted to verify the validity of the proposed AR method. Numerical results show that the IPSO significantly improves the computational efficiency of the AFM and has a finer search capability than the conventional grid search method. For the indoor pseudolite system, which had an initial approximate coordinate precision better than 0.2 m, the AFM exhibited good performance in both static and kinematic tests. With the corrected ambiguities gained from our proposed method, indoor pseudolite positioning can achieve centimeter-level precision using a low-cost single-frequency software receiver.
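
    The ambiguity function method scores candidate positions with a cost that is insensitive to the unknown integer cycles, typically a mean of cosines of the phase residuals. A toy sketch with an invented pseudolite geometry, searched here on a coarse grid rather than with the paper's IPSO:

```python
# Core of the ambiguity function method: a cost surface over candidate
# positions whose value ignores the integer part of each cycle residual.
import numpy as np

wavelength = 0.19                                  # carrier wavelength, m
plite = np.array([[0.0, 0.0, 3.0], [5.0, 0.0, 3.0], [0.0, 5.0, 3.0],
                  [5.0, 5.0, 3.0], [2.5, 2.5, 4.0]])   # pseudolite positions
truth = np.array([2.0, 3.0, 1.2])                  # true receiver position, m

def phase(pos):
    """Carrier phase in cycles for each pseudolite-receiver range."""
    return np.linalg.norm(plite - pos, axis=1) / wavelength

obs = phase(truth)                                 # simulated observations

def ambiguity_function(pos):
    # cos(2*pi*residual) is unchanged by adding any integer number of cycles.
    return np.mean(np.cos(2 * np.pi * (obs - phase(pos))))

# Coarse grid search; AF reaches 1 at the true position.
grid = np.linspace(0.0, 5.0, 51)
best = max(((x, y, z) for x in grid for y in grid
            for z in np.linspace(0.5, 2.0, 16)),
           key=lambda q: ambiguity_function(np.array(q)))
print("AF maximum near:", best)
```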

  9. Design for a Crane Metallic Structure Based on Imperialist Competitive Algorithm and Inverse Reliability Strategy

    NASA Astrophysics Data System (ADS)

    Fan, Xiao-Ning; Zhi, Bo

    2017-07-01

    Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding the convergence failure, calculation error, and disproportionate computational effort encountered with conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety, at about one-third of the convergence time and computational cost of the existing method. This paper provides a scientific and effective approach for the design of metallic structures of cranes.

  10. Potential effect of cationic liposomes on interactions with oral bacterial cells and biofilms.

    PubMed

    Sugano, Marika; Morisaki, Hirobumi; Negishi, Yoichi; Endo-Takahashi, Yoko; Kuwata, Hirotaka; Miyazaki, Takashi; Yamamoto, Matsuo

    2016-01-01

    Although oral infectious diseases have been attributed to bacteria, drug treatments remain ineffective because bacteria and their products exist as biofilms. Cationic liposomes have been suggested to electrostatically interact with the negative charge on the bacterial surface, thereby improving the effects of conventional drug therapies. However, the electrostatic interaction between oral bacteria and cationic liposomes has not yet been examined in detail. The aim of the present study was to examine the behavior of cationic liposomes and Streptococcus mutans in planktonic cells and biofilms. Liposomes with or without cationic lipid were prepared using a reverse-phase evaporation method. The zeta potentials of conventional liposomes (without cationic lipid) and cationic liposomes were -13 and 8 mV, respectively, and both had a mean particle size of approximately 180 nm. We first assessed the interaction between liposomes and planktonic bacterial cells with a flow cytometer. We then used a surface plasmon resonance method to examine the binding of liposomes to biofilms. We confirmed the binding behavior of liposomes with biofilms using confocal laser scanning microscopy. The interactions between cationic liposomes and S. mutans cells and biofilms were stronger than those of conventional liposomes. Microscopic observations revealed that many cationic liposomes interacted with the bacterial mass and penetrated the deep layers of biofilms. In this study, we demonstrated that cationic liposomes had higher affinity than conventional liposomes, not only for oral bacterial cells but also for biofilms. This electrostatic interaction may be useful as a potential drug delivery system to biofilms.

  11. A New SEYHAN's Approach in Case of Heterogeneity of Regression Slopes in ANCOVA.

    PubMed

    Ankarali, Handan; Cangur, Sengul; Ankarali, Seyit

    2018-06-01

    In this study, a new approach named SEYHAN is suggested for cases in which the linearity and homogeneity-of-regression-slopes assumptions of conventional ANCOVA are not met, allowing conventional ANCOVA to be used instead of robust or nonlinear ANCOVA. The proposed SEYHAN approach transforms a continuous covariate into a categorical one when the relationship between the covariate and the dependent variable is nonlinear and the regression slopes are not homogeneous. A simulated data set was used to illustrate SEYHAN's approach. The MARS method was used to categorize the covariate; conventional ANCOVA was then performed within each subgroup defined by the knot values, and an analysis of variance with a two-factor model was fitted. The first model is simpler than the second, which includes an interaction term. Because the model with the interaction effect uses more subjects, the power of the test increases and an existing significant difference is revealed more clearly. With the help of this approach, nonlinearity and heterogeneity of regression slopes are no longer obstacles to analyzing data with the conventional linear ANCOVA model. It can be used quickly and efficiently in the presence of one or more covariates.
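    A minimal sketch of the underlying recipe, under assumed knot values, might look like this: the covariate is categorized at the knots (which would come from a fitted MARS model) and a two-factor model with interaction is then fitted. All data, knot values and names below are simulated placeholders, not the paper's simulation design.

```python
# Illustrative sketch: categorize a covariate at assumed MARS knot values,
# then fit a two-factor ANOVA with interaction. Placeholder data throughout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({"group": rng.choice(["A", "B"], n),
                   "x": rng.uniform(0, 10, n)})
# Nonlinear covariate effect plus slope heterogeneity between groups
df["y"] = np.where(df.x < 5, 2 * df.x, 10 + 0.2 * df.x) \
          + (df.group == "B") * df.x * 0.5 + rng.normal(0, 1, n)

knots = [5.0]   # would come from a fitted MARS model in practice
df["x_cat"] = pd.cut(df.x, bins=[-np.inf, *knots, np.inf], labels=["low", "high"])

model = smf.ols("y ~ C(group) * C(x_cat)", data=df).fit()
print(anova_lm(model, typ=2))
```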

  12. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
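    The core idea can be sketched as follows: the image is parameterized as x = Kα, with a kernel matrix K built from anatomical-prior features, and the usual ML-EM update is applied to the coefficients α. Everything below (system matrix, features, sizes) is a random toy stand-in, not the authors' implementation or a real PET model.

```python
# Toy sketch of kernelized ML-EM: x = K @ alpha with K from anatomical
# features; the EM update is run on alpha. Random stand-in data throughout.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_det = 64, 128
A = rng.random((n_det, n_pix)) * 0.1            # toy system matrix
y = rng.poisson(A @ rng.random(n_pix))          # toy sinogram counts

feats = rng.random((n_pix, 3))                  # stand-in anatomical features
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * 0.5 ** 2))                # Gaussian kernel matrix

alpha = np.ones(n_pix)
sens = K.T @ (A.T @ np.ones(n_det))             # sensitivity term K^T A^T 1
for _ in range(50):                              # EM iterations on alpha
    ratio = y / np.maximum(A @ (K @ alpha), 1e-12)
    alpha *= (K.T @ (A.T @ ratio)) / np.maximum(sens, 1e-12)
x_rec = K @ alpha                                # reconstructed image
```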

  13. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  14. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  15. Ultrasonically controlled particle size distribution of explosives: a safe method.

    PubMed

    Patil, Mohan Narayan; Gore, G M; Pandit, Aniruddha B

    2008-03-01

    Size reduction of high energy materials (HEMs) by conventional methods (mechanical means) is not safe, as these materials are very sensitive to friction and impact. Modified crystallization techniques can be used for the same purpose. The solute is dissolved in the solvent and crystallized via cooling, or is precipitated out using an antisolvent. The various crystallization parameters, such as temperature, antisolvent addition rate and agitation, are adjusted to obtain the required final crystal size and morphology. The solvent-antisolvent ratio, time of crystallization and yield of the product are the key factors for controlling an antisolvent based precipitation process. The advantages of cavitationally induced nucleation can be coupled with the conventional crystallization process. This study examines the effect of ultrasonically generated acoustic cavitation on the solvent-antisolvent based precipitation process. CL-20, a high-energy explosive compound, is a polyazapolycyclic caged polynitramine. CL-20 has greater energy output than existing (in-use) energetic ingredients while having an acceptable level of insensitivity to shock and other external stimuli. The size control and size distribution manipulation of the high energy material (CL-20) were successfully carried out safely and quickly, along with an increase in the final mass yield compared to the conventional antisolvent based precipitation process.

  16. Predicting hot spots in protein interfaces based on protrusion index, pseudo hydrophobicity and electron-ion interaction pseudopotential features

    PubMed Central

    Xia, Junfeng; Yue, Zhenyu; Di, Yunqiang; Zhu, Xiaolei; Zheng, Chun-Hou

    2016-01-01

    The identification of hot spots, the small subset of protein interface residues that accounts for the majority of binding free energy, is becoming more important for research on drug design and cancer development. Building on our previous methods (APIS and KFC2), here we propose a novel hot spot prediction method. We first constructed a wide variety of 108 sequence, structural, and neighborhood features to characterize potential hot spot residues, including conventional features and a new one (pseudo hydrophobicity) introduced in this study. We then selected the 3 top-ranking features that contribute the most to the classification via a two-step feature selection process consisting of the minimal-redundancy-maximal-relevance (mRMR) algorithm and an exhaustive search. We used support vector machines to build our final prediction model. When tested on an independent test set, our method showed the highest F1-score of 0.70 and MCC of 0.46 compared with the existing state-of-the-art hot spot prediction methods. Our results indicate that these features are more effective than the conventional features considered previously, and that the combination of our features and traditional ones may support the creation of a discriminative feature set for efficient prediction of hot spots in protein interfaces. PMID:26934646
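    A rough sketch of this pipeline, with placeholder data standing in for the 108 descriptors, might use scikit-learn as follows; the greedy mutual-information ranking below is a simplified stand-in for the mRMR plus exhaustive-search procedure described, not the authors' exact implementation.

```python
# Simplified mRMR-style ranking (relevance minus redundancy via mutual
# information) followed by an SVM. X and y are random placeholders.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def mrmr_rank(X, y, k):
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]          # start from most relevant
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([
                mutual_info_regression(X[:, [j]], X[:, s], random_state=0)[0]
                for s in selected])
            score = relevance[j] - redundancy       # mRMR criterion
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

X = np.random.rand(200, 108)                        # placeholder feature matrix
y = np.random.randint(0, 2, 200)                    # placeholder hot spot labels
top3 = mrmr_rank(X, y, k=3)
clf = SVC(kernel="rbf", class_weight="balanced")
print(top3, cross_val_score(clf, X[:, top3], y, cv=5, scoring="f1").mean())
```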

  17. Chimpanzees prioritise social information over pre-existing behaviours in a group context but not in dyads.

    PubMed

    Watson, Stuart K; Lambeth, Susan P; Schapiro, Steven J; Whiten, Andrew

    2018-05-01

    How animal communities arrive at homogeneous behavioural preferences is a central question for studies of cultural evolution. Here, we investigated whether chimpanzees (Pan troglodytes) would relinquish a pre-existing behaviour to adopt an alternative demonstrated by an overwhelming majority of group mates; in other words, whether chimpanzees behave in a conformist manner. In each of five groups of chimpanzees (N = 37), one individual was trained on one method of opening a two-action puzzle box to obtain food, while the remaining individuals learned the alternative method. Over 5 h of open access to the apparatus in a group context, 4 of the 5 'minority' individuals explored the majority method, and three of these used the new method in the majority of trials. Those that switched did so after observing only a small subset of their group, thereby not matching conventional definitions of conformity. In a further 'Dyad' condition, six pairs of chimpanzees were trained on alternative methods and then given access to the task together. Only one of these individuals ever switched method. The number of times that minority and Dyad individuals observed the method they had not been trained on did not influence whether they themselves switched to it. In a final 'Asocial' condition, individuals (N = 10) did not receive social information and did not deviate from their first-learned method. We argue that these results demonstrate an important influence of social context on the prioritisation of social information over pre-existing methods, which can result in group homogeneity of behaviour.

  18. A novel signal amplification technology for ELISA based on catalyzed reporter deposition. Demonstration of its applicability for measuring aflatoxin B1.

    PubMed

    Bhattacharya, D; Bhattacharya, R; Dhar, T K

    1999-11-19

    In an earlier communication we described a novel signal amplification technology termed Super-CARD, which is able to significantly improve antigen detection sensitivity in conventional Dot-ELISA by approximately 10^5-fold. The method utilizes hitherto unreported synthesized electron-rich proteins containing multiple phenolic groups which, when immobilized over a solid phase as a blocking agent, markedly increase the signal amplification capability of the existing CARD method (Bhattacharya, R., Bhattacharya, D., Dhar, T.K., 1999. A novel signal amplification technology based on catalyzed reporter deposition and its application in a Dot-ELISA with ultra high sensitivity. J. Immunol. Methods 227, 31.). In this paper we describe the utilization of this Super-CARD amplification technique in ELISA and its applicability for the rapid determination of aflatoxin B1 (AFB1) in infected seeds. Using this method under identical conditions, the increase in absorbance over the CARD method was approximately 400%. The limit of detection of AFB1 by this method was 0.1 pg/well, a 5-fold sensitivity enhancement over the optimized CARD ELISA. Furthermore, the total incubation time was reduced to 16 min, compared with 50 min for the CARD method. Assay specificity was not adversely affected, and the amount of AFB1 measured in seed extracts correlated well with the values obtained by conventional ELISA.

  19. Evaluation of alternatives to sound barrier walls.

    DOT National Transportation Integrated Search

    2013-06-01

    The existing INDOT noise wall specification was developed primarily on the basis of knowledge of conventional precast concrete panel systems. Currently, the constructed cost of conventional noise walls is approximately $2 million per linear...

  20. An ex vivo approach to botanical-drug interactions: A proof of concept study

    PubMed Central

    Wang, Xinwen; Zhu, Hao-Jie; Munoz, Juliana; Gurley, Bill J.; Markowitz, John S.

    2015-01-01

    Ethnopharmacological relevance: Botanical medicines are frequently used in combination with therapeutic drugs, imposing a risk of harmful botanical-drug interactions (BDIs). Among the existing BDI evaluation methods, clinical studies are the most desirable, but due to their expense and protracted time-line for completion, conventional in vitro methodologies remain the most frequently used BDI assessment tools. However, many predictions generated from in vitro studies are inconsistent with clinical findings. Accordingly, the present study aimed to develop a novel ex vivo approach for BDI assessment and expand the safety evaluation methodology in applied ethnopharmacological research. Materials and Methods: This approach differs from conventional in vitro methods in that, rather than botanical extracts or individual phytochemicals being prepared in artificial buffers, human plasma/serum collected from a limited number of subjects administered botanical supplements was utilized to assess BDIs. To validate the methodology, human plasma/serum samples collected from healthy subjects administered either milk thistle or goldenseal extracts were utilized in incubation studies to determine their potential inhibitory effects on CYP2C9 and CYP3A4/5, respectively. Silybin A and B, two principal milk thistle phytochemicals, and hydrastine and berberine, the purported active constituents of goldenseal, were evaluated in both phosphate-buffer and human-plasma based in vitro incubation systems. Results: Ex vivo study results were consistent with formal clinical study findings for the effect of milk thistle on the disposition of tolbutamide, a CYP2C9 substrate, and for goldenseal's influence on the pharmacokinetics of midazolam, a widely accepted CYP3A4/5 substrate. Compared to conventional in vitro BDI methodologies, the introduction of human plasma into the in vitro study model changed the observed inhibitory effects of silybin A, silybin B, hydrastine and berberine on CYP2C9 and CYP3A4/5, respectively, results which more closely mirrored those generated in clinical studies. Conclusions: Data from conventional buffer-based in vitro studies were less predictive than the ex vivo assessments. Thus, this novel ex vivo approach may be more effective at predicting clinically relevant BDIs than conventional in vitro methods. PMID:25623616

  1. Distributive Distillation Enabled by Microchannel Process Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arora, Ravi

    The application of microchannel technology for distributive distillation was studied to achieve the Grand Challenge goals of 25% energy savings and 10% return on investment. In Task 1, a detailed study was conducted and two distillation systems were identified that would meet the Grand Challenge goals if microchannel distillation technology was used. Material and heat balance calculations were performed to develop process flowsheet designs for the two distillation systems in Task 2. The process designs focused on two methods of integrating the microchannel technology: 1) integrating microchannel distillation with an existing conventional column, and 2) microchannel distillation for new plants. A design concept for a modular microchannel distillation unit was developed in Task 3. In Task 4, Ultrasonic Additive Machining (UAM) was evaluated as a manufacturing method for microchannel distillation units. However, it was found that significant development work would be required to establish process parameters for using UAM in commercial distillation manufacturing. Two alternate manufacturing methods were therefore explored, and both were experimentally tested to confirm their validity. The conceptual design of the microchannel distillation unit (Task 3) was combined with the manufacturing methods developed in Task 4 and the flowsheet designs of Task 2 to estimate the cost of the microchannel distillation unit, which was compared to a conventional distillation column. The best results were for a methanol-water separation unit for use in a biodiesel facility. For this application, microchannel distillation was found to be more cost effective than the conventional system and capable of meeting the DOE Grand Challenge performance requirements.

  2. Data based abnormality detection

    NASA Astrophysics Data System (ADS)

    Purwar, Yashasvi

    Data based abnormality detection is a growing research field focussed on extracting information from feature-rich data. Such methods are considered non-intrusive and non-destructive in nature, which gives them a clear advantage over conventional methods. In this study, we explore different streams of data-based anomaly detection. We propose extensions and revisions to an existing valve stiction detection algorithm, supported by an industrial case study. We also explore the area of image analysis and propose a complete solution for malaria diagnosis. The proposed method is tested on images provided by the pathology laboratory at Alberta Health Services. We also address the robustness and practicality of the proposed solution.

  3. Asymmetric design for Compound Elliptical Concentrators (CEC) and its geometric flux implications

    NASA Astrophysics Data System (ADS)

    Jiang, Lun; Winston, Roland

    2015-08-01

    The asymmetric compound elliptical concentrator (CEC) has been a less discussed subject in the nonimaging optics community. The conventional way of understanding an ideal concentrator is based on maximizing the concentration ratio for a uniform acceptance angle. Although such an angle does not exist in the case of the CEC, the thermodynamic laws still hold and we can produce concentrators with the maximum concentration ratio allowed by them. Here we restate the problem and use the string method to solve this general problem. Building on the solution, we can discover groups of such ideal concentrators using the geometric flux field, or flowline, method.
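    For reference, the ideal sine-law limits for a uniform acceptance half-angle, the baseline against which the CEC generalization is framed, can be computed directly. This snippet is background material only, not part of the paper's string-method construction.

```python
# Thermodynamic (etendue) limit on geometric concentration for a uniform
# acceptance half-angle theta: 2D trough and 3D rotational cases. For a CEC
# the acceptance angle is replaced by a finite source, so this is only the
# baseline bound referred to in the abstract.
import numpy as np

def max_concentration(theta_deg, dimension=2):
    s = np.sin(np.radians(theta_deg))
    return 1.0 / s if dimension == 2 else 1.0 / s ** 2

print(max_concentration(10.0), max_concentration(10.0, dimension=3))
```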

  4. Development of an Improved Simulator for Chemical and Microbial EOR Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, Gary A.; Sepehrnoori, Kamy; Delshad, Mojdeh

    2000-09-11

    The objective of this research was to extend the capability of an existing simulator (UTCHEM) to improved oil recovery methods that use surfactants, polymers, gels, alkaline chemicals, microorganisms and foam, as well as various combinations of these, in both conventional and naturally fractured oil reservoirs. Task 1 is the addition of a dual-porosity model for chemical improved oil recovery processes in naturally fractured oil reservoirs. Task 2 is the addition of a foam model. Task 3 addresses several numerical and coding enhancements that will greatly improve the versatility and performance of UTCHEM. Task 4 is the enhancement of physical property models.

  5. Eutectic Formation During Solidification of Ni-Based Single-Crystal Superalloys with Additional Carbon

    NASA Astrophysics Data System (ADS)

    Wang, Fu; Ma, Dexin; Bührig-Polaczek, Andreas

    2017-11-01

    The nucleation behavior of γ/γ' eutectics during the solidification of a single-crystal superalloy with additional carbon was investigated using the directional solidification quenching method. The results show that nucleation of the γ/γ' eutectics can occur directly on the existing γ dendrites, directly in the remaining liquid, or on the primary MC-type carbides. The γ/γ' eutectics formed through the latter two mechanisms have crystal orientations different from that of the γ matrix. This suggests that conventional Ni-based single-crystal superalloy castings with additional carbon only guarantee the monocrystallinity of the γ matrix and some γ/γ' eutectics; in addition to the carbides, other misoriented polycrystalline microstructures exist in macroscopically considered "single-crystal" superalloy castings.

  6. Initial Assessment of U.S. Refineries for Purposes of Potential Bio-Based Oil Insertions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Charles J.; Jones, Susanne B.; Padmaperuma, Asanga B.

    2013-04-01

    In order to meet U.S. biofuel objectives over the coming decade, the conversion of a broad range of biomass feedstocks, using diverse processing options, will be required. Further, the production of both gasoline and diesel biofuels will employ biomass conversion methods that produce wide-boiling-range intermediate oils requiring treatment similar to conventional refining processes (i.e., fluid catalytic cracking, hydrocracking, and hydrotreating). As such, it is widely recognized that leveraging existing U.S. petroleum refining infrastructure is key to reducing overall capital demands. This study examines how existing U.S. refining locations, capacities and conversion capabilities match in geography and processing capabilities with the needs projected from anticipated biofuels production.

  7. Hybrid Technology of Hard Coal Mining from Seams Located at Great Depths

    NASA Astrophysics Data System (ADS)

    Czaja, Piotr; Kamiński, Paweł; Klich, Jerzy; Tajduś, Antoni

    2014-10-01

    Learning to control fire changed the life of man considerably. Learning to convert the energy derived from combustion of coal or hydrocarbons into another type of energy, such as steam pressure or electricity, put him on the path of scientific and technological revolution, stimulating dynamic development. Since the dawn of time, fossil fuels have been serving as mankind's natural reservoir of energy in an increasingly great capacity. A completely incomprehensible refusal to use fossil fuels causes some local populations, who do not possess a comprehensive knowledge of the subject, to protest and even generate social conflicts as an expression of their dislike for the extraction of minerals. Our times are marked by the search for more efficient ways of utilizing fossil fuels by introducing non-conventional technologies for exploiting conventional energy sources. During apartheid, South Africa demonstrated that cheap coal can easily satisfy total demand for liquid and gaseous fuels. In view of the current high prices of hydrocarbon media (oil and gas), gasification or liquefaction of coal seems to be an innovative technology convergent with the contemporary expectations of both energy producers and environmentalists. Known mainly from literature reports, underground coal gasification technologies can be reduced to two basic methods: the shaftless (drilling) method, in which the gasified seam is uncovered using boreholes drilled from the surface, and the shaft method, in which the existing infrastructure of underground mines is used to uncover the seams. This paper presents a hybrid shaft-drilling approach to the acquisition of primary energy carriers (methane and syngas) from coal seams located at great depths. A major advantage of this method is that, with conventional coal mining technology, seams located at great depths must be classified as off-balance (uneconomic) reserves, while the hybrid method of underground gasification enables them to become a source of additional energy for the economy. It should be noted, however, that the shaft-drilling method cannot be considered an alternative to conventional methods of coal extraction, but rather a complementary and cheaper way of utilizing resources located almost beyond the technical capabilities of conventional extraction methods, due to the associated natural hazards and the high costs of combating them. This article presents a completely different approach to the issue of underground coal gasification. Repurposing the already fully depreciated mining infrastructure for the gasification process may result in a large value added of synthesis gas production and a very positive economic effect.

  8. Creating Objects and Object Categories for Studying Perception and Perceptual Learning

    PubMed Central

    Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay

    2012-01-01

    In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2]. Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (also see refs. 7, 8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter [5,9,10], and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9,12,13]. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics [15,16]. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects [9,13]. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis. PMID:23149420

  9. Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models

    PubMed Central

    2013-01-01

    Background: Comparison of outcomes between populations or centres may be confounded by any casemix differences, and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix, which overcomes these problems. Methods: Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Results: Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Conclusions: Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons. PMID:24168424
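    A minimal sketch of DRS as described, with illustrative column names and whole-population reference weights assumed, might look like this: score each admission with the casemix model, bin by predicted risk, compute centre-specific event rates per bin, then take a weighted sum.

```python
# Illustrative direct risk standardisation (DRS). Column names, the number
# of risk bins and the reference-weight choice are assumptions for the demo.
import numpy as np
import pandas as pd

def direct_risk_standardise(df, bins=10):
    """df needs columns: centre, event (0/1), risk (casemix-model probability)."""
    df = df.copy()
    df["risk_cat"] = pd.qcut(df["risk"], q=bins, duplicates="drop")
    # Reference weights: share of the whole population in each risk category
    weights = df["risk_cat"].value_counts(normalize=True).sort_index()
    rates = (df.groupby(["centre", "risk_cat"], observed=False)["event"]
               .mean().unstack())
    return (rates * weights).sum(axis=1)   # weighted sum per centre

rng = np.random.default_rng(0)
demo = pd.DataFrame({"centre": rng.choice(list("ABC"), 10000),
                     "risk": rng.beta(2, 8, 10000)})
demo["event"] = rng.random(10000) < demo["risk"]
print(direct_risk_standardise(demo))
```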

  10. A Deep Denoising Autoencoder Approach to Improving the Intelligibility of Vocoded Speech in Cochlear Implant Simulation.

    PubMed

    Lai, Ying-Hui; Chen, Fei; Wang, Syu-Siang; Lu, Xugang; Tsao, Yu; Lee, Chin-Hui

    2017-07-01

    In a cochlear implant (CI) speech processor, noise reduction (NR) is a critical component for enabling CI users to attain improved speech perception under noisy conditions. Identifying an effective NR approach has long been a key topic in CI research. Recently, a deep denoising autoencoder (DDAE) based NR approach was proposed and shown to be effective in restoring clean speech from noisy observations. It was also shown that the DDAE could provide better performance than several existing NR methods in standardized objective evaluations. Following this success with normal speech, this paper further investigated the performance of DDAE-based NR in improving the intelligibility of envelope-based vocoded speech, which simulates the speech signal processing in existing CI devices. We compared the speech intelligibility performance of DDAE-based NR and conventional single-microphone NR approaches using the noise vocoder simulation. The results of both objective evaluations and a listening test showed that, under conditions of nonstationary noise distortion, DDAE-based NR yielded higher intelligibility scores than conventional NR approaches. This study confirmed that DDAE-based NR could potentially be integrated into a CI processor to provide more benefits to CI users under noisy conditions.
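    As an illustration of the DDAE component only (not the paper's exact architecture, training data, or the vocoder stage), a minimal denoising autoencoder that maps noisy spectral frames to clean targets can be sketched in PyTorch; all layer sizes and data below are placeholders.

```python
# Minimal denoising-autoencoder sketch: noisy log-spectra in, clean out.
# Placeholder sizes and random tensors stand in for real training data.
import torch
import torch.nn as nn

class DDAE(nn.Module):
    def __init__(self, n_freq=257, n_hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_freq, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_freq))

    def forward(self, noisy):
        return self.net(noisy)

model = DDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
noisy = torch.randn(32, 257)        # placeholder noisy frames batch
clean = torch.randn(32, 257)        # placeholder clean targets
for _ in range(10):                  # one would loop over a real dataset
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()
```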

  11. Field evaluations of newly available "interference-free" monitors for nitrogen dioxide and ozone at near-road and conventional National Ambient Air Quality Standards compliance sites.

    PubMed

    Leston, Alan R; Ollison, Will M

    2017-11-01

    Long-standing measurement techniques for determining ground-level ozone (O3) and nitrogen dioxide (NO2) are known to be biased by interfering compounds that result in overestimates of high O3 and NO2 ambient concentrations under conducive conditions. An increasing near-ground O3 gradient (NGOG) with increasing height above ground level is also known to exist. Both the interference bias and the NGOG were investigated by comparing data from a conventional Federal Equivalent Method (FEM) O3 photometer and an identical monitor upgraded with an "interference-free" nitric oxide O3 scrubber that alternately sampled at 2 m and 6.2 m inlet heights above ground level (AGL). An intercomparison was also made between a conventional nitrogen oxide (NOx) chemiluminescence Federal Reference Method (FRM) monitor and a new "direct-measure" NO2/NOx 405 nm photometer at a near-road air quality measurement site. Results indicate that the O3 monitor with the upgraded scrubber recorded lower regulatory-oriented concentrations than the deployed conventional metal-oxide-scrubbed monitor, and that O3 concentrations at 6.2 m AGL were higher than concentrations at 2.0 m AGL, the nominal nose height of outdoor populations. Also, the new direct-measure NO2 photometer recorded generally lower NO2 regulatory-oriented concentrations than the conventional FRM chemiluminescence monitor, reporting lower daily maximum hourly average concentrations than the conventional monitor about 3 of every 5 days. Employing bias-prone instruments for measurement of ambient ozone or nitrogen dioxide from inlets at inappropriate heights above ground level may result in collection of positively biased data. This paper discusses tests of new regulatory instruments, recent developments in bias-free ozone and nitrogen dioxide measurement technology, and the presence/extent of a near-ground O3 gradient (NGOG). Collection of unbiased, monitor-inlet-height-appropriate data is crucial for determining accurate design values and meeting National Ambient Air Quality Standards.
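    The day-by-day comparison reported here is straightforward to reproduce on paired hourly series. The sketch below, with hypothetical column names and synthetic data, computes each monitor's daily maximum hourly value and the fraction of days on which the direct-measure instrument reads lower.

```python
# Illustrative daily-max comparison of two co-located NO2 monitors.
# Column names and the synthetic series are assumptions for the demo.
import numpy as np
import pandas as pd

idx = pd.date_range("2016-06-01", periods=24 * 60, freq="h")   # hourly data
rng = np.random.default_rng(0)
base = 20 + 10 * np.sin(np.arange(len(idx)) * 2 * np.pi / 24)  # diurnal cycle
df = pd.DataFrame({"no2_frm": base + rng.normal(2, 3, len(idx)),
                   "no2_405nm": base + rng.normal(0, 3, len(idx))}, index=idx)

daily_max = df.resample("D").max()          # daily maximum of hourly values
frac_lower = (daily_max["no2_405nm"] < daily_max["no2_frm"]).mean()
print(f"direct-measure monitor lower on {frac_lower:.0%} of days")
```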

  12. Ultratrace level determination and quantitative analysis of kidney injury biomarkers in patient samples attained by zinc oxide nanorods

    NASA Astrophysics Data System (ADS)

    Singh, Manpreet; Alabanza, Anginelle; Gonzalez, Lorelis E.; Wang, Weiwei; Reeves, W. Brian; Hahm, Jong-In

    2016-02-01

    Determining ultratrace amounts of protein biomarkers in patient samples in a straightforward and quantitative manner is extremely important for early disease diagnosis and treatment. Here, we successfully demonstrate the novel use of zinc oxide nanorods (ZnO NRs) in the ultrasensitive and quantitative detection of two acute kidney injury (AKI)-related protein biomarkers, tumor necrosis factor (TNF)-α and interleukin (IL)-8, directly from patient samples. We first validate the ZnO NRs-based IL-8 results via comparison with those obtained from using a conventional enzyme-linked immunosorbent method in samples from 38 individuals. We further assess the full detection capability of the ZnO NRs-based technique by quantifying TNF-α, whose levels in human urine are often below the detection limits of conventional methods. Using the ZnO NR platforms, we determine the TNF-α concentrations of all 46 patient samples tested, down to the fg per mL level. Subsequently, we screen for TNF-α levels in approximately 50 additional samples collected from different patient groups in order to demonstrate a potential use of the ZnO NRs-based assay in assessing cytokine levels useful for further clinical monitoring. Our research efforts demonstrate that ZnO NRs can be straightforwardly employed in the rapid, ultrasensitive, quantitative, and simultaneous detection of multiple AKI-related biomarkers directly in patient urine samples, providing an unparalleled detection capability beyond those of conventional analysis methods. Additional key advantages of the ZnO NRs-based approach include a fast detection speed, low-volume assay condition, multiplexing ability, and easy automation/integration capability with existing fluorescence instrumentation. Therefore, we anticipate that our ZnO NRs-based detection method will be highly beneficial for overcoming the frequent challenges in early biomarker development and treatment assessment, pertaining to the facile and ultrasensitive quantification of hard-to-trace biomolecules.

  13. Do national drug control laws ensure the availability of opioids for medical and scientific purposes?

    PubMed Central

    Brown, Marty Skemp; Maurer, Martha A

    2014-01-01

    Objective: To determine whether national drug control laws ensure that opioid drugs are available for medical and scientific purposes, as intended by the 1972 Protocol amendment to the 1961 Single Convention on Narcotic Drugs. Methods: The authors examined whether the text of a convenience sample of drug laws from 15 countries: (i) acknowledged that opioid drugs are indispensable for the relief of pain and suffering; (ii) recognized that government was responsible for ensuring the adequate provision of such drugs for medical and scientific purposes; (iii) designated an administrative body for implementing international drug control conventions; and (iv) acknowledged a government’s intention to implement international conventions, including the Single Convention. Findings: Most national laws were found not to contain measures that ensured adequate provision of opioid drugs for medical and scientific purposes. Moreover, the model legislation provided by the United Nations Office on Drugs and Crime did not establish an obligation on national governments to ensure the availability of these drugs for medical use. Conclusion: To achieve consistency with the Single Convention, as well as with associated resolutions and recommendations of international bodies, national drug control laws and model policies should be updated to include measures that ensure drug availability to balance the restrictions imposed by the existing drug control measures needed to prevent the diversion and nonmedical use of such drugs. PMID:24623904

  14. A survey of hybrid Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Saeed, Adnan S.; Younes, Ahmad Bani; Cai, Chenxiao; Cai, Guowei

    2018-04-01

    This article presents a comprehensive overview of recent advances in miniature hybrid Unmanned Aerial Vehicles (UAVs). For now, two conventional types, i.e., the fixed-wing UAV and the Vertical Takeoff and Landing (VTOL) UAV, dominate the miniature UAVs. Each type has its own inherent limitations on flexibility, payload, flight range, cruising speed, takeoff and landing requirements, and endurance. The newer type, named the hybrid UAV, which integrates the beneficial features of both conventional types, has recently gained popularity and interest. In this survey paper, a systematic categorization method for hybrid UAV platform designs is first introduced, presenting the technical features and representative examples. Next, hybrid UAV flight dynamics models and flight control strategies are explained, addressing several representative modeling and control studies. In addition, key observations, existing challenges and conclusive remarks based on the conducted review are discussed accordingly.

  15. Application of LANDSAT and Skylab data for land use mapping in Italy. [emphasizing the Alps Mountains

    NASA Technical Reports Server (NTRS)

    Bodechtel, J.; Nithack, J.; Dibernardo, G.; Hiller, K.; Jaskolla, F.; Smolka, A.

    1975-01-01

    Utilizing LANDSAT and Skylab multispectral imagery from 1972 and 1973, a land use map of the mountainous regions of Italy was evaluated at a scale of 1:250,000. Seven level I categories were identified by conventional methods of photointerpretation. Images of multispectral scanner (MSS) bands 5 and 7, or their equivalents, were mainly used. Areas of less than 200 by 200 m were classified, and standard procedures were established for interpretation of multispectral satellite imagery. Land use maps were produced for central and southern Europe, indicating that the existing land use maps could be updated and optimized. The complexity of European land use patterns, the intensive morphology of young mountain ranges, and time-cost calculations are the reasons that the applied conventional techniques are superior to automatic evaluation.

  16. [Alternative medicine: faith or science?].

    PubMed

    Pletscher, A

    1990-04-21

    For the success of both alternative and scientific (conventional) medicine, factors such as the psychological influence of the doctor, loving care, human affection, the patient's belief in the treatment, the suggestive power of attractive (even unproven) theories, dogmas and chance events (e.g. spontaneous remissions) etc. play a major role. Some practices of alternative medicine have a particularly strong appeal to the non-rational side of the human being. Conventional medicine includes a component which is based on scientific and statistical methods. The possibility that in alternative medicine principles and effects exist which are not (yet) known to scientific medicine, but which match up to scientific criteria, cannot be excluded. However, up to now this has not been convincingly proven. The difficulties which arise in the elucidation of this problem are discussed in the light of examples from the literature and some experiments of our own.

  17. Vitrification of radioactive contaminated soil by means of microwave energy

    NASA Astrophysics Data System (ADS)

    Yuan, Xun; Qing, Qi; Zhang, Shuai; Lu, Xirui

    2017-03-01

    Simulated radioactive contaminated soil was successfully vitrified by microwave sintering technology, and the solidified bodies were systematically studied by Raman spectroscopy, XRD and SEM-EDX. The Raman results show that the solidified body transformed more completely to an amorphous structure at higher temperature (1200 °C). The XRD results show that metamictization was significantly enhanced by the prolonged holding time at 1200 °C under microwave sintering, whereas with conventional sintering technology other crystal diffraction peaks, besides that of silica at 2θ = 27.830°, still existed after treatment at 1200 °C for a much longer time. SEM-EDX disclosed the micro-morphology of the sample and the uniform distribution of the Nd element. All the results show that microwave technology performs vitrification better than the conventional sintering method in solidifying radioactive contaminated soil.

  18. Three-dimensional fluorescent microscopy via simultaneous illumination and detection at multiple planes.

    PubMed

    Ma, Qian; Khademhosseinieh, Bahar; Huang, Eric; Qian, Haoliang; Bakowski, Malina A; Troemel, Emily R; Liu, Zhaowei

    2016-08-16

    The conventional optical microscope is an inherently two-dimensional (2D) imaging tool. The objective lens, eyepiece and image sensor are all designed to capture light emitted from a 2D 'object plane'. Existing technologies, such as confocal or light sheet fluorescence microscopy, have to utilize mechanical scanning, a time-multiplexing process, to capture a 3D image. In this paper, we present a 3D optical microscopy method based upon simultaneously illuminating and detecting multiple focal planes. This is implemented by adding two diffractive optical elements to modify the illumination and detection optics. We demonstrate that the image quality of this technique is comparable to conventional light sheet fluorescence microscopy, with the advantages of simultaneous imaging of multiple axial planes and a reduced number of scans required to image the whole sample volume.

  19. A Kernel-Based Low-Rank (KLR) Model for Low-Dimensional Manifold Recovery in Highly Accelerated Dynamic MRI.

    PubMed

    Nakarmi, Ukash; Wang, Yanhua; Lyu, Jingyuan; Liang, Dong; Ying, Leslie

    2017-11-01

    While many low-rank and sparsity-based approaches have been developed for accelerated dynamic magnetic resonance imaging (dMRI), they all use low rankness or sparsity in the input space, overlooking the intrinsic nonlinear correlation in most dMRI data. In this paper, we propose a kernel-based framework to allow nonlinear manifold models in reconstruction from sub-Nyquist data. Within this framework, many existing algorithms can be extended to the kernel framework with nonlinear models. In particular, we have developed a novel algorithm with a kernel-based low-rank model generalizing the conventional low-rank formulation. The algorithm consists of manifold learning using the kernel, low-rank enforcement in feature space, and preimaging with data consistency. Extensive simulation and experimental results show that the proposed method surpasses the conventional low-rank-modeled approaches for dMRI.
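    A heavily simplified sketch of the low-rank-in-feature-space idea is shown below; the preimaging and data-consistency steps of the paper are omitted, and the Gaussian kernel, sizes and data are placeholder assumptions.

```python
# Toy sketch: build a kernel Gram matrix from temporal profiles and keep only
# the leading eigenpairs as a low-rank model in feature space.
import numpy as np

def kernel_low_rank(X, rank, sigma=1.0):
    """X: (n_voxels, n_frames) temporal profiles. Returns rank-r kernel factors."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))           # Gaussian kernel Gram matrix
    w, V = np.linalg.eigh(K)                      # ascending eigenvalues
    keep = np.argsort(w)[::-1][:rank]             # leading eigenpairs only
    return w[keep], V[:, keep]

X = np.random.rand(100, 40)                       # placeholder profiles
w, V = kernel_low_rank(X, rank=5)
```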

  20. Fully-relativistic full-potential multiple scattering theory: A pathology-free scheme

    NASA Astrophysics Data System (ADS)

    Liu, Xianglin; Wang, Yang; Eisenbach, Markus; Stocks, G. Malcolm

    2018-03-01

    The Green function plays an essential role in the Korringa-Kohn-Rostoker (KKR) multiple scattering method. In practice, it is constructed from the regular and irregular solutions of the local Kohn-Sham equation, and robust methods exist for spherical potentials. However, when applied to a non-spherical potential, numerical errors from the irregular solutions give rise to pathological behaviors of the charge density at small radius. Here we present a full-potential implementation of the fully-relativistic KKR method to perform ab initio self-consistent calculations by directly solving the Dirac differential equations using the generalized variable phase (sine and cosine matrices) formalism of Liu et al. (2016). The pathology around the origin is completely eliminated by carrying out the energy integration of the single-site Green function along the real axis. By using an efficient pole-searching technique to identify the zeros of the well-behaved Jost matrices, we demonstrate that this scheme is numerically stable and computationally efficient, with speed comparable to the conventional contour energy integration method, while free of the pathology problem of the charge density. As an application, this method is utilized to investigate the crystal structures of polonium and their bulk properties, which is challenging for a conventional real-energy scheme. The noble metals are also calculated, both as a test of our method and to study the relativistic effects.

  1. A New Finite Difference Q-compensated RTM Algorithm in Tilted Transverse Isotropic (TTI) Media

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Hu, W.; Ning, J.

    2017-12-01

    Attenuating anisotropic geological bodies are difficult to image with conventional migration methods. In such scenarios, recorded seismic data suffer greatly from both amplitude decay and phase distortion, resulting in degraded resolution, poor illumination and incorrect migration depth in imaging results. To efficiently obtain high-quality images, we propose a novel TTI QRTM algorithm based on the Generalized Standard Linear Solid model, combined with a unique multi-stage optimization technique to simultaneously correct the decayed amplitude and the distorted phase velocity. Numerical tests demonstrate that our TTI QRTM algorithm effectively corrects migration depth, significantly improves illumination, and enhances resolution within and below low-Q regions. The result of our new method is very close to the reference RTM image, while QRTM without TTI cannot produce a correct image. Compared to the conventional QRTM method based on a pseudo-spectral operator for fractional Laplacian evaluation, our method is more computationally efficient for large-scale applications and more suitable for GPU acceleration. With the current multi-stage dispersion optimization scheme, this TTI QRTM method performs best in the frequency range 10-70 Hz and could be used in a wider frequency range. Furthermore, as this method can also handle frequency-dependent Q, it has potential to be applied in imaging deep structures where low Q exists, such as subduction zones, volcanic zones or fault zones with passive source observations.

  2. Fully-relativistic full-potential multiple scattering theory: A pathology-free scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xianglin; Wang, Yang; Eisenbach, Markus

    The Green function plays an essential role in the Korringa–Kohn–Rostoker (KKR) multiple scattering method. In practice, it is constructed from the regular and irregular solutions of the local Kohn–Sham equation, and robust methods exist for spherical potentials. However, when applied to a non-spherical potential, numerical errors from the irregular solutions give rise to pathological behaviors of the charge density at small radius. Here we present a full-potential implementation of the fully-relativistic KKR method to perform ab initio self-consistent calculations by directly solving the Dirac differential equations using the generalized variable phase (sine and cosine matrices) formalism of Liu et al. (2016). The pathology around the origin is completely eliminated by carrying out the energy integration of the single-site Green function along the real axis. By using an efficient pole-searching technique to identify the zeros of the well-behaved Jost matrices, we demonstrate that this scheme is numerically stable and computationally efficient, with speed comparable to the conventional contour energy integration method, while free of the pathology problem of the charge density. As an application, this method is utilized to investigate the crystal structures of polonium and their bulk properties, which is challenging for a conventional real-energy scheme. The noble metals are also calculated, both as a test of our method and to study the relativistic effects.

  3. Fully-relativistic full-potential multiple scattering theory: A pathology-free scheme

    DOE PAGES

    Liu, Xianglin; Wang, Yang; Eisenbach, Markus; ...

    2017-10-28

    The Green function plays an essential role in the Korringa–Kohn–Rostoker (KKR) multiple scattering method. In practice, it is constructed from the regular and irregular solutions of the local Kohn–Sham equation, and robust methods exist for spherical potentials. However, when applied to a non-spherical potential, numerical errors from the irregular solutions give rise to pathological behaviors of the charge density at small radius. Here we present a full-potential implementation of the fully-relativistic KKR method to perform ab initio self-consistent calculations by directly solving the Dirac differential equations using the generalized variable phase (sine and cosine matrices) formalism of Liu et al. (2016). The pathology around the origin is completely eliminated by carrying out the energy integration of the single-site Green function along the real axis. By using an efficient pole-searching technique to identify the zeros of the well-behaved Jost matrices, we demonstrate that this scheme is numerically stable and computationally efficient, with speed comparable to the conventional contour energy integration method, while free of the pathology problem of the charge density. As an application, this method is utilized to investigate the crystal structures of polonium and their bulk properties, which is challenging for a conventional real-energy scheme. The noble metals are also calculated, both as a test of our method and to study the relativistic effects.

  4. 3D Reconstruction from Multi-View Medical X-Ray Images - Review and Evaluation of Existing Methods

    NASA Astrophysics Data System (ADS)

    Hosseinian, S.; Arefi, H.

    2015-12-01

    The 3D concept is extremely important in clinical studies of the human body. Accurate 3D models of bony structures are currently required in clinical routine for diagnosis, patient follow-up, surgical planning, computer-assisted surgery and biomechanical applications. However, conventional 3D medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) have serious limitations, such as acquisition only in non-weight-bearing positions, cost, and high radiation dose (for CT). Therefore, 3D reconstruction methods from biplanar X-ray images have been considered as reliable alternatives for obtaining accurate 3D models with low radiation dose in weight-bearing positions. Different photogrammetry-based methods have been proposed for 3D reconstruction from X-ray images, and these should be assessed. In this paper, after demonstrating the principles of 3D reconstruction from X-ray images, different existing methods of 3D reconstruction of bony structures from radiographs are classified and evaluated with various metrics, and their advantages and disadvantages are discussed. Finally, the presented methods are compared with respect to several criteria such as accuracy, reconstruction time and applications. Each method has advantages and disadvantages that should be considered for a specific application.
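    A geometric building block shared by many of the reviewed photogrammetric methods is triangulation of corresponding landmarks from two calibrated views. The sketch below shows standard linear (DLT) triangulation with arbitrary stand-in projection matrices; it illustrates the principle only, not any specific reviewed method.

```python
# Linear (DLT) triangulation of a 3-D landmark from two calibrated views.
# The projection matrices here are arbitrary stand-ins for a real calibration.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3-D point from its 2-D projections in two views."""
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)        # null-space solution in homogeneous form
    X = vt[-1]
    return X[:3] / X[3]                 # dehomogenise

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # reference view
P2 = np.hstack([np.eye(3), np.array([[-50.0], [0.0], [0.0]])])  # shifted view
X_true = np.array([10.0, 20.0, 300.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))    # recovers approximately [10, 20, 300]
```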

  5. A View of Children in a Global Age: Concerning the Convention of Children's Rights

    ERIC Educational Resources Information Center

    Horio, Teruhisa

    2006-01-01

    After the establishment of the Convention on the Rights of the Child, the implementation of the Convention became the obligation of the government of each country and the responsibility of every society. However, in reality, many infringements on the rights of children, both visible and invisible, exist not only due to starvation, insecurity and…

  6. Evaluation of maternal serum alpha-foetoprotein assay using dry blood spot samples.

    PubMed

    González, C; Guerrero, J M; Elorza, F L; Molinero, P; Goberna, R

    1988-02-01

    The quantification of alpha-foetoprotein in dry blood spots from pregnant women was evaluated, using a conventional radioimmunoassay (RIA) with a monospecific antibody. The stability of alpha-foetoprotein in dry blood spots on filter paper was evaluated with respect to mailing, distances travelled, and the existence of high summer temperatures in our region. The results obtained show that blood alpha-foetoprotein is stable on dry filter-paper spots sent by mail, and for up to four weeks at 4, 25 and 37 °C. The analytical method used has a minimal detectable concentration of 10 ± 1.9 international kilo-units/l. Both inter- and intra-assay variabilities are smaller than 10%, and the method provides results comparable with those of conventional serum assays. Results from dry blood spots and serum samples (the latter analysed by both RIA and a two-site enzyme immunoassay) exhibited a good correlation (r = 0.98 and r = 0.97, p < 0.001). The design of the assay and the nature of the samples make this method suitable for screening programmes for the antenatal detection of open neural tube defects.

  7. Hydroxylapatite nanoparticles: fabrication methods and medical applications

    NASA Astrophysics Data System (ADS)

    Okada, Masahiro; Furuzono, Tsutomu

    2012-12-01

    Hydroxylapatite (or hydroxyapatite, HAp) exhibits excellent biocompatibility with various kinds of cells and tissues, making it an ideal candidate for tissue engineering, orthopedic and dental applications. Nanosized materials offer improved performance compared with conventional materials due to their large surface-to-volume ratios. This review summarizes existing knowledge and recent progress in fabrication methods of nanosized (or nanostructured) HAp particles, as well as their recent applications in medical and dental fields. In section 1, we provide a brief overview of HAp and nanoparticles. In section 2, fabrication methods of HAp nanoparticles are described based on the particle formation mechanisms. Recent applications of HAp nanoparticles are summarized in section 3. The future perspectives in this active research area are given in section 4.

  8. One-step synthesis and structural features of CdS/montmorillonite nanocomposites.

    PubMed

    Han, Zhaohui; Zhu, Huaiyong; Bulcock, Shaun R; Ringer, Simon P

    2005-02-24

    A novel synthesis method was introduced for nanocomposites of cadmium sulfide and montmorillonite. The method combines an ion exchange process with an in situ hydrothermal decomposition of a complex precursor, which is simple in contrast to the conventional synthesis methods that comprise two separate steps for similar nanocomposite materials. Cadmium sulfide species in the composites exist in the form of pillars and nanoparticles; the crystallized sulfide particles are in the hexagonal phase, and their sizes change with the amount of complex used in the synthesis. Structural features of the nanocomposites are similar to those of the clay host but are modified by the introduction of the sulfide into the clay.

  9. The least-squares finite element method for low-mach-number compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao

    1994-01-01

    The present paper reports the development of the Least-Squares Finite Element Method (LSFEM) for simulating compressible viscous flows at low Mach numbers, with incompressible flow as the limiting case. Conventional approaches require special treatment for low-speed flow calculations: finite difference and finite volume methods rely on staggered grids or preconditioning techniques, while finite element methods rely on the mixed method and the operator-splitting method. In this paper, however, we show that no such difficulty exists for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be effectively solved. Two numerical examples are included to demonstrate the method: driven-cavity flows at various Reynolds numbers, and buoyancy-driven flows with significant density variation. Both examples are calculated using the full compressible flow equations.

  10. TIMSS 2011 Student and Teacher Predictors for Mathematics Achievement Explored and Identified via Elastic Net.

    PubMed

    Yoo, Jin Eun

    2018-01-01

    A substantial body of research has been conducted on variables relating to students' mathematics achievement with TIMSS. However, most studies have employed conventional statistical methods and have focused on a select few indicators instead of utilizing the hundreds of variables TIMSS provides. This study aimed to find a prediction model for students' mathematics achievement using as many TIMSS student and teacher variables as possible. Elastic net, the machine learning technique selected in this study, takes advantage of both LASSO and ridge in terms of variable selection and multicollinearity, respectively. A logistic regression model was also employed to predict TIMSS 2011 Korean 4th graders' mathematics achievement. Ten-fold cross-validation with mean squared error was employed to determine the elastic net regularization parameter. Among 162 TIMSS variables explored, 12 student and 5 teacher variables were selected in the elastic net model, and the prediction accuracy, sensitivity, and specificity were 76.06, 70.23, and 80.34%, respectively. This study showed that the elastic net method can be successfully applied to educational large-scale data, selecting a subset of variables with reasonable prediction accuracy and finding new variables to predict students' mathematics achievement. Newly found variables via machine learning can shed light on existing theories from a totally different perspective, which in turn can stimulate the creation of new theories or complement existing ones. This study also examined the current scale development convention from a machine learning perspective.
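
    As a rough illustration of the tuning strategy described above (ten-fold cross-validation over mean squared error to pick the elastic net penalty), the following scikit-learn sketch runs on synthetic stand-in data; the arrays and variable names are hypothetical, not the TIMSS pipeline.

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNetCV
    from sklearn.preprocessing import StandardScaler

    # Hypothetical data: 1000 students, 162 candidate TIMSS variables.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 162))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(float)

    # Ten-fold CV (scored by mean squared error along the regularization
    # path) selects the penalty, mirroring the tuning in the abstract.
    model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=10)
    model.fit(StandardScaler().fit_transform(X), y)

    selected = np.flatnonzero(model.coef_)  # variables surviving the shrinkage
    print(f"{selected.size} of {X.shape[1]} variables selected")
    ```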

  11. TIMSS 2011 Student and Teacher Predictors for Mathematics Achievement Explored and Identified via Elastic Net

    PubMed Central

    Yoo, Jin Eun

    2018-01-01

    A substantial body of research has been conducted on variables relating to students' mathematics achievement with TIMSS. However, most studies have employed conventional statistical methods and have focused on a select few indicators instead of utilizing the hundreds of variables TIMSS provides. This study aimed to find a prediction model for students' mathematics achievement using as many TIMSS student and teacher variables as possible. Elastic net, the machine learning technique selected in this study, takes advantage of both LASSO and ridge in terms of variable selection and multicollinearity, respectively. A logistic regression model was also employed to predict TIMSS 2011 Korean 4th graders' mathematics achievement. Ten-fold cross-validation with mean squared error was employed to determine the elastic net regularization parameter. Among 162 TIMSS variables explored, 12 student and 5 teacher variables were selected in the elastic net model, and the prediction accuracy, sensitivity, and specificity were 76.06, 70.23, and 80.34%, respectively. This study showed that the elastic net method can be successfully applied to educational large-scale data, selecting a subset of variables with reasonable prediction accuracy and finding new variables to predict students' mathematics achievement. Newly found variables via machine learning can shed light on existing theories from a totally different perspective, which in turn can stimulate the creation of new theories or complement existing ones. This study also examined the current scale development convention from a machine learning perspective. PMID:29599736

  12. A real-time PCR diagnostic method for detection of Naegleria fowleri.

    PubMed

    Madarová, Lucia; Trnková, Katarína; Feiková, Sona; Klement, Cyril; Obernauerová, Margita

    2010-09-01

    Naegleria fowleri is a free-living amoeba that can cause primary amoebic meningoencephalitis (PAM). While traditional methods for diagnosing PAM still rely on culture, newer laboratory diagnostics based on conventional PCR methods exist; however, only a few real-time PCR assays have been described to date. Here, we describe a real-time PCR-based diagnostic method using hybridization fluorescent labelled probes, with a LightCycler instrument and accompanying software (Roche), targeting the Naegleria fowleri Mp2Cl5 gene sequence. Using this method, no cross-reactivity with other tested epidemiologically relevant prokaryotic and eukaryotic organisms was found. The detection limit of the reaction was 1 copy of the Mp2Cl5 DNA sequence. This assay could become useful in the rapid laboratory diagnostic assessment of the presence or absence of Naegleria fowleri. Copyright 2009 Elsevier Inc. All rights reserved.

  13. Can reliable sage-grouse lek counts be obtained using aerial infrared technology

    USGS Publications Warehouse

    Gillette, Gifford L.; Coates, Peter S.; Petersen, Steven; Romero, John P.

    2013-01-01

    More effective methods for counting greater sage-grouse (Centrocercus urophasianus) are needed to better assess population trends through enumeration or location of new leks. We describe an aerial infrared technique for conducting sage-grouse lek counts and compare this method with conventional ground-based lek count methods. During the breeding period in 2010 and 2011, we surveyed leks from fixed-winged aircraft using cryogenically cooled mid-wave infrared cameras and surveyed the same leks on the same day from the ground following a standard lek count protocol. We did not detect significant differences in lek counts between surveying techniques. These findings suggest that using a cryogenically cooled mid-wave infrared camera from an aerial platform to conduct lek surveys is an effective alternative technique to conventional ground-based methods, but further research is needed. We discuss multiple advantages to aerial infrared surveys, including counting in remote areas, representing greater spatial variation, and increasing the number of counted leks per season. Aerial infrared lek counts may be a valuable wildlife management tool that releases time and resources for other conservation efforts. Opportunities exist for wildlife professionals to refine and apply aerial infrared techniques to wildlife monitoring programs because of the increasing reliability and affordability of this technology.

  14. Fabrication of crown restoration retrofitting to existing clasps using CAD/CAM: fitness accuracy and retentive force.

    PubMed

    Ozawa, Daisuke; Suzuki, Yasunori; Kawamura, Noboru; Ohkubo, Chikahiro

    2015-04-01

    A crown restoration engaged by a clasp as an abutment tooth for a removable partial denture (RPD) occasionally must be removed due to secondary caries or apical lesions. However, if the RPD is clinically acceptable without any problems and refabricating the RPD is not recommended, the new crown must be made to retrofit to the existing clasp of the RPD. This in vitro study evaluated conventional and CAD/CAM procedures for retrofitting crown restorations to existing clasps by measuring the fitness accuracy and the retentive forces. The crown restoration on #44 was fabricated with CP titanium and zirconium on a plaster model with teeth #45 and #46 missing, to retrofit to the existing clasp using the conventional thin-coping and CAD/CAM procedures. The gap distance between the clasp (tip, shoulder, and rest regions) and the fabricated crown was measured using silicone impression material. The retentive force of the clasp was also measured, using an autograph at a crosshead speed of 50 mm/min. The obtained data were analyzed by one-way ANOVA/Tukey's multiple comparison test (α=0.05). The CAD/CAM procedure produced significantly smaller gap distances in all of the clasp regions compared to the conventional procedure (p<0.05). The retentive force of the CAD/CAM crown was significantly higher than that of the conventional one (p<0.05). When a crown restoration must be remade to retrofit an existing clasp, CAD/CAM fabrication can be recommended so that both appropriate fitness and retentive force are obtained. Copyright © 2015 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  15. First experiences with an accelerated CMV antigenemia test: CMV Brite Turbo assay.

    PubMed

    Visser, C E; van Zeijl, C J; de Klerk, E P; Schillizi, B M; Beersma, M F; Kroes, A C

    2000-06-01

    Cytomegalovirus disease is still a major problem in immunocompromised patients, such as bone marrow or kidney transplantation patients. The detection of viral antigen in leukocytes (antigenemia) has proven to be a clinically relevant marker of CMV activity and has found widespread application. Because most existing assays are rather time-consuming and laborious, an accelerated version (Brite Turbo) of an existing method (Brite) has been developed. The major modification is the direct lysis of erythrocytes instead of their separation by sedimentation. In this study, the Brite Turbo method was compared with the conventional Brite method to detect CMV antigen pp65 in peripheral blood leukocytes of 107 consecutive immunocompromised patients. Both tests produced similar results. Discrepancies were limited to the lowest positive range, and sensitivity and specificity were comparable for both tests. Two major advantages of the Brite Turbo method were observed in comparison with the original method: assay time was reduced by more than 50% and only 2 ml of blood was required. An additional advantage was the higher number of positive nuclei in the Brite Turbo method, attributable to the increased number of granulocytes in the assay. Early detection of CMV infection or reactivation has become faster and easier with this modified assay.

  16. A preliminary study of muscular artifact cancellation in single-channel EEG.

    PubMed

    Chen, Xun; Liu, Aiping; Peng, Hu; Ward, Rabab K

    2014-10-01

    Electroencephalogram (EEG) recordings are often contaminated with muscular artifacts that strongly obscure the EEG signals and complicate their analysis. For the conventional case, where the EEG recordings are obtained simultaneously over many EEG channels, a considerable range of methods exists for removing muscular artifacts. In recent years, there has been an increasing trend to use EEG information in ambulatory healthcare and related physiological signal monitoring systems. For practical reasons, a single EEG channel system must be used in these situations. Unfortunately, few studies exist for muscular artifact cancellation in single-channel EEG recordings. To address this issue, in this preliminary study, we propose a simple, yet effective, method to achieve muscular artifact cancellation for the single-channel EEG case. This method is a combination of the ensemble empirical mode decomposition (EEMD) and the joint blind source separation (JBSS) techniques. We also conduct a study that compares and investigates all possible single-channel solutions and demonstrate the performance of these methods using numerical simulations and real-life applications. The proposed method is shown to significantly outperform all other methods. It can successfully remove muscular artifacts without altering the underlying EEG activity. It is thus a promising tool for use in ambulatory healthcare systems.
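
    A minimal sketch of the single-channel idea: EEMD turns the one channel into a bank of intrinsic mode functions, after which a source-separation step can isolate and suppress artifact-like components. Here FastICA is used as a simple stand-in for the paper's JBSS step, and the artifact criterion is deliberately crude; the `PyEMD` package (EMD-signal) is assumed to be installed.

    ```python
    import numpy as np
    from PyEMD import EEMD                      # pip install EMD-signal
    from sklearn.decomposition import FastICA

    def remove_muscle_artifact(eeg, n_components=4, seed=0):
        """EEMD + ICA sketch (ICA standing in for JBSS): decompose the
        single channel into IMFs, zero the most artifact-like source,
        then rebuild the signal."""
        imfs = EEMD().eemd(eeg)                 # shape: (n_imfs, n_samples)
        n = min(n_components, imfs.shape[0])
        ica = FastICA(n_components=n, random_state=seed)
        sources = ica.fit_transform(imfs.T)     # (n_samples, n)
        artifact = np.argmax(sources.var(axis=0))  # crude: highest-variance source
        sources[:, artifact] = 0.0
        return ica.inverse_transform(sources).T.sum(axis=0)
    ```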

  17. Parameterizing the Variability and Uncertainty of Wind and Solar in CEMs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany

    We present current and improved methods for estimating the capacity value and curtailment impacts of variable generation (VG) in capacity expansion models (CEMs). The ideal calculation of these variability metrics is through an explicit co-optimized investment-dispatch model using multiple years of VG and load data. Because of data and computational limitations, existing CEMs typically approximate these metrics using a subset of all hours from a single year and/or using statistical methods, which often do not capture tail-event impacts or the broader set of interactions between VG, storage, and conventional generators. In our proposed new methods, we use hourly generation and load values across all hours of the year to characterize (1) the contribution of VG to system capacity during high-load hours, (2) the curtailment level of VG, and (3) the reduction in VG curtailment due to storage and shutdown of select thermal generators. Using CEM model outputs from a preceding model solve period, we apply these methods to exogenously calculate capacity value and curtailment metrics for the subsequent model solve period. Preliminary results suggest that these hourly methods offer improved capacity value and curtailment representations of VG in the CEM over existing approximation methods without additional computational burdens.
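
    The hourly characterization lends itself to a few lines of numpy. The sketch below, using hypothetical hourly arrays, approximates the capacity contribution of VG as its average output over the top-N load hours and computes an annual curtailment fraction from the hourly energy balance; it illustrates the general idea, not the report's exact formulation.

    ```python
    import numpy as np

    def capacity_value(load, vg, nameplate, top_n=100):
        """Approximate VG capacity credit as mean output (per unit of
        nameplate) over the top-N load hours of the year."""
        top_hours = np.argsort(load)[-top_n:]
        return vg[top_hours].mean() / nameplate

    def curtailment_fraction(load, vg, must_run):
        """Annual curtailment: share of VG energy exceeding the hourly
        load net of must-run generation."""
        surplus = vg - np.maximum(load - must_run, 0.0)
        return np.clip(surplus, 0.0, None).sum() / vg.sum()
    ```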

  18. Comparison of four USEPA digestion methods for trace metal analysis using certified and Florida soils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M.; Ma, L.Q.

    1998-11-01

    It is critical to compare existing sample digestion methods for evaluating soil contamination and remediation. USEPA Methods 3050, 3051, 3051a, and 3052 were used to digest standard reference materials and representative Florida surface soils. Fifteen trace metals (Ag, As, Ba, Be, Cd, Cr, Cu, Hg, Mn, Mo, Ni, Pb, Sb, Se, and Zn) and six macro elements (Al, Ca, Fe, K, Mg, and P) were analyzed. Precise analysis was achieved for all elements except Cd, Mo, Se, and Sb in NIST SRMs 2704 and 2709 by USEPA Methods 3050 and 3051, and for all elements except As, Mo, Sb, and Se in NIST SRM 2711 by USEPA Method 3052. No significant differences were observed for the three NIST SRMs between the microwave-assisted USEPA Methods 3051 and 3051a and the conventional USEPA Method 3050, except for Hg, Sb, and Se. USEPA Method 3051a provided comparable values for NIST SRMs certified using USEPA Method 3050. Based on method correlation coefficients and elemental recoveries in 40 Florida surface soils, USEPA Method 3051a was an overall better alternative to Method 3050 than was Method 3051. Among the four digestion methods, the microwave-assisted USEPA Method 3052 achieved satisfactory recoveries for all elements except As and Mg using NIST SRM 2711. This total-total digestion method provided greater recoveries for 12 elements (Ag, Be, Cr, Fe, K, Mn, Mo, Ni, Pb, Sb, Se, and Zn) but lower recoveries for Mg in Florida soils than did the total-recoverable digestion methods.

  19. Quartz Crystal Microbalance Electronic Interfacing Systems: A Review.

    PubMed

    Alassi, Abdulrahman; Benammar, Mohieddine; Brett, Dan

    2017-12-05

    Quartz Crystal Microbalance (QCM) sensors are actively being implemented in various fields due to their compatibility with different operating conditions in gaseous/liquid media for a wide range of measurements. This trend has been matched by parallel advancements in tailored electronic interfacing systems for QCM sensors; selecting the appropriate electronic circuit is vital for accurate sensor measurements. Many techniques have been developed over time to cover the expanding measurement requirements (e.g., accommodating highly-damping environments). This paper presents a comprehensive review of the various existing QCM electronic interfacing systems: impedance-based analysis, oscillators (conventional and lock-in based techniques), exponential decay methods and the emerging phase-mass based characterization. The aforementioned methods are discussed in detail and qualitatively compared in terms of their performance for various applications. In addition, some theoretical improvements and recommendations are introduced for adequate system implementation. Finally, specific design considerations of high-temperature microbalance systems (e.g., GaPO₄ crystals (GCM) and Langasite crystals (LCM)) are introduced, while assessing their overall system performance, stability and quality compared to conventional low-temperature applications.
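
    Whatever the interfacing electronics, the gravimetric workhorse behind QCM measurements is the Sauerbrey relation, Δf = -2·f0²·Δm / (A·sqrt(ρq·μq)). A small sketch inverting it for the mass change, using standard quartz constants; it holds only for thin, rigid films in the gravimetric regime.

    ```python
    import math

    RHO_Q = 2.648e3   # quartz density, kg/m^3
    MU_Q = 2.947e10   # quartz shear modulus, Pa

    def sauerbrey_mass_change(delta_f_hz, f0_hz, area_m2):
        """Mass change (kg) from a QCM frequency shift via the Sauerbrey
        equation: delta_m = -A * sqrt(rho_q * mu_q) / (2 * f0**2) * delta_f."""
        c = area_m2 * math.sqrt(RHO_Q * MU_Q) / (2.0 * f0_hz ** 2)
        return -c * delta_f_hz

    # Example: a -50 Hz shift on a 5 MHz, 1 cm^2 crystal
    print(sauerbrey_mass_change(-50.0, 5e6, 1e-4))  # ~8.8e-10 kg (~880 ng)
    ```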

  20. SHARP: A multi-mission artificial intelligence system for spacecraft telemetry monitoring and diagnosis

    NASA Technical Reports Server (NTRS)

    Lawson, Denise L.; James, Mark L.

    1989-01-01

    The Spacecraft Health Automated Reasoning Prototype (SHARP) is a system designed to demonstrate automated health and status analysis for multi-mission spacecraft and ground data systems operations. Telecommunications link analysis of the Voyager 2 spacecraft is the initial focus for the SHARP system demonstration, which will occur during Voyager's encounter with the planet Neptune in August 1989, in parallel with real-time Voyager operations. The SHARP system combines conventional computer science methodologies with artificial intelligence techniques to produce an effective method for detecting and analyzing potential spacecraft and ground systems problems. The system performs real-time analysis of spacecraft and other related telemetry, and is also capable of examining data in historical context. A brief introduction is given to the spacecraft and ground systems monitoring process at the Jet Propulsion Laboratory. The current method of operation for monitoring the Voyager Telecommunications subsystem is described, and the difficulties associated with the existing technology are highlighted. The approach taken in the SHARP system to overcome the current limitations is also described, as well as both the conventional and artificial intelligence solutions developed in SHARP.

  1. Development of a novel controllable, multidirectional, reusable metallic port with a wide working space.

    PubMed

    Hosaka, Seiji; Ohdaira, Takeshi; Umemoto, Satoshi; Hashizume, Makoto; Kawamoto, Shunji

    2013-12-01

    Endoscopic surgery is currently a standard procedure in many countries. Furthermore, conventional four-port laparoscopic cholecystectomy is developing into a single-port procedure. However, in many developing countries, disposable medical products are expensive and adequate medical waste disposal facilities are absent. Advanced medical treatments such as laparoscopic or single-port surgeries are not readily available in many areas of developing countries, and there are often no sterilization methods besides autoclaving. Moreover, existing reusable metallic ports are impractical and thus not widely used. We developed a novel controllable, multidirectional single-port device with a wide working space that can be autoclaved, and employed it in five patients. In all patients, laparoscopic cholecystectomy was accomplished without complications. Our device facilitates single-port surgery in areas of the world with limited sterilization methods and offers a novel alternative to conventional tools, creating a smaller incision, decreasing postoperative pain, and improving cosmesis. This novel device can also lower the cost of medical treatment and offers a promising tool for major surgeries requiring a wide working space.

  2. Acceleration of plates using non-conventional explosives heavily-loaded with inert materials

    NASA Astrophysics Data System (ADS)

    Loiseau, J.; Petel, O. E.; Huneault, J.; Serge, M.; Frost, D. L.; Higgins, A. J.

    2014-05-01

    The detonation behavior of high explosives containing quantities of dense additives has been investigated previously, with the observation that such systems depart dramatically from the approximately "gamma law" behavior typical of conventional explosives, due to momentum transfer and thermalization between particles and detonation products. However, the influence of this non-ideal detonation behavior on the divergence speed of plates has been less thoroughly studied, and the existing literature suggests that the effect of dense additives cannot be explained solely through straightforward application of the Gurney method with energy and density averaging of the explosive. In the current study, the acceleration history and terminal velocity of aluminum flyers launched by packed beds of granular material saturated with amine-sensitized nitromethane are reported. It was observed that terminal flyer velocity scales primarily with the ratio of flyer mass to the mass of the explosive component, a fundamental feature of the Gurney method. The velocity decrement from the addition of particles was only 20%-30% relative to the velocity produced by an equivalent quantity of neat explosive.
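
    For reference, the Gurney method the authors invoke reduces, for a symmetric sandwich configuration, to V = sqrt(2E)·(M/C + 1/3)^(-1/2), where M/C is the flyer-to-charge mass ratio. The sketch below assumes this configuration and a commonly tabulated sqrt(2E) of roughly 2410 m/s for nitromethane; it illustrates the M/C scaling, not the paper's heavily-loaded systems.

    ```python
    def gurney_velocity_symmetric_sandwich(m_over_c, sqrt_2e):
        """Terminal flyer velocity (m/s) for a symmetric sandwich:
        V = sqrt(2E) * (M/C + 1/3)**-0.5
        m_over_c : flyer mass per unit area / explosive mass per unit area
        sqrt_2e  : Gurney characteristic velocity sqrt(2E), m/s
        """
        return sqrt_2e * (m_over_c + 1.0 / 3.0) ** -0.5

    # Illustrative values only (sqrt(2E) ~ 2410 m/s for nitromethane).
    for mc in (0.5, 1.0, 2.0):
        print(mc, round(gurney_velocity_symmetric_sandwich(mc, 2410.0)))
    ```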

  3. Green remediation. Tool for safe and sustainable environment: a review

    NASA Astrophysics Data System (ADS)

    Singh, Mamta; Pant, Gaurav; Hossain, Kaizar; Bhatia, A. K.

    2017-10-01

    Nowadays, the bioremediation of toxic pollutants is a subject of interest in terms of health and environmental cleanup. In the present review, an eco-friendly, cost-effective approach is discussed for the detoxification of environmental pollutants by means of a natural purifier, blue-green algae, as an alternative to conventional methods. Toxic pollutants in industrial wastes cannot be eliminated completely by the existing conventional techniques; in fact, these methods often only change the pollutants' form rather than degrading them entirely. These pollutants have adverse effects on aquatic fauna and flora, and ultimately harm human life directly or indirectly. The cyanobacterial approach to removing these contaminants is an efficient tool for sustainable development and pollution control. Cyanobacteria are primary producers at the base of the food chain; they absorb complex toxic compounds from the environment and convert them into simple nontoxic compounds, thereby protecting higher consumers in the food chain and reducing the risk of pollution. In addition, these organisms can address secondary pollution, as they can remediate radioactive compounds and petroleum waste and degrade pesticide toxins.

  4. Quartz Crystal Microbalance Electronic Interfacing Systems: A Review

    PubMed Central

    Benammar, Mohieddine; Brett, Dan

    2017-01-01

    Quartz Crystal Microbalance (QCM) sensors are actively being implemented in various fields due to their compatibility with different operating conditions in gaseous/liquid media for a wide range of measurements. This trend has been matched by parallel advancements in tailored electronic interfacing systems for QCM sensors; selecting the appropriate electronic circuit is vital for accurate sensor measurements. Many techniques have been developed over time to cover the expanding measurement requirements (e.g., accommodating highly-damping environments). This paper presents a comprehensive review of the various existing QCM electronic interfacing systems: impedance-based analysis, oscillators (conventional and lock-in based techniques), exponential decay methods and the emerging phase-mass based characterization. The aforementioned methods are discussed in detail and qualitatively compared in terms of their performance for various applications. In addition, some theoretical improvements and recommendations are introduced for adequate system implementation. Finally, specific design considerations of high-temperature microbalance systems (e.g., GaPO4 crystals (GCM) and Langasite crystals (LCM)) are introduced, while assessing their overall system performance, stability and quality compared to conventional low-temperature applications. PMID:29206212

  5. Measuring Surface Tension of a Flowing Soap Film

    NASA Astrophysics Data System (ADS)

    Sane, Aakash; Kim, Ildoo; Mandre, Shreyas

    2016-11-01

    It is well known that surface tension is sensitive to the presence of surfactants, and many conventional methods exist to measure it. These techniques measure surface tension either by intruding into the system or by changing its geometry. Conventional methods are not feasible for a flowing soap film because intruding into the film changes its surface tension through the Marangoni effect. We present a technique for measuring the surface tension of a flowing soap film in situ, without intruding into the film. A flowing soap film is created by letting soap solution drip between two wires. The interaction of the soap film with the wires deflects the wires, and this deflection can be measured. Surface tension is calculated using a relation between the curvature of the wires and the surface tension. Our measurements indicate that the surface tension of the flowing soap film in our setup is around 0.05 N/m. The nature of this technique makes it favorable for measuring the surface tension of flowing soap films whose properties change on intrusion.
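
    One plausible reduction of the wire-deflection idea, assuming the wire behaves as a simply supported beam under a uniform load from the film's two surfaces (w = 2γ per unit length), inverts the classic mid-span deflection formula δ = 5wL⁴/(384EI). This is a sketch of such a force balance, not the authors' exact curvature relation.

    ```python
    def surface_tension_from_deflection(delta, length, e_mod, inertia):
        """Infer film surface tension gamma (N/m) from measured mid-span
        wire deflection delta (m), modeling the wire as a simply supported
        beam of length L (m), Young's modulus E (Pa) and second moment of
        area I (m^4) under uniform load w = 2*gamma:
            delta = 5*w*L**4 / (384*E*I)  =>  gamma = w / 2.
        """
        w = 384.0 * e_mod * inertia * delta / (5.0 * length ** 4)
        return w / 2.0
    ```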

  6. SHARP: A multi-mission AI system for spacecraft telemetry monitoring and diagnosis

    NASA Technical Reports Server (NTRS)

    Lawson, Denise L.; James, Mark L.

    1989-01-01

    The Spacecraft Health Automated Reasoning Prototype (SHARP) is a system designed to demonstrate automated health and status analysis for multi-mission spacecraft and ground data systems operations. Telecommunications link analysis of the Voyager II spacecraft is the initial focus for the SHARP system demonstration, which will occur during Voyager's encounter with the planet Neptune in August 1989, in parallel with real-time Voyager operations. The SHARP system combines conventional computer science methodologies with artificial intelligence techniques to produce an effective method for detecting and analyzing potential spacecraft and ground systems problems. The system performs real-time analysis of spacecraft and other related telemetry, and is also capable of examining data in historical context. A brief introduction is given to the spacecraft and ground systems monitoring process at the Jet Propulsion Laboratory. The current method of operation for monitoring the Voyager Telecommunications subsystem is described, and the difficulties associated with the existing technology are highlighted. The approach taken in the SHARP system to overcome the current limitations is also described, as well as both the conventional and artificial intelligence solutions developed in SHARP.

  7. Novel Method For Low-Rate Ddos Attack Detection

    NASA Astrophysics Data System (ADS)

    Chistokhodova, A. A.; Sidorov, I. D.

    2018-05-01

    The relevance of this work stems from the increasing number of advanced types of DDoS attacks, in particular low-rate HTTP flood. In the last year, the power and complexity of such attacks increased significantly. The article is devoted to the analysis of DDoS attack detection methods and their modifications, with the purpose of increasing the accuracy of DDoS attack detection. The article details the features of low-rate attacks in comparison with conventional DDoS attacks. During the analysis, significant shortcomings of the available method for detecting low-rate DDoS attacks were found. The result of the study is thus an informal description of a new method for detecting low-rate denial-of-service attacks. A test-bed architecture for evaluating the method is developed. At the current stage of the study, it is possible to improve the efficiency of an already existing method by using a classifier with memory, as well as additional information.

  8. Crowd density estimation based on convolutional neural networks with mixed pooling

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Zheng, Hong; Zhang, Ying; Zhang, Dongming

    2017-09-01

    Crowd density estimation is an important topic in the fields of machine learning and video surveillance. Existing methods do not provide satisfactory classification accuracy; moreover, they have difficulty adapting to complex scenes. Therefore, we propose a method based on convolutional neural networks (CNNs). The proposed method improves the performance of crowd density estimation in two key ways. First, we propose a feature pooling method named mixed pooling to regularize the CNNs. It replaces deterministic pooling operations with a learned parameter that combines conventional max pooling with average pooling. Second, we present a classification strategy in which an image is divided into two cells that are categorized separately. The proposed approach was evaluated on three datasets: two ground-truth image sequences and the University of California, San Diego, anomaly detection dataset. The results demonstrate that the proposed approach performs more effectively and is easier to apply than other methods.
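
    The mixed pooling operation itself is compact: a convex combination of max and average pooling, with the mixing weight learned during training in the paper. A numpy sketch with a fixed illustrative weight:

    ```python
    import numpy as np

    def mixed_pool2d(x, k=2, alpha=0.6):
        """Mixed pooling on an (H, W) feature map: a convex combination of
        max and average pooling over non-overlapping k x k windows. In the
        paper alpha is learned during training; here it is fixed."""
        h, w = x.shape[0] // k * k, x.shape[1] // k * k
        tiles = x[:h, :w].reshape(h // k, k, w // k, k)
        return alpha * tiles.max(axis=(1, 3)) + (1 - alpha) * tiles.mean(axis=(1, 3))
    ```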

  9. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-10-01

    In this article we propose two grid generation methods for global ocean general circulation models. In contrast to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and overall smooth grid cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids can potentially achieve alignment of grid lines with large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balancing. Since the grids are orthogonal curvilinear, they can be readily used by the majority of ocean general circulation models that are based on finite differences and require grid orthogonality. The proposed grid generation algorithms can also be applied to grid generation for regional ocean modeling where complex land-sea distributions are present.
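
    The underlying half-plane Schwarz-Christoffel map is f(z) = A + C ∫ from z0 to z of ∏_k (ζ − x_k)^(α_k − 1) dζ, where the x_k are prevertices on the real axis and α_k·π are the interior angles. A numerical sketch, integrating along a straight segment kept inside the upper half-plane to avoid the real-axis singularities; the prevertex/angle values below give the classic map to an equilateral triangle, not an ocean grid.

    ```python
    import numpy as np

    def sc_map(z, prevertices, alphas, A=0.0, C=1.0, z0=1j, n=4000):
        """Evaluate the Schwarz-Christoffel half-plane map by trapezoidal
        integration of prod_k (zeta - x_k)**(alpha_k - 1) along z0 -> z."""
        t = np.linspace(0.0, 1.0, n)
        zeta = z0 + t * (z - z0)
        integrand = np.ones_like(zeta)
        for xk, ak in zip(prevertices, alphas):
            integrand *= (zeta - xk) ** (ak - 1.0)
        return A + C * np.trapz(integrand, zeta)

    # Prevertices (-1, 0, 1) with interior angles pi/3 map the upper
    # half-plane onto an equilateral triangle (up to scale and shift).
    print(sc_map(0.3 + 0.4j, [-1.0, 0.0, 1.0], [1/3, 1/3, 1/3]))
    ```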

  10. Flexible pavement rehabilitation using pulverization.

    DOT National Transportation Integrated Search

    2008-06-01

    Pulverization is a roadway rehabilitation strategy that involves in-place recycling of the entire existing flexible pavement layer and some of the existing granular base layer (Figure 1). Pavement pulverization provides an alternative to conventional...

  11. Human health implications of organic food and organic agriculture: a comprehensive review.

    PubMed

    Mie, Axel; Andersen, Helle Raun; Gunnarsson, Stefan; Kahl, Johannes; Kesse-Guyot, Emmanuelle; Rembiałkowska, Ewa; Quaglio, Gianluca; Grandjean, Philippe

    2017-10-27

    This review summarises existing evidence on the impact of organic food on human health. It compares organic vs. conventional food production with respect to parameters important to human health and discusses the potential impact of organic management practices with an emphasis on EU conditions. Organic food consumption may reduce the risk of allergic disease and of overweight and obesity, but the evidence is not conclusive due to likely residual confounding, as consumers of organic food tend to have healthier lifestyles overall. However, animal experiments suggest that identically composed feed from organic or conventional production impacts in different ways on growth and development. In organic agriculture, the use of pesticides is restricted, while residues in conventional fruits and vegetables constitute the main source of human pesticide exposures. Epidemiological studies have reported adverse effects of certain pesticides on children's cognitive development at current levels of exposure, but these data have so far not been applied in formal risk assessments of individual pesticides. Differences in the composition between organic and conventional crops are limited, such as a modestly higher content of phenolic compounds in organic fruit and vegetables, and likely also a lower content of cadmium in organic cereal crops. Organic dairy products, and perhaps also meats, have a higher content of omega-3 fatty acids compared to conventional products. However, these differences are likely of marginal nutritional significance. Of greater concern is the prevalent use of antibiotics in conventional animal production as a key driver of antibiotic resistance in society; antibiotic use is less intensive in organic production. Overall, this review emphasises several documented and likely human health benefits associated with organic food production, and application of such production methods is likely to be beneficial within conventional agriculture, e.g., in integrated pest management.

  12. High intensity exercise or conventional exercise for patients with rheumatoid arthritis? Outcome expectations of patients, rheumatologists, and physiotherapists

    PubMed Central

    Munneke, M; de Jong, Z; Zwinderman, A; Ronday, H; van den Ende, C H M; Vliet, V; Hazes, J

    2004-01-01

    Objective: To examine the outcome expectations of RA patients, rheumatologists, and physiotherapists regarding high intensity exercise programmes compared with conventional exercise programmes. Methods: An exercise outcome expectations questionnaire was administered to 807 RA patients, 153 rheumatologists, and 624 physiotherapists. The questionnaire consisted of four statements regarding positive and negative outcomes of high intensity exercise programmes and four similar statements for conventional exercise programmes. A total expectation score for both conventional and high intensity exercise was calculated, ranging from –2 (very negative expectation) to 2 (very positive expectation). Results: The questionnaire was returned by 662 RA patients (82%), 132 rheumatologists (86%), and 467 physiotherapists (75%). The mean (95% confidence interval) scores for high intensity exercise programmes were 0.30 (0.25 to 0.34), 0.68 (0.62 to 0.74), and –0.06 (–0.15 to 0.02), and for conventional exercise programmes were 0.99 (0.96 to 1.02), 1.13 (1.09 to 1.17), and 1.27 (1.21 to 1.34) for RA patients, rheumatologists, and physiotherapists, respectively. In all three respondent groups, the outcome expectations of high intensity exercise were significantly less positive than those of conventional exercise programmes. Conclusions: Despite the existing evidence regarding the effectiveness and safety of high intensity exercise programmes, RA patients, rheumatologists, and physiotherapists have more positive expectations of conventional exercise programmes than of high intensity exercise programmes. Physiotherapists were the least positive about outcomes of high intensity exercise programmes, while rheumatologists were the most positive. To help the implementation of new insights into the effectiveness of physical therapy modalities in rheumatology, the need for continuous education of patients, rheumatologists, and physiotherapists is emphasised. PMID:15194575

  13. Directional virtual backbone based data aggregation scheme for Wireless Visual Sensor Networks.

    PubMed

    Zhang, Jing; Liu, Shi-Jian; Tsai, Pei-Wei; Zou, Fu-Min; Ji, Xiao-Rong

    2018-01-01

    Data gathering is a fundamental task in Wireless Visual Sensor Networks (WVSNs). The features of directional antennas and of visual data make WVSNs more complex than conventional Wireless Sensor Networks (WSNs). The virtual backbone is a technique for constructing clusters; the version associated with the aggregation operation is also referred to as the virtual backbone tree. Most of the existing literature focuses on the efficiency brought by cluster construction, and existing methods generally neglect local-balance problems. To fill this gap, a Directional Virtual Backbone based Data Aggregation Scheme (DVBDAS) for WVSNs is proposed in this paper. In addition, a measure called the energy consumption density is proposed for evaluating the adequacy of results in cluster-based construction problems. Moreover, a directional virtual backbone construction scheme is proposed that considers the local-balance factor, and an associated network coding mechanism is utilized to construct DVBDAS. Finally, both a theoretical analysis of the proposed DVBDAS and simulations are given to evaluate its performance. The experimental results show that the proposed DVBDAS achieves higher performance in terms of both energy preservation and network lifetime extension than the existing methods.

  14. An a priori study of different tabulation methods for turbulent pulverised coal combustion

    NASA Astrophysics Data System (ADS)

    Luo, Yujuan; Wen, Xu; Wang, Haiou; Luo, Kun; Jin, Hanhui; Fan, Jianren

    2018-05-01

    In many practical pulverised coal combustion systems, different oxidiser streams exist, e.g. the primary- and secondary-air streams in power plant boilers, which makes the modelling of these systems challenging. In this work, three tabulation methods for modelling pulverised coal combustion are evaluated through an a priori study. Pulverised coal flames stabilised in a three-dimensional turbulent counterflow, consisting of different oxidiser streams, are first simulated with detailed chemistry. Then, the thermo-chemical quantities calculated with the different tabulation methods are compared to those from the detailed chemistry solutions. The comparison shows that the conventional two-stream flamelet model with a fixed oxidiser temperature cannot predict the flame temperature correctly. The conventional two-stream flamelet model is then modified to set the oxidiser temperature equal to the fuel temperature, both of which are varied in the flamelets. By this means, variations of the oxidiser temperature can be considered. This modified tabulation method is found to perform very well in predicting the flame temperature. The third tabulation method is an extended three-stream flamelet model that was initially proposed for gaseous combustion. The results show that the reference gaseous temperature profile can be reproduced overall by the extended three-stream flamelet model. Interestingly, it is found that the predictions of major species mass fractions are not sensitive to the oxidiser temperature boundary conditions for the flamelet equations in the a priori analyses.

  15. Rapid preparation of nuclei-depleted detergent-resistant membrane fractions suitable for proteomics analysis.

    PubMed

    Adam, Rosalyn M; Yang, Wei; Di Vizio, Dolores; Mukhopadhyay, Nishit K; Steen, Hanno

    2008-06-05

    Cholesterol-rich membrane microdomains known as lipid rafts have been implicated in diverse physiologic processes including lipid transport and signal transduction. Lipid rafts were originally defined as detergent-resistant membranes (DRMs) due to their relative insolubility in cold non-ionic detergents. Recent findings suggest that, although DRMs are not equivalent to lipid rafts, the presence of a given protein within DRMs strongly suggests its potential for raft association in vivo. Therefore, isolation of DRMs represents a useful starting point for biochemical analysis of lipid rafts. The physicochemical properties of DRMs present unique challenges to analysis of their protein composition. Existing methods of isolating DRM-enriched fractions involve flotation of cell extracts in a sucrose density gradient, which, although successful, can be labor-intensive and time-consuming, and results in dilute sucrose-containing fractions with limited utility for direct proteomic analysis. In addition, several studies describing the proteomic characterization of DRMs using this and other approaches have reported the presence of nuclear proteins in such fractions. It is unclear whether these results reflect trafficking of nuclear proteins to DRMs or whether they arise from nuclear contamination during isolation. To address these issues, we have modified a published differential detergent extraction method to enable rapid DRM isolation that minimizes nuclear contamination and yields fractions compatible with mass spectrometry. DRM-enriched fractions isolated using the conventional or modified extraction methods displayed comparable profiles of known DRM-associated proteins, including flotillins, GPI-anchored proteins and heterotrimeric G-protein subunits. Thus, the modified procedure yielded fractions consistent with those isolated by existing methods. However, we observed a marked reduction in the percentage of nuclear proteins identified in DRM fractions isolated with the modified method (15%) compared to DRMs isolated by conventional means (36%). Furthermore, of the 21 nuclear proteins identified exclusively in modified DRM fractions, 16 have been reported to exist in other subcellular sites, with evidence to suggest shuttling of these species between the nucleus and other organelles. We describe a modified DRM isolation procedure that generates DRMs that are largely free of nuclear contamination and that is compatible with downstream proteomic analyses with minimal additional processing. Our findings also imply that identification of nuclear proteins in DRMs is likely to reflect legitimate movement of proteins between compartments, and is not a result of contamination during extraction.

  16. Improved shear wave group velocity estimation method based on spatiotemporal peak and thresholding motion search

    PubMed Central

    Amador, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F.; Urban, Matthew W.

    2017-01-01

    Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index (BMI), ultrasound scanners, scanning protocols, and ultrasound image quality. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this study, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time (spatiotemporal peak, STP); the second method applies an amplitude filter (spatiotemporal thresholding, STTH) to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared to TTP in phantom. Moreover, in a cohort of 14 healthy subjects STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared to conventional TTP. PMID:28092532
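
    A numpy sketch of the contrast between conventional TTP and the thresholding variant, for a motion matrix sampled over lateral position and time. The synthetic inputs are assumed; this is a schematic of the two search strategies, not the authors' full implementation.

    ```python
    import numpy as np

    def ttp_speed(motion, x, t):
        """Conventional TTP: find the peak time at each lateral position,
        then fit position vs. peak time; the slope is the group velocity.
        motion: (n_x, n_t); x: positions (m); t: times (s)."""
        t_peak = t[np.argmax(motion, axis=1)]
        return np.polyfit(t_peak, x, 1)[0]

    def stth_speed(motion, x, t, frac=0.5):
        """Spatiotemporal thresholding (STTH): keep all (x, t) samples whose
        amplitude exceeds frac * global max, then fit x vs. t over them."""
        ix, it = np.nonzero(motion > frac * motion.max())
        return np.polyfit(t[it], x[ix], 1)[0]
    ```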

  17. Improved Shear Wave Group Velocity Estimation Method Based on Spatiotemporal Peak and Thresholding Motion Search.

    PubMed

    Amador Carrascal, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F; Urban, Matthew W

    2017-04-01

    Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index, ultrasound scanners, scanning protocols, and ultrasound image quality. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this paper, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time [spatiotemporal peak (STP)]; the second method applies an amplitude filter [spatiotemporal thresholding (STTH)] to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared with TTP in phantom. Moreover, in a cohort of 14 healthy subjects, STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared with conventional TTP.

  18. Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models.

    PubMed

    Nicholl, Jon; Jacques, Richard M; Campbell, Michael J

    2013-10-29

    Comparison of outcomes between populations or centres may be confounded by any casemix differences and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix which overcomes these problems. Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons.
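
    A pandas sketch of the DRS recipe as described: predicted risks from the casemix model are binned into risk categories on the pooled data, category-specific event rates are computed per centre, and a common set of weights (the pooled share of patients per category) collapses them into one standardised rate per centre. The column names are assumptions for illustration.

    ```python
    import pandas as pd

    def direct_risk_standardised_rate(df, n_bins=10):
        """Direct risk standardisation (DRS) sketch. df has columns:
        'risk' (casemix-model predicted risk), 'event' (0/1), 'centre'."""
        df = df.copy()
        df["cat"] = pd.qcut(df["risk"], n_bins, labels=False, duplicates="drop")
        weights = df["cat"].value_counts(normalize=True).sort_index()
        rates = df.groupby(["centre", "cat"])["event"].mean().unstack("cat")
        return (rates * weights).sum(axis=1)  # one standardised rate per centre
    ```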

  19. Practical implementation of spectral-intensity dispersion-canceled optical coherence tomography with artifact suppression

    NASA Astrophysics Data System (ADS)

    Shirai, Tomohiro; Friberg, Ari T.

    2018-04-01

    Dispersion-canceled optical coherence tomography (OCT) based on spectral intensity interferometry was devised as a classical counterpart of quantum OCT to enhance the basic performance of conventional OCT. In this paper, we demonstrate experimentally that an alternative method of realizing this kind of OCT by means of two optical fiber couplers and a single spectrometer is a more practical and reliable option than the existing methods proposed previously. Furthermore, we develop a recipe for reducing multiple artifacts simultaneously on the basis of simple averaging and verify experimentally that it works successfully in the sense that all the artifacts are mitigated effectively and only the true signals carrying structural information about the sample survive.

  20. Remote sensing of wet lands in irrigated areas

    NASA Technical Reports Server (NTRS)

    Ham, H. H.

    1972-01-01

    The use of airborne remote sensing techniques to: (1) detect drainage problem areas, (2) delineate the problem in terms of areal extent, depth to the water table, and presence of excessive salinity, and (3) evaluate the effectiveness of existing subsurface drainage facilities, is discussed. Experimental results show that remote sensing, as demonstrated in this study and as presently constituted and priced, does not represent a practical alternative as a management tool to presently used visual and conventional photographic methods in the systematic and repetitive detection and delineation of wetlands.

  1. Performance Improvement of Power Analysis Attacks on AES with Encryption-Related Signals

    NASA Astrophysics Data System (ADS)

    Lee, You-Seok; Lee, Young-Jun; Han, Dong-Guk; Kim, Ho-Won; Kim, Hyoung-Nam

    A power analysis attack is a well-known side-channel attack, but its efficiency is frequently degraded by the existence of power components irrelevant to the encryption in the signals used for the attack. To enhance the performance of the power analysis attack, we propose a preprocessing method based on extracting the encryption-related parts of the measured power signals. Experimental results show that attacks using the preprocessed signals detect correct keys with far fewer signals than conventional power analysis attacks.
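
    A schematic of the idea in numpy: keep only the time samples most likely to carry data-dependent (encryption-related) power, here chosen by their variance across traces as a simple stand-in for the paper's extraction step, before correlating a key-hypothesis leakage model against the retained samples.

    ```python
    import numpy as np

    def extract_encryption_window(traces, keep_frac=0.2):
        """Preprocessing sketch: retain the time samples with the highest
        variance across traces, on the assumption that data-dependent
        power concentrates there (not the authors' exact method).
        traces: (n_traces, n_samples)."""
        var = traces.var(axis=0)
        k = max(1, int(keep_frac * traces.shape[1]))
        keep = np.sort(np.argsort(var)[-k:])
        return traces[:, keep]

    def cpa_correlation(traces, hypothetical_leakage):
        """Pearson correlation of a key-hypothesis leakage vector (e.g.,
        Hamming weight of an S-box output) against every retained sample."""
        t = traces - traces.mean(axis=0)
        h = hypothetical_leakage - hypothetical_leakage.mean()
        num = h @ t
        den = np.sqrt((h @ h) * (t * t).sum(axis=0))
        return num / den
    ```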

  2. [Burning mouth syndrome - a joint biopsychosocial approach].

    PubMed

    Arpone, Francesca; Combremont, Florian; Weber, Kerstin; Scolozzi, Paolo

    2016-02-10

    Burning mouth syndrome (BMS) is a medical condition that is often refractory to conventional diagnostic and therapeutic methods. Patients suffering from BMS can benefit from a biopsychosocial approach in a joint, medical-psychological consultation model. Such a consultation exists at Geneva University Hospitals, involving the collaboration of the maxillo-facial and oral surgery division and the division of liaison psychiatry and crisis intervention, in order to take into account the multiple factors involved in BMS onset and persistence. This article will describe BMS clinical presentation, and present an integrate approach to treat these patients.

  3. Band structure and unconventional electronic topology of CoSi

    NASA Astrophysics Data System (ADS)

    Pshenay-Severin, D. A.; Ivanov, Y. V.; Burkov, A. A.; Burkov, A. T.

    2018-04-01

    Semimetals with certain crystal symmetries may possess unusual electronic structure topology, distinct from that of the conventional Weyl and Dirac semimetals. A characteristic property of these materials is the existence of band-touching points with multiple (higher than two-fold) degeneracy and nonzero Chern number. CoSi is a representative of this group of materials exhibiting the so-called 'new fermions'. We report on an ab initio calculation of the electronic structure of CoSi using density functional methods, taking into account the spin-orbit interactions. The linearized …

  4. Three-dimensional x-ray inspection of food products

    NASA Astrophysics Data System (ADS)

    Graves, Mark; Batchelor, Bruce G.; Palmer, Stephen C.

    1994-09-01

    Modern food production techniques operate at high speed and sometimes fill several containers simultaneously; individual containers never become available for inspection by conventional x- ray systems. There is a constant demand for improved methods for detecting foreign bodies, such as glass, plastic, wood, stone, animal remains, etc. These requirements lead to significant problems with existing inspection techniques, which are susceptible to noise and are unable to detect long thin contaminants reliably. Experimental results demonstrate these points. The paper proposes the use of two x-ray inspection systems, with orthogonal beams to overcome these difficulties.

  5. The development of additive manufacturing technique for nickel-base alloys: A review

    NASA Astrophysics Data System (ADS)

    Zadi-Maad, Ahmad; Basuki, Arif

    2018-04-01

    Nickel-base alloys are attractive due to their excellent mechanical properties, including high resistance to creep deformation, corrosion, and oxidation. However, it is difficult to control their properties when casting or forging this material. In recent years, the additive manufacturing (AM) process has been implemented to replace the conventional directional solidification process for the production of nickel-base alloys. Due to its potentially lower cost and more flexible manufacturing process, AM is considered a substitute for the existing techniques. This paper provides a comprehensive review of previous work on AM techniques for Ni-base alloys, highlighting current challenges and methods for solving them. The properties of conventionally manufactured Ni-base alloys are also compared with those of the AM-fabricated alloys. The mechanical properties obtained from tension, hardness, and fatigue tests are included, along with discussion of the effect of post-treatment processes. Recommendations for further work are also provided.

  6. Direction of Arrival Estimation with a Novel Single-Port Smart Antenna

    NASA Astrophysics Data System (ADS)

    Sun, Chen; Karmakar, Nemai C.

    2004-12-01

    A novel direction of arrival (DOA) estimation technique that uses the conventional multiple-signal classification (MUSIC) algorithm with periodic signals is applied to a single-port smart antenna. Results show that the proposed method gives a high-resolution (1 degree) DOA estimation in an uncorrelated signal environment. The novelty lies in applying the MUSIC algorithm to a simplified antenna configuration. Only one analogue-to-digital converter (ADC) is used in this antenna, which features low power consumption, low cost, and ease of fabrication. Modifications to the conventional MUSIC algorithm do not add much complexity. The proposed technique is also free from the negative influence of mutual coupling among antenna elements. It therefore offers an economical way to implement smart antennas extensively in existing wireless mobile communications systems, especially at power-consumption-limited mobile terminals such as laptops in wireless networks.
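
    As background, the following sketch shows conventional MUSIC on a simulated uniform linear array; the single-port modifications described in the record are not reproduced here, and the array geometry, source angles, and noise level are illustrative assumptions.

      import numpy as np

      def music(R, n_sources, n_elems, d=0.5):
          """Conventional MUSIC pseudospectrum for a uniform linear array."""
          _, V = np.linalg.eigh(R)                 # eigenvalues ascending
          En = V[:, : n_elems - n_sources]         # noise subspace
          angles = np.linspace(-90, 90, 361)
          spec = []
          for th in angles:
              a = np.exp(2j * np.pi * d * np.arange(n_elems) * np.sin(np.radians(th)))
              spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
          return angles, np.array(spec)

      # Two uncorrelated sources at -20 and +30 degrees, 8-element array.
      rng = np.random.default_rng(1)
      M, N, doas = 8, 200, (-20.0, 30.0)
      A = np.stack([np.exp(2j*np.pi*0.5*np.arange(M)*np.sin(np.radians(t))) for t in doas], 1)
      S = rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))
      X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
      angles, P = music(X @ X.conj().T / N, 2, M)
      peaks = [i for i in range(1, 360) if P[i] > P[i - 1] and P[i] > P[i + 1]]
      print(sorted(angles[i] for i in sorted(peaks, key=lambda i: P[i])[-2:]))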

  7. Improving building performance using smart building concept: Benefit cost ratio comparison

    NASA Astrophysics Data System (ADS)

    Berawi, Mohammed Ali; Miraj, Perdana; Sayuti, Mustika Sari; Berawi, Abdur Rohim Boy

    2017-11-01

    The smart building concept is an implementation of technology that has been developed in the construction industry throughout the world. However, implementation of this concept is still below expectations due to various obstacles, such as a higher initial cost than a conventional concept and existing regulations siding with the lowest cost in the tender process. This research aims to develop the smart building concept using a value engineering approach to obtain added value regarding quality, efficiency, and innovation. The research combined quantitative and qualitative approaches, using a questionnaire survey and the value engineering method to achieve the research objectives. The research output shows additional functions, in terms of technology innovation, that may increase the value of a building. This study shows that the smart building concept requires a higher initial cost but produces lower operational and maintenance costs. Furthermore, it also confirms that the benefit-cost ratio of the smart building was much higher than that of a conventional building: 1.99 versus 0.88.
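
    To make the comparison concrete, the sketch below computes a benefit-cost ratio as the present value of benefits over the present value of costs. The cash flows, horizon, and discount rate are hypothetical placeholders, not the study's data; they merely show how a higher initial cost can coexist with the larger ratio.

      def benefit_cost_ratio(benefits, costs, rate):
          """PV(benefits) / PV(costs) for annual cash flows, discounted at `rate`."""
          pv = lambda flows: sum(f / (1 + rate) ** t for t, f in enumerate(flows, 1))
          return pv(benefits) / pv(costs)

      # Hypothetical 20-year cash flows (arbitrary currency units); year 1 carries
      # the initial capital cost, later years carry operation and maintenance.
      smart = benefit_cost_ratio([120] * 20, [400] + [30] * 19, rate=0.08)
      conv = benefit_cost_ratio([70] * 20, [250] + [60] * 19, rate=0.08)
      print(round(smart, 2), round(conv, 2))   # ~1.85 vs ~0.90, same direction as 1.99 vs 0.88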

  8. Malaria control in Tanzania

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yhdego, M.; Majura, P.

    A review of the malaria control programs and the problems encountered in the United Republic of Tanzania from 1945 to 1986 is presented. Buguruni, one of the squatter areas in the city of Dar es Salaam, is chosen as a case study in order to evaluate the economic advantage of engineering methods for the control of malaria infection. Although the initial capital cost of engineering methods may be high, their cost effectiveness entails a much lower financial burden of only about Tshs. 3 million, compared with the conventional methods of larviciding and insecticiding, which require more than Tshs. 10 million. Finally, recommendations for the adoption of engineering methods are made concerning the upgrading of existing roads and footpaths in general, with particular emphasis on drainage of large pools of water which serve as breeding sites for mosquitoes.

  9. Supplementary Education: Global Growth, Japan's Experience, Canada's Future

    ERIC Educational Resources Information Center

    Dierkes, Julian

    2008-01-01

    Supplementary education is on the rise globally, taking many different forms, from private tutors to small schools and large corporations. These providers exist outside conventional public and private school systems, offering remedial education and tutoring, parallel instruction to conventional schools, and accelerated or more advanced…

  10. A nanocryotron comparator can connect single-flux-quantum circuits to conventional electronics

    NASA Astrophysics Data System (ADS)

    Zhao, Qing-Yuan; McCaughan, Adam N.; Dane, Andrew E.; Berggren, Karl K.; Ortlepp, Thomas

    2017-04-01

    Integration with conventional electronics offers a straightforward and economical approach to upgrading existing superconducting technologies, such as scaling up superconducting detectors into large arrays and combining single flux quantum (SFQ) digital circuits with semiconductor logic gates and memories. However, direct output signals from superconducting devices (e.g., Josephson junctions) are usually not compatible with the input requirements of conventional devices (e.g., transistors). Here, we demonstrate the use of a single three-terminal superconducting-nanowire device, called the nanocryotron (nTron), as a digital comparator to combine SFQ circuits with mature semiconductor circuits such as complementary metal oxide semiconductor (CMOS) circuits. Since SFQ circuits can digitize output signals from general superconducting devices and CMOS circuits can interface existing CMOS-compatible electronics, our results demonstrate the feasibility of a general architecture that uses an nTron as an interface to realize a ‘super-hybrid’ system consisting of superconducting detectors, superconducting quantum electronics, CMOS logic gates and memories, and other conventional electronics.

  11. Application of Extended Kalman Filter in Persistent Scatterer Interferometry to Enhance the Accuracy of the Unwrapping Process

    NASA Astrophysics Data System (ADS)

    Tavakkoli Estahbanat, A.; Dehghani, M.

    2017-09-01

    In the interferometry technique, phases are wrapped to the interval 0-2π. Recovering the integer number of phase cycles lost when the phases were wrapped is the main goal of unwrapping algorithms. Although the density of points in conventional interferometry is high, this is not helpful in some cases, such as large temporal baselines or noisy interferograms. Because of noisy pixels, the high point density not only fails to improve results but also leads to unwrapping errors during interferogram unwrapping. In the PS technique, the sparseness of the PS pixels makes phase unwrapping difficult, and because of the irregular data spacing, conventional methods are ineffective. Unwrapping techniques are divided into path-independent and path-dependent approaches according to their unwrapping paths. A region-growing method, which is a path-dependent technique, has been used to unwrap PS data. In this paper, the idea of the extended Kalman filter (EKF) is generalized to PS data. The algorithm is applied to account for the nonlinearity of the PS unwrapping problem as well as the conventional unwrapping problem. A pulse-pair method enhanced with singular value decomposition (SVD) is used to estimate the spectral shift from the interferometric power spectral density in 7x7 local windows. Furthermore, a hybrid cost map is used to manage the unwrapping path. The algorithm was implemented on simulated PS data. To form a sparse dataset, a few points from a regular grid were randomly selected, and the RMSE between the results and the true unambiguous phases is presented to validate the approach. The results of this algorithm and the true unwrapped phases were identical.
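
    The sketch below shows the core EKF-unwrapping idea on a 1-D path of wrapped phases: a constant-slope state model with the innovation computed modulo 2π, which is where the wrapping nonlinearity enters the filter. The state model, noise levels, and data are illustrative assumptions, not the paper's 2-D PS formulation.

      import numpy as np

      def wrap(p):
          return (p + np.pi) % (2 * np.pi) - np.pi

      def ekf_unwrap(wrapped, q=1e-3, r=1e-2):
          """1-D sketch: state x = [phase, slope]; measurement = wrapped phase."""
          x = np.array([wrapped[0], 0.0])
          P = np.eye(2)
          F = np.array([[1.0, 1.0], [0.0, 1.0]])
          H = np.array([[1.0, 0.0]])
          out = [x[0]]
          for z in wrapped[1:]:
              x = F @ x
              P = F @ P @ F.T + q * np.eye(2)
              y = wrap(z - x[0])            # innovation taken modulo 2*pi
              S = H @ P @ H.T + r
              K = P @ H.T / S
              x = x + (K * y).ravel()
              P = (np.eye(2) - K @ H) @ P
              out.append(x[0])
          return np.array(out)

      true = np.cumsum(np.full(500, 0.12)) + 0.3 * np.sin(np.linspace(0, 6, 500))
      est = ekf_unwrap(wrap(true))
      print(np.abs(est - true).max())   # small residual once the slope estimate locks in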

  12. Non-equilibrium Green's functions method: Non-trivial and disordered leads

    NASA Astrophysics Data System (ADS)

    He, Yu; Wang, Yu; Klimeck, Gerhard; Kubis, Tillmann

    2014-11-01

    The non-equilibrium Green's function algorithm requires contact self-energies to model charge injection and extraction. All existing approaches assume infinitely periodic leads attached to a possibly quite complex device. This is at odds with today's realistic devices, in which contacts are spatially inhomogeneous, chemically disordered, and affect the overall device characteristics. This work extends the complex absorbing potentials method to arbitrary, ideal or non-ideal, leads in an atomistic tight-binding representation. The algorithm is demonstrated on a Si nanowire with periodic leads, a graphene nanoribbon with trumpet-shaped leads, and devices with leads of randomly alloyed Si0.5Ge0.5. It is found that alloy randomness in the leads can reduce the predicted ON-state current of Si0.5Ge0.5 transistors by 45% compared to conventional lead methods.
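
    For orientation, the sketch below assembles the standard NEGF ingredients the record refers to in the simplest possible setting: a 1-D tight-binding chain with ideal semi-infinite leads, whose retarded self-energies have a closed form, and the Caroli transmission formula. The paper's complex-absorbing-potential extension and disordered leads are not reproduced here.

      import numpy as np

      def lead_self_energy(E, t=1.0, eta=1e-9):
          """Retarded self-energy of a semi-infinite 1-D tight-binding lead
          (onsite energy 0, hopping t), via the closed-form surface Green's function."""
          z = E + 1j * eta
          g = (z - np.sqrt(z - 2 * t) * np.sqrt(z + 2 * t)) / (2 * t * t)
          return t * t * g

      def transmission(E, H_dev, t=1.0):
          """Caroli formula: T = Tr[Gamma_L G Gamma_R G^dagger]."""
          n = H_dev.shape[0]
          Sig_L = np.zeros((n, n), complex); Sig_R = np.zeros((n, n), complex)
          Sig_L[0, 0] = lead_self_energy(E, t)
          Sig_R[-1, -1] = lead_self_energy(E, t)
          G = np.linalg.inv((E + 1e-9j) * np.eye(n) - H_dev - Sig_L - Sig_R)
          Gam_L = 1j * (Sig_L - Sig_L.conj().T)
          Gam_R = 1j * (Sig_R - Sig_R.conj().T)
          return float(np.real(np.trace(Gam_L @ G @ Gam_R @ G.conj().T)))

      # Five-site perfect chain: transmission is ~1 inside the band.
      H = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
      print(round(transmission(0.0, H), 3))

    Disorder in the leads would replace the closed-form self-energy with one computed for the actual lead structure, which is exactly the gap the record addresses.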

  13. Method and system for controlling the position of a beam of light

    DOEpatents

    Steinkraus, Jr., Robert F.; Johnson, Gary W [Livermore, CA; Ruggiero, Anthony J [Livermore, CA

    2011-08-09

    A method and system for laser beam tracking and pointing is based on a conventional position sensing detector (PSD) or quadrant cell, but with the use of amplitude-modulated light. A combination of logarithmic automatic gain control, filtering, and synchronous detection offers high angular precision with exceptional dynamic range and sensitivity, while maintaining wide bandwidth. The use of modulated light enables the tracking of multiple beams simultaneously through the use of different modulation frequencies. It also makes the system resistant to interfering light sources such as ambient light. Beam pointing is accomplished by feeding back errors in the measured beam position to a beam steering element, such as a steering mirror. Closed-loop tracking performance is superior to existing methods, especially under conditions of atmospheric scintillation.
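
    The sketch below illustrates the synchronous-detection idea on a toy 1-D, two-electrode position detector: each beam is modulated at its own frequency, so lock-in demodulation at each frequency separates the beams and rejects unmodulated interference. The detector model, frequencies, and noise are illustrative assumptions, not the patent's circuit.

      import numpy as np

      def lockin(signal, f, t):
          """Synchronous detection: project onto quadrature references at f."""
          i = 2 * np.mean(signal * np.sin(2 * np.pi * f * t))
          q = 2 * np.mean(signal * np.cos(2 * np.pi * f * t))
          return np.hypot(i, q)

      fs = 100_000.0
      t = np.arange(0, 0.1, 1 / fs)
      rng = np.random.default_rng(2)

      # Two beams on one 1-D detector, each modulated at its own frequency.
      beams = {"beam 1": (1.0e3, 0.3), "beam 2": (2.7e3, -0.5)}   # (freq Hz, position)
      e1 = 0.05 * rng.normal(size=t.size)    # electrode signals with broadband noise
      e2 = 0.05 * rng.normal(size=t.size)
      for f, x in beams.values():
          p = np.sin(2 * np.pi * f * t)
          e1 += (1 + x) / 2 * p              # electrode 1 sees more current for x > 0
          e2 += (1 - x) / 2 * p

      for name, (f, x_true) in beams.items():
          a1, a2 = lockin(e1, f, t), lockin(e2, f, t)
          print(name, round((a1 - a2) / (a1 + a2), 3), "true:", x_true)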

  14. Revealing the first uridyl peptide antibiotic biosynthetic gene cluster and probing pacidamycin biosynthesis.

    PubMed

    Rackham, Emma J; Grüschow, Sabine; Goss, Rebecca J M

    2011-01-01

    There is an urgent need for new antibiotics, with resistance continuing to emerge toward existing classes. The pacidamycin antibiotics possess a novel scaffold and exhibit unexploited bioactivity, rendering them attractive research targets. We recently reported the first identification of a biosynthetic cluster encoding uridyl peptide antibiotic assembly and the engineering of pacidamycin biosynthesis into a heterologous host. We report here our methods for identifying the biosynthetic cluster. Our initial experiments employed conventional methods of probing a cosmid library using PCR and Southern blotting; however, it became necessary to adopt a state-of-the-art genome scanning and in silico hybridization approach to pinpoint the cluster. Here we describe our "real" and "virtual" probing methods and contrast the benefits and pitfalls of each approach.

  15. An indirect approach to the extensive calculation of relationship coefficients

    PubMed Central

    Colleau, Jean-Jacques

    2002-01-01

    A method was described for calculating population statistics on relationship coefficients without using the corresponding individual data. It relied on the structure of the inverse of the numerator relationship matrix between the individuals under investigation and their ancestors. Computation times were observed on simulated populations and compared to those incurred with a conventional direct approach. The indirect approach turned out to be very efficient for multiplying the relationship matrix corresponding to planned matings (full design) by any vector. Efficiency was generally still good or very good for calculating statistics on these simulated populations. An extreme implementation of the method is the calculation of the inbreeding coefficients themselves. Relative performance of the indirect method was good, except when many full-sibs existed in the population over many generations. PMID:12270102
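
    The trick the record exploits is that the inverse of the numerator relationship matrix A is sparse and can be written down directly from the pedigree, so a product A·v is obtained by solving A⁻¹x = v instead of ever forming A. A minimal sketch using Henderson's rules, ignoring inbreeding adjustments, with a hypothetical five-animal pedigree:

      import numpy as np
      from scipy.sparse import lil_matrix
      from scipy.sparse.linalg import spsolve

      def a_inverse(pedigree):
          """Sparse A^{-1} from Henderson's rules, ignoring parental inbreeding."""
          n = len(pedigree)
          Ainv = lil_matrix((n, n))
          for i, parents in enumerate(pedigree):      # (sire, dam), None if unknown
              known = [p for p in parents if p is not None]
              w = 1.0 / (1.0 - 0.25 * len(known))     # 1/d_i: 2, 4/3, or 1
              Ainv[i, i] += w
              for p in known:
                  Ainv[i, p] -= 0.5 * w
                  Ainv[p, i] -= 0.5 * w
              for p in known:
                  for q in known:
                      Ainv[p, q] += 0.25 * w
          return Ainv.tocsc()

      # Animals 0, 1 are founders; 2, 3 are full-sibs; 4 is their offspring.
      ped = [(None, None), (None, None), (0, 1), (0, 1), (2, 3)]
      v = np.ones(5)
      print(spsolve(a_inverse(ped), v))   # A @ v, without ever forming A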

  16. An ex vivo approach to botanical-drug interactions: a proof of concept study.

    PubMed

    Wang, Xinwen; Zhu, Hao-Jie; Munoz, Juliana; Gurley, Bill J; Markowitz, John S

    2015-04-02

    Botanical medicines are frequently used in combination with therapeutic drugs, imposing a risk for harmful botanical-drug interactions (BDIs). Among the existing BDI evaluation methods, clinical studies are the most desirable, but due to their expense and protracted time-line for completion, conventional in vitro methodologies remain the most frequently used BDI assessment tools. However, many predictions generated from in vitro studies are inconsistent with clinical findings. Accordingly, the present study aimed to develop a novel ex vivo approach for BDI assessment and expand the safety evaluation methodology in applied ethnopharmacological research. This approach differs from conventional in vitro methods in that rather than botanical extracts or individual phytochemicals being prepared in artificial buffers, human plasma/serum collected from a limited number of subjects administered botanical supplements was utilized to assess BDIs. To validate the methodology, human plasma/serum samples collected from healthy subjects administered either milk thistle or goldenseal extracts were utilized in incubation studies to determine their potential inhibitory effects on CYP2C9 and CYP3A4/5, respectively. Silybin A and B, two principal milk thistle phytochemicals, and hydrastine and berberine, the purported active constituents in goldenseal, were evaluated in both phosphate buffer and human plasma based in vitro incubation systems. Ex vivo study results were consistent with formal clinical study findings for the effect of milk thistle on the disposition of tolbutamide, a CYP2C9 substrate, and for goldenseal's influence on the pharmacokinetics of midazolam, a widely accepted CYP3A4/5 substrate. Compared to conventional in vitro BDI methodologies of assessment, the introduction of human plasma into the in vitro study model changed the observed inhibitory effect of silybin A, silybin B and hydrastine and berberine on CYP2C9 and CYP3A4/5, respectively, results which more closely mirrored those generated in clinical study. Data from conventional buffer-based in vitro studies were less predictive than the ex vivo assessments. Thus, this novel ex vivo approach may be more effective at predicting clinically relevant BDIs than conventional in vitro methods. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  17. Unstructured viscous grid generation by advancing-front method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1993-01-01

    A new method of generating unstructured triangular/tetrahedral grids with high-aspect-ratio cells is proposed. The method is based on a new grid-marching strategy, referred to as 'advancing layers', for construction of highly stretched cells in the boundary layer, and on the conventional advancing-front technique for generation of regular, equilateral cells in the inviscid-flow region. Unlike existing semi-structured viscous grid generation techniques, the new procedure relies on a totally unstructured advancing-front grid strategy, resulting in substantially enhanced grid flexibility and efficiency. The method is conceptually simple but powerful, capable of producing high-quality viscous grids for complex configurations with ease. A number of two-dimensional triangular grids are presented to demonstrate the methodology. The basic elements of the method, however, have been designed primarily with three-dimensional problems in mind, making it extendable to tetrahedral viscous grid generation.

  18. A fully automatable enzymatic method for DNA extraction from plant tissues

    PubMed Central

    Manen, Jean-François; Sinitsyna, Olga; Aeschbach, Lorène; Markov, Alexander V; Sinitsyn, Arkady

    2005-01-01

    Background DNA extraction from plant tissues, unlike DNA isolation from mammalian tissues, remains difficult due to the presence of a rigid cell wall around the plant cells. Currently used methods inevitably require a laborious mechanical grinding step, necessary to disrupt the cell wall for the release of DNA. Results Using a cocktail of different carbohydrases, a method was developed that enables complete digestion of the plant cell walls and subsequent DNA release. Optimized conditions for the digestion reaction minimize DNA shearing and degradation, and maximize DNA release from the plant cell. The method gave good results in 125 of the 156 tested species. Conclusion In combination with conventional DNA isolation techniques, the new enzymatic method allows high-yield, high-molecular-weight DNA to be obtained, which can be used for many applications, including genome characterization by AFLP, RAPD and SSR. Automation of the protocol (from leaf disks to DNA) is possible with existing workstations. PMID:16269076

  19. Microwave Crystallization of Lithium Aluminum Germanium Phosphate Solid-State Electrolyte.

    PubMed

    Mahmoud, Morsi M; Cui, Yuantao; Rohde, Magnus; Ziebert, Carlos; Link, Guido; Seifert, Hans Juergen

    2016-06-23

    Lithium aluminum germanium phosphate (LAGP) glass-ceramics are considered promising solid-state electrolytes for Li-ion batteries. LAGP glass was prepared via the conventional melt-quenching method. Thermal and chemical analyses and X-ray diffraction (XRD) were performed to characterize the prepared glass. Crystallization of the prepared LAGP glass was carried out using both conventional heating and high-frequency microwave (MW) processing. A 30 GHz microwave setup was used to convert the prepared LAGP glass into glass-ceramics, which were compared with conventionally crystallized LAGP glass-ceramics heat-treated in an electric furnace. The ionic conductivities of the LAGP samples obtained from the two different routes were measured using impedance spectroscopy. These samples were also characterized using XRD and scanning electron microscopy (SEM). Microwave processing was successfully used to crystallize LAGP glass into glass-ceramic without the aid of susceptors. The MW-treated sample showed higher total, grain, and grain-boundary ionic conductivities, lower activation energy, and a relatively larger-grained microstructure with less porosity compared to the corresponding conventionally treated sample under the same optimized heat-treatment conditions. The enhanced total, grain, and grain-boundary ionic conductivities, along with the reduced activation energy observed in the MW-treated sample, were considered experimental evidence for the existence of a microwave effect in the LAGP crystallization process. MW processing is a promising candidate technology for the production of solid-state electrolytes for Li-ion batteries.

  20. Manifold Regularized Experimental Design for Active Learning.

    PubMed

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to reduce the labeling effort of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel active learning method called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the samples selected to be labeled by the user. Unlike existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.

  1. Values of molecular markers in the differential diagnosis of thyroid abnormalities.

    PubMed

    Tennakoon, T M P B; Rushdhi, M; Ranasinghe, A D C U; Dassanayake, R S

    2017-06-01

    Thyroid cancer (TC), follicular adenoma (FA) and Hashimoto's thyroiditis (HT) are three of the most frequently reported abnormalities that affect the thyroid gland. Frequent co-occurrence, along with similar histopathological features, is observed between TC and FA as well as between TC and HT. Conventional diagnostic methods such as histochemical analysis present complications in differential diagnosis when these abnormalities occur simultaneously. Hence, the authors recognize novel methods based on screening for the genetic defects of thyroid abnormalities as viable diagnostic and prognostic methods that could complement the conventional ones. We have extensively reviewed the existing literature on TC, FA and HT, and also on three genes, namely braf, nras and ret/ptc, that could be used to differentially diagnose the three abnormalities. Emphasis was also given to the screening methods available to detect the said molecular markers. It can be inferred from the analysis of the available data that the utilization of braf, nras and ret/ptc as markers for the therapeutic evaluation of FA and HT is debatable. However, molecular screening for braf, nras and ret/ptc mutations proves to be a conclusive method that could be employed to differentially diagnose TC from HT and FA in the instance of a suspected co-occurrence. Thyroid cancer patients can benefit greatly from screening for the said genetic markers, especially the braf gene, due to its diagnostic value as well as the availability of personalized medicine targeted specifically at braf mutants.

  2. Subspace-based interference removal methods for a multichannel biomagnetic sensor array.

    PubMed

    Sekihara, Kensuke; Nagarajan, Srikantan S

    2017-10-01

    In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial domain) signal subspace by introducing a new definition of signal subspace in the time domain. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights over existing interference removal methods from a unified perspective. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently-proposed dual signal subspace projection. Our analysis using the notion of the time domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.

  3. Subspace-based interference removal methods for a multichannel biomagnetic sensor array

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Nagarajan, Srikantan S.

    2017-10-01

    Objective. In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial domain) signal subspace by introducing a new definition of signal subspace in the time domain. Approach. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights over existing interference removal methods from a unified perspective. Main results and significance. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently-proposed dual signal subspace projection. Our analysis using the notion of the time domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.
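
    To fix ideas, the following sketch implements the spatial-domain signal space projection that the two records above start from: the data are projected onto the orthogonal complement of the span of the interference lead-field vectors (the time-domain variant applies the analogous projection to rows rather than columns). The channel count, lead field, and interference waveform are simulated assumptions.

      import numpy as np

      def ssp(data, interference_basis):
          """Spatial-domain SSP: project out the interference subspace.
          `interference_basis` must be channels x n_interference."""
          B = interference_basis
          P = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)
          return P @ data

      rng = np.random.default_rng(3)
      n_ch, n_t = 32, 1000
      b = rng.normal(size=(n_ch, 1))                  # interference lead-field vector
      brain = 0.1 * rng.normal(size=(n_ch, n_t))      # stand-in for signals of interest
      interf = b @ (5.0 * np.sin(np.linspace(0, 50, n_t))[None, :])
      cleaned = ssp(brain + interf, b)
      print(np.linalg.norm(cleaned - ssp(brain, b)))  # ~0: interference fully removed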

  4. Catholics in the REA, 1903-1953

    ERIC Educational Resources Information Center

    Elias, John L.

    2004-01-01

    This article describes the involvement of Roman Catholics in the Religious Education Association during the first 50 years of its existence. It examines attitudes of Protestants toward Catholics expressed in journal articles, convention speeches, and archival material. It presents the contributions of Roman Catholics at conventions and in journal…

  5. Efficacy of some non-conventional herbal medications (sulforaphane, tanshinone IIA, and tetramethylpyrazine) in inducing neuroprotection in comparison with interleukin-10 after spinal cord injury: A meta-analysis

    PubMed Central

    Koushki, Davood; Latifi, Sahar; Norouzi Javidan, Abbas; Matin, Marzieh

    2015-01-01

    Context Inflammation after spinal cord injury (SCI) may be responsible for further neural damage, and therefore inhibition of inflammatory processes may exert a neuroprotective effect. Objectives To assess the efficacy of some non-conventional herbal medications, including sulforaphane, tanshinone IIA, and tetramethylpyrazine, in reducing inflammation, and to compare them with a known effective anti-inflammatory agent (interleukin-10 (IL-10)). Methods We searched for relevant articles in the Ovid database, Medline (PubMed), EMBASE, Google Scholar, Cochrane, and Scopus up to June 2013. The efficacy of each treatment and the study powers were compared using a random effects model of meta-analysis. To our knowledge, no conflict of interest exists. Results Eighteen articles were entered into the study. The meta-analysis revealed that exogenous IL-10 was more effective than the mentioned herbal extracts. The proposed pathways for each medication's effect on reducing the inflammation process are complex, and many overlaps may exist. Conclusion IL-10 has a strong effect in the induction of neuroprotection and neurorecovery after SCI through multiple pathways. Tetramethylpyrazine has an acceptable influence in reducing inflammation through the up-regulation of IL-10. Outcomes of sulforaphane and tanshinone IIA administration are acceptable but still weaker than those of IL-10. PMID:24969510

  6. Application of Fracture Distribution Prediction Model in Xihu Depression of East China Sea

    NASA Astrophysics Data System (ADS)

    Yan, Weifeng; Duan, Feifei; Zhang, Le; Li, Ming

    2018-02-01

    Each type of logging data responds differently to changes in formation characteristics, and the presence of fractures produces outliers. For this reason, the development of fractures in a formation can be characterized by fine analysis of the logging curves. Well logs such as resistivity, sonic transit time, density, neutron porosity, and gamma ray, which are classified as conventional well logs, are sensitive to formation fractures. Because the traditional fracture prediction model, which uses a simple weighted average of the different logging data to calculate a comprehensive fracture index, is susceptible to subjective factors and exhibits large deviations, a statistical method is introduced. Combining the responses of conventional logging data to the development of formation fractures, a prediction model based on membership functions is established; its essence is to analyse the logging data with fuzzy mathematics. The fracture predictions for a well in the NX block of the Xihu depression obtained with the two models are compared with imaging logging, which shows that the accuracy of the membership-function model is better than that of the traditional model. Furthermore, its predictions are highly consistent with the imaging logs and better reflect the development of fractures. It can provide a reference for engineering practice.
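
    A minimal sketch of the fuzzy-evaluation idea: each log response is first mapped through a membership function onto [0, 1] before aggregation into a fracture index. The thresholds, weights, and log values below are hypothetical, not the paper's calibration.

      import numpy as np

      def membership(x, lo, hi, increasing=True):
          """Piecewise-linear membership degree in [0, 1]."""
          m = float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))
          return m if increasing else 1.0 - m

      # Hypothetical deviations of each log from its background trend, with weights:
      responses = {
          "resistivity drop (ohm.m)": (membership(35.0, 10.0, 60.0), 0.30),
          "sonic increase (us/ft)":   (membership(12.0, 5.0, 25.0), 0.25),
          "density drop (g/cm3)":     (membership(0.08, 0.02, 0.15), 0.25),
          "neutron increase (p.u.)":  (membership(3.0, 1.0, 8.0), 0.20),
      }
      fracture_index = sum(m * w for m, w in responses.values())
      print(round(fracture_index, 3))   # comprehensive fracture index in [0, 1]

    Mapping through memberships first is what distinguishes this from directly averaging the raw log values, which is the traditional model the record criticizes.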

  7. Comparison of the lysis centrifugation method with the conventional blood culture method in cases of sepsis in a tertiary care hospital.

    PubMed

    Parikh, Harshal R; De, Anuradha S; Baveja, Sujata M

    2012-07-01

    Physicians and microbiologists have long recognized that the presence of living microorganisms in the blood of a patient carries considerable morbidity and mortality. Hence, blood cultures have become a critically important and frequently performed test in clinical microbiology laboratories for the diagnosis of sepsis. The aim was to compare the conventional blood culture method with the lysis centrifugation method in cases of sepsis. Two hundred nonduplicate blood cultures from cases of sepsis were analyzed using the two blood culture methods concurrently for recovery of bacteria from patients diagnosed clinically with sepsis: the conventional blood culture method using trypticase soy broth, and the lysis centrifugation method using saponin with centrifugation at 3000 g for 30 minutes. Overall, bacteria were recovered from 17.5% of the 200 blood cultures. The conventional blood culture method had a higher yield of organisms, especially Gram-positive cocci. The lysis centrifugation method was comparable with the former with respect to Gram-negative bacilli. The sensitivity of the lysis centrifugation method relative to the conventional blood culture method was 49.75% in this study; specificity was 98.21% and diagnostic accuracy was 89.5%. In almost every instance, growth was detected earlier by the lysis centrifugation method, which was statistically significant. Contamination by lysis centrifugation was minimal, while that by the conventional method was high. Time to growth by the lysis centrifugation method was significantly shorter (P < 0.001) than time to growth by the conventional blood culture method. For the diagnosis of sepsis, a combination of the lysis centrifugation method and the conventional blood culture method with trypticase soy broth or biphasic media is advisable, in order to achieve faster recovery and a better yield of microorganisms.

  8. Real-time fluorescence loop mediated isothermal amplification for the diagnosis of malaria.

    PubMed

    Lucchi, Naomi W; Demas, Allison; Narayanan, Jothikumar; Sumari, Deborah; Kabanywanyi, Abdunoor; Kachur, S Patrick; Barnwell, John W; Udhayakumar, Venkatachalam

    2010-10-29

    Molecular diagnostic methods can complement existing tools to improve the diagnosis of malaria. However, they require good laboratory infrastructure thereby restricting their use to reference laboratories and research studies. Therefore, adopting molecular tools for routine use in malaria endemic countries will require simpler molecular platforms. The recently developed loop-mediated isothermal amplification (LAMP) method is relatively simple and can be improved for better use in endemic countries. In this study, we attempted to improve this method for malaria diagnosis by using a simple and portable device capable of performing both the amplification and detection (by fluorescence) of LAMP in one platform. We refer to this as the RealAmp method. Published genus-specific primers were used to test the utility of this method. DNA derived from different species of malaria parasites was used for the initial characterization. Clinical samples of P. falciparum were used to determine the sensitivity and specificity of this system compared to microscopy and a nested PCR method. Additionally, directly boiled parasite preparations were compared with a conventional DNA isolation method. The RealAmp method was found to be simple and allowed real-time detection of DNA amplification. The time to amplification varied but was generally less than 60 minutes. All human-infecting Plasmodium species were detected. The sensitivity and specificity of RealAmp in detecting P. falciparum was 96.7% and 91.7% respectively, compared to microscopy and 98.9% and 100% respectively, compared to a standard nested PCR method. In addition, this method consistently detected P. falciparum from directly boiled blood samples. This RealAmp method has great potential as a field usable molecular tool for diagnosis of malaria. This tool can provide an alternative to conventional PCR based diagnostic methods for field use in clinical and operational programs.

  9. Chlorine measurement in the jet singlet oxygen generator considering the effects of the droplets.

    PubMed

    Goodarzi, Mohamad S; Saghafifar, Hossein

    2016-09-01

    A new method is presented to measure chlorine concentration more accurately than the conventional method in the exhaust gases of a jet-type singlet oxygen generator. One problem in this measurement is the presence of micrometer-sized droplets. In this article, an empirical method is reported to eliminate the effects of the droplets. Two wavelengths from a fiber-coupled LED are adopted, and the measurement is made at both selected wavelengths. Chlorine is measured more accurately by the two-wavelength method than by the one-wavelength method because the droplet term is eliminated from the equations. The method is validated without basic hydrogen peroxide injection in the reactor. In this case, a pressure meter reading in the diagnostic cell is compared with the optically calculated pressure obtained by the one-wavelength and two-wavelength methods. It is found that the chlorine measurements by the two-wavelength method and the pressure meter are nearly the same, while the one-wavelength method has a significant error due to the droplets.
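
    One way the two-wavelength cancellation can work, sketched here under the assumption (not stated explicitly in the record) that the droplet extinction is nearly equal at the two selected wavelengths: write the measured absorbance at each wavelength as a chlorine term plus a droplet term,

      \[ A(\lambda_i) = \varepsilon_{\mathrm{Cl_2}}(\lambda_i)\, c\, L + D, \qquad i = 1, 2, \]

    so that subtracting the two measurements cancels D and isolates the chlorine concentration,

      \[ c = \frac{A(\lambda_1) - A(\lambda_2)}{\bigl[\varepsilon_{\mathrm{Cl_2}}(\lambda_1) - \varepsilon_{\mathrm{Cl_2}}(\lambda_2)\bigr]\, L}, \]

    where c is the chlorine concentration, L the optical path length, ε_Cl2 the molar absorptivity, and D the droplet contribution. A one-wavelength measurement has no way to separate D from the chlorine term, which is the error the record attributes to the droplets.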

  10. Global optimization method based on ray tracing to achieve optimum figure error compensation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin

    2017-02-01

    Figure error degrades the performance of an optical system. When predicting performance and performing system assembly, compensation by clocking optical components around the optical axis is a conventional but user-dependent method. Commercial optical software cannot optimize this clocking. Meanwhile, existing automatic figure-error balancing methods can introduce approximation errors, and building the optimization model is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method for figure-error balancing is proposed. The method is based on precise ray tracing, not approximate calculation, to compute the wavefront error for a given combination of element rotation angles. The composite wavefront error root-mean-square (RMS) acts as the cost function. A simulated annealing algorithm is used to seek the optimal combination of rotation angles of the optical elements. The method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than the previous approximate analytical method.
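
    A minimal sketch of simulated-annealing clocking optimization. The merit function below simply rotates and sums toy azimuthal figure-error profiles and takes the RMS; it stands in for the paper's ray-traced composite wavefront RMS, and all profiles and annealing parameters are hypothetical.

      import numpy as np

      rng = np.random.default_rng(4)

      def composite_rms(angles, profiles):
          """Stand-in merit function: rotate each element's azimuthal error
          profile by its clocking angle, sum, and take the RMS."""
          total = sum(np.roll(p, int(a) % 360) for a, p in zip(angles, profiles))
          return float(np.sqrt(np.mean(total ** 2)))

      def anneal(profiles, t0=1.0, cooling=0.995, steps=5000):
          x = rng.uniform(0, 360, len(profiles))
          cur = best = composite_rms(x, profiles)
          best_x = x.copy()
          T = t0
          for _ in range(steps):
              cand = x.copy()
              i = rng.integers(len(profiles))
              cand[i] = (cand[i] + rng.normal(0, 30)) % 360   # perturb one clocking
              c = composite_rms(cand, profiles)
              if c < cur or rng.random() < np.exp((cur - c) / T):   # Metropolis rule
                  x, cur = cand, c
                  if c < best:
                      best, best_x = c, x.copy()
              T *= cooling
          return best_x, best

      # Toy data: three elements with sinusoidal azimuthal error profiles.
      profiles = [np.sin(np.radians(np.arange(360) + rng.uniform(0, 360))) for _ in range(3)]
      angles, rms = anneal(profiles)
      print(round(rms, 4))   # near zero: the three profiles can be clocked to cancel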

  11. A work study of the CAD/CAM method and conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis.

    PubMed

    Wong, M S; Cheng, J C Y; Wong, M W; So, S F

    2005-04-01

    A study was conducted to compare the CAD/CAM method with the conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis. Ten subjects were recruited for this study. Efficiency analyses of the two methods were performed from the cast filling/digitization process to completion of cast/image rectification. The dimensional changes of the casts/models rectified by the two cast rectification methods were also investigated. The results demonstrated that the CAD/CAM method was faster than the conventional manual method in the studied processes. The mean rectification time of the CAD/CAM method was shorter than that of the conventional manual method by 108.3 min (63.5%), indicating that the CAD/CAM method took about one third of the time of the conventional manual method to finish cast rectification. In the comparison of cast/image dimensional differences between the conventional manual method and the CAD/CAM method, five major dimensions in each of the five rectified regions, namely the axilla, thoracic, lumbar, abdominal and pelvic regions, were examined. There were no significant dimensional differences (p > 0.05) in 19 out of the 25 studied dimensions. This study demonstrated that the CAD/CAM system could save time in the rectification process and produce rectified casts closely resembling those of the conventional manual method.

  12. Traceability in hardness measurements: from the definition to industry

    NASA Astrophysics Data System (ADS)

    Germak, Alessandro; Herrmann, Konrad; Low, Samuel

    2010-04-01

    The measurement of hardness has been, and continues to be, of significant importance to many of the world's manufacturing industries. Conventional hardness testing is the most commonly used method for acceptance testing and production quality control of metals and metallic products. Instrumented indentation is one of the few techniques available for obtaining various property values for coatings and electronic products at the micrometre and nanometre dimensional scales. For these industries to be successful, it is critical that measurements made by suppliers and customers agree within some practical limits. To help assure this agreement, a traceability chain for hardness measurement, from the hardness definition to industry, has been developed and has evolved over the past 100 years, but its development has been complicated. A hardness measurement value requires traceability not only of force, length and time measurements but also of the hardness values measured by the hardness machine. These multiple traceability paths are needed because a hardness measurement is affected by other influence parameters that are often difficult to identify, quantify and correct. This paper describes the current state of hardness measurement traceability for the conventional hardness methods (i.e. Rockwell, Brinell, Vickers and Knoop hardness) and for special-application hardness and indentation methods (i.e. elastomer, dynamic, portable and instrumented indentation).

  13. Slump sitting X-ray of the lumbar spine is superior to the conventional flexion view in assessing lumbar spine instability.

    PubMed

    Hey, Hwee Weng Dennis; Lau, Eugene Tze-Chun; Lim, Joel-Louis; Choong, Denise Ai-Wen; Tan, Chuen-Seng; Liu, Gabriel Ka-Po; Wong, Hee-Kit

    2017-03-01

    Flexion radiographs have been used to identify cases of spinal instability. However, current methods are not standardized and are not sufficiently sensitive or specific to identify instability. This study aimed to introduce a new slump sitting method for performing lumbar spine flexion radiographs, and to compare the angular ranges of motion (ROMs) and displacements between the conventional method and the new method. This was a prospective study on radiological evaluation of lumbar spine flexion ROMs and displacements using dynamic radiographs. Sixty patients were recruited from a single tertiary spine center. Angular and displacement measurements of lumbar spine flexion were carried out. Participants were randomly allocated into two groups: those who did the new method first, followed by the conventional method, versus those who did the conventional method first, followed by the new method. A comparison of the angular and displacement measurements of lumbar spine flexion between the conventional method and the new method was performed and tested for superiority and non-inferiority. The measurements of global lumbar angular ROM were, on average, 17.3° larger (p<.0001) with the new slump sitting method than with the conventional method. The differences were most significant at the levels of L3-L4, L4-L5, and L5-S1 (p<.0001, p<.0001 and p=.001, respectively). There was no significant difference between the methods when measuring lumbar displacements (p=.814). The new slump sitting dynamic radiograph was shown to be superior to the conventional method in measuring angular ROM and non-inferior in the measurement of displacement. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Magnetic properties in polycrystalline and single crystal Ca-doped LaCoO3

    NASA Astrophysics Data System (ADS)

    Zeng, R.; Debnath, J. C.; Chen, D. P.; Shamba, P.; Wang, J. L.; Kennedy, S. J.; Campbell, S. J.; Silver, T.; Dou, S. X.

    2011-04-01

    Polycrystalline (PC) and single crystalline (SC) Ca-doped LaCoO3 (LCCO) samples with the perovskite structure were synthesized by conventional solid-state reaction and by the floating-zone growth method, respectively. We present the results of a comprehensive investigation of the magnetic properties of the LCCO system. Systematic measurements have been conducted on dc magnetization, ac susceptibility, exchange bias, and the magnetocaloric effect. These findings suggest that complex structural phases, ferromagnetic (FM) and spin-glass/cluster-spin-glass (CSG) states, and transitions between them exist in the PC samples, while the magnetic phase of the SC samples is much simpler. It was also of interest to discover that the CSG state induces a magnetic field memory effect and an exchange-bias-like effect, and that a large inverse irreversible magnetocaloric effect exists in this system.

  15. Orbital stability and energy estimate of ground states of saturable nonlinear Schrödinger equations with intensity functions in R2

    NASA Astrophysics Data System (ADS)

    Lin, Tai-Chia; Wang, Xiaoming; Wang, Zhi-Qiang

    2017-10-01

    Conventionally, the existence and orbital stability of ground states of nonlinear Schrödinger (NLS) equations with power-law nonlinearity (the subcritical case) can be proved by an argument using strict subadditivity of the ground state energy and the concentration compactness method of Cazenave and Lions [4]. However, for saturable nonlinearity such an argument is not applicable, because strict subadditivity of the ground state energy fails in this case. Here we use a convexity argument to prove the existence and orbital stability of ground states of NLS equations with saturable nonlinearity and intensity functions in R2. In addition, we derive an energy estimate for ground states of saturable NLS equations with intensity functions, using the eigenvalue estimate of saturable NLS equations without an intensity function.
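
    The record does not write out the equation. One commonly studied saturable NLS with an intensity function I(x), given here only as an illustrative assumption about the class of equations meant, is

      \[ i\,\partial_t \psi + \Delta \psi + \frac{\bigl(I(x) + |\psi|^2\bigr)\,\psi}{1 + s\bigl(I(x) + |\psi|^2\bigr)} = 0, \qquad x \in \mathbb{R}^2, \; s > 0. \]

    The nonlinearity saturates as |ψ|² grows instead of scaling like a pure power, which is why the strict-subadditivity argument available in the power-law case breaks down.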

  16. Somatic coliphages as surrogates for enteroviruses in sludge hygienization treatments.

    PubMed

    Martín-Díaz, Julia; Casas-Mangas, Raquel; García-Aljaro, Cristina; Blanch, Anicet R; Lucena, Francisco

    2016-01-01

    Conventional bacterial indicators have serious drawbacks when used to provide information about the persistence of viral pathogens during sludge hygienization treatments. This calls for the search for alternative viral indicators. The ability of somatic coliphages (SOMCPH) to act as surrogates for enteroviruses was assessed in 47 sludge samples subjected to novel treatment processes. SOMCPH, infectious enteroviruses and genome copies of enteroviruses were monitored. Only one of these groups, the bacteriophages, was present in the sludge at concentrations that allowed evaluation of the treatments' performance. An indicator/pathogen ratio of 4 log10 (PFU/g dw) was found between SOMCPH and infective enteroviruses, and their detection accuracy was assessed. The results obtained, and the existence of rapid and standardized methods, encourage the inclusion of SOMCPH quantification in future sludge directives. In addition, an existing real-time quantitative polymerase chain reaction (RT-qPCR) assay for enteroviruses was adapted and applied.

  17. Design of a steganographic virtual operating system

    NASA Astrophysics Data System (ADS)

    Ashendorf, Elan; Craver, Scott

    2015-03-01

    A steganographic file system is a secure file system whose very existence on a disk is concealed. Customarily, these systems hide an encrypted volume within unused disk blocks, slack space, or atop conventional encrypted volumes. These file systems are far from undetectable, however: aside from their ciphertext footprint, they require a software or driver installation whose presence can attract attention and then targeted surveillance. We describe a new steganographic operating environment that requires no visible software installation, launching instead from a concealed bootstrap program that can be extracted and invoked with a chain of common Unix commands. Our system conceals its payload within innocuous files that typically contain high-entropy data, producing a footprint that is far less conspicuous than existing methods. The system uses a local web server to provide a file system, user interface and applications through a web architecture.

  18. The bias and signal attenuation present in conventional pollen-based climate reconstructions as assessed by early climate data from Minnesota, USA.

    PubMed

    St Jacques, Jeannine-Marie; Cumming, Brian F; Sauchyn, David J; Smol, John P

    2015-01-01

    The inference of past temperatures from a sedimentary pollen record depends upon the stationarity of the pollen-climate relationship. However, humans have altered vegetation independent of changes to climate, and consequently modern pollen deposition is a product of landscape disturbance and climate, which is different from the dominance of climate-derived processes in the past. This problem could cause serious signal distortion in pollen-based reconstructions. In the north-central United States, direct human impacts have strongly altered the modern vegetation, and hence the pollen rain, since Euro-American settlement in the mid-19th century. Using instrumental temperature data from the early 1800s from Fort Snelling (Minnesota), we assessed the signal distortion and bias introduced by the conventional method of inferring temperature from pollen assemblages, in comparison to a calibration set built from pre-settlement pollen assemblages and the earliest instrumental climate data. The early post-settlement calibration set provides more accurate reconstructions of the 19th-century instrumental record, with less bias, than the modern set does. When both the modern and pre-industrial calibration sets are used to reconstruct past temperatures since AD 1116 from pollen counts from a varve-dated record from Lake Mina, Minnesota, the conventional inference method produces significant low-frequency (centennial-scale) signal attenuation and a positive bias of 0.8-1.7 °C, resulting in an overestimation of Little Ice Age temperature and likely an underestimation of the extent and rate of anthropogenic warming in this region. However, high-frequency (annual-scale) signal attenuation exists with both methods. Hence, we conclude that any pollen spectra from before Euro-American settlement in this region should be interpreted using a pre-Euro-American settlement pollen set, paired with the earliest instrumental climate records. It remains to be explored how widespread this problem is when conventional pollen-based inference methods are used, and consequently how seriously regional manifestations of global warming have been underestimated with traditional pollen-based techniques.

  19. Compression and Transmission of RF Signals for Telediagnosis

    NASA Astrophysics Data System (ADS)

    Seko, Toshihiro; Doi, Motonori; Oshiro, Osamu; Chihara, Kunihiro

    2000-05-01

    Health care is a critical issue nowadays, and much emphasis is given to quality care for all people. Telediagnosis has attracted public attention. We propose a new method of ultrasound image transmission for telediagnosis. In conventional methods, video image signals are transmitted. In our method, the RF signals acquired by an ultrasound probe are transmitted. The RF signals can be transformed into color Doppler images or high-resolution images by the receiver. Because a store-and-forward scheme is adopted, the proposed system can be realized with existing technology such as hypertext transfer protocol (HTTP) and file transfer protocol (FTP). In this paper, we describe two lossless compression methods specialized for the transmission of RF signals: one exploits the characteristics of the RF signal, and the other reduces the amount of data. Measurements were performed in water targeting an iron block and triangular Styrofoam. Additionally, abdominal fat measurement was performed. Our method achieved a compression rate of 13% with 8-bit data.
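
    The record does not specify either of the two compression schemes. As a generic illustration of lossless compression exploiting the sample-to-sample correlation of a band-limited RF signal (the delta-plus-entropy-coding pipeline below is an assumption, not the authors' method):

      import zlib
      import numpy as np

      def compress_rf(samples8):
          """Delta-encode 8-bit samples, then entropy-code the residuals."""
          res = np.diff(samples8.astype(np.int16), prepend=np.int16(0))
          return zlib.compress(res.tobytes(), 9)

      def decompress_rf(blob):
          res = np.frombuffer(zlib.decompress(blob), dtype=np.int16)
          return np.cumsum(res).astype(np.uint8)

      # Toy RF-like signal: a sine carrier with small additive noise, 8-bit samples.
      rng = np.random.default_rng(6)
      t = np.arange(100_000)
      rf = (127 + 100 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 2, t.size)).astype(np.uint8)
      blob = compress_rf(rf)
      print(len(blob) / len(rf.tobytes()))          # < 1: smaller than the raw bytes
      print(np.array_equal(decompress_rf(blob), rf))  # True: perfectly lossless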

  20. Single-molecule diffusion and conformational dynamics by spatial integration of temporal fluctuations

    PubMed Central

    Serag, Maged F.; Abadi, Maram; Habuchi, Satoshi

    2014-01-01

    Single-molecule localization and tracking has been used to translate spatiotemporal information of individual molecules into maps of their diffusion behaviours. However, accurate analysis of diffusion behaviours, and the inclusion of other parameters such as the conformation and size of molecules, remain limitations of the method. Here, we report a method that addresses these limitations of existing single-molecule localization methods. The method is based on temporal tracking of the cumulative area occupied by molecules. These temporal fluctuations are tied to molecular size, rates of diffusion and conformational changes. By analysing fluorescent nanospheres and double-stranded DNA molecules of different lengths and topological forms, we demonstrate that our cumulative-area method surpasses the conventional single-molecule localization method in terms of the accuracy of the determined diffusion coefficients. Furthermore, the cumulative-area method provides conformational relaxation times of structurally flexible chains along with diffusion coefficients, which together are relevant to work in a wide spectrum of scientific fields. PMID:25283876
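
    The core quantity is easy to picture: track the growth of the total pixelated area a molecule has visited. A toy sketch with simulated 2-D random walks (the pixel size, step sizes, and trajectory lengths are arbitrary assumptions, not the paper's analysis) shows the cumulative area growing faster for a faster diffuser:

      import numpy as np

      def cumulative_area(traj, px=0.05):
          """Area of the set of distinct pixels visited up to each time point."""
          cells, area = set(), []
          for x, y in traj:
              cells.add((int(x // px), int(y // px)))
              area.append(len(cells) * px * px)
          return np.array(area)

      rng = np.random.default_rng(5)
      slow = np.cumsum(rng.normal(0.0, 0.01, (20000, 2)), axis=0)   # small D
      fast = np.cumsum(rng.normal(0.0, 0.03, (20000, 2)), axis=0)   # larger D
      print(cumulative_area(slow)[-1], cumulative_area(fast)[-1])   # fast covers more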

  1. Comparison of anatomical, functional and regression methods for estimating the rotation axes of the forearm.

    PubMed

    Fraysse, François; Thewlis, Dominic

    2014-11-07

    Numerous methods exist to estimate the pose of the axes of rotation of the forearm. These include anatomical definitions, such as the conventions proposed by the ISB, and functional methods based on instantaneous helical axes, which are commonly accepted as the modelling gold standard for non-invasive, in-vivo studies. We investigated the validity of a third method, based on regression equations, to estimate the rotation axes of the forearm. We also assessed the accuracy of both ISB methods. Axes obtained from a functional method were considered as the reference. Results indicate a large inter-subject variability in the axes positions, in accordance with previous studies. Both ISB methods gave the same level of accuracy in axes position estimations. Regression equations seem to improve estimation of the flexion-extension axis but not the pronation-supination axis. Overall, given the large inter-subject variability, the use of regression equations cannot be recommended. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Comparison of Maraging Steel Micro- and Nanostructure Produced Conventionally and by Laser Additive Manufacturing.

    PubMed

    Jägle, Eric A; Sheng, Zhendong; Kürnsteiner, Philipp; Ocylok, Sörn; Weisheit, Andreas; Raabe, Dierk

    2016-12-24

    Maraging steels are used to produce tools by Additive Manufacturing (AM) methods such as Laser Metal Deposition (LMD) and Selective Laser Melting (SLM). Although it is well established that dense parts can be produced by AM, the influence of the AM process on the microstructure, in particular the content of retained and reversed austenite, as well as on the nanostructure, especially the precipitate density and chemistry, is not yet explored. Here, we study these features using microhardness measurements, Optical Microscopy, Electron Backscatter Diffraction (EBSD), Energy Dispersive Spectroscopy (EDS), and Atom Probe Tomography (APT) in the as-produced state and during ageing heat treatment. We find that due to microsegregation, retained austenite exists in the as-LMD- and as-SLM-produced states but not in the conventionally-produced material. The hardness in the as-LMD-produced state is higher than in the conventionally and SLM-produced materials, however, not in the uppermost layers. By APT, it is confirmed that this is due to early stages of precipitation induced by the cyclic re-heating upon further deposition, i.e., the intrinsic heat treatment associated with LMD. In the peak-aged state, which is reached after a similar time in all materials, the hardness of SLM- and LMD-produced material is slightly lower than in conventionally-produced material due to the presence of retained austenite and reversed austenite formed during ageing.

  3. Yield of glyphosate-resistant sugar beets and efficiency of weed management systems with glyphosate and conventional herbicides under German and Polish crop production.

    PubMed

    Nichterlein, Henrike; Matzk, Anja; Kordas, Leszek; Kraus, Josef; Stibbe, Carsten

    2013-08-01

    In sugar beet production, weed control is one of the most important and most expensive practices to ensure yield. Since glyphosate-resistant sugar beets are not yet approved for cultivation in the EU, little commercial experience exists with these sugar beets in Europe. Experimental field trials were conducted in five environments (Germany, Poland, 2010, 2011) to compare the effects of glyphosate with those of conventional weed control programs on weed development, weed control efficiency and yield. The results show that the glyphosate programs, compared to the conventional methods, reduced both the number of herbicide applications and, to a similar degree, the dosage of active ingredients. The results also showed effective weed control with glyphosate even when weed cover was greater and the sugar beets had reached the later growth stage of four true leaves. Glyphosate-resistant sugar beets treated with glyphosate two or three times showed an increase in white sugar yield of 4 to 18% in comparison to the high-dosage conventional herbicide systems. In summary, sugar beets under glyphosate management can contribute positively to the increasingly demanding requirements of efficient sugar beet cultivation and to the demands of society and politics to reduce the use of chemical plant protection products in the environment.

  4. SU-E-T-293: Simplifying Assumption for Determining Sc and Sp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, R; Cheung, A; Anderson, R

    Purpose: Scp(mlc,jaw) is a two-dimensional function of collimator field size and effective field size. Conventionally, Scp(mlc,jaw) is treated as separable into components Sc(jaw) and Sp(mlc). Scp(mlc=jaw) is measured in phantom and Sc(jaw) is measured in air, with Sp=Scp/Sc. Ideally, Sc and Sp would be able to predict measured values of Scp(mlc,jaw) for all combinations of mlc and jaw. However, ideal Sc and Sp functions do not exist, and a measured two-dimensional Scp dataset cannot be decomposed into a unique pair of one-dimensional functions. If the output functions Sc(jaw) and Sp(mlc) were equal to each other, and thus each equal to Scp(mlc=jaw)^0.5, this condition would lead to a simpler measurement process by eliminating the need for in-air measurements. Without the distorting effect of the buildup cap, small-field measurement would be limited only by the dimensions of the detector and would thus be improved by this simplification of the output functions. The goal of the present study is to evaluate the assumption that Sc=Sp. Methods: For a 6 MV x-ray beam, Sc and Sp were determined both by the conventional method and as Scp(mlc=jaw)^0.5. Square-field benchmark values of Scp(mlc,jaw) were then measured across the range from 2×2 to 29×29. Both Sc and Sp functions were then evaluated as to their ability to predict these measurements. Results: Both methods produced qualitatively similar results, with <4% error in all cases and >3% error in one case. The conventional method produced two cases with >2% error, while the square-root method produced only one such case. Conclusion: Though it would need to be validated for any specific beam to which it might be applied, under the conditions studied the simplifying assumption that Sc = Sp is justified.
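
    A small numerical sketch of the square-root assumption: take measured diagonal outputs Scp(s,s), set Sc = Sp = Scp(s,s)^0.5, and predict off-diagonal outputs as Sc(jaw)·Sp(mlc). The field sizes and output values below are hypothetical placeholders, not the study's measurements.

    ```python
    import numpy as np

    # Hypothetical square-field measurements Scp(s, s) at field sizes s (cm)
    sizes = np.array([2.0, 5.0, 10.0, 20.0, 29.0])
    scp_diag = np.array([0.92, 0.96, 1.00, 1.04, 1.06])

    sqrt_out = np.sqrt(scp_diag)  # square-root method: Sc = Sp = Scp(s, s)^0.5

    def s_of(size):
        # interpolate the one-dimensional output function at a field size
        return np.interp(size, sizes, sqrt_out)

    def scp_predicted(mlc, jaw):
        # separability assumption: Scp(mlc, jaw) ~ Sc(jaw) * Sp(mlc)
        return s_of(jaw) * s_of(mlc)

    print(scp_predicted(5.0, 10.0))  # e.g. a 5x5 MLC field under 10x10 jaws
    ```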

  5. Optical methods for non-contact measurements of membranes

    NASA Astrophysics Data System (ADS)

    Roose, S.; Stockman, Y.; Rochus, P.; Kuhn, T.; Lang, M.; Baier, H.; Langlois, S.; Casarosa, G.

    2009-11-01

    Structures for space applications very often suffer stringent mass constraints. Lightweight structures are developed for this purpose, through the use of deployable and/or inflatable beams, and thin-film membranes. Their inherent properties (low mass and small thickness) preclude the use of conventional measurement methods (accelerometers and displacement transducers for example) during on-ground testing. In this context, innovative non-contact measurement methods need to be investigated for these stretched membranes. The object of the present project is to review existing measurement systems capable of measuring characteristics of membrane space-structures such as: dot-projection videogrammetry (static measurements), stereo-correlation (dynamic and static measurements), fringe projection (wrinkles) and 3D laser scanning vibrometry (dynamic measurements). Therefore, minimum requirements were given for the study in order to have representative test articles covering a wide range of applications. We present test results obtained with the different methods on our test articles.

  6. LCAMP: Location Constrained Approximate Message Passing for Compressed Sensing MRI

    PubMed Central

    Sung, Kyunghyun; Daniel, Bruce L; Hargreaves, Brian A

    2016-01-01

    Iterative thresholding methods have been extensively studied as faster alternatives to convex optimization methods for solving large-sized problems in compressed sensing. A novel iterative thresholding method called LCAMP (Location Constrained Approximate Message Passing) is presented for reducing computational complexity and improving reconstruction accuracy when a nonzero location (or sparse support) constraint can be obtained from view-shared images. LCAMP modifies the existing approximate message passing algorithm by replacing the thresholding stage with a location constraint, which avoids adjusting regularization parameters or thresholding levels. The method is first compared with other conventional reconstruction methods using random 1D signals and then applied to dynamic contrast-enhanced breast MRI to demonstrate its excellent reconstruction accuracy (less than 2% absolute difference) and low computation time (5-10 seconds using Matlab) with highly undersampled 3D data (244 × 128 × 48; overall reduction factor = 10). PMID:23042658
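
    A rough NumPy sketch of the core idea as described: an AMP-style iteration whose thresholding stage is replaced by projection onto a known support. The step size and the simplified Onsager-like term are our assumptions for a stable toy demo, not the authors' exact formulation.

    ```python
    import numpy as np

    def lcamp_like(y, A, support, n_iter=50, mu=0.7):
        """AMP-style iteration with a location (support) constraint replacing
        thresholding -- a simplified sketch, not the published algorithm."""
        m, n = A.shape
        x = np.zeros(n)
        z = y.copy()
        for _ in range(n_iter):
            r = x + mu * (A.T @ z)                    # pseudo-data estimate
            x = np.where(support, r, 0.0)             # keep known nonzero locations
            z = y - A @ x + z * (support.sum() / m)   # residual, Onsager-like term
        return x

    # Toy demo with a random Gaussian matrix and a known 15-element support
    rng = np.random.default_rng(0)
    m, n = 100, 400
    support = np.zeros(n, dtype=bool)
    support[rng.choice(n, 15, replace=False)] = True
    x_true = np.where(support, rng.normal(size=n), 0.0)
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_hat = lcamp_like(A @ x_true, A, support)
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```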

  7. I-SonReb: an improved NDT method to evaluate the in situ strength of carbonated concrete

    NASA Astrophysics Data System (ADS)

    Breccolotti, Marco; Bonfigli, Massimo F.

    2015-10-01

    Concrete strength evaluated in situ by means of the conventional SonReb method can be highly overestimated in the presence of carbonation, which physically and chemically alters the outer layer of concrete. As most existing concrete structures are subjected to carbonation, overcoming this problem is of high importance. In this paper, an improved SonReb method (I-SonReb) for carbonated concretes is proposed. It relies on the definition of a correction coefficient for the measured rebound index as a function of the carbonated concrete cover thickness, an additional parameter to be measured during in situ testing campaigns. The usefulness of the method has been validated by showing the improvement in the accuracy of concrete strength estimation on two sets of NDT experimental data collected from investigations on real structures.
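
    To make the structure of the method concrete, here is a hedged sketch: a classical SonReb-type power law for strength combined with a rebound-index correction that shrinks with carbonated cover thickness. The functional form of the correction and every coefficient below are hypothetical placeholders, not the paper's calibrated values.

    ```python
    def strength_sonreb(R, V, a=7.7e-11, b=1.4, c=2.6):
        """Classical SonReb-type power law fc = a * R^b * V^c.
        R: rebound index, V: ultrasonic pulse velocity (m/s); fc in MPa.
        Coefficients are illustrative, not calibrated values."""
        return a * R**b * V**c

    def strength_i_sonreb(R, V, carb_depth_mm, alpha=0.02):
        """I-SonReb-style estimate: correct the measured rebound index by a
        factor k(d) decreasing with carbonated cover thickness d (assumed
        linear form; the paper defines its own calibrated coefficient)."""
        k = 1.0 / (1.0 + alpha * carb_depth_mm)
        return strength_sonreb(k * R, V)

    print(strength_i_sonreb(R=42.0, V=4200.0, carb_depth_mm=15.0))
    ```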

  8. Fully Printable Organic and Perovskite Solar Cells with Transfer-Printed Flexible Electrodes.

    PubMed

    Li, Xianqiang; Tang, Xiaohong; Ye, Tao; Wu, Dan; Wang, Hong; Wang, Xizu

    2017-06-07

    High-performance perovskite solar cells (PSCs) and organic solar cells (OSCs) were fabricated with transfer-printed top metal electrodes. We have demonstrated that PSCs and OSCs with top Au electrodes fabricated by the transfer printing method have comparable or better performance than devices with top Au electrodes fabricated by the conventional thermal evaporation method. The highest PCEs of the PSCs and OSCs with transfer-printed top electrodes reached 13.72% and 2.35%, respectively. We found that the transfer printing method leaves fewer defects between the organic thin films and the Au electrodes, which improved device stability. After storing the PSCs and OSCs with transfer-printed electrodes in a nitrogen environment for 97 and 103 days, respectively, without encapsulation, they still retained 71% and 91% of their original PCEs.

  9. Intelligent Automatic Classification of True and Counterfeit Notes Based on Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Matsunaga, Shohei; Omatu, Sigeru; Kosaka, Toshohisa

    The purpose of this paper is to classify bank notes as “true” or “counterfeit” faster and more precisely than a conventional method. We note that thin lines appear as straight lines in images of true notes, while in counterfeit notes they appear as dotted lines, owing to the properties of dot printers or scanner levels. To exploit these properties, we propose two methods that classify a note as true or counterfeit by checking whether thin lines or dotted lines exist on the note. First, we use the Fourier transform of the note image to extract features for classification and classify a note as true or counterfeit using those features. We then propose a classification method using the wavelet transform in place of the Fourier transform. Finally, some classification results are presented to show the effectiveness of the proposed methods.
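
    A minimal sketch of the Fourier-feature idea: dotted lines concentrate spectral energy at higher spatial frequencies than straight lines, so a high-frequency energy ratio separates the two classes. The crop radius and decision threshold are hypothetical and would be tuned or learned from note samples.

    ```python
    import numpy as np

    def high_freq_ratio(image):
        # fraction of 2D spectral energy outside a low-frequency box
        spec = np.abs(np.fft.fftshift(np.fft.fft2(image)))
        cy, cx = np.array(spec.shape) // 2
        r = min(cy, cx) // 4
        low = spec[cy - r:cy + r, cx - r:cx + r].sum()
        return (spec.sum() - low) / spec.sum()

    def classify_note(image, threshold=0.35):
        # threshold is an illustrative assumption, not a trained value
        return "counterfeit" if high_freq_ratio(image) > threshold else "true"
    ```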

  10. Aerodynamics and performance verifications of test methods for laboratory fume cupboards.

    PubMed

    Tseng, Li-Ching; Huang, Rong Fung; Chen, Chih-Chieh; Chang, Cheng-Ping

    2007-03-01

    The laser-light-sheet-assisted smoke flow visualization technique is performed on a full-size, transparent, commercial-grade chemical fume cupboard to diagnose the flow characteristics and to verify the validity of several current containment test methods. The visualized flow patterns identify the recirculation areas that inevitably exist in conventional fume cupboards because of their fundamental configurations and structures. Large-scale vortex structures exist around the side walls, the doorsill of the cupboard and in the vicinity of the near-wake region of the manikin. The identified recirculation areas are taken as the 'dangerous' regions where the risk of turbulent dispersion of contaminants may be high. Several existing tracer gas containment test methods (BS 7258:1994, prEN 14175-3:2003 and ANSI/ASHRAE 110:1995) are conducted to verify their effectiveness in detecting contaminant leakage. By comparing the results of the flow visualization and the tracer gas tests, it is found that the local recirculation regions are more prone to contaminant leakage because of the complex interaction between the shear layers and the smoke movement through the mechanism of turbulent dispersion. From the point of view of aerodynamics, the present study verifies that the methodology of the prEN 14175-3:2003 protocol can produce more reliable and consistent results because it is based on region-by-region measurement and covers most of the recirculation zone of the cupboard. A modified test method combining the region-by-region approach with the presence of the manikin shows substantially different containment results. A better performance test method, one which can describe an operator's exposure and the correlation between flow characteristics and contaminant leakage, is therefore suggested.

  11. Porphyrin-induced photogeneration of hydrogen peroxide determined using the luminol chemiluminescence method in aqueous solution: A structure-activity relationship study related to the aggregation of porphyrin.

    PubMed

    Komagoe, Keiko; Katsu, Takashi

    2006-02-01

    A luminol chemiluminescence method was used to evaluate the porphyrin-induced photogeneration of hydrogen peroxide (H2O2). This method enabled us to detect H2O2 in the presence of a high concentration of porphyrin, which was not possible using conventional colorimetry. The limit of detection was about 1 microM. We compared the ability to generate H2O2 using uroporphyrin (UP), hexacarboxylporphyrin (HCP), coproporphyrin (CP), hematoporphyrin (HP), mesoporphyrin (MP), and protoporphyrin (PP). The amount of H2O2 photoproduced was strongly related to the state of the porphyrin in the aqueous solution. UP and HCP, which existed predominantly in a monomeric form, had a good ability to produce H2O2. HP and MP, existing as dimers, showed weak activity. CP, forming a mixture of monomer and dimer, had a moderate ability to produce H2O2. PP, which was highly aggregated, also had a good ability. These results demonstrate that the efficiency of porphyrins in producing H2O2 was strongly dependent on their aggregation form, and that the dimer suppressed the production of H2O2.

  12. 3D bubble reconstruction using multiple cameras and space carving method

    NASA Astrophysics Data System (ADS)

    Fu, Yucheng; Liu, Yang

    2018-07-01

    An accurate measurement of bubble shape and size is of significant value in understanding the behavior of bubbles that exist in many engineering applications. Past studies usually use one or two cameras to estimate bubble volume and surface area, among other parameters; the 3D bubble shape and rotation angle are generally not available in these studies. To overcome this challenge and obtain more detailed information on individual bubbles, a 3D imaging system consisting of four high-speed cameras is developed in this paper, and the space carving method is used to reconstruct the 3D bubble shape from the recorded high-speed images taken from different view angles. The proposed method can reconstruct the bubble surface with minimal assumptions. A benchmarking test is performed in a 3 cm × 1 cm rectangular channel with stagnant water. The results show that the newly proposed method can measure bubble volume with an error of less than 2% compared with the syringe reading, whereas the conventional two-camera system has an error of around 10% and the one-camera system an error greater than 25%. The visualization of a rising 3D bubble demonstrates the wall's influence on bubble rotation angle and aspect ratio, which also explains the large error in single-camera measurements.
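
    A minimal space-carving sketch of the reconstruction step: a voxel survives only if its projection falls inside the bubble silhouette in every camera view. Camera models are reduced to caller-supplied projection functions, an assumption for illustration rather than the paper's calibrated setup.

    ```python
    import numpy as np

    def space_carve(voxels, silhouettes, projectors):
        """voxels: (N, 3) candidate points; silhouettes: list of boolean
        images; projectors: list of functions mapping (N, 3) -> (N, 2) pixel
        coordinates, one per camera (hypothetical calibration)."""
        keep = np.ones(len(voxels), dtype=bool)
        for sil, project in zip(silhouettes, projectors):
            uv = project(voxels)
            u = uv[:, 0].astype(int)
            v = uv[:, 1].astype(int)
            inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
            keep &= inside                              # outside any view: carved
            keep[inside] &= sil[v[inside], u[inside]]   # must lie in silhouette
        return voxels[keep]
    ```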

  13. Drug Delivery Systems, CNS Protection, and the Blood Brain Barrier

    PubMed Central

    Upadhyay, Ravi Kant

    2014-01-01

    The present review highlights various drug delivery systems used for delivery of pharmaceutical agents, mainly antibiotics, antineoplastic agents, neuropeptides, and other therapeutic substances, through the endothelial capillaries (BBB) for CNS therapeutics. It also explains the use of ultrasound to deliver therapeutic agents/biomolecules such as proline-rich peptides, prodrugs, radiopharmaceuticals, proteins, immunoglobulins, and chimeric peptides to target sites in deep tissue locations inside brain tumors. Therapeutic applications of various types of nanoparticles, such as chitosan-based nanomers, dendrimers, carbon nanotubes, niosomes, beta-cyclodextrin carriers, cholesterol-mediated cationic solid lipid nanoparticles, colloidal drug carriers, liposomes, and micelles, are discussed along with their recent advancements. Emphasis is given to the need for physiological and therapeutic optimization of existing drug delivery methods and their carriers to deliver a therapeutic amount of drug into the brain for treatment of various neurological diseases and disorders. Further, strong recommendations are made to develop nanosized drug carriers/vehicles and noninvasive therapeutic alternatives to conventional methods for better therapeutics of CNS-related diseases. Hence, there is an urgent need to design nontoxic biocompatible drugs and develop noninvasive delivery methods to check post-treatment clinical fatalities in neuropatients, which occur due to existing highly toxic invasive drugs and treatment methods. PMID:25136634

  14. A modified artificial immune system based pattern recognition approach -- an application to clinic diagnostics

    PubMed Central

    Zhao, Weixiang; Davis, Cristina E.

    2011-01-01

    Objective This paper introduces a modified artificial immune system (AIS)-based pattern recognition method to enhance the recognition ability of the existing conventional AIS-based classification approach, and demonstrates the superiority of the proposed new AIS-based method via two case studies of breast cancer diagnosis. Methods and materials Conventionally, the AIS approach is coupled with the k-nearest-neighbor (k-NN) algorithm to form a classification method called AIS-kNN. In this paper we discuss the basic principle and possible problems of this conventional approach, and propose a new approach in which AIS is integrated with radial basis function - partial least squares regression (AIS-RBFPLS). Both AIS-based approaches are also compared with two classical and powerful machine learning methods, the back-propagation neural network (BPNN) and the orthogonal radial basis function network (Ortho-RBF network). Results The diagnosis results show that: (1) both AIS-kNN and AIS-RBFPLS proved to be good machine learning methods for clinical diagnosis, but the proposed AIS-RBFPLS generated an even lower misclassification ratio, especially in cases where the conventional AIS-kNN approach generated poor classification results because of possibly improper AIS parameters. For example, based upon the AIS memory cells of “replacement threshold = 0.3”, the average misclassification ratios of the two approaches for study 1 were 3.36% (AIS-RBFPLS) and 9.07% (AIS-kNN), and the misclassification ratios for study 2 were 19.18% (AIS-RBFPLS) and 28.36% (AIS-kNN); (2) the proposed AIS-RBFPLS showed robustness with respect to the AIS-created memory cells, with a smaller standard deviation of the results across multiple trials than AIS-kNN. For example, using the result from the first set of AIS memory cells, the standard deviations of the misclassification ratios for study 1 were 0.45% (AIS-RBFPLS) and 8.71% (AIS-kNN), and those for study 2 were 0.49% (AIS-RBFPLS) and 6.61% (AIS-kNN); and (3) the proposed AIS-RBFPLS classification approach also yielded better diagnosis results than the two classical neural network approaches, BPNN and the Ortho-RBF network. Conclusion In summary, this paper proposes a new machine learning method for complex systems by integrating the AIS system with RBFPLS. The new method demonstrates a satisfactory effect on classification accuracy for clinical diagnosis and indicates wide potential applications to other diagnosis and detection problems. PMID:21515033
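
    For orientation, a highly simplified sketch of the conventional AIS-kNN stage follows: memory cells are antibody vectors nudged toward matching training samples, and classification is k-NN over the evolved cells. The update rule, learning rate, and the reuse of the replacement threshold as a distance criterion are loose assumptions for illustration, not the paper's algorithm.

    ```python
    import numpy as np

    def train_memory_cells(X, y, lr=0.3, replacement_threshold=0.3):
        cells, labels = [X[0].copy()], [y[0]]
        for xi, yi in zip(X[1:], y[1:]):
            d = np.array([np.linalg.norm(xi - c) for c in cells])
            j = int(d.argmin())
            if labels[j] == yi and d[j] < replacement_threshold:
                cells[j] += lr * (xi - cells[j])   # stimulate best-matching cell
            else:
                cells.append(xi.copy())            # recruit a new memory cell
                labels.append(yi)
        return np.array(cells), np.array(labels)

    def classify(x, cells, labels, k=3):
        idx = np.argsort(np.linalg.norm(cells - x, axis=1))[:k]
        vals, counts = np.unique(labels[idx], return_counts=True)
        return vals[counts.argmax()]
    ```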

  15. Unsentimental Ethics: Towards a Content-Specific Account of the Moral-Conventional Distinction

    ERIC Educational Resources Information Center

    Royzman, Edward B.; Leeman, Robert F.; Baron, Jonathan

    2009-01-01

    In this paper, we offer an overview and a critique of the existing theories of the moral-conventional distinction, with emphasis on Nichols's [Nichols, S. (2002). Norms with feeling: Towards a psychological account of moral judgment. "Cognition, 84", 221-236] neo-sentimentalist approach. After discussing some distinctive features of Nichols's…

  16. Non-equilibrium Green's functions method: Non-trivial and disordered leads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Yu, E-mail: heyuyhe@gmail.com; Wang, Yu; Klimeck, Gerhard

    2014-11-24

    The non-equilibrium Green's function algorithm requires contact self-energies to model charge injection and extraction. All existing approaches assume infinitely periodic leads attached to a possibly quite complex device. This contradicts today's realistic devices, in which contacts are spatially inhomogeneous, chemically disordered, and impact the overall device characteristics. This work extends the complex absorbing potentials method to arbitrary, ideal, or non-ideal leads in an atomistic tight-binding representation. The algorithm is demonstrated on a Si nanowire with periodic leads, a graphene nanoribbon with trumpet-shape leads, and devices with leads of randomly alloyed Si0.5Ge0.5. It is found that alloy randomness in the leads can reduce the predicted ON-state current of Si0.5Ge0.5 transistors by 45% compared to conventional lead methods.
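
    A toy 1D tight-binding sketch of the ingredient being extended: leads represented by complex absorbing potentials (CAPs) on the edge sites instead of analytic surface self-energies, with transmission computed from the standard trace formula. Chain length, CAP profile, and strength are illustrative assumptions.

    ```python
    import numpy as np

    def transmission(E, n=60, t=1.0, cap_sites=10, w=0.5):
        H = np.diag(-t * np.ones(n - 1), 1)
        H = H + H.T                                   # 1D tight-binding chain
        ramp = np.linspace(0.0, 1.0, cap_sites) ** 2  # assumed CAP profile
        cap = np.zeros(n)
        cap[:cap_sites] = w * ramp[::-1]              # left absorbing region
        cap[-cap_sites:] = w * ramp                   # right absorbing region
        G = np.linalg.inv((E + 1e-9j) * np.eye(n) - H + 1j * np.diag(cap))
        gamma_l = 2.0 * np.diag(np.where(np.arange(n) < cap_sites, cap, 0.0))
        gamma_r = 2.0 * np.diag(np.where(np.arange(n) >= n - cap_sites, cap, 0.0))
        # Landauer transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger]
        return np.real(np.trace(gamma_l @ G @ gamma_r @ G.conj().T))

    print(transmission(0.0))  # approaches 1 in-band for a well-tuned CAP
    ```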

  17. A new ball game: the United Nations Convention on the Rights of Persons with Disabilities and assumptions in care for people with dementia.

    PubMed

    Smith, Anita; Sullivan, Danny

    2012-09-01

    The United Nations Convention on the Rights of Persons with Disabilities is a powerful international instrument which imposes significant responsibilities on signatories. This column discusses changes in the definition of legal capacity which will have significant impacts on decision-making related to people with dementia. Various restrictions and limitations on personal freedoms are discussed in light of the Convention. The main focus is on challenges to existing paradigms of substitute decision-making, which are in wide use through a guardianship model. Under Art 12 of the Convention, moves to supported decision-making will result in significant changes in ensuring the rights of people with dementia. There are challenges ahead in implementing supported decision-making schemes, not only due to tension with existing practices and legislation, but also the difficulty of developing and resourcing workable schemes. This is particularly so with advanced dementia, which is acknowledged as a pressing issue for Australia due to effective health care, an ageing population and changing expectations.

  18. Objective structured clinical examination "Death Certificate" station - Computer-based versus conventional exam format.

    PubMed

    Biolik, A; Heide, S; Lessig, R; Hachmann, V; Stoevesandt, D; Kellner, J; Jäschke, C; Watzke, S

    2018-04-01

    One option for improving the quality of medical post mortem examinations is intensified training of medical students, especially in countries where such a requirement exists regardless of the area of specialisation. For this reason, new teaching and learning methods on this topic have recently been introduced, including e-learning modules and SkillsLab stations; one way to objectify the resultant learning outcomes is by means of the OSCE process. However, despite offering several advantages, this examination format also requires considerable resources, in particular with regard to medical examiners. For this reason, many clinical disciplines have already implemented computer-based OSCE examination formats. This study investigates whether the conventional exam format for the OSCE forensic "Death Certificate" station could be replaced with a computer-based approach in future. For this study, 123 students completed the OSCE "Death Certificate" station in both a computer-based and a conventional format, half starting with the computer-based version and the other half with the conventional version in their OSCE rotation. Assignment of examination cases was random. The examination results for the two stations were compared, and both the overall results and the individual items of the exam checklist were analysed by means of inferential statistics. Following statistical analysis of examination cases of varying difficulty levels and correction for the repeated-measures effect, the results of the two examination formats appear to be comparable. Thus, in the descriptive item analysis, while there were some significant differences between the computer-based and conventional OSCE stations, these differences were not reflected in the overall results after a correction factor was applied (e.g. point deductions for assistance from the medical examiner were possible only at the conventional station). We therefore demonstrate that the computer-based OSCE "Death Certificate" station is a cost-efficient and standardised examination format that yields results comparable to those of a conventional format exam. Moreover, the examination results also indicate the need to optimise both the test itself (adjusting the degree of difficulty of the case vignettes) and the corresponding instructional and learning methods (including, for example, the use of computer programmes to complete the death certificate in small-group formats in the SkillsLab). Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  19. Helping crops stand up to salt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raeburn, P.

    1985-05-01

    A new approach to the problem of increasing soil salinity is to raise salt-tolerant plants. The search for such plants involves finding new applications for naturally occurring salt-resistant plants (halophytes), using conventional breeding techniques to identify and strengthen crop varieties known to have better-than-average salt tolerance, and applying recombinant DNA methods to introduce salt resistance into existing plants. One promising plant is salicornia, which produces oil high in polyunsaturates at a greater yield than soybeans. Two varieties of atriplex yield as much animal feed as alfalfa and can be harvested several times a year. Seed companies are supporting the research.

  1. Radar fall detection using principal component analysis

    NASA Astrophysics Data System (ADS)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigenimages of observed motions are employed for classification. Using real data, we demonstrate that the PCA-based technique provides performance improvement over conventional feature extraction methods.
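
    A compact sketch of the eigenimage classifier: vectorize the time-frequency images, fit a principal subspace per class, and label a new observation by smallest reconstruction error. Class names and dimensions are illustrative; the paper's exact classifier may differ.

    ```python
    import numpy as np

    def fit_subspace(X, k=5):
        # X: (num_samples, num_pixels) vectorized time-frequency images
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        return mu, Vt[:k]                    # class mean and top-k eigenimages

    def recon_error(x, mu, Vt):
        coeff = (x - mu) @ Vt.T
        return np.linalg.norm(x - mu - coeff @ Vt)

    def classify_motion(x, models):
        # models: {"fall": (mu, Vt), "non-fall": (mu, Vt)}
        return min(models, key=lambda name: recon_error(x, *models[name]))
    ```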

  2. Protection from Space Radiation

    NASA Technical Reports Server (NTRS)

    Tripathi, R. K.; Wilson, J. W.; Shinn, J. L.; Singleterry, R. C.; Clowdsley, M. S.; Cucinotta, F. A.; Badhwar, G. D.; Kim, M. Y.; Badavi, F. F.; Heinbockel, J. H.

    2000-01-01

    The exposures anticipated for our astronauts in the Human Exploration and Development of Space (HEDS) program will be significantly higher (both annual and career) than for any other occupational group. In addition, the exposures in deep space result largely from the Galactic Cosmic Rays (GCR), with which there is as yet little experience. Some evidence exists indicating that conventional linear energy transfer (LET) defined protection quantities (quality factors) may not be appropriate [1,2]. The purpose of this presentation is to evaluate our current understanding of radiation protection using laboratory and flight experimental data and to discuss recent improvements in interaction models and transport methods.

  3. Critical ICT-Inhibiting Factors on IBS Production Management Processes in the Malaysia Construction Industry

    NASA Astrophysics Data System (ADS)

    Ern, Peniel Ang Soon; Kasim, Narimah; Hamid, Zuhairi Abd; Chen, Goh Kai

    2017-10-01

    Industrialized Building System (IBS) is an approach introduced as an alternative to conventional building methods; it has become a strategy for enhancing sustainable construction while bringing substantial green-construction benefits to the existing industry. The IBS approach is actively promoted through several strategies and incentives. Extensive uptake of modern Information Communication Technology (ICT) applications can support the different IBS processes for effective production. However, it is argued that ICT uptake at the organisational level is still in its infancy. This raises the importance of identifying the critical inhibitors to effective ICT uptake in the IBS production management process. Critical inhibitors to ICT uptake were identified through a questionnaire survey of IBS industry stakeholders. Mean indices and critical t-values were generated using the Statistical Package for the Social Sciences (SPSS). The top ten ranked inhibitors reflect the Cost, People and Process elements of ICT uptake; high costs of acquiring the technologies and resistance to change were among the main concerns in the findings.

  4. The Application of Sheet Technology in Cartilage Tissue Engineering.

    PubMed

    Ge, Yang; Gong, Yi Yi; Xu, Zhiwei; Lu, Yanan; Fu, Wei

    2016-04-01

    Cartilage tissue engineering has started to act as a promising, even essential, alternative method for cartilage repair and regeneration, considering that the avascular structure of adult cartilage gives the tissue very limited self-renewal capacity and that conventional surgical treatment methods face a bottleneck. Recent progress in tissue engineering has enabled more feasible strategies to treat cartilage disorders. Of these strategies, cell sheet technology has shown great clinical potential in regenerative areas such as the cornea and esophagus and is increasingly considered a potential way to reconstruct cartilage tissues, since it uses no scaffolds and does not destroy the matrix secreted by the cultured cells. Acellular matrix sheet technologies used in cartilage tissue engineering, in a sandwich model, can ingeniously overcome the drawbacks of a conventional acellular block, where cells are often blocked from migrating because of the non-nanoporous structure. Electrospun sheets with nanostructures that mimic the natural cartilage matrix offer a level of control and manipulation that makes them appealing and widely used in cartilage tissue engineering. In this review, we focus on the utilization of these novel and promising sheet technologies to construct cartilage tissues with practical and beneficial functions.

  5. A method of setting limits for the purpose of quality assurance

    NASA Astrophysics Data System (ADS)

    Sanghangthum, Taweap; Suriyapee, Sivalee; Kim, Gwe-Ya; Pawlicki, Todd

    2013-10-01

    The result of any assurance measurement needs to be checked against some limits for acceptability. There are two types of limits: those that define clinical acceptability (action limits) and those meant to serve as a warning that the measurement is close to the action limits (tolerance limits). Currently, there is no standard procedure for setting these limits. In this work, we propose an operational procedure to set tolerance limits and action limits. The approach is based on techniques of quality engineering, using control charts and a process capability index. The method differs for tolerance limits and action limits, with action limits categorized as specified or unspecified. The procedure is first to ensure process control using I-MR control charts. The tolerance limits are then set equal to the control chart limits on the I chart. Action limits are determined using the Cpm process capability index, with the requirement that the process be in control. The limits from the proposed procedure are compared to an existing, conventional method. Four examples are investigated: two of volumetric modulated arc therapy (VMAT) point dose quality assurance (QA) and two of routine linear accelerator output QA. The tolerance limits range from about 6% larger to 9% smaller than conventional action limits for the VMAT QA cases. For the linac output QA, tolerance limits are about 60% smaller than conventional action limits. The operational procedure described in this work is based on established quality management tools and provides a systematic guide for setting tolerance and action limits for different equipment and processes.
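
    The two quantitative ingredients are standard and easy to state: I-chart control limits use the mean moving range with the usual 2.66 constant (3/d2, where d2 = 1.128 for moving ranges of size 2), and Cpm penalizes deviation from a target. The sketch below assumes illustrative specification limits; it is a generic rendering of these tools, not the authors' code.

    ```python
    import numpy as np

    def tolerance_limits(x):
        """I-chart limits from an in-control run of individual measurements."""
        mr_bar = np.abs(np.diff(x)).mean()      # mean moving range
        center = x.mean()
        return center - 2.66 * mr_bar, center + 2.66 * mr_bar

    def cpm(x, target, lsl, usl):
        """Taguchi capability index penalizing off-target operation."""
        tau = np.sqrt(x.var(ddof=1) + (x.mean() - target) ** 2)
        return (usl - lsl) / (6.0 * tau)

    qa = np.array([0.2, -0.5, 0.1, 0.4, -0.3, 0.0, 0.6, -0.2])  # % dose diffs
    print(tolerance_limits(qa), cpm(qa, target=0.0, lsl=-3.0, usl=3.0))
    ```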

  6. Toward Consistent Methodology to Quantify Populations in Proximity to Oil and Gas Development: A National Spatial Analysis and Review

    PubMed Central

    Czolowski, Eliza D.; Santoro, Renee L.; Srebotnjak, Tanja

    2017-01-01

    Background: Higher risk of exposure to environmental health hazards near oil and gas wells has spurred interest in quantifying populations that live in proximity to oil and gas development. The available studies on this topic lack consistent methodology and ignore aspects of oil and gas development of value to public health–relevant assessment and decision-making. Objectives: We aim to present a methodological framework for oil and gas development proximity studies grounded in an understanding of hydrocarbon geology and development techniques. Methods: We geospatially overlay locations of active oil and gas wells in the conterminous United States and Census data to estimate the population living in proximity to hydrocarbon development at the national and state levels. We compare our methods and findings with existing proximity studies. Results: Nationally, we estimate that 17.6 million people live within 1,600m (∼1 mi) of at least one active oil and/or gas well. Three of the eight studies overestimate populations at risk from actively producing oil and gas wells by including wells without evidence of production or drilling completion and/or using inappropriate population allocation methods. The remaining five studies, by omitting conventional wells in regions dominated by historical conventional development, significantly underestimate populations at risk. Conclusions: The well inventory guidelines we present provide an improved methodology for hydrocarbon proximity studies by acknowledging the importance of both conventional and unconventional well counts as well as the relative exposure risks associated with different primary production categories (e.g., oil, wet gas, dry gas) and developmental stages of wells. https://doi.org/10.1289/EHP1535 PMID:28858829
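
    The core geospatial overlay reduces to a proximity count. The sketch below counts people in census-block centroids lying within 1,600 m of at least one active well, using a haversine distance; the centroid allocation and all data are hypothetical simplifications of the paper's GIS workflow.

    ```python
    import numpy as np

    def haversine_m(lat1, lon1, lat2, lon2, R=6371000.0):
        p1, p2 = np.radians(lat1), np.radians(lat2)
        dphi = p2 - p1
        dlmb = np.radians(lon2 - lon1)
        a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
        return 2 * R * np.arcsin(np.sqrt(a))

    def population_near_wells(blocks, wells, radius_m=1600.0):
        # blocks: iterable of (lat, lon, population); wells: (k, 2) lat/lon array
        total = 0.0
        for lat, lon, pop in blocks:
            d = haversine_m(lat, lon, wells[:, 0], wells[:, 1])
            if (d <= radius_m).any():
                total += pop
        return total
    ```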

  7. Randomized clinical trial of ligasure™ versus conventional splenectomy for injured spleen in blunt abdominal trauma.

    PubMed

    Amirkazem, Vejdan Seyyed; Malihe, Khosravi

    2017-02-01

    Spleen is the most common organ damaged in blunt abdominal trauma, and splenectomy and splenorrhaphy are the main procedures used in the surgical treatment of such cases. In routine open splenectomy, after laparotomy, suture ligation of the splenic vasculature is the most widely used procedure to stop the bleeding. This clinical trial evaluates the role and benefits of the Ligasure™ system in traumatic splenectomy without any suture material and compares the results with the conventional method of splenectomy. After the decision for splenectomy secondary to blunt abdominal trauma was made, patients in the control group (n = 39) underwent splenectomy using the conventional method with silk suture ligation of the splenic vasculature. In the interventional group (n = 41), a Ligasure™ vascular sealing system was used to ligate the splenic vein and artery. Operation time, volume of intra-operative bleeding and post-operative complications were compared between the two groups. The mean operation times in the control and interventional groups were 21 and 12 min, respectively (p < 0.05). The average volume of bleeding during open splenectomy was 280 ml in the control group but decreased significantly to 80 ml (p < 0.05) in the interventional group using the Ligasure system. Post-operative complications such as bleeding were non-existent in both groups. The application of Ligasure™ for splenectomy in blunt abdominal trauma can decrease not only the operation time but also the volume of intra-operative bleeding, without any additional increase in post-operative complications. This method is recommendable, as opposed to traditional silk sutures, in traumatic splenic injuries that require splenectomy to control bleeding. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  8. Electronic Cigarette Use by College Students

    PubMed Central

    Sutfin, Erin L.; McCoy, Thomas P.; Morrell, Holly E. R.; Hoeppner, Bettina B.; Wolfson, Mark

    2013-01-01

    Background Electronic cigarettes, or e-cigarettes, are battery-operated devices that deliver nicotine via inhaled vapor. There is considerable controversy about the disease risk and toxicity of e-cigarettes, and empirical evidence on short- and long-term health effects is minimal. Limited data on e-cigarette use and its correlates exist, and to our knowledge, no prevalence rates among U.S. college students have been reported. This study aimed to estimate the prevalence of e-cigarette use and identify correlates of use among a large, multi-institution, random sample of college students. Methods 4,444 students from 8 colleges in North Carolina completed a Web-based survey in fall 2009. Results Ever use of e-cigarettes was reported by 4.9% of students, with 1.5% reporting past-month use. Correlates of ever use included male gender, Hispanic or “Other race” (compared to non-Hispanic Whites), Greek affiliation, conventional cigarette smoking and e-cigarette harm perceptions. Although e-cigarette use was more common among conventional cigarette smokers, 12% of ever e-cigarette users had never smoked a conventional cigarette. Among current cigarette smokers, e-cigarette use was negatively associated with lack of knowledge about e-cigarette harm, but was not associated with intentions to quit. Conclusions Although e-cigarette use was more common among conventional cigarette smokers, it was not exclusive to them. E-cigarette use was not associated with intentions to quit smoking among a sub-sample of conventional cigarette smokers. Unlike older, more established cigarette smokers, college students do not appear to use e-cigarettes out of a desire to quit cigarette smoking. PMID:23746429

  9. Specificity of sexual arousal for sexual activities in men and women with conventional and masochistic sexual interests.

    PubMed

    Chivers, Meredith L; Roy, Carolyn; Grimbos, Teresa; Cantor, James M; Seto, Michael C

    2014-07-01

    Prior studies consistently report that men's genital responses correspond to their sexual activity interests (consenting vs. coercive sex) whereas women's responses do not. For women, however, these results may be confounded by the sexual activities studied and lack of suitable controls. We examined the subjective and genital arousal responses of men and women with conventional (22 men and 15 women) or masochistic sexual interests (16 men and 17 women) to narratives describing conventional sex or masochistic sex. The aims of the studies were twofold: (1) to examine whether gender differences in the specificity of sexual arousal previously observed for gender also exist for sexual activity interests; and (2) to examine whether men and women with masochistic sexual interests demonstrate specificity of sexual response for their preferred sexual activities. Surprisingly, the pattern of results was very similar for men and women. Both men and women with conventional sexual interests (WCI) reported more sexual arousal, and responded more genitally, to conventional than to masochistic sex, demonstrating specificity of sexual arousal for their preferred sexual activities. Despite showing specificity for conventional sexual activities, the genital responses of WCI were still gender nonspecific. In contrast, women and men with masochistic sexual interests demonstrated nonspecific subjective and genital responses to conventional and masochistic sex. Indices of genital and subjective sexual arousal to masochistic versus conventional stimuli were positively and significantly correlated with self-reported thoughts, fantasies, interests, and behaviors involving masochism. The results suggest that gender similarities in the specificity of sexual arousal for sexual activity exist despite consistent gender differences in the specificity of sexual arousal for gender.

  10. High-speed machining of Space Shuttle External Tank (ET) panels

    NASA Technical Reports Server (NTRS)

    Miller, J. A.

    1983-01-01

    Potential production rates and project cost savings achieved by converting the conventional machining process in manufacturing shuttle external tank panels to high speed machining (HSM) techniques were studied. Savings were projected from the comparison of current production rates with HSM rates and with rates attainable on new conventional machines. The HSM estimates were also based on rates attainable by retrofitting existing conventional equipment with high speed spindle motors and rates attainable using new state of the art machines designed and built for HSM.

  11. Toward a Method for Exposing and Elucidating Ethical Issues with Human Cognitive Enhancement Technologies.

    PubMed

    Hofmann, Bjørn

    2017-04-01

    To develop a method for exposing and elucidating ethical issues with human cognitive enhancement (HCE). The intended use of the method is to support and facilitate open and transparent deliberation and decision making with respect to this emerging technology with great potential formative implications for individuals and society. Literature search to identify relevant approaches. Conventional content analysis of the identified papers and methods in order to assess their suitability for assessing HCE according to four selection criteria. Method development. Amendment after pilot testing on smart-glasses. Based on three existing approaches in health technology assessment a method for exposing and elucidating ethical issues in the assessment of HCE technologies was developed. Based on a pilot test for smart-glasses, the method was amended. The method consists of six steps and a guiding list of 43 questions. A method for exposing and elucidating ethical issues in the assessment of HCE was developed. The method provides the ground work for context specific ethical assessment and analysis. Widespread use, amendments, and further developments of the method are encouraged.

  12. A Wireless Sensor Network-Based Portable Vehicle Detector Evaluation System

    PubMed Central

    Yoo, Seong-eun

    2013-01-01

    In an upcoming smart transportation environment, performance evaluations of existing Vehicle Detection Systems are crucial to maintain their accuracy. The existing evaluation method for Vehicle Detection Systems is based on a wired Vehicle Detection System reference and a video recorder, which must be operated and analyzed by capable traffic experts. However, this conventional evaluation system has many disadvantages. It is inconvenient to deploy, the evaluation takes a long time, and it lacks scalability and objectivity. To improve the evaluation procedure, this paper proposes a Portable Vehicle Detector Evaluation System based on wireless sensor networks. We describe both the architecture and design of a Vehicle Detector Evaluation System and the implementation results, focusing on the wireless sensor networks and methods for traffic information measurement. With the help of wireless sensor networks and automated analysis, our Vehicle Detector Evaluation System can evaluate a Vehicle Detection System conveniently and objectively. The extensive evaluations of our Vehicle Detector Evaluation System show that it can measure the traffic information such as volume counts and speed with over 98% accuracy. PMID:23344388
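
    The traffic-information measurement itself is simple once two sensor nodes a known distance apart timestamp each passing vehicle. The sketch below shows per-vehicle speed and volume aggregation; the spacing and record format are hypothetical, not the system's actual protocol.

    ```python
    SENSOR_SPACING_M = 4.0  # assumed distance between the two sensor nodes

    def vehicle_speed_kmh(t_upstream_s, t_downstream_s, spacing_m=SENSOR_SPACING_M):
        dt = t_downstream_s - t_upstream_s
        return 3.6 * spacing_m / dt if dt > 0 else float("nan")

    def summarize(detections):
        # detections: list of (t_upstream_s, t_downstream_s) pairs
        speeds = [vehicle_speed_kmh(a, b) for a, b in detections]
        return {"volume": len(speeds),
                "mean_speed_kmh": sum(speeds) / len(speeds) if speeds else 0.0}

    print(summarize([(0.00, 0.36), (5.10, 5.50), (9.80, 10.13)]))
    ```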

  13. Assessing and Testing Hydrokinetic Turbine Performance and Effects on Open Channel Hydrodynamics: An Irrigation Canal Case Study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gunawan, Budi; Neary, Vincent Sinclair; Mortensen, Josh

    Hydrokinetic energy from flowing water in open channels has the potential to support local electricity needs with lower regulatory or capital investment than impounding water by more conventional means. MOU agencies involved in federal hydropower development have identified the need to better understand the opportunities for hydrokinetic (HK) energy development within existing canal systems that may already have integrated hydropower plants. This document provides an overview of the main considerations, tools, and assessment methods for implementing field tests in an open-channel water system to characterize current energy converter (CEC) device performance and hydrodynamic effects. It helps developers describe the open-channel processes relevant to their HK site and perform the pertinent analyses to guide siting and CEC layout design, with the goal of streamlining the evaluation process and reducing the risk of interfering with existing uses of the site. The document outlines key site parameters of interest and effective tools and methods for measurement and analysis, with examples drawn from the Roza Main Canal in Yakima, WA to illustrate a site application.

  14. Instantaneous Conventions

    PubMed Central

    Misyak, Jennifer; Noguchi, Takao; Chater, Nick

    2016-01-01

    Humans can communicate even with few existing conventions in common (e.g., when they lack a shared language). We explored what makes this phenomenon possible with a nonlinguistic experimental task requiring participants to coordinate toward a common goal. We observed participants creating new communicative conventions using the most minimal possible signals. These conventions, furthermore, changed on a trial-by-trial basis in response to shared environmental and task constraints. Strikingly, as a result, signals of the same form successfully conveyed contradictory messages from trial to trial. Such behavior is evidence for the involvement of what we term joint inference, in which social interactants spontaneously infer the most sensible communicative convention in light of the common ground between them. Joint inference may help to elucidate how communicative conventions emerge instantaneously and how they are modified and reshaped into the elaborate systems of conventions involved in human communication, including natural languages. PMID:27793986

  15. ASSESSMENT OF CONTROL TECHNOLOGIES FOR REDUCING EMISSIONS OF SO2 AND NOX FROM EXISTING COAL-FIRED UTILITY BOILERS

    EPA Science Inventory

    The report reviews information and estimated costs for 15 emission-control technology categories applicable to existing coal-fired electric utility boilers. The categories include passive controls such as least-emission dispatching, conventional processes, and emerging technologies ...

  16. Using Business Simulations as Authentic Assessment Tools

    ERIC Educational Resources Information Center

    Neely, Pat; Tucker, Jan

    2012-01-01

    New modalities for assessing student learning exist as a result of advances in computer technology. Conventional measurement practices have been transformed into computer based testing. Although current testing replicates assessment processes used in college classrooms, a greater opportunity exists to use computer technology to create authentic…

  17. An Innovative Context-Based Crystal-Growth Activity Space Method for Environmental Exposure Assessment: A Study Using GIS and GPS Trajectory Data Collected in Chicago.

    PubMed

    Wang, Jue; Kwan, Mei-Po; Chai, Yanwei

    2018-04-09

    Scholars in the fields of health geography, urban planning, and transportation studies have long attempted to understand the relationships among human movement, environmental context, and accessibility. One fundamental question for this research area is how to measure individual activity space, which is an indicator of where and how people have contact with their social and physical environments. Conventionally, standard deviational ellipses, road network buffers, minimum convex polygons, and kernel density surfaces have been used to represent people's activity space, but they all have shortcomings. Inconsistent findings of the effects of environmental exposures on health behaviors/outcomes suggest that the reliability of existing studies may be affected by the uncertain geographic context problem (UGCoP). This paper proposes the context-based crystal-growth activity space as an innovative method for generating individual activity space based on both GPS trajectories and the environmental context. This method not only considers people's actual daily activity patterns based on GPS tracks but also takes into account the environmental context which either constrains or encourages people's daily activity. Using GPS trajectory data collected in Chicago, the results indicate that the proposed new method generates more reasonable activity space when compared to other existing methods. This can help mitigate the UGCoP in environmental health studies.
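
    A grid-based sketch of the crystal-growth idea: the activity space grows outward from cells containing GPS points but never expands into cells masked as inaccessible context (water, restricted land, and so on). Grid resolution, seeds, the mask, and the fixed growth budget are hypothetical inputs, not the paper's parameters.

    ```python
    import numpy as np
    from collections import deque

    def crystal_growth(seeds, accessible, max_rings=25):
        """seeds: list of (row, col) cells hit by GPS points;
        accessible: boolean grid of context cells that may be entered."""
        grown = np.zeros_like(accessible, dtype=bool)
        frontier = deque(seeds)
        for r, c in seeds:
            grown[r, c] = True
        for _ in range(max_rings):                 # grow one ring per pass
            nxt = deque()
            while frontier:
                r, c = frontier.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < grown.shape[0] and 0 <= cc < grown.shape[1]
                            and accessible[rr, cc] and not grown[rr, cc]):
                        grown[rr, cc] = True
                        nxt.append((rr, cc))
            frontier = nxt
        return grown
    ```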

  18. Analog self-powered harvester achieving switching pause control to increase harvested energy

    NASA Astrophysics Data System (ADS)

    Makihara, Kanjuro; Asahina, Kei

    2017-05-01

    In this paper, we propose a self-powered analog controller circuit to increase the efficiency of electrical energy harvesting from vibrational energy using piezoelectric materials. Although the existing synchronized switch harvesting on inductor (SSHI) method is designed to produce efficient harvesting, its switching operation generates a vibration-suppression effect that reduces the harvested electrical energy. To solve this problem, the authors proposed, in a previous paper, a switching method that takes this vibration-suppression effect into account. This method temporarily pauses the switching operation, allowing the recovery of the mechanical displacement and, therefore, of the piezoelectric voltage. In this paper, we propose a self-powered analog circuit to implement this switching control method. Self-powered vibration harvesting is achieved by attaching a newly designed circuit to an existing analog controller for SSHI. This circuit implements the new switching control strategy, in which switching is paused at some vibration peaks to allow motion recovery and a consequent increase in the harvested energy. Harvesting experiments with the proposed circuit reveal that the method can increase the energy stored in the storage capacitor by a factor of 8.5 relative to the conventional SSHI circuit. The proposed technique is useful for increasing the harvested energy, especially for piezoelectric systems with a large coupling factor.
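
    To see why pausing can pay off, a toy peak-by-peak energy model helps: each switching event extracts energy but suppresses the motion, while a skipped peak lets the voltage rebuild. All factors below are illustrative assumptions, not identified parameters of the circuit.

    ```python
    def harvested_energy(n_peaks, pause_every=3, recovery=1.4, suppression=0.7,
                         c_p=1.0):
        v, energy = 1.0, 0.0
        for k in range(n_peaks):
            if pause_every and (k + 1) % pause_every == 0:
                v *= recovery          # pause: displacement and voltage recover
                continue
            energy += 0.5 * c_p * v ** 2   # energy extracted at this switch
            v *= suppression               # vibration-suppression effect
        return energy

    # Pausing every 3rd peak vs. conventional SSHI (no pauses)
    print(harvested_energy(60, pause_every=3), harvested_energy(60, pause_every=0))
    ```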

  1. A comparison of the clinical effectiveness of spinal orthoses manufactured using the conventional manual method and CAD/CAM method in the management of AIS.

    PubMed

    Wong, M S; Cheng, C Y; Ng, B K W; Lam, T P; Chiu, S W

    2006-01-01

    Spinal orthoses are commonly prescribed to patients with moderate AIS for prevention of further deterioration. In the conventional manufacturing method, plaster bandages are used to capture the patient's body contour and the plaster cast is rectified manually. With the introduction of the CAD/CAM system, a series of automated processes from body scanning to digital rectification and milling of the positive model can be performed quickly and accurately. This project studies the impact of the CAD/CAM method compared with the conventional method. Among the 147 recruited subjects fitted with spinal orthoses (43 using the conventional method and 104 using the CAD/CAM method), significant decreases (p<0.05) were found in the Cobb angles when comparing pre-intervention data with those from the first year of intervention. Regarding the learning curve, orthotists became competent with the CAD/CAM technique within four years. The mean productivity of the CAD/CAM method is 2.75 times that of the conventional method. The CAD/CAM method achieved similar clinical outcomes and, given its high efficiency, could be considered a substitute for the conventional method in fabricating spinal orthoses for patients with AIS.

  2. Finding Business Information on the "Invisible Web": Search Utilities vs. Conventional Search Engines.

    ERIC Educational Resources Information Center

    Darrah, Brenda

    Researchers for small businesses, which may have no access to expensive databases or market research reports, must often rely on information found on the Internet, which can be difficult to find. Although current conventional Internet search engines are now able to index over one billion documents, there are many more documents existing in…

  3. Comparison of methods for identifying causative bacterial microorganisms in presumed acute endophthalmitis: conventional culture, blood culture, and PCR.

    PubMed

    Pongsachareonnont, Pear; Honglertnapakul, Worawalun; Chatsuwan, Tanittha

    2017-02-21

    Identification of bacterial pathogens in endophthalmitis is important to inform antibiotic selection and treatment decisions. Hemoculture bottles and polymerase chain reaction (PCR) analysis have been proposed as offering good detection sensitivity. This study compared the sensitivity and accuracy of a blood culture system, a PCR approach, and conventional culture methods for identification of causative bacteria in cases of acute endophthalmitis. Twenty-nine patients with a diagnosis of presumed acute bacterial endophthalmitis who underwent vitreous specimen collection at King Chulalongkorn Memorial Hospital were enrolled in this study. Forty-one specimens were collected. Each specimen was divided into three parts, and each part was analyzed using one of three microbial identification techniques: conventional plate culture, blood culture, and polymerase chain reaction and sequencing. The results of the three methods were then compared. Bacteria were identified in 15 of the 41 specimens (36.5%). Five (12.2%) specimens were positive by conventional culture methods, 11 (26.8%) were positive by hemoculture, and 11 (26.8%) were positive by PCR. Cohen's kappa analysis revealed p-values for conventional methods vs. hemoculture, conventional methods vs. PCR, and hemoculture vs. PCR of 0.057, 0.33, and 0.009, respectively. Higher detection rates of Enterococcus faecalis were observed for hemoculture and PCR than for conventional methods. Blood culture bottles and PCR detection may facilitate bacterial identification in cases of presumed acute endophthalmitis. These techniques should be used in addition to conventional plate culture methods because they provide a greater degree of sensitivity than conventional plate culture alone for the detection of specific microorganisms such as E. faecalis. Thai Clinical Trial Register No. TCTR20110000024.
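
    For reference, the agreement statistic used above can be computed directly from paired positive/negative calls; below is a minimal Cohen's kappa for binary detection results on hypothetical specimens.

    ```python
    def cohens_kappa(a, b):
        """Cohen's kappa for two methods' binary calls (1 = organism detected)."""
        n = len(a)
        po = sum(x == y for x, y in zip(a, b)) / n           # observed agreement
        pa, pb = sum(a) / n, sum(b) / n
        pe = pa * pb + (1 - pa) * (1 - pb)                   # chance agreement
        return (po - pe) / (1 - pe)

    # e.g. hemoculture vs. PCR calls on the same specimens (made-up data)
    print(cohens_kappa([1, 0, 1, 1, 0, 0], [1, 0, 1, 0, 0, 0]))
    ```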

  4. A Comparison of Programed Instruction with Conventional Methods for Teaching Two Units of Eighth Grade Science.

    ERIC Educational Resources Information Center

    Eshleman, Winston Hull

    Compared were programed materials and conventional methods for teaching two units of eighth grade science. Programed materials used were linear programed books requiring constructed responses. The conventional methods included textbook study, written exercises, lectures, discussions, demonstrations, experiments, chalkboard drawings, films,…

  5. Development of an energy consumption and cost data base for fuel cell total energy systems and conventional building energy systems

    NASA Astrophysics Data System (ADS)

    Pine, G. D.; Christian, J. E.; Mixon, W. R.; Jackson, W. L.

    1980-07-01

    The procedures and data sources used to develop an energy consumption and system cost data base for use in predicting the market penetration of phosphoric acid fuel cell total energy systems in the nonindustrial building market are described. A computer program was used to simulate the hourly energy requirements of six types of buildings: office buildings, retail stores, hotels and motels, schools, hospitals, and multifamily residences. The simulations used hourly weather tapes for one city in each of the ten Department of Energy administrative regions. Two types of building construction were considered, one for existing buildings and one for new buildings. A fuel cell system combined with electrically driven heat pumps and one combined with a gas boiler and an electrically driven chiller were compared with similar conventional systems. The methods of system simulation, component sizing, and system cost estimation are described for each system.

  6. Cardiac auscultation training of medical students: a comparison of electronic sensor-based and acoustic stethoscopes

    PubMed Central

    Høyte, Henning; Jensen, Torstein; Gjesdal, Knut

    2005-01-01

    Background To determine whether the use of an electronic, sensor-based stethoscope affects the cardiac auscultation skills of undergraduate medical students. Methods Forty-eight third-year medical students were randomized to use either an electronic stethoscope or a conventional acoustic stethoscope during clinical auscultation training. After a training period of four months, cardiac auscultation skills were evaluated using four patients with different cardiac murmurs. Two experienced cardiologists determined the correct answers. The students completed a questionnaire for each patient. The thirteen questions were weighted according to their relative importance, and a correct answer was credited from one to six points. Results No difference in mean score was found between the two groups (p = 0.65). Grading and characterisation of murmurs and, if present, reporting of non-existing murmurs were also rated. None of these yielded any significant differences between the groups. Conclusion Whether an electronic or a conventional stethoscope was used during training and testing did not affect the students' performance on a cardiac auscultation test. PMID:15882458

  7. Dual mode stereotactic localization method and application

    DOEpatents

    Keppel, Cynthia E.; Barbosa, Fernando Jorge; Majewski, Stanislaw

    2002-01-01

    The invention described herein combines the structural digital X-ray image provided by conventional stereotactic core biopsy instruments with the additional functional metabolic gamma imaging obtained with a dedicated compact gamma imaging mini-camera. Before the procedure, the patient is injected with an appropriate radiopharmaceutical. The radiopharmaceutical uptake distribution within the breast under compression in a conventional examination table expressed by the intensity of gamma emissions is obtained for comparison (co-registration) with the digital mammography (X-ray) image. This dual modality mode of operation greatly increases the functionality of existing stereotactic biopsy devices by yielding a much smaller number of false positives than would be produced using X-ray images alone. The ability to obtain both the X-ray mammographic image and the nuclear-based medicine gamma image using a single device is made possible largely through the use of a novel, small and movable gamma imaging camera that permits its incorporation into the same table or system as that currently utilized to obtain X-ray based mammographic images for localization of lesions.

  8. Exploiting the wavelet structure in compressed sensing MRI.

    PubMed

    Chen, Chen; Huang, Junzhou

    2014-12-01

    Sparsity has been widely utilized in magnetic resonance imaging (MRI) to reduce k-space sampling. According to structured sparsity theories, fewer measurements are required for tree-sparse data than for data with standard sparsity alone. Intuitively, more accurate image reconstruction can therefore be achieved with the same number of measurements by exploiting the wavelet tree structure in MRI. A novel algorithm is proposed in this article to reconstruct MR images from undersampled k-space data. In contrast to conventional compressed sensing MRI (CS-MRI), which relies only on the sparsity of MR images in the wavelet or gradient domain, we exploit the wavelet tree structure to improve CS-MRI. The tree-based CS-MRI problem is decomposed into three simpler subproblems, each of which can be efficiently solved by an iterative scheme. Simulations and in vivo experiments demonstrate the significant improvement of the proposed method over conventional CS-MRI algorithms, and its feasibility on MR data compared with existing tree-based imaging algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
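
    The abstract does not spell out the three-subproblem decomposition, but the baseline it builds on is easy to sketch. Below is a minimal, illustrative CS-MRI reconstruction via iterative soft-thresholding (ISTA) with plain wavelet sparsity, assuming numpy and PyWavelets; the tree-structured variant described above would replace the element-wise threshold with one acting on parent-child coefficient groups. All parameter values are hypothetical.

```python
# Minimal CS-MRI sketch: ISTA with wavelet soft-thresholding (standard
# sparsity only; a tree-structured method would threshold parent-child
# wavelet coefficient groups jointly). Assumes a dyadic image size.
import numpy as np
import pywt

def ista_csmri(kspace, mask, wavelet="db4", lam=0.01, n_iter=50):
    """Reconstruct an image from undersampled k-space data.

    kspace : 2-D complex array, measured k-space (zeros where not sampled)
    mask   : 2-D boolean array, True where k-space was sampled
    """
    x = np.zeros(kspace.shape, dtype=complex)
    for _ in range(n_iter):
        # Gradient step on the data-fidelity term ||F_u x - b||^2.
        resid = mask * (np.fft.fft2(x, norm="ortho") - kspace)
        x = x - np.fft.ifft2(resid, norm="ortho")
        # Soft-threshold the wavelet coefficients (sparsity prior).
        # Real part only, as a simplification for this illustration.
        coeffs = pywt.wavedec2(x.real, wavelet, level=3)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = np.sign(arr) * np.maximum(np.abs(arr) - lam, 0.0)
        rec = pywt.waverec2(
            pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
            wavelet)
        x = rec[:kspace.shape[0], :kspace.shape[1]].astype(complex)
    return np.abs(x)
```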

  9. Building model analysis applications with the Joint Universal Parameter IdenTification and Evaluation of Reliability (JUPITER) API

    USGS Publications Warehouse

    Banta, E.R.; Hill, M.C.; Poeter, E.; Doherty, J.E.; Babendreier, J.

    2008-01-01

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input and output conventions allow application users to access various applications and the analysis methods they embody with a minimum of time and effort. Process models simulate, for example, physical, chemical, and (or) biological systems of interest using phenomenological, theoretical, or heuristic approaches. The types of model analyses supported by the JUPITER API include, but are not limited to, sensitivity analysis, data needs assessment, calibration, uncertainty analysis, model discrimination, and optimization. The advantages provided by the JUPITER API for users and programmers allow for rapid programming and testing of new ideas. Application-specific coding can be in languages other than the Fortran-90 of the API. This article briefly describes the capabilities and utility of the JUPITER API, lists existing applications, and uses UCODE_2005 as an example.
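
    The JUPITER API itself is Fortran-90 and is not reproduced here; as a loose illustration of one analysis type it supports, the sketch below shows local sensitivity analysis of a stand-in process model via forward finite differences. All names and values are hypothetical.

```python
# Illustrative sketch (not the JUPITER API): local sensitivity analysis
# of a process model by forward finite differences.
import numpy as np

def process_model(params):
    # Stand-in for any simulated physical/chemical/biological system:
    # returns simulated observations for a given parameter vector.
    k, c0 = params
    t = np.linspace(0.0, 10.0, 21)
    return c0 * np.exp(-k * t)          # first-order decay example

def sensitivity_matrix(model, params, rel_step=1e-4):
    """Jacobian of simulated observations with respect to parameters."""
    base = model(params)
    jac = np.empty((base.size, len(params)))
    for j, p in enumerate(params):
        step = rel_step * max(abs(p), 1.0)
        bumped = np.array(params, dtype=float)
        bumped[j] += step
        jac[:, j] = (model(bumped) - base) / step
    return jac

J = sensitivity_matrix(process_model, [0.3, 5.0])
print("root-mean-square scaled sensitivities:", np.sqrt((J**2).mean(axis=0)))
```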

  10. Quality of the written radiology report: a review of the literature.

    PubMed

    Pool, Felicity; Goergen, Stacy

    2010-08-01

    A literature review was carried out, guided by the question, What are the important elements of a high-quality radiology written report? Two papers known to the authors were used as a basis for 5 PubMed search strategies. Exclusion criteria were applied to retrieved citations. Reference lists of retrieved citations were scanned for additional relevant papers and exclusion criteria applied to these. Web sites of professional radiology organizations were scanned for guidelines relating to the written radiology report. Retrieved guidelines were appraised using the Appraisal of Guidelines for Research & Evaluation instrument. Methodologies of retrieved papers were not suitable for conventional appraisal, and an evidence table was constructed. The search strategy identified 25 published papers and 4 guidelines. Published study methodologies included 1 randomized controlled trial; 1 before-and-after study of interventions; 10 observational studies, audits, or analyses; 12 surveys; and 1 narrative review of the literature. Existing guidelines have a number of weaknesses with regard to scope and purpose, methods of development, stakeholder consultation, and editorial independence and applicability. There is a major gap in published studies relating to testing of interventions to improve report quality using conventional randomized controlled trial methods. Published studies and guidelines generally support report content, including clinical history, examination quality, description of findings, comparison, and diagnosis. Important report attributes include accuracy, clarity, and certainty. There is wide variation in the language used to describe imaging findings and diagnostic certainty. Survey participants strongly preferred reports with structured or itemized formats, but few studies exist regarding the effect of report structure on quality. Copyright 2010 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  11. Computerized organ localization in abdominal CT volume with context-driven generalized Hough transform

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Li, Qiang

    2014-03-01

    Fast localization of organs is a key step in computer-aided detection of lesions and in image-guided radiation therapy. We developed a context-driven Generalized Hough Transform (GHT) for robust localization of organs of interest (OOIs) in a CT volume. Conventional GHT locates the center of an organ by looking up the center locations of pre-learned organs with "matching" edges. It often suffers from mislocalization because "similar" edges in the vicinity may attract the pre-learned organs towards wrong places. The proposed method not only uses information from the organ's own shape but also takes advantage of nearby "similar" edge structures. First, multiple GHT co-existing look-up tables (cLUTs) were constructed from a set of training shapes of different organs, each cLUT representing the spatial relationship between the center of the OOI and the shape of a co-existing organ. Second, the OOI center in a test image was determined using GHT with each cLUT separately. Third, the final localization of the OOI was based on a weighted combination of the centers obtained in the second stage. The training set consisted of 10 CT volumes with manually segmented OOIs, including liver, spleen, and kidneys. The method was tested on a set of 25 abdominal CT scans. Context-driven GHT correctly located all OOIs in the test images, with localization errors of 19.5±9.0, 12.8±7.3, 9.4±4.6, and 8.6±4.1 mm for the liver, spleen, left kidney, and right kidney, respectively. Conventional GHT mislocated 8 out of 100 organs, with localization errors of 26.0±32.6, 14.1±10.6, 30.1±42.6, and 23.6±39.7 mm for the liver, spleen, left kidney, and right kidney, respectively.
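
    For readers unfamiliar with the underlying machinery, the sketch below shows a conventional single-organ GHT (R-table construction and center voting) in numpy; the context-driven variant described above would build one such table per co-existing organ and combine the voted centers with weights. It is illustrative only.

```python
# Minimal Generalized Hough Transform sketch: build an R-table from a
# training shape, then vote for the organ center in a test image.
import numpy as np

def build_r_table(edge_points, center, orientations, n_bins=36):
    """R-table: gradient-orientation bin -> displacements to the center."""
    table = {b: [] for b in range(n_bins)}
    for (y, x), theta in zip(edge_points, orientations):
        b = int(((theta % (2 * np.pi)) / (2 * np.pi)) * n_bins) % n_bins
        table[b].append((center[0] - y, center[1] - x))
    return table

def ght_vote(edge_points, orientations, r_table, shape, n_bins=36):
    """Accumulate center votes; the peak is the estimated organ center."""
    acc = np.zeros(shape)
    for (y, x), theta in zip(edge_points, orientations):
        b = int(((theta % (2 * np.pi)) / (2 * np.pi)) * n_bins) % n_bins
        for dy, dx in r_table[b]:
            cy, cx = int(y + dy), int(x + dx)
            if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                acc[cy, cx] += 1
    return np.unravel_index(acc.argmax(), acc.shape), acc
```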

  12. Analysis of the World Experience of Smart Grid Deployment: Economic Effectiveness Issues

    NASA Astrophysics Data System (ADS)

    Ratner, S. V.; Nizhegorodtsev, R. M.

    2018-06-01

    Despite the positive dynamics in the growth of RES-based power production in electric power systems of many countries, the further development of commercially mature technologies of wind and solar generation is often constrained by the existing grid infrastructure and conventional energy supply practices. The integration of large wind and solar power plants into a single power grid and the development of microgeneration require the widespread introduction of a new smart grid technology cluster (smart power grids), whose technical advantages over the conventional ones have been fairly well studied, while issues of their economic effectiveness remain open. Estimation and forecasting potential economic effects from the introduction of innovative technologies in the power sector during the stage preceding commercial development is a methodologically difficult task that requires the use of knowledge from different sciences. This paper contains the analysis of smart grid project implementation in Europe and the United States. Interval estimates are obtained for their basic economic parameters. It was revealed that the majority of smart grid implemented projects are not yet commercially effective, since their positive externalities are usually not recognized on the revenue side due to the lack of universal methods for public benefits monetization. The results of the research can be used in modernization and development planning for the existing grid infrastructure both at the federal level and at the level of certain regions and territories.

  13. WE-FG-207B-05: Iterative Reconstruction Via Prior Image Constrained Total Generalized Variation for Spectral CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, S; Zhang, Y; Ma, J

    Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT), using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem that balances data fidelity and the prior image constrained total generalized variation of the reconstructed images in one framework. It exploits structure correlations among images in the energy domain and uses a high-quality image to guide the reconstruction of energy-specific images. In the PICTGV method, this high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from existing reconstruction methods that operate on first-order image derivatives, the PICTGV method incorporates higher-order derivatives of the images. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV on noise and artifact suppression using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared with the TGV-based method without a prior image, the relative root mean square error of the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose iterative reconstruction via prior image constrained total generalized variation for spectral CT, develop an alternating optimization algorithm, and numerically demonstrate the merits of the approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.
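
    The abstract names the ingredients of the PICTGV objective but not its exact form. A plausible reading, written here purely as an assumption for illustration and not as the authors' exact functional, is a least-squares data-fidelity term plus second-order TGV penalties on the energy-specific image and on its deviation from the broad-spectrum prior:

```latex
\min_{x_e}\;\; \tfrac{1}{2}\,\lVert A x_e - b_e \rVert_2^2
  \;+\; \alpha_0\,\mathrm{TGV}^2(x_e - x_{\mathrm{prior}})
  \;+\; \alpha_1\,\mathrm{TGV}^2(x_e)
```

    where x_e is the energy-bin image, A the system matrix, b_e the energy-bin projections, x_prior the broad-spectrum image, and alpha_0, alpha_1 weighting parameters; the alternating optimization would then update the image and the auxiliary TGV variables in turn.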

  14. Global positioning method based on polarized light compass system

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Yang, Jiangtao; Wang, Yubo; Tang, Jun; Shen, Chong

    2018-05-01

    This paper presents a global positioning method based on a polarized light compass system. A main limitation of polarization-based positioning is the environment, such as weak or locally destroyed polarization conditions; this paper addresses the problem through polarization image de-noising and segmentation, employing a pulse-coupled neural network to enhance positioning performance. The prominent advantages of the present technique are as follows: (i) compared with existing polarized-light positioning methods, better sun-tracking accuracy is achieved, and (ii) the robustness and accuracy of positioning under weak and locally destroyed polarization environments, such as cloud cover or building shielding, are improved significantly. Field experiments demonstrate the effectiveness and applicability of the proposed technique: the proposed method outperforms the conventional polarization positioning method, yielding real-time longitude and latitude accuracies of up to 0.0461° and 0.0911°, respectively.

  15. Comparison of performance due to guided hyperlearning, unguided hyperlearning, and conventional learning in mathematics: an empirical study

    NASA Astrophysics Data System (ADS)

    Fathurrohman, Maman; Porter, Anne; Worthy, Annette L.

    2014-07-01

    In this paper, the use of guided hyperlearning, unguided hyperlearning, and conventional learning methods in mathematics are compared. The design of the research involved a quasi-experiment with a modified single-factor multiple treatment design comparing the three learning methods, guided hyperlearning, unguided hyperlearning, and conventional learning. The participants were from three first-year university classes, numbering 115 students in total. Each group received guided, unguided, or conventional learning methods in one of the three different topics, namely number systems, functions, and graphing. The students' academic performance differed according to the type of learning. Evaluation of the three methods revealed that only guided hyperlearning and conventional learning were appropriate methods for the psychomotor aspects of drawing in the graphing topic. There was no significant difference between the methods when learning the cognitive aspects involved in the number systems topic and the functions topic.

  16. A new pre-classification method based on associative matching method

    NASA Astrophysics Data System (ADS)

    Katsuyama, Yutaka; Minagawa, Akihiro; Hotta, Yoshinobu; Omachi, Shinichiro; Kato, Nei

    2010-01-01

    Reducing the time complexity of character matching is critical to the development of efficient Japanese Optical Character Recognition (OCR) systems. To shorten processing time, recognition is usually split into separate pre-classification and recognition stages. For high overall recognition performance, the pre-classification stage must have very high classification accuracy and return only a small number of putative character categories for further processing. Furthermore, for any practical system, the speed of the pre-classification stage is also critical. The associative matching (AM) method has often been used for fast pre-classification, because its use of a hash table, and its reliance solely on logical bit operations to select categories, make it highly efficient. However, a certain level of redundancy exists in the hash table, because it is constructed using only the minimum and maximum values of the data on each axis and therefore does not take account of the distribution of the data. We propose a modified associative matching method that satisfies the performance criteria described above in a fraction of the time, by modifying the hash table to reflect the underlying distribution of training characters. Furthermore, we show that our approach outperforms pre-classification by clustering, ANN, and conventional AM in terms of classification accuracy, discriminative power, and speed. Compared with conventional associative matching, the proposed approach results in a 47% reduction in total processing time across an evaluation test set comprising 116,528 Japanese character images.
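
    A minimal sketch of conventional AM-style hash-table pre-classification is given below (assuming numpy): each feature axis is quantized into bins, each category marks the bins spanned by its training min/max, and candidates are selected with pure bit operations. The paper's modification, reflecting the actual training distribution instead of the min/max span, would change only how the tables are populated. Everything here is illustrative.

```python
# Hash-table pre-classification with bitmask category selection.
import numpy as np

N_BINS = 32  # bins per feature axis

def make_tables(train_vectors, labels, lo, hi):
    """tables[axis][bin] = bitmask of categories that can fall there."""
    n_axes = train_vectors.shape[1]
    tables = [[0] * N_BINS for _ in range(n_axes)]
    scaled = (train_vectors - lo) / (hi - lo) * (N_BINS - 1)
    for axis in range(n_axes):
        for cat in np.unique(labels):
            vals = scaled[labels == cat, axis]
            # Conventional AM: mark the full min..max span for this category.
            for b in range(int(vals.min()), int(vals.max()) + 1):
                tables[axis][b] |= 1 << int(cat)
    return tables

def preclassify(vec, tables, lo, hi):
    """AND the per-axis bitmasks: only a few candidate categories survive."""
    scaled = (vec - lo) / (hi - lo) * (N_BINS - 1)
    mask = ~0
    for axis, v in enumerate(scaled):
        mask &= tables[axis][int(np.clip(v, 0, N_BINS - 1))]
    return [c for c in range(mask.bit_length()) if (mask >> c) & 1]
```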

  17. Ictal SPECT using an attachable automated injector: clinical usefulness in the prediction of ictal onset zone.

    PubMed

    Lee, Jung-Ju; Lee, Sang Kun; Choi, Jang Wuk; Kim, Dong-Wook; Park, Kyung Il; Kim, Bom Sahn; Kang, Hyejin; Lee, Dong Soo; Lee, Seo-Young; Kim, Sung Hun; Chung, Chun Kee; Nam, Hyeon Woo; Kim, Kwang Ki

    2009-12-01

    Ictal single-photon emission computed tomography (SPECT) is a valuable method for localizing the ictal onset zone in the presurgical evaluation of patients with intractable epilepsy. Conventional methods used to localize the ictal onset zone suffer from the time lag between seizure onset and tracer injection. We aimed to evaluate the clinical usefulness of a method we developed, involving an attachable automated injector (AAI), in reducing this time lag and improving the ability to localize the zone of seizure onset. Patients admitted to the epilepsy monitoring unit (EMU) between January 1, 2003, and June 30, 2008, were included. The ictal onset zone was defined by comprehensive review of medical records, magnetic resonance imaging (MRI), data from video electroencephalography (EEG) monitoring, and invasive EEG monitoring if available. We comprehensively evaluated the time lag to injection and the image patterns of ictal SPECT using traditional visual analysis, statistical parametric mapping-assisted analysis, and subtraction ictal SPECT coregistered to MRI-assisted analysis. Image patterns were classified as localizing, lateralizing, or nonlateralizing. A total of 99 patients were included: 48 in the conventional group and 51 in the AAI group. The mean (SD) delay from seizure onset to injection was 12.4±12.0 s with the AAI method and 40.4±26.3 s with the conventional method (P<0.001). The mean delay from seizure detection to injection was 3.2±2.5 s with the AAI method and 21.4±9.7 s with the conventional method (P<0.001). The AAI method was superior to the conventional method in localizing the area of seizure onset (36 of 51 with the AAI method vs. 21 of 48 with the conventional method, P=0.009), especially in non-temporal lobe epilepsy (non-TLE) patients (17 of 27 vs. 3 of 13, P=0.041), and in lateralizing the seizure onset hemisphere (47 of 51 vs. 33 of 48, P=0.004). The AAI method was superior to the conventional method in reducing the time lag of tracer injection and in localizing and lateralizing the ictal onset zone, especially in patients with non-TLE.

  18. Investigation of pattern transfer to piezoelectric jetted polymer using roll-to-roll nanoimprint lithography

    NASA Astrophysics Data System (ADS)

    Menezes, Shannon John

    Nanoimprint Lithography (NIL) has existed since the mid-1990s as a proven concept for creating micro- and nanostructures by direct mechanical pattern transfer. Although NIL was initially seen as a viable option to replace conventional lithography methods, the lack of technology to support large-scale manufacturing with NIL has motivated researchers to explore ways of making the process better and more cost-efficient, with the ability to integrate NIL into mass manufacturing systems. One such approach is the roll-to-roll process, similar to that used in printing presses for newspapers and plastics. This thesis investigates the characterization of polymer deposition using a piezoelectric jetting head and the creation of micro- and nanostructures on the polymer using the roll-to-roll NIL (R2RNIL) technique.

  19. Determining the properties of accretion-gap neutron stars

    NASA Technical Reports Server (NTRS)

    Kluzniak, Wlodzimierz; Michelson, Peter; Wagoner, Robert V.

    1990-01-01

    If neutron stars have radii as small as has been argued by some, observations of accretion-powered X-rays could verify the existence of innermost stable circular orbits (predicted by general relativity) around weakly magnetized neutron stars. This may be done by detecting X-ray emission from clumps of matter before and after they cross the gap (where matter cannot be supported by rotation) between the inner accretion disk and the stellar surface. Assuming the validity of general relativity, it would then be possible to determine the masses of such neutron stars independently of any knowledge of binary orbital parameters. If an accurate mass determination were already available through any of the methods conventionally used, the new mass determination method proposed here could then be used to quantitatively test strong field effects of gravitational theory.

  20. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
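
    A toy contrast between the two nonlinear solution strategies, not the paper's FEM solver, is sketched below using scipy: the quasi-Newton (BFGS) run needs only a single iteration loop, while the Newton-Raphson loop wraps a linear solve (here a stand-in for the domain-decomposition solve) inside every nonlinear iteration.

```python
# Toy illustration of quasi-Newton vs. Newton-Raphson on a nonlinear
# "energy" minimization; all quantities are invented for the example.
import numpy as np
from scipy.optimize import minimize

def energy(u):
    # Stand-in stiffening energy: quadratic plus quartic term.
    return 0.5 * u @ u + 0.25 * np.sum(u**4)

def grad(u):
    return u + u**3

u0 = np.linspace(-1.0, 1.0, 50)

# Quasi-Newton (BFGS): the Hessian is approximated from gradient history,
# so no inner linear system has to be assembled and solved exactly.
res = minimize(energy, u0, jac=grad, method="BFGS")
print("quasi-Newton iterations:", res.nit)

# Newton-Raphson for comparison: each step solves H(u) du = -g(u).
u = u0.copy()
for it in range(50):
    g = grad(u)
    if np.linalg.norm(g) < 1e-8:
        break
    H = np.diag(1.0 + 3.0 * u**2)        # exact Hessian of the toy energy
    u += np.linalg.solve(H, -g)          # the "inner" linear solve
print("Newton iterations:", it)
```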

  1. Geopressured-geothermal test of the EDNA Delcambre No. 1 well, Tigre Lagoon Field, Vermilion Parish, Louisiana: Analysis of water and dissolved natural gas: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hankind, B.E.; Karkalits, O.C.

    1978-09-01

    The presence of large volumes of hot water (250-425 °F) containing dissolved natural gas in the Gulf of Mexico coastal areas at depths of 5,000 to 25,000 feet (the geopressured zone) has been known for several years. Because natural gas and oil from conventional production methods were relatively inexpensive prior to 1973, and because foreign oil was readily available, no economic incentive existed for developing this resource. With the oil embargo and the resulting rapid escalation in the prices of oil and gas since 1973, a new urgency exists for examining the economic potential of the geopressured-geothermal resource. The main objective of the research reported here was to determine the volume of gas dissolved in the geopressured water, as well as the qualitative and quantitative composition of the water and the dissolved gas. A further objective was to use an existing shut-in gas well so that drilling time and the attendant costs could be avoided.

  2. Decapsulation Method for Flip Chips with Ceramics in Microelectronic Packaging

    NASA Astrophysics Data System (ADS)

    Shih, T. I.; Duh, J. G.

    2008-06-01

    The decapsulation of flip chips bonded to ceramic substrates is a challenging task in the packaging industry owing to the vulnerability of the chip surface during the process. In conventional methods, such as manual grinding and polishing, the solder bumps are easily damaged during the removal of underfill, and the thin chip may even be crushed due to mechanical stress. An efficient and reliable decapsulation method consisting of thermal and chemical processes was developed in this study. The surface quality of chips after solder removal is satisfactory for the existing solder rework procedure as well as for die-level failure analysis. The innovative processes included heat-sink and ceramic substrate removal, solder bump separation, and solder residue cleaning from the chip surface. In the last stage, particular temperatures were selected for the removal of eutectic Pb-Sn, high-lead, and lead-free solders considering their respective melting points.

  3. Industrial chimney monitoring - contemporary methods

    NASA Astrophysics Data System (ADS)

    Kaszowska, Olga; Gruchlik, Piotr; Mika, Wiesław

    2018-04-01

    The paper presents knowledge acquired during the monitoring of a flue-gas stack, performed as part of technical and scientific surveillance of mining activity and its impact on industrial objects. The chimney is located in an area impacted by mining activity since the 1970s, from a coal mine which is no longer in existence. In the period of 2013-16, this area was subject to mining carried out by an entrepreneur who currently holds a license to excavate hard coal. Periodic measurements of the deflection of the 113-meter chimney are performed using conventional geodetic methods. GIG used three methods to observe the stack: land-based 3D laser scanning, continuous deflection monitoring with a laser sensor, and drone-based visual inspections. The drone offered the possibility to closely inspect the upper sections of the flue-gas stack, which are difficult to see from ground level.

  4. Molecular and Microscopical Investigation of the Microflora Inhabiting a Deteriorated Italian Manuscript Dated from the Thirteenth Century

    PubMed Central

    Michaelsen, Astrid; Piñar, Guadalupe

    2010-01-01

    This case study shows the application of nontraditional diagnostic methods to investigate the microbial consortia inhabiting an ancient manuscript. The manuscript was suspected to be biologically deteriorated and SEM observations showed the presence of fungal spores attached to fibers, but classic culturing methods did not succeed in isolating microbial contaminants. Therefore, molecular methods, including PCR, denaturing gradient gel electrophoresis (DGGE), and clone libraries, were used as a sensitive alternative to conventional cultivation techniques. DGGE fingerprints revealed a high biodiversity of both bacteria and fungi inhabiting the manuscript. DNA sequence analysis confirmed the existence of fungi and bacteria in manuscript samples. A number of fungal clones identified on the manuscript showed similarity to fungal species inhabiting dry or saline environments, suggesting that the manuscript environment selects for osmophilic or xerophilic fungal species. Most of the bacterial sequences retrieved from the manuscript belong to phylotypes with cellulolytic activities. PMID:20449583

  5. Accurate reconstruction in digital holographic microscopy using antialiasing shift-invariant contourlet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian

    2018-03-01

    The measurement of microstructured components is a challenging task in optical engineering. Digital holographic microscopy has attracted intensive attention due to its remarkable capability of measuring complex surfaces. However, speckles arise in the recorded interferometric holograms, and they will degrade the reconstructed wavefronts. Existing speckle removal methods suffer from the problems of frequency aliasing and phase distortions. A reconstruction method based on the antialiasing shift-invariant contourlet transform (ASCT) is developed. Salient edges and corners have sparse representations in the transform domain of ASCT, and speckles can be recognized and removed effectively. As subsampling in the scale and directional filtering schemes is avoided, the problems of frequency aliasing and phase distortions occurring in the conventional multiscale transforms can be effectively overcome, thereby improving the accuracy of wavefront reconstruction. As a result, the proposed method is promising for the digital holographic measurement of complex structures.

  6. On Application of Model Predictive Control to Power Converter with Switching

    NASA Astrophysics Data System (ADS)

    Zanma, Tadanao; Fukuta, Junichi; Doki, Shinji; Ishida, Muneaki; Okuma, Shigeru; Matsumoto, Takashi; Nishimori, Eiji

    This paper concerns DC-DC converter control. DC-DC converters contain both continuous components, such as inductance, conductance, and resistance, and discrete ones, namely IGBT and MOSFET semiconductor switching elements. Such a system can be regarded as a hybrid dynamical system. This paper therefore presents a DC-DC converter control technique based on model predictive control. Specifically, the case in which the load of the DC-DC converter changes from active to sleep is considered, and a control method is proposed that makes the output voltage track the reference quickly during the transition while keeping the switching frequency constant in steady state. In applying model predictive control to power electronics circuits, the switching characteristics of the devices and the constraints imposed for protection are also considered. The effectiveness of the proposed method is illustrated through simulation results, in comparison with a conventional method.
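
    Because the switch state is binary, a predictive controller for such a converter can simply enumerate switch sequences over a short horizon. The sketch below does this for a crude discretized buck-converter model in numpy; it illustrates the general idea, not the controller from the paper, and all component values are hypothetical.

```python
# Enumeration-based MPC for a switched buck converter (illustrative).
import itertools
import numpy as np

# Discretized model: state = [inductor current, capacitor voltage].
dt, L, C, R, Vin = 1e-5, 1e-4, 1e-4, 5.0, 12.0
A = np.eye(2) + dt * np.array([[0.0, -1.0 / L],
                               [1.0 / C, -1.0 / (R * C)]])
B = dt * np.array([Vin / L, 0.0])

def mpc_step(x, v_ref, horizon=4, sw_penalty=0.01, last_u=0):
    """Enumerate all switch sequences; penalize error and switching."""
    best_cost, best_u = np.inf, last_u
    for seq in itertools.product((0, 1), repeat=horizon):
        xk, cost, prev = x.copy(), 0.0, last_u
        for u in seq:
            xk = A @ xk + B * u
            cost += (xk[1] - v_ref) ** 2 + sw_penalty * (u != prev)
            prev = u
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]  # receding horizon: first move
    return best_u

x, u = np.zeros(2), 0
for k in range(2000):
    u = mpc_step(x, v_ref=5.0, last_u=u)
    x = A @ x + B * u
print("output voltage after 20 ms: %.2f V" % x[1])
```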

  7. Quantitative photoacoustic elasticity and viscosity imaging for cirrhosis detection

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Shi, Yujiao; Yang, Fen; Yang, Sihua

    2018-05-01

    Elasticity and viscosity assessments are essential for understanding and characterizing the physiological and pathological states of tissue. In this work, by establishing a photoacoustic (PA) shear wave model, an approach for quantitative PA elasticity imaging based on measurement of the rise time of the thermoelastic displacement was developed. Thus, using an existing PA viscoelasticity imaging method that features a phase delay measurement, quantitative PA elasticity imaging and viscosity imaging can be obtained in a simultaneous manner. The method was tested and validated by imaging viscoelastic agar phantoms prepared at different agar concentrations, and the imaging data were in good agreement with rheometry results. Ex vivo experiments on liver pathological models demonstrated the capability for cirrhosis detection, and the results were consistent with the corresponding histological results. This method expands the scope of conventional PA imaging and has potential to become an important alternative imaging modality.

  8. Analysis of cigarette purchase task instrument data with a left-censored mixed effects model.

    PubMed

    Liao, Wenjie; Luo, Xianghua; Le, Chap T; Chu, Haitao; Epstein, Leonard H; Yu, Jihnhee; Ahluwalia, Jasjit S; Thomas, Janet L

    2013-04-01

    The drug purchase task is a frequently used instrument for measuring the relative reinforcing efficacy (RRE) of a substance, a central concept in psychopharmacological research. Although a purchase task instrument, such as the cigarette purchase task (CPT), provides a comprehensive and inexpensive way to assess various aspects of a drug's RRE, conventional statistical methods that simply ignore the extra zeros or missing values in the data, or replace them with arbitrarily small consumption values (for example, 0.001), may not be adequate. We applied the left-censored mixed effects model to CPT data from a smoking cessation study of college students and demonstrated its superiority over the existing methods with simulation studies. Theoretical implications of the findings, limitations of the proposed method, and future directions of research are also discussed.
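
    The core of the left-censored approach is the likelihood: censored observations contribute a normal CDF term, observed ones a normal density. The sketch below fits a simplified fixed-effects (Tobit-type) version with scipy on synthetic data; the paper's model additionally includes random effects for repeated measures, omitted here for brevity.

```python
# Left-censored (Tobit-type) regression: maximum likelihood with scipy.
import numpy as np
from scipy import optimize, stats

def neg_loglik(theta, x, y, limit):
    beta0, beta1, log_sigma = theta
    sigma = np.exp(log_sigma)
    mu = beta0 + beta1 * x
    cens = y <= limit                               # at/below detection limit
    ll = np.where(
        cens,
        stats.norm.logcdf((limit - mu) / sigma),    # P(latent value <= limit)
        stats.norm.logpdf(y, loc=mu, scale=sigma),  # density of observed values
    )
    return -ll.sum()

rng = np.random.default_rng(0)
x = rng.uniform(0, 5, 300)                  # e.g. log cigarette price
latent = 2.0 - 0.6 * x + rng.normal(0, 0.5, 300)
y = np.maximum(latent, 0.0)                 # consumption censored at zero

fit = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 0.0],
                        args=(x, y, 0.0), method="Nelder-Mead")
print("estimates (beta0, beta1, sigma):",
      fit.x[0], fit.x[1], np.exp(fit.x[2]))
```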

  9. The Flight Optimization System Weights Estimation Method

    NASA Technical Reports Server (NTRS)

    Wells, Douglas P.; Horvath, Bryce L.; McCullers, Linwood A.

    2017-01-01

    FLOPS has been the primary aircraft synthesis software used by the Aeronautics Systems Analysis Branch at NASA Langley Research Center. It was created for rapid conceptual aircraft design and advanced technology impact assessments. FLOPS is a single computer program that includes weights estimation, aerodynamics estimation, engine cycle analysis, propulsion data scaling and interpolation, detailed mission performance analysis, takeoff and landing performance analysis, noise footprint estimation, and cost analysis. It is well known as a baseline and common denominator for aircraft design studies. FLOPS is capable of calibrating a model to known aircraft data, making it useful for new aircraft and modifications to existing aircraft. The weight estimation method in FLOPS is known to be of high fidelity for conventional tube-with-wing aircraft, and a substantial amount of effort went into its development. This report serves as comprehensive documentation of the FLOPS weight estimation method, presenting its development process alongside the estimation process itself.

  10. Analysis of Cigarette Purchase Task Instrument Data with a Left-Censored Mixed Effects Model

    PubMed Central

    Liao, Wenjie; Luo, Xianghua; Le, Chap; Chu, Haitao; Epstein, Leonard H.; Yu, Jihnhee; Ahluwalia, Jasjit S.; Thomas, Janet L.

    2015-01-01

    The drug purchase task is a frequently used instrument for measuring the relative reinforcing efficacy (RRE) of a substance, a central concept in psychopharmacological research. While a purchase task instrument, such as the cigarette purchase task (CPT), provides a comprehensive and inexpensive way to assess various aspects of a drug's RRE, conventional statistical methods that simply ignore the extra zeros or missing values in the data, or replace them with arbitrarily small consumption values (e.g., 0.001), may not be adequate. We applied the left-censored mixed effects model to CPT data from a smoking cessation study of college students and demonstrated its superiority over the existing methods with simulation studies. Theoretical implications of the findings, limitations of the proposed method, and future directions of research are also discussed. PMID:23356731

  11. A Modified Electrostatic Adsorption Apparatus for Latent Fingerprint Development on Unfired Cartridge Cases.

    PubMed

    Xu, Jingyang; Zhang, Ziyuan; Zheng, Xiaochun; Bond, John W

    2017-05-01

    Visualization of latent fingerprints on metallic surfaces by electrostatic charging and adsorption is a promising chemical-free method: it is nondestructive and is considered effective in difficult situations such as aged fingerprint deposits or those exposed to environmental extremes. Moreover, a portable electrostatic generator, of the kind already widely used to visualize footwear impressions, is easily accessible in a local forensic technology laboratory. In this study, a modified version of this electrostatic apparatus is proposed for latent fingerprint development and has shown great potential in visualizing fingerprints on metallic surfaces such as cartridge cases. Results indicate that this experimental arrangement can successfully develop aged latent fingerprints on metal surfaces, and we demonstrate its effectiveness compared with existing conventional fingerprint recovery methods. © 2016 American Academy of Forensic Sciences.

  12. Computer-Assisted Learning Applications in Health Educational Informatics: A Review.

    PubMed

    Shaikh, Faiq; Inayat, Faisal; Awan, Omer; Santos, Marlise D; Choudhry, Adnan M; Waheed, Abdul; Kajal, Dilkash; Tuli, Sagun

    2017-08-10

    Computer-assisted learning (CAL) as a health informatics application is a useful tool for medical students in the era of expansive knowledge bases and the increasing need for, and consumption of, automated and interactive systems. As the scope and breadth of medical knowledge expand, the need for additional learning outside of lecture hours is becoming increasingly important. CAL can be an impactful adjunct to the conventional methods that currently exist in the halls of learning. An increasing body of literature suggests that CAL should be commonplace and a recommended method of learning for medical students. Technical issues that hinder the performance of CAL are also evaluated. We conclude by encouraging the use of CAL by medical students as a highly beneficial method of learning that complements and enhances lectures and provides intuitive, interactive modulation of a self-paced curriculum based on the individual's academic abilities.

  13. Multi-Mounted X-Ray Computed Tomography.

    PubMed

    Fu, Jian; Liu, Zhenzhong; Wang, Jingzheng

    2016-01-01

    Most existing X-ray computed tomography (CT) techniques work in single-mounted mode and need to scan the inspected objects one by one, which is time-consuming and unacceptable for large-scale inspection. In this paper, we report a multi-mounted CT method and its first engineering implementation. It consists of a multi-mounted scanning geometry and a corresponding algebraic iterative reconstruction algorithm. This approach permits CT rotation scanning of multiple objects simultaneously, without increasing penetration thickness or introducing signal crosstalk. Compared with conventional single-mounted methods, it has the potential to improve imaging efficiency and suppress artifacts from beam hardening and scatter. This work comprises a numerical study of the method and its experimental verification using a dataset measured with a multi-mounted X-ray CT prototype system developed by the authors. We believe that this technique is of particular interest for advancing the engineering applications of X-ray CT.
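
    The algebraic iterative reconstruction the abstract refers to can be illustrated in its generic Kaczmarz/ART form, as sketched below with numpy; the multi-mounted geometry would enlarge the system matrix to cover several objects rotating simultaneously. The demo system is synthetic.

```python
# Generic ART (Kaczmarz) reconstruction sketch: A is the projection
# matrix, b the measured sinogram, x the image to recover.
import numpy as np

def art_reconstruct(A, b, n_sweeps=50, relax=0.5):
    """Kaczmarz sweeps: project the estimate onto each ray equation in turn."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        np.clip(x, 0.0, None, out=x)   # attenuation is nonnegative
    return x

# Tiny demo with a random sparse system standing in for ray sums.
rng = np.random.default_rng(1)
x_true = rng.random(64)
A = (rng.random((200, 64)) < 0.1).astype(float)
b = A @ x_true
x_rec = art_reconstruct(A, b)
print("relative error: %.3f" % (np.linalg.norm(x_rec - x_true) /
                                np.linalg.norm(x_true)))
```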

  14. Theory and preliminary experimental verification of quantitative edge illumination x-ray phase contrast tomography.

    PubMed

    Hagen, C K; Diemoz, P C; Endrizzi, M; Rigon, L; Dreossi, D; Arfelli, F; Lopez, F C M; Longo, R; Olivo, A

    2014-04-07

    X-ray phase contrast imaging (XPCi) methods are sensitive to phase in addition to attenuation effects and, therefore, can achieve improved image contrast for weakly attenuating materials, such as often encountered in biomedical applications. Several XPCi methods exist, most of which have already been implemented in computed tomographic (CT) modality, thus allowing volumetric imaging. The Edge Illumination (EI) XPCi method had, until now, not been implemented as a CT modality. This article provides indications that quantitative 3D maps of an object's phase and attenuation can be reconstructed from EI XPCi measurements. Moreover, a theory for the reconstruction of combined phase and attenuation maps is presented. Both reconstruction strategies find applications in tissue characterisation and the identification of faint, weakly attenuating details. Experimental results for wires of known materials and for a biological object validate the theory and confirm the superiority of the phase over conventional, attenuation-based image contrast.

  15. Turbomachinery aeroelasticity at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Kaza, Krishna Rao V.

    1989-01-01

    The turbomachinery aeroelastic effort is focused on unstalled and stalled flutter, forced response, and whirl flutter of both single rotation and counter rotation propfans. It also includes forced response of the Space Shuttle Main Engine (SSME) turbopump blades. Because of certain unique features of propfans and the SSME turbopump blades, it is not possible to directly use the existing aeroelastic technology of conventional propellers, turbofans or helicopters. Therefore, reliable aeroelastic stability and response analysis methods for these propulsion systems must be developed. The development of these methods for propfans requires specific basic technology disciplines, such as 2-D and 3-D steady and unsteady aerodynamic theories in subsonic, transonic and supersonic flow regimes; modeling of composite blades; geometric nonlinear effects; and passive and active control of flutter and response. These methods are incorporated in a computer program, ASTROP. The program has flexibility such that new and future models in basic disciplines can be easily implemented.

  16. The Convention on the Rights of Persons with Disabilities: Notes on Genealogy and Prospects

    ERIC Educational Resources Information Center

    Winzer, Margret; Mazurek, Kas

    2014-01-01

    The dense and complex "Convention on the Rights of Persons with Disabilities" (CRPD) is both a human rights treaty and a development tool. It supplements the web of existing human rights instruments insofar as they relate to disability. Schooling is enshrined as a rights-based case; inclusive education as a development tool for all…

  17. Usefulness of the automatic quantitative estimation tool for cerebral blood flow: clinical assessment of the application software tool AQCEL.

    PubMed

    Momose, Mitsuhiro; Takaki, Akihiro; Matsushita, Tsuyoshi; Yanagisawa, Shin; Yano, Kesato; Miyasaka, Tadashi; Ogura, Yuka; Kadoya, Masumi

    2011-01-01

    AQCEL enables automatic reconstruction of single-photon emission computed tomography (SPECT) images without image degradation and quantitative analysis of cerebral blood flow (CBF) after the input of simple parameters. We ascertained the usefulness and quality of images obtained by the application software AQCEL in clinical practice. Twelve patients underwent brain perfusion SPECT using technetium-99m ethyl cysteinate dimer at rest and after acetazolamide (ACZ) loading. Images reconstructed using AQCEL were compared with those reconstructed using the conventional filtered back-projection (FBP) method for qualitative estimation. Two experienced nuclear medicine physicians rated the image quality using the following visual scores: 0, same; 1, slightly superior; 2, superior. For quantitative estimation, the mean CBF values of the normal hemispheres of the 12 patients after ACZ loading calculated by the AQCEL method were compared with those calculated by the conventional method. The CBF values of the 24 regions of the 3-dimensional stereotaxic region of interest template (3DSRT) calculated by the AQCEL method at rest and after ACZ loading were compared with those calculated by the conventional method. No significant qualitative difference was observed between the AQCEL and conventional FBP methods in the rest study: the average score for the AQCEL method was 0.25 ± 0.45 and that for the conventional method was 0.17 ± 0.39 (P = 0.34). There was a significant qualitative difference between the AQCEL and conventional methods in the ACZ loading study: the average score for AQCEL was 0.83 ± 0.58 and that for the conventional method was 0.08 ± 0.29 (P = 0.003). In the quantitative estimation after ACZ loading, the mean CBF values of the 12 patients calculated by the AQCEL method were 3-8% higher than those calculated by the conventional method, and the square of the correlation coefficient between the methods was 0.995. Across the 24 3DSRT regions of the 12 patients, the squares of the correlation coefficients between the AQCEL and conventional methods were 0.973 and 0.986 for the normal and affected sides at rest, respectively, and 0.977 and 0.984 for the normal and affected sides after ACZ loading, respectively. The quality of images reconstructed using the application software AQCEL was superior to that obtained using the conventional method after ACZ loading, and quantitative values correlated highly between the methods at rest and after ACZ loading. This software can be applied in clinical practice and is a useful tool for improving reproducibility and throughput.

  18. Effects of feeding high protein or conventional canola meal on dry cured and conventionally cured bacon.

    PubMed

    Little, K L; Bohrer, B M; Stein, H H; Boler, D D

    2015-05-01

    Objectives were to compare belly, bacon processing, bacon slice, and sensory characteristics from pigs fed high protein canola meal (CM-HP) or conventional canola meal (CM-CV). Soybean meal was replaced with 0 (control), 33, 66, or 100% of both types of canola meal. Left side bellies from 70 carcasses were randomly assigned to conventional or dry cure treatment and matching right side bellies were assigned the opposite treatment. Secondary objectives were to test the existence of bilateral symmetry on fresh belly characteristics and fatty acid profiles of right and left side bellies originating from the same carcass. Bellies from pigs fed CM-HP were slightly lighter and thinner than bellies from pigs fed CM-CV, yet bacon processing, bacon slice, and sensory characteristics were unaffected by dietary treatment and did not differ from the control. Furthermore, testing the existence of bilateral symmetry on fresh belly characteristics revealed that bellies originating from the right side of the carcasses were slightly (P≤0.05) wider, thicker, heavier and firmer than bellies from the left side of the carcass. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. A proposed global metric to aid mercury pollution policy

    NASA Astrophysics Data System (ADS)

    Selin, Noelle E.

    2018-05-01

    The Minamata Convention on Mercury entered into force in August 2017, committing its currently 92 parties to take action to protect human health and the environment from anthropogenic emissions and releases of mercury. But how can we tell whether the convention is achieving its objective? Although the convention requires periodic effectiveness evaluation (1), scientific uncertainties challenge our ability to trace how mercury policies translate into reduced human and wildlife exposure and impacts. Mercury emissions to air and releases to land and water follow a complex path through the environment before accumulating as methylmercury in fish, mammals, and birds. As these environmental processes are both uncertain and variable, analyzing existing data alone does not currently provide a clear signal of whether policies are effective. A global-scale metric to assess the impact of mercury emissions policies would help parties assess progress toward the convention's goal. Here, I build on the example of the Montreal Protocol on Substances that Deplete the Ozone Layer to identify criteria for a mercury metric. I then summarize why existing mercury data are insufficient and present and discuss a proposed new metric based on mercury emissions to air. Finally, I identify key scientific uncertainties that challenge future effectiveness evaluation.

  20. NetCDF-CF: Supporting Earth System Science with Data Access, Analysis, and Visualization

    NASA Astrophysics Data System (ADS)

    Davis, E.; Zender, C. S.; Arctur, D. K.; O'Brien, K.; Jelenak, A.; Santek, D.; Dixon, M. J.; Whiteaker, T. L.; Yang, K.

    2017-12-01

    NetCDF-CF is a community-developed convention for storing and describing earth system science data in the netCDF binary data format. It is an OGC-recognized standard, and numerous existing FOSS (Free and Open Source Software) and commercial software tools can explore, analyze, and visualize data that is stored and described as netCDF-CF data. To better support a larger segment of the earth system science community, a number of efforts are underway to extend the netCDF-CF convention with the goal of increasing the types of data that can be represented as netCDF-CF data. This presentation will provide an overview and update of work to extend the existing netCDF-CF convention. It will detail the types of earth system science data currently supported by netCDF-CF and the types of data targeted for support by current netCDF-CF convention development efforts. It will also describe some of the tools that support the use of netCDF-CF compliant datasets, the types of data they support, and efforts to extend them to handle the new data types that netCDF-CF will support.
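
    As a concrete illustration of the convention, the sketch below writes a small CF-compliant file with the netCDF4 Python library; the variable contents are invented, but the attribute pattern (a global Conventions attribute plus standard_name/units on each variable) is the CF idiom.

```python
# Write a minimal CF-compliant netCDF file with the netCDF4 library.
import numpy as np
from netCDF4 import Dataset

with Dataset("example_cf.nc", "w", format="NETCDF4") as nc:
    nc.Conventions = "CF-1.8"            # declare the CF convention version
    nc.title = "Example surface air temperature field"

    nc.createDimension("lat", 3)
    nc.createDimension("lon", 4)

    lat = nc.createVariable("lat", "f4", ("lat",))
    lat.standard_name, lat.units = "latitude", "degrees_north"
    lat[:] = [10.0, 20.0, 30.0]

    lon = nc.createVariable("lon", "f4", ("lon",))
    lon.standard_name, lon.units = "longitude", "degrees_east"
    lon[:] = [0.0, 90.0, 180.0, 270.0]

    tas = nc.createVariable("tas", "f4", ("lat", "lon"))
    tas.standard_name = "air_temperature"   # from the CF standard-name table
    tas.units = "K"
    tas[:, :] = 288.0 + np.zeros((3, 4))
```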

  1. Machining and characterization of self-reinforced polymers

    NASA Astrophysics Data System (ADS)

    Deepa, A.; Padmanabhan, K.; Kuppan, P.

    2017-11-01

    This paper focuses on obtaining the mechanical properties of self-reinforced composite samples, studying the effect of different machining techniques on them, and identifying the machining method that best preserves those properties. Samples fabricated by hot compaction were subjected to tensile and flexural tests, and the corresponding loads were calculated. These composites are usually machined using conventional methods because most industries lack advanced machinery; here, advanced non-conventional methods such as abrasive water jet machining were used. The non-conventional techniques gave better results for the composite materials, with good mechanical properties compared with conventional methods, although they also alter workpiece and tool properties; they are, moreover, more economical than the conventional methods. The study concludes by identifying the machining method best suited to these self-reinforced composites, with and without defects, and by using scanning electron microscope (SEM) analysis to compare the microstructures of the PP and PE samples.

  2. Development of a novel cell sorting method that samples population diversity in flow cytometry.

    PubMed

    Osborne, Geoffrey W; Andersen, Stacey B; Battye, Francis L

    2015-11-01

    Flow cytometry based electrostatic cell sorting is an important tool in the separation of cell populations. Existing instruments can sort single cells into multi-well collection plates and keep track of the cell of origin and sorted well location. However, current single-cell sorting results reflect the population distribution and fail to capture the population diversity. Software was designed that implements a novel sorting approach, "Slice and Dice Sorting," which links a graphical representation of a multi-well plate to logic that ensures that single cells are sampled and sorted from all areas defined by the sort region(s). Therefore, the diversity of the total population is captured, and the more frequently occurring and rarer cell types are all sampled. The sorting approach was tested computationally and using functional cell-based assays. Computationally, we demonstrate that conventional single-cell sorting can sample as little as 50% of the population diversity, dependent on the population distribution, and that Slice and Dice sorting samples much more of the variety present within a cell population. We then show, by sorting single cells into wells using the Slice and Dice method, that there are cells sorted using this method that would be either rarely sorted or not sorted at all using conventional single-cell sorting approaches. The present study demonstrates a novel single-cell sorting method that samples much more of the population diversity than current methods. It has implications for clonal selection, stem cell sorting, single-cell sequencing, and any area where population heterogeneity is of importance. © 2015 International Society for Advancement of Cytometry.
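
    The sampling logic described above can be sketched independently of the instrument software: partition the 2-D sort gate into a grid and draw one event from every occupied grid cell, rather than sampling in proportion to density. A minimal numpy version, illustrative only, follows.

```python
# Grid-based "sample the diversity" picker for a 2-D sort gate.
import numpy as np

def slice_and_dice_pick(events, gate_mask, n_slices=8, rng=None):
    """events: (N, 2) array of, e.g., two fluorescence channels.
    gate_mask: boolean (N,), True for events inside the sort region.
    Returns indices of one event per occupied grid cell."""
    rng = rng or np.random.default_rng()
    pts = events[gate_mask]
    idx = np.flatnonzero(gate_mask)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    cell = np.floor((pts - lo) / (hi - lo + 1e-12) * n_slices).astype(int)
    cell = np.clip(cell, 0, n_slices - 1)
    picks = []
    for key in {tuple(c) for c in cell}:           # every occupied cell
        members = idx[(cell == key).all(axis=1)]
        picks.append(rng.choice(members))          # one event per region
    return np.array(picks)
```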

  3. Toward Consistent Methodology to Quantify Populations in Proximity to Oil and Gas Development: A National Spatial Analysis and Review.

    PubMed

    Czolowski, Eliza D; Santoro, Renee L; Srebotnjak, Tanja; Shonkoff, Seth B C

    2017-08-23

    Higher risk of exposure to environmental health hazards near oil and gas wells has spurred interest in quantifying populations that live in proximity to oil and gas development. The available studies on this topic lack consistent methodology and ignore aspects of oil and gas development of value to public health-relevant assessment and decision-making. We aim to present a methodological framework for oil and gas development proximity studies grounded in an understanding of hydrocarbon geology and development techniques. We geospatially overlay locations of active oil and gas wells in the conterminous United States with Census data to estimate the population living in proximity to hydrocarbon development at the national and state levels, and we compare our methods and findings with existing proximity studies. Nationally, we estimate that 17.6 million people live within 1,600 m (~1 mi) of at least one active oil and/or gas well. Three of the eight existing studies overestimate populations at risk from actively producing oil and gas wells by including wells without evidence of production or drilling completion and/or by using inappropriate population allocation methods. The remaining five studies, by omitting conventional wells in regions dominated by historical conventional development, significantly underestimate populations at risk. The well inventory guidelines we present provide an improved methodology for hydrocarbon proximity studies by acknowledging the importance of both conventional and unconventional well counts as well as the relative exposure risks associated with different primary production categories (e.g., oil, wet gas, dry gas) and developmental stages of wells. https://doi.org/10.1289/EHP1535.
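
    The overlay described above can be sketched with geopandas: buffer active wells by 1,600 m, merge the buffers, and allocate census population by the fraction of each block's area falling inside the merged zone. File and column names below are hypothetical.

```python
# Areal-weighted population-in-proximity estimate with geopandas.
import geopandas as gpd

wells = gpd.read_file("active_wells.shp").to_crs(epsg=5070)    # equal-area CRS
blocks = gpd.read_file("census_blocks.shp").to_crs(epsg=5070)  # has POP column

buffers = gpd.GeoDataFrame(geometry=wells.buffer(1600), crs=wells.crs)
zone = buffers.geometry.unary_union        # merge overlapping buffers

blocks["orig_area"] = blocks.geometry.area
clipped = blocks.clip(zone)                # parts of blocks inside the zone
frac = clipped.geometry.area / clipped["orig_area"]
pop_near_wells = (clipped["POP"] * frac).sum()  # areal-weighted allocation
print("Estimated population within 1,600 m of a well:", round(pop_near_wells))
```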

  4. Profit-based conventional resource scheduling with renewable energy penetration

    NASA Astrophysics Data System (ADS)

    Reddy, K. Srikanth; Panwar, Lokesh Kumar; Kumar, Rajesh; Panigrahi, B. K.

    2017-08-01

    Technological breakthroughs in renewable energy technologies (RETs) have enabled them to attain grid parity, making them potential competitors to existing conventional resources. To examine the market participation of RETs, this paper formulates a scheduling problem that accommodates energy-market participation by independent power producers (IPPs) of wind and solar generation, treating conventional and renewable resources as identical entities. Furthermore, constraints pertaining to penetration and curtailment of RETs are restructured. Additionally, an appropriate objective function is proposed for the profit incurred by conventional-resource IPPs through reserve market participation, expressed as a function of renewable energy curtailment. The proposed concept is simulated on a test system comprising 10 conventional generation units in conjunction with solar photovoltaic (SPV) and wind energy generators (WEG). The simulation results indicate that renewable energy integration and its curtailment limits influence the market participation and scheduling strategies of conventional resources in both energy and reserve markets. Load and reliability parameters are also affected.
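
    A generic profit-maximizing unit-commitment objective of the kind described here can be written as follows. This is a standard textbook form, not the paper's exact formulation; the symbols (energy and reserve prices λ^E_t and λ^R_t, scheduled energy P and reserve R, production cost C_i, start-up cost SU_i, binary commitment and start-up variables u and y) are generic assumptions:

        \max \; PF \;=\; \sum_{t=1}^{T}\sum_{i=1}^{N}
          \Big[ \lambda^{E}_{t}\,P_{i,t} + \lambda^{R}_{t}\,R_{i,t}
                - C_{i}(P_{i,t})\,u_{i,t} - SU_{i}\,y_{i,t} \Big]

    Renewable penetration and curtailment limits then enter as constraints that reshape the feasible P_{i,t} and R_{i,t}, which is how curtailment ends up influencing the conventional units' positions in both markets.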

  5. Large dynamic range terahertz spectrometers based on plasmonic photomixers (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Javadi, Hamid; Jarrahi, Mona

    2017-02-01

    Heterodyne terahertz spectrometers are in high demand for space exploration and astrophysics studies. A conventional heterodyne terahertz spectrometer consists of a terahertz mixer that mixes a received terahertz signal with a local oscillator signal to generate an intermediate frequency signal in the radio frequency (RF) range, where it can be easily processed and detected by RF electronics. Schottky diode mixers, superconductor-insulator-superconductor (SIS) mixers and hot electron bolometer (HEB) mixers are the most commonly used mixers in conventional heterodyne terahertz spectrometers. While conventional heterodyne terahertz spectrometers offer high spectral resolution and high detection sensitivity at cryogenic temperatures, their dynamic range and bandwidth are limited by the low radiation power of existing terahertz local oscillators and the narrow bandwidth of existing terahertz mixers. To address these limitations, we present a novel approach to heterodyne terahertz spectrometry based on plasmonic photomixing. The presented design replaces the terahertz mixer and local oscillator of conventional heterodyne terahertz spectrometers with a plasmonic photomixer pumped by an optical local oscillator. The optical local oscillator consists of two wavelength-tunable continuous-wave optical sources with a terahertz frequency difference. As a result, the spectrometry bandwidth and dynamic range of the presented heterodyne spectrometer are not limited by the radiation frequency and power restrictions of conventional terahertz sources. We demonstrate a proof-of-concept terahertz spectrometer with more than 90 dB dynamic range and 1 THz spectrometry bandwidth.
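
    The frequency coverage follows directly from the beat note of the two pump lasers. For continuous-wave sources at wavelengths λ1 and λ2,

        f_{\mathrm{THz}} \;=\; |f_1 - f_2| \;=\; \frac{c\,|\lambda_1 - \lambda_2|}{\lambda_1 \lambda_2}

    so, as an illustrative example (not a figure from the presentation), two telecom-band lasers near 1550 nm separated by about 8 nm beat at roughly 1 THz, and tuning either wavelength sweeps the spectrometer across its band.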

  6. Cryopreservation: Vitrification and Controlled Rate Cooling.

    PubMed

    Hunt, Charles J

    2017-01-01

    Cryopreservation is the application of low temperatures to preserve the structural and functional integrity of cells and tissues. Conventional cooling protocols allow ice to form and solute concentrations to rise during the cryopreservation process. The damage caused by the rise in solute concentration can be mitigated by the use of compounds known as cryoprotectants. Such compounds protect cells from the consequences of slow cooling injury, allowing them to be cooled at cooling rates which avoid the lethal effects of intracellular ice. An alternative to conventional cooling is vitrification. Vitrification methods incorporate cryoprotectants at sufficiently high concentrations to prevent ice crystallization so that the system forms an amorphous glass thus avoiding the damaging effects caused by conventional slow cooling. However, vitrification too can impose damaging consequences on cells as the cryoprotectant concentrations required to vitrify cells at lower cooling rates are potentially, and often, harmful. While these concentrations can be lowered to nontoxic levels, if the cells are ultra-rapidly cooled, the resulting metastable system can lead to damage through devitrification and growth of ice during subsequent storage and rewarming if not appropriately handled. The commercial and clinical application of stem cells requires robust and reproducible cryopreservation protocols and appropriate long-term, low-temperature storage conditions to provide reliable master and working cell banks. Though current Good Manufacturing Practice (cGMP) compliant methods for the derivation and banking of clinical grade pluripotent stem cells exist and stem cell lines suitable for clinical applications are available, current cryopreservation protocols, whether for vitrification or conventional slow freezing, remain suboptimal. Apart from the resultant loss of valuable product that suboptimal cryopreservation engenders, there is a danger that such processes will impose a selective pressure on the cells selecting out a nonrepresentative, freeze-resistant subpopulation. Optimizing this process requires knowledge of the fundamental processes that occur during the freezing of cellular systems, the mechanisms of damage and methods for avoiding them. This chapter draws together the knowledge of cryopreservation gained in other systems with the current state-of-the-art for embryonic and induced pluripotent stem cell preservation in an attempt to provide the background for future attempts to optimize cryopreservation protocols.

  7. Potential risk for bacterial contamination in conventional reused ventilator systems and disposable closed ventilator-suction systems

    PubMed Central

    Li, Ya-Chi; Lin, Hui-Ling; Liao, Fang-Chun; Wang, Sing-Siang; Chang, Hsiu-Chu; Hsu, Hung-Fu; Chen, Sue-Hsien

    2018-01-01

    Background Few studies have investigated the difference in bacterial contamination between conventional reused ventilator systems and disposable closed ventilator-suction systems. The aim of this study was to investigate the bacterial contamination rates of the reused and disposable ventilator systems, and the association between system disconnection and bacterial contamination of ventilator systems. Methods The enrolled intubated and mechanically ventilated patients used a conventional reused ventilator system and a disposable closed ventilator-suction system, each for one week; specimens were then collected from the ventilator circuit systems to evaluate human and environmental bacterial contamination. Sputum specimens from the patients were also analyzed. Results The detection rate of bacteria in the conventional reused ventilator system was substantially higher than that in the disposable ventilator system. The inspiratory and expiratory limbs of the disposable closed ventilator-suction system had higher bacterial concentrations than those of the conventional reused ventilator system. The bacterial concentration in the heated humidifier of the reused ventilator system was significantly higher than that in the disposable ventilator system. Positive associations existed among the bacterial concentrations at different locations in both the reused and the disposable ventilator systems. The predominant bacteria identified in the reused and disposable ventilator systems included Acinetobacter spp., Bacillus cereus, Elizabethkingia spp., Pseudomonas spp., and Stenotrophomonas (Xan) maltophilia. Conclusions Both the reused and disposable ventilator systems had high bacterial contamination rates after one week of use. Disconnection of the ventilator systems should be avoided during system operation to decrease the risks of environmental pollution and human exposure, especially for the disposable ventilator system. Trial registration ClinicalTrials.gov PRS / NCT03359148 PMID:29547638

  8. A conservative finite difference algorithm for the unsteady transonic potential equation in generalized coordinates

    NASA Technical Reports Server (NTRS)

    Bridgeman, J. O.; Steger, J. L.; Caradonna, F. X.

    1982-01-01

    An implicit, approximate-factorization, finite-difference algorithm has been developed for the computation of unsteady, inviscid transonic flows in two and three dimensions. The computer program solves the full-potential equation in generalized coordinates in conservation-law form in order to properly capture shock-wave position and speed. A body-fitted coordinate system is employed for the simple and accurate treatment of boundary conditions on the body surface. The time-accurate algorithm is modified to a conventional ADI relaxation scheme for steady-state computations. Results from two- and three-dimensional steady and two-dimensional unsteady calculations are compared with existing methods.

  9. Transport properties of lithium ions doped vanado-bismuth-tellurite glasses

    NASA Astrophysics Data System (ADS)

    Keshavamurthy, K.; Eraiah, B.

    2016-05-01

    Glasses of composition (65-x)V2O5-xLi2O-20TeO2-15Bi2O3 (x = 15 and 25 mol%) were prepared by the conventional melt-quenching method, and their electrical conductivity and dielectric properties were measured in the frequency range 40 Hz to 6 MHz over the temperature range 373 to 473 K. The conductivity values increased with both Li2O concentration and temperature. Interestingly, the dielectric response showed the existence of a negative capacitance effect in the present glass system, and it was concluded that this effect arose from the presence of an external inductive reactance.

  10. Communications and control for electric power systems: Power system stability applications of artificial neural networks

    NASA Technical Reports Server (NTRS)

    Toomarian, N.; Kirkham, Harold

    1994-01-01

    This report investigates the application of artificial neural networks to the problem of power system stability. The field of artificial intelligence, expert systems, and neural networks is reviewed. Power system operation is discussed with emphasis on stability considerations. Real-time system control has only recently been considered as applicable to stability, using conventional control methods. The report considers the use of artificial neural networks to improve the stability of the power system. The networks are considered as adjuncts and as replacements for existing controllers. The optimal kind of network to use as an adjunct to a generator exciter is discussed.

  11. Predictive momentum management for the Space Station

    NASA Technical Reports Server (NTRS)

    Hatis, P. D.

    1986-01-01

    Space station control moment gyro momentum management is addressed by posing a deterministic optimization problem with a performance index that includes station external torque loading, gyro control torque demand, and excursions from desired reference attitudes. It is shown that a simple analytic desired attitude solution exists for all axes with pitch prescription decoupled, but roll and yaw coupled. Continuous gyro desaturation is shown to fit neatly into the scheme. Example results for pitch axis control of the NASA power tower Space Station are shown based on predictive attitude prescription. Control effector loading is shown to be reduced by this method when compared to more conventional momentum management techniques.

  12. A linear stability analysis for nonlinear, grey, thermal radiative transfer problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan B., E-mail: wollaber@lanl.go; Larsen, Edward W., E-mail: edlarsen@umich.ed

    2011-02-20

    We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used 'Implicit Monte Carlo' (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or 'Semi-Analog Monte Carlo' (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ≤ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.

  13. A linear stability analysis for nonlinear, grey, thermal radiative transfer problems

    NASA Astrophysics Data System (ADS)

    Wollaber, Allan B.; Larsen, Edward W.

    2011-02-01

    We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used “Implicit Monte Carlo” (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or “Semi-Analog Monte Carlo” (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ⩽ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.

  14. Predicting drug loading in PLA-PEG nanoparticles.

    PubMed

    Meunier, M; Goupil, A; Lienard, P

    2017-06-30

    Polymer nanoparticles present advantageous physical and biopharmaceutical properties as drug delivery systems compared to conventional liquid formulations. Active pharmaceutical ingredients (APIs) are often hydrophobic, and thus not soluble in conventional liquid formulations. Encapsulating the drugs in polymer nanoparticles can improve their pharmacological and bio-distribution properties, preventing rapid clearance from the bloodstream. Such nanoparticles are commonly made of non-toxic amphiphilic self-assembling block copolymers where the core (poly-[d,l-lactic acid] or PLA) serves as a reservoir for the API and the external part (poly-(ethylene glycol) or PEG) serves as a stealth corona to avoid capture by macrophages. The present study aims to predict the drug affinity for PLA-PEG nanoparticles and their effective drug loading using in silico tools in order to virtually screen potential drugs for non-covalent encapsulation applications. To that end, different simulation methods such as molecular dynamics and Monte-Carlo have been used to estimate the binding of actives on model polymer surfaces. Initially, the methods and models are validated against a series of pigment molecules for which experimental data exist. The drug affinity for the core of the nanoparticles is estimated using a Monte-Carlo "docking" method. Drug miscibility in the polymer matrix, using the Hildebrand solubility parameter (δ), and the solvation free energy of the drug in the PLA polymer model are then estimated. Finally, existing published ALogP quantitative structure-property relationships (QSPR) are compared to this method. Our results demonstrate that adsorption energies modelled by docking atomistic simulations on PLA surfaces correlate well with experimental drug loadings, whereas simpler approaches based on Hildebrand solubility parameters and Flory-Huggins interaction parameters do not. More complex molecular dynamics techniques which use estimation of the solvation free energies both in PLA and in water led to satisfactory predictive models. In addition, experimental drug loadings and Log P are found to correlate well. This work can be used to improve the understanding of drug-polymer interactions, a key component of designing better delivery systems. Copyright © 2017 Elsevier B.V. All rights reserved.
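
    The Hildebrand/Flory-Huggins screen mentioned above (which the authors found less predictive than the docking simulations) reduces to a one-line calculation. A minimal Python sketch, with illustrative values not taken from the paper:

        R = 8.314  # gas constant, J/(mol*K)

        def flory_huggins_chi(delta_drug, delta_polymer, v_molar, T=298.15):
            """Flory-Huggins interaction parameter from Hildebrand
            solubility parameters (MPa^0.5) and a molar volume in
            cm^3/mol; chi below ~0.5 suggests drug-polymer miscibility."""
            # delta^2 in MPa equals J/cm^3, so v_molar * (d_delta)^2 is
            # J/mol and chi comes out dimensionless.
            return v_molar * (delta_drug - delta_polymer) ** 2 / (R * T)

        # e.g. a drug at 22 MPa^0.5 against a polymer at 20 MPa^0.5:
        print(flory_huggins_chi(22.0, 20.0, v_molar=150.0))  # ~0.24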

  15. Modified HPLC-ESI-MS Method for Glycated Hemoglobin Quantification Based on the IFCC Reference Measurement Procedure and Its Application for Quantitative Analyses in Clinical Laboratories of China.

    PubMed

    Song, Zhixin; Xie, Baoyuan; Ma, Huaian; Zhang, Rui; Li, Pengfei; Liu, Lihong; Yue, Yuhong; Zhang, Jianping; Tong, Qing; Wang, Qingtao

    2016-09-01

    The level of glycated hemoglobin (HbA1c) has been recognized as an important indicator of long-term glycemic control. However, the HbA1c measurement is not currently included as a diagnostic determinant in China. The current study aims to assess a candidate modified International Federation of Clinical Chemistry (IFCC) reference method for the forthcoming standardization of HbA1c measurements in China. The HbA1c concentration was measured using a modified high-performance liquid chromatography-electrospray ionization-mass spectrometry (HPLC-ESI-MS) method. The modified method replaces the propylcyanide column with a C18 reversed-phase column, which has a lower cost and is more commonly used in China, and uses 0.1% (26.5 mmol/l) formic acid instead of trifluoroacetic acid. Moreover, in order to minimize matrix interference and reduce the running time, a solid-phase extraction was employed. The discrepancies between HbA1c measurements using conventional methods and the HPLC-ESI-MS method were clarified in clinical samples from healthy people and diabetic patients. Corresponding samples were distributed to 89 hospitals in Beijing for external quality assessment. The linearity, reliability, and accuracy of the modified HPLC-ESI-MS method with a shortened running time of 6 min were successfully validated. Of the 89 hospitals evaluated, the relative biases of HbA1c concentrations were < 8% for 74 hospitals and < 5% for 60 hospitals. Compared with other conventional methods, HbA1c concentrations determined by HPLC methods were similar to the values obtained from the current HPLC-ESI-MS method. The HPLC-ESI-MS method represents an improvement over existing methods and provides a simple, stable, and rapid HbA1c measurement with strong signal intensities and reduced ion suppression. © 2015 Wiley Periodicals, Inc.

  16. A spatially filtered multilevel model to account for spatial dependency: application to self-rated health status in South Korea

    PubMed Central

    2014-01-01

    Background This study aims to suggest an approach that integrates multilevel models and eigenvector spatial filtering methods and apply it to a case study of self-rated health status in South Korea. In many previous health-related studies, multilevel models and single-level spatial regression are used separately. However, the two methods should be used in conjunction because the objectives of both approaches are important in health-related analyses. The multilevel model enables the simultaneous analysis of both individual and neighborhood factors influencing health outcomes. However, the results of conventional multilevel models are potentially misleading when spatial dependency across neighborhoods exists. Spatial dependency in health-related data indicates that health outcomes in nearby neighborhoods are more similar to each other than those in distant neighborhoods. Spatial regression models can address this problem by modeling spatial dependency. This study explores the possibility of integrating a multilevel model and eigenvector spatial filtering, an advanced spatial regression for addressing spatial dependency in datasets. Methods In this spatially filtered multilevel model, eigenvectors function as additional explanatory variables accounting for unexplained spatial dependency within the neighborhood-level error. The specification addresses the inability of conventional multilevel models to account for spatial dependency, and thereby, generates more robust outputs. Results The findings show that sex, employment status, monthly household income, and perceived levels of stress are significantly associated with self-rated health status. Residents living in neighborhoods with low deprivation and a high doctor-to-resident ratio tend to report higher health status. The spatially filtered multilevel model provides unbiased estimations and improves the explanatory power of the model compared to conventional multilevel models although there are no changes in the signs of parameters and the significance levels between the two models in this case study. Conclusions The integrated approach proposed in this paper is a useful tool for understanding the geographical distribution of self-rated health status within a multilevel framework. In future research, it would be useful to apply the spatially filtered multilevel model to other datasets in order to clarify the differences between the two models. It is anticipated that this integrated method will also outperform conventional models when it is used in other contexts. PMID:24571639
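
    The construction at the heart of the filtering step can be sketched in NumPy. This follows the standard Moran eigenvector approach (eigenvectors of the doubly centred spatial weights matrix); the weights matrix itself and the rule for how many eigenvectors to keep are assumptions of the sketch:

        import numpy as np

        def moran_eigenvectors(W, k=10):
            """Return k candidate spatial filters: eigenvectors of
            M C M, where C is the symmetrised neighbourhood weights
            matrix and M the centring projector. Eigenvectors with
            large positive eigenvalues carry strong positive spatial
            autocorrelation and can serve as extra neighbourhood-level
            covariates in the multilevel model."""
            n = W.shape[0]
            C = (W + W.T) / 2.0                  # symmetrise weights
            M = np.eye(n) - np.ones((n, n)) / n  # centring projector
            vals, vecs = np.linalg.eigh(M @ C @ M)
            order = np.argsort(vals)[::-1]       # largest eigenvalues first
            return vecs[:, order[:k]]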

  17. Development of sandwich dot-ELISA for specific detection of Ochratoxin A and its application on to contaminated cereal grains originating from India

    PubMed Central

    Venkataramana, M.; Rashmi, R.; Uppalapati, Siva R.; Chandranayaka, S.; Balakrishna, K.; Radhika, M.; Gupta, Vijai K.; Batra, H. V.

    2015-01-01

    In the present study, the generation and characterization of a highly specific monoclonal antibody (mAb) against Ochratoxin A (OTA) was undertaken. The generated mAb was further used to develop a simple, fast, and sensitive sandwich dot-ELISA (s-dot ELISA) method for the detection of OTA in contaminated food grain samples. The limit of detection (LOD) of the developed enzyme-linked immunosorbent assay (ELISA) method was determined to be 5.0 ng/mL of OTA. The developed method was specific to OTA, and no cross-reactivity was observed with the other tested mycotoxins such as deoxynivalenol, fumonisin B1, or aflatoxin B1. To assess the utility and reliability of the developed method, field samples of maize, wheat and rice (n = 195) collected from different geographical areas of the southern Karnataka region of India were evaluated for OTA occurrence. Seventy-two of the 195 samples (19 maize, 38 wheat, and 15 rice) were found to be contaminated by OTA by s-dot ELISA. The assay results were further co-evaluated with a conventional analytical high-performance liquid chromatography (HPLC) method. The results of the s-dot ELISA are in concordance with HPLC except for three samples that were negative for OTA by s-dot ELISA but positive by HPLC. Although positive by HPLC, the amount of OTA in these three samples was below the accepted level (5 μg/kg) for OTA in cereals. In conclusion, the developed s-dot ELISA is a better alternative for routine cereal-based food and feed analysis in diagnostic labs to check for the presence of OTA than the existing tedious, culture-based and conventional analytical methods. PMID:26074899

  18. Identification of Malassezia species in patients with seborrheic dermatitis in China.

    PubMed

    Zhang, Hao; Ran, Yuping; Xie, Zhen; Zhang, Ruifeng

    2013-02-01

    The causes of seborrheic dermatitis (SD) are complex and incompletely understood. Among the contributing factors, Malassezia yeasts have been reported to play a major etiological role in SD. Many previous studies adopted conventional culture methods that are poorly suited to detecting the Malassezia microflora in SD patients, resulting in a low detection rate for each species and high variance in the types of microflora observed. This study analyzed the Malassezia microflora in SD patients by applying a transparent dressing to the lesional skin and directly detecting fungal DNA using nested PCR. We collected samples from the lesional skin of 146 SD patients in China and extracted fungal DNA directly from the lesional samples without culture. Specific primers for each Malassezia species were designed to amplify the yeasts present in each sample. Some samples were randomly selected for culture and identified by morphological and physiological criteria. M. globosa and M. restricta were found in 87.0 and 81.5% of seborrheic dermatitis patients, respectively, and together accounted for more than 50% of the Malassezia spp. recovered from these Chinese patients. The majority of SD patients (82.9%) showed co-colonization by two or more Malassezia species. M. globosa and M. restricta predominated in Malassezia colonization in Chinese SD patients. Compared with conventional culture, non-culture-based methods may more accurately reflect the constitution of the Malassezia microflora.

  19. Doping of two-dimensional MoS2 by high energy ion implantation

    NASA Astrophysics Data System (ADS)

    Xu, Kang; Zhao, Yuda; Lin, Ziyuan; Long, Yan; Wang, Yi; Chan, Mansun; Chai, Yang

    2017-12-01

    Two-dimensional (2D) materials have been demonstrated to be promising candidates for next-generation electronic circuits. Analogous to conventional Si-based semiconductors, p- and n-doping of 2D materials are essential for building complementary circuits. Controllable and effective doping strategies require large tunability of the doping level and negligible structural damage to ultrathin 2D materials. In this work, we demonstrate a doping method utilizing a conventional high-energy ion-implantation machine. Before the implantation, a polymethylmethacrylate (PMMA) protective layer is used to decelerate the dopant ions and minimize the structural damage to MoS2, thus aggregating the dopants inside the MoS2 flakes. By optimizing the implantation energy and fluence, phosphorus dopants are incorporated into MoS2 flakes. Our Raman and high-resolution transmission electron microscopy (HRTEM) results show that only negligible structural damage is introduced to the MoS2 lattice during the implantation. The p-doping effect of the incorporated phosphorus is demonstrated by photoluminescence (PL) and electrical characterizations. A thinner PMMA protective layer leads to larger kinetic damage but also a more significant doping effect, and MoS2 flakes of larger thickness show less kinetic damage. This doping method makes use of existing infrastructure in the semiconductor industry and can be extended to other 2D materials and dopant species as well.

  20. Combining PubMed knowledge and EHR data to develop a weighted bayesian network for pancreatic cancer prediction.

    PubMed

    Zhao, Di; Weng, Chunhua

    2011-10-01

    In this paper, we propose a novel method that combines PubMed knowledge and Electronic Health Records to develop a weighted Bayesian Network Inference (BNI) model for pancreatic cancer prediction. We selected 20 common risk factors associated with pancreatic cancer and used PubMed knowledge to weight the risk factors. A keyword-based algorithm was developed to extract and classify PubMed abstracts into three categories that represented positive, negative, or neutral associations between each risk factor and pancreatic cancer. Then we designed a weighted BNI model by adding the normalized weights into a conventional BNI model. We used this model to extract the EHR values for patients with or without pancreatic cancer, which then enabled us to calculate the prior probabilities for the 20 risk factors in the BNI. The software iDiagnosis was designed to use this weighted BNI model for predicting pancreatic cancer. In an evaluation using a case-control dataset, the weighted BNI model significantly outperformed the conventional BNI and two other classifiers (k-Nearest Neighbor and Support Vector Machine). We conclude that the weighted BNI using PubMed knowledge and EHR data shows remarkable accuracy improvement over existing representative methods for pancreatic cancer prediction. Copyright © 2011 Elsevier Inc. All rights reserved.
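
    A toy sketch of the weighting idea in Python; the counting rule, the dilution by neutral reports, and the normalization below are illustrative assumptions, not the paper's algorithm:

        def risk_factor_weight(n_pos, n_neg, n_neutral):
            # Net share of abstracts reporting a positive association,
            # diluted by neutral reports (purely illustrative).
            total = n_pos + n_neg + n_neutral
            return (n_pos - n_neg) / total if total else 0.0

        # Hypothetical abstract counts (positive, negative, neutral):
        counts = {"smoking": (120, 10, 30), "diabetes": (80, 15, 25)}
        raw = {k: risk_factor_weight(*v) for k, v in counts.items()}
        z = sum(abs(w) for w in raw.values()) or 1.0
        weights = {k: w / z for k, w in raw.items()}  # normalized weights
        # The normalized weights would then be folded into the
        # conditional probabilities of the Bayesian network.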

  1. Combining PubMed Knowledge and EHR Data to Develop a Weighted Bayesian Network for Pancreatic Cancer Prediction

    PubMed Central

    Zhao, Di; Weng, Chunhua

    2011-01-01

    In this paper, we propose a novel method that combines PubMed knowledge and Electronic Health Records to develop a weighted Bayesian Network Inference (BNI) model for pancreatic cancer prediction. We selected 20 common risk factors associated with pancreatic cancer and used PubMed knowledge to weight the risk factors. A keyword-based algorithm was developed to extract and classify PubMed abstracts into three categories that represented positive, negative, or neutral associations between each risk factor and pancreatic cancer. Then we designed a weighted BNI model by adding the normalized weights into a conventional BNI model. We used this model to extract the EHR values for patients with or without pancreatic cancer, which then enabled us to calculate the prior probabilities for the 20 risk factors in the BNI. The software iDiagnosis was designed to use this weighted BNI model for predicting pancreatic cancer. In an evaluation using a case-control dataset, the weighted BNI model significantly outperformed the conventional BNI and two other classifiers (k-Nearest Neighbor and Support Vector Machine). We conclude that the weighted BNI using PubMed knowledge and EHR data shows remarkable accuracy improvement over existing representative methods for pancreatic cancer prediction. PMID:21642013

  2. Accurate evaluation of fast threshold voltage shift for SiC MOS devices under various gate bias stress conditions

    NASA Astrophysics Data System (ADS)

    Sometani, Mitsuru; Okamoto, Mitsuo; Hatakeyama, Tetsuo; Iwahashi, Yohei; Hayashi, Mariko; Okamoto, Dai; Yano, Hiroshi; Harada, Shinsuke; Yonezawa, Yoshiyuki; Okumura, Hajime

    2018-04-01

    We investigated methods of measuring the threshold voltage (Vth) shift of 4H-silicon carbide (SiC) metal–oxide–semiconductor field-effect transistors (MOSFETs) under positive DC, negative DC, and AC gate bias stresses. A fast measurement method for the Vth shift under both positive and negative DC stresses revealed the existence of an extremely large Vth shift in the short-stress-time region. We then examined the effect of fast Vth shifts on drain current (Id) changes within a pulse under AC operation. The fast Vth shifts were suppressed by nitridation. However, the Id change within one pulse occurred even in commercially available SiC MOSFETs. The correlation between Id changes within one pulse and Vth shifts measured by a conventional method is weak. Thus, a fast and in situ measurement method is indispensable for the accurate evaluation of Id changes under AC operation.

  3. Recent progress in the assembly of nanodevices and van der Waals heterostructures by deterministic placement of 2D materials.

    PubMed

    Frisenda, Riccardo; Navarro-Moratalla, Efrén; Gant, Patricia; Pérez De Lara, David; Jarillo-Herrero, Pablo; Gorbachev, Roman V; Castellanos-Gomez, Andres

    2018-01-02

    Designer heterostructures can now be assembled layer-by-layer with unmatched precision thanks to the recently developed deterministic placement methods to transfer two-dimensional (2D) materials. This possibility constitutes the birth of a very active research field on the so-called van der Waals heterostructures. Moreover, these deterministic placement methods also open the door to fabricate complex devices, which would be otherwise very difficult to achieve by conventional bottom-up nanofabrication approaches, and to fabricate fully-encapsulated devices with exquisite electronic properties. The integration of 2D materials with existing technologies such as photonic and superconducting waveguides and fiber optics is another exciting possibility. Here, we review the state-of-the-art of the deterministic placement methods, describing and comparing the different alternative methods available in the literature, and we illustrate their potential to fabricate van der Waals heterostructures, to integrate 2D materials into complex devices and to fabricate artificial bilayer structures where the layers present a user-defined rotational twisting angle.

  4. Precision and accuracy in smFRET based structural studies—A benchmark study of the Fast-Nano-Positioning System

    NASA Astrophysics Data System (ADS)

    Nagy, Julia; Eilert, Tobias; Michaelis, Jens

    2018-03-01

    Modern hybrid structural analysis methods have opened new possibilities to analyze and resolve flexible protein complexes where conventional crystallographic methods have reached their limits. Here, the Fast-Nano-Positioning System (Fast-NPS), a Bayesian parameter estimation-based analysis method and software, is an interesting method since it allows for the localization of unknown fluorescent dye molecules attached to macromolecular complexes based on single-molecule Förster resonance energy transfer (smFRET) measurements. However, the precision, accuracy, and reliability of structural models derived from results based on such complex calculation schemes are oftentimes difficult to evaluate. Therefore, we present two proof-of-principle benchmark studies where we use smFRET data to localize supposedly unknown positions on a DNA as well as on a protein-nucleic acid complex. Since we use complexes where structural information is available, we can compare Fast-NPS localization to the existing structural data. In particular, we compare different dye models and discuss how both accuracy and precision can be optimized.

  5. Equivalence testing using existing reference data: An example with genetically modified and conventional crops in animal feeding studies.

    PubMed

    van der Voet, Hilko; Goedhart, Paul W; Schmidt, Kerstin

    2017-11-01

    An equivalence testing method is described to assess the safety of regulated products using relevant data obtained in historical studies with assumedly safe reference products. The method is illustrated using data from a series of animal feeding studies with genetically modified and reference maize varieties. Several criteria for quantifying equivalence are discussed, and study-corrected distribution-wise equivalence is selected as being appropriate for the example case study. An equivalence test is proposed based on a high probability of declaring equivalence in a simplified situation, where there is no between-group variation, where the historical and current studies have the same residual variance, and where the current study is assumed to have a sample size as set by a regulator. The method makes use of generalized fiducial inference methods to integrate uncertainties from both the historical and the current data. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. ROKU: a novel method for identification of tissue-specific genes.

    PubMed

    Kadota, Koji; Ye, Jiazhen; Nakai, Yuji; Terada, Tohru; Shimizu, Kentaro

    2006-06-12

    One of the important goals of microarray research is the identification of genes whose expression is considerably higher or lower in some tissues than in others. We would like to have ways of identifying such tissue-specific genes. We describe a method, ROKU, which selects tissue-specific patterns from gene expression data for many tissues and thousands of genes. ROKU ranks genes according to their overall tissue specificity using Shannon entropy and, using an outlier detection method, identifies the tissues specific to each gene, if any exist. We evaluated the capacity for the detection of various specific expression patterns using synthetic and real data. We observed that ROKU was superior to a conventional entropy-based method in its ability to rank genes according to overall tissue specificity and to detect genes whose expression patterns are specific only to the tissues of interest. ROKU is useful for the detection of various tissue-specific expression patterns. The framework is also directly applicable to the selection of diagnostic markers for molecular classification of multiple classes.
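
    The entropy ranking can be sketched directly; note that ROKU first processes each expression vector (Tukey biweight centring) before computing entropy, a step this minimal Python sketch omits:

        import numpy as np

        def expression_entropy(x):
            """Shannon entropy of a gene's expression profile across
            tissues: low entropy means expression is concentrated in a
            few tissues (tissue-specific); high entropy means roughly
            uniform expression (ubiquitous)."""
            p = np.asarray(x, dtype=float)
            p = p / p.sum()
            p = p[p > 0]  # define 0 * log(0) as 0
            return float(-(p * np.log2(p)).sum())

        print(expression_entropy([95, 1, 1, 1, 1]))      # low: specific
        print(expression_entropy([20, 20, 20, 20, 20]))  # log2(5): ubiquitous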

  7. An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling

    NASA Astrophysics Data System (ADS)

    Wang, Enjiang; Liu, Yang

    2018-01-01

    The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirements. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersion simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations and thus less computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor-series-expansion-based FD coefficients, we derive least-squares-optimisation-based implicit spatial FD coefficients. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersion, and thus can provide more accurate wavefields.
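
    For contrast, the conventional explicit baseline that such schemes improve on is easy to state: second-order time stepping with Taylor-series centred spatial coefficients of order 2M. A minimal Python sketch for the 1-D constant-velocity acoustic equation, with assumed grid parameters and periodic boundaries for brevity (the paper's implicit spatial and rhombus temporal schemes are not reproduced here):

        import numpy as np

        nx, nt, dx, dt, v, M = 400, 800, 5.0, 5e-4, 2000.0, 4

        # Centred 2M-th order second-derivative weights from the Taylor
        # conditions: sum_m a_m * m^(2k) = 1 if k == 1 else 0, k = 1..M.
        m = np.arange(1, M + 1, dtype=float)
        k = np.arange(1, M + 1, dtype=float)
        B = m[None, :] ** (2 * k[:, None])
        a = np.linalg.solve(B, np.eye(M)[0])
        a0 = -2.0 * a.sum()  # central weight

        p_prev = np.zeros(nx)
        p = np.zeros(nx)
        p[nx // 2] = 1.0                 # impulsive point source
        r2 = (v * dt / dx) ** 2          # squared Courant number (0.04 here)
        for _ in range(nt):
            lap = a0 * p
            for j in range(1, M + 1):
                lap += a[j - 1] * (np.roll(p, j) + np.roll(p, -j))
            p, p_prev = 2.0 * p - p_prev + r2 * lap, p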

  8. Autonomous Aerodynamic Control of Micro Air Vehicles

    DTIC Science & Technology

    2009-10-19

    Wind tunnel studies have also begun in which detailed aerodynamic quantification can be made regarding MAV performance with flexible airframes...research. The design is similar to existing MAVs. The airframe has a conventional aircraft design to allow for easy determination of aerodynamic...exceeded in normal flight by conventional aircraft; however, it is not uncommon for a MAV to surpass the limits due to its low inertia. While collecting

  9. Poly (lactic-co-glycolic acid) particles prepared by microfluidics and conventional methods. Modulated particle size and rheology.

    PubMed

    Perez, Aurora; Hernández, Rebeca; Velasco, Diego; Voicu, Dan; Mijangos, Carmen

    2015-03-01

    Microfluidic techniques are expected to provide a narrower particle size distribution than conventional methods for the preparation of poly(lactic-co-glycolic acid) (PLGA) microparticles. Furthermore, it is hypothesized that the particle size distribution of PLGA microparticles influences the settling behavior and rheological properties of their aqueous dispersions. For the preparation of PLGA particles, two different methods, microfluidic and conventional oil-in-water emulsification, were employed. The particle size and particle size distribution of PLGA particles prepared by microfluidics were studied as a function of the flow rate of the organic phase, while particles prepared by conventional methods were studied as a function of stirring rate. To study the stability and structural organization of the colloidal dispersions, settling experiments and oscillatory rheological measurements were carried out on aqueous dispersions of PLGA particles with different particle size distributions. The microfluidic technique allowed control of the size and size distribution of the droplets formed in the process of emulsification. This resulted in a narrower particle size distribution for samples prepared by microfluidics than for samples prepared by conventional methods. Polydisperse samples showed a larger tendency to aggregate, confirming the advantages of microfluidics over conventional methods, especially if biomedical applications are envisaged. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Lazy collaborative filtering for data sets with missing values.

    PubMed

    Ren, Yongli; Li, Gang; Zhang, Jun; Zhou, Wanlei

    2013-12-01

    As one of the biggest challenges in research on recommender systems, the data sparsity issue is mainly caused by the fact that users tend to rate a small proportion of items from the huge number of available items. This issue becomes even more problematic for the neighborhood-based collaborative filtering (CF) methods, as there are even lower numbers of ratings available in the neighborhood of the query item. In this paper, we aim to address the data sparsity issue in the context of neighborhood-based CF. For a given query (user, item), a set of key ratings is first identified by taking the historical information of both the user and the item into account. Then, an auto-adaptive imputation (AutAI) method is proposed to impute the missing values in the set of key ratings. We present a theoretical analysis to show that the proposed imputation method effectively improves the performance of the conventional neighborhood-based CF methods. The experimental results show that our new method of CF with AutAI outperforms six existing recommendation methods in terms of accuracy.
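
    A baseline illustration in Python of neighbourhood-based CF with naive imputation; the proposed AutAI method imputes only an adaptively chosen set of key ratings, which this sketch does not reproduce — it shows the simpler scheme such methods improve on:

        import numpy as np

        def predict_rating(R, user, item, k=5):
            """Item-based neighbourhood CF on a ratings matrix R
            (users x items) with missing entries as np.nan, imputed
            here by the per-item mean."""
            filled = np.where(np.isnan(R), np.nanmean(R, axis=0), R)
            target = filled[:, item]
            sims = []
            for j in range(R.shape[1]):
                if j == item:
                    continue
                c = np.corrcoef(filled[:, j], target)[0, 1]  # Pearson
                if np.isfinite(c):
                    sims.append((c, j))
            top = sorted(sims, reverse=True)[:k]  # k most similar items
            num = sum(c * filled[user, j] for c, j in top)
            den = sum(abs(c) for c, _ in top) or 1.0
            return num / den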

  11. The advance of non-invasive detection methods in osteoarthritis

    NASA Astrophysics Data System (ADS)

    Dai, Jiao; Chen, Yanping

    2011-06-01

    Osteoarthritis (OA) is one of the most prevalent chronic diseases, severely affecting patients' quality of life and imposing a substantial economic burden. Detection and evaluation technology can provide basic information for early treatment. A variety of imaging methods used in OA are reviewed, such as conventional X-ray, computed tomography (CT), ultrasound (US), magnetic resonance imaging (MRI) and near-infrared spectroscopy (NIRS). Among the existing imaging modalities, the spatial resolution of X-ray is extremely high; CT is a three-dimensional method with high density resolution; US, as an evaluation method for knee OA, sensitively discriminates degenerative cartilage from normal cartilage; MRI, a sensitive and nonionizing method, is suitable for the detection of early OA, but is too expensive for routine use; NIRS is a safe, low-cost modality that is also good at detecting early-stage OA. In short, each method has its own advantages, but NIRS offers the broadest application prospects and is likely to be used in the clinical daily routine and to become the gold standard for diagnostic detection.

  12. Force analysis of magnetic bearings with power-saving controls

    NASA Technical Reports Server (NTRS)

    Johnson, Dexter; Brown, Gerald V.; Inman, Daniel J.

    1992-01-01

    Most magnetic bearing control schemes use a bias current with a superimposed control current to linearize the relationship between the control current and the force it delivers. For most operating conditions, the existence of the bias current requires more power than alternative methods that do not use conventional bias. Two such methods are examined which diminish or eliminate bias current. In the typical bias control scheme it is found that for a harmonic control force command into a voltage limited transconductance amplifier, the desired force output is obtained only up to certain combinations of force amplitude and frequency. Above these values, the force amplitude is reduced and a phase lag occurs. The power saving alternative control schemes typically exhibit such deficiencies at even lower command frequencies and amplitudes. To assess the severity of these effects, a time history analysis of the force output is performed for the bias method and the alternative methods. Results of the analysis show that the alternative approaches may be viable. The various control methods examined were mathematically modeled using nondimensionalized variables to facilitate comparison of the various methods.
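
    The linearization referred to here comes from driving an opposed pair of electromagnets with bias-plus-control and bias-minus-control currents. In the standard constant-gap textbook model (a sketch, not the report's derivation), with bias current i_b, control current i_c and nominal gap g_0,

        F \;=\; k\,\frac{(i_b + i_c)^2}{g_0^2} \;-\; k\,\frac{(i_b - i_c)^2}{g_0^2}
          \;=\; \frac{4k\,i_b}{g_0^2}\; i_c

    so the net force is linear in i_c. The price of that linearity is the continuous resistive dissipation of the bias current in both coils, which is exactly the power the low-bias and zero-bias alternatives try to recover, at the cost of the force amplitude and phase-lag limitations analyzed in the report.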

  13. A two-step method for rapid characterization of electroosmotic flows in capillary electrophoresis.

    PubMed

    Zhang, Wenjing; He, Muyi; Yuan, Tao; Xu, Wei

    2017-12-01

    The measurement of electroosmotic flow (EOF) is important in capillary electrophoresis (CE) experiments for performance optimization and stability improvement. Although several methods exist, there is a pressing need to accurately characterize ultra-low electroosmotic flow rates (EOF rates), such as those in the coated capillaries used in protein separations. In this work, a new method, called the two-step method, was developed to accurately and rapidly measure EOF rates in a capillary, especially the ultra-low EOF rates in coated capillaries. In this two-step method, the EOF rates are calculated from the difference in migration time of a neutral marker in two consecutive experiments, in which a pressure drive is introduced to accelerate the migration and the DC voltage is reversed to switch the EOF direction. Uncoated capillaries were first characterized by both the two-step method and a conventional method to confirm the validity of the new method. The new method was then applied to the study of coated capillaries. Results show that the new method is not only faster but also more accurate. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
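
    The kinematics behind the two-run calculation can be sketched as follows; the numbers are invented for illustration, and the sign convention assumes the EOF aids the pressure-driven flow in run 1 and opposes it in run 2:

        def eof_velocity(L_eff, t_run1, t_run2):
            """Two-run EOF estimate for a neutral marker pushed by the
            same pressure in both runs, with the DC voltage reversed in
            run 2: v1 = v_p + v_eof and v2 = v_p - v_eof, so
            v_eof = (v1 - v2) / 2."""
            v1 = L_eff / t_run1  # apparent velocity, run 1 (m/s)
            v2 = L_eff / t_run2  # apparent velocity, run 2 (m/s)
            return (v1 - v2) / 2.0

        # e.g. 0.40 m effective length, 120 s vs 180 s migration times:
        print(eof_velocity(0.40, 120.0, 180.0))  # ~5.6e-4 m/s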

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frayce, D.; Khayat, R.E.; Derdouri, A.

    The dual reciprocity boundary element method (DRBEM) is implemented to solve three-dimensional transient heat conduction problems in the presence of arbitrary sources, typically as these problems arise in materials processing. The DRBEM has a major advantage over conventional BEM, since it avoids the computation of volume integrals. These integrals stem from transient, nonlinear, and/or source terms. Thus there is no need to discretize the inner domain, since only a number of internal points are needed for the computation. The validity of the method is assessed upon comparison with results from benchmark problems where analytical solutions exist. There is generally good agreement. Comparison against finite element results is also favorable. Calculations are carried out in order to assess the influence of the number and location of internal nodes. The influence of the ratio of the numbers of internal to boundary nodes is also examined.

  15. The first radiocarbon data of bone remains of mammoth faunal forms in northwestern Russia

    NASA Astrophysics Data System (ADS)

    Nikonov, A. A.; van der Plicht, J.

    2010-05-01

    Unlike in the neighboring territories, the distribution and the period of habitation of late Pleistocene mammoth complex animals in the northwestern area of Russia had not been studied until recently. This article fills this gap using bone material from the Zoological Institute of the Russian Academy of Sciences and the collections of one of the authors. Samples of 14 bones and teeth of large mammals uncovered at different sites in the region were dated. The dates obtained by the conventional 14C method and the AMS method agree with each other and make it possible to identify two periods of habitation of mammoth complex animals in the region: 39,000-23,000 years ago and 13,000-9,800 years ago, which confirms that ice-free landscapes existed here during these time intervals.

  16. Armored Enzyme Nanoparticles for Remediation of Subsurface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grate, Jay W.

    2005-09-01

    The remediation of subsurface contaminants is a critical problem for the Department of Energy, other government agencies, and our nation. Severe contamination of soil and groundwater exists at several DOE sites due to various methods of intentional and unintentional release. Given the difficulties involved in conventional removal or separation processes, it is vital to develop methods to transform contaminants and contaminated earth/water to reduce risks to human health and the environment. Transformation of the contaminants themselves may involve conversion to other immobile species that do not migrate into well water or surface waters, as is proposed for metals and radionuclides; or degradation to harmless molecules, as is desired for organic contaminants. Transformation of contaminated earth (as opposed to the contaminants themselves) may entail reductions in volume or release of bound contaminants for remediation.

  17. [Use of adsorption methods for plasma component apheresis].

    PubMed

    Bang, B; Heegaard, N H

    1991-11-25

    Plasma apheresis is a nonspecific and wasteful intervention requiring the use of potentially infectious and expensive replacement fluids. Selective removal of the unwanted plasma component circumvents most of these problems. For the selective binding and removal of plasma components, adsorption methods based on the principles of affinity chromatography have been useful. The ideal adsorption column does not yet exist, but the number of clinical applications is increasing. The results vary, but the treatment has been used successfully in hypercholesterolemia, in hemophilia patients with anti-factor antibodies, and in patients with antibodies directed against HLA antigens who are awaiting renal transplantation. In conclusion, selective plasma component apheresis is an improvement over conventional plasma apheresis in some diseases. The technique is still being improved, but large clinical trials examining the effects of plasma component apheresis have not yet been published.

  18. Application of the UTCHEM simulator to DNAPL site characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, G.W.

    1995-12-31

    Numerical simulation using the University of Texas Chemical Flood Simulator (UTCHEM) was used to evaluate two dense, nonaqueous phase liquid (DNAPL) characterization methods. The methods involved the use of surfactants and partitioning tracers to characterize a suspected trichloroethene (TCE) DNAPL zone beneath a US Air Force Plant in Texas. The simulations were performed using a cross-sectional model of the alluvial aquifer in an area that is believed to contain residual TCE at the base of the aquifer. Characterization simulations compared standard groundwater sampling, an interwell NAPL Solubilization Test, and an interwell NAPL Partitioning Tracer Test. The UTCHEM simulations illustrated how surfactants and partitioning tracers can be used to give definite evidence of the presence and volume of DNAPL in a situation where conventional groundwater sampling can only indicate the existence of the dissolved contaminant plume.

  19. 76 FR 45907 - Implementation of the Amendments to the International Convention on Standards of Training...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-01

    ...The Coast Guard proposes to amend the existing regulations that implement the International Convention on Standards of Training, Certification and Watchkeeping for Seafarers, 1978, as amended (STCW Convention), as well as the Seafarer's Training, Certification and Watchkeeping Code (STCW Code). The changes proposed in this Supplemental Notice of Proposed Rulemaking (SNPRM) address the comments received from the public response to the Notice of Proposed Rulemaking (NPRM), in most cases through revisions based on those comments, and propose to incorporate the 2010 amendments to the STCW Convention that will come into force on January 1, 2012. In addition, this SNPRM proposes to make other non-STCW changes necessary to reorganize, clarify, and update these regulations.

  20. An in vitro comparison of photogrammetric and conventional complete-arch implant impression techniques.

    PubMed

    Bergin, Junping Ma; Rubenstein, Jeffrey E; Mancl, Lloyd; Brudvik, James S; Raigrodski, Ariel J

    2013-10-01

    Conventional impression techniques for recording the location and orientation of implant-supported, complete-arch prostheses are time consuming and prone to error. The direct optical recording of the location and orientation of implants, without the need for intermediate transfer steps, could reduce or eliminate those disadvantages. The objective of this study was to assess the feasibility of using a photogrammetric technique to record the location and orientation of multiple implants and to compare the results with those of a conventional complete-arch impression technique. A stone cast of an edentulous mandibular arch containing 5 implant analogs was fabricated to create a master model. The 3-dimensional (3D) spatial orientations of implant analogs on the master model were measured with a coordinate measuring machine (CMM) (control). Five definitive casts were made from the master model with a splinted impression technique. The positions of the implant analogs on the 5 casts were measured with a NobelProcera scanner (conventional method). Prototype optical targets were attached to the master model implant analogs, and 5 sets of images were recorded with a digital camera and a standardized image capture protocol. Dimensional data were imported into commercially available photogrammetry software (photogrammetric method). The precision and accuracy of the 2 methods were compared with a 2-sample t test (α=.05) and a 95% confidence interval. The location precision (standard error of measurement) for CMM was 3.9 µm (95% CI 2.7 to 7.1), for photogrammetry, 5.6 µm (95% CI 3.4 to 16.1), and for the conventional method, 17.2 µm (95% CI 10.3 to 49.4). The average measurement error was 26.2 µm (95% CI 15.9 to 36.6) for the conventional method and 28.8 µm (95% CI 24.8 to 32.9) for the photogrammetric method. The overall measurement accuracy was not significantly different when comparing the conventional to the photogrammetric method (mean difference = -2.6 µm, 95% CI -12.8 to 7.6). The precision of the photogrammetric method was similar to CMM, but lower for the conventional method as compared to CMM and the photogrammetric method. However, the overall measurement accuracy of the photogrammetric and conventional methods was similar. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  1. Multifeature-based high-resolution palmprint recognition.

    PubMed

    Dai, Jifeng; Zhou, Jie

    2011-05-01

    Palmprint is a promising biometric feature for use in access control and forensic applications. Previous research on palmprint recognition mainly concentrates on low-resolution (about 100 ppi) palmprints. But for high-security applications (e.g., forensic usage), high-resolution palmprints (500 ppi or higher) are required, from which more useful information can be extracted. In this paper, we propose a novel recognition algorithm for high-resolution palmprints. The main contributions of the proposed algorithm include the following: 1) Use of multiple features, namely, minutiae, density, orientation, and principal lines, for palmprint recognition, significantly improving the matching performance of the conventional algorithm. 2) Design of a quality-based and adaptive orientation field estimation algorithm which performs better than the existing algorithm in regions with a large number of creases. 3) Use of a novel fusion scheme for identification applications which performs better than conventional fusion methods, e.g., the weighted sum rule, SVMs, or the Neyman-Pearson rule. Besides, we analyze the discriminative power of different feature combinations and find that density is very useful for palmprint recognition. Experimental results on a database containing 14,576 full palmprints show that the proposed algorithm achieves good performance. In the verification experiment, the recognition system's False Rejection Rate (FRR) is 16 percent, which is 17 percent lower than that of the best existing algorithm at a False Acceptance Rate (FAR) of 10^-5, while in the identification experiment, the rank-1 live-scan partial palmprint recognition rate is improved from 82.0 to 91.7 percent.

  2. A Method for the Positioning and Orientation of Rail-Bound Vehicles in GNSS-Free Environments

    NASA Astrophysics Data System (ADS)

    Hung, R.; King, B. A.; Chen, W.

    2016-06-01

    Mobile Mapping Systems (MMS) are increasingly applied for spatial data collection in many fields because of their efficiency and the level of detail they can provide. The Position and Orientation System (POS), which is conventionally employed for locating and orienting an MMS, allows direct georeferencing of spatial data in real time. Since the performance of a POS depends on both the Inertial Navigation System (INS) and the Global Navigation Satellite System (GNSS), poor GNSS conditions, such as in long tunnels and underground, introduce the necessity for post-processing. In above-ground railways, mobile mapping technology is employed with high-performance sensors for finite usage, which has considerable potential for enhancing railway safety and management in real time. In contrast, underground railways present a challenge for a conventional POS, so alternative configurations are necessary to maintain data accuracy and alleviate the need for post-processing. This paper introduces a method of rail-bound navigation to replace the role of GNSS in railway applications. The proposed method integrates INS and track alignment data for environment-independent navigation and reduces the need for post-processing. The principle of rail-bound navigation is presented and its performance is verified by an experiment using a consumer-grade Inertial Measurement Unit (IMU) and a small-scale railway model. The method produced a substantial improvement in position and orientation for a poorly initialised system, achieving centimetre-level positional accuracy. The potential improvements indicated by, and the limitations of, rail-bound navigation are also considered for further development in existing railway systems.
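
    The core geometric idea of rail-bound navigation, constraining a drifting INS estimate to the known track alignment, can be sketched as a projection onto the nearest track segment. This is a simplified illustration of the concept only; the paper's full integration scheme is more involved, and the coordinates below are hypothetical.

    ```python
    # Minimal sketch: constrain an INS position estimate to the surveyed track
    # alignment by projecting it onto the nearest segment of the track polyline.
    import numpy as np

    def project_to_track(p: np.ndarray, track: np.ndarray) -> np.ndarray:
        """Return the closest point to p on the polyline defined by track vertices."""
        best, best_d = None, np.inf
        for a, b in zip(track[:-1], track[1:]):
            ab = b - a
            t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
            q = a + t * ab                     # foot of perpendicular, clamped to segment
            d = np.linalg.norm(p - q)
            if d < best_d:
                best, best_d = q, d
        return best

    track = np.array([[0.0, 0.0], [10.0, 0.5], [20.0, 2.0]])  # surveyed alignment (m)
    ins_fix = np.array([9.3, 1.4])                            # drifting INS estimate
    print(project_to_track(ins_fix, track))                   # track-constrained position
    ```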

  3. Parsimony and goodness-of-fit in multi-dimensional NMR inversion

    NASA Astrophysics Data System (ADS)

    Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos

    2017-01-01

    Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used to study the molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve high-dimensional measurement datasets with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inversion algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. Interpreting this variability of multiple solutions and selecting the most appropriate solution can be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce an easy-to-interpret and unique NMR distribution with a finite number of principal parameter values, we introduce a new method for NMR inversion. The method is constructed as a trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony, which guarantees an inversion with the fewest parameter values. We suggest performing the inversion of NMR data using a forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is based on the Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method, and its comparison with conventional methods, is illustrated using real data for samples with bitumen, water and clay.
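
    The forward stepwise selection under AIC described above can be sketched for a one-dimensional relaxation inversion: greedily add the candidate relaxation rate whose exponential component most reduces AIC, and stop when no candidate improves it. The kernel, candidate grid, and data below are illustrative; the paper's multi-dimensional formulation is more involved.

    ```python
    # Minimal sketch of forward stepwise regression with an AIC objective for a
    # 1-D exponential-decay inversion (illustrative, not the paper's algorithm).
    import numpy as np

    def aic(y, yhat, k):
        n = len(y)
        rss = np.sum((y - yhat) ** 2)
        return n * np.log(rss / n) + 2 * k   # least-squares form of AIC

    def forward_stepwise(t, y, candidate_rates):
        chosen, best_aic = [], np.inf
        while True:
            trial = None
            for r in candidate_rates:
                if r in chosen:
                    continue
                A = np.exp(-np.outer(t, chosen + [r]))        # decay basis matrix
                amp, *_ = np.linalg.lstsq(A, y, rcond=None)   # amplitudes (non-negativity omitted)
                score = aic(y, A @ amp, len(chosen) + 1)
                if score < best_aic:
                    trial, best_aic = r, score
            if trial is None:                                 # no candidate improves AIC: stop
                return chosen
            chosen.append(trial)

    t = np.linspace(0.01, 3, 200)
    y = 1.0 * np.exp(-2.0 * t) + 0.5 * np.exp(-0.3 * t) + 0.01 * np.random.randn(len(t))
    print(forward_stepwise(t, y, candidate_rates=np.logspace(-1, 1, 30).tolist()))
    ```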

  4. Planar-focusing cathodes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewellen, J. W.; Noonan, J.; Accelerator Systems Division

    2005-01-01

    Conventional π-mode rf photoinjectors typically use magnetic solenoids for emittance compensation. This provides independent focusing strength but can complicate rf power feed placement, introduce asymmetries (due to coil crossovers), and greatly increase the cost of the photoinjector. Cathode-region focusing can also provide for a form of emittance compensation. Typically this method strongly couples focusing strength to the field gradient on the cathode, however, and usually requires altering the longitudinal position of the cathode to change the focusing. We propose a new method for achieving cathode-region variable-strength focusing for emittance compensation. The new method reduces the coupling to the gradient on the cathode and does not require a change in the longitudinal position of the cathode. Expected performance for an S-band system is similar to conventional solenoid-based designs. This paper presents the results of rf cavity and beam dynamics simulations of the new design. We have proposed a method for performing emittance compensation using a cathode-region focusing scheme. This technique allows the focusing strength to be adjusted somewhat independently of the on-axis field strength. Beam dynamics calculations indicate performance should be comparable to presently in-use emittance compensation schemes, with a simpler configuration and fewer possibilities for emittance degradation due to the focusing optics. There are several potential difficulties with this approach, including cathode material selection, cathode heating, and peak fields in the gun. We hope to begin experimenting with a cathode of this type in the near future, and several possibilities exist for reducing the peak gradients to more acceptable levels.

  5. Unravelling a Clinical Paradox - Why Does Bronchial Thermoplasty Work in Asthma?

    PubMed

    Donovan, Graham M; Elliot, John G; Green, Francis H Y; James, Alan L; Noble, Peter B

    2018-04-18

    Rationale: Bronchial thermoplasty is a relatively new but effective treatment for asthmatic subjects who do not respond to conventional therapy. While the favoured mechanism is ablation of the airway smooth muscle layer, because bronchial thermoplasty treats only a small number of central airways, there is ongoing debate regarding its precise method of action. Objectives: To elucidate the underlying method of action of bronchial thermoplasty. Methods: We employed a combination of extensive human lung specimens and novel computational methods. Whole left lungs were acquired from the Prairie Provinces Fatal Asthma Study. Subjects were classified as control (N=31), non-fatal asthma (N=32), or fatal asthma (N=25). Simulated lungs for each group were constructed stochastically, and flow distributions and functional indicators were quantified both before and after a 70% reduction in airway smooth muscle in the thermoplasty-treated airways. Main Results: Bronchial thermoplasty triggers a global redistribution of clustered flow patterns, wherein structural changes to the treated central airways lead to a re-opening cascade in the small airways and significant improvement in lung function via reduced spatial heterogeneity of flow patterns. This mechanism accounts for the progressively greater efficacy of thermoplasty with both severity of asthma and degree of muscle activation, consistent with existing empirical results. Conclusions: We report a probable mechanism of action for bronchial thermoplasty: alteration of lung-wide flow patterns in response to structural alteration of the treated central airways. This insight could lead to improved therapy via patient-specific, tailored versions of the treatment, as well as implications for more conventional asthma therapies.

  6. Theological Higher Education in Cuba. Part 4: The Historical Roots and Milestones of the Eastern Cuba Baptist Theological Seminary

    ERIC Educational Resources Information Center

    Esqueda, Octavio J.

    2007-01-01

    This article presents an overview of the Eastern Cuba Baptist Theological Seminary. The seminary was founded in the city of Santiago de Cuba, on October 10, 1949, by the Eastern Baptist Convention. The seminary exists to provide training for pastors in the Eastern Baptist Convention. The school offers a four-year program leading to a bachelor in…

  7. Understanding the Potential Content and Structure of an International Convention on the Human Rights of People with Disabilities: Sample Treaty Provisions Drawn from Existing International Instruments. A Reference Tool.

    ERIC Educational Resources Information Center

    Lord, Janet E.

    This document is designed to prepare advocates in the international disability community for productive participation in the development of international conventions on the human rights of people with disabilities. Knowledge of the standard categories of international law provisions will help participants address issues related to the structure of…

  8. 4onse: four times open & non-conventional technology for sensing the environment

    NASA Astrophysics Data System (ADS)

    Cannata, Massimiliano; Ratnayake, Rangageewa; Antonovic, Milan; Strigaro, Daniele; Cardoso, Mirko; Hoffmann, Marcus

    2017-04-01

    The availability of complete, good-quality and dense hydro-meteorological monitoring data is essential to address a number of practical issues including, but not limited to, flood-water and urban drainage management, climate change impact assessment, early warning and risk management, nowcasting and weather prediction. Thanks to recent technological advances such as the Internet of Things, Big Data and ubiquitous Internet, non-conventional monitoring systems based on open technologies and low-cost sensors may represent a great opportunity, either as a complement to authoritative monitoring networks or as a vital source of information wherever existing monitoring networks are in decline or completely missing. Nevertheless, the scientific literature on such open, non-conventional monitoring systems is still limited and often relates to prototype engineering and testing in rather limited case studies. For this reason the 4onse project aims at integrating existing open technologies in the fields of Free & Open Source Software, Open Hardware, Open Data, and Open Standards, and at evaluating this kind of system in a real case (about 30 stations) over a medium period of 2 years, to better understand, scientifically, its strengths, weaknesses and applicability in terms of data quality, system durability, management costs, performance and sustainability. The ultimate objective is to promote the adoption of non-conventional monitoring systems based on the four open technologies.

  9. A simple, less invasive stripper micropipetter-based technique for day 3 embryo biopsy.

    PubMed

    Cedillo, Luciano; Ocampo-Bárcenas, Azucena; Maldonado, Israel; Valdez-Morales, Francisco J; Camargo, Felipe; López-Bayghen, Esther

    2016-01-01

    Preimplantation genetic screening (PGS) is an important procedure for in vitro fertilization (IVF). A key step of PGS, blastomere removal, is fraught with technical issues. The aim of this study was to compare a simpler procedure based on the Stripper Micropipetter, named S-biopsy, to the conventional aspiration method. On Day 3, 368 high-quality embryos (>7 cells on Day 3 with <10% fragmentation) were collected from 38 women. For each patient, the embryos were divided equally between the conventional method (n = 188) and the S-biopsy method (n = 180). The conventional method was performed using a standardized protocol. For the S-biopsy method, a laser was used to remove a significantly smaller portion of the zona pellucida. Afterwards, the complete embryo was aspirated with the Stripper Micropipetter, forcing the removal of the blastomere. Selected blastomeres underwent PGS using CGH microarrays. Embryo integrity and blastocyst formation were assessed on Day 5. Differences between groups were assessed with either the Mann-Whitney test or Fisher's exact test. Both methods resulted in the removal of only one blastomere. The S-biopsy and conventional methods did not differ in terms of embryo integrity (95.0% vs. 95.7%) or blastocyst formation (72.7% vs. 70.7%). PGS analysis indicated that aneuploidy rates were similar between the two methods (63.1% vs. 65.2%). However, the time required to perform the S-biopsy method (179.2 ± 17.5 s) was significantly (5-fold) shorter than that for the conventional method. The S-biopsy method is therefore comparable to the conventional method used to remove a blastomere for PGS but requires less time, and its simplicity makes it well suited to IVF laboratories.
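
    The Fisher exact comparison of proportions described above can be sketched as follows. The counts are reconstructed approximately from the reported group sizes and percentages and should be treated as illustrative, not as the study's raw data.

    ```python
    # Minimal sketch of a Fisher exact test on blastocyst-formation proportions,
    # as in the abstract above (counts reconstructed from percentages, approximate).
    from scipy.stats import fisher_exact

    #                formed  not formed
    s_biopsy     = [131, 49]   # ~72.7% of 180 (approximate reconstruction)
    conventional = [133, 55]   # ~70.7% of 188 (approximate reconstruction)

    odds, p = fisher_exact([s_biopsy, conventional])
    print(f"odds ratio = {odds:.2f}, p = {p:.3f}")  # p >= .05 -> no group difference
    ```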

  10. Integrative modeling and novel particle swarm-based optimal design of wind farms

    NASA Astrophysics Data System (ADS)

    Chowdhury, Souma

    To meet the energy needs of the future, while seeking to decrease our carbon footprint, a greater penetration of sustainable energy resources such as wind energy is necessary. However, a consistent growth of wind energy (especially in the wake of unfortunate policy changes and reported under-performance of existing projects) calls for a paradigm shift in wind power generation technologies. This dissertation develops a comprehensive methodology to explore, analyze and define the interactions between the key elements of wind farm development, and establish the foundation for designing high-performing wind farms. The primary contribution of this research is the effective quantification of the complex combined influence of wind turbine features, turbine placement, farm-land configuration, nameplate capacity, and wind resource variations on the energy output of the wind farm. A new Particle Swarm Optimization (PSO) algorithm, uniquely capable of preserving population diversity while addressing discrete variables, is also developed to provide powerful solutions towards optimizing wind farm configurations. In conventional wind farm design, the major elements that influence the farm performance are often addressed individually. The failure to fully capture the critical interactions among these factors introduces important inaccuracies in the projected farm performance and leads to suboptimal wind farm planning. In this dissertation, we develop the Unrestricted Wind Farm Layout Optimization (UWFLO) methodology to model and optimize the performance of wind farms. The UWFLO method obviates traditional assumptions regarding (i) turbine placement, (ii) turbine-wind flow interactions, (iii) variation of wind conditions, and (iv) types of turbines (single/multiple) to be installed. The allowance of multiple turbines, which demands complex modeling, is rare in the existing literature. The UWFLO method also significantly advances the state of the art in wind farm optimization by allowing simultaneous optimization of the type and the location of the turbines. Layout optimization (using UWFLO) of a hypothetical 25-turbine commercial-scale wind farm provides a remarkable 4.4% increase in capacity factor compared to a conventional array layout. A further 2% increase in capacity factor is accomplished when the types of turbines are also optimally selected. The scope of turbine selection and placement however depends on the land configuration and the nameplate capacity of the farm. Such dependencies are not clearly defined in the existing literature. We develop response surface-based models, which implicitly employ UWFLO, to quantify and analyze the roles of these other crucial design factors in optimal wind farm planning. The wind pattern at a site can vary significantly from year to year, which is not adequately captured by conventional wind distribution models. The resulting ill-predictability of the annual distribution of wind conditions introduces significant uncertainties in the estimated energy output of the wind farm. A new method is developed to characterize these wind resource uncertainties and model the propagation of these uncertainties into the estimated farm output. The overall wind pattern/regime also varies from one region to another, which demands turbines with capabilities uniquely suited for different wind regimes. 
Using the UWFLO method, we model the performance potential of currently available turbines for different wind regimes, and quantify their feature-based expected market suitability. Such models can initiate an understanding of the product variation that current turbine manufacturers should pursue to adequately satisfy the needs of the naturally diverse wind energy market. The wind farm design problems formulated in this dissertation involve highly multimodal objective and constraint functions and a large number of continuous and discrete variables. An effective modification of the PSO algorithm is developed to address such challenging problems. Continuous search, as in conventional PSO, is implemented as the primary search strategy; discrete variables are then updated using a nearest-allowed-discrete-point criterion. Premature stagnation of particles due to loss of population diversity is one of the primary drawbacks of the basic PSO dynamics. A new measure of population diversity is formulated which, unlike existing metrics, captures both the overall spread and the distribution of particles in the variable space. This diversity metric is then used to apply (i) an adaptive repulsion away from the best global solution in the case of continuous variables, and (ii) a stochastic update of the discrete variables. The new PSO algorithm provides competitive performance compared to a popular genetic algorithm when applied to a comprehensive set of 98 mixed-integer nonlinear programming problems.
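
    The mixed-variable PSO mechanics described above (continuous velocity updates, a nearest-allowed-discrete-point snap for discrete variables, and diversity-driven repulsion from the global best) can be sketched as follows. The objective, bounds, coefficients, and diversity threshold are illustrative placeholders, not the dissertation's settings.

    ```python
    # Minimal sketch of a mixed-variable PSO with nearest-allowed-discrete-point
    # updates and a crude diversity-based repulsion from the global best.
    import numpy as np

    rng = np.random.default_rng(0)

    def nearest_allowed(x, allowed):
        return allowed[np.argmin(np.abs(allowed - x))]

    def pso(f, lo, hi, discrete_idx, allowed, n=30, iters=200):
        dim = len(lo)
        x = rng.uniform(lo, hi, (n, dim))
        v = np.zeros((n, dim))
        pbest, pval = x.copy(), np.array([f(p) for p in x])
        g = pbest[pval.argmin()].copy()
        for _ in range(iters):
            diversity = x.std(axis=0).mean() / (hi - lo).mean()  # crude spread metric
            repulse = -1.0 if diversity < 0.05 else 1.0          # flee gbest if collapsed
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + repulse * 1.5 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            for j in discrete_idx:                               # snap discrete variables
                x[:, j] = [nearest_allowed(xi, allowed[j]) for xi in x[:, j]]
            val = np.array([f(p) for p in x])
            improved = val < pval
            pbest[improved], pval[improved] = x[improved], val[improved]
            g = pbest[pval.argmin()].copy()
        return g, pval.min()

    # Toy mixed-integer problem: x0 continuous, x1 restricted to {0, 1, 2, 3}.
    f = lambda p: (p[0] - 1.3) ** 2 + (p[1] - 2) ** 2
    best, val = pso(f, np.array([-5.0, 0.0]), np.array([5.0, 3.0]),
                    discrete_idx=[1], allowed={1: np.array([0.0, 1.0, 2.0, 3.0])})
    print(best, val)
    ```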

  11. Demystifying the Enigma of Smoking – An Observational Comparative Study on Tobacco Smoking

    PubMed Central

    Nallakunta, Rajesh; Reddy, Sudhakara Reddy; Chennoju, Sai Kiran

    2016-01-01

    Introduction: Smoking is a hazardous habit that causes definite changes in the oral cavity, and the palatal mucosa is the first to be affected. The present study determines the palatal status in reverse smokers and conventional smokers. Aim: To study and compare the clinical, cytological and histopathological changes in the palatal mucosa of reverse and conventional smokers. Materials and Methods: The study sample was categorized into two groups. Group 1 comprised 20 subjects with the habit of reverse smoking and Group 2 comprised 20 subjects with the habit of conventional smoking. Initially, the clinical appearance of the palatal mucosa was recorded, followed by a cytological smear and biopsy of the involved area in all subjects. The findings were studied clinically, and the specimens were analysed cytologically and histopathologically and compared between the two groups. Results: The severity of clinical changes of the palatal mucosa among reverse smokers was statistically significant compared to that of conventional smokers. There was no statistically significant difference in cytological staging between the groups (p = 0.35). The histopathological changes in the two groups showed a significant difference (p = 0.02). A significant positive correlation was observed between the clinical appearance and the cytological and histopathological changes. Conclusion: Profoundly aggressive clinical changes were observed in Group 1 compared to Group 2. Severe dysplastic changes were detected in a few subjects on histopathological examination despite the absence of prominent clinical and cytological changes in the two groups. PMID:27190962

  12. Examining How Neighborhood Disadvantage Influences Trajectories of Adolescent Violence: A Look at Social Bonding and Psychological Distress

    PubMed Central

    Foshee, Vangie A.; Ennett, Susan T.

    2012-01-01

    Background To understand how neighborhoods influence the development of youth violence, we investigated intrapersonal mediators of the relationship between neighborhood disadvantage and youth violence trajectories between ages 11 and 18. The hypothesized mediators included indicators of social bonding (belief in conventional values, involvement in school activities, religious engagement, and commitment to traditional goals) and psychological distress. Methods The sample (N=5,118) was 50% female and 52% Caucasian. Data from a 5-wave panel study spanning ages 11 to 18 were analyzed using sex-stratified multilevel growth curves. Results Neighborhood disadvantage was associated with higher levels of violence perpetrated by girls, lower belief in conventional values for both girls and boys, less commitment to traditional goals by girls, and higher levels of psychological distress reported by girls. Sobel tests identified three significant mediators of the effects of neighborhood disadvantage on girls’ violence trajectories: belief in conventional values, commitment to traditional goals and psychological distress. The only significant mediator of the relationship between neighborhood disadvantage and boys’ violence trajectories was belief in conventional values. The effects of neighborhood disadvantage on violence trajectories were not fully mediated; in fact, results suggested suppression effects, or inconsistent mediation, may exist. Conclusions The results emphasize the importance of both contextual and intrapersonal attributes in understanding the development of violence among school-age youth. Early school-based and community-level prevention initiatives that promote social bonding and address mental health needs may help reduce the impact of youth violence, particularly for girls. PMID:22070508
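
    The Sobel tests cited above assess whether an indirect (mediated) effect a*b is significantly different from zero. A minimal sketch of the statistic follows; the coefficient values are hypothetical, not the study's estimates.

    ```python
    # Minimal sketch of the Sobel test for mediation: z is the indirect effect
    # a*b scaled by its approximate standard error (all inputs hypothetical).
    import math

    def sobel_z(a, se_a, b, se_b):
        """a: path X->M; b: path M->Y (controlling for X); se_*: standard errors."""
        return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

    z = sobel_z(a=-0.20, se_a=0.05, b=0.35, se_b=0.08)
    print(f"Sobel z = {z:.2f}")  # |z| > 1.96 suggests significant mediation at p < .05
    ```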

  13. The human urinary microbiome; bacterial DNA in voided urine of asymptomatic adults

    PubMed Central

    Lewis, Debbie A.; Brown, Richard; Williams, Jon; White, Paul; Jacobson, S. Kim; Marchesi, Julian R.; Drake, Marcus J.

    2013-01-01

    The urinary microbiome of healthy individuals and the way it alters with ageing have not been characterized and may influence disease processes. Conventional microbiological methods have limited scope to capture the full spectrum of urinary bacterial species. We studied the urinary microbiota of a population of healthy individuals, ranging from 26 to 90 years of age, by amplification of the 16S rRNA gene, with the resulting amplicons analyzed by 454 pyrosequencing. Mid-stream urine (MSU) was collected by the “clean-catch” method. Quantitative PCR of 16S rRNA genes in urine samples allowed relative enumeration of the bacterial loads. Analysis of the samples indicates that females had a more heterogeneous mix of bacterial genera compared to the male samples and generally had representative members of the phyla Actinobacteria and Bacteroidetes. Analysis of the data leads us to conclude that a “core” urinary microbiome could potentially exist when samples are grouped by age, with fluctuation in abundance between age groups. The study also revealed the age-specific genera Jonquetella, Parvimonas, Proteiniphilum, and Saccharofermentans. In conclusion, conventional microbiological methods are inadequate to fully identify around two-thirds of the bacteria identified in this study. Whilst this proof-of-principle study has limitations due to the sample size, the discoveries evident in this sample data are strongly suggestive that a larger study on the urinary microbiome should be encouraged and that the identification of specific genera at particular ages may be relevant to the pathogenesis of clinical conditions. PMID:23967406

  14. The human urinary microbiome; bacterial DNA in voided urine of asymptomatic adults.

    PubMed

    Lewis, Debbie A; Brown, Richard; Williams, Jon; White, Paul; Jacobson, S Kim; Marchesi, Julian R; Drake, Marcus J

    2013-01-01

    The urinary microbiome of healthy individuals and the way it alters with ageing have not been characterized and may influence disease processes. Conventional microbiological methods have limited scope to capture the full spectrum of urinary bacterial species. We studied the urinary microbiota of a population of healthy individuals, ranging from 26 to 90 years of age, by amplification of the 16S rRNA gene, with the resulting amplicons analyzed by 454 pyrosequencing. Mid-stream urine (MSU) was collected by the "clean-catch" method. Quantitative PCR of 16S rRNA genes in urine samples allowed relative enumeration of the bacterial loads. Analysis of the samples indicates that females had a more heterogeneous mix of bacterial genera compared to the male samples and generally had representative members of the phyla Actinobacteria and Bacteroidetes. Analysis of the data leads us to conclude that a "core" urinary microbiome could potentially exist when samples are grouped by age, with fluctuation in abundance between age groups. The study also revealed the age-specific genera Jonquetella, Parvimonas, Proteiniphilum, and Saccharofermentans. In conclusion, conventional microbiological methods are inadequate to fully identify around two-thirds of the bacteria identified in this study. Whilst this proof-of-principle study has limitations due to the sample size, the discoveries evident in this sample data are strongly suggestive that a larger study on the urinary microbiome should be encouraged and that the identification of specific genera at particular ages may be relevant to the pathogenesis of clinical conditions.

  15. Accumulation and Transfer of Cadmium, by Indica Rice Cultivars Fujian Province of China

    NASA Astrophysics Data System (ADS)

    James, B.; Wang, G.

    2016-12-01

    This study was designed to evaluate the cadmium (Cd) accumulating ability of different Indica rice varieties and to understand the differences in the soil-to-grain transfer factor. A total of 189 crop samples and 189 corresponding soil samples were collected for treatment and chemical analysis. Sixteen (16) Indica rice varieties were selected for this study. Our preliminary results showed significant differences (p<0.05) in the grain Cd concentrations of the varieties studied. A regression method was adopted to calculate the representative soil-to-grain transfer factor (TF0.1) of each cultivar. The Cd accumulating ability of the 16 cultivars varied greatly: Yi-xiang 2292 had the highest TFsoil-grain (2.91), which was 22 times higher than that of the lowest cultivar, Pei-za-tai-fen (0.13). However, no significant difference in TFsoil-grain was observed between conventional and hybrid cultivars. A further study was carried out to understand the transfer characteristics and Cd accumulating ability of four (4) selected cultivars (both hybrid and conventional Indica rice cultivars). The TFstem-grain revealed significant differences (p<0.05) among the selected varieties in the translocation of Cd through the stem, and Cd decreased in the pattern root>stem>leaf>grain in the four cultivars, except Te-you 009, which showed similar Cd content in root and stem. Among the hybrid cultivars, Yi-you 673 accumulated the most Cd in root, stem, leaf and grain, while Te-you 009 accumulated the least Cd in root, whereas the conventional cultivar Jia-fu-zhan accumulated the least Cd in leaf and grain. Our findings also revealed that Cd concentrations in rice grains were most significantly correlated with Cd in the stem, followed by the leaf, which suggests that transfer from stem and leaf to grain may be the determinant step for Cd accumulation in the grains.

  16. Localization microscopy of DNA in situ using Vybrant{sup ®} DyeCycle™ Violet fluorescent probe: A new approach to study nuclear nanostructure at single molecule resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Żurek-Biesiada, Dominika; Szczurek, Aleksander T.; Prakash, Kirti

    Higher order chromatin structure is not only required to compact and spatially arrange long chromatids within a nucleus, but also has important functional roles, including control of gene expression and DNA processing. However, studies of chromatin nanostructures cannot be performed using conventional widefield and confocal microscopy because of the limited optical resolution. Various methods of superresolution microscopy have been described to overcome this difficulty, such as structured illumination and single molecule localization microscopy. We report here that the standard DNA dye Vybrant® DyeCycle™ Violet can be used to provide single molecule localization microscopy (SMLM) images of DNA in nuclei of fixed mammalian cells. This SMLM method enabled optical isolation and localization of large numbers of DNA-bound molecules, usually in excess of 10^6 signals in one cell nucleus. The technique yielded high-quality images of nuclear DNA density, revealing subdiffraction chromatin structures on the order of 100 nm in size; the interchromatin compartment was visualized at unprecedented optical resolution. The approach offers several advantages over previously described high resolution DNA imaging methods, including high specificity, the ability to record images using a single excitation wavelength, and a higher density of single molecule signals than reported in previous SMLM studies. The method is compatible with DNA/multicolor SMLM imaging, which employs simple staining methods suited also for conventional optical microscopy. - Highlights: • Super-resolution imaging of nuclear DNA with Vybrant Violet and blue excitation. • 90 nm resolution images of DNA structures in optically thick eukaryotic nuclei. • Enhanced resolution confirms the existence of DNA-free regions inside the nucleus. • Optimized imaging conditions enable multicolor super-resolution imaging.

  17. Application of a Novel DCPD Adjustment Method for the J-R Curve Characterization: A study based on ORNL and ASTM Interlaboratory Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xiang; Sokolov, Mikhail A; Nanstad, Randy K

    Material fracture toughness in the fully ductile region can be described by a J-integral vs. crack growth resistance curve (J-R curve). The elastic unloading compliance (EUC) method, a conventional J-R curve measurement method, becomes impractical for elevated-temperature testing due to relaxation of the material and a friction-induced back-up shape of the J-R curve. One alternative approach to J-R curve testing applies the Direct Current Potential Drop (DCPD) technique for measuring crack extension. However, besides crack growth, the potential drop can also be influenced by plastic deformation, crack tip blunting, etc., and uncertainties exist in the current DCPD methodology, especially in differentiating potential drop due to stable crack growth from that due to material deformation. Thus, using DCPD for J-R curve determination remains a challenging task. In this study, a new adjustment procedure for applying DCPD to derive the J-R curve has been developed for conventional fracture toughness specimens, including compact tension, three-point bend, and disk-shaped compact specimens. Data analysis was performed on Oak Ridge National Laboratory (ORNL) and American Society for Testing and Materials (ASTM) interlaboratory results covering different specimen thicknesses, test temperatures, and materials, to evaluate the applicability of the new DCPD adjustment procedure for J-R curve characterization. After applying the newly developed procedure, direct comparison between the DCPD method and the normalization method on the same specimens indicated close agreement for the overall J-R curves, as well as for the provisional values of fracture toughness near the onset of ductile crack extension, Jq, and of the tearing modulus.

  18. Least-squares reverse time migration in elastic media

    NASA Astrophysics Data System (ADS)

    Ren, Zhiming; Liu, Yang; Sen, Mrinal K.

    2017-02-01

    Elastic reverse time migration (RTM) can yield accurate subsurface information (e.g. PP and PS reflectivity) by imaging multicomponent seismic data. However, the existing RTM methods are still insufficient to provide satisfactory results because of the finite recording aperture, limited bandwidth and imperfect illumination. In addition, P- and S-wave separation and polarity reversal correction are indispensable in conventional elastic RTM. Here, we propose an iterative elastic least-squares RTM (LSRTM) method, in which the imaging accuracy is improved gradually with iteration. We first use the Born approximation to formulate the elastic de-migration operator, and employ the Lagrange multiplier method to derive the adjoint equations and gradients with respect to reflectivity. Then, an efficient inversion workflow (only four forward computations needed in each iteration) is introduced to update the reflectivity. Synthetic and field data examples reveal that the proposed LSRTM method can obtain higher-quality images than conventional elastic RTM. We also analyse the influence of model parametrizations and misfit functions in elastic LSRTM. We observe that Lamé-parameter, velocity and impedance parametrizations give similar and plausible migration results when the structures of the different models are correlated. For an uncorrelated subsurface model, velocity and impedance parametrizations produce fewer artefacts caused by parameter crosstalk than the Lamé-coefficient parametrization. Correlation- and convolution-type misfit functions are effective when amplitude errors are involved and when the source wavelet is unknown, respectively. Finally, we discuss the dependence of elastic LSRTM on migration velocities and its noise resistance. Imaging results demonstrate that the new elastic LSRTM method performs well as long as the low-frequency components of the migration velocities are correct, and that image quality degrades with increasing noise.

  19. Novel Method for Superposing 3D Digital Models for Monitoring Orthodontic Tooth Movement.

    PubMed

    Schmidt, Falko; Kilic, Fatih; Piro, Neltje Emma; Geiger, Martin Eberhard; Lapatki, Bernd Georg

    2018-04-18

    Quantitative three-dimensional analysis of orthodontic tooth movement (OTM) is possible by superposition of digital jaw models made at different times during treatment. Conventional methods rely on surface alignment at palatal soft-tissue areas, which is applicable to the maxilla only. We introduce two novel numerical methods applicable to both the maxilla and the mandible. The OTM from the initial phase of multi-bracket appliance treatment of ten pairs of maxillary models was evaluated and compared with four conventional methods. The median range of deviation of OTM for three users was 13-72% smaller for the novel methods than for the conventional methods, indicating greater inter-observer agreement. Total tooth translation and rotation were significantly different (ANOVA, p < 0.01) between OTM determined by the two numerical and the four conventional methods. Directional decomposition of OTM from the novel methods showed clinically acceptable agreement with reference results, except for vertical translations (deviations of medians greater than 0.6 mm). The difference in vertical translational OTM can be explained by maxillary vertical growth during the observation period, which is additionally recorded by the conventional methods. The novel approaches are thus particularly suitable for evaluating pure treatment effects, because growth-related changes are ignored.

  20. The reliability and reproducibility of cephalometric measurements: a comparison of conventional and digital methods

    PubMed Central

    AlBarakati, SF; Kula, KS; Ghoneima, AA

    2012-01-01

    Objective: The aim of this study was to assess the reliability and reproducibility of angular and linear measurements of conventional and digital cephalometric methods. Methods: A total of 13 landmarks and 16 skeletal and dental parameters were defined and measured on pre-treatment cephalometric radiographs of 30 patients. The conventional and digital tracings and measurements were performed twice by the same examiner, with a 6 week interval between measurements. Reliability within each method was determined using Pearson's correlation coefficient (r²). Reproducibility between methods was assessed by paired t-test. The level of statistical significance was set at p < 0.05. Results: All measurements for each method had r² above 0.90 (strong correlation) except maxillary length, which had a correlation of 0.82 for conventional tracing. Significant differences between the two methods were observed in most angular and linear measurements, except for the ANB angle (p = 0.5), angle of convexity (p = 0.09), anterior cranial base (p = 0.3) and lower anterior facial height (p = 0.6). Conclusion: In general, both conventional and digital cephalometric analysis are highly reliable. Although the reproducibility of the two methods showed some statistically significant differences, most differences were not clinically significant. PMID:22184624
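
    The two analyses described above can be sketched directly: Pearson correlation for within-method repeatability and a paired t-test for between-method agreement. The measurement vectors below are hypothetical placeholders, not the study's data.

    ```python
    # Minimal sketch of the reliability (Pearson r^2) and reproducibility (paired
    # t-test) analyses described above, using hypothetical angle measurements.
    import numpy as np
    from scipy import stats

    conv_t1 = np.array([82.1, 79.5, 84.2, 80.8, 77.9])  # conventional, first tracing
    conv_t2 = np.array([82.4, 79.1, 84.0, 81.2, 78.3])  # conventional, repeat tracing
    digital = np.array([81.6, 78.8, 83.5, 80.1, 77.2])  # digital method, same cases

    r, _ = stats.pearsonr(conv_t1, conv_t2)
    print(f"reliability r^2 = {r**2:.2f}")              # within-method repeatability

    t, p = stats.ttest_rel(conv_t1, digital)
    print(f"paired t = {t:.2f}, p = {p:.3f}")           # between-method reproducibility
    ```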

  1. Histological changes induced by 15 F CO2 laser microprobe especially designed for root canal sterilization: an in-vivo study

    NASA Astrophysics Data System (ADS)

    Kesler, Gavriel; Koren, Rumelia; Gal, Rivka

    1998-04-01

    Until now, no suitable delivery fiber existed for CO2 laser endodontic irradiation in the apical region, where it is most difficult to eliminate the pulp tissue using conventional methods. To overcome this problem, we designed a microprobe that reaches closer to the apex, distributing the energy density over a smaller area of the root canal, thus favorably increasing the thermal effects. The 15 F CO2 microprobe is a flexible, hollow, metal fiber, 300 µm in diameter and 20 mm in length, coupled onto a handpiece, with the following radiation parameters: wavelength, 10.6 µm; pulse duration, 50 msec; energy per pulse, 0.25 J; energy density, 353.7 J/cm² per pulse; power on tissue, 5 W. The study was conducted on 30 vital maxillary or mandibular central, lateral, or premolar teeth destined for extraction due to periodontal problems. Twenty were experimentally treated with pulsed CO2 laser delivered by this newly developed fiber after conventional root canal preparation. The temperature measured at three points on the root surface during laser treatment did not exceed 38 degrees Celsius. Ten teeth represented the control group, in which only conventional root canal preparation was performed. Histological examination of the laser-treated teeth showed coagulation necrosis and vacuolization of the remaining pulp tissue in the root canal periphery. Primary and secondary dentin appeared normal in all cases treated with the 15 F CO2 laser. Gram stain and bacteriologic examination revealed complete sterilization. These results demonstrate the unique capabilities of this special microprobe in sterilization of the root canal, with no thermal damage to the surrounding tissue.

  2. Histological changes induced by CO2 laser microprobe specially designed for root canal sterilization: in vivo study.

    PubMed

    Kesler, G; Koren, R; Kesler, A; Hay, N; Gal, R

    1998-10-01

    Until now, no suitable delivery fiber has existed for CO2 laser endodontic radiation in the apical region, where it is most difficult to eliminate the pulp tissue using conventional methods. To overcome this problem, we have designed a microprobe that reaches closer to the apex, distributing the energy density to a smaller area of the root canal and thus favorably increasing the thermal effects. A CO2 laser microprobe coupled onto a special hand piece was attached to the delivery fiber of a Sharplan 15-F CO2 laser. The study was conducted on 30 vital maxillary or mandibular central, lateral, or premolar teeth destined for extraction due to periodontal problems. Twenty were experimentally treated with pulsed CO2 laser delivered by this newly developed fiber after conventional root canal preparation. Temperature measured at three points on the root surface during laser treatment did not exceed 38 degrees C. Ten teeth represented the control group, in which only root canal preparation was performed in the conventional method. Histological examination of the laser-treated teeth showed coagulation necrosis and vacuolization of the remaining pulp tissue in the root canal periphery. Primary and secondary dentin appeared normal in all cases treated with 15-F CO2 laser. Gram stain and bacteriologic examination revealed complete sterilization. These results demonstrate the unique capabilities of this special microprobe in sterilization of the root canal, with no thermal damage to the surrounding tissue. The combination of classical root canal preparation with CO2 laser irradiation using this special microprobe before closing the canal can drastically change the quality of root canal fillings.

  3. Describing three-class task performance: three-class linear discriminant analysis and three-class ROC analysis

    NASA Astrophysics Data System (ADS)

    He, Xin; Frey, Eric C.

    2007-03-01

    Binary ROC analysis has solid decision-theoretic foundations and a close relationship to linear discriminant analysis (LDA). In particular, for the case of Gaussian equal covariance input data, the area under the ROC curve (AUC) value has a direct relationship to the Hotelling trace. Many attempts have been made to extend binary classification methods to multi-class. For example, Fukunaga extended binary LDA to obtain multi-class LDA, which uses the multi-class Hotelling trace as a figure-of-merit, and we have previously developed a three-class ROC analysis method. This work explores the relationship between conventional multi-class LDA and three-class ROC analysis. First, we developed a linear observer, the three-class Hotelling observer (3-HO). For Gaussian equal covariance data, the 3-HO provides equivalent performance to the three-class ideal observer and, under less strict conditions, maximizes the signal-to-noise ratio for classification of all pairs of the three classes simultaneously. The 3-HO templates are not the eigenvectors obtained from multi-class LDA. Second, we show that the three-class Hotelling trace, which is the figure-of-merit in the conventional three-class extension of LDA, has significant limitations. Third, we demonstrate that, under certain conditions, there is a linear relationship between the eigenvectors obtained from multi-class LDA and the 3-HO templates. We conclude that the 3-HO, being based on decision theory, has advantages both in its decision-theoretic background and in the usefulness of its figure-of-merit. Additionally, there exists the possibility of interpreting the two linear features extracted by the conventional extension of LDA from a decision-theoretic point of view.
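
    For Gaussian equal-covariance data, a Hotelling-type linear observer uses one template per class pair, w = S⁻¹(μᵢ - μⱼ), applied to all three pairs simultaneously. The sketch below illustrates that construction with toy numbers; it is not the paper's full 3-HO derivation.

    ```python
    # Minimal sketch of pairwise Hotelling templates for a three-class problem
    # with Gaussian equal-covariance data (toy means and covariance).
    import numpy as np

    def hotelling_templates(means, cov):
        """means: list of 3 class means; returns templates for pairs (0,1), (0,2), (1,2)."""
        S_inv = np.linalg.inv(cov)
        pairs = [(0, 1), (0, 2), (1, 2)]
        return {(i, j): S_inv @ (means[i] - means[j]) for i, j in pairs}

    means = [np.array([0.0, 0.0]), np.array([1.0, 0.5]), np.array([0.5, 1.5])]
    cov = np.array([[1.0, 0.3], [0.3, 1.0]])
    templates = hotelling_templates(means, cov)
    x = np.array([0.8, 0.6])                      # test feature vector
    scores = {p: w @ x for p, w in templates.items()}
    print(scores)                                 # pairwise linear discriminant outputs
    ```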

  4. PROJECT HEAVEN: Preoperative Training in Virtual Reality

    PubMed Central

    Iamsakul, Kiratipath; Pavlovcik, Alexander V.; Calderon, Jesus I.; Sanderson, Lance M.

    2017-01-01

    A cephalosomatic anastomosis (CSA; also called HEAVEN: head anastomosis venture) has been proposed as an option for patients with neurological impairments, such as spinal cord injury (SCI), and terminal medical illnesses, for which medicine is currently powerless. Protocols to prepare a patient for life after CSA do not currently exist. However, methods used in conventional neurorehabilitation can be used as a reference for developing preparatory training. Studies on virtual reality (VR) technologies have documented VR's ability to enhance rehabilitation and improve the quality of recovery in patients with neurological disabilities. VR-augmented rehabilitation resulted in increased motivation towards performing functional training and improved the biopsychosocial state of patients. In addition, VR experiences coupled with haptic feedback promote neuroplasticity, resulting in the recovery of motor functions in neurologically-impaired individuals. To prepare the recipient psychologically for life after CSA, the development of VR experiences paired with haptic feedback is proposed. This proposal aims to innovate techniques in conventional neurorehabilitation to implement preoperative psychological training for the recipient of HEAVEN. The recipient's familiarity with body movements will prevent unexpected psychological reactions from occurring after the HEAVEN procedure. PMID:28540125

  5. PROJECT HEAVEN: Preoperative Training in Virtual Reality.

    PubMed

    Iamsakul, Kiratipath; Pavlovcik, Alexander V; Calderon, Jesus I; Sanderson, Lance M

    2017-01-01

    A cephalosomatic anastomosis (CSA; also called HEAVEN: head anastomosis venture) has been proposed as an option for patients with neurological impairments, such as spinal cord injury (SCI), and terminal medical illnesses, for which medicine is currently powerless. Protocols to prepare a patient for life after CSA do not currently exist. However, methods used in conventional neurorehabilitation can be used as a reference for developing preparatory training. Studies on virtual reality (VR) technologies have documented VR's ability to enhance rehabilitation and improve the quality of recovery in patients with neurological disabilities. VR-augmented rehabilitation resulted in increased motivation towards performing functional training and improved the biopsychosocial state of patients. In addition, VR experiences coupled with haptic feedback promote neuroplasticity, resulting in the recovery of motor functions in neurologically-impaired individuals. To prepare the recipient psychologically for life after CSA, the development of VR experiences paired with haptic feedback is proposed. This proposal aims to innovate techniques in conventional neurorehabilitation to implement preoperative psychological training for the recipient of HEAVEN. The recipient's familiarity with body movements will prevent unexpected psychological reactions from occurring after the HEAVEN procedure.

  6. Frosta: a new technology for making fast-melting tablets.

    PubMed

    Jeong, Seong Hoon; Fu, Yourong; Park, Kinam

    2005-11-01

    Fast-melting tablet (FMT) technology, one of the most innovative approaches in oral drug delivery, is a rapidly growing area of drug delivery. The initial success of FMT formulations led to the development of various technologies. These technologies, however, still have some limitations. Recently, a new technology called Frosta (Akina) was developed for making FMTs. The Frosta technology utilises the conventional wet granulation process and tablet press for cost-effective production of tablets. Frosta tablets are mechanically strong, with friability of < 1%, and are stable under accelerated stability conditions when packaged in a bottle container. They are robust enough to be packaged in multi-tablet vials. Conventional rotary tablet presses can be used for production, and no other special instruments are required. Thus, the cost of making FMTs is lower than that of other existing technologies. Depending on their size, Frosta tablets can melt in < 10 s after placement in the oral cavity, for easy swallowing. The Frosta technology is ideal for the wide application of FMTs to various drug and nutritional formulations.

  7. Fracture mechanics correlation of boron/aluminum coupons containing stress risers

    NASA Technical Reports Server (NTRS)

    Adsit, N. R.; Waszczak, J. P.

    1975-01-01

    The mechanical behavior of boron/aluminum near stress risers has been studied and reported. This effort was directed toward defining the tensile behavior of both unidirectional and (0/±45) boron/aluminum using linear elastic fracture mechanics (LEFM). The material used was 5.6-mil boron in 6061 aluminum, consolidated using conventional diffusion bonding techniques. Mechanical properties are reported for both unidirectional and (0/±45) boron/aluminum, which serve as control data for the fracture mechanics predictions. Three different flawed specimen types were studied. In each case the series of specimens remained geometrically similar to eliminate variations in finite size correction factors. The fracture data from these tests were reduced using two techniques. Both used conventional LEFM methods, but the existence of a characteristic flaw was assumed in one case and not the other. Both the data and the physical behavior of the specimens support the characteristic flaw hypothesis. Cracks were observed growing slowly in the (0/±45) laminates until a critical crack length was reached, at which time catastrophic failure occurred.

  8. Rank-based testing of equal survivorship based on cross-sectional survival data with or without prospective follow-up.

    PubMed

    Chan, Kwun Chuen Gary; Qin, Jing

    2015-10-01

    Existing linear rank statistics cannot be applied to cross-sectional survival data without follow-up, since all subjects are essentially censored. However, partial survival information is available from backward recurrence times, which are frequently collected in health surveys without prospective follow-up. Under length-biased sampling, a class of linear rank statistics is proposed based only on backward recurrence times, without any prospective follow-up. When follow-up data are available, the proposed rank statistic and a conventional rank statistic that utilizes follow-up information from the same sample are shown to be asymptotically independent. We discuss four ways to combine these two statistics when follow-up is present. Simulations show that all combined statistics have substantially improved power compared with conventional rank statistics, and a Mantel-Haenszel test performed best among the proposed statistics. The method is applied to a cross-sectional health survey without follow-up and a study of Alzheimer's disease with prospective follow-up. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Characterization of Electronic Materials HgZnSe and HgZnTe Using Innovative and Conventional Techniques

    NASA Technical Reports Server (NTRS)

    Tanton, George; Kesmodel, Roy; Burden, Judy; Su, Ching-Hua; Cobb, Sharon D.; Lehoczky, S. L.

    2000-01-01

    HgZnSe and HgZnTe are electronic materials of interest for potential IR detector and focal plane array applications due to their improved strength and compositional stability over HgCdTe, but they are difficult to grow on Earth and to fully characterize. Conventional contact methods of characterization, such as Hall and van der Pauw, although adequate for many situations, are typically labor intensive and not entirely suitable where only very small samples are available. To adequately characterize and compare properties of electronic materials grown in low Earth orbit with those grown on Earth, innovative techniques are needed that complement existing methods. This paper describes the implementation and test results of a unique non-contact method of characterizing uniformity, mobility, and carrier concentration, together with results from conventional methods applied to HgZnSe and HgZnTe. The innovative method has advantages over conventional contact methods since it circumvents problems of possible contamination from alloying electrical contacts to a sample and also has the capability to map a sample. Non-destructive mapping, the determination of the carrier concentration and mobility at each place on a sample, provides a means to quantitatively compare, at high spatial resolution, effects of microgravity on the electronic properties and uniformity of electronic materials grown in low-Earth orbit with Earth-grown materials. The mapping technique described here uses a 1 mm diameter polarized beam of radiation to probe the sample. Activation of a magnetic field, in which the sample is placed, causes the plane of polarization of the probe beam to rotate. This Faraday rotation is a function of the free carrier concentration and the band parameters of the material. Maps of carrier concentration, mobility, and transmission generated from measurements of the Faraday rotation angles over the temperature range from 300 K to 77 K will be presented. New information on band parameters, obtained by combining results from conventional Hall measurements of the free carrier concentration with Faraday rotation measurements, will also be presented. One example of how this type of information was derived is a model of Faraday rotation vs. wavelength for Hg(1-x)Zn(x)Se at a temperature of 300 K and x = 0.07, in which the plasma contribution, total Faraday rotation, and interband contribution to the Faraday rotation are designated as δ(p), FR(tot), and δ(i), respectively. Experimentally measured values of FR(tot) agree acceptably well with the model at the probe wavelength of 10.6 µm. The model shows that at the probe wavelength practically all the rotation is due to the plasma component, which can be expressed as $\delta_p = 2\pi e^3 N B L / (c^2 n m^{*2} \omega^2)$. In this equation, $\delta_p$ is the rotation angle due to the free carrier plasma, N is the free carrier concentration, B the magnetic field strength, L the thickness of the sample, n the index of refraction, ω the probe radiation frequency, c the speed of light, e the electron charge, and m* the effective mass. A measurement of N by conventional techniques, combined with a measurement of the Faraday rotation angle, allows m* to be accurately determined, since it is an inverse-square function.
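
    Rearranging the plasma-rotation expression above for the effective mass gives m* = sqrt(2π e³ N B L / (c² n ω² δp)). A minimal sketch follows, in Gaussian (CGS) units as the formula is written; the sample values are placeholders, not measured data.

    ```python
    # Minimal sketch: effective mass from a measured Faraday rotation angle plus
    # an independent (e.g., Hall) measurement of N, per the formula above.
    # Gaussian units throughout: e in esu, c in cm/s, N in cm^-3, B in gauss, L in cm.
    import math

    def effective_mass(delta_p, N, B, L, n, omega, e=4.803e-10, c=2.998e10):
        """Solve delta_p = 2*pi*e^3*N*B*L / (c^2*n*m*^2*omega^2) for m* (grams)."""
        return math.sqrt(2 * math.pi * e**3 * N * B * L / (c**2 * n * omega**2 * delta_p))

    omega = 2 * math.pi * c / 10.6e-4        # 10.6 µm probe, angular frequency (rad/s)
    m_star = effective_mass(delta_p=0.02, N=1e17, B=1e4, L=0.1, n=3.5, omega=omega)
    print(f"m* ~ {m_star:.3e} g")            # ~2e-28 g, i.e., a fraction of m_e
    ```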

  10. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

    Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method has been developed to improve the accuracy of TOF determination. An improved variable step size adaptive algorithm with a comprehensive step size control function is proposed. Meanwhile, a cubic spline fitting approach is employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. The simulation results demonstrate the performance advantage of the proposed TOF determination method over existing methods. Compared with the conventional fixed step size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, which makes it more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates of various thicknesses. They indicated that the proposed TOF determination method is more robust, even under low SNR conditions, and that the ultrasonic thickness measurement accuracy can be significantly improved.
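
    The sub-sample refinement idea above, using an interpolant to get below the sampling interval, can be sketched with cross-correlation plus cubic-spline peak interpolation. This does not reproduce the paper's variable-step adaptive algorithm; the signals below are synthetic.

    ```python
    # Minimal sketch: coarse TOF from cross-correlation, refined below the
    # sampling interval by cubic-spline interpolation of the correlation peak.
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import correlate

    fs = 50e6                                    # sampling rate (Hz)
    t = np.arange(0, 4e-6, 1 / fs)
    pulse = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 5e-7) / 2e-7) ** 2)
    delay_true = 1.23e-6                         # 61.5 samples: deliberately off-grid
    echo = np.interp(t - delay_true, t, pulse) + 0.05 * np.random.randn(len(t))

    xc = correlate(echo, pulse, mode="full")
    lags = np.arange(-len(t) + 1, len(t))
    k = xc.argmax()                              # coarse (integer-lag) peak
    spline = CubicSpline(lags[k - 3:k + 4], xc[k - 3:k + 4])
    fine = np.linspace(lags[k - 3], lags[k + 3], 10001)
    tof = fine[spline(fine).argmax()] / fs       # sub-sample TOF estimate
    print(f"estimated TOF = {tof*1e6:.4f} us (true {delay_true*1e6:.2f} us)")
    ```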

  11. Rhythmic entrainment source separation: Optimizing analyses of neural responses to rhythmic sensory stimulation.

    PubMed

    Cohen, Michael X; Gulbinaite, Rasa

    2017-02-15

    Steady-state evoked potentials (SSEPs) are rhythmic brain responses to rhythmic sensory stimulation, and are often used to study perceptual and attentional processes. We present a data analysis method for maximizing the signal-to-noise ratio of the narrow-band steady-state response in the frequency and time-frequency domains. The method, termed rhythmic entrainment source separation (RESS), is based on denoising source separation approaches that take advantage of the simultaneous but differential projection of neural activity to multiple electrodes or sensors. Our approach is a combination and extension of existing multivariate source separation methods. We demonstrate that RESS performs well on both simulated and empirical data, and outperforms conventional SSEP analysis methods based on selecting electrodes with the strongest SSEP response, as well as several other linear spatial filters. We also discuss the potential confound of overfitting, whereby the filter captures noise in absence of a signal. Matlab scripts are available to replicate and extend our simulations and methods. We conclude with some practical advice for optimizing SSEP data analyses and interpreting the results. Copyright © 2016 Elsevier Inc. All rights reserved.
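
    The denoising-source-separation family that RESS builds on can be illustrated with a generalized eigendecomposition: find channel weights that maximize narrow-band (signal) covariance relative to broadband (reference) covariance. The sketch below uses synthetic data and a crude FFT band-pass; it is a generic illustration, not the authors' full pipeline (their Matlab scripts implement that).

    ```python
    # Minimal sketch of a generalized-eigendecomposition spatial filter for a
    # steady-state response, in the spirit of RESS (synthetic 16-channel data).
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(1)
    fs, n_chan, n_samp = 500, 16, 10000
    t = np.arange(n_samp) / fs
    ssep = np.sin(2 * np.pi * 12 * t)                    # 12 Hz steady-state response
    mix = rng.standard_normal(n_chan)                    # projection to channels
    data = np.outer(mix, ssep) + rng.standard_normal((n_chan, n_samp))

    # Narrow-band covariance via a crude FFT band-pass around 12 Hz.
    spec = np.fft.rfft(data, axis=1)
    freqs = np.fft.rfftfreq(n_samp, 1 / fs)
    spec[:, (freqs < 11) | (freqs > 13)] = 0
    narrow = np.fft.irfft(spec, n=n_samp, axis=1)

    S = np.cov(narrow)                                   # signal covariance
    R = np.cov(data)                                     # broadband reference covariance
    evals, evecs = eigh(S, R)                            # generalized eigendecomposition
    w = evecs[:, -1]                                     # filter maximizing the S/R ratio
    component = w @ data                                 # single narrow-band time course
    print(component.shape)
    ```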

  12. Listening to light scattering in turbid media: quantitative optical scattering imaging using photoacoustic measurements with one-wavelength illumination

    NASA Astrophysics Data System (ADS)

    Yuan, Zhen; Li, Xiaoqi; Xi, Lei

    2014-06-01

    Biomedical photoacoustic tomography (PAT), as a potential imaging modality, can visualize tissue structure and function with high spatial resolution and excellent optical contrast. It is widely recognized that the ability to quantitatively image optical absorption and scattering coefficients from photoacoustic measurements is essential before PAT can become a powerful imaging modality. Existing quantitative PAT (qPAT), while successful, has focused on recovering the absorption coefficient only, by assuming the scattering coefficient to be constant. An effective method for photoacoustically recovering the optical scattering coefficient is presently not available. Here we propose and experimentally validate such a method for quantitative scattering coefficient imaging using photoacoustic data from one-wavelength illumination. The reconstruction method developed combines conventional PAT with the photon diffusion equation in a novel way to realize the recovery of the scattering coefficient. We demonstrate the method using various objects having scattering contrast only, or both absorption and scattering contrasts, embedded in turbid media. The listening-to-light-scattering method described will be able to provide high-resolution scattering imaging for various biomedical applications ranging from breast to brain imaging.
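
    For background, recovery schemes of this kind typically model light transport with the photon diffusion approximation. One standard continuous-wave form, stated here as general background rather than as the authors' exact formulation, is

        \nabla \cdot \left( D(\mathbf{r}) \, \nabla \Phi(\mathbf{r}) \right)
            - \mu_a(\mathbf{r}) \, \Phi(\mathbf{r}) = -S(\mathbf{r}),
        \qquad D = \frac{1}{3\,(\mu_a + \mu_s')},

    where \Phi is the photon fluence, \mu_a the absorption coefficient, \mu_s' the reduced scattering coefficient, and S the source term. The PAT-reconstructed absorbed energy density \mu_a \Phi couples the acoustic data to these optical parameters, providing the link that a scattering reconstruction can exploit.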

  13. Laser tracker orientation in confined space using on-board targets

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Kyle, Stephen; Lin, Jiarui; Yang, Linghui; Ren, Yu; Zhu, Jigui

    2016-08-01

    This paper presents a novel orientation method for two laser trackers using on-board targets attached to the tracker head and rotating with it. The technique extends an existing method developed for theodolite intersection systems which are now rarely used. This method requires only a very narrow space along the baseline between the instrument heads, in order to establish the orientation relationship. This has potential application in environments where space is restricted. The orientation parameters can be calculated by means of two-face reciprocal measurements to the on-board targets, and measurements to a common point close to the baseline. An accurate model is then applied which can be solved through nonlinear optimization. Experimental comparison has been made with the conventional orientation method, which is based on measurements to common intersection points located off the baseline. This requires more space and the comparison has demonstrated the feasibility of the more compact technique presented here. Physical setup and testing suggest that the method is practical. Uncertainties estimated by simulation indicate good performance in terms of measurement quality.

  14. Sparse subspace clustering for data with missing entries and high-rank matrix completion.

    PubMed

    Fan, Jicong; Chow, Tommy W S

    2017-09-01

    Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
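
    The alternation between sparse self-representation and entry refilling can be sketched compactly. The toy Python below uses an off-the-shelf lasso solver from scikit-learn; the paper's exact objective, which also penalizes matrix rank, is not reproduced, so treat this as a schematic of the loop only.

        import numpy as np
        from sklearn.linear_model import Lasso

        def sparse_repr_completion(X0, mask, n_iter=10, alpha=0.01):
            """X0: data matrix (features x samples), zeros at missing entries.
            mask: boolean array, True where entries are observed."""
            X = X0.copy()
            n = X.shape[1]
            for _ in range(n_iter):
                C = np.zeros((n, n))
                for j in range(n):   # sparse self-expression, no self-loops
                    idx = [i for i in range(n) if i != j]
                    reg = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
                    reg.fit(X[:, idx], X[:, j])
                    C[idx, j] = reg.coef_
                X_hat = X @ C             # reconstruction from the representation
                X[~mask] = X_hat[~mask]   # refill only the missing entries
            return X, C                   # C also feeds spectral clustering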

  15. Near-Net Forging Technology Demonstration Program

    NASA Technical Reports Server (NTRS)

    Hall, I. Keith

    1996-01-01

    Significant advantages in specific mechanical properties, when compared to conventional aluminum (Al) alloys, make aluminum-lithium (Al-Li) alloys attractive candidate materials for use in cryogenic propellant tanks and dry bay structures. However, the cost of Al-Li alloys is typically five times that of 2219 aluminum. If conventional fabrication processes are employed to fabricate launch vehicle structure, the material costs will restrict their utilization. In order to fully exploit the potential cost and performance benefits of Al-Li alloys, near-net manufacturing methods must be developed to offset or reduce raw material costs. Near-net forging is an advanced manufacturing method that uses elevated-temperature metal movement (forging) to fabricate a single-piece, near-net-shape structure. The process is termed 'near-net' because only a minimal amount of post-forge machining is required. The near-net forging process was developed to reduce the material scrap rate (buy-to-fly ratio) and the fabrication costs associated with conventional manufacturing methods. The goal for the near-net forging process, when mature, is an overall cost reduction of approximately 50 percent compared with conventional manufacturing options for structures fabricated from Al-Li alloys. This NASA Marshall Space Flight Center (MSFC) sponsored program has been part of a unique government/industry partnership, coordinated to develop and demonstrate near-net forging technology. The objective of this program was to demonstrate scale-up of the near-net forging process. This objective was successfully achieved by fabricating four integrally stiffened, 170-inch-diameter by 20-inch-tall, Al-Li alloy 2195 Y-ring adapters. Initially, two 2195 Al-Li ingots were converted and back extruded to produce four cylindrical blockers. Conventional ring rolling of the blockers produced ring preforms, which were then contour ring rolled to produce 'contour preforms'. All of the contour preforms on this first-of-a-kind effort were imperfect, and the ingot used to fabricate two of the preforms was of an earlier vintage. As lessons were learned throughout the program, the tooling and procedures evolved, and with them the preform quality. Two of the best contour preforms were near-net forged to produce a process-pathfinder Y-ring adapter and a 'mechanical properties pathfinder' Y-ring adapter. At this point, Lockheed Martin Astronautics elected to procure additional 2195 aluminum-lithium ingot of the latest vintage, produce two additional preforms, and substitute them for the imperfectly filled, older-vintage preforms already produced on this contract, although the existing preforms could have been used to fulfill the requirements of the contract.

  16. Mechanical property characterization of intraply hybrid composites

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Lark, R. F.; Sinclair, J. H.

    1979-01-01

    An investigation of the mechanical properties of intraply hybrids made from graphite fiber/epoxy matrix hybridized with secondary S-glass or Kevlar 49 fiber composites is presented. The specimen stress-strain behavior was determined, showing that mechanical properties of intraply hybrid composites can be measured with available methods such as the ten-degree off-axis test for intralaminar shear, and conventional tests for tensile, flexure, and Izod impact properties. The results also showed that combinations of high modulus graphite/S-glass/epoxy matrix composites exist which yield intraply hybrid laminates with the best 'balanced' properties, and that the translation efficiency of mechanical properties from the constituent composites to intraply hybrids may be assessed with a simple equation.

  17. Casuistry and its communitarian critics.

    PubMed

    Kuczewski, Mark G

    1994-06-01

    Communitarian critics have derided case-based reasoning for ignoring the need to arrive at a shared hierarchy of goods prior to case resolution. They claim that such a failure means that casuistry depends on either a naive metaphysical realism or an ethical conventionalism. Casuistry does embrace a certain unobjectionable moral realism and can require appeals to narrative histories, but despite this dependence on the surrounding culture, casuists possess a way to remain critical of society through the concept of practical wisdom and the use of a moral taxonomy. Therefore, casuistry's viability depends upon the existence and employment of this Aristotelian virtue. Furthermore, the casuistry that emerges is a sophisticated type of communitarianism rather than a free-standing method.

  18. Supercritical water oxidation for the destruction of toxic organic wastewaters: a review.

    PubMed

    Veriansyah, Bambang; Kim, Jae-Duck

    2007-01-01

    The destruction of toxic organic wastewaters from munitions demilitarization and complex industrial chemicals clearly becomes an overwhelming problem if left to conventional treatment processes. Two options, incineration and supercritical water oxidation (SCWO), exist for the complete destruction of toxic organic wastewaters. Incineration has associated problems such as very high cost and public resentment; on the other hand, SCWO has proved to be a very promising method for the treatment of many different wastewaters, with extremely efficient organic waste destruction (99.99%) and none of the emissions associated with incineration. In this review, the concepts of SCWO, results and present perspectives of its application, and the industrial status of SCWO are critically examined and discussed.

  19. [Risk factors for the spine: nursing assessment and care].

    PubMed

    Bringuente, M E; de Castro, I S; de Jesus, J C; Luciano, L dos S

    1997-01-01

    The present work aimed to study the risk factors affecting people with back pain, identifying them and implementing an intervention proposal for a health education program based on self-care teaching, an existential-humanist philosophical orientation, and a stress-equalization approach, including skeletal-muscle reintegration activities, basic stress-equalization techniques, and massage. It was developed for a population of 42 (forty-two) clients. Two instruments that form part of the nursing consultation protocol were used in data collection. The results showed the existence of associated risk factors that can be changed through health education programs. The assessment process contributed to the focusing of therapeutic measures, using non-conventional care methods within this approach, providing an improvement in these clients' quality of life.

  20. Removing environmental organic pollutants with bioremediation and phytoremediation.

    PubMed

    Kang, Jun Won

    2014-06-01

    Hazardous organic pollutants represent a threat to human, animal, and environmental health. If left unmanaged, these pollutants remain a persistent source of concern. Many researchers have stepped up efforts to find more sustainable and cost-effective alternatives to hazardous chemicals and treatments for removing existing harmful pollutants. Environmental biotechnology, such as bioremediation and phytoremediation, is a promising field that utilizes natural resources, including microbes and plants, to eliminate toxic organic contaminants. This technology offers an attractive alternative to other conventional remediation processes because of its relatively low cost and environmentally friendly approach. This review discusses current biological technologies for the removal of organic contaminants, including chlorinated hydrocarbons, focusing on their limitations and recent efforts to correct the drawbacks.

  1. Analysis of XFEL serial diffraction data from individual crystalline fibrils

    PubMed Central

    Wojtas, David H.; Ayyer, Kartik; Liang, Mengning; Mossou, Estelle; Romoli, Filippo; Seuring, Carolin; Beyerlein, Kenneth R.; Bean, Richard J.; Morgan, Andrew J.; Oberthuer, Dominik; Fleckenstein, Holger; Heymann, Michael; Gati, Cornelius; Yefanov, Oleksandr; Barthelmess, Miriam; Ornithopoulou, Eirini; Galli, Lorenzo; Xavier, P. Lourdu; Ling, Wai Li; Frank, Matthias; Yoon, Chun Hong; White, Thomas A.; Bajt, Saša; Mitraki, Anna; Boutet, Sebastien; Aquila, Andrew; Barty, Anton; Forsyth, V. Trevor; Chapman, Henry N.; Millane, Rick P.

    2017-01-01

    Serial diffraction data collected at the Linac Coherent Light Source from crystalline amyloid fibrils delivered in a liquid jet show that the fibrils are well oriented in the jet. At low fibril concentrations, diffraction patterns are recorded from single fibrils; these patterns are weak and contain only a few reflections. Methods are developed for determining the orientation of patterns in reciprocal space and merging them in three dimensions. This allows the individual structure amplitudes to be calculated, thus overcoming the limitations of orientation and cylindrical averaging in conventional fibre diffraction analysis. The advantages of this technique should allow structural studies of fibrous systems in biology that are inaccessible using existing techniques. PMID:29123682

  2. Retrofitting existing chemical scrubbers to biotrickling filters for H2S emission control

    PubMed Central

    Gabriel, David; Deshusses, Marc A.

    2003-01-01

    Biological treatment is a promising alternative to conventional air-pollution control methods, but thus far biotreatment processes for odor control have always required much larger reactor volumes than chemical scrubbers. We converted an existing full-scale chemical scrubber to a biological trickling filter and showed that effective treatment of hydrogen sulfide (H2S) in the converted scrubber was possible even at gas contact times as low as 1.6 s. That is 8–20 times shorter than previous biotrickling filtration reports and comparable to usual contact times in chemical scrubbers. Significant removal of reduced sulfur compounds, ammonia, and volatile organic compounds present in traces in the air was also observed. Continuous operation for >8 months showed stable performance and robust behavior for H2S treatment, with pollutant-removal performance comparable to that achieved by using a chemical scrubber. Our study demonstrates that biotrickling filters can replace chemical scrubbers and be a safer, more economical technique for odor control. PMID:12740445

  3. Retrofitting existing chemical scrubbers to biotrickling filters for H2S emission control.

    PubMed

    Gabriel, David; Deshusses, Marc A

    2003-05-27

    Biological treatment is a promising alternative to conventional air-pollution control methods, but thus far biotreatment processes for odor control have always required much larger reactor volumes than chemical scrubbers. We converted an existing full-scale chemical scrubber to a biological trickling filter and showed that effective treatment of hydrogen sulfide (H2S) in the converted scrubber was possible even at gas contact times as low as 1.6 s. That is 8-20 times shorter than previous biotrickling filtration reports and comparable to usual contact times in chemical scrubbers. Significant removal of reduced sulfur compounds, ammonia, and volatile organic compounds present in traces in the air was also observed. Continuous operation for >8 months showed stable performance and robust behavior for H2S treatment, with pollutant-removal performance comparable to that achieved by using a chemical scrubber. Our study demonstrates that biotrickling filters can replace chemical scrubbers and be a safer, more economical technique for odor control.

  4. Membrane filtration method for enumeration and isolation of Alicyclobacillus spp. from apple juice.

    PubMed

    Lee, S-Y; Chang, S-S; Shin, J-H; Kang, D-H

    2007-11-01

    To evaluate the applicability of filtration membranes for detecting Alicyclobacillus spp. spores in apple juice. Ten types of nitrocellulose membrane filters from five manufacturers were used to collect and enumerate five Alicyclobacillus spore isolates, and the results were compared to conventional K agar plating. Spore recovery differed among filters, with an average recovery rate of 126.2%. Recovery levels also differed among spore isolates. Although significant differences (P < 0.05) in spore size existed, no correlation could be determined between spore size and membrane filter recovery rate. Recovery of spores using membrane filtration depends on the manufacturer and filter pore size. Low numbers of Alicyclobacillus spores in juice can be effectively detected using membrane filtration, although recovery rates differ among manufacturers. Membrane filtration is a simple, fast alternative to the week-long enrichment procedures currently employed in most quality assurance tests.

  5. Impact of Interfacial Roughness on the Sorption Properties of Nanocast Polymers

    DOE PAGES

    Sridhar, Manasa; Gunugunuri, Krishna R.; Hu, Naiping; ...

    2016-03-16

    Nanocasting is an emerging method to prepare organic polymers with regular, nanometer pores using inorganic templates. This report assesses the impact of imperfect template replication on the sorption properties of such polymer castings. Existing X-ray diffraction data show that substantial diffuse scattering exists in the small-angle region even though TEM images show near perfect lattices of uniform pores. To assess the origin of the diffuse scattering, the morphology of the phenol-formaldehyde foams (PFF) was investigated by small-angle X-ray scattering (SAXS). The observed diffuse scattering is attributed to interfacial roughness due to fractal structures. Such roughness has a profound impact on the sorption properties. Conventional pore-filling models, for example, overestimate protein sorption capacity. A mathematical framework is presented to calculate sorption properties based on observed morphological parameters. The formalism uses the surface fractal dimension determined by SAXS in conjunction with nitrogen adsorption isotherms to predict lysozyme sorption. The results are consistent with measured lysozyme loading.

  6. Projected shell model study on nuclei near the N = Z line

    NASA Astrophysics Data System (ADS)

    Sun, Y.

    2003-04-01

    Study of the N ≈ Z nuclei in the mass-80 region is not only interesting due to the abundance of nuclear-structure phenomena, but also important for understanding nucleosynthesis in the rp-process. It is difficult to apply a conventional shell model here because the g9/2 sub-shell must be included. In this paper, the projected shell model is introduced for this study. Calculations are performed systematically for the collective levels as well as the quasi-particle excitations. It is demonstrated that calculations with this truncation scheme can achieve a quality comparable to large-scale shell-model diagonalizations for 48Cr, while the present method can be applied to much heavier mass regions. While the known experimental data for the yrast bands in the N ≈ Z nuclei (from Se to Ru) are reasonably described, the present calculations predict the existence of high-K states, some of which lie low in energy under certain structure conditions.

  7. Treatment of post-stroke dysphagia by vitalstim therapy coupled with conventional swallowing training.

    PubMed

    Xia, Wenguang; Zheng, Chanjuan; Lei, Qingtao; Tang, Zhouping; Hua, Qiang; Zhang, Yangpu; Zhu, Suiqiang

    2011-02-01

    To investigate the effects of VitalStim therapy coupled with conventional swallowing training on recovery from post-stroke dysphagia, a total of 120 patients with post-stroke dysphagia were randomly and evenly divided into three groups: a conventional swallowing therapy group, a VitalStim therapy group, and a VitalStim therapy plus conventional swallowing therapy group. Before and after treatment, surface electromyography (sEMG) signals of the swallowing muscles were recorded, swallowing function was evaluated using the Standardized Swallowing Assessment (SSA) and Videofluoroscopic Swallowing Study (VFSS) tests, and swallowing-related quality of life (SWAL-QOL) was evaluated using the SWAL-QOL questionnaire. There were significant differences in sEMG values and in SSA, VFSS, and SWAL-QOL scores in each group before versus after treatment. After 4 weeks of treatment, sEMG values and SSA, VFSS, and SWAL-QOL scores were significantly greater in the VitalStim therapy plus conventional swallowing training group than in the conventional swallowing training group and the VitalStim therapy group, but no significant difference existed between the conventional swallowing therapy group and the VitalStim therapy group. It was concluded that VitalStim therapy coupled with conventional swallowing training is conducive to recovery from post-stroke dysphagia.

  8. Comparison of Different Recruitment Methods for Sexual and Reproductive Health Research: Social Media-Based Versus Conventional Methods.

    PubMed

    Motoki, Yoko; Miyagi, Etsuko; Taguri, Masataka; Asai-Sato, Mikiko; Enomoto, Takayuki; Wark, John Dennis; Garland, Suzanne Marie

    2017-03-10

    Prior research about the sexual and reproductive health of young women has relied mostly on self-reported survey studies. Thus, participant recruitment using Web-based methods can improve sexual and reproductive health research about cervical cancer prevention. In our prior study, we reported that Facebook is a promising way to reach young women for sexual and reproductive health research. However, it remains unknown whether Web-based or other conventional recruitment methods (ie, face-to-face or flyer distribution) yield comparable survey responses from similar participants. We conducted a survey to determine whether there was a difference in the sexual and reproductive health survey responses of young Japanese women based on recruitment methods: social media-based and conventional methods. From July 2012 to March 2013 (9 months), we invited women of ages 16-35 years in Kanagawa, Japan, to complete a Web-based questionnaire. They were recruited through either a social media-based (social networking site, SNS, group) or by conventional methods (conventional group). All participants enrolled were required to fill out and submit their responses through a Web-based questionnaire about their sexual and reproductive health for cervical cancer prevention. Of the 243 participants, 52.3% (127/243) were recruited by SNS, whereas 47.7% (116/243) were recruited by conventional methods. We found no differences between recruitment methods in responses to behaviors and attitudes to sexual and reproductive health survey, although more participants from the conventional group (15%, 14/95) chose not to answer the age of first intercourse compared with those from the SNS group (5.2%, 6/116; P=.03). No differences were found between recruitment methods in the responses of young Japanese women to a Web-based sexual and reproductive health survey. ©Yoko Motoki, Etsuko Miyagi, Masataka Taguri, Mikiko Asai-Sato, Takayuki Enomoto, John Dennis Wark, Suzanne Marie Garland. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 10.03.2017.

  9. Dynamic time warping-based averaging framework for functional near-infrared spectroscopy brain imaging studies

    NASA Astrophysics Data System (ADS)

    Zhu, Li; Najafizadeh, Laleh

    2017-06-01

    We investigate the problem of the averaging procedure in functional near-infrared spectroscopy (fNIRS) brain imaging studies. Typically, to reduce noise and to enhance the signal strength associated with task-induced activities, recorded signals (e.g., in response to repeated stimuli or from a group of individuals) are averaged through a point-by-point conventional averaging technique. However, due to the existence of variable latencies in recorded activities, the use of the conventional averaging technique can lead to inaccuracies and loss of information in the averaged signal, which may result in inaccurate conclusions about the functionality of the brain. To improve the averaging accuracy in the presence of variable latencies, we present an averaging framework that employs dynamic time warping (DTW) to account for the temporal variation in the alignment of the fNIRS signals to be averaged. As a proof of concept, we focus on the problem of localizing task-induced active brain regions. The framework is extensively tested on experimental data (obtained from both block-design and event-related-design experiments) as well as on simulated data. In all cases, it is shown that the DTW-based averaging technique outperforms the conventional averaging technique in estimating the location of task-induced active regions in the brain, suggesting that such advanced averaging methods should be employed in fNIRS brain imaging studies.
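
    To make the idea concrete, the sketch below aligns each trial to a template with a textbook dynamic-programming DTW and averages the warped trials. The authors' framework is more elaborate, so treat this purely as an illustration of why latency-tolerant averaging preserves peaks that point-by-point averaging smears.

        import numpy as np

        def dtw_path(a, b):
            """Classic DTW alignment path between 1-D signals a and b."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    D[i, j] = (a[i-1] - b[j-1])**2 + min(
                        D[i-1, j], D[i, j-1], D[i-1, j-1])
            path, (i, j) = [], (n, m)
            while (i, j) != (0, 0):   # infinite boundary keeps moves valid
                path.append((i - 1, j - 1))
                i, j = min([(i-1, j), (i, j-1), (i-1, j-1)],
                           key=lambda t: D[t])
            return path[::-1]

        def dtw_average(trials):
            """Average equal-length 1-D trials after DTW warping."""
            ref = np.mean(trials, axis=0)   # conventional average as template
            warped = []
            for tr in trials:
                acc, cnt = np.zeros_like(ref), np.zeros_like(ref)
                for i, j in dtw_path(tr, ref):
                    acc[j] += tr[i]         # map trial samples to template time
                    cnt[j] += 1
                warped.append(acc / np.maximum(cnt, 1))
            return np.mean(warped, axis=0)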

  10. Converting a conventional wired-halogen illuminated indirect ophthalmoscope to a wireless-light emitting diode illuminated indirect ophthalmoscope in less than 1000/- rupees.

    PubMed

    Kothari, Mihir; Kothari, Kedar; Kadam, Sanjay; Mota, Poonam; Chipade, Snehal

    2015-01-01

    To report a 'do it yourself' method of converting an existing wired-halogen indirect ophthalmoscope (IO) to a wireless-light emitting diode (LED) IO and to report the preferences of patients and ophthalmologists. In this prospective observational study, a conventional IO was converted to a wireless-LED IO using easily available, affordable electrical components. The conventional and the converted IO were then used to perform a photo-stress test, and feedback was taken from the subjects and the ophthalmologists regarding handling and illumination characteristics. The cost of conversion to wireless-LED was 815/- rupees. Twenty-nine subjects with normal eyes, mean age 34.3 ± 10 years, were recruited in the study. Between the two illumination systems, there was no statistical difference in the magnitude of visual acuity loss or in the time to recovery of acuity and bleached vision on the photo-stress test, although visual recovery was clinically faster with LED illumination. Heat sensation was greater with halogen illumination than with the LED (P = 0.009). The ophthalmologists rated the wireless-LED IO higher than the wired-halogen IO on handling, examination comfort, the patient's visual comfort, and image quality. Twenty-two (81%) ophthalmologists wanted to change over to the wireless-LED IO. Converting to a wireless-LED IO is easy, cost-effective, and preferred over a wired-halogen indirect ophthalmoscope.

  11. Agricultural waste material as potential adsorbent for sequestering heavy metal ions from aqueous solutions - a review.

    PubMed

    Sud, Dhiraj; Mahajan, Garima; Kaur, M P

    2008-09-01

    Heavy metal remediation of aqueous streams is of special concern due to the recalcitrance and persistence of heavy metals in the environment. Conventional treatment technologies for the removal of these toxic heavy metals are not economical and further generate huge quantities of toxic chemical sludge. Biosorption is emerging as a potential alternative to the existing conventional technologies for the removal and/or recovery of metal ions from aqueous solutions. The major advantages of biosorption over conventional treatment methods include low cost, high efficiency, minimization of chemical or biological sludge, regenerability of biosorbents, and the possibility of metal recovery. Cellulosic agricultural waste materials are an abundant source for significant metal biosorption. The functional groups present in agricultural waste biomass, viz. acetamido, alcoholic, carbonyl, phenolic, amido, amino, and sulphydryl groups, have an affinity for heavy metal ions, forming metal complexes or chelates. The mechanisms of the biosorption process include chemisorption, complexation, surface adsorption, diffusion through pores, and ion exchange. The purpose of this review article is to bring together the scattered available information on various aspects of the utilization of agricultural waste materials for heavy metal removal. Agricultural waste material, being a highly efficient, low-cost, and renewable source of biomass, can be exploited for heavy metal remediation. Further, these biosorbents can be modified for better efficiency and multiple reuses to enhance their applicability at industrial scale.

  12. Comparisons of two methods of harvesting biomass for energy

    Treesearch

    W.F. Watson; B.J. Stokes; I.W. Savelle

    1986-01-01

    Two harvesting methods for utilization of understory biomass were tested against a conventional harvesting method to determine relative costs. The conventional harvesting method tested removed all pine 6 inches diameter at breast height (DBH) and larger and hardwood sawlogs as tree length logs. The two intensive harvesting methods were a one-pass and a two-pass method...

  13. A randomized controlled trial of the different impression methods for the complete denture fabrication: Patient reported outcomes.

    PubMed

    Jo, Ayami; Kanazawa, Manabu; Sato, Yusuke; Iwaki, Maiko; Akiba, Norihisa; Minakuchi, Shunsuke

    2015-08-01

    To compare the effect of conventional complete dentures (CDs) fabricated using two different impression methods on patient-reported outcomes in a randomized controlled trial (RCT). A cross-over RCT was performed with edentulous patients who required maxillomandibular CDs. Mandibular CDs were fabricated using two different methods. The conventional method used a custom tray border-moulded with impression compound and a silicone impression; the simplified method used a stock tray and an alginate impression. Participants were randomly divided into two groups. The C-S group had the conventional method used first, followed by the simplified; the S-C group was in the reverse order. Adjustment was performed four times, and a wash-out period of 1 month was set. The primary outcome was general patient satisfaction, measured using visual analogue scales, and the secondary outcome was oral health-related quality of life, measured using the Japanese version of the Oral Health Impact Profile for edentulous patients (OHIP-EDENT-J). Twenty-four participants completed the trial. With regard to general patient satisfaction, the conventional method was significantly more acceptable than the simplified one. No significant differences were observed between the two methods in the OHIP-EDENT-J scores. This study showed that CDs fabricated with the conventional method, which included a preliminary impression made using alginate in a stock tray and a final impression made using silicone in a border-moulded custom tray, were rated significantly higher for general patient satisfaction than those made with the simplified method. UMIN000009875. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Conceptual design of single turbofan engine powered light aircraft

    NASA Technical Reports Server (NTRS)

    Snyder, F. S.; Voorhees, C. G.; Heinrich, A. M.; Baisden, D. N.

    1977-01-01

    The conceptual design of a four-place, single turbofan engine powered light aircraft was accomplished utilizing contemporary light aircraft conventional design techniques as a means of evaluating the NASA-Ames General Aviation Synthesis Program (GASP) as a preliminary design tool. In certain areas, disagreements or omissions were found between the results of the conventional design process and GASP. A detailed discussion of these points, along with the associated contemporary design methodology, is presented.

  15. 47 CFR 90.609 - Special limitations on amendment of applications for assignment or transfer of authorizations for...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) [Reserved] (b) A license to operate a conventional or trunked radio system may not be assigned or... new system or to an existing licensee that has loaded its system to 70 mobiles per channel and is... combined with an existing SMR system above 800 MHz authorized to operate in the trunked mode by assignment...

  16. [Intellectual property rights in Costa Rica in the light of the Biodiversity Convention].

    PubMed

    Salazar, R; Cabrera, J A

    1996-04-01

    This report analyzes intellectual property rights and acquisition of biological samples in light of the Biological Diversity Convention, with emphasis on Costa Rica. It examines the legal framework which exists for the protection of biological resources in this country, especially evaluating the law regarding protection of biota, which was approved in 1992. This includes information regarding access to genetic resources, and regulation for the aforementioned law. It examines the Biological Diversity Convention which was signed at the Rio Summit in 1992, whose objectives and goals, above all, emphasize the subject of distribution of benefits to be derived from the utilization of biological resources.

  17. A new method to improve network topological similarity search: applied to fold recognition

    PubMed Central

    Lhota, John; Hauptman, Ruth; Hart, Thomas; Ng, Clara; Xie, Lei

    2015-01-01

    Motivation: Similarity search is the foundation of bioinformatics. It plays a key role in establishing structural, functional and evolutionary relationships between biological sequences. Although the power of the similarity search has increased steadily in recent years, a high percentage of sequences remain uncharacterized in the protein universe. Thus, new similarity search strategies are needed to efficiently and reliably infer the structure and function of new sequences. The existing paradigm for studying protein sequence, structure, function and evolution has been established based on the assumption that the protein universe is discrete and hierarchical. Cumulative evidence suggests that the protein universe is continuous. As a result, conventional sequence homology search methods may not be able to detect novel structural, functional and evolutionary relationships between proteins from weak and noisy sequence signals. To overcome the limitations of existing similarity search methods, we propose a new algorithmic framework, Enrichment of Network Topological Similarity (ENTS), to improve the performance of large-scale similarity searches in bioinformatics. Results: We apply ENTS to a challenging unsolved problem: protein fold recognition. Our rigorous benchmark studies demonstrate that ENTS considerably outperforms state-of-the-art methods. As the concept of ENTS can be applied to any similarity metric, it may provide a general framework for similarity search on any set of biological entities, given their representation as a network. Availability and implementation: Source code freely available upon request. Contact: lxie@iscb.org PMID:25717198

  18. Upgrading and Refining of Crude Oils and Petroleum Products by Ionizing Irradiation.

    PubMed

    Zaikin, Yuriy A; Zaikina, Raissa F

    2016-06-01

    A general trend in the oil industry is a decrease in the proven reserves of light crude oils, so that any increase in future oil production is associated with high-viscosity, high-sulfur oils and bitumen. Although the world reserves of heavy oil are much greater than those of sweet light oils, they presently account for less than 12% of total oil recovery. One of the main constraints is the very high expense of the existing technologies for heavy oil recovery, upgrading, transportation, and refining. Heavy oil processing by conventional methods is difficult and requires high power inputs and capital investments. Effective and economic processing of high-viscosity oil and oil residues needs not only improvements of existing methods, such as thermal, catalytic, and hydro-cracking, but also the development of new technological approaches for upgrading and refining any type of problem oil feedstock. One promising approach to this problem is the application of ionizing irradiation to high-viscosity oil processing. Radiation methods for upgrading and refining high-viscosity crude oils and petroleum products over a wide temperature range, oil desulfurization, radiation technology for refining used oil products, and a prospective method for gasoline radiation isomerization are discussed in this paper. The advantages of radiation technology are the simple configuration of radiation facilities, low capital and operational costs, processing at lowered temperatures and nearly atmospheric pressure without the use of any catalysts, high production rates, relatively low energy consumption, and flexibility with respect to the type of oil feedstock.

  19. Evaluation of the White Test for the Intraoperative Detection of Bile Leakage

    PubMed Central

    Leelawat, Kawin; Chaiyabutr, Kittipong; Subwongcharoen, Somboon; Treepongkaruna, Sa-ad

    2012-01-01

    We assess whether the White test is better than the conventional bile leakage test for the intraoperative detection of bile leakage in hepatectomized patients. This study included 30 patients who received elective liver resection. Both the conventional bile leakage test (injecting an isotonic sodium chloride solution through the cystic duct) and the White test (injecting a fat emulsion solution through the cystic duct) were carried out in the same patients. The detection of bile leakage was compared between the conventional method and the White test. A bile leak was demonstrated in 8 patients (26.7%) by the conventional method and in 19 patients (63.3%) by the White test. In addition, the White test detected a significantly higher number of bile leakage sites compared with the conventional method (Wilcoxon signed-rank test; P < 0.001). The White test is better than the conventional test for the intraoperative detection of bile leakage. Based on our study, we recommend that surgeons investigating bile leakage sites during liver resections should use the White test instead of the conventional bile leakage test. PMID:22547901

  20. Evaluation of the white test for the intraoperative detection of bile leakage.

    PubMed

    Leelawat, Kawin; Chaiyabutr, Kittipong; Subwongcharoen, Somboon; Treepongkaruna, Sa-Ad

    2012-01-01

    We assess whether the White test is better than the conventional bile leakage test for the intraoperative detection of bile leakage in hepatectomized patients. This study included 30 patients who received elective liver resection. Both the conventional bile leakage test (injecting an isotonic sodium chloride solution through the cystic duct) and the White test (injecting a fat emulsion solution through the cystic duct) were carried out in the same patients. The detection of bile leakage was compared between the conventional method and the White test. A bile leak was demonstrated in 8 patients (26.7%) by the conventional method and in 19 patients (63.3%) by the White test. In addition, the White test detected a significantly higher number of bile leakage sites compared with the conventional method (Wilcoxon signed-rank test; P < 0.001). The White test is better than the conventional test for the intraoperative detection of bile leakage. Based on our study, we recommend that surgeons investigating bile leakage sites during liver resections should use the White test instead of the conventional bile leakage test.

  1. Digital Versus Conventional Impressions in Fixed Prosthodontics: A Review.

    PubMed

    Ahlholm, Pekka; Sipilä, Kirsi; Vallittu, Pekka; Jakonen, Minna; Kotiranta, Ulla

    2018-01-01

    To conduct a systematic review to evaluate the evidence of possible benefits and accuracy of digital impression techniques vs. conventional impression techniques. Reports of digital impression techniques versus conventional impression techniques were systematically searched for in the following databases: Cochrane Central Register of Controlled Trials, PubMed, and Web of Science. A combination of controlled vocabulary, free-text words, and well-defined inclusion and exclusion criteria guided the search. Digital impression accuracy is at the same level as conventional impression methods in fabrication of crowns and short fixed dental prostheses (FDPs). For fabrication of implant-supported crowns and FDPs, digital impression accuracy is clinically acceptable. In full-arch impressions, conventional impression methods resulted in better accuracy compared to digital impressions. Digital impression techniques are a clinically acceptable alternative to conventional impression methods in fabrication of crowns and short FDPs. For fabrication of implant-supported crowns and FDPs, digital impression systems also result in clinically acceptable fit. Digital impression techniques are faster and can shorten the operation time. Based on this study, the conventional impression technique is still recommended for full-arch impressions. © 2016 by the American College of Prosthodontists.

  2. Stable modeling based control methods using a new RBF network.

    PubMed

    Beyhan, Selami; Alci, Musa

    2010-10-01

    This paper presents a novel model with radial basis functions (RBFs), which is applied successively for online stable identification and control of nonlinear discrete-time systems. First, the proposed model is utilized for direct inverse modeling of the plant to generate the control input, where it is assumed that inverse plant dynamics exist. Second, it is employed for system identification to generate a sliding-mode control input. Finally, the network is employed to tune PID (proportional-integral-derivative) controller parameters automatically. The adaptive learning rate (ALR), which is employed in the gradient descent (GD) method, provides global convergence of the modeling errors. Using the Lyapunov stability approach, the boundedness of the tracking errors and the system parameters is shown both theoretically and in real time. To show the superiority of the new model with RBFs, its tracking results are compared with those of a conventional sigmoidal multi-layer perceptron (MLP) neural network and of the new model with sigmoid activation functions. To show the real-time capability of the new model, the proposed network is employed for online identification and control of a cascaded parallel two-tank liquid-level system. Even in the presence of large disturbances, the proposed model with RBFs generates a suitable control input to track the reference signal better than the other methods in both simulations and real time. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
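
    A stripped-down flavor of the approach: an RBF model with fixed Gaussian centres whose linear output weights are adapted online by gradient descent with a normalized (adaptive) learning rate. This is a generic sketch under our own assumptions, not the authors' network or their Lyapunov-derived rate.

        import numpy as np

        class OnlineRBF:
            """RBF model with fixed centres; output weights adapt online."""
            def __init__(self, centres, width):
                self.c = np.atleast_2d(centres)   # n_centres x input_dim
                self.s = float(width)
                self.w = np.zeros(self.c.shape[0])

            def _phi(self, x):
                d2 = np.sum((self.c - x)**2, axis=1)
                return np.exp(-d2 / (2.0 * self.s**2))

            def update(self, x, y):
                """One identification step on sample (x, y); returns the error."""
                p = self._phi(np.asarray(x, dtype=float))
                e = y - self.w @ p
                eta = 1.0 / (1e-6 + p @ p)   # normalized step keeps the update
                self.w += eta * e * p        # bounded (a common ALR choice)
                return e

        # e.g. model = OnlineRBF(np.linspace(-2, 2, 15)[:, None], width=0.4),
        # then call model.update(x_t, y_t) at each sampling instant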

  3. In silico comparison of the reproducibility of full-arch implant provisional restorations to final restoration between a 3D Scan/CAD/CAM technique and the conventional method.

    PubMed

    Mino, Takuya; Maekawa, Kenji; Ueda, Akihiro; Higuchi, Shizuo; Sejima, Junichi; Takeuchi, Tetsuo; Hara, Emilio Satoshi; Kimura-Ono, Aya; Sonoyama, Wataru; Kuboki, Takuo

    2015-04-01

    The aim of this article was to investigate the accuracy of the reproduction of full-arch implant provisional restorations in final restorations between a 3D Scan/CAD/CAM technique and the conventional method. We fabricated two final restorations for rehabilitation of maxillary and mandibular completely edentulous arches and performed a computer-based comparative analysis of the accuracy of reproducing the provisional restoration in the final restoration between a 3D scanning and CAD/CAM (Scan/CAD/CAM) technique and the conventional silicone-mold transfer technique. Final restorations fabricated by either the conventional or the Scan/CAD/CAM method were successfully installed in the patient. The total concave/convex volume discrepancy observed with the Scan/CAD/CAM technique was 503.50 mm³ and 338.15 mm³ for maxillary and mandibular implant-supported prostheses (ISPs), respectively. In contrast, the total concave/convex volume discrepancy observed with the conventional method was markedly higher (1106.84 mm³ and 771.23 mm³ for maxillary and mandibular ISPs, respectively). The results of the present report suggest that the Scan/CAD/CAM method enables a more precise and accurate transfer of provisional restorations to final restorations compared to the conventional method. Copyright © 2014 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  4. A Simple and Novel Method to Attain Retrograde Ureteral Access after Previous Cohen Cross-Trigonal Ureteral Reimplantation

    PubMed Central

    Adam, Ahmed

    2017-01-01

    Objective: To describe a simple, novel method to achieve ureteric access in the Cohen crossed reimplanted ureter, allowing retrograde working access via the conventional transurethral route. Materials and Methods: Under cystoscopic vision, suprapubic needle puncture was performed. The needle was directed (bevel facing) towards the desired ureteric orifice (UO). A guidewire (with a floppy tip) was then inserted into the suprapubic needle passing into the bladder, and then easily passed into the crossed-reimplanted UO. The distal end of the guidewire was then removed through the urethra with cystoscopic grasping forceps. The straightened ureter then easily facilitated ureteroscopy access, retrograde pyelogram studies, and JJ stent insertion in the conventional transurethral manner. Results: The UO and ureter were aligned in a more conventional orthotopic course, allowing conventional transurethral working access. Conclusion: A novel method to access the Cohen crossed reimplanted ureter was described. All previously published methods of accessing the crossed ureter were critically appraised. PMID:29463976

  5. Microwave-assisted Derivatization of Fatty Acids for Its Measurement in Milk Using High-Performance Liquid Chromatography.

    PubMed

    Shrestha, Rojeet; Miura, Yusuke; Hirano, Ken-Ichi; Chen, Zhen; Okabe, Hiroaki; Chiba, Hitoshi; Hui, Shu-Ping

    2018-01-01

    Fatty acid (FA) profiling of milk has important applications in human health and nutrition. Conventional methods for the saponification and derivatization of FAs are time-consuming and laborious. We aimed to develop a simple, rapid, and economical method for the determination of FAs in milk. We applied a beneficial approach of microwave-assisted saponification (MAS) of milk fats and microwave-assisted derivatization (MAD) of FAs to their hydrazides, integrated with HPLC-based analysis. The optimal conditions for MAS and MAD were determined. Microwave irradiation significantly reduced the sample preparation time from 80 min in the conventional method to less than 3 min. We used three internal standards for the measurement of short-, medium-, and long-chain FAs. The proposed method showed satisfactory analytical sensitivity, recovery, and reproducibility. There was a significant correlation in milk FA concentrations between the proposed and conventional methods. Being quick, economical, and convenient, the proposed method for milk FA measurement can substitute for the conventional method.

  6. Self-calibration method without joint iteration for distributed small satellite SAR systems

    NASA Astrophysics Data System (ADS)

    Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan

    2013-12-01

    The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are treated as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions under large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In the modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, using the zero Doppler bin data, phase error estimation can be performed independently of position errors. Finally, position errors are estimated based on a Taylor-series expansion. Because joint iteration between gain-phase error estimation and position error estimation is no longer required, the suboptimal convergence that occurs in the conventional method is avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verify the effectiveness of the modified method.
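
    The first stage, estimating and compensating channel gain errors before touching phase or position, can be illustrated with a simple power-ratio estimator. This is a generic stand-in under an equal-expected-power assumption, not the estimator derived in the article.

        import numpy as np

        def estimate_gain_errors(X):
            """X: channels x samples of complex array data.

            Returns multiplicative gain-error estimates relative to channel 0,
            assuming equal expected signal power per channel once gains are
            removed."""
            power = np.mean(np.abs(X)**2, axis=1)
            return np.sqrt(power / power[0])

        # Compensation before phase/position estimation:
        # X_cal = X / estimate_gain_errors(X)[:, None]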

  7. An analysis method for flavan-3-ols using high performance liquid chromatography coupled with a fluorescence detector.

    PubMed

    Wang, Liuqing; Yamashita, Yoko; Saito, Akiko; Ashida, Hitoshi

    2017-07-01

    Procyanidins belong to a family of flavan-3-ols, which consist of the monomers (+)-catechin and (-)-epicatechin and their oligomers and polymers, and are distributed in many plant-derived foods. Procyanidins are reported to have many beneficial physiological activities, such as antihypertensive and anticancer effects. However, the bioavailability of procyanidins is not well understood owing to a lack of convenient and highly sensitive analysis methods. The aim of this study was to develop an improved method for determining procyanidin content in both food materials and biological samples. High performance liquid chromatography (HPLC) coupled with a fluorescence detector was used in this study. The limits of detection (LODs) of (+)-catechin, (-)-epicatechin, procyanidin B2, procyanidin C1, and cinnamtannin A2 were 3.0×10⁻³ ng, 4.0×10⁻³ ng, 14.0×10⁻³ ng, 18.5×10⁻³ ng, and 23.0×10⁻³ ng, respectively; the limits of quantification (LOQs) were 10.0×10⁻³ ng, 29.0×10⁻³ ng, 28.5×10⁻³ ng, 54.1×10⁻³ ng, and 115.0×10⁻³ ng, respectively. The LOD and LOQ values indicated that the sensitivity of the fluorescence detector method was around 1000 times higher than that of conventional HPLC coupled with a UV detector. We applied the developed method to measure procyanidins in black soybean seed coat extract (BE) prepared from soybeans grown under three different fertilization conditions, namely conventional farming, basal manure application, and intertillage. The amount of flavan-3-ols in these BEs decreased in the order intertillage > basal manure application > conventional farming. Commercially available BE was orally administered to mice at a dose of 250 mg/kg body weight, and the blood flavan-3-ol content was measured. Plasma analysis indicated that procyanidins up to tetramers were detectable and that flavan-3-ols existed mainly in conjugated forms in the plasma. In conclusion, we developed a highly sensitive and convenient analytical method for flavan-3-ols and applied it to investigate the bioavailability of flavan-3-ols in biological samples and to measure flavan-3-ol content in food materials and plants. Copyright © 2017. Published by Elsevier B.V.

  8. Multienergy CT acquisition and reconstruction with a stepped tube potential scan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Le; Xing, Yuxiang, E-mail: xingyx@mail.tsinghua.edu.cn

    Purpose: Based on the energy-dependent properties of matter, one may obtain a pseudo-monochromatic attenuation map, a material composition image, an electron-density distribution, and an atomic number image from a dual- or multienergy computed tomography (CT) scan. Dual- and multienergy CT scans broaden the potential of x-ray CT imaging, and the development of such systems is very useful in both medical and industrial investigations. In this paper, the authors propose a new dual- and multienergy CT system design (segmental multienergy CT, SegMECT) using an innovative scanning scheme that is conveniently implemented on a conventional single-energy CT system. The two-step-energy dual-energy CT can be regarded as a special case of SegMECT. A special reconstruction method is proposed to support SegMECT. Methods: In SegMECT, the circular trajectory of a CT scan is angularly divided into several arcs, and the x-ray source is set to a different tube voltage for each arc. Thus, only a few step changes to the x-ray energy during the scan are needed to complete a multienergy data acquisition. With such a data set, image reconstruction may suffer from severe limited-angle artifacts if conventional reconstruction methods are used. To solve the problem, the authors present a new prior-image-based reconstruction technique using a total variation norm of a quotient image as the constraint. On the one hand, the prior extracts structural information from all of the projection data. On the other hand, the effect of a possibly imprecise intensity level of the prior can be mitigated by minimizing the total variation of the quotient image. Results: The authors present a new scheme for a SegMECT configuration and establish a reconstruction method for such a system. Both numerical simulation and a practical phantom experiment were conducted to validate the proposed reconstruction method and the effectiveness of the system design. The results demonstrate that the proposed SegMECT can provide both attenuation images and material decomposition images of reasonable quality. Compared to existing methods, the new system configuration demonstrates advantages in simplicity of implementation, system cost, and dose control. Conclusions: The proposed SegMECT imaging approach has great potential for practical applications and can be readily realized on a conventional CT system.
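
    One plausible way to write the prior-image-constrained objective described above (our notation, not the authors') is

        \hat{x} = \arg\min_{x \geq 0} \; \tfrac{1}{2} \lVert A x - b \rVert_2^2
                  + \lambda \, \mathrm{TV}\!\left( x \oslash x_{\mathrm{prior}} \right),

    where A is the system matrix for a limited-angle segment, b the measured projections for that segment, x_prior the prior image assembled from all of the data, ⊘ elementwise division, and TV the total variation seminorm. Penalizing the quotient image rather than the difference tolerates an imprecise intensity level in the prior, as noted above.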

  9. Learning free energy landscapes using artificial neural networks.

    PubMed

    Sidky, Hythem; Whitmer, Jonathan K

    2018-03-14

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.

  10. Learning free energy landscapes using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Sidky, Hythem; Whitmer, Jonathan K.

    2018-03-01

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.
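
    In spirit, the bias is a small network fitted to the current free-energy estimate. The toy NumPy sketch below fits a one-hidden-layer network to -kT log(histogram) on a grid, with a quadratic weight penalty standing in for Bayesian regularization; the grid, architecture, and hyperparameters are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(0.0, 1.0, (20, 1)), np.zeros((20, 1))
        W2, b2 = rng.normal(0.0, 0.3, (1, 20)), np.zeros((1, 1))

        def bias(xi):
            """Network bias potential on a 1 x N row of collective-variable values."""
            h = np.tanh(W1 @ xi + b1)
            return W2 @ h + b2, h

        def fit_bias(xi, f_target, lr=1e-2, lam=1e-4, steps=5000):
            """Fit the network to a free-energy estimate f_target (1 x N).

            lam penalizes weights, a crude stand-in for Bayesian regularization."""
            global W1, b1, W2, b2
            n = xi.shape[1]
            for _ in range(steps):
                y, h = bias(xi)
                err = y - f_target
                gW2 = err @ h.T / n + lam * W2
                gb2 = err.mean(axis=1, keepdims=True)
                gh = (W2.T @ err) * (1.0 - h**2)
                gW1 = gh @ xi.T / n + lam * W1
                gb1 = gh.mean(axis=1, keepdims=True)
                W2 -= lr * gW2; b2 -= lr * gb2
                W1 -= lr * gW1; b1 -= lr * gb1

        # e.g. with a sampled histogram hist on grid xi (shape 1 x N):
        # fit_bias(xi, -kT * np.log(hist + 1e-12)); the applied bias is -bias(xi)[0]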

  11. The online community based decision making support system for mitigating biased decision making

    NASA Astrophysics Data System (ADS)

    Kang, Sunghyun; Seo, Jiwan; Choi, Seungjin; Kim, Junho; Han, Sangyong

    2016-10-01

    As Internet technology and social media advance, various information and opinions are shared and distributed through online communities. However, the existence of implicit and explicit bias in opinions may have a potential influence on the outcomes. Compared to the importance of mitigating biased information, the study of this field is relatively young and does not address many important issues. In this paper we propose a novel approach to mitigating biased opinions using conventional machine learning methods. The proposed method extracts useful features, such as the inclination and sentiment of the community members. Members are classified based on their previous behavior, and their propensities are inferred. This information on each community and its members is very useful and improves the ability to make an unbiased decision. The proposed method presented in this paper is shown to have the ability to assist optimal, fair and sound decision making while also reducing the influence of implicit bias.
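
    The abstract does not specify the features or classifier used, so the following is only a schematic stand-in under assumed names: hypothetical per-member features (mean sentiment, inclination, activity) feed a logistic-regression classifier, and the predicted bias probability down-weights each member's opinion when stances are aggregated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
# hypothetical per-member features: [mean sentiment, inclination, posts/day]
X = np.column_stack([rng.normal(0, 1, n), rng.normal(0, 1, n), rng.poisson(3, n)])
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(0, 0.5, n) > 1).astype(int)  # 1 = biased

clf = LogisticRegression(max_iter=1000).fit(X, y)
p_biased = clf.predict_proba(X)[:, 1]

opinions = rng.normal(0, 1, n)          # each member's stance on some decision
weights = 1.0 - p_biased                # down-weight likely-biased members
print("raw mean stance:     ", opinions.mean())
print("bias-adjusted stance:", np.average(opinions, weights=weights))
```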

  12. Geographic information system (GIS)-based image analysis for assessing growth of Physarum polycephalum on a solid medium.

    PubMed

    Tran, Hanh T M; Stephenson, Steven L; Tullis, Jason A

    2015-01-01

    The conventional method used to assess growth of the plasmodium of the slime mold Physarum polycephalum in solid culture is to measure the extent of plasmodial expansion from the point of inoculation by using a ruler. However, plasmodial growth is usually rather irregular, so the values obtained are not especially accurate. Similar challenges exist in quantifying the growth of a fungal mycelium. In this paper, we describe a method that uses geographic information system software to obtain highly accurate estimates of plasmodial growth over time. This approach calculates plasmodial area from images obtained at particular intervals following inoculation. In addition, the correlation between plasmodial area and dry cell weight was determined. This correlation can be used for biomass estimation without the need to terminate the cultures in question. The method described herein is simple but effective and could also be used for growth measurements of other microorganisms such as fungi on solid media.
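
    A minimal numpy sketch of the area-from-image idea, with plain thresholding standing in for the GIS software used in the paper; the image, pixel calibration, and area-to-dry-weight numbers are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
h, w = 200, 200
yy, xx = np.mgrid[0:h, 0:w]
# synthetic photograph: an irregular bright plasmodium on darker agar
blob = (yy - 100) ** 2 / 1600 + (xx - 90) ** 2 / 2500 < 1
img = np.where(blob, 0.8, 0.2) + rng.normal(0, 0.05, (h, w))

mask = img > 0.5                        # simple threshold segmentation
pixel_area_mm2 = 0.01                   # hypothetical calibration: 0.1 x 0.1 mm
print(f"plasmodial area: {mask.sum() * pixel_area_mm2:.1f} mm^2")

# with areas and matched dry weights measured over time, a linear fit gives a
# non-destructive biomass estimate (these coefficients are illustrative only)
areas = np.array([50.0, 120.0, 300.0, 610.0])     # mm^2
dry_wt = np.array([1.1, 2.4, 6.2, 12.0])          # mg
slope, intercept = np.polyfit(areas, dry_wt, 1)
print(f"estimated dry weight at 400 mm^2: {slope * 400 + intercept:.1f} mg")
```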

  13. Evaluation of actinide biosorption by microorganisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Happel, A.M.

    1996-06-01

    Conventional methods for removing metals from aqueous solutions include chemical precipitation, chemical oxidation or reduction, ion exchange, reverse osmosis, electrochemical treatment and evaporation. The removal of radionuclides from aqueous waste streams has largely relied on ion exchange methods, which can be prohibitively costly given increasingly stringent regulatory effluent limits. The use of microbial cells as biosorbents for heavy metals offers a potential alternative to existing methods for decontamination or recovery of heavy metals from a variety of industrial waste streams and contaminated ground waters. The toxicity and the extreme and variable conditions present in many radionuclide-containing waste streams may preclude the use of living microorganisms and favor the use of non-living biomass for the removal of actinides from these waste streams. In the work presented here, we have examined the biosorption of uranium by non-living, non-metabolizing microbial biomass, thus avoiding the problems associated with living systems. We are investigating biosorption with the long-term goal of developing microbial technologies for the remediation of actinides.

  14. Multi-Mounted X-Ray Computed Tomography

    PubMed Central

    Fu, Jian; Liu, Zhenzhong; Wang, Jingzheng

    2016-01-01

    Most existing X-ray computed tomography (CT) techniques work in single-mounted mode and need to scan the inspected objects one by one. This is time-consuming and not acceptable for large-scale inspection. In this paper, we report a multi-mounted CT method and its first engineering implementation. It consists of a multi-mounted scanning geometry and the corresponding algebraic iterative reconstruction algorithm. This approach permits the CT rotation scanning of multiple objects simultaneously without increasing the penetration thickness or introducing signal crosstalk. Compared with conventional single-mounted methods, it has the potential to improve imaging efficiency and suppress artifacts from beam hardening and scatter. This work comprises a numerical study of the method and its experimental verification using a dataset measured with a developed multi-mounted X-ray CT prototype system. We believe that this technique is of particular interest for pushing the engineering applications of X-ray CT. PMID:27073911
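
    The paper's reconstruction is algebraic and iterative; the sketch below shows the generic Kaczmarz/ART update such schemes build on, with a small random matrix standing in for the multi-mounted projection geometry (sizes and data are toy assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)
n_rays, n_pix = 120, 64
A = rng.normal(size=(n_rays, n_pix))   # stand-in for the system matrix
x_true = rng.random(n_pix)             # stacked images of the mounted objects
b = A @ x_true                         # simulated projection data

x = np.zeros(n_pix)
for sweep in range(50):                # Kaczmarz/ART: project onto each ray eq.
    for i in range(n_rays):
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```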

  15. Ultrastructural changes of sheep cumulus-oocyte complexes following different methods of vitrification.

    PubMed

    Ebrahimi, Bita; Valojerdi, Mojtaba Rezazadeh; Eftekhari-Yazdi, Poopak; Baharvand, Hossein

    2012-05-01

    To determine the ultrastructural changes of sheep cumulus-oocyte complexes (COCs) following different methods of vitrification, good quality isolated COCs (GV stage) were randomly divided into non-vitrified control, conventional straw, cryotop and solid surface vitrification groups. In both the conventional and cryotop methods, vitrified COCs were loaded into conventional straws and onto cryotops, respectively, and then plunged directly into liquid nitrogen (LN2); in the solid surface group, vitrified COCs were loaded onto cryotops and cooled before plunging into LN2. Post-warming assessment of healthy COCs in the cryotop group, especially in comparison with the conventional straw group, revealed a better viability rate and good preservation of the ooplasm organization. However, in all vitrification groups except the cryotop group, mitochondria were clumped. Solely in the conventional straw group, the mitochondria showed different densities and were extremely distended. Moreover, in the latter group, plenty of large irregular connected vesicles were observed in the ooplasm, and in some parts their membranes had ruptured. Also, in the conventional and solid surface vitrification groups, cumulus cell projections were retracted from the zona pellucida in some parts. In conclusion, the cryotop vitrification method seems to give good post-warming survivability compared with the other methods and shows less deleterious effects on the ultrastructure of healthy vitrified-warmed sheep COCs.

  16. Visualization and characterization of engineered nanoparticles in complex environmental and food matrices using atmospheric scanning electron microscopy.

    PubMed

    Luo, P; Morrison, I; Dudkiewicz, A; Tiede, K; Boyes, E; O'Toole, P; Park, S; Boxall, A B

    2013-04-01

    Imaging and characterization of engineered nanoparticles (ENPs) in water, soils, sediment and food matrices is very important for research into the risks of ENPs to consumers and the environment. However, these analyses pose a significant challenge, as most existing techniques require some form of sample manipulation prior to imaging and characterization, which can result in changes to the ENPs in a sample and in the introduction of analytical artefacts. This study therefore explored the application of a newly designed instrument, the atmospheric scanning electron microscope (ASEM), which allows the direct characterization of ENPs in liquid matrices and therefore overcomes some of the limitations associated with existing imaging methods. ASEM was used to characterize the size distribution of a range of ENPs in a selection of environmental and food matrices, including supernatant of natural sediment, test medium used in ecotoxicology studies, bovine serum albumin and tomato soup, under atmospheric conditions. The imaging results were compared to results obtained using conventional imaging by transmission electron microscopy (TEM) and SEM, as well as to size distribution data derived from nanoparticle tracking analysis (NTA). ASEM was found to be a complementary technique to existing methods that is able to visualize ENPs in complex liquid matrices and to provide ENP size information without extensive sample preparation. ASEM can detect ENPs in liquids down to 30 nm in size and at concentrations down to 1 mg/L (9×10^8 particles/mL, 50 nm Au ENPs). The results indicate that ASEM is a highly complementary method to existing approaches for analyzing ENPs in complex media and that its use will allow researchers to study ENP behavior in situ, something that is currently extremely challenging to do. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  17. VOLATILE CONSTITUENTS OF GINGER OIL PREPARED ACCORDING TO IRANIAN TRADITIONAL MEDICINE AND CONVENTIONAL METHOD: A COMPARATIVE STUDY.

    PubMed

    Shirooye, Pantea; Mokaberinejad, Roshanak; Ara, Leila; Hamzeloo-Moghadam, Maryam

    2016-01-01

    Herbal medicines formulated as oils were believed to possess more powerful effects than their original plants in Iranian Traditional Medicine (ITM). One of the popular oils suggested for the treatment of various indications was ginger oil. In the present study, to suggest a more convenient method of oil preparation (compared to the traditional method), ginger oil was prepared according to both the traditional and conventional maceration methods and the volatile oil constituents were compared. Ginger oil was obtained in sesame oil according to both the traditional and the conventional (maceration) methods. The volatile oils of dried ginger and of both oils were obtained by hydro-distillation and analyzed by gas chromatography/mass spectroscopy. Fifty-five, fifty-nine and fifty-one components, constituting 94%, 94% and 98% of the total compounds, were identified in the volatile oils of ginger, the traditional oil and the conventional oil, respectively. The most dominant compounds of the traditional and conventional oils were very similar; however, they differed from the ginger essential oil, which has also been shown to possess limited amounts of anti-inflammatory components. It was concluded that ginger oil could be prepared through the maceration method and used for indications mentioned in ITM.

  18. Software safety - A user's practical perspective

    NASA Technical Reports Server (NTRS)

    Dunn, William R.; Corliss, Lloyd D.

    1990-01-01

    Software safety assurance philosophy and practices at NASA Ames are discussed. It is shown that, to be safe, software must be error-free. Software developments on two digital flight control systems and two ground facility systems are examined, including the overall system and software organization and function, the software-safety issues, and their resolution. The effectiveness of safety assurance methods is discussed, including conventional life-cycle practices, verification and validation testing, software safety analysis, and formal design methods. It is concluded (1) that a practical software safety technology does not yet exist, (2) that it is unlikely that a set of general-purpose analytical techniques can be developed for proving that software is safe, and (3) that successful software safety-assurance practices will have to take into account the detailed design processes employed and show that the software will execute correctly under all possible conditions.

  19. Evaluation of a Consistent LES/PDF Method Using a Series of Experimental Spray Flames

    NASA Astrophysics Data System (ADS)

    Heye, Colin; Raman, Venkat

    2012-11-01

    A consistent method for the evolution of the joint-scalar probability density function (PDF) transport equation is proposed for application to large eddy simulation (LES) of turbulent reacting flows containing evaporating spray droplets. PDF transport equations provide the benefit of including the chemical source term in closed form; however, additional terms describing LES subfilter mixing must be modeled. The recent availability of detailed experimental measurements provides model validation data for a wide range of evaporation rates and combustion regimes, as is well known to occur in spray flames. In this work, the experimental data will be used to investigate the impact of droplet mass loading and evaporation rates on the subfilter scalar PDF shape in comparison with conventional flamelet models. In addition, existing model term closures in the PDF transport equations are evaluated with a focus on their validity in the presence of regime changes.

  20. A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography.

    PubMed

    Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Doğa

    2017-03-01

    This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography by using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy in the evaluation of synthetic data with various noise ratios. It is further validated with experimental data from four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. CNN also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.
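
    A rough sketch of the kind of CNN regressor the abstract describes, written against the public tf.keras API rather than the authors' xlearn toolbox; the placeholder arrays stand in for training pairs of reconstructed slices and their known rotation-axis offsets, and the architecture is an assumption.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(4)
# placeholders for (reconstructed slice, known rotation-axis offset) pairs;
# real slices show characteristic arc artifacts when the center is wrong
x_train = rng.random((256, 64, 64, 1)).astype("float32")
y_train = rng.uniform(-5, 5, 256).astype("float32")   # offset in pixels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),          # regressed center-of-rotation offset
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)
```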

  1. Transformation of metal-organic frameworks for molecular sieving membranes

    PubMed Central

    Li, Wanbin; Zhang, Yufan; Zhang, Congyang; Meng, Qin; Xu, Zehai; Su, Pengcheng; Li, Qingbiao; Shen, Chong; Fan, Zheng; Qin, Lei; Zhang, Guoliang

    2016-01-01

    The development of simple, versatile strategies for the synthesis of metal-organic framework (MOF)-derived membranes is of increasing scientific interest, but challenges exist in understanding suitable fabrication mechanisms. Here we report a route for the complete transformation of a series of MOF membranes and particles, based on multivalent cation substitution. Through our approach, the effective pore size can be reduced through the immobilization of metal salt residues in the cavities, and appropriate MOF crystal facets can be exposed, to achieve competitive molecular sieving capabilities. The method can also be used more generally for the synthesis of a variety of MOF membranes and particles. Importantly, we design and synthesize promising MOF membrane candidates that are hard to achieve through conventional methods. For example, our CuBTC/MIL-100 membrane exhibits 89, 171, 241 and 336 times higher H2 permeance than that of CO2, O2, N2 and CH4, respectively. PMID:27090597

  2. Efficient kinetic method for fluid simulation beyond the Navier-Stokes equation.

    PubMed

    Zhang, Raoyang; Shan, Xiaowen; Chen, Hudong

    2006-10-01

    We present a further theoretical extension to the kinetic-theory-based formulation of the lattice Boltzmann method of Shan [J. Fluid Mech. 550, 413 (2006)]. In addition to the higher-order projection of the equilibrium distribution function and a sufficiently accurate Gauss-Hermite quadrature in the original formulation, a regularization procedure is introduced in this paper. This procedure ensures a consistent order of accuracy control over the nonequilibrium contributions in the Galerkin sense. Using this formulation, we construct a specific lattice Boltzmann model that accurately incorporates up to third-order hydrodynamic moments. Numerical evidence demonstrates that the extended model overcomes some major defects existing in conventionally known lattice Boltzmann models, so that fluid flows at finite Knudsen number Kn can be more quantitatively simulated. Results from force-driven Poiseuille flow simulations predict the Knudsen minimum and the asymptotic behavior of flow flux at large Kn.

  3. Uncertainties in land use data

    NASA Astrophysics Data System (ADS)

    Castilla, G.; Hay, G. J.

    2006-11-01

    This paper deals with the description and assessment of uncertainties in gridded land use data derived from Remote Sensing observations, in the context of hydrological studies. Land use is a categorical regionalised variable returning the main socio-economic role each location has, where the role is inferred from the pattern of occupation of land. There are two main uncertainties surrounding land use data, positional and categorical. This paper focuses on the second one, as the first has in general less serious implications and is easier to tackle. The conventional method used to assess categorical uncertainty, the confusion matrix, is criticised in depth, the main critique being its inability to inform on a basic requirement for propagating uncertainty through distributed hydrological models, namely the spatial distribution of errors. Some existing alternative methods are reported, and finally the need for metadata is stressed as a more reliable means to assess the quality, and hence the uncertainty, of these data.
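
    A small sketch of the confusion matrix being criticised above, and of the critique itself: once the per-pixel comparison is collapsed into class counts, the locations of the errors (which a distributed hydrological model would need) are gone. The maps and class codes are invented.

```python
import numpy as np

# reference land use map and a classified map on a tiny grid (classes 0-2)
ref = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 1, 1], [2, 2, 2, 2]])
cls = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [2, 2, 2, 1], [2, 2, 2, 2]])

k = 3
cm = np.zeros((k, k), dtype=int)       # rows: reference class, cols: mapped
for r, c in zip(ref.ravel(), cls.ravel()):
    cm[r, c] += 1
print(cm)
print("overall accuracy:", np.trace(cm) / cm.sum())

# the matrix says nothing about WHERE the errors sit; an error map does
print((ref != cls).astype(int))
```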

  4. Optofluidic time-stretch quantitative phase microscopy.

    PubMed

    Guo, Baoshan; Lei, Cheng; Wu, Yi; Kobayashi, Hirofumi; Ito, Takuro; Yalikun, Yaxiaer; Lee, Sangwook; Isozaki, Akihiro; Li, Ming; Jiang, Yiyue; Yasumoto, Atsushi; Di Carlo, Dino; Tanaka, Yo; Yatomi, Yutaka; Ozeki, Yasuyuki; Goda, Keisuke

    2018-03-01

    Innovations in optical microscopy have opened new windows onto scientific research, industrial quality control, and medical practice over the last few decades. One such innovation is optofluidic time-stretch quantitative phase microscopy - an emerging method for high-throughput quantitative phase imaging that builds on the interference between temporally stretched signal and reference pulses, using the dispersive properties of light in both the spatial and temporal domains in an interferometric configuration on a microfluidic platform. It achieves the continuous acquisition of both intensity and phase images with a high throughput of more than 10,000 particles or cells per second by overcoming speed limitations that exist in conventional quantitative phase imaging methods. Applications enabled by such capabilities are versatile and include characterization of cancer cells and microalgal cultures. In this paper, we review the principles and applications of optofluidic time-stretch quantitative phase microscopy and discuss its future perspective. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Need for new technologies for detection of adventitious agents in vaccines and other biological products.

    PubMed

    Mallet, Laurent; Gisonni-Lex, Lucy

    2014-01-01

    From an industrial perspective, the conventional in vitro and in vivo assays used for detection of viral contaminants have shown their limitations, as illustrated by the unfortunate detection of porcine circovirus contamination in a licensed rotavirus vaccine. This contamination event illustrates the gaps within the existing adventitious agent strategy and the potential use of new broader molecular detection methods. This paper serves to summarize current testing approaches and challenges, along with opportunities for the use of these new technologies. Testing of biological products is required to ensure the safety of patients. Recently, a licensed vaccine was found to be contaminated with a virus. This contamination did not cause a safety concern to the patients; however, it highlights the need for using new testing methods to control our biological products. This paper introduces the benefits of these new tests and outlines the challenges with the current tests. © PDA, Inc. 2014.

  6. Microwave Absorption Properties of Iron Nanoparticles Prepared by Ball-Milling

    NASA Astrophysics Data System (ADS)

    Chu, Xuan T. A.; Ta, Bach N.; Ngo, Le T. H.; Do, Manh H.; Nguyen, Phuc X.; Nam, Dao N. H.

    2016-05-01

    A nanopowder of iron was prepared using a high-energy ball milling method, which is capable of producing nanoparticles at a reasonably larger scale than conventional chemical methods. Analyses using x-ray diffraction and magnetic measurements indicate that the iron nanoparticles are a single phase with a body-centered cubic structure and have quite stable magnetic characteristics in air. The iron nanoparticles were then mixed with paraffin and pressed into flat square plates for free-space microwave transmission and reflection measurements in the 4-8 GHz range. Without an Al backing plate, the Fe nanoparticles seem to only weakly absorb microwave radiation. The reflected signal S11 drops to zero and very large negative values of reflection loss (RL) are observed for Al-backed samples, suggesting the existence of a phase matching resonance near f ≈ 6 GHz.

  7. New transurethral system for interstitial radiation of prostate cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumgartner, G.; Callahan, D.; McKiel, C.F. Jr.

    Direct endoscopic implantation of radioactive materials for carcinoma of the prostate without an open operation was accomplished by the use of modified existing transurethral instrumentation and techniques. The closed approach seems applicable particularly to the geriatric population, which is afflicted more commonly but is frequently not treated because of concurrent diseases or because the patient had transurethral resection of the prostate as a diagnostic procedure. Eleven patients were implanted using the transurethral route. Implantations were accomplished successfully with extremely low morbidity. Along with more conventional dosimetry studies, computed tomography was used to assess the placement of seeds. The direct visualization of the method suggests a potential for greater precision of seed placement, as illustrated by computed tomography. In addition, this new instrumentation and method offers a low-risk procedure for carcinoma of the prostate that can be performed on an outpatient basis for selected patients.

  8. Comparison of sonochemiluminescence images using image analysis techniques and identification of acoustic pressure fields via simulation.

    PubMed

    Tiong, T Joyce; Chandesa, Tissa; Yap, Yeow Hong

    2017-05-01

    One common method to determine the existence of cavitational activity in power ultrasonics systems is by capturing images of sonoluminescence (SL) or sonochemiluminescence (SCL) in a dark environment. Conventionally, the light emitted from SL or SCL was detected based on the number of photons. Though this method is effective, it cannot identify the sonochemical zones of an ultrasonic system. SL/SCL images, on the other hand, enable identification of 'active' sonochemical zones. However, these images often provide just qualitative data, as harvesting light intensity data from the images is tedious and requires high resolution images. In this work, we propose a new image analysis technique using pseudo-coloured images to quantify the SCL zones based on the intensities of the SCL images, followed by comparison of the active SCL zones with COMSOL-simulated acoustic pressure zones. Copyright © 2016 Elsevier B.V. All rights reserved.
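
    A toy illustration of the quantification step: pseudo-colouring an SCL image amounts to binning pixel intensities, after which the area of each intensity class, and of a thresholded 'active' zone, follows directly. The synthetic image and thresholds are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
img = 0.3 * rng.random((128, 128))     # dark background
img[40:80, 30:90] += 0.6               # bright 'active' sonochemical zone

# pseudo-colouring amounts to binning intensities; zone areas follow directly
edges = np.array([0.0, 0.25, 0.5, 0.75, 1.01])
classes = np.digitize(img, edges) - 1  # intensity classes 0..3
for level in range(4):
    print(f"class {level}: {100 * (classes == level).mean():.1f}% of image")

print("active-zone fraction (I > 0.5):", (img > 0.5).mean())
```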

  9. Diffusion imaging quality control via entropy of principal direction distribution.

    PubMed

    Farzinfar, Mahshid; Oguz, Ipek; Smith, Rachel G; Verde, Audrey R; Dietrich, Cheryl; Gupta, Aditya; Escolar, Maria L; Piven, Joseph; Pujol, Sonia; Vachet, Clement; Gouttard, Sylvain; Gerig, Guido; Dager, Stephen; McKinstry, Robert C; Paterson, Sarah; Evans, Alan C; Styner, Martin A

    2013-11-15

    Diffusion MR imaging has received increasing attention in the neuroimaging community, as it yields new insights into the microstructural organization of white matter that are not available with conventional MRI techniques. While the technology has enormous potential, diffusion MRI suffers from a unique and complex set of image quality problems, limiting the sensitivity of studies and reducing the accuracy of findings. Furthermore, the acquisition time for diffusion MRI is longer than conventional MRI due to the need for multiple acquisitions to obtain directionally encoded Diffusion Weighted Images (DWI). This leads to increased motion artifacts, reduced signal-to-noise ratio (SNR), and increased proneness to a wide variety of artifacts, including eddy-current and motion artifacts, "venetian blind" artifacts, as well as slice-wise and gradient-wise inconsistencies. Such artifacts mandate stringent Quality Control (QC) schemes in the processing of diffusion MRI data. Most existing QC procedures are conducted in the DWI domain and/or on a voxel level, but our own experiments show that these methods often do not fully detect and eliminate certain types of artifacts, often only visible when investigating groups of DWIs or a derived diffusion model, such as the most-employed diffusion tensor imaging (DTI). Here, we propose a novel regional QC measure in the DTI domain that employs the entropy of the regional distribution of the principal directions (PD). The PD entropy quantifies the scattering and spread of the principal diffusion directions and is invariant to the patient's position in the scanner. A high entropy value indicates that the PDs are distributed relatively uniformly, while a low entropy value indicates the presence of clusters in the PD distribution. The novel QC measure is intended to complement the existing set of QC procedures by detecting and correcting residual artifacts. Such residual artifacts cause a directional bias in the measured PDs and are here called dominant direction artifacts. Experiments show that our automatic method can reliably detect and potentially correct such artifacts, especially the ones caused by the vibrations of the scanner table during the scan. The results further indicate the usefulness of this method for general quality assessment in DTI studies. Copyright © 2013 Elsevier Inc. All rights reserved.
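
    A minimal sketch of the entropy computation the abstract describes, under assumed details: principal directions are folded to one hemisphere (they are antipodally symmetric), binned over the sphere, and the Shannon entropy of the bin occupancies is returned; the binning scheme and data are illustrative only.

```python
import numpy as np

def pd_entropy(dirs, n_bins=12):
    """Shannon entropy of a regional principal-direction distribution."""
    d = dirs * np.where(dirs[:, 2:3] < 0, -1.0, 1.0)   # fold to one hemisphere
    theta = np.arccos(np.clip(d[:, 2], -1.0, 1.0))     # polar angle
    phi = np.arctan2(d[:, 1], d[:, 0])                 # azimuth
    hist, _, _ = np.histogram2d(theta, phi, bins=n_bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(6)
iso = rng.normal(size=(4000, 3))
iso /= np.linalg.norm(iso, axis=1, keepdims=True)      # roughly uniform PDs
bias = iso * 0.15 + np.array([0.0, 0.0, 1.0])          # PDs pulled toward +z
bias /= np.linalg.norm(bias, axis=1, keepdims=True)
print("uniform PDs      :", pd_entropy(iso))    # high entropy
print("dominant-dir PDs :", pd_entropy(bias))   # low entropy flags an artifact
```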

  10. Diffusion imaging quality control via entropy of principal direction distribution

    PubMed Central

    Oguz, Ipek; Smith, Rachel G.; Verde, Audrey R.; Dietrich, Cheryl; Gupta, Aditya; Escolar, Maria L.; Piven, Joseph; Pujol, Sonia; Vachet, Clement; Gouttard, Sylvain; Gerig, Guido; Dager, Stephen; McKinstry, Robert C.; Paterson, Sarah; Evans, Alan C.; Styner, Martin A.

    2013-01-01

    Diffusion MR imaging has received increasing attention in the neuroimaging community, as it yields new insights into the microstructural organization of white matter that are not available with conventional MRI techniques. While the technology has enormous potential, diffusion MRI suffers from a unique and complex set of image quality problems, limiting the sensitivity of studies and reducing the accuracy of findings. Furthermore, the acquisition time for diffusion MRI is longer than conventional MRI due to the need for multiple acquisitions to obtain directionally encoded Diffusion Weighted Images (DWI). This leads to increased motion artifacts, reduced signal-to-noise ratio (SNR), and increased proneness to a wide variety of artifacts, including eddy-current and motion artifacts, “venetian blind” artifacts, as well as slice-wise and gradient-wise inconsistencies. Such artifacts mandate stringent Quality Control (QC) schemes in the processing of diffusion MRI data. Most existing QC procedures are conducted in the DWI domain and/or on a voxel level, but our own experiments show that these methods often do not fully detect and eliminate certain types of artifacts, often only visible when investigating groups of DWIs or a derived diffusion model, such as the most-employed diffusion tensor imaging (DTI). Here, we propose a novel regional QC measure in the DTI domain that employs the entropy of the regional distribution of the principal directions (PD). The PD entropy quantifies the scattering and spread of the principal diffusion directions and is invariant to the patient's position in the scanner. A high entropy value indicates that the PDs are distributed relatively uniformly, while a low entropy value indicates the presence of clusters in the PD distribution. The novel QC measure is intended to complement the existing set of QC procedures by detecting and correcting residual artifacts. Such residual artifacts cause a directional bias in the measured PDs and are here called dominant direction artifacts. Experiments show that our automatic method can reliably detect and potentially correct such artifacts, especially the ones caused by the vibrations of the scanner table during the scan. The results further indicate the usefulness of this method for general quality assessment in DTI studies. PMID:23684874

  11. A Simple Deep Learning Method for Neuronal Spike Sorting

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Wu, Haifeng; Zeng, Yu

    2017-10-01

    Spike sorting is one of the key techniques for understanding brain activity. With the development of modern electrophysiology technology, recent multi-electrode technologies have been able to record the activity of thousands of neuronal spikes simultaneously. Spike sorting at this scale increases the computational complexity of conventional sorting algorithms. In this paper, we focus on how to reduce this complexity, and introduce a deep learning algorithm, the principal component analysis network (PCANet), to spike sorting. The introduced method starts from a conventional model and establishes a Toeplitz matrix. From the column vectors of this matrix, we train a PCANet, from which eigenvectors of the spikes can be extracted. Finally, a support vector machine (SVM) is used to sort the spikes. In experiments, we choose two groups of simulated data from publicly available databases and compare the introduced method with conventional methods. The results indicate that the introduced method indeed has lower complexity, with the same sorting errors as the conventional methods.
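
    A much-simplified, single-stage stand-in for the pipeline above (PCANet proper cascades two PCA filter stages): windowed copies of each waveform are stacked Toeplitz-style, the leading principal components of those windows act as learned filters, and an SVM sorts the filter responses. Waveform templates, window width, and filter count are invented for the demo.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 32)
templates = [np.exp(-((t - 0.3) ** 2) / 0.005), -np.exp(-((t - 0.5) ** 2) / 0.01)]
labels = rng.integers(0, 2, 300)
spikes = np.array([templates[c] + 0.1 * rng.normal(size=32) for c in labels])

def toeplitz_cols(w, width=8):
    """Stack delayed windows of waveform w into a Toeplitz-style matrix."""
    return np.array([w[i:i + width] for i in range(len(w) - width + 1)])

cols = np.vstack([toeplitz_cols(s) for s in spikes])   # all windows, all spikes
cols -= cols.mean(0)
_, _, Vt = np.linalg.svd(cols, full_matrices=False)
filters = Vt[:3]                                       # top-3 PCA "filters"

# feature = response of each spike's window matrix to the PCA filters
feats = np.array([(toeplitz_cols(s) @ filters.T).ravel() for s in spikes])
clf = SVC(kernel="linear").fit(feats[:200], labels[:200])
print("sorting accuracy:", (clf.predict(feats[200:]) == labels[200:]).mean())
```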

  12. Accuracy assessment methods of tissue marker clip placement after 11-gauge vacuum-assisted stereotactic breast biopsy: comparison of measurements using direct and conventional methods.

    PubMed

    Yatake, Hidetoshi; Sawai, Yuka; Nishi, Toshio; Nakano, Yoshiaki; Nishimae, Ayaka; Katsuda, Toshizo; Yabunaka, Koichi; Takeda, Yoshihiro; Inaji, Hideo

    2017-07-01

    The objective of the study was to compare direct measurement with a conventional method for evaluating clip placement in stereotactic vacuum-assisted breast biopsy (ST-VAB) and to evaluate the accuracy of clip placement using the direct method. Accuracy of clip placement was assessed by measuring the distance from a residual calcification of the targeted calcification cluster to the clip on a mammogram after ST-VAB. Distances in the craniocaudal (CC) and mediolateral oblique (MLO) views were measured in 28 subjects with mammograms recorded twice or more after ST-VAB. The difference in distance between the first and second measurements was defined as the reproducibility and was compared with that of a conventional method using a mask system with overlap of transparent film on the mammogram. The 3D clip-to-calcification distance was measured using the direct method in 71 subjects. The reproducibility of the direct method was higher than that of the conventional method in the CC and MLO views (P = 0.002, P < 0.001). The median 3D clip-to-calcification distance was 2.8 mm, with an interquartile range of 2.0-4.8 mm and a range of 1.1-36.3 mm. The direct method used in this study was more accurate than the conventional method, and gave a median 3D distance of 2.8 mm between the calcification and the clip.

  13. Efficacy of Conventional Laser Irradiation Versus a New Method for Gingival Depigmentation (Sieve Method): A Clinical Trial.

    PubMed

    Houshmand, Behzad; Janbakhsh, Noushin; Khalilian, Fatemeh; Talebi Ardakani, Mohammad Reza

    2017-01-01

    Introduction: Diode laser irradiation has recently shown promising results for the treatment of gingival pigmentation. This study sought to compare the efficacy of 2 diode laser irradiation protocols for the treatment of gingival pigmentation, namely the conventional method and the sieve method. Methods: In this split-mouth clinical trial, 15 patients with gingival pigmentation were selected and their pigmentation intensity was determined using Dummett's oral pigmentation index (DOPI) in different dental regions. A diode laser (980 nm wavelength and 2 W power) was irradiated in a stipple pattern (sieve method) on one side of the mouth and conventionally on the other side. Level of pain and satisfaction with the outcome (both patient and periodontist) were measured using a 0-10 visual analog scale (VAS) for both methods. Patients were followed up at 2 weeks, one month and 3 months. Pigmentation levels were compared using repeated-measures analysis of variance (ANOVA). The differences in level of pain and satisfaction between the 2 groups were analyzed by t test and a generalized estimating equation model. Results: No significant differences were found regarding the reduction of pigmentation scores and pain scores between the 2 groups. The difference in satisfaction with the results at the three time points was significant for both the conventional and sieve methods among patients (P = 0.001) and periodontists (P = 0.015). Conclusion: Diode laser irradiation with both methods successfully eliminated gingival pigmentations. The sieve method was comparable to the conventional technique, offering no additional advantage.

  14. Complementary and alternative medicine: attitudes, knowledge and use among surgeons and anaesthesiologists in Hungary.

    PubMed

    Soós, Sándor Árpád; Jeszenői, Norbert; Darvas, Katalin; Harsányi, László

    2016-11-08

    Despite their worldwide popularity the question of using non-conventional treatments is a source of controversy among medical professionals. Although these methods may have potential benefits it presents a problem when patients use non-conventional treatments in the perioperative period without informing their attending physician about it and this may cause adverse events and complications. To prevent this, physicians need to have a profound knowledge about non-conventional treatments. An anonymous questionnaire was distributed among surgeons and anaesthesiologists working in Hungarian university clinics and in selected city or county hospitals. Questionnaires were distributed by post, online or in person. Altogether 258 questionnaires were received from 22 clinical and hospital departments. Anaesthesiologists and surgeons use reflexology, Traditional Chinese Medicine, herbal medicine and manual therapy most frequently in their clinical practice. Traditional Chinese Medicine was considered to be the most scientifically sound method, while homeopathy was perceived as the least well-grounded method. Neural therapy was the least well-known method among our subjects. Among the subjects of our survey only 3.1 % of perioperative care physicians had some qualifications in non-conventional medicine, 12.4 % considered themselves to be well-informed in this topic and 48.4 % would like to study some complementary method. Women were significantly more interested in alternative treatments than men, p = 0.001427; OR: 2.2765. Anaesthesiologists would be significantly more willing to learn non-conventional methods than surgeons. 86.4 % of the participants thought that non-conventional treatments should be evaluated from the point of view of evidence. Both surgeons and anaesthesiologists accept the application of integrative medicine and they also approve of the idea of teaching these methods at universities. According to perioperative care physicians, non-conventional methods should be evaluated based on evidence. They also expressed a willingness to learn about those treatments that meet the criteria of evidence and apply these in their clinical practice.

  15. Can electronic medical images replace hard-copy film? Defining and testing the equivalence of diagnostic tests.

    PubMed

    Obuchowski, N A

    2001-10-15

    Electronic medical images are an efficient and convenient format in which to display, store and transmit radiographic information. Before electronic images can be used routinely to screen and diagnose patients, however, it must be shown that readers have the same diagnostic performance with this new format as traditional hard-copy film. Currently, there exist no suitable definitions of diagnostic equivalence. In this paper we propose two criteria for diagnostic equivalence. The first criterion ('population equivalence') considers the variability between and within readers, as well as the mean reader performance. This criterion is useful for most applications. The second criterion ('individual equivalence') involves a comparison of the test results for individual patients and is necessary when patients are followed radiographically over time. We present methods for testing both individual and population equivalence. The properties of the proposed methods are assessed in a Monte Carlo simulation study. Data from a mammography screening study is used to illustrate the proposed methods and compare them with results from more conventional methods of assessing equivalence and inter-procedure agreement. Copyright 2001 John Wiley & Sons, Ltd.
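
    The paper's population and individual equivalence criteria are not reproduced here; as a point of reference, the sketch below shows the generic two one-sided tests (TOST) logic often used for equivalence, applied to assumed paired differences in a reader performance score with an assumed margin delta.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# paired per-reader performance differences: digital minus film (e.g., AUC)
d = rng.normal(0.002, 0.02, 15)
delta = 0.05                                   # assumed equivalence margin

n, mean, se = len(d), d.mean(), d.std(ddof=1) / np.sqrt(len(d))
t_low = (mean + delta) / se                    # test H0: true mean <= -delta
t_high = (mean - delta) / se                   # test H0: true mean >= +delta
p_low = 1 - stats.t.cdf(t_low, df=n - 1)
p_high = stats.t.cdf(t_high, df=n - 1)
p_tost = max(p_low, p_high)                    # reject both to claim equivalence
print(f"TOST p = {p_tost:.4f}; equivalent at 5% level: {p_tost < 0.05}")
```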

  16. Short communication: separation and quantification of caseins and casein macropeptide using ion-exchange chromatography.

    PubMed

    Holland, B; Rahimi Yazdi, S; Ion Titapiccolo, G; Corredig, M

    2010-03-01

    The aim of this work was to improve an existing method to separate and quantify the 4 major caseins from milk samples (i.e., containing whey proteins) using ion-exchange chromatography. The separation was carried out using a mini-preparative cation exchange column (1 or 5 mL column volume), with urea acetate as elution buffer at pH 3.5 and a NaCl gradient. All 4 major caseins were separated, and the purity of each peak was assessed using sodium dodecyl sulfate-PAGE. Purified casein fractions were also added to raw milk to confirm their elution volumes. The quantification was carried out using purified caseins in buffer as well as added directly to fresh skim milk. This method can also be employed to determine the decrease in kappa-casein and the release of the casein macropeptide during enzymatic hydrolysis using rennet. In this case, the main advantage of this method is the lack of organic solvents compared with the conventional method for separation of the macropeptide (reversed-phase HPLC).

  17. ROKU: a novel method for identification of tissue-specific genes

    PubMed Central

    Kadota, Koji; Ye, Jiazhen; Nakai, Yuji; Terada, Tohru; Shimizu, Kentaro

    2006-01-01

    Background One of the important goals of microarray research is the identification of genes whose expression is considerably higher or lower in some tissues than in others. We would like to have ways of identifying such tissue-specific genes. Results We describe a method, ROKU, which selects tissue-specific patterns from gene expression data for many tissues and thousands of genes. ROKU ranks genes according to their overall tissue specificity using Shannon entropy and detects the tissues specific to each gene, if any exist, using an outlier detection method. We evaluated the capacity for the detection of various specific expression patterns using synthetic and real data. We observed that ROKU was superior to a conventional entropy-based method in its ability to rank genes according to overall tissue specificity and to detect genes whose expression patterns are specific only to the target tissues. Conclusion ROKU is useful for the detection of various tissue-specific expression patterns. The framework is also directly applicable to the selection of diagnostic markers for molecular classification of multiple classes. PMID:16764735
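
    A bare-bones sketch of the two ingredients named above, entropy ranking and outlier detection, with simpler stand-ins for ROKU's actual processing (ROKU first centres each profile with a one-step Tukey biweight and uses an AIC-based outlier search; here a plain entropy and a z-score rule are used, and the expression values are invented).

```python
import numpy as np

def expression_entropy(expr):
    """Shannon entropy of a gene's expression profile across tissues:
    near log2(n_tissues) for flat profiles, low for tissue-specific genes."""
    p = expr / expr.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

flat = np.array([8.0, 7.5, 8.2, 7.9, 8.1, 7.8])        # housekeeping-like gene
specific = np.array([0.2, 0.1, 9.5, 0.3, 0.2, 0.1])    # expressed in one tissue
print("flat profile H    :", expression_entropy(flat))       # close to log2(6)
print("specific profile H:", expression_entropy(specific))   # much lower

# crude stand-in for ROKU's outlier detection: flag tissues whose expression
# deviates strongly from the gene's overall level
z = (specific - specific.mean()) / specific.std(ddof=1)
print("tissue(s) flagged as specific:", np.where(np.abs(z) > 2)[0])
```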

  18. Thickness-Independent Ultrasonic Imaging Applied to Abrasive Cut-Off Wheels: An Advanced Aerospace Materials Characterization Method for the Abrasives Industry. A NASA Lewis Research Center Technology Transfer Case History

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Farmer, Donald A.

    1998-01-01

    Abrasive cut-off wheels are at times unintentionally manufactured with nonuniformity that is difficult to identify and sufficiently characterize without time-consuming, destructive examination. One particular nonuniformity is a density variation occurring around the wheel circumference or along the radius, or both. This density variation, depending on its severity, can cause wheel warpage and wheel vibration, resulting in unacceptable performance and perhaps premature failure of the wheel. Conventional nondestructive evaluation methods such as ultrasonic c-scan imaging and film radiography are inaccurate in their attempts at characterizing the density variation because a superimposed thickness variation also exists in the wheel. In this article, the single-transducer thickness-independent ultrasonic imaging method, developed specifically to allow more accurate characterization of aerospace components, is shown to precisely characterize the extent of the density variation in a cut-off wheel having a superimposed thickness variation. The method thereby has potential as an effective quality control tool in the abrasives industry for the wheel manufacturer.

  19. Free and forced vibrations of a tyre using a wave/finite element approach

    NASA Astrophysics Data System (ADS)

    Waki, Y.; Mace, B. R.; Brennan, M. J.

    2009-06-01

    Free and forced vibrations of a tyre are predicted using a wave/finite element (WFE) approach. A short circumferential segment of the tyre is modelled using conventional finite element (FE) methods, a periodicity condition applied and the mass and stiffness matrices post-processed to yield wave properties. Since conventional FE methods are used, commercial FE packages and existing element libraries can be utilised. An eigenvalue problem is formulated in terms of the transfer matrix of the segment. Zhong's method is used to improve numerical conditioning. The eigenvalues and eigenvectors give the wavenumbers and wave mode shapes, which in turn define transformations between the physical and wave domains. A method is described by which the frequency dependent material properties of the rubber components of the tyre can be included without the need to remesh the structure. Expressions for the forced response are developed which are numerically well-conditioned. Numerical results for a smooth tyre are presented. Dispersion curves for real, imaginary and complex wavenumbers are shown. The propagating waves are associated with various forms of motion of the tread supported by the stiffness of the side wall. Various dispersion phenomena are observed, including curve veering, non-zero cut-off and waves for which the phase velocity and the group velocity have opposite signs. Results for the forced response are compared with experimental measurements and good agreement is seen. The forced response is numerically determined for both finite area and point excitations. It is seen that the size of area of the excitation is particularly important at high frequencies. When the size of the excitation area is small enough compared to the tread thickness, the response at high frequencies becomes stiffness-like (reactive) and the effect of shear stiffness becomes important.
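
    A minimal numpy illustration of the wave/finite element post-processing step for the simplest possible "segment", a single two-node rod element with textbook mass and stiffness matrices, so the extracted wavenumber can be checked against the analytic rod value; the material numbers are arbitrary, and the transfer-matrix partitioning follows the standard WFE formulation rather than this paper's specific Zhong-conditioned variant.

```python
import numpy as np

# one short rod segment modelled with a standard 2-node bar element; the same
# post-processing applies to mass/stiffness matrices exported from any FE code
E, rho, A, L = 210e9, 7800.0, 1e-4, 0.01
K = E * A / L * np.array([[1.0, -1.0], [-1.0, 1.0]])
M = rho * A * L / 6 * np.array([[2.0, 1.0], [1.0, 2.0]])

omega = 2 * np.pi * 1000.0
D = K - omega**2 * M                      # dynamic stiffness of the segment
DLL, DLR = D[:1, :1], D[:1, 1:]           # partition into left/right DOFs
DRL, DRR = D[1:, :1], D[1:, 1:]

# transfer matrix relating (q, f) on the left to (q, -f) on the right
iDLR = np.linalg.inv(DLR)
T = np.block([[-iDLR @ DLL, iDLR],
              [-DRL + DRR @ iDLR @ DLL, -DRR @ iDLR]])

lam = np.linalg.eigvals(T)                # eigenvalues are exp(-i k L)
k = (1j * np.log(lam) / L).real
print("WFE wavenumbers [rad/m]:", np.sort(np.abs(k)))
print("analytic k = omega/c   :", omega / np.sqrt(E / rho))
```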

  20. Effectiveness of en masse versus two-step retraction: a systematic review and meta-analysis.

    PubMed

    Rizk, Mumen Z; Mohammed, Hisham; Ismael, Omar; Bearn, David R

    2018-01-05

    This review aims to compare the effectiveness of en masse and two-step retraction methods during orthodontic space closure regarding anchorage preservation and anterior segment retraction, and to assess their effect on the duration of treatment and root resorption. An electronic search for potentially eligible randomized controlled trials and prospective controlled trials was performed in five electronic databases up to July 2017. The process of study selection, data extraction, and quality assessment was performed by two reviewers independently. A narrative review is presented in addition to a quantitative synthesis of the pooled results where possible. The Cochrane risk of bias tool and the Newcastle-Ottawa Scale were used for the methodological quality assessment of the included studies. Eight studies were included in the qualitative synthesis in this review. Four studies were included in the quantitative synthesis. The en masse/miniscrew combination showed statistically significant standardized mean differences for anchorage preservation, −2.55 mm (95% CI −2.99 to −2.11), and for the amount of upper incisor retraction, −0.38 mm (95% CI −0.70 to −0.06), when compared to a two-step/conventional anchorage combination. Qualitative synthesis suggested that en masse retraction requires less time than two-step retraction, with no difference in the amount of root resorption. Both en masse and two-step retraction methods are effective during the space closure phase. The en masse/miniscrew combination is superior to the two-step/conventional anchorage combination with regard to anchorage preservation and amount of retraction. Limited evidence suggests that anchorage reinforcement with a headgear produces similar results with both retraction methods. Limited evidence also suggests that en masse retraction may require less time and that no significant differences exist in the amount of root resorption between the two methods.
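
    For readers unfamiliar with how such pooled differences are produced, here is the generic inverse-variance fixed-effect pooling step; the study-level numbers are invented for illustration and are not those of the trials in this review.

```python
import numpy as np

# inverse-variance fixed-effect pooling of per-study mean differences (mm)
md = np.array([-2.7, -2.3, -2.6])          # study mean differences
se = np.array([0.30, 0.25, 0.40])          # their standard errors

w = 1.0 / se**2                            # weight = inverse variance
pooled = np.sum(w * md) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled MD = {pooled:.2f} mm (95% CI {lo:.2f} to {hi:.2f})")
```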

  1. Patient attitudes toward transaxillary robot-assisted thyroidectomy.

    PubMed

    Linos, Dimitrios; Kiriakopoulos, Andreas; Petralias, Athanassios

    2013-08-01

    The transaxillary robot-assisted technique constitutes an acceptable treatment option for patients requiring thyroidectomy. However, patients' attitudes toward this new technique have not yet been analyzed. A sample of 596 randomly selected patients who underwent thyroidectomy between January 2000 and March 2010 was assessed. We evaluated patients' attitudes toward transaxillary robot-assisted thyroidectomy, taking into account the validated Patient Scar Assessment Questionnaire, the SF-36 Health Survey Questionnaire, and 11 sociodemographic and surgical patient characteristics. Only 11.6 % of the patients would prefer to have been treated with the transaxillary method. Most patients had concerns that it would be a more painful procedure (39.2 %), and they expressed satisfaction with the existing esthetic outcome (29.1 %); other concerns were that the robotic approach would be of longer duration (25.4 %) and at higher cost (15.5 %). Nevertheless, the worse the appearance of the neck scar, the more preferable the new method was (p = 0.025), a result that holds true irrespective of patients' physical health, the procedure performed (conventional or minimally invasive), and the presence of postoperative complications, among other characteristics. Patients diagnosed with a benign or uncertain neoplasm (p = 0.022) and younger patients (p = 0.003) held a more positive view of the new method. Patients who have undergone conventional thyroidectomy via the usual neck incision do not express a preference for the transaxillary method. The reasons given include various perceived disadvantages of the robotic procedure (increased pain, longer operative times, and higher cost). Younger patients, patients with poor appearance of their neck scar, and patients with benign thyroid pathology seem to hold a more positive attitude toward the robotic approach.

  2. A comparative analysis of the cryo-compression and cryo-adsorption hydrogen storage methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petitpas, G; Benard, P; Klebanoff, L E

    2014-07-01

    While conventional low-pressure LH₂ dewars have existed for decades, advanced methods of cryogenic hydrogen storage have recently been developed. These advanced methods are cryo-compression and cryo-adsorption hydrogen storage, which operate best in the temperature range 30–100 K. We present a comparative analysis of both approaches for cryogenic hydrogen storage, examining how pressure and/or sorbent materials are used to effectively increase onboard H₂ density and dormancy. We start by reviewing some basic aspects of LH₂ properties and conventional means of storing it. From there we describe the cryo-compression and cryo-adsorption hydrogen storage methods, and then explore the relationship between them, clarifying the materials science and physics of the two approaches in trying to solve the same hydrogen storage task (~5–8 kg H₂, typical of light duty vehicles). Assuming that the balance of plant and the available volume for the storage system in the vehicle are identical for both approaches, the comparison focuses on how the respective storage capacities, vessel weight and dormancy vary as a function of temperature, pressure and type of cryo-adsorption material (especially powder MOF-5 and MIL-101). By performing a comparative analysis, we clarify the science of each approach individually, identify the regimes where the attributes of each can be maximized, elucidate the properties of these systems during refueling, and probe the possible benefits of a combined “hybrid” system with both cryo-adsorption and cryo-compression phenomena operating at the same time. In addition, the relationships found between onboard H₂ capacity, pressure vessel and/or sorbent mass and dormancy as a function of rated pressure, type of sorbent material and fueling conditions are useful as general design guidelines in future engineering efforts using these two hydrogen storage approaches.

  3. Evaluation of a micro/nanofluidic chip platform for the high-throughput detection of bacteria and their antibiotic resistance genes in post-neurosurgical meningitis.

    PubMed

    Zhang, Guojun; Zheng, Guanghui; Zhang, Yan; Ma, Ruimin; Kang, Xixiong

    2018-05-01

    Post-neurosurgical meningitis (PNM) is one of the most severe hospital-acquired infections worldwide, and a large number of pathogens, especially those possessing multi-resistance genes, are related to these infections. Existing methods for detecting bacteria and measuring their response to antibiotics lack sensitivity and stability, and laboratory-based detection methods are inconvenient, requiring at least 24 h to complete. Rapid identification of bacteria and the determination of their susceptibility to antibiotics are urgently needed, in order to combat the emergence of multi-resistant bacterial strains. This study evaluated a novel, fast, and easy-to-use micro/nanofluidic chip platform (MNCP), which overcomes the difficulties of diagnosing bacterial infections in neurosurgery. This platform can identify 10 genus or species targets and 13 genetic resistance determinants within 1 h, and it is very simple to operate. A total of 108 bacterium-containing cerebrospinal fluid (CSF) cultures were tested using the MNCP for the identification of bacteria and determinants of genetic resistance. The results were compared to those obtained with conventional identification and antimicrobial susceptibility testing methods. For the 108 CSF cultures, the concordance rate between the MNCP and the conventional identification method was 94.44%; six species attained 100% consistency. For the production of carbapenemase- and extended-spectrum beta-lactamase (ESBL)-related antibiotic resistance genes, both the sensitivity and specificity of the MNCP tests were high (>90.0%) and could fully meet the requirements of clinical diagnosis. The MNCP is fast, accurate, and easy to use, and has great clinical potential in the treatment of post-neurosurgical meningitis. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.

  4. [Role of BoBs technology in early missed abortion chorionic villi].

    PubMed

    Li, Z Y; Liu, X Y; Peng, P; Chen, N; Ou, J; Hao, N; Zhou, J; Bian, X M

    2018-05-25

    Objective: To investigate the value of bacterial artificial chromosome-on-beads (BoBs) technology in the genetic analysis of early missed abortion chorionic villi. Methods: Early missed abortion chorionic villi were tested with both the conventional karyotyping method and BoBs technology at Peking Union Medical Hospital from July 2014 to March 2015. The results of BoBs were compared with those of conventional karyotyping analysis to evaluate the sensitivity, specificity and accuracy of the new method. Results: (1) A total of 161 samples were tested successfully with BoBs technology, and 131 samples were tested successfully with conventional karyotyping. (2) Results were obtained in (2.7±0.6) days with BoBs and in (22.5±1.9) days with conventional karyotyping, a statistically significant difference (t = 123.315, P < 0.01). (3) Of the 161 cases tested with BoBs, 85 (52.8%, 85/161) had abnormal chromosomes, including 79 cases of chromosome number abnormality, 4 cases of chromosome segment deletion, and 2 cases of mosaicism. Of the 131 cases tested successfully with conventional karyotyping, 79 (60.3%, 79/131) had abnormal chromosomes, including 62 cases of chromosome number abnormality and 17 cases of other chromosome abnormality; the rate of chromosome abnormality did not differ significantly between the two methods (P = 0.198). (4) With conventional karyotyping results serving as the gold standard, the accuracy of BoBs for abnormal chromosomes was 82.4% (108/131); when the analysis was restricted to the normal chromosomes (52 cases) and chromosome number abnormalities (62 cases) identified by conventional karyotyping, the accuracy of BoBs for chromosome number abnormality was 94.7% (108/114). Conclusion: BoBs is a rapid, reliable and easily operated method for testing chromosomal abnormalities in early missed abortion chorionic villi.

  5. Unsupervised learning on scientific ocean drilling datasets from the South China Sea

    NASA Astrophysics Data System (ADS)

    Tse, Kevin C.; Chiu, Hon-Chim; Tsang, Man-Yin; Li, Yiliang; Lam, Edmund Y.

    2018-06-01

    Unsupervised learning methods were applied to explore data patterns in multivariate geophysical datasets collected from ocean floor sediment core samples coming from scientific ocean drilling in the South China Sea. Compared to studies on similar datasets, but using supervised learning methods which are designed to make predictions based on sample training data, unsupervised learning methods require no a priori information and focus only on the input data. In this study, popular unsupervised learning methods including K-means, self-organizing maps, hierarchical clustering and random forest were coupled with different distance metrics to form exploratory data clusters. The resulting data clusters were externally validated with lithologic units and geologic time scales assigned to the datasets by conventional methods. Compact and connected data clusters displayed varying degrees of correspondence with existing classification by lithologic units and geologic time scales. K-means and self-organizing maps were observed to perform better with lithologic units while random forest corresponded best with geologic time scales. This study sets a pioneering example of how unsupervised machine learning methods can be used as an automatic processing tool for the increasingly high volume of scientific ocean drilling data.
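
    A small sketch of the workflow described above, under invented data: K-means clusters stand-in multivariate core measurements, and the clustering is externally validated against a conventional "lithologic unit" labelling with the adjusted Rand index.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(9)
# stand-in for multivariate core-sample measurements (e.g., density, magnetic
# susceptibility, natural gamma), with three hidden "lithologic units"
units = rng.integers(0, 3, 300)
centers = np.array([[1.8, 10.0, 20.0], [2.2, 40.0, 35.0], [2.6, 25.0, 60.0]])
X = centers[units] + rng.normal(0, [0.05, 3.0, 4.0], (300, 3))

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
# external validation against the conventional lithologic classification
print("adjusted Rand index:", adjusted_rand_score(units, clusters))
```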

  6. Application of a Dense Gas Technique for Sterilizing Soft Biomaterials

    PubMed Central

    Karajanagi, Sandeep S.; Yoganathan, Roshan; Mammucari, Raffaella; Park, Hyoungshin; Cox, Julian; Zeitels, Steven M.; Langer, Robert; Foster, Neil R.

    2017-01-01

    Sterilization of soft biomaterials such as hydrogels is challenging because existing methods such as gamma irradiation, steam sterilization, or ethylene oxide sterilization, while effective at achieving high sterility assurance levels (SAL), may compromise their physicochemical properties and biocompatibility. New methods that effectively sterilize soft biomaterials without compromising their properties are therefore required. In this report, a dense carbon dioxide (CO2)-based technique was used to sterilize soft polyethylene glycol (PEG)-based hydrogels while retaining their structure and physicochemical properties. Conventional sterilization methods such as gamma irradiation and steam sterilization severely compromised the structure of the hydrogels. PEG hydrogels with high water content and low elastic shear modulus (a measure of stiffness) were deliberately inoculated with bacteria and spores and then subjected to dense CO2. The dense CO2-based methods effectively sterilized the hydrogels, achieving a SAL of 10⁻⁷ without compromising the viscoelastic properties, pH, water content, and structure of the gels. Furthermore, dense CO2-treated gels were biocompatible and non-toxic when implanted subcutaneously in ferrets. The application of novel dense CO2-based methods to sterilize soft biomaterials has implications for developing safe sterilization methods for soft biomedical implants such as dermal fillers and viscosupplements. PMID:21337339

  7. Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments.

    PubMed

    Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke E

    2018-03-01

    Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode, to explain the maximum variance of the data. Functional PARAFAC permits the entities in different modes to be smooth functions or curves, varying over a continuum, rather than a collection of unconnected responses. The existing functional PARAFAC methods handle functions of a one-dimensional argument (e.g., time) only. In this paper, we propose a new extension of functional PARAFAC for handling three-way data whose responses are sequenced along both a two-dimensional domain (e.g., a plane with x- and y-axis coordinates) and a one-dimensional argument. Technically, the proposed method combines PARAFAC with basis function expansion approximations, using a set of piecewise quadratic finite element basis functions for estimating two-dimensional smooth functions and a set of one-dimensional basis functions for estimating one-dimensional smooth functions. In a simulation study, the proposed method appeared to outperform the conventional PARAFAC. We apply the method to EEG data to demonstrate its empirical usefulness.
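
    For orientation, a conventional (non-functional) rank-3 PARAFAC of a synthetic three-way array can be computed with the tensorly library; the EEG-like tensor shape and the rank below are illustrative assumptions, not values from the paper.

```python
# Minimal three-way PARAFAC decomposition sketch using tensorly; this is the
# conventional trilinear model, not the functional extension proposed above.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
tensor = tl.tensor(rng.normal(size=(32, 200, 15)))  # channels x time x subjects

# Rank-3 trilinear decomposition: tensor ~ sum_r a_r outer b_r outer c_r
weights, factors = parafac(tensor, rank=3)
for mode, f in zip(("channel", "time", "subject"), factors):
    print(mode, f.shape)  # (32, 3), (200, 3), (15, 3)
```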

  8. Moving force identification based on modified preconditioned conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.; Nguyen, Andy

    2018-06-01

    This paper develops a modified preconditioned conjugate gradient (M-PCG) method for moving force identification (MFI) by improving the conjugate gradient (CG) and preconditioned conjugate gradient (PCG) methods with a modified Gram-Schmidt algorithm. The method aims to obtain more accurate and more efficient identification results from the responses of a bridge deck caused by passing vehicles, an inverse problem known to be ill-posed. A simply supported beam model with biaxial time-varying forces is used to generate numerical simulations with various analysis scenarios to assess the effectiveness of the method. Evaluation results show that the regularization matrix L and the number of iterations j strongly influence the identification accuracy and noise immunity of M-PCG. Compared with the conventional counterpart SVD embedded in the time domain method (TDM) and the standard form of CG, the M-PCG with a proper regularization matrix has many advantages, such as better adaptability and greater robustness to ill-posed problems. More importantly, it is shown that the average optimal number of iterations of M-PCG can be reduced by more than 70% compared with PCG, and this makes M-PCG a preferred choice for field MFI applications.
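
    As a rough illustration of the building blocks involved (not the authors' M-PCG itself), the sketch below solves a Tikhonov-regularized least-squares identification problem with SciPy's conjugate gradient and a simple diagonal preconditioner; `A`, `b`, `L`, and `lam` are all placeholder names.

```python
# Regularized normal equations solved by preconditioned CG; synthetic data.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 80))                              # response matrix
b = A @ rng.normal(size=80) + 0.01 * rng.normal(size=200)   # noisy measurements
L = np.eye(80)                                              # regularization matrix
lam = 1e-2                                                  # regularization weight

# Normal equations of min ||Ax - b||^2 + lam * ||Lx||^2
H = A.T @ A + lam * (L.T @ L)
rhs = A.T @ b
d = np.diag(H)

# Jacobi (diagonal) preconditioner as a simple stand-in
M = LinearOperator(H.shape, matvec=lambda v: v / d)

x, info = cg(H, rhs, M=M)
print("converged" if info == 0 else f"cg info={info}",
      "residual:", np.linalg.norm(A @ x - b))
```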

  9. Control strategies for systems with limited actuators

    NASA Technical Reports Server (NTRS)

    Marcopoli, Vincent R.; Phillips, Stephen M.

    1994-01-01

    This work investigates the effects of actuator saturation in multi-input, multi-output (MIMO) control systems. The adverse system behavior introduced by the saturation nonlinearity is viewed here as resulting from two mechanisms: controller windup - a problem caused by the discrepancy between the limited actuator commands and the corresponding control signals, and directionality - the problem of how to use nonlimited actuators when a limited condition exists. The tracking mode and Hanus methods are two common strategies for dealing with the windup problem. It is seen that while these methods alleviate windup, performance problems remain due to plant directionality. Though high-gain conventional antiwindup as well as more general linear methods have the potential to address both windup and directionality, no systematic design method for these schemes has emerged; most approaches used in practice are application driven. An alternative method of addressing the directionality problem is presented which involves the introduction of a control-direction-preserving nonlinearity to the Hanus antiwindup system. A nonlinearity is subsequently proposed which reduces the conservatism inherent in the former direction-preserving approach, improving performance. The concept of multivariable sensitivity is seen to play a key role in the success of the new method.
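
    The direction-preserving idea can be illustrated with a small sketch: per-channel clipping rotates the commanded control direction, whereas scaling the whole command vector preserves it. The function names and limits below are illustrative, not from the paper.

```python
# Two ways to handle multi-input actuator limits on a command vector u.
import numpy as np

def saturate_elementwise(u, u_max):
    """Conventional per-channel clipping; may change the control direction."""
    return np.clip(u, -u_max, u_max)

def saturate_direction_preserving(u, u_max):
    """Uniformly scale u so every channel respects its limit."""
    scale = np.max(np.abs(u) / u_max)
    return u if scale <= 1.0 else u / scale

u = np.array([3.0, 0.5])
u_max = np.array([1.0, 1.0])
print(saturate_elementwise(u, u_max))           # [1.  0.5]   direction rotated
print(saturate_direction_preserving(u, u_max))  # [1.  0.1667] direction kept
```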

  10. A hybrid degradation tendency measurement method for mechanical equipment based on moving window and Grey-Markov model

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Zhou, Jianzhong; Zheng, Yang; Liu, Han

    2017-11-01

    Accurate degradation tendency measurement is vital for the secure operation of mechanical equipment. However, the existing techniques and methodologies for degradation measurement still face challenges, such as the lack of an appropriate degradation indicator, insufficient accuracy, and poor capability to track data fluctuations. To solve these problems, a hybrid degradation tendency measurement method for mechanical equipment based on a moving window and the Grey-Markov model is proposed in this paper. In the proposed method, a 1D normalized degradation index based on multi-feature fusion is designed to assess the extent of degradation. Subsequently, the moving window algorithm is integrated with the Grey-Markov model for dynamic updating of the model. Two key parameters, namely the step size and the number of states, contribute to the adaptive modeling and multi-step prediction. Finally, three types of combination prediction models are established to measure the degradation trend of equipment. The effectiveness of the proposed method is validated with a case study on the health monitoring of turbine engines. Experimental results show that the proposed method has better performance, in terms of both measurement accuracy and data-fluctuation tracking, in comparison with other conventional methods.
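
    As a hedged sketch of the grey-prediction building block only (omitting the Markov state correction and the multi-feature fusion), a GM(1,1) model can be fitted to the most recent window of a degradation-index series and extrapolated a few steps ahead; the series and window size below are invented for illustration.

```python
# GM(1,1) grey model fitted over a moving window; synthetic degradation index.
import numpy as np

def gm11_forecast(x, steps=1):
    """Fit GM(1,1) to 1-D series x and forecast `steps` values ahead."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x1 = np.cumsum(x)                          # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(n + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a   # whitening-equation solution
    x0_hat = np.concatenate([[x[0]], np.diff(x1_hat)]) # restore original series
    return x0_hat[n:]

# Moving-window use: refit on the most recent `window` points at every step.
rng = np.random.default_rng(0)
series = 1.0 + 0.05 * np.arange(30) + 0.02 * rng.normal(size=30)
window = 10
print(gm11_forecast(series[-window:], steps=3))
```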

  11. Normalized Rotational Multiple Yield Surface Framework (NRMYSF) stress-strain curve prediction method based on small strain triaxial test data on undisturbed Auckland residual clay soils

    NASA Astrophysics Data System (ADS)

    Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.

    2018-04-01

    Small-strain local measurement in triaxial tests is considered significantly more accurate than external strain measurement by the conventional method, which is prone to systematic errors. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, spaced 120° apart. The setup, using a 0.4 N resolution load cell and a 16-bit AD converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small-strain local measurement data was performed using the new Normalized Rotational Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small-strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength, which can reduce the cost and time of experimental laboratory testing.

  12. A feasibility study of the destruction of chemical weapons by photocatalytic oxidation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hitchman, M.L.; Spackman, A.R.; Yusta, F.J.

    1997-01-01

    The destruction of existing arsenals or deposits of chemical weapons is an important challenge on the way to the successful implementation of the Chemical Weapons Convention, which was opened for signature in 1993. Many approaches have been proposed and none can be seen as a panacea. Each has its merits and shortcomings. In this paper we review the different technologies and propose a new one, photocatalytic oxidation, which has the potential to fill an important gap: a cheap, small, mobile facility for chemical warfare agents which are difficult to transport or are deposited in a remote area. We report some relevant experimental results with this technology for the destruction of chemical weapons. After many years of negotiation, a convention banning the production, possession and use of chemical weapons was opened for signature in Paris on January 13, 1993. The convention, once it is ratified, will provide a framework and a program for the destruction of chemical weapons by the nations party to it. The framework will cover such topics as definitions of terminology, general rules of verification and verification measures, level of destruction of chemical weapons, activities not prohibited under the convention, and investigations in cases of alleged use of chemical weapons. The program will require that countries with chemical weapons shall start their destruction not later than one year after they have ratified the convention, and that they shall complete it within a ten year period. For this period involved countries are required to declare their plans for destruction. These plans have to include a time schedule for the destruction process, an inventory of equipment and buildings to be destroyed, proposed measures for verification, safety measures to be observed during destruction, specification of the types of chemical weapons and the type and quantity of chemical fill to be destroyed, and specification of the destruction method. 38 refs.

  13. Cost effectiveness of conventional versus LANDSAT use data for hydrologic modeling

    NASA Technical Reports Server (NTRS)

    George, T. S.; Taylor, R. S.

    1982-01-01

    Six case studies were analyzed to investigate the cost effectiveness of using land use data obtained from LANDSAT as opposed to conventionally obtained data. A procedure was developed to determine the relative effectiveness of the two alternative means of acquiring data for hydrological modelling. The cost of conventionally acquired data ranged between $3,000 and $16,000 for the six test basins. Information based on LANDSAT imagery cost between $2,000 and $5,000. Results of the effectiveness analysis show that the differences between the two methods are insignificant. From the cost comparison and the fact that each method, conventional and LANDSAT, is shown to be equally effective in developing land use data for hydrologic studies, the cost effectiveness of the conventional or LANDSAT method is found to be a function of basin size for the six test watersheds analyzed. The LANDSAT approach is cost effective for areas containing more than 10 square miles.

  14. SU-F-T-538: CyberKnife with MLC for Treatment of Large Volume Tumors: A Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bichay, T; Mayville, A

    2016-06-15

    Purpose: CyberKnife is a well-documented modality for SRS and SBRT treatments. Typical tumors are small and 1–5 fractions are usually used. We determined the feasibility of using CyberKnife, with an InCise multileaf collimator option, for larger tumors undergoing standard dose and fractionation. The intent was to understand the limitations of using this modality for other external beam radiation treatments. Methods: Five tumors from different anatomical sites with volumes from 127.8 cc to 1,320.5 cc were contoured and planned on a Multiplan V5.1 workstation. The target average diameter ranged from 7 cm to 13 cm. The dose fractionation was 1.8–2.0 Gy/fraction and 25–45 fractions for total doses of 45–81 Gy. The sites planned were: pancreas, head and neck, prostate, anal, and esophagus. The plans were optimized to meet conventional dose constraints based on various RTOG protocols for conventional fractionation. Results: The Multiplan treatment planning system successfully generated clinically acceptable plans for all sites studied. The resulting dose distributions achieved reasonable target coverage, all greater than 95%, and satisfactory normal tissue sparing. Treatment times ranged from 9 minutes to 38 minutes, the longest being a head and neck plan with dual targets receiving different doses and with multiple adjacent critical structures. Conclusion: CyberKnife, with the InCise multileaf collimation option, can achieve acceptable dose distributions in large volume tumors treated with conventional dose and fractionation. Although treatment times are greater than conventional accelerator times, target coverage and dose to critical structures can be kept within a clinically acceptable range. While time limitations exist, when necessary CyberKnife can provide an alternative to traditional treatment modalities for large volume tumors.

  15. Interactive Dose Shaping - efficient strategies for CPU-based real-time treatment planning

    NASA Astrophysics Data System (ADS)

    Ziegenhein, P.; Kamerling, C. P.; Oelfke, U.

    2014-03-01

    Conventional intensity modulated radiation therapy (IMRT) treatment planning is based on the traditional concept of iterative optimization using an objective function specified by dose volume histogram constraints for pre-segmented VOIs. This indirect approach suffers from unavoidable shortcomings: i) The control of local dose features is limited to segmented VOIs. ii) Any objective function is a mathematical measure of the plan quality, i.e., is not able to define the clinically optimal treatment plan. iii) Adapting an existing plan to changed patient anatomy as detected by IGRT procedures is difficult. To overcome these shortcomings, we introduce the method of Interactive Dose Shaping (IDS) as a new paradigm for IMRT treatment planning. IDS allows for a direct and interactive manipulation of local dose features in real-time. The key element driving the IDS process is a two-step Dose Modification and Recovery (DMR) strategy: A local dose modification is initiated by the user which translates into modified fluence patterns. This also affects existing desired dose features elsewhere which is compensated by a heuristic recovery process. The IDS paradigm was implemented together with a CPU-based ultra-fast dose calculation and a 3D GUI for dose manipulation and visualization. A local dose feature can be implemented via the DMR strategy within 1-2 seconds. By imposing a series of local dose features, equal plan qualities could be achieved compared to conventional planning for prostate and head and neck cases within 1-2 minutes. The idea of Interactive Dose Shaping for treatment planning has been introduced and first applications of this concept have been realized.

  16. Diagnosis of apical hypertrophic cardiomyopathy: T-wave inversion and relative but not absolute apical left ventricular hypertrophy

    PubMed Central

    Flett, Andrew S.; Maestrini, Viviana; Milliken, Don; Fontana, Mariana; Treibel, Thomas A.; Harb, Rami; Sado, Daniel M.; Quarta, Giovanni; Herrey, Anna; Sneddon, James; Elliott, Perry; McKenna, William; Moon, James C.

    2015-01-01

    Background Diagnosis of apical HCM utilizes conventional wall thickness criteria. The normal left ventricular wall thins towards the apex such that normal values are lower in the apical versus the basal segments. The impact of this on the diagnosis of apical hypertrophic cardiomyopathy has not been evaluated. Methods We performed a retrospective review of 2662 consecutive CMR referrals, of which 75 patients were identified in whom there was abnormal T-wave inversion on ECG and a clinical suspicion of hypertrophic cardiomyopathy. These were retrospectively analyzed for imaging features consistent with cardiomyopathy, specifically: relative apical hypertrophy, left atrial dilatation, scar, apical cavity obliteration or apical aneurysm. For comparison, the same evaluation was performed in 60 healthy volunteers and 50 hypertensive patients. Results Of the 75 patients, 48 met conventional HCM diagnostic criteria and went on to act as another comparator group. Twenty-seven did not meet criteria for HCM and of these 5 had no relative apical hypertrophy and were not analyzed further. The remaining 22 patients had relative apical thickening with an apical:basal wall thickness ratio > 1 and a higher prevalence of features consistent with a cardiomyopathy than in the control groups, with 54% having 2 or more of the 4 features. No individual in the healthy volunteer group had more than one feature and no hypertension patient had more than 2. Conclusion A cohort of individuals exists with T-wave inversion, relative apical hypertrophy and additional imaging features of HCM, suggesting an apical HCM phenotype not captured by existing diagnostic criteria. PMID:25666123

  17. Tobacco Packaging and Mass Media Campaigns: Research Needs for Articles 11 and 12 of the WHO Framework Convention on Tobacco Control

    PubMed Central

    2013-01-01

    Introduction: Communicating the health risks of smoking remains a primary objective of tobacco-control policy. Articles 11 and 12 of the World Health Organization’s Framework Convention on Tobacco Control establish standards for two important forms of communication: packaging regulations (Article 11), and mass media campaigns (Article 12). Methods: A narrative review approach was used to identify existing evidence in the areas of package labeling regulations (including health warnings, constituent and emission messages, and prohibitions on misleading information) and communication activities (including mass media campaigns and news media coverage). When available, recent reviews of the literature were used, updated with more recent high-quality studies from published literature. Results: Implementation of Articles 11 and 12 share several important research priorities: (a) identify existing consumer information needs and gaps, (b) research on the message source to identify effective types of content for health warnings and media campaigns, (c) research on how messages are processed and the extent to which the content and form of messages need to be tailored to different cultural and geographic groups, as well as subgroups within countries, and (d) research to identify the most cost-effective mix and best practices for sustaining health communications over time. Conclusion: A unifying theme of effective health communication through tobacco packaging and mass media campaigns is the need to provide salient, timely, and engaging reminders of the consequences of tobacco use in ways that motivate and support tobacco users trying to quit and make tobacco use less appealing for those at risk of taking it up. PMID:23042986

  18. Big Data and medicine: a big deal?

    PubMed

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  19. Quantum liquids get thin

    NASA Astrophysics Data System (ADS)

    Ferrier-Barbut, Igor; Pfau, Tilman

    2018-01-01

    A liquid exists when interactions that attract its constituent particles to each other are counterbalanced by a repulsion acting at higher densities. Other characteristics of liquids are short-range correlations and the existence of surface tension (1). Ultracold atom experiments provide a privileged platform with which to observe exotic states of matter, but the densities are far too low to obtain a conventional liquid because the atoms are too far apart to create repulsive forces arising from the Pauli exclusion principle of the atoms' internal electrons. The observation of quantum liquid droplets in an ultracold mixture of two quantum fluids is now reported on page 301 of this issue by Cabrera et al. (2) and a recent preprint by Semeghini et al. (3). Unlike conventional liquids, these liquids arise from a weak attraction and repulsive many-body correlations in the mixtures.

  20. Original Experimental Approach for Assessing Transport Fuel Stability.

    PubMed

    Bacha, Kenza; Ben Amara, Arij; Alves Fortunato, Maira; Wund, Perrine; Veyrat, Benjamin; Hayrault, Pascal; Vannier, Axel; Nardin, Michel; Starck, Laurie

    2016-10-21

    The study of fuel oxidation stability is an important issue for the development of future fuels. Diesel and kerosene fuel systems have undergone several technological changes to fulfill environmental and economic requirements. These developments have resulted in increasingly severe operating conditions whose suitability for conventional and alternative fuels needs to be addressed. For example, fatty acid methyl esters (FAMEs) introduced as biodiesel are more prone to oxidation and may lead to deposit formation. Although several methods exist to evaluate fuel stability (induction period, peroxides, acids, and insolubles), no technique allows one to monitor the real-time oxidation mechanism and to measure the formation of oxidation intermediates that may lead to deposit formation. In this article, we developed an advanced oxidation procedure (AOP) based on two existing reactors. This procedure allows the simulation of different oxidation conditions and the monitoring of the oxidation progress by the means of macroscopic parameters, such as total acid number (TAN) and advanced analytical methods like gas chromatography coupled to mass spectrometry (GC-MS) and Fourier Transform Infrared - Attenuated Total Reflection (FTIR-ATR). We successfully applied AOP to gain an in-depth understanding of the oxidation kinetics of a model molecule (methyl oleate) and commercial diesel and biodiesel fuels. These developments represent a key strategy for fuel quality monitoring during logistics and on-board utilization.

  1. Rate Change Graph Technology: Absolute Value Point Methodology

    NASA Astrophysics Data System (ADS)

    Strickland, Ken; Duvernois, Michael

    2011-10-01

    Absolute Value Point Methodology (AVPM) is a new theoretical tool for science research centered on Rate Change Graph Technology (RCGT). The modeling techniques of AVPM surpass conventional methods by extending the geometrical rules of mathematics. Exact geometrical structures of matter and energy become clearer revealing new ways to compile advanced data. RCGT mechanics is realized from geometrical intersections that are the result of plotting changing value vs. changing geometry. RCGT methods ignore size and value to perform an objective analysis in geometry. Value and size are then re-introduced back into the analytical system for a clear and concise solution. Available AVPM applications reveal that a massive amount of data from the Big Bang to vast super-clusters is untouched by human thought. Once scientists learn to design tools from RCGT Mechanics, new and formidable approaches to experimentation and theory may lead to new discoveries. In the creation of AVPM, it has become apparent there is a particle-world that exists between strings and our familiar universe. These unrealized particles in their own nature exhibit inflation like properties and may be the progenitor of the implements of our universe. Thus space, time, energy, motion, space-time and gravity are born from its existence and decay. This announcement will be the beginning of many new ideas from the study of RCGT mechanics.

  2. Time-reversed ultrasonically encoded (TRUE) focusing for deep-tissue optogenetic modulation

    NASA Astrophysics Data System (ADS)

    Brake, Joshua; Ruan, Haowen; Robinson, J. Elliott; Liu, Yan; Gradinaru, Viviana; Yang, Changhuei

    2018-02-01

    The problem of optical scattering was long thought to fundamentally limit the depth at which light could be focused through turbid media such as fog or biological tissue. However, recent work in the field of wavefront shaping has demonstrated that by properly shaping the input light field, light can be noninvasively focused to desired locations deep inside scattering media. This has led to the development of several new techniques which have the potential to enhance the capabilities of existing optical tools in biomedicine. Unfortunately, extending these methods to living tissue has a number of challenges related to the requirements for noninvasive guidestar operation, speed, and focusing fidelity. Of existing wavefront shaping methods, time-reversed ultrasonically encoded (TRUE) focusing is well suited for applications in living tissue since it uses ultrasound as a guidestar which enables noninvasive operation and provides compatibility with optical phase conjugation for high-speed operation. In this paper, we will discuss the results of our recent work to apply TRUE focusing for optogenetic modulation, which enables enhanced optogenetic stimulation deep in tissue with a 4-fold spatial resolution improvement in 800-micron thick acute brain slices compared to conventional focusing, and summarize future directions to further extend the impact of wavefront shaping technologies in biomedicine.

  3. Characterization and stability studies of a novel liposomal cyclosporin A prepared using the supercritical fluid method: comparison with the modified conventional Bangham method

    PubMed Central

    Karn, Pankaj Ranjan; Cho, Wonkyung; Park, Hee-Jun; Park, Jeong-Sook; Hwang, Sung-Joo

    2013-01-01

    A novel method to prepare cyclosporin A encapsulated liposomes was introduced using supercritical fluid of carbon dioxide (SCF-CO2) as an antisolvent. To investigate the strength of the newly developed SCF-CO2 method compared with the modified conventional Bangham method, particle size, zeta potential, and polydispersity index (PDI) of both liposomal formulations were characterized and compared. In addition, entrapment efficiency (EE) and drug loading (DL) characteristics were analyzed by reversed-phase high-performance liquid chromatography. Significantly larger particle size and PDI were revealed from the conventional method, while EE (%) and DL (%) did not exhibit any significant differences. The SCF-CO2 liposomes were found to be relatively smaller, multilamellar, and spherical with a smoother surface as determined by transmission electron microscopy. SCF-CO2 liposomes showed no significant differences in their particle size and PDI after more than 3 months, whereas conventional liposomes exhibited significant changes in their particle size. The initial yield (%), EE (%), and DL (%) of SCF-CO2 liposomes and conventional liposomes were 90.98 ± 2.94, 92.20 ± 1.36, 20.99 ± 0.84 and 90.72 ± 2.83, 90.24 ± 1.37, 20.47 ± 0.94, respectively, which changed after 14 weeks to 86.65 ± 0.30, 87.63 ± 0.72, 18.98 ± 0.22 and 75.04 ± 8.80, 84.59 ± 5.13, 15.94 ± 2.80, respectively. Therefore, the newly developed SCF-CO2 method could be a better alternative compared with the conventional method and may provide a promising approach for large-scale production of liposomes. PMID:23378759

  4. Developing new automated alternation flicker using optic disc photography for the detection of glaucoma progression

    PubMed Central

    Ahn, J; Yun, I S; Yoo, H G; Choi, J-J; Lee, M

    2017-01-01

    Purpose To evaluate a progression-detecting algorithm for a new automated matched alternation flicker (AMAF) in glaucoma patients. Methods Open-angle glaucoma patients with a baseline mean deviation of the visual field (VF) test > −6 dB were included in this longitudinal and retrospective study. Functional progression was detected by two VF progression criteria and structural progression by both AMAF and conventional comparison methods using optic disc and retinal nerve fiber layer (RNFL) photography. Progression-detecting performances of AMAF and the conventional method were evaluated by an agreement between functional and structural progression criteria. RNFL thickness changes measured by optical coherence tomography (OCT) were compared between progressing and stable eyes determined by each method. Results Among 103 eyes, 47 (45.6%), 21 (20.4%), and 32 (31.1%) eyes were evaluated as glaucoma progression using AMAF, the conventional method, and guided progression analysis (GPA) of the VF test, respectively. The AMAF showed better agreement than the conventional method, using GPA of the VF test (κ=0.337; P<0.001 and κ=0.124; P=0.191, respectively). The rates of RNFL thickness decay using OCT were significantly different between the progressing and stable eyes when progression was determined by AMAF (−3.49±2.86 μm per year vs −1.83±3.22 μm per year; P=0.007) but not by the conventional method (−3.24±2.42 μm per year vs −2.42±3.33 μm per year; P=0.290). Conclusions The AMAF was better than the conventional comparison method in discriminating structural changes during glaucoma progression, and showed a moderate agreement with functional progression criteria. PMID:27662466

  5. Identification and Antifungal Susceptibility Testing of Candida Species: A Comparison of Vitek-2 System with Conventional and Molecular Methods.

    PubMed

    Kaur, Ravinder; Dhakad, Megh Singh; Goyal, Ritu; Haque, Absarul; Mukhopadhyay, Gauranga

    2016-01-01

    Candida infection is a major cause of morbidity and mortality in immunocompromised patients; accurate and early identification is a prerequisite for effective patient management. The purpose of this study was to compare the conventional identification of Candida species with identification by the Vitek-2 system, and antifungal susceptibility testing (AST) by the broth microdilution method with the Vitek-2 AST system. A total of 172 Candida isolates were subjected to identification by conventional methods, the Vitek-2 system, restriction fragment length polymorphism, and random amplified polymorphic DNA analysis. AST was carried out as per the Clinical and Laboratory Standards Institute M27-A3 document and by the Vitek-2 system. Candida albicans (82.51%) was the most common Candida species, followed by Candida tropicalis (6.29%), Candida krusei (4.89%), Candida parapsilosis (3.49%), and Candida glabrata (2.79%). With the Vitek-2 system, of the 172 isolates, 155 Candida isolates were correctly identified, 13 were misidentified, and four were identified with low discrimination. With conventional methods, 171 Candida isolates were correctly identified and only a single isolate of C. albicans was misidentified as C. tropicalis. The average measurement of agreement between the Vitek-2 system and conventional methods was >94%. Most of the isolates were susceptible to fluconazole (88.95%) and amphotericin B (97.67%). The measurement of agreement between the methods of AST was >94% for fluconazole and >99% for amphotericin B, which was statistically significant (P<0.01). The study confirmed the importance and reliability of conventional and molecular methods, and the acceptable agreements suggest the Vitek-2 system as an alternative method for speciation and sensitivity testing of Candida species.

  6. Color digital halftoning taking colorimetric color reproduction into account

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Suzuki, Toshiaki; Shimoyama, Nobukatsu; Miyake, Yoichi

    1996-01-01

    Taking colorimetric color reproduction into account, the conventional error diffusion method is modified for color digital half-toning. Assuming that the input to a bilevel color printer is given in CIE-XYZ tristimulus values or CIE-LAB values instead of the more conventional RGB or YMC values, two modified versions based on vector operation in (1) the XYZ color space and (2) the LAB color space were tested. Experimental results show that the modified methods, especially the method using the LAB color space, resulted in better color reproduction performance than the conventional methods. Spatial artifacts that appear in the modified methods are presented and analyzed. It is also shown that the modified method (2) with a thresholding technique achieves a good spatial image quality.
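
    A minimal sketch of vector error diffusion in the spirit of the modified methods: Floyd-Steinberg weights, with the quantization error computed and diffused as a color vector rather than per channel. The palette and the use of a plain RGB-like space are simplifying assumptions; the paper's variants operate on CIE-XYZ or CIE-LAB values.

```python
# Vector error diffusion to an 8-color bilevel palette (Floyd-Steinberg weights).
import numpy as np

palette = np.array([[r, g, b] for r in (0, 1) for g in (0, 1) for b in (0, 1)],
                   dtype=float)  # the 8 colors a bilevel color printer can produce

def vector_error_diffusion(img):
    """img: H x W x 3 floats in [0,1]; returns indices into `palette`."""
    work = img.astype(float).copy()
    out = np.zeros(img.shape[:2], dtype=int)
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            pix = work[y, x]
            # Nearest palette color by Euclidean distance (a vector operation)
            idx = np.argmin(((palette - pix) ** 2).sum(axis=1))
            out[y, x] = idx
            err = pix - palette[idx]           # 3-vector quantization error
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return out

rng = np.random.default_rng(0)
img = rng.uniform(size=(64, 64, 3))
indices = vector_error_diffusion(img)
# The halftone's mean color should approximate the original's mean color.
print(palette[indices].mean(axis=(0, 1)), img.mean(axis=(0, 1)))
```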

  7. Ferrous sulfate based low temperature synthesis and magnetic properties of nickel ferrite nanostructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tejabhiram, Y., E-mail: tejabhiram@gmail.com; Pradeep, R.; Helen, A.T.

    2014-12-15

    Highlights: • Novel low temperature synthesis of nickel ferrite nanoparticles. • Comparison with two conventional synthesis techniques including hydrothermal method. • XRD results confirm the formation of crystalline nickel ferrites at 110 °C. • Superparamagnetic particles with applications in drug delivery and hyperthermia. • Magnetic properties superior to conventional methods found in new process. - Abstract: We report a simple, low temperature and surfactant free co-precipitation method for the preparation of nickel ferrite nanostructures using ferrous sulfate as the iron precursor. The products obtained from this method were compared for their physical properties with nickel ferrites produced through conventional co-precipitation and hydrothermal methods which used ferric nitrate as the iron precursor. X-ray diffraction analysis confirmed the synthesis of single phase inverse spinel nanocrystalline nickel ferrites at a temperature as low as 110 °C in the low temperature method. Electron microscopy analysis on the samples revealed the formation of nearly spherical nanostructures in the size range of 20–30 nm, which are comparable to other conventional methods. Vibrating sample magnetometer measurements showed the formation of superparamagnetic particles with a high magnetic saturation of 41.3 emu/g, which corresponds well with conventional synthesis methods. The spontaneous synthesis of the nickel ferrite nanoparticles by the low temperature synthesis method was attributed to the presence of 0.808 kJ mol⁻¹ of excess Gibbs free energy due to the ferrous sulfate precursor.

  8. 77 FR 59891 - Proposed Information Collection; Comment Request; Chemical Weapons Convention Declaration and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-01

    ... Request; Chemical Weapons Convention Declaration and Report Handbook and Forms AGENCY: Bureau of Industry.... Abstract The Chemical Weapons Convention Implementation Act of 1998 and Commerce Chemical Weapons... Chemical Weapons Convention (CWC), an international arms control treaty. II. Method of Collection Submitted...

  9. In vivo precision of conventional and digital methods of obtaining complete-arch dental impressions.

    PubMed

    Ender, Andreas; Attin, Thomas; Mehl, Albert

    2016-03-01

    Digital impression systems have undergone significant development in recent years, but few studies have investigated the accuracy of the technique in vivo, particularly compared with conventional impression techniques. The purpose of this in vivo study was to investigate the precision of conventional and digital methods for complete-arch impressions. Complete-arch impressions were obtained using 5 conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; digitized scannable vinylsiloxanether, VSES-D; and irreversible hydrocolloid, ALG) and 7 digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; Lava COS, LAV; Lava True Definition Scanner, T-Def; 3Shape Trios, TRI; and 3Shape Trios Color, TRC) techniques. Impressions were made 3 times each in 5 participants (N=15). The impressions were then compared within and between the test groups. The cast surfaces were measured point-to-point using the signed nearest neighbor method. Precision was calculated from the (90%-10%)/2 percentile value. The precision ranged from 12.3 μm (VSE) to 167.2 μm (ALG), with the highest precision in the VSE and VSES groups. The deviation pattern varied distinctly according to the impression method. Conventional impressions showed the highest accuracy across the complete dental arch in all groups, except for the ALG group. Conventional and digital impression methods differ significantly in the complete-arch accuracy. Digital impression systems had higher local deviations within the complete arch cast; however, they achieve equal and higher precision than some conventional impression materials. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
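
    The precision statistic is straightforward to reproduce in outline: nearest-neighbor deviations between two superimposed surface point clouds, then the (90%−10%)/2 percentile spread. The point clouds below are synthetic placeholders, and the signed distances of the study are simplified to unsigned ones here.

```python
# (90%-10%)/2 percentile precision from nearest-neighbor surface deviations.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
ref = rng.normal(size=(5000, 3))                  # reference cast surface points
test = ref + 0.01 * rng.normal(size=ref.shape)    # a slightly deviating repeat scan

dist, _ = cKDTree(ref).query(test)                # nearest-neighbor deviations
precision = (np.percentile(dist, 90) - np.percentile(dist, 10)) / 2
print(f"precision: {precision:.4f} (same units as the coordinates)")
```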

  10. A Stereological Method for the Quantitative Evaluation of Cartilage Repair Tissue

    PubMed Central

    Nyengaard, Jens Randel; Lind, Martin; Spector, Myron

    2015-01-01

    Objective To implement stereological principles to develop an easy applicable algorithm for unbiased and quantitative evaluation of cartilage repair. Design Design-unbiased sampling was performed by systematically sectioning the defect perpendicular to the joint surface in parallel planes providing 7 to 10 hematoxylin–eosin stained histological sections. Counting windows were systematically selected and converted into image files (40-50 per defect). The quantification was performed by two-step point counting: (1) calculation of defect volume and (2) quantitative analysis of tissue composition. Step 2 was performed by assigning each point to one of the following categories based on validated and easy distinguishable morphological characteristics: (1) hyaline cartilage (rounded cells in lacunae in hyaline matrix), (2) fibrocartilage (rounded cells in lacunae in fibrous matrix), (3) fibrous tissue (elongated cells in fibrous tissue), (4) bone, (5) scaffold material, and (6) others. The ability to discriminate between the tissue types was determined using conventional or polarized light microscopy, and the interobserver variability was evaluated. Results We describe the application of the stereological method. In the example, we assessed the defect repair tissue volume to be 4.4 mm3 (CE = 0.01). The tissue fractions were subsequently evaluated. Polarized light illumination of the slides improved discrimination between hyaline cartilage and fibrocartilage and increased the interobserver agreement compared with conventional transmitted light. Conclusion We have applied a design-unbiased method for quantitative evaluation of cartilage repair, and we propose this algorithm as a natural supplement to existing descriptive semiquantitative scoring systems. We also propose that polarized light is effective for discrimination between hyaline cartilage and fibrocartilage. PMID:26069715
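
    A back-of-the-envelope sketch of the two-step point counting, assuming illustrative values for section spacing, area per grid point, and counts (none of these numbers come from the study):

```python
# Step 1: Cavalieri volume estimate from systematic sections; step 2: tissue
# fractions from category counts. All numbers below are invented for illustration.
t = 0.5            # distance between analyzed sections, mm (assumption)
a_per_p = 0.01     # area associated with each grid point, mm^2 (assumption)
points_in_defect = [120, 145, 160, 150, 130, 95, 80]   # grid hits per section

# Cavalieri estimator: V = t * (a/p) * sum of points hitting the defect
volume = t * a_per_p * sum(points_in_defect)
print(f"defect volume ~ {volume:.2f} mm^3")

# Fraction of each tissue category among points inside the defect
counts = {"hyaline": 300, "fibrocartilage": 250, "fibrous": 150,
          "bone": 100, "scaffold": 50, "other": 30}
total = sum(counts.values())
for tissue, n in counts.items():
    print(f"{tissue}: {100 * n / total:.1f}%")
```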

  11. Speckle-modulation for speckle reduction in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Liba, Orly; Lew, Matthew D.; SoRelle, Elliott D.; Dutta, Rebecca; Sen, Debasish; Moshfeghi, Darius M.; Chu, Steven; de la Zerda, Adam

    2018-02-01

    Optical coherence tomography (OCT) is a powerful biomedical imaging technology that relies on the coherent detection of backscattered light to image tissue morphology in vivo. As a consequence, OCT is susceptible to coherent noise, known as speckle noise, which imposes significant limitations on its diagnostic capabilities. Here we show Speckle- Modulating OCT (SM-OCT), a method based purely on light manipulation, which can remove speckle noise, including noise originating from sample multiple back-scattering. SM-OCT accomplishes this by creating and averaging an unlimited number of scans with uncorrelated speckle patterns, without compromising spatial resolution. The uncorrelated speckle patterns are created by scrambling the phase of the light with sub-resolution features using a moving ground-glass diffuser in the optical path of the sample arm. This method can be implemented in existing OCTs as a relatively low-cost add-on. SM-OCT speckle statistics follow the expected decrease in speckle contrast as the number of averaged scans increases. Within a scattering phantom, SM-OCT provides a 2.5-fold increase in effective resolution compared to conventional OCT. Using SM-OCT, we reveal small structures in the tissues of living animals, such as the inner stromal structure of a live mouse cornea, the fine structures inside the mouse pinna, and sweat ducts and Meissner's corpuscle in the human fingertip skin - features that are otherwise obscured by speckle noise when using conventional OCT or OCT with current state of the art speckle reduction methods. Our results indicate that SM-OCT has the potential to improve the current diagnostic and intra-operative capabilities of OCT.
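
    The reported speckle statistics follow the standard averaging law: the contrast (standard deviation over mean) of N averaged uncorrelated speckle patterns falls roughly as 1/√N. A small simulation, with exponentially distributed intensities standing in as placeholders for repeated scans:

```python
# Speckle contrast versus the number of averaged uncorrelated patterns.
import numpy as np

rng = np.random.default_rng(0)

def speckle_pattern(shape=(256, 256)):
    # Fully developed speckle intensity follows an exponential distribution.
    return rng.exponential(scale=1.0, size=shape)

for n in (1, 4, 16, 64):
    avg = np.mean([speckle_pattern() for _ in range(n)], axis=0)
    contrast = avg.std() / avg.mean()
    print(f"N={n:3d}  contrast={contrast:.3f}  1/sqrt(N)={1 / np.sqrt(n):.3f}")
```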

  12. Feasibility of waveform inversion of Rayleigh waves for shallow shear-wave velocity using a genetic algorithm

    USGS Publications Warehouse

    Zeng, C.; Xia, J.; Miller, R.D.; Tsoflias, G.P.

    2011-01-01

    Conventional surface wave inversion for shallow shear (S)-wave velocity relies on the generation of dispersion curves of Rayleigh waves. This constrains the method to only laterally homogeneous (or very smooth laterally heterogeneous) earth models. Waveform inversion directly fits waveforms on seismograms and, hence, does not have such a limitation. Waveforms of Rayleigh waves are highly related to S-wave velocities. By inverting the waveforms of Rayleigh waves on a near-surface seismogram, shallow S-wave velocities can be estimated for earth models with strong lateral heterogeneity. We employ a genetic algorithm (GA) to perform waveform inversion of Rayleigh waves for S-wave velocities. The forward problem is solved by finite-difference modeling in the time domain. The model space is updated by generating offspring models using the GA. Final solutions can be found through an iterative waveform-fitting scheme. Inversions based on synthetic records show that the S-wave velocities can be recovered successfully with errors no more than 10% for several typical near-surface earth models. For layered earth models, the proposed method can generate one-dimensional S-wave velocity profiles without knowledge of initial models. For earth models containing lateral heterogeneity, in which case conventional dispersion-curve-based inversion methods are challenging, it is feasible to produce high-resolution S-wave velocity sections by GA waveform inversion with appropriate a priori information. The synthetic tests indicate that GA waveform inversion of Rayleigh waves has great potential for shallow S-wave velocity imaging in the presence of strong lateral heterogeneity. © 2011 Elsevier B.V.
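
    A schematic GA inversion loop, with a toy forward model standing in for the time-domain finite-difference Rayleigh-wave simulation; every function and number below is illustrative structure only, not the authors' implementation.

```python
# Genetic-algorithm skeleton: evaluate waveform misfit, select, recombine, mutate.
import numpy as np

rng = np.random.default_rng(0)
n_layers, pop_size, n_gen = 5, 40, 100
v_true = np.array([200., 260., 330., 420., 550.])      # "true" S-wave profile

def forward(v):
    # Stand-in for the finite-difference seismogram: a synthetic trace that
    # depends smoothly on the layer velocities.
    t = np.linspace(0, 1, 200)
    return sum(np.sin(2 * np.pi * t * vi / 100) / (i + 1) for i, vi in enumerate(v))

observed = forward(v_true)

def misfit(v):
    return np.sum((forward(v) - observed) ** 2)        # waveform fitting

pop = rng.uniform(100, 700, size=(pop_size, n_layers)) # initial model space
for _ in range(n_gen):
    fit = np.array([misfit(v) for v in pop])
    parents = pop[np.argsort(fit)[: pop_size // 2]]    # truncation selection
    n_child = pop_size - len(parents)
    i, j = rng.integers(0, len(parents), (2, n_child))
    # Blend crossover of random parent pairs plus Gaussian mutation (simplified)
    children = 0.5 * (parents[i] + parents[j]) + rng.normal(0, 10, (n_child, n_layers))
    pop = np.vstack([parents, children])

print("best model:", pop[np.argmin([misfit(v) for v in pop])].round(0))
```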

  13. A Stereological Method for the Quantitative Evaluation of Cartilage Repair Tissue.

    PubMed

    Foldager, Casper Bindzus; Nyengaard, Jens Randel; Lind, Martin; Spector, Myron

    2015-04-01

    To implement stereological principles to develop an easy applicable algorithm for unbiased and quantitative evaluation of cartilage repair. Design-unbiased sampling was performed by systematically sectioning the defect perpendicular to the joint surface in parallel planes providing 7 to 10 hematoxylin-eosin stained histological sections. Counting windows were systematically selected and converted into image files (40-50 per defect). The quantification was performed by two-step point counting: (1) calculation of defect volume and (2) quantitative analysis of tissue composition. Step 2 was performed by assigning each point to one of the following categories based on validated and easy distinguishable morphological characteristics: (1) hyaline cartilage (rounded cells in lacunae in hyaline matrix), (2) fibrocartilage (rounded cells in lacunae in fibrous matrix), (3) fibrous tissue (elongated cells in fibrous tissue), (4) bone, (5) scaffold material, and (6) others. The ability to discriminate between the tissue types was determined using conventional or polarized light microscopy, and the interobserver variability was evaluated. We describe the application of the stereological method. In the example, we assessed the defect repair tissue volume to be 4.4 mm(3) (CE = 0.01). The tissue fractions were subsequently evaluated. Polarized light illumination of the slides improved discrimination between hyaline cartilage and fibrocartilage and increased the interobserver agreement compared with conventional transmitted light. We have applied a design-unbiased method for quantitative evaluation of cartilage repair, and we propose this algorithm as a natural supplement to existing descriptive semiquantitative scoring systems. We also propose that polarized light is effective for discrimination between hyaline cartilage and fibrocartilage.

  14. Video-Based Fingerprint Verification

    PubMed Central

    Qin, Wei; Yin, Yilong; Liu, Lili

    2013-01-01

    Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and aligning processes, “inside similarity” and “outside similarity” are defined and calculated to take advantage of both dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used. Our method still outperforms the conventional method, even if both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can lead to better accuracy than the multiple impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low. PMID:24008283

  15. Technological Advancements and Error Rates in Radiation Therapy Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.

  16. A thick lens of fresh groundwater in the southern Lihue Basin, Kauai, Hawaii, USA

    USGS Publications Warehouse

    Izuka, S.K.; Gingerich, S.B.

    2003-01-01

    A thick lens of fresh groundwater exists in a large region of low permeability in the southern Lihue Basin, Kauai, Hawaii, USA. The conventional conceptual model for groundwater occurrence in Hawaii and other shield-volcano islands does not account for such a thick freshwater lens. In the conventional conceptual model, the lava-flow accumulations of which most shield volcanoes are built form large regions of relatively high permeability and thin freshwater lenses. In the southern Lihue Basin, basin-filling lavas and sediments form a large region of low regional hydraulic conductivity, which, in the moist climate of the basin, is saturated nearly to the land surface, and water tables are hundreds of meters above sea level within a few kilometers of the coast. Such high water levels in shield-volcano islands were previously thought to exist only under perched or dike-impounded conditions, but in the southern Lihue Basin, high water levels exist in an apparently dike-free, fully saturated aquifer. A new conceptual model of groundwater occurrence in shield-volcano islands is needed to explain conditions in the southern Lihue Basin.

  17. A Comparison of Performance versus Presentation Based Methods of Instructing Pre-service Teachers in Media Competencies.

    ERIC Educational Resources Information Center

    Mattox, Daniel V., Jr.

    Research compared conventional and experimental methods of instruction in a teacher education media course. The conventional method relied upon factual presentations to heterogeneous groups, while the experimental utilized homogeneous clusters of students and stressed individualized instruction. A pretest-posttest, experimental-control group…

  18. Future Propulsion Opportunities for Commuter Airplanes

    NASA Technical Reports Server (NTRS)

    Strack, W. C.

    1982-01-01

    Commuter airplane propulsion opportunities are summarized. Consideration is given to advanced technology conventional turboprop engines, advanced propellers, and several unconventional alternatives: regenerative turboprops, rotaries, and diesels. Advanced versions of conventional turboprops (including propellers) offer 15-20 percent savings in fuel and 10-15 percent in DOC compared to the new crop of 1500-2000 SHP engines currently in development. Unconventional engines could boost the fuel savings to 30-40 percent. The conclusion is that several important opportunities exist and, therefore, powerplant technology need not plateau.

  19. Frequency-noise cancellation in semiconductor lasers by nonlinear heterodyne detection.

    PubMed

    Bondurant, R S; Welford, D; Alexander, S B; Chan, V W

    1986-12-01

    The bit-error-rate (BER) performance of conventional noncoherent, heterodyne frequency-shift-keyed (FSK) optical communications systems can be surpassed by the use of a differential FSK modulation format and nonlinear postdetection processing at the receiver. A BER floor exists for conventional frequency-shift keying because of the frequency noise of the transmitter and local oscillator. The use of differential frequency-shift keying with nonlinear postdetection processing suppresses this BER floor for the semiconductor laser system considered here.

  20. Electrochemical Processes Enhanced by Acoustic Liquid Manipulation

    NASA Technical Reports Server (NTRS)

    Oeftering, Richard C.

    2004-01-01

    Acoustic liquid manipulation is a family of techniques that employ the nonlinear acoustic effects of acoustic radiation pressure and acoustic streaming to manipulate the behavior of liquids. Researchers at the NASA Glenn Research Center are exploring new methods of manipulating liquids for a variety of space applications, and we have found that acoustic techniques may also be used in the normal Earth gravity environment to enhance the performance of existing fluid processes. Working in concert with the NASA Commercial Technology Office, the Great Lakes Industrial Technology Center, and Alchemitron Corporation (Elgin, IL), researchers at Glenn have applied nonlinear acoustic principles to industrial applications. Collaborating with Alchemitron Corporation, we have adapted the devices to create acoustic streaming in a conventional electroplating process.
