An Informal Overview of the Unitary Group Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonnad, V.; Escher, J.; Kruse, M.
The Unitary Group Approach (UGA) is an elegant and conceptually unified approach to quantum structure calculations. It has been widely used in molecular structure calculations, and holds the promise of a single computational approach to structure calculations in a variety of different fields. We explore the possibility of extending the UGA to computations in atomic and nuclear structure as a simpler alternative to traditional Racah algebra-based approaches. We provide a simple introduction to the basic UGA and consider some of the issues in using the UGA with spin-dependent, multi-body Hamiltonians requiring multi-shell bases adapted to additional symmetries. While the UGA is perfectly capable of dealing with such problems, the complexity rises dramatically, and the UGA is not, at this time, a simpler alternative to Racah algebra-based approaches.
USDA-ARS's Scientific Manuscript database
Molecular detection of bacterial pathogens based on LAMP methods is a faster and simpler approach than conventional culture methods. Although different LAMP-based methods for pathogenic bacterial detection are available, a systematic comparison of these different LAMP assays has not been performed. ...
Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach
NASA Astrophysics Data System (ADS)
Pinto, Rafael S.; Saa, Alberto
2015-12-01
A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies that favor or suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic form ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically, and in a simpler way, from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
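As a hedged illustration of the maximization condition above, the following minimal NumPy sketch (not the authors' code; the rewiring scheme, graph size, and acceptance rule are illustrative assumptions) evaluates the synchrony objective ω^T L ω and performs a naive hill-climb rewiring that swaps one edge at a time, keeping the swap only if the objective increases. A production algorithm would typically also enforce constraints such as connectedness.

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A of a symmetric 0/1 adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def sync_objective(adj, omega):
    """Quadratic form omega^T L omega used as the synchrony merit function."""
    return omega @ laplacian(adj) @ omega

def hill_climb_step(adj, omega, rng):
    """Swap one randomly chosen edge for one randomly chosen non-edge and
    keep the new topology only if the objective increases."""
    a = adj.copy()
    edges = np.argwhere(np.triu(a, 1))
    non_edges = np.argwhere(np.triu(1 - a, 1))
    i, j = edges[rng.integers(len(edges))]
    k, l = non_edges[rng.integers(len(non_edges))]
    a[i, j] = a[j, i] = 0
    a[k, l] = a[l, k] = 1
    return a if sync_objective(a, omega) > sync_objective(adj, omega) else adj

rng = np.random.default_rng(0)
n = 20
adj = np.triu((rng.random((n, n)) < 0.2).astype(int), 1)
adj = adj + adj.T                                   # symmetric, no self-loops
omega = rng.normal(size=n)                          # natural frequencies

before = sync_objective(adj, omega)
for _ in range(500):                                # naive hill climb, fixed edge count
    adj = hill_climb_step(adj, omega, rng)
print("omega^T L omega:", before, "->", sync_objective(adj, omega))
```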
NASA Astrophysics Data System (ADS)
Peterson, Gary; Abeytunge, Sanjeewa; Eastman, Zachary; Rajadhyaksha, Milind
2012-02-01
Reflectance confocal microscopy with a line-scanning approach potentially offers a smaller, simpler and less expensive approach than traditional point-scanning methods for imaging in living tissues. With one moving mechanical element (a galvanometric scanner), a linear array detector and off-the-shelf optics, we designed a compact (102 x 102 x 76 mm) line-scanning confocal reflectance microscope (LSCRM) for imaging human tissues in vivo in a clinical setting. Custom-designed electronics based on field-programmable gate array (FPGA) logic have been developed. With 405 nm illumination and a custom objective lens of numerical aperture 0.5, the lateral resolution was measured to be 0.8 μm (calculated 0.64 μm). The calculated optical sectioning is 3.2 μm. Preliminary imaging shows nuclear and cellular detail in human skin and oral epithelium in vivo. Blood flow is also visualized in the deeper connective tissue (lamina propria) in oral mucosa. Since a line is confocal in only one dimension (parallel) but not in the other, the detection is more sensitive to multiply scattered, out-of-focus background noise than in the traditional point-scanning configuration. Based on the results of our translational studies thus far, a simpler, smaller and lower-cost approach based on an LSCRM appears to be promising for clinical imaging.
Efficient approach to the free energy of crystals via Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Navascués, G.; Velasco, E.
2015-08-01
We present a general approach to compute the absolute free energy of a system of particles with constrained center of mass based on the Monte Carlo thermodynamic coupling integral method. The version of the Frenkel-Ladd approach [J. Chem. Phys. 81, 3188 (1984)], 10.1063/1.448024, which uses a harmonic coupling potential, is recovered. Also, we propose a different choice, based on one-particle square-well coupling potentials, which is much simpler, more accurate, and free from some of the difficulties of the Frenkel-Ladd method. We apply our approach to hard spheres and compare with the standard harmonic method.
Agreeing on Validity Arguments
ERIC Educational Resources Information Center
Sireci, Stephen G.
2013-01-01
Kane (this issue) presents a comprehensive review of validity theory and reminds us that the focus of validation is on test score interpretations and use. In reacting to his article, I support the argument-based approach to validity and all of the major points regarding validation made by Dr. Kane. In addition, I call for a simpler, three-step…
A computational approach to climate science education with CLIMLAB
NASA Astrophysics Data System (ADS)
Rose, B. E. J.
2017-12-01
CLIMLAB is a Python-based software toolkit for interactive, process-oriented climate modeling for use in education and research. It is motivated by the need for simpler tools and more reproducible workflows with which to "fill in the gaps" between blackboard-level theory and the results of comprehensive climate models. With CLIMLAB you can interactively mix and match physical model components, or combine simpler process models together into a more comprehensive model. I use CLIMLAB in the classroom to put models in the hands of students (undergraduate and graduate), and emphasize a hierarchical, process-oriented approach to understanding the key emergent properties of the climate system. CLIMLAB is equally a tool for climate research, where the same needs exist for more robust, process-based understanding and reproducible computational results. I will give an overview of CLIMLAB and an update on recent developments, including: a full-featured, well-documented, interactive implementation of a widely used radiation model (RRTM); packaging with conda-forge for compiler-free (and hassle-free!) installation on Mac, Windows and Linux; interfacing with xarray for I/O and graphics with gridded model data; and a rich and growing collection of examples and self-computing lecture notes in Jupyter notebook format.
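To make the "blackboard-level theory" end of this hierarchy concrete, here is a minimal zero-dimensional energy-balance model in plain Python. It is deliberately not CLIMLAB code and does not use its API; the parameter values are illustrative textbook-style numbers, chosen only to show the kind of simple process model the toolkit is meant to make interactive.

```python
import numpy as np

# Minimal zero-dimensional energy-balance model:
#   C dT/dt = (1 - alpha) * Q / 4 - (A + B * T)
# Illustrative parameter values only; this is not CLIMLAB or its API.
Q = 1365.0        # solar constant, W m^-2
alpha = 0.3       # planetary albedo
A, B = 210.0, 2.0 # linearized outgoing longwave radiation (T in deg C)
C = 4.0e8         # effective heat capacity, J m^-2 K^-1

T = 10.0          # initial surface temperature, deg C
dt = 86400.0      # one-day time step, s
for _ in range(365 * 30):                      # integrate ~30 years to equilibrium
    T += dt / C * ((1 - alpha) * Q / 4 - (A + B * T))

print(f"equilibrium temperature ~ {T:.1f} deg C")
print(f"analytic equilibrium    ~ {((1 - alpha) * Q / 4 - A) / B:.1f} deg C")
```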
Water balance models in one-month-ahead streamflow forecasting
Alley, William M.
1985-01-01
Techniques are tested that incorporate information from water balance models in making 1-month-ahead streamflow forecasts in New Jersey. The results are compared to those based on simple autoregressive time series models. The relative performance of the models is dependent on the month of the year in question. The water balance models are most useful for forecasts of April and May flows. For the stations in northern New Jersey, the April and May forecasts were made in order of decreasing reliability using the water-balance-based approaches, using the historical monthly means, and using simple autoregressive models. The water balance models were useful to a lesser extent for forecasts during the fall months. For the rest of the year the improvements in forecasts over those obtained using the simpler autoregressive models were either very small or the simpler models provided better forecasts. When using the water balance models, monthly corrections for bias are found to improve minimum mean-square-error forecasts as well as to improve estimates of the forecast conditional distributions.
Assessing alternative measures of wealth in health research.
Cubbin, Catherine; Pollack, Craig; Flaherty, Brian; Hayward, Mark; Sania, Ayesha; Vallone, Donna; Braveman, Paula
2011-05-01
We assessed whether it would be feasible to replace the standard measure of net worth with simpler measures of wealth in population-based studies examining associations between wealth and health. We used data from the 2004 Survey of Consumer Finances (respondents aged 25-64 years) and the 2004 Health and Retirement Survey (respondents aged 50 years or older) to construct logistic regression models relating wealth to health status and smoking. For our wealth measure, we used the standard measure of net worth as well as 9 simpler measures of wealth, and we compared results among the 10 models. In both data sets and for both health indicators, models using simpler wealth measures generated conclusions about the association between wealth and health that were similar to the conclusions generated by models using net worth. The magnitude and significance of the odds ratios were similar for the covariates in multivariate models, and the model-fit statistics for models using these simpler measures were similar to those for models using net worth. Our findings suggest that simpler measures of wealth may be acceptable in population-based studies of health.
Explicit solutions for exit-only radioactive decay chains
NASA Astrophysics Data System (ADS)
Yuan, Ding; Kernan, Warnick
2007-05-01
In this study, we extended Bateman's [Proc. Cambridge Philos. Soc. 15, 423 (1910)] original work for solving radioactive decay chains and explicitly derived analytic solutions for generic exit-only radioactive decay problems under given initial conditions. Instead of using the conventional Laplace transform for solving Bateman's equations, we used a much simpler algebraic approach. Finally, we discuss methods of breaking down certain classes of large decay chains into collections of simpler chains for easy handling.
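For reference, the classic Bateman (1910) solution for a linear chain 1 → 2 → … → n with only the first member initially present and all decay constants distinct can be evaluated directly; the short sketch below does so numerically. It implements the standard textbook formula, not the algebraic generalization to exit-only chains developed in the paper, and the half-lives are made up.

```python
import numpy as np

def bateman(n1_0, lambdas, t):
    """Bateman solution N_k(t) for a linear chain 1 -> 2 -> ... -> n with
    N_1(0) = n1_0 and all other species initially absent.
    Assumes all decay constants are distinct (the classic 1910 formula)."""
    lambdas = np.asarray(lambdas, dtype=float)
    N = np.zeros(len(lambdas))
    for k in range(len(lambdas)):
        lam = lambdas[:k + 1]
        coeff = np.prod(lam[:-1])                    # lambda_1 * ... * lambda_k
        s = 0.0
        for i in range(k + 1):
            s += np.exp(-lam[i] * t) / np.prod(np.delete(lam, i) - lam[i])
        N[k] = n1_0 * coeff * s
    return N

# Example: three-member chain with half-lives 1 h, 2 h, 5 h (decay constants in 1/h)
lams = np.log(2) / np.array([1.0, 2.0, 5.0])
print(bateman(1000.0, lams, t=3.0))   # populations after 3 hours
```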
Fast calculation of the line-spread-function by transversal directions decoupling
NASA Astrophysics Data System (ADS)
Parravicini, Jacopo; Tartara, Luca; Hasani, Elton; Tomaselli, Alessandra
2016-07-01
We propose a simplified method to calculate the optical spread function of a paradigmatic system constituted by a pupil-lens with a line-shaped illumination (‘line-spread-function’). Our approach is based on decoupling the two transversal directions of the beam and treating the propagation by means of the Fourier optics formalism. This requires simpler calculations with respect to the more usual Bessel-function-based method. The model is discussed and compared with standard calculation methods by carrying out computer simulations. The proposed approach is found to be much faster than the Bessel-function-based one (CPU time ≲ 5% of the standard method), while the results of the two methods present a very good mutual agreement.
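As a rough illustration of the direction-decoupling idea, the sketch below treats a single transverse direction with one-dimensional Fourier optics: the focal-plane amplitude of a uniformly filled 1D pupil is taken as the 1D Fourier transform of the pupil, and its squared modulus is read as a line-spread function. This coherent, aberration-free, single-direction simplification is mine, not the authors' formulation, and the wavelength and numerical aperture are arbitrary illustrative values.

```python
import numpy as np

# 1D Fourier-optics sketch: amplitude at focus ~ 1D Fourier transform of the pupil,
# squared modulus taken as a line-spread function. Values are arbitrary.
wavelength = 500e-9                    # m
NA = 0.3
fc = NA / wavelength                   # coherent cutoff frequency, cycles/m
n_samp = 2 ** 14

f = np.linspace(-4 * fc, 4 * fc, n_samp)          # pupil/frequency coordinate
pupil = (np.abs(f) <= fc).astype(float)           # clear 1D pupil

amplitude = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(pupil)))
lsf = np.abs(amplitude) ** 2
lsf /= lsf.max()

df = f[1] - f[0]
x = np.fft.fftshift(np.fft.fftfreq(n_samp, d=df))  # image-plane coordinate, m
fwhm = x[lsf >= 0.5].ptp()
print(f"FWHM of the 1D line-spread function ~ {fwhm * 1e6:.2f} um")
```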
Finding idle machines in a workstation-based distributed system
NASA Technical Reports Server (NTRS)
Theimer, Marvin M.; Lantz, Keith A.
1989-01-01
The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics, whereas a decentralized architecture is simpler to implement.
Expanded Processing Techniques for EMI Systems
2012-07-01
possible to perform better target detection using physics-based algorithms and the entire data set, rather than simulating a simpler data set and mapping...
[Figure 4.25: Plots of simulated MetalMapper data for two oblate spheroidal targets]
NASA Astrophysics Data System (ADS)
Hopp, L.; Ivanov, V. Y.
2010-12-01
There is still a debate in rainfall-runoff modeling over the advantage of using three-dimensional models based on partial differential equations describing variably saturated flow vs. models with simpler infiltration and flow routing algorithms. Fully explicit 3D models are computationally demanding but allow the representation of spatially complex domains, heterogeneous soils, conditions of ponded infiltration, and solute transport, among others. Models with simpler infiltration and flow routing algorithms provide faster run times and are likely to be more versatile in the treatment of extreme conditions such as soil drying but suffer from underlying assumptions and ad-hoc parameterizations. In this numerical study, we explore the question of whether these two model strategies are competing approaches or if they complement each other. As a 3D physics-based model we use HYDRUS-3D, a finite element model that numerically solves the Richards equation for variably-saturated water flow. As an example of a simpler model, we use tRIBS+VEGGIE that solves the 1D Richards equation for vertical flow and applies Dupuit-Forchheimer approximation for saturated lateral exchange and gravity-driven flow for unsaturated lateral exchange. The flow can be routed using either the D-8 (steepest descent) or D-infinity flow routing algorithms. We study lateral subsurface stormflow and moisture dynamics at the hillslope-scale, using a zero-order basin topography, as a function of storm size, antecedent moisture conditions and slope angle. The domain and soil characteristics are representative of a forested hillslope with conductive soils in a humid environment, where the major runoff generating process is lateral subsurface stormflow. We compare spatially integrated lateral subsurface flow at the downslope boundary as well as spatial patterns of soil moisture. We illustrate situations where both model approaches perform equally well and identify conditions under which the application of a fully-explicit 3D model may be required for a realistic description of the hydrologic response.
Multicasting in Wireless Communications (Ad-Hoc Networks): Comparison against a Tree-Based Approach
NASA Astrophysics Data System (ADS)
Rizos, G. E.; Vasiliadis, D. C.
2007-12-01
We examine on-demand multicasting in ad hoc networks. The Core Assisted Mesh Protocol (CAMP) is a well-known protocol for multicast routing in ad-hoc networks, generalizing the notion of core-based trees employed for internet multicasting into multicast meshes that have much richer connectivity than trees. On the other hand, wireless tree-based multicast routing protocols use much simpler structures for determining route paths, using only parent-child relationships. In this work, we compare the performance of the CAMP protocol against the performance of wireless tree-based multicast routing protocols, in terms of two important factors, namely packet delay and ratio of dropped packets.
Scalable graphene aptasensors for drug quantification
NASA Astrophysics Data System (ADS)
Vishnubhotla, Ramya; Ping, Jinglei; Gao, Zhaoli; Lee, Abigail; Saouaf, Olivia; Vrudhula, Amey; Johnson, A. T. Charlie
2017-11-01
Simpler and more rapid approaches for therapeutic drug-level monitoring are highly desirable to enable use at the point-of-care. We have developed an all-electronic approach for detection of the HIV drug tenofovir based on scalable fabrication of arrays of graphene field-effect transistors (GFETs) functionalized with a commercially available DNA aptamer. The shift in the Dirac voltage of the GFETs varied systematically with the concentration of tenofovir in deionized water, with a detection limit less than 1 ng/mL. Tests against a set of negative controls confirmed the specificity of the sensor response. This approach offers the potential for further development into a rapid and convenient point-of-care tool with clinically relevant performance.
Preparation of name and address data for record linkage using hidden Markov models
Churches, Tim; Christen, Peter; Lim, Kim; Zhu, Justin Xi
2002-01-01
Background: Record linkage refers to the process of joining records that relate to the same entity or event in one or more data collections. In the absence of a shared, unique key, record linkage involves the comparison of ensembles of partially-identifying, non-unique data items between pairs of records. Data items with variable formats, such as names and addresses, need to be transformed and normalised in order to validly carry out these comparisons. Traditionally, deterministic rule-based data processing systems have been used to carry out this pre-processing, which is commonly referred to as "standardisation". This paper describes an alternative approach to standardisation, using a combination of lexicon-based tokenisation and probabilistic hidden Markov models (HMMs). Methods: HMMs were trained to standardise typical Australian name and address data drawn from a range of health data collections. The accuracy of the results was compared to that produced by rule-based systems. Results: Training of HMMs was found to be quick and did not require any specialised skills. For addresses, HMMs produced equal or better standardisation accuracy than a widely-used rule-based system. However, accuracy was worse when used with simpler name data. Possible reasons for this poorer performance are discussed. Conclusion: Lexicon-based tokenisation and HMMs provide a viable and effort-effective alternative to rule-based systems for pre-processing more complex variably formatted data such as addresses. Further work is required to improve the performance of this approach with simpler data such as names. Software which implements the methods described in this paper is freely available under an open source license for other researchers to use and improve. PMID:12482326
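To illustrate the kind of computation involved, here is a toy Viterbi decoder for labelling address tokens with a hand-built HMM. The states, token classes, and probabilities are invented for this sketch; the paper's HMMs are trained on real lexicon-based token classes and cover far richer label sets.

```python
import numpy as np

# Toy HMM for labelling address tokens (states and probabilities invented).
states = ["number", "street_name", "street_type"]
token_classes = ["digits", "word", "type_word"]     # output of a lexicon tokeniser

start = np.log([0.8, 0.15, 0.05])
trans = np.log([[0.05, 0.90, 0.05],                 # from number
                [0.05, 0.55, 0.40],                 # from street_name
                [0.10, 0.30, 0.60]])                # from street_type
emit = np.log([[0.90, 0.09, 0.01],                  # number      -> token class
               [0.05, 0.80, 0.15],                  # street_name -> token class
               [0.01, 0.19, 0.80]])                 # street_type -> token class

def viterbi(obs):
    """Most probable state sequence for a list of token-class indices."""
    n, m = len(obs), len(states)
    delta = np.full((n, m), -np.inf)
    back = np.zeros((n, m), dtype=int)
    delta[0] = start + emit[:, obs[0]]
    for t in range(1, n):
        for j in range(m):
            scores = delta[t - 1] + trans[:, j]
            back[t, j] = np.argmax(scores)
            delta[t, j] = scores[back[t, j]] + emit[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [states[s] for s in reversed(path)]

# "42 main street" -> token classes digits, word, type_word
print(viterbi([token_classes.index(c) for c in ["digits", "word", "type_word"]]))
```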
The kinetic stabilizer: a route to simpler tandem mirror systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Post, R F
2001-02-02
As we enter the new millennium there is a growing urgency to address the issue of finding long-range solutions to the world's energy needs. Fusion offers such a solution, provided economically viable means can be found to extract useful energy from fusion reactions. While the magnetic confinement approach to fusion has a long and productive history, to date the mainline approaches to magnetic confinement, namely closed systems such as the tokamak, appear to many as being too large and complex to be acceptable economically, despite the impressive progress that has been made toward the achievement of fusion-relevant confinement parameters. Thus there is a growing feeling that it is imperative to search for new and simpler approaches to magnetic fusion, ones that might lead to smaller and more economically attractive fusion power plants.
Application of powder densification models to the consolidation processing of composites
NASA Technical Reports Server (NTRS)
Wadley, H. N. G.; Elzey, D. M.
1991-01-01
Unidirectional fiber-reinforced metal matrix composite tapes (containing a single layer of parallel fibers) can now be produced by plasma deposition. These tapes can be stacked and subjected to a thermomechanical treatment that results in a fully dense, near-net-shape component. The mechanisms by which this consolidation step occurs are explored, and models to predict the effect of different thermomechanical conditions (during consolidation) upon the kinetics of densification are developed. The approach is based upon a methodology developed by Ashby and others for the simpler problem of hot isostatic pressing (HIP) of spherical powders. The complex problem is divided into six much simpler subproblems, and their predicted contributions to densification are then added. The initial problem decomposition is to treat the two extreme geometries encountered (contact deformation occurring between foils and shrinkage of isolated, internal pores). Deformation of these two geometries is modelled for plastic, power-law creep and diffusional flow. The results are reported in the form of a densification map.
Anytime query-tuned kernel machine classifiers via Cholesky factorization
NASA Technical Reports Server (NTRS)
DeCoste, D.
2002-01-01
We recently demonstrated 2- to 64-fold query-time speedups of Support Vector Machine and Kernel Fisher classifiers via a new computational geometry method for anytime output bounds (DeCoste, 2002). This new paper refines our approach in two key ways. First, we introduce a simple linear algebra formulation based on Cholesky factorization, yielding simpler equations and lower computational overhead. Second, this new formulation suggests new methods for achieving additional speedups, including tuning on query samples. We demonstrate effectiveness on benchmark datasets.
Efficient 3D movement-based kernel density estimator and application to wildlife ecology
Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.
2014-01-01
We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
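For orientation, the sketch below implements only the core computational kernel: a plain fixed-bandwidth 3D Gaussian kernel density estimate evaluated on a coarse grid, with synthetic fixes and an arbitrary bandwidth. It is an assumption-laden stand-in, not the movement-based (trajectory-conditioned) estimator of the paper, but it shows why the cost grows quickly with grid size and number of fixes.

```python
import numpy as np

def gaussian_kde_3d(points, grid_xyz, bandwidth):
    """Fixed-bandwidth 3D Gaussian kernel density estimate.

    points   : (n, 3) array of GPS fixes (x, y, z)
    grid_xyz : (m, 3) array of evaluation locations
    bandwidth: scalar kernel standard deviation (same units as coordinates)
    """
    d2 = ((grid_xyz[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    norm = (2 * np.pi * bandwidth ** 2) ** 1.5 * len(points)
    return np.exp(-0.5 * d2 / bandwidth ** 2).sum(axis=1) / norm

rng = np.random.default_rng(1)
fixes = rng.normal(scale=[200.0, 200.0, 50.0], size=(500, 3))   # synthetic fixes, metres

# coarse 20 x 20 x 10 evaluation grid
gx, gy, gz = np.meshgrid(np.linspace(-500, 500, 20),
                         np.linspace(-500, 500, 20),
                         np.linspace(-150, 150, 10), indexing="ij")
grid = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])

density = gaussian_kde_3d(fixes, grid, bandwidth=75.0)
cell_volume = (1000 / 19) ** 2 * (300 / 9)
print("grid cells:", len(grid), " total probability ~", density.sum() * cell_volume)
```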
Repressing the effects of variable speed harmonic orders in operational modal analysis
NASA Astrophysics Data System (ADS)
Randall, R. B.; Coats, M. D.; Smith, W. A.
2016-10-01
Discrete frequency components such as machine shaft orders can disrupt the operation of normal Operational Modal Analysis (OMA) algorithms. With constant-speed machines, they have been removed using time synchronous averaging (TSA). This paper compares two approaches for varying-speed machines. In one method, signals are transformed into the order domain, and after the removal of shaft-speed-related components by a cepstral notching method, are transformed back to the time domain to allow normal OMA. In the other, simpler approach an exponential short-pass lifter is applied directly in the time-domain cepstrum to enhance the modal information at the expense of other disturbances. For simulated gear signals with speed variations of both ±5% and ±15%, the simpler approach was found to give better results. The TSA method is shown not to work in either case. The paper compares the results with those obtained using a stationary random excitation.
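The sketch below shows one way an exponential short-pass lifter can be applied: compute the real cepstrum of a response signal, weight it with exp(-q/tau), and rebuild a signal whose log-magnitude spectrum has been smoothed so that low-quefrency (smooth, modal) content is kept while sharp discrete-frequency structure is attenuated. The signal model, lifter time constant, and reconstruction choice are illustrative assumptions, not the authors' full OMA processing chain.

```python
import numpy as np

def exponential_shortpass_lifter(x, fs, tau):
    """Apply an exponential short-pass lifter exp(-q/tau) to the real cepstrum
    of x and rebuild a time signal whose log-magnitude spectrum has been
    smoothed accordingly (the original phase is kept). tau is in seconds."""
    X = np.fft.rfft(x)
    log_mag = np.log(np.abs(X) + 1e-12)
    ceps = np.fft.irfft(log_mag, n=len(x))        # real cepstrum (IFFT of log magnitude)
    n = np.arange(len(x))
    q = np.minimum(n, len(x) - n) / fs            # symmetric quefrency axis, s
    smoothed_log_mag = np.fft.rfft(ceps * np.exp(-q / tau)).real
    Y = np.exp(smoothed_log_mag) * np.exp(1j * np.angle(X))
    return np.fft.irfft(Y, n=len(x))

# Synthetic response: a damped mode plus a strong discrete "shaft order" line.
fs, T = 2048.0, 8.0
t = np.arange(int(fs * T)) / fs
rng = np.random.default_rng(2)
mode = np.exp(-20.0 * t) * np.sin(2 * np.pi * 120.0 * t)      # broad modal resonance
order = 0.5 * np.sin(2 * np.pi * 47.0 * t)                    # narrow disturbance
y = mode + order + 0.02 * rng.normal(size=t.size)

# Smooth spectral structure is retained; the sharp line is smoothed down.
# The lifter time constant here is an illustrative choice.
y_liftered = exponential_shortpass_lifter(y, fs, tau=0.2)
```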
Complexity in Language Learning and Treatment
ERIC Educational Resources Information Center
Thompson, Cynthia K.
2007-01-01
Purpose: To introduce a Clinical Forum focused on the Complexity Account of Treatment Efficacy (C. K. Thompson, L. P. Shapiro, S. Kiran, & J. Sobecks, 2003), a counterintuitive but effective approach for treating language disorders. This approach espouses training "complex" structures to promote generalized improvement of simpler, linguistically…
A comparative study of four major approaches to predicting ATES performance
NASA Astrophysics Data System (ADS)
Doughty, C.; Buscheck, T. A.; Bodvarsson, G. S.; Tsang, C. F.
1982-09-01
The International Energy Agency test problem involving Aquifer Thermal Energy Storage was solved using four approaches: the numerical model PF (formerly CCC), the simpler numerical model SFM, and two graphical characterization schemes. Each of the four techniques is discussed, along with its advantages and disadvantages.
Adjoint-based optimization of PDEs in moving domains
NASA Astrophysics Data System (ADS)
Protas, Bartosz; Liao, Wenyuan
2008-02-01
In this investigation we address the problem of adjoint-based optimization of PDE systems in moving domains. As an example we consider the one-dimensional heat equation with prescribed boundary temperatures and heat fluxes. We discuss two methods of deriving an adjoint system necessary to obtain a gradient of a cost functional. In the first approach we derive the adjoint system after mapping the problem to a fixed domain, whereas in the second approach we derive the adjoint directly in the moving domain by employing methods of the noncylindrical calculus. We show that the operations of transforming the system from a variable to a fixed domain and deriving the adjoint do not commute and that, while the gradient information contained in both systems is the same, the second approach results in an adjoint problem with a simpler structure which is therefore easier to implement numerically. This approach is then used to solve a moving boundary optimization problem for our model system.
Controlled porous pattern of anodic aluminum oxide by foils laminate approach.
Wang, Gou-Jen; Peng, Chi-Sheng
2006-04-01
A novel, much simpler, and low-cost method to fabricate the porous pattern of anodic aluminum oxide (AAO), based on an aluminum-foil laminate approach, was developed. During our experiments, it was found that the pores of the AAO on the upper foil grew bi-directionally from both the top and the bottom surfaces. Experimental results further indicate that the upward porous pattern of the upper foil is determined by the surface structure of the bottom surface of the upper foil. The porous pattern of AAO can thus be controlled by a pre-made pattern on the bottom surface. Furthermore, no aluminum (Al) layer-removal process is required in this laminate method.
NASA Technical Reports Server (NTRS)
Hastrup, Rolf; Weinberg, Aaron; McOmber, Robert
1991-01-01
Results of on-going studies to develop navigation/telecommunications network concepts to support future robotic and human missions to Mars are presented. The performance and connectivity improvements provided by the relay network will permit use of simpler, lower performance, and less costly telecom subsystems for the in-situ mission exploration elements. Orbiting relay satellites can serve as effective navigation aids by supporting earth-based tracking as well as providing Mars-centered radiometric data for mission elements approaching, in orbit, or on the surface of Mars. The relay satellite orbits may be selected to optimize navigation aid support and communication coverage for specific mission sets.
TOPICS IN THEORY OF GENERALIZED PARTON DISTRIBUTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radyushkin, Anatoly V.
Several topics in the theory of generalized parton distributions (GPDs) are reviewed. First, we give a brief overview of the basics of the theory of generalized parton distributions and their relationship with simpler phenomenological functions, viz. form factors, parton densities and distribution amplitudes. Then, we discuss recent developments in building models for GPDs that are based on the formalism of double distributions (DDs). Special attention is given to a careful analysis of the singularity structure of DDs. The DD formalism is applied to the construction of model GPDs with a singular Regge behavior. Within the developed DD-based approach, we discuss the structure of GPD sum rules. It is shown that separation of DDs into the so-called "plus" part and the D-term part may be treated as a renormalization procedure for the GPD sum rules. This approach is compared with an alternative prescription based on analytic regularization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian; Maier, Joscha; Sawall, Stefan
2016-07-15
Purpose: To introduce and evaluate an increment matrix approach (IMA) describing the signal statistics of energy-selective photon counting detectors, including spatial–spectral correlations between energy bins of neighboring detector pixels. The importance of the occurring correlations for image-based material decomposition is studied. Methods: An IMA describing the counter increase patterns in a photon counting detector is proposed. This IMA has the potential to decrease the number of required random numbers compared to Monte Carlo simulations by pursuing an approach based on convolutions. To validate and demonstrate the IMA, an approximate semirealistic detector model is provided, simulating a photon counting detector in a simplified manner, e.g., by neglecting count-rate-dependent effects. In this way, the spatial–spectral correlations on the detector level are obtained and fed into the IMA. The importance of these correlations in reconstructed energy bin images and the corresponding detector performance in image-based material decomposition is evaluated using a statistically optimal decomposition algorithm. Results: The results of IMA together with the semirealistic detector model were compared to other models and measurements using the spectral response and the energy bin sensitivity, finding a good agreement. Correlations between the different reconstructed energy bin images could be observed, and turned out to be of weak nature. These correlations were found to be not relevant in image-based material decomposition. An even simpler simulation procedure based on the energy bin sensitivity was tested instead and yielded similar results for the image-based material decomposition task, as long as the fact that one incident photon can increase multiple counters across neighboring detector pixels is taken into account. Conclusions: The IMA is computationally efficient as it required about 10^2 random numbers per ray incident on a detector pixel instead of an estimated 10^8 random numbers per ray as Monte Carlo approaches would need. The spatial–spectral correlations as described by IMA are not important for the studied image-based material decomposition task. Respecting the absolute photon counts and thus the multiple counter increases by a single x-ray photon, the same material decomposition performance could be obtained with a simpler detector description using the energy bin sensitivity.
Analysis of Slug Tests in Formations of High Hydraulic Conductivity
Butler, J.J.; Garnett, E.J.; Healey, J.M.
2003-01-01
A new procedure is presented for the analysis of slug tests performed in partially penetrating wells in formations of high hydraulic conductivity. This approach is a simple, spreadsheet-based implementation of existing models that can be used for analysis of tests from confined or unconfined aquifers. Field examples of tests exhibiting oscillatory and nonoscillatory behavior are used to illustrate the procedure and to compare results with estimates obtained using alternative approaches. The procedure is considerably simpler than recently proposed methods for this hydrogeologic setting. Although the simplifications required by the approach can introduce error into hydraulic-conductivity estimates, this additional error becomes negligible when appropriate measures are taken in the field. These measures are summarized in a set of practical field guidelines for slug tests in highly permeable aquifers.
Billing code algorithms to identify cases of peripheral artery disease from administrative data
Fan, Jin; Arruda-Olson, Adelaide M; Leibson, Cynthia L; Smith, Carin; Liu, Guanghui; Bailey, Kent R; Kullo, Iftikhar J
2013-01-01
Objective: To construct and validate billing code algorithms for identifying patients with peripheral arterial disease (PAD). Methods: We extracted all encounters and line item details including PAD-related billing codes at Mayo Clinic Rochester, Minnesota, between July 1, 1997 and June 30, 2008; 22 712 patients evaluated in the vascular laboratory were divided into training and validation sets. Multiple logistic regression analysis was used to create an integer code score from the training dataset, and this was tested in the validation set. We applied a model-based code algorithm to patients evaluated in the vascular laboratory and compared this with a simpler algorithm (presence of at least one of the ICD-9 PAD codes 440.20–440.29). We also applied both algorithms to a community-based sample (n=4420), followed by a manual review. Results: The logistic regression model performed well in both training and validation datasets (c statistic=0.91). In patients evaluated in the vascular laboratory, the model-based code algorithm provided better negative predictive value. The simpler algorithm was reasonably accurate for identification of PAD status, with lesser sensitivity and greater specificity. In the community-based sample, the sensitivity (38.7% vs 68.0%) of the simpler algorithm was much lower, whereas the specificity (92.0% vs 87.6%) was higher than the model-based algorithm. Conclusions: A model-based billing code algorithm had reasonable accuracy in identifying PAD cases from the community, and in patients referred to the non-invasive vascular laboratory. The simpler algorithm had reasonable accuracy for identification of PAD in patients referred to the vascular laboratory but was significantly less sensitive in a community-based sample. PMID:24166724
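The contrast between the two algorithms can be sketched in a few lines of pandas/scikit-learn. Everything here is a toy: the encounter table, the chart-review labels, the 0.5 threshold, and the fact that the model is fit and evaluated on the same handful of patients are all illustrative assumptions, not the study's data or model.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy encounter-level billing data (codes and labels invented for illustration)
encounters = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3, 4, 4, 5],
    "icd9":       ["440.21", "443.9", "401.9", "440.20", "250.00",
                   "443.9", "440.22", "401.9"],
})
truth = pd.Series({1: 1, 2: 1, 3: 0, 4: 1, 5: 0}, name="pad")   # chart-review label

# Simple algorithm: any code in 440.20-440.29
simple_flag = (encounters.assign(hit=encounters.icd9.str.startswith("440.2"))
               .groupby("patient_id").hit.any().astype(int))

# Model-based algorithm: logistic regression on per-patient code counts
counts = pd.crosstab(encounters.patient_id, encounters.icd9)
model = LogisticRegression().fit(counts, truth.loc[counts.index])
model_flag = (model.predict_proba(counts)[:, 1] >= 0.5).astype(int)

print("simple rule:", simple_flag.to_dict())
print("model-based:", dict(zip(counts.index, model_flag)))
```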
Doing More, Feeling Better: A Behavioural Approach to Helping a Woman Overcome Low Mood and Anxiety
ERIC Educational Resources Information Center
Stuart, Simon; Graham, Christopher D.; Butler, Sarah
2014-01-01
A substantial body of literature exists concerning the adaptation of Cognitive Behavioural Therapy for people with learning disabilities. However, it is possible that cognitive approaches have been prioritised at the expense of behavioural techniques that are simpler and more effective. This case conceptualisation considers a behaviourally focused…
78 FR 16808 - Connect America Fund; High-Cost Universal Service Support
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-19
... to use one regression to generate a single cap on total loop costs for each study area. A single cap.... * * * A preferable, and simpler, approach would be to develop one conditional quantile model for aggregate.... Total universal service support for such carriers was approaching $2 billion annually--more than 40...
Noise transmission and reduction in turboprop aircraft
NASA Astrophysics Data System (ADS)
MacMartin, Douglas G.; Basso, Gordon L.; Leigh, Barry
1994-09-01
There is considerable interest in reducing the cabin noise environment in turboprop aircraft. Various approaches have been considered at deHaviland Inc., including passive tuned-vibration absorbers, speaker-based noise cancellation, and structural vibration control of the fuselage. These approaches will be discussed briefly. In addition to controlling the noise, a method of predicting the internal noise is required both to evaluate potential noise reduction approaches, and to validate analytical design models. Instead of costly flight tests, or carrying out a ground simulation of the propeller pressure field, a much simpler reciprocal technique can be used. A capacitive scanner is used to measure the fuselage vibration response on a deHaviland Dash-8 fuselage, due to an internal noise source. The approach is validated by comparing this reciprocal noise transmission measurement with the direct measurement. The fuselage noise transmission information is then combined with computer predictions of the propeller pressure field data to predict the internal noise at two points.
Carbohydrate recognition: A minimalistic approach to binding
NASA Astrophysics Data System (ADS)
Kubik, Stefan
2012-09-01
Synthetic receptors with properties resembling those of carbohydrate-binding proteins are known, but they are structurally rather complex. Elaborate structures are, however, not always required to bind carbohydrates in water -- much simpler compounds can be just as effective.
Contingency designs for attitude determination of TRMM
NASA Technical Reports Server (NTRS)
Crassidis, John L.; Andrews, Stephen F.; Markley, F. Landis; Ha, Kong
1995-01-01
In this paper, several attitude estimation designs are developed for the Tropical Rainfall Measurement Mission (TRMM) spacecraft. A contingency attitude determination mode is required in the event of a primary sensor failure. The final design utilizes a full sixth-order Kalman filter. However, due to initial software concerns, simpler designs also needed to be investigated. The algorithms presented in this paper can be utilized in place of a full Kalman filter and require less computational burden. These algorithms are based on filtered deterministic approaches and simplified Kalman filter approaches. Comparative performances of all designs are shown by simulating the TRMM spacecraft in mission mode. Comparisons of the simulation results indicate that accuracy comparable to a full Kalman filter design is possible.
Mechatronics by Analogy and Application to Legged Locomotion
NASA Astrophysics Data System (ADS)
Ragusila, Victor
A new design methodology for mechatronic systems, dubbed Mechatronics by Analogy (MbA), is introduced and applied to designing a leg mechanism. The new methodology argues that by establishing a similarity relation between a complex system and a number of simpler models it is possible to design the former using the analysis and synthesis means developed for the latter. The methodology provides a framework for concurrent engineering of complex systems while maintaining the transparency of the system behaviour through making formal analogies between the system and those with more tractable dynamics. The application of the MbA methodology to the design of a monopod robot leg, called the Linkage Leg, is also studied. A series of simulations shows that the dynamic behaviour of the Linkage Leg is similar to that of a combination of a double pendulum and a spring-loaded inverted pendulum, based on which the system kinematic, dynamic, and control parameters can be designed concurrently. The first stage of Mechatronics by Analogy is a method of extracting significant features of system dynamics through simpler models. The goal is to determine a set of simpler mechanisms with similar dynamic behaviour to that of the original system in various phases of its motion. A modular bond-graph representation of the system is determined, and subsequently simplified using two simplification algorithms. The first algorithm determines the relevant dynamic elements of the system for each phase of motion, and the second algorithm finds the simple mechanism described by the remaining dynamic elements. In addition to greatly simplifying the controller for the system, using simpler mechanisms with similar behaviour provides a greater insight into the dynamics of the system. This is seen in the second stage of the new methodology, which concurrently optimizes the simpler mechanisms together with a control system based on their dynamics. Once the optimal configuration of the simpler system is determined, the original mechanism is optimized such that its dynamic behaviour is analogous. It is shown that, if this analogy is achieved, the control system designed based on the simpler mechanisms can be directly implemented on the more complex system, and their dynamic behaviours are close enough for the system performance to be effectively the same. Finally it is shown that, for the employed objective of fast legged locomotion, the proposed methodology achieves a better design than Reduction-by-Feedback, a competing methodology that uses control layers to simplify the dynamics of the system.
Roller Coasters without Differential Equations--A Newtonian Approach to Constrained Motion
ERIC Educational Resources Information Center
Muller, Rainer
2010-01-01
Within the context of Newton's equation, we present a simple approach to the constrained motion of a body forced to move along a specified trajectory. Because the formalism uses a local frame of reference, it is simpler than other methods, making more complicated geometries accessible. No Lagrangian multipliers are necessary to determine the…
Inertial navigation without accelerometers
NASA Astrophysics Data System (ADS)
Boehm, M.
The Kennedy-Thorndike (1932) experiment points to the feasibility of fiber-optic inertial velocimeters, to which state-of-the-art technology could furnish substantial sensitivity and accuracy improvements. Velocimeters of this type would obviate the use of both gyros and accelerometers, and allow inertial navigation to be conducted together with vehicle attitude control, through the derivation of rotation rates from the ratios of the three possible velocimeter pairs. An inertial navigator and reference system based on this approach would probably have both fewer components and simpler algorithms, due to the obviation of the first level of integration in classic inertial navigators.
An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level
Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor
2014-01-01
Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand, but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352
Membrane-Based Characterization of a Gas Component — A Transient Sensor Theory
Lazik, Detlef
2014-01-01
Based on a multi-gas solution-diffusion problem for a dense symmetrical membrane this paper presents a transient theory of a planar, membrane-based sensor cell for measuring gas from both initial conditions: dynamic and thermodynamic equilibrium. Using this theory, the ranges for which previously developed, simpler approaches are valid will be discussed; these approaches are of vital interest for membrane-based gas sensor applications. Finally, a new theoretical approach is introduced to identify varying gas components by arranging sensor cell pairs resulting in a concentration independent gas-specific critical time. Literature data for the N2, O2, Ar, CH4, CO2, H2 and C4H10 diffusion coefficients and solubilities for a polydimethylsiloxane membrane were used to simulate gas specific sensor responses. The results demonstrate the influence of (i) the operational mode; (ii) sensor geometry and (iii) gas matrices (air, Ar) on that critical time. Based on the developed theory the case-specific suitable membrane materials can be determined and both operation and design options for these sensors can be optimized for individual applications. The results of mixing experiments for different gases (O2, CO2) in a gas matrix of air confirmed the theoretical predictions. PMID:24608004
Operational Retrievals of Evapotranspiration: Are we there yet?
NASA Astrophysics Data System (ADS)
Neale, C. M. U.; Anderson, M. C.; Hain, C.; Schull, M.; Isidro, C., Sr.; Goncalves, I. Z.
2017-12-01
Remote sensing based retrievals of evapotranspiration (ET) have progressed significantly over the last two decades with the improvement of methods and algorithms and the availability of multiple satellite sensors with shortwave and thermal infrared bands on polar orbiting platforms. The modeling approaches include simpler vegetation index (VI) based methods, such as the reflectance-based crop coefficient approach coupled with surface reference evapotranspiration estimates to derive actual evapotranspiration of crops, or direct inputs to the Penman-Monteith equation through VI relationships with certain input variables. More complex methods include one-layer or two-layer energy balance approaches that make use of both shortwave and longwave spectral band information to estimate different inputs to the energy balance equation. These models differ mostly in the estimation of sensible heat fluxes. For continental and global scale applications, other satellite-based products such as solar radiation, vegetation leaf area and cover are used as inputs, along with gridded re-analysis weather information. This presentation will review the state of the art in satellite-based evapotranspiration estimation, giving examples of existing efforts to obtain operational ET retrievals over continental and global scales and discussing difficulties and challenges.
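The reflectance-based crop coefficient idea mentioned above amounts to ETa ≈ Kcb(VI) × ETo. The sketch below uses an invented linear NDVI-to-Kcb relation with placeholder coefficients; in practice the relation is crop- and sensor-specific and comes from the literature or local calibration.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red and NIR reflectance."""
    return (nir - red) / (nir + red)

def kcb_from_ndvi(vi, kcb_min=0.15, kcb_max=1.15, vi_min=0.15, vi_max=0.85):
    """Illustrative linear reflectance-based basal crop coefficient.
    Endpoint values are placeholders; real ones are crop- and sensor-specific."""
    frac = np.clip((vi - vi_min) / (vi_max - vi_min), 0.0, 1.0)
    return kcb_min + frac * (kcb_max - kcb_min)

# One pixel through a season: reflectances and reference ET (mm/day), all invented
red = np.array([0.12, 0.08, 0.05, 0.04, 0.07])
nir = np.array([0.20, 0.30, 0.42, 0.48, 0.35])
eto = np.array([4.5, 5.8, 7.1, 7.4, 5.2])

eta = kcb_from_ndvi(ndvi(red, nir)) * eto          # actual crop ET, mm/day
print(np.round(eta, 2))
```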
Controlled-Root Approach To Digital Phase-Locked Loops
NASA Technical Reports Server (NTRS)
Stephens, Scott A.; Thomas, J. Brooks
1995-01-01
Performance tailored more flexibly and directly to satisfy design requirements. Controlled-root approach improved method for analysis and design of digital phase-locked loops (DPLLs). Developed rigorously from first principles for fully digital loops, making DPLL theory and design simpler and more straightforward (particularly for third- or fourth-order DPLL) and controlling performance more accurately in case of high gain.
Atwood’s machine with a massive string
NASA Astrophysics Data System (ADS)
Lemos, Nivaldo A.
2017-11-01
The dynamics of Atwood’s machine with a string of significant mass are described by the Lagrangian formalism, providing an eloquent example of how the Lagrangian approach is a great deal simpler and so much more expedient than the Newtonian treatment.
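Under the usual textbook assumptions (massless pulley of negligible radius, straight inextensible string of mass ms and length L), the Lagrangian treatment gives (m1 + m2 + ms) x'' = g[(m1 - m2) + (ms/L)(2x - L)], where x is the length of string on the m1 side. The short sketch below integrates this equation numerically; the parameter values are arbitrary and the modeling assumptions are mine, not necessarily identical to the paper's.

```python
import numpy as np

# Atwood's machine with a massive string: massless pulley of negligible radius,
# straight inextensible string of mass ms and length L. With x the length of
# string on the m1 side, the Lagrangian treatment gives
#   (m1 + m2 + ms) * x'' = g * [ (m1 - m2) + (ms / L) * (2 * x - L) ]
m1, m2, ms, L, g = 2.0, 1.0, 0.5, 1.0, 9.81        # illustrative values

def accel(x):
    return g * ((m1 - m2) + (ms / L) * (2.0 * x - L)) / (m1 + m2 + ms)

x, v, dt = L / 2, 0.0, 1e-4                        # equal string lengths, at rest
for _ in range(3000):                              # integrate 0.3 s (semi-implicit Euler)
    v += accel(x) * dt
    x += v * dt

print(f"after 0.3 s: x = {x:.4f} m, v = {v:.3f} m/s")
print(f"ideal massless-string acceleration: {g * (m1 - m2) / (m1 + m2):.3f} m/s^2")
```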
Earth observation data based rapid flood-extent modelling for tsunami-devastated coastal areas
NASA Astrophysics Data System (ADS)
Hese, Sören; Heyer, Thomas
2016-04-01
Earth observation (EO)-based mapping and analysis of natural hazards plays a critical role in various aspects of post-disaster aid management. Spatial very high-resolution Earth observation data provide important information for managing post-tsunami activities on devastated land and monitoring re-cultivation and reconstruction. The automatic and fast use of high-resolution EO data for rapid mapping is, however, complicated by high spectral variability in densely populated urban areas and unpredictable textural and spectral land-surface changes. The present paper presents the results of the SENDAI project, which developed an automatic post-tsunami flood-extent modelling concept using RapidEye multispectral satellite data and ASTER Global Digital Elevation Model Version 2 (GDEM V2) data of the eastern coast of Japan (captured after the Tohoku earthquake). The authors developed both a bathtub-modelling approach and a cost-distance approach, and integrated the roughness parameters of different land-use types to increase the accuracy of flood-extent modelling. Overall, the accuracy of the developed models reached 87-92%, depending on the analysed test site. The flood-modelling approach is explained and the results are compared with published approaches. We conclude that the cost-factor-based approach reaches accuracy comparable to published results from hydrological modelling, while being based on a much simpler dataset that is available globally.
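A minimal version of the bathtub idea is sketched below: flood every cell whose elevation lies below the assumed water level and that is hydraulically connected to the sea. The DEM, water level, and 4-neighbour connectivity rule are illustrative; the paper's cost-distance variant and land-use roughness weighting are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def bathtub_flood(dem, water_level, sea_mask):
    """Flood all cells with elevation at or below `water_level` that are connected
    (4-neighbour) to the sea. dem: 2D elevation array; sea_mask: boolean array
    marking open-water cells."""
    below = dem <= water_level
    labels, _ = ndimage.label(below)                    # connected low-lying patches
    flooded_labels = np.unique(labels[sea_mask & below])
    flooded_labels = flooded_labels[flooded_labels != 0]
    return np.isin(labels, flooded_labels)

# Tiny synthetic coastal DEM (metres); column 0 is the sea
dem = np.array([[0.0, 1.2, 2.5, 4.0],
                [0.0, 0.8, 1.5, 3.5],
                [0.0, 0.9, 3.0, 1.0],      # the 1.0 m cell is shielded by a 3.0 m ridge
                [0.0, 1.1, 2.8, 4.2]])
sea = np.zeros_like(dem, dtype=bool)
sea[:, 0] = True

print(bathtub_flood(dem, water_level=2.0, sea_mask=sea).astype(int))
```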
Segmentation by fusion of histogram-based k-means clusters in different color spaces.
Mignotte, Max
2008-05-01
This paper presents a new, simple, and efficient segmentation approach, based on a fusion procedure which aims at combining several segmentation maps associated with simpler partition models in order to finally get a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same simple (K-means based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy aims at combining these segmentation maps with a final clustering procedure using as input features the local histogram of the class labels, previously estimated and associated with each site, for all these initial partitions. This fusion framework remains simple to implement, fast, general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and has been successfully applied on the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared to the state-of-the-art segmentation methods recently proposed in the literature.
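The fusion idea can be sketched as follows: run the same K-means clustering on the image expressed in several color spaces, build a per-pixel local histogram of the resulting class labels, and cluster those histogram features to obtain the fused segmentation. The color spaces, window size, and cluster counts below are illustrative choices, and the sketch omits the refinements of the published method.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def kmeans_labels(img, k, seed=0):
    """Per-pixel K-means label map of a 3-channel image."""
    h, w, _ = img.shape
    km = KMeans(n_clusters=k, n_init=4, random_state=seed)
    return km.fit_predict(img.reshape(-1, 3).astype(np.float32)).reshape(h, w)

def local_label_histograms(labels, k, win=7):
    """Per-pixel histogram of class labels over a win x win neighbourhood."""
    h, w = labels.shape
    feats = np.empty((h, w, k), np.float32)
    box = np.ones((win, win), np.float32)
    for c in range(k):
        feats[..., c] = cv2.filter2D((labels == c).astype(np.float32), -1, box)
    return feats / (win * win)

def fuse_segmentations(bgr, k=6, k_final=6):
    """K-means in several color spaces, then cluster the stacked local
    label-histogram features (a sketch of the histogram-fusion idea)."""
    spaces = [bgr,
              cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV),
              cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB),
              cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)]
    feats = [local_label_histograms(kmeans_labels(s, k), k) for s in spaces]
    stacked = np.concatenate(feats, axis=-1)            # (H, W, k * n_spaces)
    h, w, d = stacked.shape
    fused = KMeans(n_clusters=k_final, n_init=4, random_state=0)
    return fused.fit_predict(stacked.reshape(-1, d)).reshape(h, w)

# usage (assumes an image file is available): seg = fuse_segmentations(cv2.imread("image.png"))
```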
Influence of scattering processes on electron quantum states in nanowires
Galenchik, Vadim; Borzdov, Andrei; Borzdov, Vladimir; Komarov, Fadei
2007-01-01
In the framework of quantum perturbation theory, a self-consistent method for calculating electron scattering rates in nanowires with a one-dimensional electron gas in the quantum limit is worked out. The developed method allows both the collisional broadening and the quantum correlations between scattering events to be taken into account. It is an alternative, per se, to the Fock approximation for the self-energy approach based on Green's function formalism, but is free of the mathematical difficulties typical of the Fock approximation. Moreover, the developed method is simpler than the Fock approximation from the computational point of view. Using the approximation of stable one-particle quantum states, it is proved that the electron scattering processes determine the dependence of the electron energy on its wave vector.
NASA Astrophysics Data System (ADS)
Daniel, M.; Lemonsu, Aude; Déqué, M.; Somot, S.; Alias, A.; Masson, V.
2018-06-01
Most climate models do not explicitly model urban areas and at best describe them as rock covers. Nonetheless, the very high resolutions now reached by regional climate models may justify and require a more realistic parameterization of surface exchanges between the urban canopy and the atmosphere. To quantify the potential impact of urbanization on the regional climate, and to evaluate the benefits of a detailed urban canopy model compared with a simpler approach, a sensitivity study was carried out over France at a 12-km horizontal resolution with the ALADIN-Climate regional model for the 1980-2009 time period. Different descriptions of land use and urban modeling were compared, corresponding to an explicit modeling of cities with the urban canopy model TEB, a conventional and simpler approach representing urban areas as rocks, and a vegetated experiment for which cities are replaced by natural covers. A general evaluation of ALADIN-Climate was first carried out, which showed an overestimation of the incoming solar radiation but satisfying results in terms of precipitation and near-surface temperatures. The sensitivity analysis then highlighted that urban areas had a significant impact on modeled near-surface temperature. A further analysis of a few large French cities indicated that over the 30 years of simulation they all induced a warming effect both at daytime and nighttime, with values up to +1.5 °C for the city of Paris. The urban model also led to a regional warming extending beyond the urban area boundaries. Finally, the comparison with temperature observations available for the Paris area highlighted that the detailed urban canopy model improved the modeling of the urban heat island compared with a simpler approach.
Connell, N A; Goddard, A R; Philp, I; Bray, J
1998-05-01
We describe the processes involved in the development of an information system which can assess how care given by a number of agencies could be monitored by those agencies. In particular, it addresses the problem of sharing information as the boundaries of each agency are crossed. It focuses on the care of one specific patient group--the rehabilitation of elderly patients in the community, which provided an ideal multi-agency setting. It also describes: how a stakeholder participative approach to information system development was undertaken, based in part on the Soft Systems Methodology (SSM) approach (Checkland, 1981, 1990); some of the difficulties encountered in using such an approach; and the ways in which these were addressed. The paper goes on to describe an assessment tool called SCARS (the Southampton Community Ability Rating Scale). It concludes by reflecting on the management lessons arising from this project. It also observes, inter alia, how stakeholders have a strong preference for simpler, non-IT based systems, and comments on the difficulties encountered by stakeholders in attempting to reconcile their perceptions of the needs of their discipline or specialty with a more patient-centred approach of an integrated system.
A new approach to hand-based authentication
NASA Astrophysics Data System (ADS)
Amayeh, G.; Bebis, G.; Erol, A.; Nicolescu, M.
2007-04-01
Hand-based authentication is a key biometric technology with a wide range of potential applications both in industry and government. Traditionally, hand-based authentication is performed by extracting information from the whole hand. To account for hand and finger motion, guidance pegs are employed to fix the position and orientation of the hand. In this paper, we consider a component-based approach to hand-based verification. Our objective is to investigate the discrimination power of different parts of the hand in order to develop a simpler, faster, and possibly more accurate and robust verification system. Specifically, we propose a new approach which decomposes the hand into different regions, corresponding to the fingers and the back of the palm, and performs verification using information from certain parts of the hand only. Our approach operates on 2D images acquired by placing the hand on a flat lighting table. Using a part-based representation of the hand allows the system to compensate for hand and finger motion without using any guidance pegs. To decompose the hand into different regions, we use a robust methodology based on morphological operators which does not require detecting any landmark points on the hand. To capture the geometry of the back of the palm and the fingers in sufficient detail, we employ high-order Zernike moments which are computed using an efficient methodology. The proposed approach has been evaluated on a database of 100 subjects with 10 images per subject, illustrating promising performance. Comparisons with related approaches using the whole hand for verification illustrate the superiority of the proposed approach. Moreover, qualitative comparisons with state-of-the-art approaches indicate that the proposed approach has comparable or better performance.
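As a rough illustration of the peg-free, morphology-based decomposition described above, the sketch below splits a binary hand silhouette into palm and finger regions with a large circular erosion followed by a dilation; the function name, the structuring-element radius, and the use of scipy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def decompose_hand(silhouette, palm_radius=25):
    """Split a boolean hand silhouette into palm and finger regions.

    A large circular erosion removes the fingers; dilating the remnant back
    approximates the palm, and the difference leaves the fingers. The radius
    is a tunable assumption, not a value taken from the paper.
    """
    r = palm_radius
    yy, xx = np.ogrid[-r:r + 1, -r:r + 1]
    disk = (yy ** 2 + xx ** 2) <= r ** 2

    eroded = ndimage.binary_erosion(silhouette, structure=disk)
    palm = ndimage.binary_dilation(eroded, structure=disk) & silhouette
    fingers = silhouette & ~palm

    # Label the finger blobs so each digit can be analyzed separately.
    finger_labels, n_fingers = ndimage.label(fingers)
    return palm, finger_labels, n_fingers
```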
High-Fidelity Dynamic Modeling of Spacecraft in the Continuum--Rarefied Transition Regime
NASA Astrophysics Data System (ADS)
Turansky, Craig P.
The state of the art of spacecraft rarefied aerodynamics seldom accounts for detailed rigid-body dynamics. In part because of computational constraints, simpler models based upon the ballistic and drag coefficients are employed. Of particular interest is the continuum-rarefied transition regime of Earth's thermosphere, where gas dynamic simulation is difficult yet wherein many spacecraft operate. The feasibility of increasing the fidelity of modeling spacecraft dynamics is explored by coupling rarefied aerodynamics with rigid-body dynamics modeling similar to that traditionally used for aircraft in atmospheric flight. Presented is a framework of analysis and guiding principles which capitalizes on the increasing availability of computational methods and resources. Aerodynamic force inputs for modeling spacecraft in two dimensions in a rarefied flow are provided by analytical equations in the free-molecular regime, and by the direct simulation Monte Carlo method in the transition regime. The application of the direct simulation Monte Carlo method to this class of problems is examined in detail with a new code specifically designed for engineering-level rarefied aerodynamic analysis. Time-accurate simulations of two distinct geometries in low thermospheric flight and atmospheric entry are performed, demonstrating non-linear dynamics that cannot be predicted using simpler approaches. The results of this straightforward approach to the aero-orbital coupled-field problem highlight the possibilities for future improvements in drag prediction, control system design, and atmospheric science. Furthermore, a number of challenges for future work are identified in the hope of stimulating the development of a new subfield of spacecraft dynamics.
Wave particle duality, the observer and retrocausality
NASA Astrophysics Data System (ADS)
Narasimhan, Ashok; Kafatos, Menas C.
2017-05-01
We approach wave particle duality, the role of the observer, and the implications for retrocausality by starting with the results of a well-verified quantum experiment. We analyze how some current theoretical approaches interpret these results. We then provide an alternative theoretical framework that is consistent with the observations and in many ways simpler than the usual attempts to account for retrocausality, involving a non-local conscious Observer.
Feedback linearization of singularly perturbed systems based on canonical similarity transformations
NASA Astrophysics Data System (ADS)
Kabanov, A. A.
2018-05-01
This paper discusses the problem of feedback linearization of a singularly perturbed system in a state-dependent coefficient form. The result is based on the introduction of a canonical similarity transformation. The transformation matrix is constructed from separate blocks for the fast and slow parts of the original singularly perturbed system. The transformed singularly perturbed system has a linear canonical form that significantly simplifies the control design problem. The proposed similarity transformation allows the system to be linearized without introducing a virtual output (as is needed for the normal-form method), and the transition from the phase coordinates of the transformed system to the state variables of the original system is simpler. The application of the proposed approach is illustrated through an example.
Multiple Reaction Equilibria--With Pencil and Paper: A Class Problem on Coal Methanation.
ERIC Educational Resources Information Center
Helfferich, Friedrich G.
1989-01-01
Points out a different and much simpler approach for the study of equilibria of multiple and heterogeneous chemical reactions. A simulation on coal methanation is used to teach the technique. An example and the methodology used are provided. (MVL)
Hybrid feedback feedforward: An efficient design of adaptive neural network control.
Pan, Yongping; Liu, Yiqi; Xu, Bin; Yu, Haoyong
2016-04-01
This paper presents an efficient hybrid feedback feedforward (HFF) adaptive approximation-based control (AAC) strategy for a class of uncertain Euler-Lagrange systems. The control structure includes a proportional-derivative (PD) control term in the feedback loop and a radial-basis-function (RBF) neural network (NN) in the feedforward loop, which mimics the human motor learning control mechanism. In the presence of discontinuous friction, a sigmoid-jump-function NN is incorporated to improve control performance. The major difference of the proposed HFF-AAC design from the traditional feedback AAC (FB-AAC) design is that only desired outputs, rather than both tracking errors and desired outputs, are applied as RBF-NN inputs. Yet, such a slight modification leads to several attractive properties of HFF-AAC, including the convenient choice of an approximation domain, the decrease of the number of RBF-NN inputs, and semiglobal practical asymptotic stability dominated by control gains. Compared with previous HFF-AAC approaches, the proposed approach possesses the following two distinctive features: (i) all of the above attractive properties are achieved by a much simpler control scheme; (ii) the bounds of plant uncertainties are not required to be known. Consequently, the proposed approach guarantees a minimum configuration of the control structure and a minimum requirement of plant knowledge for the AAC design, which leads to a sharp decrease of implementation cost in terms of hardware selection, algorithm realization and system debugging. Simulation results have demonstrated that the proposed HFF-AAC can perform as well as or even better than the traditional FB-AAC under much simpler control synthesis and much lower computational cost. Copyright © 2015 Elsevier Ltd. All rights reserved.
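A minimal sketch of the control structure described above, assuming known PD gain matrices and fixed RBF centers and widths (all placeholder values): the feedback term acts on tracking errors, while the feedforward RBF network is fed only with the desired trajectory, as the abstract emphasizes. This is an illustration of the idea, not the paper's controller.

```python
import numpy as np

def rbf_features(qd, qd_dot, qd_ddot, centers, width=1.0):
    """Gaussian RBF features of the desired trajectory only (HFF input choice)."""
    x = np.concatenate([qd, qd_dot, qd_ddot])
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * width ** 2))

def hff_control(q, q_dot, qd, qd_dot, qd_ddot, W, centers, Kp, Kd):
    """PD feedback plus RBF-NN feedforward; gains and weights are placeholders."""
    e, e_dot = qd - q, qd_dot - q_dot
    u_fb = Kp @ e + Kd @ e_dot                                 # feedback loop
    u_ff = W.T @ rbf_features(qd, qd_dot, qd_ddot, centers)    # feedforward loop
    return u_fb + u_ff
```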
Nicolaou, K C; Chen, Pengxi; Zhu, Shugao; Cai, Quan; Erande, Rohan D; Li, Ruofan; Sun, Hongbao; Pulukuri, Kiran Kumar; Rigol, Stephan; Aujay, Monette; Sandoval, Joseph; Gavrilyuk, Julia
2017-11-01
A streamlined total synthesis of the naturally occurring antitumor agents trioxacarcins is described, along with its application to the construction of a series of designed analogues of these complex natural products. Biological evaluation of the synthesized compounds revealed a number of highly potent, and yet structurally simpler, compounds that are effective against certain cancer cell lines, including a drug-resistant line. A novel one-step synthesis of anthraquinones and chloro anthraquinones from simple ketone precursors and phenylselenyl chloride is also described. The reported work, featuring novel chemistry and cascade reactions, has potential applications in cancer therapy, including targeted approaches as in antibody-drug conjugates.
Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.
Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J
2012-09-01
Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples. Copyright © 2012 Elsevier Inc. All rights reserved.
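To make the elimination idea concrete, the sketch below applies a lexicographic Gröbner basis to a toy linear two-compartment model with output y = x1, treating derivatives as indeterminates; the model, symbol names, and parameters are invented for illustration and are not the paper's examples.

```python
import sympy as sp

# Toy two-compartment model, output y = x1; derivatives treated as symbols.
y, yd, ydd, u, ud, x2, x2d = sp.symbols('y yd ydd u ud x2 x2d')
k01, k21, k12, k02 = sp.symbols('k01 k21 k12 k02')

f1 = yd + (k01 + k21) * y - k12 * x2 - u        # x1' equation with x1 = y
f2 = x2d - k21 * y + (k02 + k12) * x2           # x2' equation
f3 = ydd + (k01 + k21) * yd - k12 * x2d - ud    # time derivative of f1

# Ranking the unobserved state (and its derivative) highest in a lex order
# eliminates it; the basis then contains the input-output equation.
G = sp.groebner([f1, f2, f3], x2d, x2, ydd, yd, y, ud, u, order='lex')
io_eqs = [g for g in G.exprs if not g.has(x2) and not g.has(x2d)]
print(io_eqs)   # e.g. ydd + (...)*yd + (...)*y - ud - (...)*u
```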
Towards Run-time Assurance of Advanced Propulsion Algorithms
NASA Technical Reports Server (NTRS)
Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy
2014-01-01
This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.
Lu, Xin; Soto, Marcelo A; Thévenaz, Luc
2017-07-10
A method based on coherent Rayleigh scattering that distinctly evaluates temperature and strain is proposed and experimentally demonstrated for distributed optical fiber sensing. Combining conventional phase-sensitive optical time-domain reflectometry (ϕOTDR) and ϕOTDR-based birefringence measurements, independent distributed temperature and strain profiles are obtained along a polarization-maintaining fiber. A theoretical analysis, supported by experimental data, indicates that the proposed system for temperature-strain discrimination is intrinsically better conditioned than an equivalent existing approach that combines classical Brillouin sensing with Brillouin dynamic gratings. This is due to the higher sensitivity of coherent Rayleigh scattering compared to Brillouin scattering, thus offering better performance and lower temperature-strain uncertainties in the discrimination. Compared to the Brillouin-based approach, the ϕOTDR-based system proposed here requires access to only one fiber end, and a much simpler experimental layout. Experimental results validate the full discrimination of temperature and strain along a 100 m-long elliptical-core polarization-maintaining fiber with measurement uncertainties of ~40 mK and ~0.5 με, respectively. These values agree very well with the theoretically expected measurand resolutions.
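The discrimination itself amounts to inverting a 2x2 sensitivity matrix, and the conditioning argument can be checked numerically as sketched below; the coefficient values and measured shifts are placeholders, not the calibrated sensitivities reported in the paper.

```python
import numpy as np

# Two measured spectral shifts respond (approximately) linearly to the
# two measurands:  [dnu_phase, dnu_biref]^T = K @ [dT, d_eps]^T.
K = np.array([[1.20, 0.15],     # phase-sensitive OTDR sensitivities (placeholder)
              [0.90, 0.80]])    # birefringence-based sensitivities (placeholder)

measured = np.array([2.1, 3.4])             # example spectral shifts
dT, d_eps = np.linalg.solve(K, measured)    # discriminated temperature and strain

# A smaller condition number means measurement noise is amplified less,
# which is the sense in which one sensor combination is "better conditioned".
print(dT, d_eps, np.linalg.cond(K))
```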
NASA Technical Reports Server (NTRS)
Padavala, Satyasrinivas; Palazzolo, Alan B.; Vallely, Pat; Ryan, Steve
1994-01-01
An improved dynamic analysis for liquid annular seals with arbitrary profile, based on a method first proposed by Nelson and Nguyen, is presented. An improved first-order solution that incorporates a continuous interpolation of perturbed quantities in the circumferential direction is presented. The original method uses an approximation scheme for circumferential gradients based on Fast Fourier Transforms (FFT). A simpler scheme based on cubic splines is found to be computationally more efficient, with better convergence at higher eccentricities. A new approach of computing dynamic coefficients based on an externally specified load is introduced. This improved analysis is extended to account for arbitrarily varying seal profiles in both the axial and circumferential directions. An example case of an elliptical seal with varying degrees of axial curvature is analyzed. A case study based on actual operating clearances of an interstage seal of the Space Shuttle Main Engine High Pressure Oxygen Turbopump is presented.
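The FFT-versus-spline choice for circumferential gradients can be illustrated on any smooth periodic profile, as in the sketch below; the clearance function and sample count are arbitrary stand-ins, not the seal geometry analyzed in the report.

```python
import numpy as np
from scipy.interpolate import CubicSpline

n = 64
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = 1.0 + 0.3 * np.cos(theta) + 0.05 * np.cos(3 * theta)    # example periodic profile
exact = -0.3 * np.sin(theta) - 0.15 * np.sin(3 * theta)     # analytic derivative

# Spectral (FFT-based) estimate of the circumferential gradient.
ik = 1j * np.fft.fftfreq(n, d=1.0 / n)
d_fft = np.real(np.fft.ifft(ik * np.fft.fft(h)))

# Periodic cubic-spline estimate from the same samples.
cs = CubicSpline(np.append(theta, 2.0 * np.pi), np.append(h, h[0]), bc_type='periodic')
d_spline = cs(theta, 1)

print(np.max(np.abs(d_fft - exact)), np.max(np.abs(d_spline - exact)))
```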
A CNN Regression Approach for Real-Time 2D/3D Registration.
Shun Miao; Wang, Z Jane; Rui Liao
2016-05-01
In this paper, we present a Convolutional Neural Network (CNN) regression approach to address the two major limitations of existing intensity-based 2-D/3-D registration technology: 1) slow computation and 2) small capture range. Different from optimization-based methods, which iteratively optimize the transformation parameters over a scalar-valued metric function representing the quality of the registration, the proposed method exploits the information embedded in the appearances of the digitally reconstructed radiograph and X-ray images, and employs CNN regressors to directly estimate the transformation parameters. An automatic feature extraction step is introduced to calculate 3-D pose-indexed features that are sensitive to the variables to be regressed while robust to other factors. The CNN regressors are then trained for local zones and applied in a hierarchical manner to break down the complex regression task into multiple simpler sub-tasks that can be learned separately. Weight sharing is furthermore employed in the CNN regression model to reduce the memory footprint. The proposed approach has been quantitatively evaluated on 3 potential clinical applications, demonstrating its significant advantage in providing highly accurate real-time 2-D/3-D registration with a significantly enlarged capture range when compared to intensity-based methods.
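A minimal PyTorch sketch of the kind of zone-wise regressor the abstract describes is given below: a small CNN maps a two-channel input (e.g., an X-ray patch and the corresponding DRR) to a few transformation parameters, and one such regressor would be trained per parameter zone and applied hierarchically. Layer sizes, channel counts, and the two-parameter output are illustrative guesses, not the architecture of the paper.

```python
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    """Tiny CNN regressing a zone's transformation parameters from a
    2-channel image pair. All sizes here are illustrative."""
    def __init__(self, n_params=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, n_params),
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# Hierarchical use: one small regressor per parameter zone, applied in sequence.
model = PoseRegressor(n_params=2)
delta = model(torch.randn(1, 2, 64, 64))    # predicted parameter update
```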
Blur identification by multilayer neural network based on multivalued neurons.
Aizenberg, Igor; Paliy, Dmitriy V; Zurada, Jacek M; Astola, Jaakko T
2008-05-01
A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific distinguishing features. Its backpropagation learning algorithm is derivative-free. The functionality of the MLMVN is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping enable complex problems to be modeled using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones.
Rethinking 'rational imitation' in 14-month-old infants: a perceptual distraction approach.
Beisert, Miriam; Zmyj, Norbert; Liepelt, Roman; Jung, Franziska; Prinz, Wolfgang; Daum, Moritz M
2012-01-01
In their widely noticed study, Gergely, Bekkering, and Király (2002) showed that 14-month-old infants imitated an unusual action only if the model freely chose to perform this action and not if the choice of the action could be ascribed to external constraints. They attributed this kind of selective imitation to the infants' capacity of understanding the principle of rational action. In the current paper, we present evidence that a simpler approach of perceptual distraction may be more appropriate to explain their results. When we manipulated the saliency of context stimuli in the two original conditions, the results were exactly opposite to what rational imitation predicts. Based on these findings, we reject the claim that the notion of rational action plays a key role in selective imitation in 14-month-olds.
Segmentation of discrete vector fields.
Li, Hongyu; Chen, Wenbin; Shen, I-Fan
2006-01-01
In this paper, we propose an approach for 2D discrete vector field segmentation based on the Green function and normalized cut. The method is inspired by discrete Hodge Decomposition such that a discrete vector field can be broken down into three simpler components, namely, curl-free, divergence-free, and harmonic components. We show that the Green Function Method (GFM) can be used to approximate the curl-free and the divergence-free components to achieve our goal of the vector field segmentation. The final segmentation curves that represent the boundaries of the influence region of singularities are obtained from the optimal vector field segmentations. These curves are composed of piecewise smooth contours or streamlines. Our method is applicable to both linear and nonlinear discrete vector fields. Experiments show that the segmentations obtained using our approach essentially agree with human perceptual judgement.
An FPGA-Based People Detection System
NASA Astrophysics Data System (ADS)
Nair, Vinod; Laprise, Pierre-Olivier; Clark, James J.
2005-12-01
This paper presents an FPGA-based system for detecting people from video. The system is designed to use JPEG-compressed frames from a network camera. Unlike previous approaches that use techniques such as background subtraction and motion detection, we use a machine-learning-based approach to train an accurate detector. We address the hardware design challenges involved in implementing such a detector, along with JPEG decompression, on an FPGA. We also present an algorithm that efficiently combines JPEG decompression with the detection process. This algorithm carries out the inverse DCT step of JPEG decompression only partially. Therefore, it is computationally more efficient and simpler to implement, and it takes up less space on the chip than the full inverse DCT algorithm. The system is demonstrated on an automated video surveillance application and the performance of both hardware and software implementations is analyzed. The results show that the system can detect people accurately at a rate of about [InlineEquation not available: see fulltext.] frames per second on a Virtex-II 2V1000 using a MicroBlaze processor running at [InlineEquation not available: see fulltext.], communicating with dedicated hardware over FSL links.
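The computational saving of a partial inverse DCT can be sketched as follows: only the top-left low-frequency coefficients of each 8x8 JPEG block are inverse-transformed, yielding a downscaled tile directly. The block size, the number of kept coefficients, and the use of scipy are illustrative; the paper's FPGA implementation is, of course, different.

```python
import numpy as np
from scipy.fft import idctn

def partial_idct_block(coeffs8x8, keep=4):
    """Inverse-transform an 8x8 DCT block using only its top-left keep x keep
    coefficients, producing a keep x keep downscaled tile."""
    # Rescale so a constant (DC-only) block keeps its level under the
    # orthonormal transform when the block size shrinks from 8 to `keep`.
    low = coeffs8x8[:keep, :keep] * (keep / 8.0)
    return idctn(low, norm='ortho')

block = np.random.randn(8, 8)       # stand-in for dequantized DCT coefficients
tile = partial_idct_block(block)    # 4x4 pixels instead of 8x8
```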
An experimental investigation of the flow physics of high-lift systems
NASA Technical Reports Server (NTRS)
Thomas, Flint O.; Nelson, R. C.
1995-01-01
This progress report, a series of viewgraphs, outlines experiments on the flow physics of confluent boundary layers for high-lift systems. The design objective is to achieve high-lift systems with improved C_Lmax for landing approach and improved take-off L/D while simultaneously reducing acquisition and maintenance costs; in effect, to achieve improved performance with simpler designs. The research objectives include: establishing the role of confluent boundary layer flow physics in high-lift production; contrasting the confluent boundary layer structure for optimum and non-optimum C_L cases; forming a high-quality, detailed archival database for CFD/modeling; and examining the role of relaminarization and streamline curvature.
NASA Astrophysics Data System (ADS)
Gelmini, A.; Gottardi, G.; Moriyama, T.
2017-10-01
This work presents an innovative computational approach for the inversion of wideband ground penetrating radar (GPR) data. The retrieval of the dielectric characteristics of sparse scatterers buried in a lossy soil is performed by combining a multi-task Bayesian compressive sensing (MT-BCS) solver and a frequency hopping (FH) strategy. The developed methodology is able to benefit from the regularization capabilities of the MT-BCS as well as to exploit the multi-chromatic informative content of GPR measurements. A set of numerical results is reported in order to assess the effectiveness of the proposed GPR inverse scattering technique, as well as to compare it to a simpler single-task implementation.
Kamensky, David; Evans, John A; Hsu, Ming-Chen; Bazilevs, Yuri
2017-11-01
This paper discusses a method of stabilizing Lagrange multiplier fields used to couple thin immersed shell structures and surrounding fluids. The method retains essential conservation properties by stabilizing only the portion of the constraint orthogonal to a coarse multiplier space. This stabilization can easily be applied within iterative methods or semi-implicit time integrators that avoid directly solving a saddle point problem for the Lagrange multiplier field. Heart valve simulations demonstrate applicability of the proposed method to 3D unsteady simulations. An appendix sketches the relation between the proposed method and a high-order-accurate approach for simpler model problems.
Simple, Flexible, Trigonometric Taper Equations
Charles E. Thomas; Bernard R. Parresol
1991-01-01
There have been numerous approaches to modeling stem form in recent decades. The majority have concentrated on the simpler coniferous bole form and have become increasingly complex mathematical expressions. Use of trigonometric equations provides a simple expression of taper that is flexible enough to fit both coniferous and hard-wood bole forms. As an illustration, we...
Performance evaluation of objective quality metrics for HDR image compression
NASA Astrophysics Data System (ADS)
Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic
2014-09-01
Due to the much larger luminance and contrast characteristics of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists in computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values, but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim at providing a better comprehension of the limits and the potentialities of this approach by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
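The "simpler approach" evaluated here reduces to applying a standard metric after a perceptual encoding of luminance, as in the sketch below; the log encoding and luminance range are placeholders for the perceptually uniform encoding actually used in such studies.

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def perceptual_encode(lum, lum_min=0.005, lum_max=4000.0):
    """Placeholder perceptual encoding: log-luminance mapped to [0, 1].
    A dedicated perceptually uniform encoding would be used in practice."""
    lum = np.clip(lum, lum_min, lum_max)
    return (np.log10(lum) - np.log10(lum_min)) / (np.log10(lum_max) - np.log10(lum_min))

ref = np.random.uniform(0.01, 3000.0, (64, 64))          # HDR luminance (cd/m^2)
test = ref * np.random.normal(1.0, 0.02, ref.shape)      # distorted version
print(psnr(perceptual_encode(ref), perceptual_encode(test)))
```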
High-Order Central WENO Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new third- and fifth-order Godunov-type central schemes for approximating solutions of the Hamilton-Jacobi (HJ) equation in an arbitrary number of space dimensions. These are the first central schemes for approximating solutions of the HJ equations with an order of accuracy that is greater than two. In two space dimensions we present two versions for the third-order scheme: one scheme that is based on a genuinely two-dimensional Central WENO reconstruction, and another scheme that is based on a simpler dimension-by-dimension reconstruction. The simpler dimension-by-dimension variant is then extended to a multi-dimensional fifth-order scheme. Our numerical examples in one, two and three space dimensions verify the expected order of accuracy of the schemes.
Regular paths in SparQL: querying the NCI Thesaurus.
Detwiler, Landon T; Suciu, Dan; Brinkley, James F
2008-11-06
OWL, the Web Ontology Language, provides syntax and semantics for representing knowledge for the semantic web. Many of the constructs of OWL have a basis in the field of description logics. While the formal underpinnings of description logics have led to a highly computable language, this has come at a cognitive cost. OWL ontologies are often unintuitive to readers lacking a strong logic background. In this work we describe GLEEN, a regular path expression library, which extends the RDF query language SparQL to support complex path expressions over OWL and other RDF-based ontologies. We illustrate the utility of GLEEN by showing how it can be used in a query-based approach to defining simpler, more intuitive views of OWL ontologies. In particular we show how relatively simple GLEEN-enhanced SparQL queries can create views of the OWL version of the NCI Thesaurus that match the views generated by the web-based NCI browser.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobina, C.B.; Silva, E.R.C. da; Lima, A.M.N.
This paper investigates the PWM operation of a four-switch three-phase inverter (FSTPI) in the case of digital implementation. Different switching sequence strategies for vector control are described and a digital scalar method is also presented. The influence of different switching patterns on the output voltage symmetry, current waveform and switching frequency is examined. The results obtained by employing the vector and scalar strategies are compared and a relationship between them is established. This comparison is based on an analytical study and is corroborated both by computer simulations and by experimental results. The vector approach eases the understanding and analysis of the FSTPI, as well as the choice of a PWM pattern. However, similar results may be obtained through the scalar approach, which has a simpler implementation. Experimental results of the use of the FSTPI and digital PWM to control an induction motor are presented.
NASA Technical Reports Server (NTRS)
Wen, John T.; Kreutz-Delgado, Kenneth; Bayard, David S.
1992-01-01
A new class of joint-level control laws for all-revolute robot arms is introduced. The analysis is similar to a recently proposed energy-like Liapunov function approach, except that the closed-loop potential function is shaped in accordance with the underlying joint space topology. This approach leads to a much simpler analysis and to a new class of control designs which guarantee both global asymptotic stability and local exponential stability. When Coulomb and viscous friction and parameter uncertainty are present as model perturbations, a sliding mode-like modification of the control law results in a robustness-enhancing outer loop. Adaptive control is formulated within the same framework. A linear-in-the-parameters formulation is adopted and globally asymptotically stable adaptive control laws are derived by simply replacing unknown model parameters by their estimates (i.e., certainty equivalence adaptation).
Na, Hyungjoo; Eun, Youngkee; Kim, Min-Ook; Choi, Jungwook; Kim, Jongbaeg
2015-01-01
We report a unique approach for the patterned growth of single-crystalline tungsten oxide (WOx) nanowires based on localized stress induction. Ions implanted into the desired growth area of WOx thin films locally increase the compressive stress, leading to nanowire growth at lower temperatures (600 °C vs. 750–900 °C) than for equivalent non-implanted samples. Nanowires were successfully grown on microscale patterns using wafer-level ion implantation and on nanometer-scale patterns using a focused ion beam (FIB). Experimental results show that nanowire growth is influenced by a number of factors including the dose of the implanted ions and their atomic radius. The implanted-ion-assisted, stress-induced method proposed here for the patterned growth of WOx nanowires is simpler than alternative approaches and enhances the compatibility of the process by reducing the growth temperature. PMID:26666843
Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill
2012-01-01
In video analytics, robust observation detection is very important as the content of the videos varies greatly, especially for tracking implementations. In contrast to the image processing field, problems of blurring, moderate deformation, low-illumination surroundings, illumination change and homogeneous texture are normally encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness in complex scenes by fusing both feature- and template-based recognition methods. While we believe that feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two methods of PBOD, the deterministic and the probabilistic approach, have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a two-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the second approach, patch matching is done probabilistically by modelling the histograms of the patches with Poisson distributions for both RGB and HSV colour models. Then, maximum likelihood is applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach with an average distance error of 10.03% compared with 21.03%. This algorithm is best implemented as a complement to other, simpler detection methods due to its heavy processing requirements. PMID:23202226
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsumoto, Munehisa; Akai, Hisazumi; Doi, Shotaro
2016-06-07
A classical spin model derived ab initio for rare-earth-based permanent magnet compounds is presented. Our target compound, NdFe12N, is a material that goes beyond today's champion magnet compound Nd2Fe14B in its intrinsic magnetic properties with a simpler crystal structure. Calculated temperature dependence of the magnetization and the anisotropy field agrees with the latest experimental results in the leading order. Having put the realistic observables under our numerical control, we propose that engineering 5d-electron-mediated indirect exchange coupling between 4f-electrons in Nd and 3d-electrons from Fe would most critically help enhance the material's utility over the operation-temperature range.
Molitor, John
2012-03-01
Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.
Haeufle, D F B; Günther, M; Wunner, G; Schmitt, S
2014-01-01
In biomechanics and biorobotics, muscles are often associated with reduced movement control effort and simplified control compared to technical actuators. This is based on evidence that the nonlinear muscle properties positively influence movement control. It remains open, however, how to quantify the simplicity aspect of control effort and compare it between systems. Physical measures, such as energy consumption, stability, or jerk, have already been applied to compare biological and technical systems. Here a physical measure of control effort based on information entropy is presented. The idea is that control is simpler if a specific movement is generated with less processed sensor information, depending on the control scheme and the physical properties of the systems being compared. By calculating the Shannon information entropy of all sensor signals required for control, an information cost function can be formulated that allows the comparison of models of biological and technical control systems. Applied, as an example, to (bio-)mechanical models of hopping, the method reveals that the information required for generating hopping with a muscle driven by a simple reflex control scheme is only I=32 bits versus I=660 bits with a DC motor and a proportional-differential controller. This approach to quantifying control effort captures the simplicity of a control scheme and can be used to compare completely different actuators and control approaches.
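A bare-bones sketch of the entropy-based cost is shown below: quantize each sensor signal the controller actually reads and sum the Shannon entropies. The quantization into 64 bins and the example signals are assumptions for illustration; the paper defines the measure for its specific hopping models.

```python
import numpy as np

def shannon_entropy_bits(signal, n_bins=64):
    """Shannon entropy (in bits) of a quantized sensor signal."""
    counts, _ = np.histogram(signal, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def control_information_cost(sensor_signals, n_bins=64):
    """Total information (bits) processed by a controller reading these signals."""
    return sum(shannon_entropy_bits(s, n_bins) for s in sensor_signals)

t = np.linspace(0.0, 1.0, 1000)
reflex_sensors = [np.sin(2 * np.pi * 3 * t)]                                # one feedback signal
motor_sensors = [np.sin(2 * np.pi * 3 * t), np.cos(2 * np.pi * 3 * t), t]   # richer sensing
print(control_information_cost(reflex_sensors), control_information_cost(motor_sensors))
```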
NASA Astrophysics Data System (ADS)
Kwon, Chung-Jin; Kim, Sung-Joong; Han, Woo-Young; Min, Won-Kyoung
2005-12-01
The rotor position and speed estimation of a permanent-magnet synchronous motor (PMSM) is dealt with. By measuring the phase voltages and currents of the PMSM drive, two diagonally recurrent neural network (DRNN) based observers, a neural current observer and a neural velocity observer, were developed. The DRNN, which has self-feedback of the hidden neurons, ensures that the outputs of the DRNN contain the whole past information of the system even if the inputs of the DRNN are only the present states and inputs of the system. Thus the structure of the DRNN may be simpler than that of feedforward and fully recurrent neural networks. If the backpropagation method is used for training the DRNN, the problem of slow convergence arises. In order to reduce this problem, a recursive prediction error (RPE) based learning method for the DRNN is presented. The simulation results show that the proposed approach gives a good estimation of rotor speed and position, and that RPE-based training requires a shorter computation time than backpropagation-based training.
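The defining feature of the DRNN, self-feedback restricted to the diagonal, can be written in a few lines, as in the numpy sketch below; the layer sizes, tanh activation, and random initialization are illustrative and not tied to the observers in the paper.

```python
import numpy as np

class DiagonalRecurrentLayer:
    """Hidden layer whose neurons feed back only onto themselves."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = 0.1 * rng.standard_normal((n_hidden, n_in))
        self.w_self = 0.1 * rng.standard_normal(n_hidden)   # one self-feedback weight per neuron
        self.h = np.zeros(n_hidden)

    def step(self, x):
        # h_j(k) = f( w_self_j * h_j(k-1) + W_in_j . x(k) )
        self.h = np.tanh(self.w_self * self.h + self.W_in @ x)
        return self.h

layer = DiagonalRecurrentLayer(n_in=4, n_hidden=8)
out = layer.step(np.array([0.2, -0.1, 0.05, 0.3]))   # e.g. measured voltages/currents
```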
Conditional Subspace Clustering of Skill Mastery: Identifying Skills that Separate Students
ERIC Educational Resources Information Center
Nugent, Rebecca; Ayers, Elizabeth; Dean, Nema
2009-01-01
In educational research, a fundamental goal is identifying which skills students have mastered, which skills they have not, and which skills they are in the process of mastering. As the number of examinees, items, and skills increases, the estimation of even simple cognitive diagnosis models becomes difficult. We adopt a faster, simpler approach:…
Approximate analysis of thermal convection in a crystal-growth cell for Spacelab 3
NASA Technical Reports Server (NTRS)
Dressler, R. F.
1982-01-01
The transient and steady thermal convection in microgravity is described. The approach is applicable to many three dimensional flows in containers of various shapes with various thermal gradients imposed. The method employs known analytical solutions to two dimensional thermal flows in simpler geometries, and does not require recourse to numerical calculations by computer.
Colorectal Cancer Deaths Attributable to Nonuse of Screening in the United States
Meester, Reinier G.S.; Doubeni, Chyke A.; Lansdorp-Vogelaar, Iris; Goede, S.L.; Levin, Theodore R.; Quinn, Virginia P.; van Ballegooijen, Marjolein; Corley, Douglas A.; Zauber, Ann G.
2015-01-01
Purpose: Screening is a major contributor to colorectal cancer (CRC) mortality reductions in the U.S., but is underutilized. We estimated the fraction of CRC deaths attributable to nonuse of screening to demonstrate the potential benefits from targeted interventions. Methods: The established MISCAN-colon microsimulation model was used to estimate the population attributable fraction (PAF) in people aged ≥50 years. The model incorporates long-term patterns and effects of screening by age and type of screening test. PAF for 2010 was estimated using currently available data on screening uptake; PAF was also projected assuming constant future screening rates to incorporate lagged effects from past increases in screening uptake. We also computed PAF using Levin's formula to gauge how this simpler approach differs from the model-based approach. Results: There were an estimated 51,500 CRC deaths in 2010, about 63% (N∼32,200) of which were attributable to non-screening. The PAF decreases slightly to 58% in 2020. Levin's approach yielded a considerably more conservative PAF of 46% (N∼23,600) for 2010. Conclusions: The majority of current U.S. CRC deaths are attributable to non-screening. This underscores the potential benefits of increasing screening uptake in the population. Traditional methods of estimating PAF underestimated screening effects compared with model-based approaches. PMID:25721748
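For reference, Levin's formula used as the simpler comparator is a one-liner; the prevalence of non-screening and the relative risk below are placeholders, not the study's inputs.

```python
def levin_paf(p_exposed, relative_risk):
    """Levin's formula: PAF = p(RR - 1) / (p(RR - 1) + 1)."""
    excess = p_exposed * (relative_risk - 1.0)
    return excess / (excess + 1.0)

# Placeholder inputs: 40% of the population unscreened, relative risk of CRC
# death of about 3 for the unscreened group (illustrative values only).
print(levin_paf(0.40, 3.0))   # ~0.44
```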
The NASA/Baltimore Applications Project: An experiment in technology transfer
NASA Technical Reports Server (NTRS)
Golden, T. S.
1981-01-01
Conclusions drawn from the experiment thus far are presented. The problems of a large city most often do not require highly sophisticated solutions; the simpler the solution, the better. A problem focused approach is a greater help to the city than a product focused approach. Most problem situations involve several individuals or organized groups within the city. Mutual trust and good interpersonal relationships between the technologist and the administrator is as important for solving problems as technological know-how.
Lobach, Iryna; Fan, Ruzong; Carroll, Raymond J.
2011-01-01
With the advent of dense single nucleotide polymorphism genotyping, population-based association studies have become the major tools for identifying human disease genes and for fine gene mapping of complex traits. We develop a genotype-based approach for association analysis of case-control studies of gene-environment interactions in the case when environmental factors are measured with error and genotype data are available on multiple genetic markers. To directly use the observed genotype data, we propose two genotype-based models: genotype effect and additive effect models. Our approach offers several advantages. First, the proposed risk functions can directly incorporate the observed genotype data while modeling the linkage disequilibrium information in the regression coefficients, thus eliminating the need to infer haplotype phase. Compared with the haplotype-based approach, an estimating procedure based on the proposed methods can be much simpler and significantly faster. In addition, there is no potential risk due to haplotype phase estimation. Further, by fitting the proposed models, it is possible to analyze the risk alleles/variants of complex diseases, including their dominant or additive effects. To model measurement error, we adopt the pseudo-likelihood method by Lobach et al. [2008]. Performance of the proposed method is examined using simulation experiments. An application of our method is illustrated using a population-based case-control study of the association of calcium intake with the risk of colorectal adenoma development. PMID:21031455
Berton, Paula; Lana, Nerina B; Ríos, Juan M; García-Reyes, Juan F; Altamirano, Jorgelina C
2016-01-28
Green chemistry principles for developing methodologies have gained attention in analytical chemistry in recent decades. A growing number of analytical techniques have been proposed for determination of organic persistent pollutants in environmental and biological samples. In this light, the current review aims to present state-of-the-art sample preparation approaches based on green analytical principles proposed for the determination of polybrominated diphenyl ethers (PBDEs) and metabolites (OH-PBDEs and MeO-PBDEs) in environmental and biological samples. Approaches to lower the solvent consumption and accelerate the extraction, such as pressurized liquid extraction, microwave-assisted extraction, and ultrasound-assisted extraction, are discussed in this review. Special attention is paid to miniaturized sample preparation methodologies and strategies proposed to reduce organic solvent consumption. Additionally, extraction techniques based on alternative solvents (surfactants, supercritical fluids, or ionic liquids) are also commented in this work, even though these are scarcely used for determination of PBDEs. In addition to liquid-based extraction techniques, solid-based analytical techniques are also addressed. The development of greener, faster and simpler sample preparation approaches has increased in recent years (2003-2013). Among green extraction techniques, those based on the liquid phase predominate over those based on the solid phase (71% vs. 29%, respectively). For solid samples, solvent assisted extraction techniques are preferred for leaching of PBDEs, and liquid phase microextraction techniques are mostly used for liquid samples. Likewise, green characteristics of the instrumental analysis used after the extraction and clean-up steps are briefly discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Competitive control of cognition in rhesus monkeys.
Kowaguchi, Mayuka; Patel, Nirali P; Bunnell, Megan E; Kralik, Jerald D
2016-12-01
The brain has evolved different approaches to solve problems, but the mechanisms that determine which approach to take remain unclear. One possibility is that control progresses from simpler processes, such as associative learning, to more complex ones, such as relational reasoning, when the simpler ones prove inadequate. Alternatively, control could be based on competition between the processes. To test between these possibilities, we posed the support problem to rhesus monkeys using a tool-use paradigm, in which subjects could pull an object (the tool) toward themselves to obtain an otherwise out-of-reach goal item. We initially provided one problem exemplar as a choice: for the correct option, a food item placed on the support tool; for the incorrect option, the food item placed off the tool. Perceptual cues were also correlated with outcome: e.g., red, triangular tool correct, blue, rectangular tool incorrect. Although the monkeys simply needed to touch the tool to register a response, they immediately pulled it, reflecting a relational reasoning process between themselves and another object (R_self-other), rather than an associative one between the arbitrary touch response and reward (A_resp-reward). Probe testing then showed that all four monkeys used a conjunction of perceptual features to select the correct option, reflecting an associative process between stimuli and reward (A_stim-reward). We then added a second problem exemplar and subsequent testing revealed that the monkeys switched to using the on/off relationship, reflecting a relational reasoning process between two objects (R_other-other). Because behavior appeared to reflect R_self-other rather than A_resp-reward, and A_stim-reward prior to R_other-other, our results suggest that cognitive processes are selected via competitive control dynamics. Copyright © 2016 Elsevier B.V. All rights reserved.
Machine learning approaches to the social determinants of health in the health and retirement study.
Seligman, Benjamin; Tuljapurkar, Shripad; Rehkopf, David
2018-04-01
Social and economic factors are important predictors of health and of recognized importance for health systems. However, machine learning, used elsewhere in the biomedical literature, has not been extensively applied to study relationships between society and health. We investigate how machine learning may add to our understanding of social determinants of health using data from the Health and Retirement Study. A linear regression of age and gender, and a parsimonious theory-based regression additionally incorporating income, wealth, and education, were used to predict systolic blood pressure, body mass index, waist circumference, and telomere length. Prediction, fit, and interpretability were compared across four machine learning methods: linear regression, penalized regressions, random forests, and neural networks. All models had poor out-of-sample prediction. Most machine learning models performed similarly to the simpler models. However, neural networks greatly outperformed the three other methods. Neural networks also had good fit to the data (R² between 0.4 and 0.6, versus <0.3 for all others). Across machine learning models, nine variables were frequently selected or highly weighted as predictors: dental visits, current smoking, self-rated health, serial-seven subtractions, probability of receiving an inheritance, probability of leaving an inheritance of at least $10,000, number of children ever born, African-American race, and gender. Some of the machine learning methods do not improve prediction or fit beyond simpler models; however, neural networks performed well. The predictors identified across models suggest underlying social factors that are important predictors of biological indicators of chronic disease, and that the non-linear and interactive relationships between variables fundamental to the neural network approach may be important to consider.
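The comparison design (same predictors, several model families, out-of-sample fit) can be reproduced in outline with scikit-learn as below; the synthetic data and the particular estimators stand in for the HRS variables and the models tuned in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, ElasticNetCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 9))                 # placeholder social/behavioral predictors
y = 0.5 * X[:, 0] + rng.normal(size=500)      # placeholder outcome (e.g. systolic BP)

models = {
    "linear": LinearRegression(),
    "penalized": ElasticNetCV(cv=5),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "neural_net": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: out-of-sample R^2 = {r2:.2f}")
```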
Simultaneous Co-Clustering and Classification in Customers Insight
NASA Astrophysics Data System (ADS)
Anggistia, M.; Saefuddin, A.; Sartono, B.
2017-04-01
Building a predictive model on a heterogeneous dataset may cause many problems, such as less precise parameter estimates and lower prediction accuracy. Such problems can be addressed by segmenting the data into relatively homogeneous groups and then building a predictive model for each cluster. This strategy usually results in models that are simpler, more interpretable, and more actionable, without any loss in accuracy or reliability. This work concerns a marketing dataset recording customer behaviour across products, with several variables describing the customer and the product serving as attributes. The basic idea of this approach is to combine co-clustering and classification simultaneously. The objective of this research is to analyse customers across product characteristics, so that the marketing strategy can be implemented precisely.
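A simpler two-stage variant of the idea (cluster first, then fit one model per segment) is sketched below to make the strategy concrete; the paper's contribution is to do the co-clustering and classification simultaneously rather than in sequence, and the data here are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))                           # placeholder customer/product attributes
y = (X[:, 0] + rng.normal(size=600) > 0).astype(int)    # placeholder purchase response

# Stage 1: segment customers into relatively homogeneous groups.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Stage 2: one simple, interpretable model per segment.
models = {c: LogisticRegression().fit(X[segments == c], y[segments == c])
          for c in np.unique(segments)}
```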
Sound Asleep: Processing and Retention of Slow Oscillation Phase-Targeted Stimuli
Cox, Roy; Korjoukov, Ilia; de Boer, Marieke; Talamini, Lucia M.
2014-01-01
The sleeping brain retains some residual information processing capacity. Although direct evidence is scarce, a substantial literature suggests the phase of slow oscillations during deep sleep to be an important determinant for stimulus processing. Here, we introduce an algorithm for predicting slow oscillations in real-time. Using this approach to present stimuli directed at both oscillatory up and down states, we show neural stimulus processing depends importantly on the slow oscillation phase. During ensuing wakefulness, however, we did not observe differential brain or behavioral responses to these stimulus categories, suggesting no enduring memories were formed. We speculate that while simpler forms of learning may occur during sleep, neocortically based memories are not readily established during deep sleep. PMID:24999803
Similarity solution of the Boussinesq equation
NASA Astrophysics Data System (ADS)
Lockington, D. A.; Parlange, J.-Y.; Parlange, M. B.; Selker, J.
Similarity transforms of the Boussinesq equation in a semi-infinite medium are available when the boundary conditions are a power of time. The Boussinesq equation is reduced from a partial differential equation to a boundary-value problem. Chen et al. [Transport in Porous Media 1995;18:15-36] use a hodograph method to derive an integral equation formulation of the new differential equation, which they solve by numerical iteration. In the present paper, the convergence of their scheme is improved such that numerical iteration can be avoided for all practical purposes. However, a simpler analytical approach is also presented, based on Shampine's transformation of the boundary value problem to an initial value problem. This analytical approximation is remarkably simple and yet more accurate than the analytical hodograph approximations.
Data Synchronization Discrepancies in a Formation Flight Control System
NASA Technical Reports Server (NTRS)
Ryan, Jack; Hanson, Curtis E.; Norlin, Ken A.; Allen, Michael J.; Schkolnik, Gerard (Technical Monitor)
2001-01-01
Aircraft hardware-in-the-loop simulation is an invaluable tool to flight test engineers; it reveals design and implementation flaws while operating in a controlled environment. Engineers, however, must always be skeptical of the results and analyze them within their proper context. Engineers must carefully ascertain whether an anomaly that occurs in the simulation will also occur in flight. This report presents a chronology illustrating how misleading simulation timing problems led to the implementation of an overly complex position data synchronization guidance algorithm in place of a simpler one. The report illustrates problems caused by the complex algorithm and how the simpler algorithm was chosen in the end. Brief descriptions of the project objectives, approach, and simulation are presented. The misleading simulation results and the conclusions then drawn are presented. The complex and simple guidance algorithms are presented with flight data illustrating their relative success.
Eng, K.; Tasker, Gary D.; Milly, P.C.D.
2005-01-01
Region-of-influence (RoI) approaches for estimating streamflow characteristics at ungaged sites were applied and evaluated in a case study of the 50-year peak discharge in the Gulf-Atlantic Rolling Plains of the southeastern United States. Linear regression against basin characteristics was performed for each ungaged site considered, based on data from a region of influence containing the n closest gages in predictor-variable (PRoI) or geographic (GRoI) space. Augmentation of this count-based cutoff by a distance-based cutoff was also considered. Prediction errors were evaluated for an independent (split-sampled) dataset. For the dataset and metrics considered here: (1) for either PRoI or GRoI, optimal results were found when the simpler count-based cutoff, rather than the distance-augmented cutoff, was used; (2) GRoI produced lower error than PRoI when applied indiscriminately over the entire study region; and (3) PRoI performance improved considerably when RoI was restricted to predefined geographic subregions.
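A count-based RoI estimate in predictor space can be sketched in a few lines: standardize the basin characteristics, take the n nearest gaged basins, and fit a local regression. The choice n = 20, the predictors, and the synthetic data are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def roi_estimate(x_ungaged, X_gaged, y_gaged, n_roi=20):
    """Count-based region-of-influence regression in predictor space."""
    mean, std = X_gaged.mean(axis=0), X_gaged.std(axis=0)
    dist = np.linalg.norm((X_gaged - mean) / std - (x_ungaged - mean) / std, axis=1)
    idx = np.argsort(dist)[:n_roi]                      # the n closest gaged basins
    model = LinearRegression().fit(X_gaged[idx], y_gaged[idx])
    return model.predict(x_ungaged.reshape(1, -1))[0]

# Placeholder data: e.g. log drainage area, log slope, ... vs. log 50-year peak flow.
rng = np.random.default_rng(1)
X_gaged = rng.normal(size=(200, 3))
y_gaged = 2.0 + 0.8 * X_gaged[:, 0] + rng.normal(scale=0.2, size=200)
print(roi_estimate(np.array([0.5, -0.2, 0.1]), X_gaged, y_gaged))
```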
Strategic directions for agent-based modeling: avoiding the YAAWN syndrome
O’Sullivan, David; Evans, Tom; Manson, Steven; Metcalf, Sara; Ligmann-Zielinska, Arika; Bone, Chris
2015-01-01
In this short communication, we examine how agent-based modeling has become common in land change science and is increasingly used to develop case studies for particular times and places. There is a danger that the research community is missing a prime opportunity to learn broader lessons from the use of agent-based modeling (ABM), or at the very least not sharing these lessons more widely. How do we find an appropriate balance between empirically rich, realistic models and simpler theoretically grounded models? What are appropriate and effective approaches to model evaluation in light of uncertainties not only in model parameters but also in model structure? How can we best explore hybrid model structures that enable us to better understand the dynamics of the systems under study, recognizing that no single approach is best suited to this task? Under what circumstances – in terms of model complexity, model evaluation, and model structure – can ABMs be used most effectively to lead to new insight for stakeholders? We explore these questions in the hope of helping the growing community of land change scientists using models in their research to move from ‘yet another model’ to doing better science with models. PMID:27158257
The detection of 4 vital signs of in-patients Using fuzzy database
NASA Astrophysics Data System (ADS)
Haris Rangkuti, A.; Erlisa Rasjid, Zulfany
2014-03-01
To improve the performance of hospital administration and serve patients effectively and efficiently, information technology has become the dominant support, especially when a patient's condition must be reported to a physician as soon as possible and monitored regularly. For this reason it is necessary to have a hospital monitoring information system that can provide information about the patient's condition based on the four vital signs: temperature, blood pressure, pulse, and respiration. To monitor the four vital signs, the concept of fuzzy logic is used: when a vital-sign value approaches 1 the patient is close to recovery, and conversely, when it approaches 0 the patient still has problems. The system also helps nurses answer the relatives of patients who want to know how the patient is progressing, including the recovery percentage based on the average of the fuzzy max values of the four vital signs. Using the fuzzy-based monitoring system, monitoring the patient's condition becomes simpler and easier.
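One way such a fuzzy score might look is sketched below: each vital sign is mapped to a [0, 1] membership (1 = near its normal value) and the memberships are aggregated into a recovery fraction. The triangular membership shape, the normal ranges, and the simple averaging are assumptions for illustration, not the rules used in the paper.

```python
def triangular_membership(value, low, normal, high):
    """Degree (0..1) to which a reading is 'normal': 1 at the normal value,
    falling linearly to 0 at the low/high limits. Ranges are illustrative."""
    if value <= low or value >= high:
        return 0.0
    if value <= normal:
        return (value - low) / (normal - low)
    return (high - value) / (high - normal)

def recovery_score(temp_c, systolic, pulse, respiration):
    signs = [
        triangular_membership(temp_c, 34.0, 36.8, 40.0),
        triangular_membership(systolic, 80.0, 115.0, 180.0),
        triangular_membership(pulse, 40.0, 75.0, 140.0),
        triangular_membership(respiration, 8.0, 16.0, 30.0),
    ]
    return sum(signs) / len(signs)   # aggregated membership as a recovery fraction

print(recovery_score(37.4, 128.0, 82.0, 18.0))
```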
Modular microfluidic systems using reversibly attached PDMS fluid control modules
NASA Astrophysics Data System (ADS)
Skafte-Pedersen, Peder; Sip, Christopher G.; Folch, Albert; Dufva, Martin
2013-05-01
The use of soft lithography-based poly(dimethylsiloxane) (PDMS) valve systems is the dominating approach for high-density microscale fluidic control. Integrated systems enable complex flow control and large-scale integration, but lack modularity. In contrast, modular systems are attractive alternatives to integration because they can be tailored for different applications piecewise and without redesigning every element of the system. We present a method for reversibly coupling hard materials to soft lithography defined systems through self-aligning O-ring features thereby enabling easy interfacing of complex-valve-based systems with simpler detachable units. Using this scheme, we demonstrate the seamless interfacing of a PDMS-based fluid control module with hard polymer chips. In our system, 32 self-aligning O-ring features protruding from the PDMS fluid control module form chip-to-control module interconnections which are sealed by tightening four screws. The interconnection method is robust and supports complex fluidic operations in the reversibly attached passive chip. In addition, we developed a double-sided molding method for fabricating PDMS devices with integrated through-holes. The versatile system facilitates a wide range of applications due to the modular approach, where application specific passive chips can be readily attached to the flow control module.
Synthetic Approaches to the Lamellarins—A Comprehensive Review
Imbri, Dennis; Tauber, Johannes; Opatz, Till
2014-01-01
The present review discusses the known synthetic routes to the lamellarin alkaloids published until 2014. It begins with syntheses of the structurally simpler type-II lamellarins and then focuses on the larger class of the 5,6-saturated and -unsaturated type-I lamellarins. The syntheses are grouped by the strategy employed for the assembly of the central pyrrole ring. PMID:25528958
Karim, Mohammad Ehsanul; Gustafson, Paul; Petkau, John; Tremlett, Helen
2016-01-01
In time-to-event analyses of observational studies of drug effectiveness, incorrect handling of the period between cohort entry and first treatment exposure during follow-up may result in immortal time bias. This bias can be eliminated by acknowledging a change in treatment exposure status with time-dependent analyses, such as fitting a time-dependent Cox model. The prescription time-distribution matching (PTDM) method has been proposed as a simpler approach for controlling immortal time bias. Using simulation studies and theoretical quantification of bias, we compared the performance of the PTDM approach with that of the time-dependent Cox model in the presence of immortal time. Both assessments revealed that the PTDM approach did not adequately address immortal time bias. Based on our simulation results, another recently proposed observational data analysis technique, the sequential Cox approach, was found to be more useful than the PTDM approach (Cox: bias = −0.002, mean squared error = 0.025; PTDM: bias = −1.411, mean squared error = 2.011). We applied these approaches to investigate the association of β-interferon treatment with delaying disability progression in a multiple sclerosis cohort in British Columbia, Canada (Long-Term Benefits and Adverse Effects of Beta-Interferon for Multiple Sclerosis (BeAMS) Study, 1995–2008). PMID:27455963
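To make the immortal time problem concrete, the following simulation (a minimal sketch with assumed exponential event times, uniform prescription times and no true treatment effect; it is not the PTDM or sequential Cox implementation) shows how a naive "ever-treated" classification produces a spurious protective effect that disappears once person-time is split at treatment start.

```python
# Minimal simulation of immortal time bias. All settings are illustrative assumptions:
# exponential event times, uniform treatment-start times, no true treatment effect.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
event_time = rng.exponential(scale=5.0, size=n)   # years to event, no treatment effect
treat_start = rng.uniform(0.0, 4.0, size=n)       # prescription time during follow-up
follow_up = 10.0
t_event = np.minimum(event_time, follow_up)
event = event_time <= follow_up
treated_ever = treat_start < t_event              # only those who survive long enough get "exposed"

def rate(person_years, events):
    return events.sum() / person_years.sum()

# Naive ("immortal time") analysis: all follow-up of ever-treated counted as exposed.
naive_exposed = rate(t_event[treated_ever], event[treated_ever])
naive_unexposed = rate(t_event[~treated_ever], event[~treated_ever])

# Time-dependent analysis: split person-time at treatment start.
exposed_py = np.clip(t_event - treat_start, 0.0, None)
unexposed_py = np.minimum(t_event, treat_start)
td_exposed = (event & treated_ever).sum() / exposed_py.sum()
td_unexposed = (event & ~treated_ever).sum() / unexposed_py.sum()

print("naive rate ratio:", naive_exposed / naive_unexposed)      # spuriously < 1
print("time-dependent rate ratio:", td_exposed / td_unexposed)   # close to 1
```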
From Awareness to Action: Determining the climate sensitivities that influence decision makers
NASA Astrophysics Data System (ADS)
Brown, C.
2017-12-01
Through the growth of computing power and analytical methods, a range of valuable and innovative tools allow the exhaustive exploration of a water system's response to a limitless set of scenarios. Similarly, possible adaptive actions can be evaluated across this broad set of possible futures. Finally, an ever increasing set of performance indicators is available to judge the relative value of a particular action over others. However, it's unclear whether this is improving the flow of actionable information or further cluttering it. This presentation will share lessons learned and other intuitions from a set of experiences engaging with public and private water managers and investors in the use of robustness-based climate vulnerability and adaptation analysis. Based on this background, a case for reductionism and focus on financial vulnerability will be forwarded. In addition, considerations for simpler, practical approaches for smaller water utilities will be discussed.
NASA Technical Reports Server (NTRS)
Haimes, Robert; Follen, Gregory J.
1998-01-01
CAPRI is a CAD-vendor neutral application programming interface designed for the construction of analysis and design systems. By allowing access to the geometry from within all modules (grid generators, solvers and post-processors) such tasks as meshing on the actual surfaces, node enrichment by solvers and defining which mesh faces are boundaries (for the solver and visualization system) become simpler. The overall reliance on file 'standards' is minimized. This 'Geometry Centric' approach makes multi-physics (multi-disciplinary) analysis codes much easier to build. By using the shared (coupled) surface as the foundation, CAPRI provides a single call to interpolate grid-node based data from the surface discretization in one volume to another. Finally, design systems are possible where the results can be brought back into the CAD system (and therefore manufactured) because all geometry construction and modification are performed using the CAD system's geometry kernel.
Schnurr, Paula P; Bryant, Richard; Berliner, Lucy; Kilpatrick, Dean G; Rizzo, Albert; Ruzek, Josef I
2017-01-01
Background: This paper is based on a panel discussion at the 32nd annual meeting of the International Society for Traumatic Stress Studies in Dallas, Texas, in November 2016. Objective: Paula Schnurr convened a panel of experts in the fields of public health and technology to address the topic: 'What I have changed my mind about and why.' Method: The panel included Richard Bryant, Lucy Berliner, Dean Kilpatrick, Albert ('Skip') Rizzo, and Josef Ruzek. Results: Panellists discussed innovative strategies for the dissemination of scientific knowledge and evidence-based treatment. Conclusions: Although there are effective treatments, there is a need to enhance the effectiveness of these treatments. There also is a need to develop simpler, low-cost strategies to disseminate effective treatments. However, technology approaches also offer pathways to increased dissemination. Researchers must communicate scientific findings more effectively to impact public opinion and public policy.
Xu, Xiaoli; Tang, LiLing
2017-01-01
The living environment of cancer cells is complicated and information-rich. Thus, a traditional 2D culture model in vitro cannot mimic the microenvironment of cancer cells exactly. Currently, bioengineered 3D scaffolds have been developed which can better simulate the microenvironment of tumors and fill the gap between 2D culture and clinical application. In this review, we discuss the scaffold materials used for fabrication techniques, biological behaviors of cancer cells in 3D scaffolds and the scaffold-based drug screening. A major emphasis is placed on the description of scaffold-based epithelial to mesenchymal transition and drug screening in 3D culture. By overcoming the defects of traditional 2D culture, 3D scaffold culture can provide a simpler, safer and more reliable approach for cancer research. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Luminescent Solar Concentrator Daylighting
NASA Astrophysics Data System (ADS)
Bornstein, Jonathan G.
1984-11-01
Various systems that offer potential solutions to the problem of interior daylighting have been discussed in the literature. Virtually all of these systems rely on some method of tracking the sun along its azimuth and elevation, i.e., direct imaging of the solar disk. A simpler approach, however, involves a nontracking, nonimaging device that effectively eliminates moving parts and accepts both the diffuse and direct components of solar radiation. Such an approach is based on a system that combines in a common luminaire the light emitted by luminescent solar concentrators (LSC), of the three primary colors, with a highly efficient artificial point source (HID metal halide) that automatically compensates for fluctuations in the LSC array via a daylight sensor and dimming ballast. A preliminary analysis suggests that this system could supply 90% of the lighting requirement, over the course of an 8-hour day, strictly from the daylight component under typical insolation conditions in the Southwest United States. In office buildings alone, the total aggregate energy savings may approach half a quad annually. This indicates a very good potential for the realization of substantial savings in building electric energy consumption.
Application of geometric algebra for the description of polymer conformations.
Chys, Pieter
2008-03-14
In this paper a Clifford algebra-based method is applied to calculate polymer chain conformations. The approach enables the calculation of the position of an atom in space with the knowledge of the bond length (l), valence angle (theta), and rotation angle (phi) of each of the preceding bonds in the chain. Hence, the set of geometrical parameters {l(i),theta(i),phi(i)} yields all the position coordinates p(i) of the main chain atoms. Moreover, the method allows the calculation of side chain conformations and the computation of rotations of chain segments. With these features it is, in principle, possible to generate conformations of any type of chemical structure. This method is proposed as an alternative for the classical approach by matrix algebra. It is more straightforward and its final symbolic representation considerably simpler than that of matrix algebra. Approaches for realistic modeling by means of incorporation of energetic considerations can be combined with it. This article, however, is entirely focused at showing the suitable mathematical framework on which further developments and applications can be built.
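For comparison, the classical matrix-algebra construction that the paper proposes an alternative to can be sketched as follows: each new atom is placed from the three preceding atoms using the bond length l, valence angle theta and dihedral angle phi. The geometry values below are arbitrary, and the Clifford-algebra formulation itself is not reproduced here.

```python
# Classical placement of chain atoms from internal coordinates (l, theta, phi),
# shown for contrast with the paper's Clifford-algebra formulation.
import numpy as np

def next_atom(p1, p2, p3, l, theta, phi):
    """Place atom 4 given the three preceding atoms and (l, theta, phi)."""
    b1 = p2 - p1
    b2 = p3 - p2
    b2 = b2 / np.linalg.norm(b2)
    n = np.cross(b1, b2)
    n = n / np.linalg.norm(n)
    m = np.cross(n, b2)
    # Local displacement expressed in the orthonormal frame (b2, m, n).
    d = np.array([-l * np.cos(theta),
                  l * np.sin(theta) * np.cos(phi),
                  l * np.sin(theta) * np.sin(phi)])
    return p3 + d[0] * b2 + d[1] * m + d[2] * n

# Example: build a short chain with fixed geometry (values are arbitrary).
atoms = [np.array([0.0, 0.0, 0.0]),
         np.array([1.5, 0.0, 0.0]),
         np.array([2.0, 1.4, 0.0])]
for phi in np.deg2rad([180.0, 60.0, -60.0]):
    atoms.append(next_atom(*atoms[-3:], l=1.53, theta=np.deg2rad(111.0), phi=phi))
print(np.round(np.array(atoms), 3))
```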
Predicting Deforestation Patterns in Loreto, Peru from 2000-2010 Using a Nested GLM Approach
NASA Astrophysics Data System (ADS)
Vijay, V.; Jenkins, C.; Finer, M.; Pimm, S.
2013-12-01
Loreto is the largest province in Peru, covering about 370,000 km2. Because of its remote location in the Amazonian rainforest, it is also one of the most sparsely populated. Though a majority of the region remains covered by forest, deforestation is being driven by human encroachment through industrial activities and the spread of colonization and agriculture. The importance of accurate predictive modeling of deforestation has spawned an extensive body of literature on the topic. We present a nested GLM approach based on predictions of deforestation from 2000-2010 and using variables representing the expected drivers of deforestation. Models were constructed using 2000 to 2005 changes and tested against data for 2005 to 2010. The most complex model, which included transportation variables (roads and navigable rivers), spatial contagion processes, population centers and industrial activities, performed better in predicting the 2005 to 2010 changes (75.8% accurate) than did a simpler model using only transportation variables (69.2% accurate). Finally we contrast the GLM approach with a more complex spatially articulated model.
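The nested-model comparison can be illustrated schematically with a logistic-regression GLM: fit a transportation-only model and a fuller model on one period of change and score both on the next. The synthetic data, covariate names and coefficients below are placeholders, not the Loreto data set.

```python
# Sketch of the nested-model comparison: a transportation-only GLM versus a fuller GLM,
# trained on 2000-2005 change and scored on 2005-2010 change. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.exponential(5.0, n),    # distance to road (km)
    rng.exponential(3.0, n),    # distance to navigable river (km)
    rng.exponential(10.0, n),   # distance to population centre (km)
    rng.uniform(0, 1, n),       # neighbourhood deforestation fraction (contagion)
])
logit = 1.5 - 0.4 * X[:, 0] - 0.2 * X[:, 1] - 0.1 * X[:, 2] + 2.0 * X[:, 3]
y_train = rng.random(n) < 1 / (1 + np.exp(-logit))    # 2000-2005 change
y_test = rng.random(n) < 1 / (1 + np.exp(-logit))     # 2005-2010 change

transport_only = LogisticRegression(max_iter=1000).fit(X[:, :2], y_train)
full_model = LogisticRegression(max_iter=1000).fit(X, y_train)

print("transport-only accuracy:", accuracy_score(y_test, transport_only.predict(X[:, :2])))
print("full model accuracy:   ", accuracy_score(y_test, full_model.predict(X)))
```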
Topological strings on singular elliptic Calabi-Yau 3-folds and minimal 6d SCFTs
NASA Astrophysics Data System (ADS)
Del Zotto, Michele; Gu, Jie; Huang, Min-xin; Kashani-Poor, Amir-Kian; Klemm, Albrecht; Lockhart, Guglielmo
2018-03-01
We apply the modular approach to computing the topological string partition function on non-compact elliptically fibered Calabi-Yau 3-folds with higher Kodaira singularities in the fiber. The approach consists in making an ansatz for the partition function at given base degree, exact in all fiber classes to arbitrary order and to all genus, in terms of a rational function of weak Jacobi forms. Our results yield, at given base degree, the elliptic genus of the corresponding non-critical 6d string, and thus the associated BPS invariants of the 6d theory. The required elliptic indices are determined from the chiral anomaly 4-form of the 2d worldsheet theories, or the 8-form of the corresponding 6d theories, and completely fix the holomorphic anomaly equation constraining the partition function. We introduce subrings of the known rings of Weyl invariant Jacobi forms which are adapted to the additional symmetries of the partition function, making its computation feasible to low base wrapping number. In contradistinction to the case of simpler singularities, generic vanishing conditions on BPS numbers are no longer sufficient to fix the modular ansatz at arbitrary base wrapping degree. We show that to low degree, imposing exact vanishing conditions does suffice, and conjecture this to be the case generally.
Image Description with Local Patterns: An Application to Face Recognition
NASA Astrophysics Data System (ADS)
Zhou, Wei; Ahrary, Alireza; Kamata, Sei-Ichiro
In this paper, we propose a novel approach for representing the local features of a digital image using 1D Local Patterns by Multi-Scans (1DLPMS). We also consider extensions and simplifications of the proposed approach for facial image analysis. The proposed approach consists of three steps. In the first step, the gray values of pixels in the image are represented as a vector giving the local neighborhood intensity distributions of the pixels. Then, multi-scans are applied to capture different spatial information on the image, with the advantage of less computation than other traditional methods, such as Local Binary Patterns (LBP). The second step is encoding the local features based on different encoding rules using 1D local patterns. This transformation is expected to be less sensitive to illumination variations besides preserving the appearance of images embedded in the original gray scale. In the final step, Grouped 1D Local Patterns by Multi-Scans (G1DLPMS) is applied to make the proposed approach computationally simpler and easier to extend. Next, we further formulate a boosted algorithm to extract the most discriminant local features. The evaluation results demonstrate that the proposed approach outperforms conventional approaches in terms of accuracy in applications of face recognition, gender estimation and facial expression recognition.
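Since the abstract does not give the exact 1DLPMS encoding rules, the sketch below only conveys the general idea with a generic LBP-style binary code computed along a single 1D scan of the image; the scan orderings, neighbourhood size and grouping used in the paper are not reproduced.

```python
# Generic LBP-style code along a 1D scan line, for intuition only; the paper's
# actual 1DLPMS encoding rules and multi-scan orderings are not reproduced here.
import numpy as np

def lbp_1d(scan, radius=2):
    """Binary-threshold each pixel's 1D neighbourhood against the centre pixel."""
    codes = np.zeros(len(scan), dtype=np.uint8)
    for i in range(radius, len(scan) - radius):
        neighbours = np.concatenate([scan[i - radius:i], scan[i + 1:i + 1 + radius]])
        bits = (neighbours >= scan[i]).astype(np.uint8)
        codes[i] = np.sum(bits << np.arange(bits.size, dtype=np.uint8))
    return codes

image = np.random.default_rng(0).integers(0, 256, size=(8, 8))
row_scan = image.reshape(-1)                          # one of several possible scan orders
hist = np.bincount(lbp_1d(row_scan), minlength=16)    # 2*radius bits -> 16 possible codes
print(hist)
```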
New techniques for positron emission tomography in the study of human neurological disorders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, D.E.
1993-01-01
This progress report describes accomplishments of four programs. The four programs are entitled (1) Faster, simpler processing of positron-computing precursors: New physicochemical approaches, (2) Novel solid phase reagents and methods to improve radiosynthesis and isotope production, (3) Quantitative evaluation of the extraction of information from PET images, and (4) Optimization of tracer kinetic methods for radioligand studies in PET.
Phenomenological Approach to Training
1977-08-01
NASA Astrophysics Data System (ADS)
Song, H. S.; Li, M.; Qian, W.; Song, X.; Chen, X.; Scheibe, T. D.; Fredrickson, J.; Zachara, J. M.; Liu, C.
2016-12-01
Modeling environmental microbial communities at individual organism level is currently intractable due to overwhelming structural complexity. Functional guild-based approaches alleviate this problem by lumping microorganisms into fewer groups based on their functional similarities. This reduction may become ineffective, however, when individual species perform multiple functions as environmental conditions vary. In contrast, the functional enzyme-based modeling approach we present here describes microbial community dynamics based on identified functional enzymes (rather than individual species or their groups). Previous studies in the literature along this line used biomass or functional genes as surrogate measures of enzymes due to the lack of analytical methods for quantifying enzymes in environmental samples. Leveraging our recent development of a signature peptide-based technique enabling sensitive quantification of functional enzymes in environmental samples, we developed a genetically structured microbial community model (GSMCM) to incorporate enzyme concentrations and various other omics measurements (if available) as key modeling input. We formulated the GSMCM based on the cybernetic metabolic modeling framework to rationally account for cellular regulation without relying on empirical inhibition kinetics. In the case study of modeling denitrification process in Columbia River hyporheic zone sediments collected from the Hanford Reach, our GSMCM provided a quantitative fit to complex experimental data in denitrification, including the delayed response of enzyme activation to the change in substrate concentration. Our future goal is to extend the modeling scope to the prediction of carbon and nitrogen cycles and contaminant fate. Integration of a simpler version of the GSMCM with PFLOTRAN for multi-scale field simulations is in progress.
Okada, Morihiro; Miller, Thomas C; Roediger, Julia; Shi, Yun-Bo; Schech, Joseph Mat
2017-09-01
Various animal models are indispensable in biomedical research. Increasing awareness and regulations have prompted the adaptation of more humane approaches in the use of laboratory animals. With the development of easier and faster methodologies to generate genetically altered animals, convenient and humane methods to genotype these animals are important for research involving such animals. Here, we report skin swabbing as a simple and noninvasive method for extracting genomic DNA from mice and frogs for genotyping. We show that this method is highly reliable and suitable for both immature and adult animals. Our method allows a simpler and more humane approach for genotyping vertebrate animals.
Equation-based languages – A new paradigm for building energy modeling, simulation and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.
Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200 times faster solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gyüre, B.; Márkus, B. G.; Bernáth, B.
2015-09-15
We present a novel method to determine the resonant frequency and quality factor of microwave resonators which is faster, more stable, and conceptually simpler than existing techniques. The microwave resonator is pumped with microwave radiation at a frequency away from its resonance. It then emits an exponentially decaying radiation at its eigen-frequency when the excitation is rapidly switched off. The emitted microwave signal is down-converted with a microwave mixer, digitized, and its Fourier transformation (FT) directly yields the resonance curve in a single shot. Being an FT-based method, this technique possesses the Fellgett (multiplex) and Connes (accuracy) advantages and it conceptually mimics that of pulsed nuclear magnetic resonance. We also establish a novel benchmark to compare the accuracy of the different approaches of microwave resonator measurements. This shows that the present method has similar accuracy to the existing ones, which are based on sweeping or modulating the frequency of the microwave radiation.
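The essence of the method can be mimicked numerically: simulate the exponentially decaying ring-down after switch-off, take a single FFT, and read the resonant frequency and quality factor from the peak position and linewidth. The sampling rate, frequency and Q below are arbitrary illustration values, not the instrument's.

```python
# Illustrative sketch: a switched-off resonator rings down as exp(-t/tau)*cos(2*pi*f0*t);
# one FFT of the digitized decay gives a Lorentzian whose centre and width yield f0 and Q.
import numpy as np

f0, Q, fs, T = 1.0e6, 500.0, 20.0e6, 10.0e-3       # resonance after mixing, quality factor, sampling rate, record length
tau = Q / (np.pi * f0)                             # amplitude ring-down time constant
t = np.arange(0.0, T, 1.0 / fs)
signal = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)

peak = np.argmax(spectrum)
half = spectrum[peak] / 2.0
above = np.where(spectrum >= half)[0]
fwhm = freqs[above[-1]] - freqs[above[0]]          # full width at half maximum of the Lorentzian
print(f"f0 ~ {freqs[peak]:.4g} Hz, Q ~ {freqs[peak] / fwhm:.0f}")
```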
YAPPA: a Compiler-Based Parallelization Framework for Irregular Applications on MPSoCs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lovergine, Silvia; Tumeo, Antonino; Villa, Oreste
Modern embedded systems include hundreds of cores. Because of the difficulty in providing a fast, coherent memory architecture, these systems usually rely on non-coherent, non-uniform memory architectures with private memories for each core. However, programming these systems poses significant challenges. The developer must extract large amounts of parallelism, while orchestrating communication among cores to optimize application performance. These issues become even more significant with irregular applications, which present data sets that are difficult to partition, unpredictable memory accesses, unbalanced control flow and fine-grained communication. Hand-optimizing every single aspect is hard and time-consuming, and it often does not lead to the expected performance. There is a growing gap between such complex and highly parallel architectures and the high-level languages used to describe the specification, which were designed for simpler systems and do not consider these new issues. In this paper we introduce YAPPA (Yet Another Parallel Programming Approach), a compilation framework for the automatic parallelization of irregular applications on modern MPSoCs based on LLVM. We start by considering an efficient parallel programming approach for irregular applications on distributed memory systems. We then propose a set of transformations that can reduce the development and optimization effort. The results of our initial prototype confirm the correctness of the proposed approach.
Hardware Implementation of a MIMO Decoder Using Matrix Factorization Based Channel Estimation
NASA Astrophysics Data System (ADS)
Islam, Mohammad Tariqul; Numan, Mostafa Wasiuddin; Misran, Norbahiah; Ali, Mohd Alauddin Mohd; Singh, Mandeep
2011-05-01
This paper presents an efficient hardware realization of a multiple-input multiple-output (MIMO) wireless communication decoder that utilizes the available resources by adopting the technique of parallelism. The hardware is designed and implemented on a Xilinx Virtex™-4 XC4VLX60 field programmable gate array (FPGA) device in a modular approach which simplifies and eases hardware updates, and facilitates testing of the various modules independently. The decoder involves a proficient channel estimation module that employs matrix factorization on least squares (LS) estimation to reduce a full-rank matrix into a simpler form in order to eliminate matrix inversion. This results in performance improvement and complexity reduction of the MIMO system. Performance evaluation of the proposed method is validated through MATLAB simulations, which indicate a 2 dB improvement in terms of SNR compared to LS estimation. Moreover, a complexity comparison is performed in terms of mathematical operations, which shows that the proposed approach appreciably outperforms LS estimation at a lower complexity and represents a good solution for channel estimation.
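The idea of avoiding an explicit matrix inverse in least-squares channel estimation can be shown with a generic numpy sketch: the textbook estimate H = Y X^H (X X^H)^-1 is replaced by a QR factorization and a triangular solve. The dimensions, pilot design and noise level are assumptions, and this is not the paper's FPGA factorization.

```python
# Generic illustration: LS MIMO channel estimation H_hat = Y X^H (X X^H)^-1 rewritten
# as a QR-based least-squares solve, avoiding the explicit matrix inverse.
import numpy as np

rng = np.random.default_rng(0)
nt, nr, L = 4, 4, 16                               # tx antennas, rx antennas, pilot length
X = (rng.standard_normal((nt, L)) + 1j * rng.standard_normal((nt, L))) / np.sqrt(2)
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
N = 0.05 * (rng.standard_normal((nr, L)) + 1j * rng.standard_normal((nr, L)))
Y = H @ X + N

# Textbook LS with an explicit inverse.
H_inv = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

# Inversion-free alternative: QR-factorize X^T and back-substitute.
Q, R = np.linalg.qr(X.T)                           # X.T = Q R, Q has orthonormal columns
H_qr = np.linalg.solve(R, Q.conj().T @ Y.T).T      # solves X^T H^T = Y^T in the LS sense

print(np.allclose(H_inv, H_qr))
print("estimation error:", np.linalg.norm(H_qr - H) / np.linalg.norm(H))
```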
A symbiotic approach to fluid equations and non-linear flux-driven simulations of plasma dynamics
NASA Astrophysics Data System (ADS)
Halpern, Federico
2017-10-01
The fluid framework is ubiquitous in studies of plasma transport and stability. Typical forms of the fluid equations are motivated by analytical work dating several decades ago, before computer simulations were indispensable, and can be, therefore, not optimal for numerical computation. We demonstrate a new first-principles approach to obtaining manifestly consistent, skew-symmetric fluid models, ensuring internal consistency and conservation properties even in discrete form. Mass, kinetic, and internal energy become quadratic (and always positive) invariants of the system. The model lends itself to a robust, straightforward discretization scheme with inherent non-linear stability. A simpler, drift-ordered form of the equations is obtained, and first results of their numerical implementation as a binary framework for bulk-fluid global plasma simulations are demonstrated. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences, Theory Program, under Award No. DE-FG02-95ER54309.
Connector For Embedded Optical Fiber
NASA Technical Reports Server (NTRS)
Wilkerson, Charles; Hiles, Steven; Houghton, J. Richard; Holland, Brent W.
1994-01-01
Partly embedded fixture is simpler and sturdier than other types of outlets for optical fibers embedded in solid structures. No need to align coupling prism and lenses. Fixture includes base, tube bent at 45 degree angle, and ceramic ferrule.
Nonlocality without counterfactual reasoning
NASA Astrophysics Data System (ADS)
Wolf, Stefan
2015-11-01
Nonlocal correlations are usually understood through the outcomes of alternative measurements (on two or more parts of a system) that cannot altogether actually be carried out in an experiment. Indeed, a joint input-output — e.g., measurement-setting-outcome — behavior is nonlocal if and only if the outputs for all possible inputs cannot coexist consistently. It has been argued that this counterfactual view is how Bell's inequalities and their violations are to be seen. I propose an alternative perspective which refrains from setting into relation the results of mutually exclusive measurements, but that is based solely on data actually available. My approach uses algorithmic complexity instead of probability and randomness, and implies that nonlocality has consequences similar to those in the probabilistic view. Our view is conceptually simpler than the traditional reasoning.
NASA Astrophysics Data System (ADS)
Samborski, Sylwester; Valvo, Paolo S.
2018-01-01
The paper deals with the numerical and analytical modelling of the end-loaded split test for multi-directional laminates affected by the typical elastic couplings. Numerical analysis of three-dimensional finite element models was performed with the Abaqus software exploiting the virtual crack closure technique (VCCT). The results show possible asymmetries in the widthwise deflections of the specimen, as well as in the strain energy release rate (SERR) distributions along the delamination front. Analytical modelling based on a beam-theory approach was also conducted in simpler cases, where only bending-extension coupling is present, but no out-of-plane effects. The analytical results matched the numerical ones, thus demonstrating that the analytical models are feasible for test design and experimental data reduction.
On equivalent resistance of electrical circuits
NASA Astrophysics Data System (ADS)
Kagan, Mikhail
2015-01-01
While the standard (introductory physics) way of computing the equivalent resistance of nontrivial electrical circuits is based on Kirchhoff's rules, there is a mathematically and conceptually simpler approach, called the method of nodal potentials, whose basic variables are the values of the electric potential at the circuit's nodes. In this paper, we review the method of nodal potentials and illustrate it using the Wheatstone bridge as an example. We then derive a closed-form expression for the equivalent resistance of a generic circuit, which we apply to a few sample circuits. The result unveils a curious interplay between electrical circuits, matrix algebra, and graph theory and its applications to computer science. The paper is written at a level accessible by undergraduate students who are familiar with matrix arithmetic. Additional proofs and technical details are provided in appendices.
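The nodal-potential idea extends naturally to the graph-theoretic formulation hinted at above: the equivalent resistance between two nodes can be read off from the pseudoinverse of the weighted graph Laplacian. The sketch below applies this textbook formula to a Wheatstone bridge; it is not the closed-form expression derived in the paper.

```python
# Standard nodal-potential computation of equivalent resistance via the weighted
# graph Laplacian: R_ab = (e_a - e_b)^T L^+ (e_a - e_b). Textbook formulation only.
import numpy as np

def equivalent_resistance(n_nodes, edges, a, b):
    """edges: list of (node_i, node_j, resistance). Nodes are 0..n_nodes-1."""
    L = np.zeros((n_nodes, n_nodes))
    for i, j, r in edges:
        g = 1.0 / r                      # conductance of this branch
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    e = np.zeros(n_nodes)
    e[a], e[b] = 1.0, -1.0
    return e @ np.linalg.pinv(L) @ e

# Wheatstone bridge: nodes 0 (in), 1, 2, 3 (out); bridge resistor between 1 and 2.
bridge = [(0, 1, 100.0), (0, 2, 200.0), (1, 3, 200.0), (2, 3, 100.0), (1, 2, 50.0)]
print(equivalent_resistance(4, bridge, 0, 3))    # equivalent resistance between nodes 0 and 3
```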
Evolutionary Optimization of Yagi-Uda Antennas
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Kraus, William F.; Linden, Derek S.; Colombano, Silvano P.
2001-01-01
Yagi-Uda antennas are known to be difficult to design and optimize due to their sensitivity at high gain, and the inclusion of numerous parasitic elements. We present a genetic algorithm-based automated antenna optimization system that uses a fixed Yagi-Uda topology and a byte-encoded antenna representation. The fitness calculation allows the implicit relationship between power gain and sidelobe/backlobe loss to emerge naturally, a technique that is less complex than previous approaches. The genetic operators used are also simpler. Our results include Yagi-Uda antennas that have excellent bandwidth and gain properties with very good impedance characteristics. Results exceeded previous Yagi-Uda antennas produced via evolutionary algorithms by at least 7.8% in mainlobe gain. We also present encouraging preliminary results where a coevolutionary genetic algorithm is used.
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Giglio, Louis
1994-01-01
A multi-channel physical approach for retrieving rainfall and its vertical structure from Special Sensor Microwave/Imager (SSM/I) observations is examined. While a companion paper was devoted exclusively to the description of the algorithm, its strengths, and its limitations, the main focus of this paper is to report on the results, applicability, and expected accuracies from this algorithm. Some examples are given that compare retrieved results with ground-based radar data from different geographical regions to illustrate the performance and utility of the algorithm under distinct rainfall conditions. More quantitative validation is accomplished using two months of radar data from Darwin, Australia, and the radar network over Japan. Instantaneous comparisons at Darwin indicate that root-mean-square errors for 1.25 deg areas over water are 0.09 mm/h compared to the mean rainfall value of 0.224 mm/h, while the correlation exceeds 0.9. Similar results are obtained over the Japanese validation site with rms errors of 0.615 mm/h compared to the mean of 0.0880 mm/h and a correlation of 0.9. Results are less encouraging over land, with root-mean-square errors somewhat larger than the mean rain rates and correlations of only 0.71 and 0.62 for Darwin and Japan, respectively. These validation studies are further used in combination with the theoretical treatment of expected accuracies developed in the companion paper to define error estimates on a broader scale than individual radar sites from which the errors may be analyzed. Comparisons with simpler techniques that are based on either emission or scattering measurements are used to illustrate the fact that the current algorithm, while better correlated with the emission methods over water, cannot be reduced to either of these simpler methods.
Electron Beam Freeform Fabrication: A Rapid Metal Deposition Process
NASA Technical Reports Server (NTRS)
Taminger, Karen M. B.; Hafley, Robert A.
2003-01-01
Manufacturing of structural metal parts directly from computer aided design (CAD) data has been investigated by numerous researchers over the past decade. Researchers at NASA Langley Research Center are developing a new solid freeform fabrication process, electron beam freeform fabrication (EBF), as a rapid metal deposition process that works efficiently with a variety of weldable alloys. The EBF process introduces metal wire feedstock into a molten pool that is created and sustained using a focused electron beam in a vacuum environment. Thus far, this technique has been demonstrated on aluminum and titanium alloys of interest for aerospace structural applications; nickel- and ferrous-based alloys are also planned. Deposits resulting from 2219 aluminum demonstrations have exhibited a range of grain morphologies depending upon the deposition parameters. These materials have exhibited excellent tensile properties comparable to typical handbook data for wrought plate product after post-processing heat treatments. The EBF process is capable of bulk metal deposition at deposition rates in excess of 2500 cubic centimeters per hour (150 cubic inches per hour) or finer detail at lower deposition rates, depending upon the desired application. This process offers the potential for rapidly adding structural details to simpler cast or forged structures rather than the conventional approach of machining large volumes of chips to produce a monolithic metallic structure. Selective addition of metal onto simpler blanks of material can have a significant effect on lead time reduction and lower material and machining costs.
Rebreathed air as a reference for breath-alcohol testers
DOT National Transportation Integrated Search
1975-01-01
A technique has been devised for a reference measurement of the performance of breath-alcohol measuring instruments directly from the respiratory system. It is shown that this technique is superior to, and simpler than, comparison measurements based on bl...
Prenuclear-age leaders and the nuclear arms race.
Frank, Jerome D
1982-10-01
Nuclear arms are a phenomenon with no historical precedent, yet people--and their national leaders--confront the prospect of nuclear war with psychological attitudes from an earlier, simpler time. This paper considers the meaning of our image of the "enemy," analyzes the appropriateness and effectiveness of a policy of deterrence, and considers approaches to doing away with war and to easing international antagonisms through the pursuit of mutually beneficial goals.
ERIC Educational Resources Information Center
Bardell, Nicholas S.
2014-01-01
This paper describes how a simple application of de Moivre's theorem may be used to not only find the roots of a quadratic equation with real or generally complex coefficients but also to pinpoint their location in the Argand plane. This approach is much simpler than the comprehensive analysis presented by Bardell (2012, 2014), but it does not…
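The underlying idea can be stated generically (this is an illustration of the technique, not a reproduction of the paper's analysis): write the discriminant of the quadratic in polar form and extract its square roots with de Moivre's theorem, which immediately gives the modulus and argument of each root and hence its location in the Argand plane.

```latex
\[
  a z^2 + b z + c = 0, \qquad
  z = \frac{-b \pm \sqrt{\Delta}}{2a}, \qquad
  \Delta = b^2 - 4ac = r\left(\cos\varphi + i\sin\varphi\right),
\]
\[
  \sqrt{\Delta} = \sqrt{r}\left(\cos\tfrac{\varphi}{2} + i\sin\tfrac{\varphi}{2}\right),
  \qquad
  \text{with the second square root obtained from } \varphi \to \varphi + 2\pi .
\]
```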
Test Protocol for Room-to-Room Distribution of Outside Air by Residential Ventilation Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barley, C. D.; Anderson, R.; Hendron, B.
2007-12-01
This test and analysis protocol has been developed as a practical approach for measuring outside air distribution in homes. It has been used successfully in field tests and has led to significant insights on ventilation design issues. Performance advantages of more sophisticated ventilation systems over simpler, less-costly designs have been verified, and specific problems, such as airflow short-circuiting, have been identified.
SweepSAR: Beam-forming on Receive Using a Reflector-Phased Array Feed Combination for Spaceborne SAR
NASA Technical Reports Server (NTRS)
Freeman, A.; Krieger, G.; Rosen, P.; Younis, M.; Johnson, W. T. K.; Huber, S.; Jordan, R.; Moreira, A.
2012-01-01
In this paper, an alternative approach is described that is suited for longer wavelength SARs in particular, employing a large, deployable reflector antenna and a much simpler phased array feed. To illuminate a wide swath, a substantial fraction of the phased array feed is excited on transmit to sub-illuminate the reflector. Shorter transmit pulses are required than for conventional SAR. On receive, a much smaller portion of the phased array feed is used to collect the return echo, so that a greater portion of the reflector antenna area is used. The locus of the portion of the phased array used on receive is adjusted using an analog beam steering network, to 'sweep' the receive beam(s) across the illuminated swath, tracking the return echo. This is similar in some respects to the whiskbroom approach used in optical sensors, hence the name: SweepSAR. SweepSAR has advantages over conventional SAR in that it requires less transmit power, and if the receive beam is narrow enough, it is relatively immune to range ambiguities. Compared to direct radiating arrays with digital beam-forming, it is much simpler to implement, uses currently available technologies, is better suited for longer wavelength systems, and does not require extremely high data rates or onboard processing.
NASA Astrophysics Data System (ADS)
Nguyen, Tien Long; Sansour, Carlo; Hjiaj, Mohammed
2017-05-01
In this paper, an energy-momentum method for geometrically exact Timoshenko-type beams is proposed. The classical time integration schemes in dynamics are known to exhibit instability in the non-linear regime. The so-called Timoshenko-type beam, with its use of a rotational degree of freedom, leads to simpler strain relations and simpler expressions of the inertial terms as compared to the well-known Bernoulli-type model. The treatment of the Bernoulli model has recently been addressed by the authors. In the present work, we extend our approach of using the strain rates to define the strain fields to in-plane geometrically exact Timoshenko-type beams. The large rotational degrees of freedom are exactly computed. The well-known enhanced strain method is used to avoid locking phenomena. Conservation of energy, momentum and angular momentum is proved formally and numerically. The excellent performance of the formulation will be demonstrated through a range of examples.
Focusing light inside dynamic scattering media with millisecond digital optical phase conjugation
Liu, Yan; Ma, Cheng; Shen, Yuecheng; Shi, Junhui; Wang, Lihong V.
2017-01-01
Wavefront shaping based on digital optical phase conjugation (DOPC) focuses light through or inside scattering media, but the low speed of DOPC prevents it from being applied to thick, living biological tissue. Although a fast DOPC approach was recently developed, the reported single-shot wavefront measurement method does not work when the goal is to focus light inside, instead of through, highly scattering media. Here, using a ferroelectric liquid crystal based spatial light modulator, we develop a simpler but faster DOPC system that focuses light not only through, but also inside scattering media. By controlling 2.6 × 10^5 optical degrees of freedom, our system focused light through 3 mm thick moving chicken tissue, with a system latency of 3.0 ms. Using ultrasound-guided DOPC, along with a binary wavefront measurement method, our system focused light inside a scattering medium comprising moving tissue with a latency of 6.0 ms, which is one to two orders of magnitude shorter than those of previous digital wavefront shaping systems. Since the demonstrated speed approaches tissue decorrelation rates, this work is an important step toward in vivo deep-tissue non-invasive optical imaging, manipulation, and therapy. PMID:28815194
Arens-Volland, Andreas G; Spassova, Lübomira; Bohn, Torsten
2015-12-01
The aim of this review was to analyze computer-based tools for dietary management (including web-based and mobile devices) from both scientific and applied perspectives, presenting advantages and disadvantages as well as the state of validation. For this cross-sectional analysis, scientific results from 41 articles retrieved via a medline search as well as 29 applications from online markets were identified and analyzed. Results show that many approaches computerize well-established existing nutritional concepts for dietary assessment, e.g., food frequency questionnaires (FFQ) or dietary recalls (DR). Both food records and barcode scanning are less prominent in research but are frequently offered by commercial applications. Integration with a personal health record (PHR) or a health care workflow is suggested in the literature but is rarely found in mobile applications. It is expected that food records will be increasingly employed for dietary assessment in research settings when simpler interfaces, e.g., barcode scanning techniques, and comprehensive food databases are applied, which can also support user adherence to dietary interventions and follow-up phases of nutritional studies. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Outerbridge, Gregory John, II
Pose estimation techniques have been developed on both optical and digital correlator platforms to aid in the autonomous rendezvous and docking of spacecraft. This research has focused on the optical architecture, which utilizes high-speed bipolar-phase grayscale-amplitude spatial light modulators as the image and correlation filter devices. The optical approach has the primary advantage of optical parallel processing: an extremely fast and efficient way of performing complex correlation calculations. However, the constraints imposed on optically implementable filters make optical-correlator-based pose estimation technically incompatible with the popular weighted composite filter designs successfully used on the digital platform. This research employs a much simpler "bank of filters" approach to optical pose estimation that exploits the inherent efficiency of optical correlation devices. A novel logarithmically mapped optically implementable matched filter combined with a pose search algorithm resulted in sub-degree standard deviations in angular pose estimation error. These filters were extremely simple to generate, requiring no complicated training sets, and resulted in excellent performance even in the presence of significant background noise. Common edge detection and scaling of the input image was the only image pre-processing necessary for accurate pose detection at all alignment distances of interest.
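A purely digital caricature of the "bank of filters" idea is sketched below: reference templates rendered at known in-plane rotations are correlated with the observed image via the FFT, and the pose with the strongest correlation peak is reported. The toy bar-shaped object and the 5-degree filter spacing are assumptions; the paper's log-mapped optical filters are not reproduced.

```python
# Digital sketch of a matched-filter bank for pose estimation. This is a plain
# correlation illustration, not the paper's log-mapped optical filter design.
import numpy as np

def correlate(image, template):
    """Peak of the circular cross-correlation, computed in the Fourier domain."""
    F = np.fft.fft2(image)
    G = np.fft.fft2(template, s=image.shape)
    return np.abs(np.fft.ifft2(F * np.conj(G))).max()

def bar_at_angle(angle_deg, size=64):
    """Toy 'object view': a bright bar rotated to the given in-plane angle."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    a = np.deg2rad(angle_deg)
    u = x * np.cos(a) + y * np.sin(a)     # coordinate across the bar
    v = -x * np.sin(a) + y * np.cos(a)    # coordinate along the bar
    return ((np.abs(u) < 2) & (np.abs(v) < 20)).astype(float)

filter_bank = {angle: bar_at_angle(angle) for angle in range(0, 180, 5)}   # reference poses
observed = bar_at_angle(37.0)                                              # "unknown" pose

scores = {angle: correlate(observed, tmpl) for angle, tmpl in filter_bank.items()}
print("estimated in-plane rotation:", max(scores, key=scores.get), "degrees")
```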
Alignment-free genetic sequence comparisons: a review of recent approaches by word analysis.
Bonham-Carter, Oliver; Steele, Joe; Bastola, Dhundy
2014-11-01
Modern sequencing and genome assembly technologies have provided a wealth of data, which will soon require an analysis by comparison for discovery. Sequence alignment, a fundamental task in bioinformatics research, may be used but with some caveats. Seminal techniques and methods from dynamic programming are proving ineffective for this work owing to their inherent computational expense when processing large amounts of sequence data. These methods are prone to giving misleading information because of genetic recombination, genetic shuffling and other inherent biological events. New approaches from information theory, frequency analysis and data compression are available and provide powerful alternatives to dynamic programming. These new methods are often preferred, as their algorithms are simpler and are not affected by synteny-related problems. In this review, we provide a detailed discussion of computational tools, which stem from alignment-free methods based on statistical analysis from word frequencies. We provide several clear examples to demonstrate applications and the interpretations over several different areas of alignment-free analysis such as base-base correlations, feature frequency profiles, compositional vectors, an improved string composition and the D2 statistic metric. Additionally, we provide detailed discussion and an example of analysis by Lempel-Ziv techniques from data compression. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
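A minimal word-analysis example in the spirit of the review is given below: k-mer count vectors for toy sequences, the basic D2 statistic (the inner product of the word counts) and a cosine similarity derived from it. The sequences and the k value are arbitrary.

```python
# Minimal word-frequency comparison: k-mer count vectors, the basic D2 statistic
# and a cosine similarity. Toy sequences only.
from collections import Counter
from math import sqrt

def kmer_counts(seq, k=3):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2(counts_a, counts_b):
    """Basic D2 statistic: sum over shared words of the product of their counts."""
    return sum(counts_a[w] * counts_b[w] for w in counts_a.keys() & counts_b.keys())

def cosine(counts_a, counts_b):
    num = d2(counts_a, counts_b)
    den = sqrt(sum(v * v for v in counts_a.values())) * sqrt(sum(v * v for v in counts_b.values()))
    return num / den

s1 = "ATGCGATACGCTTAGGCTAATCGATCG"
s2 = "ATGCGATACGCTTAGGCTAATCGAGGG"
s3 = "TTTTTCCCCCGGGGGAAAAATTTTTCC"
a, b, c = (kmer_counts(s) for s in (s1, s2, s3))
print("D2(s1,s2) =", d2(a, b), " cosine =", round(cosine(a, b), 3))
print("D2(s1,s3) =", d2(a, c), " cosine =", round(cosine(a, c), 3))
```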
Animal Welfare: Freedoms, Dominions and “A Life Worth Living”
Webster, John
2016-01-01
This opinion paper considers the relative validity and utility of three concepts: the Five Freedoms (FF), Five Domains (FD) and Quality of Life (QoL) as tools for the analysis of animal welfare. The aims of FF and FD are different but complementary. FD seeks to assess the impact of the physical and social environment on the mental (affective) state of a sentient animal; FF is an outcome-based approach to identify and evaluate the efficacy of specific actions necessary to promote well-being. Both have utility. The concept of QoL is presented mainly as a motivational framework. The FD approach provides an effective foundation for research and evidence-based conclusions as to the impact of the things we do on the mental state of the animals in our care. Moreover, it is one that can evolve with time. The FF are much simpler. They do not attempt to achieve an overall picture of mental state and welfare status, but the principles upon which they are based are timeless. Their aim is to be no more than a memorable set of signposts to right action. Since, so far as the animals are concerned, it is not what we think but what we do that counts, I suggest that they are likely to have a more general impact. PMID:27231943
Bilinear approach to Kuperschmidt super-KdV type equations
NASA Astrophysics Data System (ADS)
Babalic, Corina N.; Carstea, A. S.
2018-06-01
Hirota bilinear form and soliton solutions for the super-KdV (Korteweg–de Vries) equation of Kuperschmidt (Kuper–KdV) are given. It is shown that even though the collision of supersolitons is more complicated than in the case of the supersymmetric KdV equation of Manin–Radul, the asymptotic effect of the interaction is simpler. As a physical application it is shown that the well-known FPU problem, having a phonon-mediated interaction of some internal degrees of freedom expressed through Grassmann fields, transforms to the Kuper–KdV equation in a multiple-scale approach.
A Program Structure for Event-Based Speech Synthesis by Rules within a Flexible Segmental Framework.
ERIC Educational Resources Information Center
Hill, David R.
1978-01-01
A program structure based on recently developed techniques for operating system simulation has the required flexibility for use as a speech synthesis algorithm research framework. This program makes synthesis possible with less rigid time and frequency-component structure than simpler schemes. It also meets real-time operation and memory-size…
Data Integration and Mining for Synthetic Biology Design.
Mısırlı, Göksel; Hallinan, Jennifer; Pocock, Matthew; Lord, Phillip; McLaughlin, James Alastair; Sauro, Herbert; Wipat, Anil
2016-10-21
One aim of synthetic biologists is to create novel and predictable biological systems from simpler modular parts. This approach is currently hampered by a lack of well-defined and characterized parts and devices. However, there is a wealth of existing biological information, spread across the literature and numerous biological databases, which can be used to identify and characterize biological parts and their design constraints; the difficulty is that this information is stored in many different formats. New computational approaches are required to make this information available in an integrated format that is more amenable to data mining. A tried and tested approach to this problem is to map disparate data sources into a single data set, with common syntax and semantics, to produce a data warehouse or knowledge base. Ontologies have been used extensively in the life sciences, providing this common syntax and semantics as a model for a given biological domain, in a fashion that is amenable to computational analysis and reasoning. Here, we present an ontology for applications in synthetic biology design, SyBiOnt, which facilitates the modeling of information about biological parts and their relationships. SyBiOnt was used to create the SyBiOntKB knowledge base, incorporating and building upon existing life sciences ontologies and standards. The reasoning capabilities of ontologies were then applied to automate the mining of biological parts from this knowledge base. We propose that this approach will be useful to speed up synthetic biology design and ultimately help facilitate the automation of the biological engineering life cycle.
The induced electric field due to a current transient
NASA Astrophysics Data System (ADS)
Beck, Y.; Braunstein, A.; Frankental, S.
2007-05-01
Calculations and measurements of the electric fields induced by a lightning strike are important for understanding the phenomenon and developing effective protection systems. In this paper, a novel approach to the calculation of the electric fields due to lightning strikes, using a relativistic approach, is presented. This approach is based on a known current wave-pair model representing the lightning current wave. The model presented is one that describes the lightning current wave, either at the first stage of the descending charge wave from the cloud or at the later stage of the return stroke. The electric fields computed are cylindrically symmetric. A simplified method for the calculation of the electric field is achieved by using special relativity theory and relativistic considerations. The proposed approach, described in this paper, is based on simple expressions (by applying Coulomb's law) compared with the much more complicated partial differential equations based on Maxwell's equations. A straightforward method of calculating the electric field due to a lightning strike, modelled as a negative-positive (NP) wave-pair, is determined by using special relativity theory to calculate the 'velocity field' and relativistic concepts to calculate the 'acceleration field'. These fields are the basic elements required for calculating the total field resulting from the current wave-pair model. Moreover, a modified, simpler method using sub-models is presented. The sub-models are filaments of either static charges or charges at constant velocity only. Combining these simple sub-models yields the total wave-pair model. The results fully agree with those obtained by solving Maxwell's equations for the discussed problem.
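For orientation, the 'velocity field' and 'acceleration field' referred to above correspond, in the standard treatment, to the two terms of the Liénard-Wiechert field of a point charge evaluated at the retarded time; the expression below is the textbook decomposition, not the paper's relativistic derivation.

```latex
\[
  \mathbf{E}(\mathbf{r},t) = \frac{q}{4\pi\varepsilon_0}\left[
    \underbrace{\frac{(\hat{\mathbf{n}}-\boldsymbol{\beta})\,(1-\beta^2)}
                     {R^2\,(1-\hat{\mathbf{n}}\cdot\boldsymbol{\beta})^3}}_{\text{velocity field}}
    \;+\;
    \underbrace{\frac{\hat{\mathbf{n}}\times\big[(\hat{\mathbf{n}}-\boldsymbol{\beta})\times\dot{\boldsymbol{\beta}}\big]}
                     {c\,R\,(1-\hat{\mathbf{n}}\cdot\boldsymbol{\beta})^3}}_{\text{acceleration field}}
  \right]_{\mathrm{ret}}
\]
```

Superposing such contributions over filaments of static charge and of charge moving at constant velocity then assembles the total field of the wave-pair model.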
COSP - A computer model of cyclic oxidation
NASA Technical Reports Server (NTRS)
Lowell, Carl E.; Barrett, Charles A.; Palmer, Raymond W.; Auping, Judith V.; Probst, Hubert B.
1991-01-01
A computer model useful in predicting the cyclic oxidation behavior of alloys is presented. The model considers the oxygen uptake due to scale formation during the heating cycle and the loss of oxide due to spalling during the cooling cycle. The balance between scale formation and scale loss is modeled and used to predict weight change and metal loss kinetics. A simple uniform spalling model is compared to a more complex random spall site model. In nearly all cases, the simpler uniform spall model gave predictions as accurate as the more complex model. The model has been applied to several nickel-base alloys which, depending upon composition, form Al2O3 or Cr2O3 during oxidation. The model has been validated by several experimental approaches. Versions of the model that run on a personal computer are available.
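A minimal sketch of the uniform-spall variant of such a model is given below: parabolic growth of the retained scale during each hot cycle, loss of a fixed fraction of the scale on every cooldown, and bookkeeping of specimen weight change and cumulative metal consumption. The rate constant, spall fraction and oxide stoichiometry are illustrative assumptions, not values from COSP.

```python
# Minimal uniform-spall cyclic-oxidation sketch in the spirit described above.
# All constants are illustrative assumptions, not COSP parameters.
import numpy as np

kp = 0.01        # parabolic rate constant, (mg/cm^2)^2 per hour (assumed)
f_spall = 0.05   # fraction of retained scale lost on each cooling cycle (assumed)
f_oxygen = 0.47  # oxygen mass fraction of the oxide (roughly Al2O3)
dt = 1.0         # hot time per cycle, hours
n_cycles = 500

retained = 0.0           # retained oxide, mg/cm^2
oxygen_gained = 0.0
oxide_spalled = 0.0
weight_change = []
for _ in range(n_cycles):
    grown = np.sqrt(retained**2 + kp * dt)      # parabolic growth of the retained scale
    oxygen_gained += f_oxygen * (grown - retained)
    spalled = f_spall * grown                   # uniform spall on cooldown
    oxide_spalled += spalled
    retained = grown - spalled
    weight_change.append(oxygen_gained - oxide_spalled)

metal_consumed = (oxygen_gained / f_oxygen) * (1.0 - f_oxygen)
print(f"final specific weight change: {weight_change[-1]:+.3f} mg/cm^2")
print(f"cumulative metal loss:        {metal_consumed:.3f} mg/cm^2")
```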
The Levy sections theorem revisited
NASA Astrophysics Data System (ADS)
Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio
2007-06-01
This paper revisits the Levy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns. And the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lenses of the Levy sections theorem one can find common patterns in otherwise very different data sets.
Schostak, M; Miller, K; Schrader, M
2008-01-01
Radical prostatectomy for treatment of prostate cancer is a technically sophisticated operation. Simpler therapies have therefore been developed in the course of decades. The decisive advantage of a radical operation is the chance of a cure with minimal collateral damage. It is the only approach that enables precise tumor staging. The 10-year progression-free survival probability is approximately 85% for a localized tumor with negative resection margins. This high cure rate is unsurpassed by competitive treatment modalities. Nowadays, experienced surgeons achieve excellent functional results (for example, recovery of continence and erectile function) with minimum morbidity. Even in the locally advanced stage, results are very good compared to those obtained with other treatment modalities. Pathological staging enables stratified adjuvant therapy based on concrete information. The overall prognosis can thus be significantly improved.
Jonnalagadda, Siddhartha; Gonzalez, Graciela
2010-11-13
BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested the tool by measuring its impact on the task of protein-protein interaction (PPI) extraction: it improved the F-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.
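A minimal sketch of the "shot-gun" idea, under the assumption that a sentence has already been decomposed into constituent elements with simpler variants (the example sentence and its variants below are invented, not BioSimplify output): every combination of variants yields one candidate simplified sentence for downstream extraction.

```python
from itertools import product

# Hypothetical decomposition of a sentence into constituent elements, each with
# simpler variants (the text is illustrative, not produced by BioSimplify).
constituents = [
    ["The kinase MAPK1, a key signalling enzyme,", "MAPK1"],
    ["strongly phosphorylates", "phosphorylates"],
    ["the transcription factor ELK1 in vitro.", "ELK1."],
]

# "Shot-gun" generation: every combination of variants yields one candidate
# simpler sentence for downstream information extraction.
candidates = [" ".join(parts) for parts in product(*constituents)]
for sentence in candidates:
    print(sentence)
```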
Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling
NASA Astrophysics Data System (ADS)
Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.
2017-12-01
Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit times; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model. This complex model then serves as the basis to compare simpler model structures. Through this approach, predictive uncertainty can be quantified relative to a known reference solution.
Lloyd, G T; Bapst, D W; Friedman, M; Davis, K E
2016-11-01
Branch lengths, measured in character changes, are an essential requirement of clock-based divergence estimation, regardless of whether the fossil calibrations used represent nodes or tips. However, a separate set of divergence time approaches is typically used to date palaeontological trees, which may lack such branch lengths. Among these methods, sophisticated probabilistic approaches have recently emerged, in contrast with simpler algorithms relying on minimum node ages. Here, using a novel phylogenetic hypothesis for Mesozoic dinosaurs, we apply two such approaches to estimate divergence times for: (i) Dinosauria, (ii) Avialae (the earliest birds) and (iii) Neornithes (crown birds). We find: (i) the plausibility of a Permian origin for dinosaurs to be dependent on whether Nyasasaurus is the oldest dinosaur, (ii) a Middle to Late Jurassic origin of avian flight regardless of whether Archaeopteryx or Aurornis is considered the first bird and (iii) a Late Cretaceous origin for Neornithes that is broadly congruent with other node- and tip-dating estimates. Demonstrating the feasibility of probabilistic time-scaling further opens up divergence estimation to the rich histories of extinct biodiversity in the fossil record, even in the absence of detailed character data. © 2016 The Authors.
A new approach to impulsive rendezvous near circular orbit
NASA Astrophysics Data System (ADS)
Carter, Thomas; Humi, Mayer
2012-04-01
A new approach is presented for the problem of planar optimal impulsive rendezvous of a spacecraft in an inertial frame near a circular orbit in a Newtonian gravitational field. The total characteristic velocity to be minimized is replaced by a related characteristic-value function and this related optimization problem can be solved in closed form. The solution of this problem is shown to approach the solution of the original problem in the limit as the boundary conditions approach those of a circular orbit. Using a form of primer-vector theory the problem is formulated in a way that leads to relatively easy calculation of the optimal velocity increments. A certain vector that can easily be calculated from the boundary conditions determines the number of impulses required for solution of the optimization problem and also is useful in the computation of these velocity increments. Necessary and sufficient conditions for boundary conditions to require exactly three nonsingular non-degenerate impulses for solution of the related optimal rendezvous problem, and a means of calculating these velocity increments are presented. A simple example of a three-impulse rendezvous problem is solved and the resulting trajectory is depicted. Optimal non-degenerate nonsingular two-impulse rendezvous for the related problem is found to consist of four categories of solutions depending on the four ways the primer vector locus intersects the unit circle. Necessary and sufficient conditions for each category of solutions are presented. The region of the boundary values that admit each category of solutions of the related problem are found, and in each case a closed-form solution of the optimal velocity increments is presented. Similar results are presented for the simpler optimal rendezvous that require only one-impulse. For brevity degenerate and singular solutions are not discussed in detail, but should be presented in a following study. Although this approach is thought to provide simpler computations than existing methods, its main contribution may be in establishing a new approach to the more general problem.
Hybrid x-space: a new approach for MPI reconstruction.
Tateo, A; Iurino, A; Settanni, G; Andrisani, A; Stifanelli, P F; Larizza, P; Mazzia, F; Mininni, R M; Tangaro, S; Bellotti, R
2016-06-07
Magnetic particle imaging (MPI) is a new medical imaging technique capable of recovering the distribution of superparamagnetic particles from their measured induced signals. In the literature there are two main MPI reconstruction techniques: measurement-based (MB) and x-space (XS). The MB method is expensive because it requires a long calibration procedure as well as a reconstruction phase that can be numerically costly. On the other hand, the XS method is simpler than MB, but exact knowledge of the field free point (FFP) motion is essential for its implementation. Our simulation work focuses on the implementation of a new approach for MPI reconstruction, called hybrid x-space (HXS), which combines the previous methods. Specifically, our approach is based on XS reconstruction because it requires the knowledge of the FFP position and velocity at each time instant. The difference with respect to the original XS formulation is how the FFP velocity is computed: we estimate it from the experimental measurements of the calibration scans, typical of the MB approach. Moreover, a compressive sensing technique is applied in order to reduce the calibration time, using a smaller number of sampling positions. Simulations highlight that the HXS and XS methods give similar results. Furthermore, appropriate use of compressive sensing is crucial for obtaining a good balance between time reduction and reconstructed image quality. Our proposal is suitable for open-geometry configurations of human-size devices, where incidental factors could make the currents, the fields and the FFP trajectory irregular.
Tunable molecular plasmons in polycyclic aromatic hydrocarbons.
Manjavacas, Alejandro; Marchesin, Federico; Thongrattanasiri, Sukosin; Koval, Peter; Nordlander, Peter; Sánchez-Portal, Daniel; García de Abajo, F Javier
2013-04-23
We show that chemically synthesized polycyclic aromatic hydrocarbons (PAHs) exhibit molecular plasmon resonances that are remarkably sensitive to the net charge state of the molecule and the atomic structure of the edges. These molecules can be regarded as nanometer-sized forms of graphene, from which they inherit their high electrical tunability. Specifically, the addition or removal of a single electron switches these molecular plasmons on or off. Our first-principles time-dependent density-functional theory (TDDFT) calculations are in good agreement with a simpler tight-binding approach that can be easily extended to much larger systems. These fundamental insights enable the development of novel plasmonic devices based upon chemically available molecules, which, unlike colloidal or lithographic nanostructures, are free from structural imperfections. We further show a strong interaction between plasmons in neighboring molecules, quantified by significant energy shifts and field enhancement, which enables molecule-based plasmonic designs. Our findings suggest new paradigms for electro-optical modulation and switching, single-electron detection, and sensing using individual molecules.
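As a small illustration of the kind of tight-binding description referred to here, the sketch below diagonalizes a nearest-neighbour (Hueckel-like) Hamiltonian for the six pi-orbitals of a benzene ring; the hopping energy is an assumed typical value, and the paper's larger PAHs and charge-state effects are not reproduced.

```python
import numpy as np

# Nearest-neighbour tight-binding Hamiltonian for the six pi-orbitals of a
# single benzene ring, the simplest PAH-like fragment. The hopping energy t is
# an assumed, typical value for carbon pi systems.
n = 6
t = -2.7  # eV, assumed nearest-neighbour hopping
H = np.zeros((n, n))
for i in range(n):
    j = (i + 1) % n          # ring connectivity
    H[i, j] = H[j, i] = t
energies = np.linalg.eigvalsh(H)
print(np.round(energies, 3))   # pi-orbital energies: 2t, t, t, -t, -t, -2t
```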
Mechanical transduction via a single soft polymer
NASA Astrophysics Data System (ADS)
Hou, Ruizheng; Wang, Nan; Bao, Weizhu; Wang, Zhisong
2018-04-01
Molecular machines from biology and nanotechnology often depend on soft structures to perform mechanical functions, but the underlying mechanisms and advantages or disadvantages over rigid structures are not fully understood. We report here a rigorous study of mechanical transduction along a single soft polymer based on exact solutions to the realistic three-dimensional wormlike-chain model and augmented with analytical relations derived from simpler polymer models. The results reveal surprisingly that a soft polymer with vanishingly small persistence length below a single chemical bond still transduces biased displacement and mechanical work up to practically significant amounts. This "soft" approach possesses unique advantages over the conventional wisdom of rigidity-based transduction, and potentially leads to a unified mechanism for effective allosterylike transduction and relay of mechanical actions, information, control, and molecules from one position to another in molecular devices and motors. This study also identifies an entropy limit unique to the soft transduction, and thereby suggests a possibility of detecting higher efficiency for kinesin motor and mutants in future experiments.
A novel unsplit perfectly matched layer for the second-order acoustic wave equation.
Ma, Youneng; Yu, Jinhua; Wang, Yuanyuan
2014-08-01
When solving acoustic field equations by numerical approximation techniques, absorbing boundary conditions (ABCs) are widely used to truncate the simulation to a finite space. The perfectly matched layer (PML) technique has exhibited excellent absorbing efficiency as an ABC for the acoustic wave equation formulated as a first-order system. However, as the PML was originally designed for the first-order equation system, it cannot be applied to the second-order equation system directly. In this article, we aim to extend the unsplit PML to the second-order equation system. We developed an efficient unsplit implementation of PML for the second-order acoustic wave equation based on an auxiliary-differential-equation (ADE) scheme. The proposed method facilitates the use of PML in simulations based on second-order equations. Compared with existing PMLs, it has a simpler implementation and requires less extra storage. Numerical results from finite-difference time-domain models are provided to illustrate the validity of the approach. Copyright © 2014 Elsevier B.V. All rights reserved.
Peters, Gjalt-Jorn Ygram; de Bruin, Marijn; Crutzen, Rik
2015-01-01
There is a need to consolidate the evidence base underlying our toolbox of methods of behaviour change. Recent efforts to this effect have conducted meta-regressions on evaluations of behaviour change interventions, deriving each method's effectiveness from its association to intervention effect size. However, there are a range of issues that raise concern about whether this approach is actually furthering or instead obstructing the advancement of health psychology theories and the quality of health behaviour change interventions. Using examples from theory, the literature and data from previous meta-analyses, these concerns and their implications are explained and illustrated. An iterative protocol for evidence base accumulation is proposed that integrates evidence derived from both experimental and applied behaviour change research, and combines theory development in experimental settings with theory testing in applied real-life settings. As evidence gathered in this manner accumulates, a cumulative science of behaviour change can develop. PMID:25793484
Multiplex High-Throughput Targeted Proteomic Assay To Identify Induced Pluripotent Stem Cells.
Baud, Anna; Wessely, Frank; Mazzacuva, Francesca; McCormick, James; Camuzeaux, Stephane; Heywood, Wendy E; Little, Daniel; Vowles, Jane; Tuefferd, Marianne; Mosaku, Olukunbi; Lako, Majlinda; Armstrong, Lyle; Webber, Caleb; Cader, M Zameel; Peeters, Pieter; Gissen, Paul; Cowley, Sally A; Mills, Kevin
2017-02-21
Induced pluripotent stem cells have great potential as a human model system in regenerative medicine, disease modeling, and drug screening. However, their use in medical research is hampered by laborious reprogramming procedures that yield low numbers of induced pluripotent stem cells. For further applications in research, only the best, competent clones should be used. The standard assays for pluripotency are based on genomic approaches, which take up to 1 week to perform and incur significant cost. Therefore, there is a need for a rapid and cost-effective assay able to distinguish between pluripotent and nonpluripotent cells. Here, we describe a novel multiplexed, high-throughput, and sensitive peptide-based multiple reaction monitoring mass spectrometry assay, allowing for the identification and absolute quantitation of multiple core transcription factors and pluripotency markers. This assay provides simpler, high-throughput classification of cells as either pluripotent or nonpluripotent in a 7-minute analysis, while being more cost-effective than conventional genomic tests.
Snapshot imaging Fraunhofer line discriminator for detection of plant fluorescence
NASA Astrophysics Data System (ADS)
Gupta Roy, S.; Kudenov, M. W.
2015-05-01
Non-invasive quantification of plant health is traditionally accomplished using reflectance-based metrics, such as the normalized difference vegetative index (NDVI). However, measuring plant fluorescence (both active and passive) to determine the photochemistry of plants has gained importance. Due to better cost efficiency, lower power requirements, and simpler scanning synchronization, detecting passive fluorescence is preferred over active fluorescence. In this paper, we propose a high-speed imaging approach for measuring passive plant fluorescence, within the hydrogen alpha Fraunhofer line at ~656 nm, using a Snapshot Imaging Fraunhofer Line Discriminator (SIFOLD). For the first time, the advantage of snapshot imaging for high-throughput Fraunhofer Line Discrimination (FLD) is exploited by our system, which is based on a multiple-image Fourier transform spectrometer and a spatial heterodyne interferometer (SHI). The SHI is a Sagnac interferometer, which is dispersion compensated using blazed diffraction gratings. We present data and techniques for calibrating the SIFOLD to any particular wavelength. This technique can be applied to quantify plant fluorescence at low cost and with reduced complexity of data collection.
Rosenbaum, Benjamin P; Silkin, Nikolay; Miller, Randolph A
2014-01-01
Real-time alerting systems typically warn providers about abnormal laboratory results or medication interactions. For more complex tasks, institutions create site-wide 'data warehouses' to support quality audits and longitudinal research. Sophisticated systems like i2b2 or Stanford's STRIDE utilize data warehouses to identify cohorts for research and quality monitoring. However, substantial resources are required to install and maintain such systems. For more modest goals, an organization desiring merely to identify patients with 'isolation' orders, or to determine patients' eligibility for clinical trials, may adopt a simpler, limited approach based on processing the output of one clinical system, and not a data warehouse. We describe a limited, order-entry-based, real-time 'pick off' tool, utilizing public domain software (PHP, MySQL). Through a web interface the tool assists users in constructing complex order-related queries and auto-generates corresponding database queries that can be executed at recurring intervals. We describe successful application of the tool for research and quality monitoring.
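A minimal sketch of the 'pick off' idea, assuming a hypothetical orders table: user criteria are turned into a parameterized query that could be re-run at recurring intervals. The published tool used PHP/MySQL; Python with SQLite is used here purely for illustration.

```python
import sqlite3

# Minimal sketch: turn a small set of order criteria into a parameterized query
# and run it against one clinical system's output (no data warehouse needed).
# Table and column names are illustrative assumptions.
def build_query(criteria):
    clauses, params = [], []
    for column, value in criteria.items():
        clauses.append(f"{column} = ?")
        params.append(value)
    sql = "SELECT patient_id, order_time FROM orders WHERE " + " AND ".join(clauses)
    return sql, params

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (patient_id TEXT, order_type TEXT, status TEXT, order_time TEXT)")
conn.execute("INSERT INTO orders VALUES ('A12', 'isolation', 'active', '2014-01-05')")

sql, params = build_query({"order_type": "isolation", "status": "active"})
for row in conn.execute(sql, params):   # would be re-run at recurring intervals
    print(row)
```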
Simplified process model discovery based on role-oriented genetic mining.
Zhao, Weidong; Liu, Xi; Dai, Weihui
2014-01-01
Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and is applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show that the proposed method is more effective for streamlining the process than the approaches of related studies.
Pérez-Rodríguez, Gael; Dias, Sónia; Pérez-Pérez, Martín; Fdez-Riverola, Florentino; Azevedo, Nuno F; Lourenço, Anália
2018-03-08
Experimental incapacity to track microbe-microbe interactions in structures like biofilms, and the complexity inherent to the mathematical modelling of those interactions, raise the need for feasible, alternative modelling approaches. This work proposes an agent-based representation of the diffusion of N-acyl homoserine lactones (AHL) in a multicellular environment formed by Pseudomonas aeruginosa and Candida albicans. Depending on the spatial location, C. albicans cells were variably exposed to AHLs, an observation that might help explain why phenotypic switching of individual cells in biofilms occurred at different time points. The simulation and algebraic results were similar for simpler scenarios, although some statistical differences could be observed (p < 0.05). The model was also successfully applied to a more complex scenario representing a small multicellular environment containing C. albicans and P. aeruginosa cells encased in a 3-D matrix. Further development of this model may help create a predictive tool to depict biofilm heterogeneity at the single-cell level.
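A toy sketch of the agent-based ingredient: AHL molecules released at one location perform lattice random walks, so cells at different grid positions accumulate very different local exposure. Grid size, molecule count, and step count are arbitrary assumptions, not the parameters of the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy agent-based diffusion: AHL molecules released at the grid centre perform
# random walks; positions far from the source see much lower local exposure.
# All sizes and counts below are illustration values.
size, n_molecules, steps = 51, 2000, 200
pos = np.full((n_molecules, 2), size // 2)
for _ in range(steps):
    pos += rng.integers(-1, 2, size=pos.shape)        # one lattice step per molecule
    pos = np.clip(pos, 0, size - 1)                   # reflecting boundaries

exposure = np.zeros((size, size), dtype=int)
np.add.at(exposure, (pos[:, 0], pos[:, 1]), 1)
print("exposure at centre cell:", exposure[size // 2, size // 2])
print("exposure at a corner cell:", exposure[0, 0])
```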
Science deficiency in conservation practice: the monitoring of tiger populations in India
Karanth, K.U.; Nichols, J.D.; Seidensticker, J.; Dinerstein, Eric; Smith, J.L.D.; McDougal, C.; Johnsingh, A.J.T.; Chundawat, Raghunandan S.; Thapar, V.
2003-01-01
Conservation practices are supposed to be refined by advancing scientific knowledge. We study this phenomenon in the context of monitoring tiger populations in India, by evaluating the 'pugmark census method' employed by wildlife managers for three decades. We use an analytical framework of modern animal population sampling to test the efficacy of the pugmark censuses using scientific data on tigers and our field observations. We identify three critical goals for monitoring tiger populations, in order of increasing sophistication: (1) distribution mapping, (2) tracking relative abundance, and (3) estimation of absolute abundance. We demonstrate that the present census-based paradigm does not work because it ignores the first two simpler goals, and targets, but fails to achieve, the most difficult third goal. We point out the utility and ready availability of alternative monitoring paradigms that deal with the central problems of spatial sampling and observability. We propose an alternative sampling-based approach that can be tailored to meet the practical needs of tiger monitoring at different levels of refinement.
Calibrating cellular automaton models for pedestrians walking through corners
NASA Astrophysics Data System (ADS)
Dias, Charitha; Lovreglio, Ruggiero
2018-05-01
Cellular Automata (CA) based pedestrian simulation models have gained remarkable popularity as they are simpler and easier to implement compared to other microscopic modeling approaches. However, incorporating traditional floor field representations in CA models to simulate pedestrian corner navigation behavior could result in unrealistic behaviors. Even though several previous studies have attempted to enhance CA models to realistically simulate pedestrian maneuvers around bends, such modifications have not been calibrated or validated against empirical data. In this study, two static floor field (SFF) representations, namely 'discrete representation' and 'continuous representation', are calibrated for CA-models to represent pedestrians' walking behavior around 90° bends. Trajectory data collected through a controlled experiment are used to calibrate these model representations. Calibration results indicate that although both floor field representations can represent pedestrians' corner navigation behavior, the 'continuous' representation fits the data better. Output of this study could be beneficial for enhancing the reliability of existing CA-based models by representing pedestrians' corner navigation behaviors more realistically.
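A minimal sketch of a static-floor-field move rule of the general kind being calibrated here: the probability of stepping to a neighbouring cell decays exponentially with that cell's floor-field value. The field values and sensitivity parameter below are assumptions for illustration, not the calibrated values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Static-floor-field move rule for one pedestrian on a small grid: the chance of
# stepping to a neighbouring cell decays exponentially with that cell's
# floor-field value S (a distance-like cost to the exit at cell (2, 2)).
# k_S is an assumed sensitivity parameter, not a value calibrated in the study.
k_S = 2.0
S = np.array([[4.0, 3.0, 2.0],
              [3.0, 2.0, 1.0],
              [2.0, 1.0, 0.0]])      # Manhattan distance to the exit cell (2, 2)

# pedestrian currently at (1, 1); its von Neumann neighbours:
neighbours = [(0, 1), (1, 0), (1, 2), (2, 1)]
weights = np.array([np.exp(-k_S * S[r, c]) for r, c in neighbours])
probs = weights / weights.sum()
step = neighbours[rng.choice(len(neighbours), p=probs)]
print("chosen step:", step, "with probabilities", np.round(probs, 3))
```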
NASA Astrophysics Data System (ADS)
Antonetti, Manuel; Buss, Rahel; Scherrer, Simon; Margreth, Michael; Zappa, Massimiliano
2016-07-01
The identification of landscapes with similar hydrological behaviour is useful for runoff and flood predictions in small ungauged catchments. An established method for landscape classification is based on the concept of the dominant runoff process (DRP). The various DRP-mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexities of automatic DRP-mapping approaches affect hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping. Measures of agreement and association, a class comparison, and a deviation map were derived. The automatically derived DRP maps were used in synthetic runoff simulations with an adapted version of the PREVAH hydrological model, and the simulation results were compared with those from simulations using the reference maps. The DRP maps derived with the automatic approach with the highest complexity and data requirement were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both the extent and distribution of the DRPs. The runoff simulations derived from the simpler DRP maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. Furthermore, we argue that expert knowledge should be used not only for model building and constraining, but also in the landscape classification phase.
NASA Technical Reports Server (NTRS)
Houseman, John; Patzold, Jack D.; Jackson, Julie R.; Brown, Pamela R.
1999-01-01
The loading of spacecraft with hydrazine-type fuels has long been recognized as a hazardous operation. This has led to safety strategies that include the use of SCAPE protective suits for personnel. The use of SCAPE suits has an excellent safety record; however, there are associated drawbacks. Drawbacks include the high cost of maintaining and cleaning the suits, reduced mobility and dexterity when wearing the suits, the requirement for extensive specialized health and safety training, and the need to rotate personnel every two hours. A study was undertaken to look at procedures and/or equipment to eliminate or reduce the time spent in SCAPE-type operations. The major conclusions are drawn from observations of the loading of the JPL/NASA spacecraft Deep Space One (DS1) at KSC and the loading of a commercial communications satellite by Motorola at Vandenberg AF Base. The DS1 operations require extensive use of SCAPE suits, while the Motorola operation uses only SPLASH attire with a two-man team on standby in SCAPE. The Motorola team used very different loading equipment and procedures based on an integrated approach involving the propellant supplier. Overall, the Motorola approach was very clean, much faster, and simpler than the DS1 procedure. The DS1 spacecraft used a bladder in the propellant tank, whereas the Motorola spacecraft used a Propellant Management Device (PMD). The Motorola approach cannot be used for tanks with bladders. To overcome this problem, some new procedures and new equipment are proposed to enable tanks with bladders to be loaded without using SCAPE, using a modified Motorola approach. Overall, it appears feasible to adopt the non-SCAPE approach while maintaining a very high degree of safety and reliability.
NASA Technical Reports Server (NTRS)
Alexander, June; Corwin, Edward; Lloyd, David; Logar, Antonette; Welch, Ronald
1996-01-01
This research focuses on a new neural network scene classification technique. The task is to identify scene elements in Advanced Very High Resolution Radiometry (AVHRR) data from three scene types: polar, desert, and smoke from biomass burning in South America (smoke). The ultimate goal of this research is to design and implement a computer system which will identify the clouds present on a whole-Earth satellite view as a means of tracking global climate changes. Previous research has reported results for rule-based systems (Tovinkere et al. 1992, 1993), for standard back propagation (Watters et al. 1993), and for a hierarchical approach (Corwin et al. 1994) for polar data. This research uses a hierarchical neural network with don't-care conditions and applies this technique to complex scenes. A hierarchical neural network consists of a switching network and a collection of leaf networks. The idea of the hierarchical neural network is that it is a simpler task to classify a certain pattern from a subset of patterns than it is to classify a pattern from the entire set. Therefore, the first task is to cluster the classes into groups. The switching, or decision, network performs an initial classification by selecting a leaf network. The leaf networks contain a reduced set of similar classes, and it is in the various leaf networks that the actual classification takes place. The grouping of classes in the various leaf networks is determined by applying an iterative clustering algorithm. Several clustering algorithms were investigated, but due to the size of the data sets, the exhaustive search algorithms were eliminated. A heuristic approach using a confusion matrix from a lightly trained neural network provided the basis for the clustering algorithm. Once the clusters have been identified, the hierarchical network can be trained. The approach of using don't-care nodes results from the difficulty of generating extremely complex surfaces in order to separate one class from all of the others. This approach finds pairwise separating surfaces and forms the more complex separating surface from combinations of simpler surfaces. This technique both reduces training time and improves accuracy over the previously reported results. Accuracies of 97.47%, 95.70%, and 99.05% were achieved for the polar, desert, and smoke data sets.
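A small sketch of the hierarchical-classifier structure described here, using synthetic data: a switching classifier routes each sample to a cluster of similar classes, and a leaf classifier separates classes within that cluster. The data, the class-to-cluster grouping, and the scikit-learn models are assumptions; the paper's AVHRR features and don't-care handling are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# Synthetic 4-class data and an assumed grouping of classes into two clusters.
X = rng.normal(size=(600, 4))
y = rng.integers(0, 4, size=600)
X += y[:, None] * 0.8                                  # make classes roughly separable
cluster_of = np.array([0, 0, 1, 1])                    # classes {0,1} vs {2,3}

# Switching network: predicts the cluster. Leaf networks: classify within a cluster.
switch = LogisticRegression(max_iter=1000).fit(X, cluster_of[y])
leaves = {}
for c in (0, 1):
    mask = cluster_of[y] == c
    leaves[c] = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                              random_state=0).fit(X[mask], y[mask])

def predict(x):
    c = int(switch.predict(x.reshape(1, -1))[0])        # pick a leaf network
    return int(leaves[c].predict(x.reshape(1, -1))[0])  # classify within the cluster

print("predicted:", predict(X[0]), "true:", y[0])
```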
NASA Astrophysics Data System (ADS)
Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul
2015-12-01
Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable outputs (such as greenhouse gas emissions, nitrate leaching, effects on humans and organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, the interval data approach has been found to be the most suitable way to account for data uncertainty, as it is much simpler to model and needs less information regarding distributions and membership functions. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.
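A minimal sketch of a basic directional distance function (DDF) DEA program of the kind referred to here, solved as a linear program: the desirable output is expanded and the undesirable output contracted along the direction (y_o, b_o) with inputs held fixed. The data are invented, and the paper's interval-data and climatic-factor extensions are not included.

```python
import numpy as np
from scipy.optimize import linprog

# Basic DDF DEA for one farm "o": maximize beta subject to an envelopment of the
# observed data; undesirable output is treated with an equality (weak
# disposability). All numbers are made-up illustration data.
x = np.array([[2.0, 3.0, 2.5, 4.0]])    # 1 input,  4 DMUs
y = np.array([[1.0, 2.0, 1.5, 3.0]])    # 1 desirable output
b = np.array([[0.8, 1.5, 1.0, 2.2]])    # 1 undesirable output
o = 0                                    # DMU under evaluation
n = x.shape[1]

c = np.r_[-1.0, np.zeros(n)]             # maximize beta  ->  minimize -beta
A_ub = [np.r_[0.0, x[0]]]                # inputs:      sum_j lambda_j x_j <= x_o
b_ub = [x[0, o]]
A_ub.append(np.r_[y[0, o], -y[0]])       # desirable:   sum_j lambda_j y_j >= y_o + beta*y_o
b_ub.append(-y[0, o])
A_eq = [np.r_[b[0, o], b[0]]]            # undesirable: sum_j lambda_j b_j  = b_o - beta*b_o
b_eq = [b[0, o]]

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * (1 + n), method="highs")
print("inefficiency beta for DMU", o, "=", round(res.x[0], 4))
```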
Bernardini, Annarita; Larrabide, Ignacio; Morales, Hernán G.; Pennati, Giancarlo; Petrini, Lorenza; Cito, Salvatore; Frangi, Alejandro F.
2011-01-01
Cerebral aneurysms are abnormal focal dilatations of artery walls. The interest in virtual tools to help clinicians assess the effectiveness of different procedures for cerebral aneurysm treatment is constantly growing. This study is focused on the analysis of the influence of different stent deployment approaches on intra-aneurysmal haemodynamics using computational fluid dynamics (CFD). A self-expanding stent was deployed in an idealized aneurysmal cerebral vessel in two initial positions. Different cases characterized by a progression of simplifications in stent modelling (geometry and material) and vessel material properties were set up, using finite element and fast virtual stenting methods. Then, CFD analysis was performed for untreated and stented vessels. Haemodynamic parameters were analysed qualitatively and quantitatively, comparing the cases and the two initial positions. All the cases predicted a reduction of average wall shear stress and average velocity of almost 50 per cent after stent deployment for both initial positions. Results highlighted that, although some differences in calculated parameters existed across the cases based on the modelling simplifications, all the approaches described the most important effects on intra-aneurysmal haemodynamics. Hence, simpler and faster modelling approaches could be included in the clinical workflow and, despite the adopted simplifications, support clinicians in treatment planning. PMID:22670204
Transient high frequency signal estimation: A model-based processing approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, F.L.
1985-03-22
By utilizing the superposition property of linear systems, a method of estimating the incident signal from reflective nondispersive data is developed. One of the basic merits of this approach is that the reflections were removed by direct application of a Wiener-type estimation algorithm after the appropriate input was synthesized. The structure of the nondispersive signal model is well documented, and thus its credence is established. The model is stated, and most of the effort is devoted to practical methods of estimating the model parameters. Though a general approach was developed for obtaining the reflection weights, a simpler approach was employed here, since a fairly good reflection model is available. The technique essentially consists of calculating ratios of the autocorrelation function at lag zero and at the lag where the incident signal and first reflection coincide. We initially performed our processing procedure on a measurement of a single signal. Multiple application of the processing procedure was required when we applied the reflection removal technique to a measurement containing information from the interaction of two physical phenomena. All processing was performed using SIG, an interactive signal processing package. One of the many consequences of using SIG was that repetitive operations were, for the most part, automated. A custom menu was designed to perform the deconvolution process.
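One plausible reading of the autocorrelation-ratio step, sketched below under the assumption of a short incident pulse plus a single non-overlapping reflection a*s(t-d): in that case R_x(d)/R_x(0) is approximately a/(1+a^2), which can be inverted for the reflection weight. The pulse shape, delay, and weight are assumed test values, not data from the report.

```python
import numpy as np

# Assumed setting: x(t) = s(t) + a*s(t-d), with a short pulse s whose
# autocorrelation vanishes at lags d and 2d. Then R_x(d)/R_x(0) ~= a/(1+a^2).
n, d, a_true = 4000, 600, 0.4
t = np.arange(200)
s = np.exp(-((t - 60) / 20.0) ** 2) * np.sin(0.5 * t)   # short incident pulse
x = np.zeros(n)
x[:200] += s
x[d:d + 200] += a_true * s

def autocorr(sig, lag):
    return np.dot(sig[:len(sig) - lag], sig[lag:])

r = autocorr(x, d) / autocorr(x, 0)
a_est = (1.0 - np.sqrt(1.0 - 4.0 * r * r)) / (2.0 * r)   # root with |a| < 1
print("estimated reflection weight:", round(a_est, 3))
```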
On the Formulation of Anisotropic-Polyaxial Failure Criteria: A Comparative Study
NASA Astrophysics Data System (ADS)
Parisio, Francesco; Laloui, Lyesse
2018-02-01
The correct representation of the failure of geomaterials that feature strength anisotropy and polyaxiality is crucial for many applications. In this contribution, we propose and evaluate through a comparative study a generalized framework that covers both features. Polyaxiality of strength is modeled with a modified Van Eekelen approach, while the anisotropy is modeled using a fabric tensor approach of the Pietruszczak and Mroz type. Both approaches share the same philosophy as they can be applied to simpler failure surfaces, allowing great flexibility in model formulation. The new failure surface is tested against experimental data and its performance compared against classical failure criteria commonly used in geomechanics. Our study finds that the global error between predictions and data is generally smaller for the proposed framework compared to other classical approaches.
Ortín, A; Torres-Lapasió, J R; García-Álvarez-Coque, M C
2011-08-26
Situations of minimal resolution are often found in liquid chromatography, when samples that contain a large number of compounds, or highly similar in terms of structure and/or polarity, are analysed. This makes full resolution with a single separation condition (e.g., mobile phase, gradient or column) unfeasible. In this work, the optimisation of the resolution of such samples in reversed-phase liquid chromatography is approached using two or more isocratic mobile phases with a complementary resolution behaviour (complementary mobile phases, CMPs). Each mobile phase is dedicated to the separation of a group of compounds. The CMPs are selected in such a way that, when the separation is considered globally, all the compounds in the sample are satisfactorily resolved. The search of optimal CMPs can be carried out through a comprehensive examination of the mobile phases in a selected domain. The computation time of this search has been reported to be substantially reduced by application of a genetic algorithm with local search (LOGA). A much simpler approach is here described, which is accessible to non-experts in programming, and offers solutions of the same quality as LOGA, with a similar computation time. The approach makes a sequential search of CMPs based on the peak count concept, which is the number of peaks exceeding a pre-established resolution threshold. The new approach is described using as test sample a mixture of 30 probe compounds, 23 of them with an ionisable character, and the pH and organic solvent contents as experimental factors. Copyright © 2011 Elsevier B.V. All rights reserved.
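A greedy sketch of the sequential peak-count search described here, assuming a precomputed matrix of resolutions for each compound in each candidate mobile phase (random stand-in numbers below): at each step the phase that resolves the most still-unresolved compounds is added to the set of complementary mobile phases.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in data: resolution of each compound in each candidate mobile phase.
n_phases, n_compounds, threshold = 40, 30, 1.5
resolution = rng.uniform(0.0, 3.0, size=(n_phases, n_compounds))

unresolved = np.ones(n_compounds, dtype=bool)
selected = []
while unresolved.any():
    # peak count restricted to compounds not yet covered by a selected phase
    gains = ((resolution > threshold) & unresolved).sum(axis=1)
    best = int(np.argmax(gains))
    if gains[best] == 0:
        break                                    # no remaining phase helps further
    selected.append(best)
    unresolved &= ~(resolution[best] > threshold)

print("complementary mobile phases chosen:", selected)
print("compounds left unresolved:", int(unresolved.sum()))
```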
ERIC Educational Resources Information Center
Ito, Hiroyuki; Tani, Iori; Yukihiro, Ryoji; Adachi, Jun; Hara, Koichi; Ogasawara, Megumi; Inoue, Masahiko; Kamio, Yoko; Nakamura, Kazuhiko; Uchiyama, Tokio; Ichikawa, Hironobu; Sugiyama, Toshiro; Hagiwara, Taku; Tsujii, Masatsugu
2012-01-01
The pervasive developmental disorders (PDDs) Autism Society Japan Rating Scale (PARS), an interview-based instrument for evaluating PDDs, has been developed in Japan with the aim of providing a method that (1) can be used to evaluate PDD symptoms and related support needs and (2) is simpler and easier than the currently used "gold…
Planner-Based Control of Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Kortenkamp, David; Fry, Chuck; Bell, Scott
2005-01-01
The paper describes an approach to the integration of qualitative and quantitative modeling techniques for advanced life support (ALS) systems. Developing reliable control strategies that scale up to fully integrated life support systems requires augmenting quantitative models and control algorithms with the abstractions provided by qualitative, symbolic models and their associated high-level control strategies. This will allow for effective management of the combinatorics due to the integration of a large number of ALS subsystems. By focusing control actions at different levels of detail and reactivity, we can use faster, simpler responses at the lowest level and predictive but more complex responses at the higher levels of abstraction. In particular, methods from model-based planning and scheduling can provide effective resource management over long time periods. We describe a reference implementation of an advanced control system using the IDEA control architecture developed at NASA Ames Research Center. IDEA uses planning/scheduling as the sole reasoning method for predictive and reactive closed-loop control. We describe preliminary experiments in planner-based control of ALS carried out on an integrated ALS simulation developed at NASA Johnson Space Center.
Potential formulation of sleep dynamics
NASA Astrophysics Data System (ADS)
Phillips, A. J. K.; Robinson, P. A.
2009-02-01
A physiologically based model of the mechanisms that control the human sleep-wake cycle is formulated in terms of an equivalent nonconservative mechanical potential. The potential is analytically simplified and reduced to a quartic two-well potential, matching the bifurcation structure of the original model. This yields a dynamics-based model that is analytically simpler and has fewer parameters than the original model, allowing easier fitting to experimental data. This model is first demonstrated to semiquantitatively match the dynamics of the physiologically based model from which it is derived, and is then fitted directly to a set of experimentally derived criteria. These criteria place rigorous constraints on the parameter values, and within these constraints the model is shown to reproduce normal sleep-wake dynamics and recovery from sleep deprivation. Furthermore, this approach enables insights into the dynamics by direct analogies to phenomena in well studied mechanical systems. These include the relation between friction in the mechanical system and the timecourse of neurotransmitter action, and the possible relation between stochastic resonance and napping behavior. The model derived here also serves as a platform for future investigations of sleep-wake phenomena from a dynamical perspective.
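For illustration, the sketch below integrates overdamped motion in a generic quartic two-well potential with a weak periodic drive standing in for the circadian input; the switching between wells mimics the sleep-wake flip-flop. The potential coefficients and drive parameters are assumptions, not the fitted parameters of the model described here.

```python
import numpy as np

# Overdamped motion in a generic quartic two-well potential V(x) = x^4/4 - x^2/2
# with a periodic drive. Coefficients, drive amplitude and period are assumed
# illustration values, not the fitted sleep-model parameters.
def force(x, t, drive=0.5, period=24.0):
    return -(x ** 3 - x) + drive * np.sin(2 * np.pi * t / period)

dt, steps = 0.01, 100000
x, t = 1.0, 0.0                     # start in the x > 0 ("wake") well
trace = []
for _ in range(steps):
    x += force(x, t) * dt           # simple Euler step of dx/dt = force
    t += dt
    trace.append(x)
print("fraction of time in the x > 0 well:", round(float(np.mean(np.array(trace) > 0)), 3))
```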
Sears, Clinton; Andersson, Zach; Cann, Meredith
2016-01-01
ABSTRACT Background: Supporting the diverse needs of people living with HIV (PLHIV) can help reduce the individual and structural barriers they face in adhering to antiretroviral treatment (ART). The Livelihoods and Food Security Technical Assistance II (LIFT) project sought to improve adherence in Malawi by establishing 2 referral systems linking community-based economic strengthening and livelihoods services to clinical health facilities. One referral system in Balaka district, started in October 2013, connected clients to more than 20 types of services while the other simplified approach in Kasungu and Lilongwe districts, started in July 2014, connected PLHIV attending HIV and nutrition support facilities directly to community savings groups. Methods: From June to July 2015, LIFT visited referral sites in Balaka, Kasungu, and Lilongwe districts to collect qualitative data on referral utility, the perceived association of referrals with client and household health and vulnerability, and the added value of the referral system as perceived by network member providers. We interviewed a random sample of 152 adult clients (60 from Balaka, 57 from Kasungu, and 35 from Lilongwe) who had completed their referral. We also conducted 2 focus group discussions per district with network providers. Findings: Clients in all 3 districts indicated their ability to save money had improved after receiving a referral, although the percentage was higher among clients in the simplified Kasungu and Lilongwe model than the more complex Balaka model (85.6% vs. 56.0%, respectively). Nearly 70% of all clients interviewed had HIV infection; 72.7% of PLHIV in Balaka and 95.7% of PLHIV in Kasungu and Lilongwe credited referrals for helping them stay on their ART. After the referral, 76.0% of clients in Balaka and 92.3% of clients in Kasungu and Lilongwe indicated they would be willing to spend their savings on health costs. The more diverse referral network and use of an mHealth app to manage data in Balaka hindered provider uptake of the system, while the simpler system in Kasungu and Lilongwe, which included only 2 referral options and use of a paper-based referral tool, seemed simpler for the providers to manage. Conclusions: Participation in the referral systems was perceived positively by clients and providers in both models, but more so in Kasungu and Lilongwe where the referral process was simpler. Future referral networks should consider limiting the number of service options included in the network and simplify referral tools to the extent possible to facilitate uptake among network providers. PMID:28031300
Systems Biology Perspectives on Minimal and Simpler Cells
Xavier, Joana C.; Patil, Kiran Raosaheb
2014-01-01
SUMMARY The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. PMID:25184563
[Transoral laser resection for head and neck cancers].
Hartl, Dana
2007-12-01
Transoral laser surgery has become a therapeutic option and even a standard for certain tumors of the larynx and pharynx. The postoperative course after this type of minimally invasive surgery has been shown to be significantly simpler, with less need for temporary tracheotomy and enteral feeding. For selected tumors amenable to this approach, the oncologic results have been shown to be equivalent to those obtained by classic external approaches. Transoral laser surgery requires specific equipment and training of the surgeon, the anaesthesiologist, the operating room team, and the pathologist. Despite these requirements, and because of the simplified postoperative course, transoral laser surgery has already supplanted several external approaches and will in the future probably replace other techniques, as experience with the technique increases and the indications evolve.
Manifold Coal-Slurry Transport System
NASA Technical Reports Server (NTRS)
Liddle, S. G.; Estus, J. M.; Lavin, M. L.
1986-01-01
Feeding several slurry pipes into main pipeline reduces congestion in coal mines. System based on manifold concept: feeder pipelines from each working entry joined to main pipeline that carries coal slurry out of panel and onto surface. Manifold concept makes coal-slurry haulage much simpler than existing slurry systems.
Microprocessor-Based Valved Controller
NASA Technical Reports Server (NTRS)
Norman, Arnold M., Jr.
1987-01-01
New controller simpler, more precise, and lighter than predecessors. Mass-flow controller compensates for changing supply pressure and temperature such as occurs when gas-supply tank becomes depleted. By periodically updating calculation of mass-flow rate, controller determines correct new position for valve and keeps mass-flow rate nearly constant.
Structural simplicity as a restraint on the structure of amorphous silicon
NASA Astrophysics Data System (ADS)
Cliffe, Matthew J.; Bartók, Albert P.; Kerber, Rachel N.; Grey, Clare P.; Csányi, Gábor; Goodwin, Andrew L.
2017-06-01
Understanding the structural origins of the properties of amorphous materials remains one of the most important challenges in structural science. In this study, we demonstrate that local "structural simplicity", embodied by the degree to which atomic environments within a material are similar to each other, is a powerful concept for rationalizing the structure of amorphous silicon (a-Si), a canonical amorphous material. We show, by restraining a reverse Monte Carlo refinement against pair distribution function (PDF) data to be simpler, that the simplest model consistent with the PDF is a continuous random network (CRN). A further effect of producing a simple model of a-Si is the generation of a (pseudo)gap in the electronic density of states, suggesting that structural homogeneity drives electronic homogeneity. That this method produces models of a-Si that approach the state of the art without the need for chemically specific restraints (beyond the assumption of homogeneity) suggests that simplicity-based refinement approaches may allow experiment-driven structural modeling techniques to be developed for the wide variety of amorphous semiconductors with strong local order.
Orita, Toru; Moore, Lee R.; Joshi, Powrnima; Tomita, Masahiro; Horiuchi, Takashi; Zborowski, Maciej
2014-01-01
Quadrupole Magnetic Field-Flow Fractionation (QMgFFF) is a technique for characterization of sub-micrometer magnetic particles based on their retention in the magnetic field from flowing suspensions. Different magnetic field strengths and volumetric flow rates were tested using on-off field application and two commercial nanoparticle preparations that significantly differed in their retention parameter, λ (by nearly 8-fold). The fractograms showed a regular pattern of higher retention (98.6% v. 53.3%) for the larger particle (200 nm v. 90 nm) at the higher flow rate (0.05 mL/min v. 0.01 mL/min) at the highest magnetic field (0.52 T), as expected because of its lower retention parameter. The significance of this approach is a demonstration of a system that is simpler in operation than a programmed field QMgFFF in applications to particle mixtures consisting of two distinct particle fractions. This approach could be useful for detection of unwanted particulate contaminants, especially important in industrial and biomedical applications. PMID:23842422
Quantifying MCMC exploration of phylogenetic tree space.
Whidden, Chris; Matsen, Frederick A
2015-05-01
In order to gain an understanding of the effectiveness of phylogenetic Markov chain Monte Carlo (MCMC), it is important to understand how quickly the empirical distribution of the MCMC converges to the posterior distribution. In this article, we investigate this problem on phylogenetic tree topologies with a metric that is especially well suited to the task: the subtree prune-and-regraft (SPR) metric. This metric directly corresponds to the minimum number of MCMC rearrangements required to move between trees in common phylogenetic MCMC implementations. We develop a novel graph-based approach to analyze tree posteriors and find that the SPR metric is much more informative than simpler metrics that are unrelated to MCMC moves. In doing so, we show conclusively that topological peaks do occur in Bayesian phylogenetic posteriors from real data sets as sampled with standard MCMC approaches, investigate the efficiency of Metropolis-coupled MCMC (MCMCMC) in traversing the valleys between peaks, and show that conditional clade distribution (CCD) can have systematic problems when there are multiple peaks. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
Managing and capturing the physics of robotic systems
NASA Astrophysics Data System (ADS)
Werfel, Justin
Algorithmic and other theoretical analyses of robotic systems often use a discretized or otherwise idealized framework, while the real world is continuous-valued and noisy. This disconnect can make theoretical work sometimes problematic to apply successfully to real-world systems. One approach to bridging the separation can be to design hardware to take advantage of simple physical effects mechanically, in order to guide elements into a desired set of discrete attracting states. As a result, the system behavior can effectively approximate a discretized formalism, so that proofs based on an idealization remain directly relevant, while control can be made simpler. It is important to note, conversely, that such an approach does not make a physical instantiation unnecessary nor a purely theoretical treatment sufficient. Experiments with hardware in practice always reveal physical effects not originally accounted for in simulation or analytic modeling, which lead to unanticipated results and require nontrivial modifications to control algorithms in order to achieve desired outcomes. I will discuss these points in the context of swarm robotic systems recently developed at the Self-Organizing Systems Research Group at Harvard.
Karim, Mohammad Ehsanul; Gustafson, Paul; Petkau, John; Tremlett, Helen
2016-08-15
In time-to-event analyses of observational studies of drug effectiveness, incorrect handling of the period between cohort entry and first treatment exposure during follow-up may result in immortal time bias. This bias can be eliminated by acknowledging a change in treatment exposure status with time-dependent analyses, such as fitting a time-dependent Cox model. The prescription time-distribution matching (PTDM) method has been proposed as a simpler approach for controlling immortal time bias. Using simulation studies and theoretical quantification of bias, we compared the performance of the PTDM approach with that of the time-dependent Cox model in the presence of immortal time. Both assessments revealed that the PTDM approach did not adequately address immortal time bias. Based on our simulation results, another recently proposed observational data analysis technique, the sequential Cox approach, was found to be more useful than the PTDM approach (Cox: bias = -0.002, mean squared error = 0.025; PTDM: bias = -1.411, mean squared error = 2.011). We applied these approaches to investigate the association of β-interferon treatment with delaying disability progression in a multiple sclerosis cohort in British Columbia, Canada (Long-Term Benefits and Adverse Effects of Beta-Interferon for Multiple Sclerosis (BeAMS) Study, 1995-2008). © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
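A toy simulation of the immortal time problem discussed here, under simple exponential assumptions with no true treatment effect: classifying all follow-up of eventually treated subjects as 'exposed' makes treatment look protective, while splitting person-time at the treatment date (the time-dependent handling) recovers a rate ratio near 1. All distributions and parameters are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# No true treatment effect, but subjects must survive long enough to be treated:
# the pre-treatment waiting time is "immortal" for the treated group.
n = 200000
event_time = rng.exponential(5.0, n)          # identical hazard for everyone
rx_time = rng.exponential(2.0, n)             # time of first prescription
treated = rx_time < event_time                # only those who survive long enough get treated

def rate(person_time, events):
    return events / person_time

# Naive (biased): treated subjects contribute ALL their time as "exposed".
naive_exposed = rate(event_time[treated].sum(), treated.sum())
naive_unexposed = rate(event_time[~treated].sum(), (~treated).sum())

# Time-dependent (correct): pre-treatment time counts as unexposed.
exp_pt = (event_time[treated] - rx_time[treated]).sum()
unexp_pt = event_time[~treated].sum() + rx_time[treated].sum()
td_exposed = rate(exp_pt, treated.sum())
td_unexposed = rate(unexp_pt, (~treated).sum())

print("naive rate ratio:", round(naive_exposed / naive_unexposed, 2))
print("time-dependent rate ratio:", round(td_exposed / td_unexposed, 2))
```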
Rotationally Actuated Prosthetic Hand
NASA Technical Reports Server (NTRS)
Norton, William E.; Belcher, Jewell G., Jr.; Carden, James R.; Vest, Thomas W.
1991-01-01
Prosthetic hand attached to end of remaining part of forearm and to upper arm just above elbow. Pincerlike fingers pushed apart to degree depending on rotation of forearm. Simpler in design, simpler to operate, weighs less, and takes up less space.
AFC-Enabled Simplified High-Lift System Integration Study
NASA Technical Reports Server (NTRS)
Hartwich, Peter M.; Dickey, Eric D.; Sclafani, Anthony J.; Camacho, Peter; Gonzales, Antonio B.; Lawson, Edward L.; Mairs, Ron Y.; Shmilovich, Arvin
2014-01-01
The primary objective of this trade study report is to explore the potential of using Active Flow Control (AFC) for achieving lighter and mechanically simpler high-lift systems for transonic commercial transport aircraft. This assessment was conducted in four steps. First, based on the Common Research Model (CRM) outer mold line (OML) definition, two high-lift concepts were developed. One concept, representative of current production-type commercial transonic transports, features leading edge slats and slotted trailing edge flaps with Fowler motion. The other CRM-based design relies on drooped leading edges and simply hinged trailing edge flaps for high-lift generation. The relative high-lift performance of these two high-lift CRM variants is established using Computational Fluid Dynamics (CFD) solutions to the Reynolds-Averaged Navier-Stokes (RANS) equations for steady flow. These CFD assessments identify the high-lift performance that needs to be recovered through AFC for the CRM variant with the lighter and mechanically simpler high-lift system to match the performance of the conventional high-lift system. Conceptual design integration studies for the AFC-enhanced high-lift systems were conducted with a NASA Environmentally Responsible Aircraft (ERA) reference configuration, the so-called ERA-0003 concept. These design trades identify AFC performance targets that need to be met to produce economically feasible ERA-0003-like concepts with lighter and mechanically simpler high-lift designs that match the performance of conventional high-lift systems. Finally, technical challenges associated with the application of AFC-enabled high-lift systems to modern transonic commercial transports are identified for future technology maturation efforts.
Designing Distance Learning Tasks to Help Maximize Vocabulary Development
ERIC Educational Resources Information Center
Loucky, John Paul
2012-01-01
Task-based language learning using the benefits of online computer-assisted language learning (CALL) can be effective for rapid vocabulary expansion, especially when target vocabulary has been pre-arranged into bilingual categories under simpler, common Semantic Field Keywords. Results and satisfaction levels for both Chinese English majors and…
Heuristics and Cognitive Error in Medical Imaging.
Itri, Jason N; Patel, Sohil H
2018-05-01
The field of cognitive science has provided important insights into mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle in the goal to improve quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigate cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce complex tasks of assessing probabilities and predicting values into simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2008-01-01
The Space Shuttle Columbia Accident Investigation Board recommended that NASA develop, validate, and maintain a modeling tool capable of predicting the damage threshold for debris impacts on the Space Shuttle Reinforced Carbon-Carbon (RCC) wing leading edge and nosecap assembly. The results presented in this paper are one part of a multi-level approach that supported the development of the predictive tool used to recertify the shuttle for flight following the Columbia Accident. The assessment of predictive capability was largely based on test analysis comparisons for simpler component structures. This paper provides comparisons of finite element simulations with test data for external tank foam debris impacts onto 6-in. square RCC flat panels. Both quantitative displacement and qualitative damage assessment correlations are provided. The comparisons show good agreement and provided the Space Shuttle Program with confidence in the predictive tool.
Real Gas Computation Using an Energy Relaxation Method and High-Order WENO Schemes
NASA Technical Reports Server (NTRS)
Montarnal, Philippe; Shu, Chi-Wang
1998-01-01
In this paper, we use a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gas. The main idea is an energy decomposition into two parts: one part is associated with a simpler pressure law and the other part (the nonlinear deviation) is convected with the flow. A relaxation process is performed for each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the first part. The algorithm only calls for the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one and two dimensional numerical examples are shown to illustrate the effectiveness of this approach.
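The relaxation step itself is simple; a minimal sketch follows, in which the internal energy is re-split after each update so that a gamma-law applied to the first part reproduces the real-gas pressure. The van der Waals constants and state values are illustrative stand-ins for "the original pressure law," not those of the paper.

```python
import numpy as np

# Hedged sketch of the energy-relaxation idea: the internal energy is split as
# eps = eps1 + eps2, the pressure seen by the solver is the simple gamma-law
# p1 = (gamma1 - 1) * rho * eps1, and after each time step eps1 is reset so
# that p1 matches the real-gas pressure law. The van der Waals law below is
# only an illustrative stand-in for "the original pressure law".

GAMMA1 = 1.4            # gamma of the auxiliary (simpler) pressure law
A, B = 0.5, 1.0e-3      # illustrative van der Waals constants
GAMMA_REAL = 1.4

def p_real(rho, eps):
    """Example real-gas (van der Waals) pressure law p(rho, eps)."""
    return (GAMMA_REAL - 1.0) * (rho * eps + A * rho**2) / (1.0 - B * rho) - A * rho**2

def relax(rho, eps):
    """Re-split eps into (eps1, eps2) so the gamma-law on eps1 gives p_real."""
    p = p_real(rho, eps)
    eps1 = p / ((GAMMA1 - 1.0) * rho)
    eps2 = eps - eps1
    return eps1, eps2

rho, eps = 1.2, 2.5
eps1, eps2 = relax(rho, eps)
print(p_real(rho, eps), (GAMMA1 - 1.0) * rho * eps1)   # the two pressures agree
```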
Enzyme-free nucleic acid dynamical systems.
Srinivas, Niranjan; Parkin, James; Seelig, Georg; Winfree, Erik; Soloveichik, David
2017-12-15
Chemistries exhibiting complex dynamics, from inorganic oscillators to gene regulatory networks, have long been known but either cannot be reprogrammed at will or rely on the sophisticated enzyme chemistry underlying the central dogma. Can simpler molecular mechanisms, designed from scratch, exhibit the same range of behaviors? Abstract chemical reaction networks have been proposed as a programming language for complex dynamics, along with their systematic implementation using short synthetic DNA molecules. We developed this technology for dynamical systems by identifying critical design principles and codifying them into a compiler automating the design process. Using this approach, we built an oscillator containing only DNA components, establishing that Watson-Crick base-pairing interactions alone suffice for complex chemical dynamics and that autonomous molecular systems can be designed via molecular programming languages. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
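As a concrete illustration of the abstraction, the sketch below integrates the mass-action equations of a three-species autocatalytic ("rock-paper-scissors") reaction network, a classic oscillating CRN of the kind such compilers target; the rate constant and initial conditions are illustrative, and this is not claimed to be the exact network used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch: a formal chemical reaction network with oscillatory dynamics,
# written as mass-action ODEs. The three autocatalytic reactions
#   A + B -> 2B,  B + C -> 2C,  C + A -> 2A   (all with rate constant k)
# form a classic "rock-paper-scissors" oscillator; it illustrates the CRN
# abstraction only, not the specific network compiled to DNA in the paper.

k = 1.0

def rhs(t, x):
    a, b, c = x
    return [k * (c * a - a * b),
            k * (a * b - b * c),
            k * (b * c - c * a)]

sol = solve_ivp(rhs, (0.0, 60.0), [1.2, 1.0, 0.8], max_step=0.05)
print(sol.y[0, ::150].round(3))   # concentration of A cycles up and down
```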
Optical Control of a Nuclear Spin in Diamond
NASA Astrophysics Data System (ADS)
Levonian, David; Goldman, Michael; Degreve, Kristiaan; Choi, Soonwon; Markham, Matthew; Twitchen, Daniel; Lukin, Mikhail
2017-04-01
The nitrogen-vacancy (NV) center in diamond has emerged as a promising candidate for quantum information and quantum communication applications. The NV center's potential as a quantum register is due to the long coherence time of its spin-triplet electronic ground state, the optical addressability of its electronic transitions, and the presence of nearby ancillary nuclear spins. The NV center's electronic spin and nearby nuclear spins are most commonly manipulated using applied microwave and RF fields, but this approach would be difficult to scale up for use with an array of NV-based quantum registers. In this context, all-optical manipulation would be more scalable, technically simpler, and potentially faster. Although all-optical control of the electronic spin has been demonstrated, it is an outstanding problem for the nuclear spins. Here, we use an optical Raman scheme to implement nuclear spin-specific control of the electronic spin and coherent control of the 14N nuclear spin.
CRISPR-Cas9; an efficient tool for precise plant genome editing.
Islam, Waqar
2018-06-01
Efficient plant genome editing depends upon the induction of double-stranded DNA breaks (DSBs) by site-specific nucleases. These DSBs initiate DNA repair, which can proceed either through homologous recombination (HR) or through non-homologous end joining (NHEJ). Recently, the CRISPR-Cas9 mechanism has been highlighted as a revolutionary genetic tool owing to its simpler framework and its broad range of adaptability and applications. In this review, I sum up the application of this biotechnological tool in plant genome editing. Furthermore, I explain the successful adaptation of CRISPR in various plant species, where it has been used to generate stable mutations in a steadily growing number of species through NHEJ. The review also sheds light upon other biotechnological approaches relying upon induction of single DNA lesions, such as genomic deletions or paired nickases for evasion of off-target effects. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Wen, John T.; Kreutz, Kenneth; Bayard, David S.
1988-01-01
A class of joint-level control laws for all-revolute robot arms is introduced. The analysis is similar to the recently proposed energy Liapunov function approach except that the closed-loop potential function is shaped in accordance with the underlying joint space topology. By using energy Liapunov functions with the modified potential energy, a much simpler analysis can be used to show closed-loop global asymptotic stability and local exponential stability. When Coulomb and viscous friction and model parameter errors are present, a sliding-mode-like modification of the control law is proposed to add a robustness-enhancing outer loop. Adaptive control is also addressed within the same framework. A linear-in-the-parameters formulation is adopted, and globally asymptotically stable adaptive control laws are derived by replacing the model parameters in the nonadaptive control laws by their estimates.
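A minimal sketch of the flavor of such a control law follows, in which the position-error (potential) term is shaped with sin(·) to respect the circular topology of revolute joints, with joint-rate damping and gravity compensation added; the gains and the two-link gravity model are illustrative, not the authors' exact formulation.

```python
import numpy as np

# Hedged sketch: a joint-level control law for an all-revolute arm in which the
# potential-energy (position-error) term is shaped with sin(.) so that it
# respects the circular topology of revolute joints, plus joint-rate damping
# and gravity compensation. Gains and the gravity model are illustrative.

def control(q, qdot, q_des, gravity, Kp, Kd):
    """tau = -Kp*sin(q - q_des) - Kd*qdot + g(q)."""
    return -Kp @ np.sin(q - q_des) - Kd @ qdot + gravity(q)

# toy two-link example
Kp = np.diag([20.0, 12.0])
Kd = np.diag([4.0, 2.5])
g = lambda q: np.array([9.81 * 1.0 * np.cos(q[0]),
                        9.81 * 0.5 * np.cos(q[0] + q[1])])
tau = control(np.array([0.3, -0.2]), np.zeros(2), np.array([1.0, 0.5]), g, Kp, Kd)
print(tau)
```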
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, R.W.; Phillips, A.M.
1990-02-01
Low-permeability reservoirs are currently being propped with sand, resin-coated sand, intermediate-density proppants, and bauxite. This wide range of proppant cost and performance has resulted in the proliferation of proppant selection models. Initially, a rather vague relationship between well depth and proppant strength dictated the choice of proppant. More recently, computerized models of varying complexity that use net-present-value (NPV) calculations have become available. The input is based on the operator's performance goals for each well and specific reservoir properties. Simpler, noncomputerized approaches include cost/performance comparisons and nomographs. Each type of model, including several of the computerized models, is examined here. By use of these models and NPV calculations, optimum fracturing treatment designs have been developed for such low-permeability reservoirs as the Prue in Oklahoma. Typical well conditions are used in each of the selection models, and the results are compared.
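A minimal sketch of the NPV-based comparison follows; the treatment costs, incremental revenue forecasts and discount rate are purely illustrative.

```python
# Hedged sketch: net-present-value comparison of candidate proppants, using
# illustrative incremental-revenue forecasts and treatment costs (not field data).

def npv(cash_flows, discount_rate):
    """Discount a list of yearly cash flows (year 1, 2, ...) to present value."""
    return sum(cf / (1.0 + discount_rate) ** (t + 1) for t, cf in enumerate(cash_flows))

candidates = {
    # proppant: (treatment cost, incremental yearly revenue from extra conductivity)
    "sand":              (120_000, [90_000, 70_000, 55_000, 45_000]),
    "resin-coated sand": (180_000, [120_000, 95_000, 75_000, 60_000]),
    "bauxite":           (300_000, [150_000, 120_000, 95_000, 75_000]),
}

rate = 0.10
for name, (cost, revenues) in candidates.items():
    print(f"{name:18s} NPV = {npv(revenues, rate) - cost:12,.0f}")
```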
History of Artificial Gravity. Chapter 3
NASA Technical Reports Server (NTRS)
Clement, Gilles; Bukley, Angie; Paloski, William
2006-01-01
This chapter reviews past and current projects on artificial gravity during space missions. The idea of a rotating wheel-like space station providing artificial gravity goes back to the writings of Tsiolkovsky, Noordung, and Wernher von Braun. Its most famous fictional representation is in the film 2001: A Space Odyssey, which also depicts spin-generated artificial gravity aboard a space station and a spaceship bound for Jupiter. The O'Neill-type space colony provides another classic illustration of this technique. A more realistic approach than rotating the entire space station is to provide astronauts with a smaller centrifuge contained within a spacecraft. The astronauts would go into it for a workout and receive their therapeutic dose of gravity for a certain period of time, daily or a few times a week. This simpler concept is currently being tested in ground-based studies in several laboratories around the world.
EoR imaging with the SKA: the challenge of foreground removal
NASA Astrophysics Data System (ADS)
Bonaldi, Anna
2018-05-01
21-cm observations of the Cosmic Dawn (CD) and Epoch of Reionization (EoR) are one of the high-priority science objectives for SKA Low. One of the most difficult aspects of the 21-cm measurement is the presence of foreground emission, due to our Galaxy and extragalactic sources, which is about four orders of magnitude brighter than the cosmological signal. While end-to-end simulations are being produced to investigate the foreground subtraction strategy in detail, it is useful to complement this thorough but time-consuming approach with simpler, quicker ways to evaluate performance and identify possible critical steps. In this work, I present a forecast method, based on Bonaldi et al. (2015) and Bonaldi & Ricciardi (2011), to understand the level of residual contamination after a component separation step, and its impact on our ability to investigate the CD and EoR.
Mixed-state fidelity susceptibility through iterated commutator series expansion
NASA Astrophysics Data System (ADS)
Tonchev, N. S.
2014-11-01
We present a perturbative approach to the problem of computation of mixed-state fidelity susceptibility (MFS) for thermal states. The mathematical techniques used provide an analytical expression for the MFS as a formal expansion in terms of the thermodynamic mean values of successively higher commutators of the Hamiltonian with the operator involved through the control parameter. That expression is naturally divided into two parts: the usual isothermal susceptibility and a constituent in the form of an infinite series of thermodynamic mean values which encodes the noncommutativity in the problem. If the symmetry properties of the Hamiltonian are given in terms of the generators of some (finite-dimensional) algebra, the obtained expansion may be evaluated in a closed form. This issue is tested on several popular models, for which it is shown that the calculations are much simpler if they are based on the properties from the representation theory of the Heisenberg or SU(1, 1) Lie algebra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saurav, Kumar; Chandan, Vikas
District-heating-and-cooling (DHC) systems are a proven energy solution that has been deployed for many years in a growing number of urban areas worldwide. They comprise a variety of technologies that seek to develop synergies between the production and supply of heat, cooling, domestic hot water and electricity. Although the benefits of DHC systems are significant and have been widely acclaimed, the full potential of modern DHC systems remains largely untapped. There are several opportunities for the development of energy-efficient DHC systems, which will enable the effective exploitation of alternative renewable resources, waste heat recovery, etc., in order to increase the overall efficiency and facilitate the transition towards the next generation of DHC systems. This motivates the need for modelling these complex systems. Large-scale modelling of DHC networks is challenging, as such a network has several interacting components, such as buildings, pipes, valves, and heating sources. In this paper, we focus on building modelling. In particular, we present a gray-box methodology for thermal modelling of buildings. Gray-box modelling is a hybrid of data-driven and physics-based modelling in which the coefficients of the equations from physics-based models are learned from data. This approach allows us to capture the dynamics of the buildings more effectively than a purely data-driven approach. Additionally, it results in simpler models than purely physics-based models. We first develop models of the individual components of the building, such as temperature evolution and the flow controller. These individual models are then integrated into the complete gray-box model of the building. The model is validated using data collected from one of the buildings at Luleå, a city on the coast of northern Sweden.
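A minimal sketch of the gray-box idea follows: a first-order resistance-capacitance (RC) building model is discretized and its physical coefficients are estimated from (here synthetic) measurements by ordinary least squares; variable names and values are illustrative.

```python
import numpy as np

# Hedged sketch of gray-box thermal modelling: a first-order RC building model
#   C * dT/dt = (T_out - T) / R + Q_heat
# is discretized and its physical coefficients (1/(R*C), 1/C) are learned from
# measured data with ordinary least squares. Data below are synthetic.

rng = np.random.default_rng(0)
dt = 0.25                                   # hours
n = 400
T_out = 5.0 * np.sin(np.arange(n) * dt * 2 * np.pi / 24)   # outdoor temperature (C)
Q = 2.0 + rng.uniform(0, 1, n)              # heating power (kW), illustrative
R_true, C_true = 5.0, 10.0                  # K/kW and kWh/K, illustrative

T = np.empty(n); T[0] = 20.0
for k in range(n - 1):                      # simulate "measurements"
    T[k + 1] = T[k] + dt * ((T_out[k] - T[k]) / R_true + Q[k]) / C_true
T += rng.normal(0, 0.02, n)                 # sensor noise

# regression: (T[k+1]-T[k])/dt = a*(T_out[k]-T[k]) + b*Q[k],  a=1/(R*C), b=1/C
y = (T[1:] - T[:-1]) / dt
X = np.column_stack([(T_out[:-1] - T[:-1]), Q[:-1]])
a, b = np.linalg.lstsq(X, y, rcond=None)[0]
print("estimated C =", 1 / b, "estimated R =", b / a)
```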
Liu, Chuang; Liu, Yi; Li, Zhiguo; Zhang, Guoshi; Chen, Fang
2017-04-24
A simpler approach for establishing fertilizer recommendations for major crops is urgently required to improve the application efficiency of commercial fertilizers in China. To address this need, we developed a method based on field data drawn from the China Program of the International Plant Nutrition Institute (IPNI) rice experiments and investigations carried out in southeastern China during 2001 to 2012. Our results show that, using agronomic efficiencies and a sustainable yield index (SYI), this new method for establishing fertilizer recommendations robustly estimated the mean rice yield (7.6 t/ha) and mean nutrient supply capacities (186, 60, and 96 kg/ha of N, P 2 O 5 , and K 2 O, respectively) of fertilizers in the study region. In addition, there were significant differences in rice yield response, economic cost/benefit ratio, and nutrient-use efficiencies associated with agronomic efficiencies ranked as high, medium and low. Thus, ranking agronomic efficiency could strengthen linear models relating rice yields and SYI. Our results also indicate that the new method provides better recommendations in terms of rice yield, SYI, and profitability than previous methods. Hence, we believe it is an effective approach for improving recommended applications of commercial fertilizers to rice (and potentially other crops).
Statistical primer: how to deal with missing data in scientific research?
Papageorgiou, Grigorios; Grant, Stuart W; Takkenberg, Johanna J M; Mokhles, Mostafa M
2018-05-10
Missing data are a common challenge encountered in research which can compromise the results of statistical inference when not handled appropriately. This paper aims to introduce basic concepts of missing data to a non-statistical audience, list and compare some of the most popular approaches for handling missing data in practice and provide guidelines and recommendations for dealing with and reporting missing data in scientific research. Complete case analysis and single imputation are simple approaches for handling missing data and are popular in practice; however, in most cases they are not guaranteed to provide valid inferences. Multiple imputation is a robust and general alternative which is appropriate for data missing at random, surpassing the disadvantages of the simpler approaches, but should always be conducted with care. The aforementioned approaches are illustrated and compared in an example application using Cox regression.
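A minimal sketch of the multiple-imputation workflow follows, assuming scikit-learn's IterativeImputer run several times with posterior sampling and different seeds to produce multiple completed datasets whose estimates are then pooled (Rubin's rules would be used for standard errors); the data are synthetic.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hedged sketch: multiple imputation by chained equations, approximated here by
# running IterativeImputer several times with sample_posterior=True and
# different seeds, then pooling the per-imputation estimates. Data are synthetic.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
X[:, 2] += 0.5 * X[:, 0]                 # some association between columns
mask = rng.uniform(size=X.shape) < 0.15  # ~15% of values missing at random
X_miss = X.copy(); X_miss[mask] = np.nan

estimates = []
for seed in range(5):                    # 5 imputed datasets
    imp = IterativeImputer(sample_posterior=True, random_state=seed)
    X_imp = imp.fit_transform(X_miss)
    estimates.append(X_imp[:, 2].mean()) # toy analysis: mean of column 3
print("pooled estimate:", np.mean(estimates))
```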
2002-05-01
Antiretroviral research presented recently at the 9th Conference on Retroviruses and Opportunistic Infections demonstrates that investigators and pharmaceutical companies continue to strive for the next highly potent and easily tolerated anti-HIV drug. Among the new approaches are entry inhibitor drugs and second-generation non-nucleoside reverse transcriptase inhibitors. New studies also looked into potency against multidrug-resistant virus and medication regimens that are simpler to take and have fewer side effects.
MODELING MICROBUBBLE DYNAMICS IN BIOMEDICAL APPLICATIONS*
CHAHINE, Georges L.; HSIAO, Chao-Tsung
2012-01-01
Controlling microbubble dynamics to produce desirable biomedical outcomes when and where necessary and avoid deleterious effects requires advanced knowledge, which can be achieved only through a combination of experimental and numerical/analytical techniques. The present communication presents a multi-physics approach to study the dynamics combining viscous- in-viscid effects, liquid and structure dynamics, and multi bubble interaction. While complex numerical tools are developed and used, the study aims at identifying the key parameters influencing the dynamics, which need to be included in simpler models. PMID:22833696
Calculus domains modelled using an original bool algebra based on polygons
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2016-08-01
Analytical and numerical computer-based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain based on a Boolean algebra which uses solid and hollow polygons. The general calculus relations of the geometrical characteristics that are widely used in mechanical engineering are tested using several shapes of the calculus domain in order to draw conclusions regarding the most effective methods to discretize the domain. The paper also tests the results of several commercial CAD software applications which are able to compute the geometrical characteristics, and interesting conclusions are drawn. The tests also targeted the accuracy of the results versus the number of nodes on the curved boundary of the cross section. The study required the development of original software consisting of more than 1700 lines of computer code. In comparison with other calculus methods, discretization using convex polygons is a simpler approach. Moreover, this method does not lead to the very large numbers produced by the spline approximation, which required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer-based models in engineering.
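As an illustration of the polygon-based Boolean approach (using the shapely library as a stand-in for the original software described in the paper), the sketch below composes a hollow cross-section by subtracting a hole from a solid polygon and reads off basic geometrical characteristics.

```python
from shapely.geometry import Polygon

# Hedged sketch: composing a calculus domain from "solid" and "hollow" polygons
# with Boolean operations, then reading off basic geometrical characteristics.
# shapely is used here only for illustration; the paper describes an original
# implementation.

solid = Polygon([(0, 0), (100, 0), (100, 60), (0, 60)])     # outer rectangle, mm
hollow = Polygon([(20, 15), (80, 15), (80, 45), (20, 45)])  # rectangular hole

section = solid.difference(hollow)      # Boolean subtraction: solid minus hollow
print("area      =", section.area)      # mm^2
print("centroid  =", list(section.centroid.coords))
print("perimeter =", section.length)
```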
User Centric Job Monitoring - a redesign and novel approach in the STAR experiment
NASA Astrophysics Data System (ADS)
Arkhipkin, D.; Lauret, J.; Zulkarneeva, Y.
2014-06-01
User Centric Monitoring (or UCM) has been a long-awaited feature in STAR, whereby programs, workflows and system "events" can be logged, broadcast and later analyzed. UCM allows available job monitoring information from various resources to be collected and filtered, and presents it to users from a user-centric rather than an administrative-centric point of view. The first attempt at and implementation of a UCM approach was made in STAR in 2004 using a log4cxx plug-in back-end; it then evolved with an attempt to move toward a scalable database back-end (2006) and finally a Web-Service approach (2010, CSW4DB SBIR). The latter proved to be incomplete and did not address the evolving needs of the experiment, where streamlined messages for online (data acquisition) purposes as well as continuous support for data mining and event analysis need to coexist and be unified in a seamless approach. The code also proved to be hard to maintain. This paper presents the next evolutionary step of the UCM toolkit: a redesign and redirection of our latest attempt, acknowledging and integrating recent technologies in a simpler, maintainable and yet scalable manner. The extended version of the job logging package is built upon a three-tier approach based on Task, Job and Event, and features a Web-Service based logging API, a responsive AJAX-powered user interface, and a database back-end relying on MongoDB, which is uniquely suited to STAR's needs. In addition, we present details of the integration of this logging package with the STAR offline and online software frameworks. Leveraging the reported experience of the ATLAS and CMS experiments with the ESPER engine, we discuss and show how such an approach has been implemented in STAR for meta-data event triggering, stream processing and filtering. An ESPER-based solution seems to fit well into the online data acquisition system, where many systems are monitored.
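A minimal sketch of the Task/Job/Event logging tier follows, assuming the pymongo driver; the database, collection and field names are illustrative, not the actual STAR schema.

```python
from datetime import datetime, timezone
from pymongo import MongoClient, ASCENDING

# Hedged sketch: inserting a Task/Job/Event-structured log record into MongoDB,
# in the spirit of the three-tier UCM design described above. Database,
# collection and field names are illustrative, not the actual STAR schema.

client = MongoClient("mongodb://localhost:27017")
events = client["ucm"]["events"]
events.create_index([("task_id", ASCENDING), ("job_id", ASCENDING)])

events.insert_one({
    "task_id": "reco-2014-run14",
    "job_id": 12345,
    "level": "INFO",
    "stage": "track-reconstruction",
    "message": "finished event batch",
    "ts": datetime.now(timezone.utc),
})

# user-centric view: all messages for one job, newest first
for doc in events.find({"job_id": 12345}).sort("ts", -1):
    print(doc["ts"], doc["level"], doc["message"])
```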
Hatt, Mathieu; Lee, John A.; Schmidtlein, Charles R.; Naqa, Issam El; Caldwell, Curtis; De Bernardi, Elisabetta; Lu, Wei; Das, Shiva; Geets, Xavier; Gregoire, Vincent; Jeraj, Robert; MacManus, Michael P.; Mawlawi, Osama R.; Nestle, Ursula; Pugachev, Andrei B.; Schöder, Heiko; Shepherd, Tony; Spezi, Emiliano; Visvikis, Dimitris; Zaidi, Habib; Kirov, Assen S.
2017-01-01
Purpose The purpose of this educational report is to provide an overview of the present state-of-the-art PET auto-segmentation (PET-AS) algorithms and their respective validation, with an emphasis on providing the user with help in understanding the challenges and pitfalls associated with selecting and implementing a PET-AS algorithm for a particular application. Approach A brief description of the different types of PET-AS algorithms is provided using a classification based on method complexity and type. The advantages and the limitations of the current PET-AS algorithms are highlighted based on current publications and existing comparison studies. A review of the available image datasets and contour evaluation metrics in terms of their applicability for establishing a standardized evaluation of PET-AS algorithms is provided. The performance requirements for the algorithms and their dependence on the application, the radiotracer used and the evaluation criteria are described and discussed. Finally, a procedure for algorithm acceptance and implementation, as well as the complementary role of manual and auto-segmentation are addressed. Findings A large number of PET-AS algorithms have been developed within the last 20 years. Many of the proposed algorithms are based on either fixed or adaptively selected thresholds. More recently, numerous papers have proposed the use of more advanced image analysis paradigms to perform semi-automated delineation of the PET images. However, the level of algorithm validation is variable and for most published algorithms is either insufficient or inconsistent which prevents recommending a single algorithm. This is compounded by the fact that realistic image configurations with low signal-to-noise ratios (SNR) and heterogeneous tracer distributions have rarely been used. Large variations in the evaluation methods used in the literature point to the need for a standardized evaluation protocol. Conclusions Available comparison studies suggest that PET-AS algorithms relying on advanced image analysis paradigms provide generally more accurate segmentation than approaches based on PET activity thresholds, particularly for realistic configurations. However, this may not be the case for simple shape lesions in situations with a narrower range of parameters, where simpler methods may also perform well. Recent algorithms which employ some type of consensus or automatic selection between several PET-AS methods have potential to overcome the limitations of the individual methods when appropriately trained. In either case, accuracy evaluation is required for each different PET scanner and scanning and image reconstruction protocol. For the simpler, less robust approaches, adaptation to scanning conditions, tumor type, and tumor location by optimization of parameters is necessary. The results from the method evaluation stage can be used to estimate the contouring uncertainty. All PET-AS contours should be critically verified by a physician. A standard test, i.e., a benchmark dedicated to evaluating both existing and future PET-AS algorithms needs to be designed, to aid clinicians in evaluating and selecting PET-AS algorithms and to establish performance limits for their acceptance for clinical use. The initial steps toward designing and building such a standard are undertaken by the task group members. PMID:28120467
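As a concrete example of the simplest PET-AS class discussed above, the sketch below applies a fixed percentage-of-maximum threshold to a synthetic uptake image; 40% of the maximum is a common choice, but as the report stresses, such parameters must be adapted to scanner, protocol, tumor type and location.

```python
import numpy as np

# Hedged sketch of the simplest PET-AS class discussed above: a fixed threshold
# at some percentage of the maximum uptake. The "image" below is synthetic.

def threshold_segment(suv, fraction=0.40):
    """Binary lesion mask: voxels above fraction * max uptake."""
    return suv >= fraction * suv.max()

# synthetic 2D slice: a blurred hot sphere on a noisy background
y, x = np.mgrid[0:64, 0:64]
lesion = 8.0 * np.exp(-(((x - 32) ** 2 + (y - 32) ** 2) / (2 * 5.0 ** 2)))
suv = 1.0 + lesion + np.random.default_rng(0).normal(0, 0.2, lesion.shape)

mask = threshold_segment(suv, 0.40)
print("segmented voxels:", int(mask.sum()))
```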
Dispositional anger and the resolution of the approach-avoidance conflict.
Robinson, Michael D; Boyd, Ryan L; Persich, Michelle R
2016-09-01
The approach-avoidance conflict is one in which approaching reward brings increased threat while avoiding threat means forgoing reward. This conflict can be uniquely informative because it will be resolved in different ways depending on whether approach (toward) or avoidance (away from) is the stronger motive. Two studies (total N = 191) created a computerized version of this conflict and used the test to examine questions of motivational direction in anger. In Study 1, noise blast provocations increased the frequency of approach behaviors at high levels of trait anger, but decreased their frequency at low levels. In Study 2, a simpler version of the conflict test was used to predict anger in daily life. As hypothesized, greater approach frequencies in the test predicted greater anger reactivity to daily provocations and frustrations. The discussion focuses on the utility of the approach-avoidance conflict test and on questions of motivational direction in anger. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation
2004-12-01
...area and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image-Based Rendering. ... 1.1.1 Camera Calibration: Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal) ... can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the ...
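A minimal sketch of intrinsic camera calibration from chessboard views follows, assuming OpenCV; the image paths and board size are illustrative.

```python
import glob
import cv2
import numpy as np

# Hedged sketch: recovering the intrinsic camera matrix and lens-distortion
# coefficients from several chessboard images with OpenCV, the usual first step
# before structure-from-motion. Image paths and board size are illustrative.

board = (9, 6)                                   # inner corners per row/column
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("intrinsic matrix K:\n", K)
```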
Experiences of building a medical data acquisition system based on two-level modeling.
Li, Bei; Li, Jianbin; Lan, Xiaoyun; An, Ying; Gao, Wuqiang; Jiang, Yuqiao
2018-04-01
Compared to traditional software development strategies, the two-level modeling approach is more flexible and more applicable for building an information system in the medical domain. However, the standards of two-level modeling such as openEHR appear complex to medical professionals. This study aims to investigate, implement, and improve the two-level modeling approach, and discusses the experience of building a unified data acquisition system for four affiliated university hospitals based on this approach. After the investigation, we simplified the approach of archetype modeling and developed a medical data acquisition system where medical experts can define the metadata for their own specialties by using a visual, easy-to-use tool. The medical data acquisition system for multiple centers, clinical specialties, and diseases has been developed, and integrates the functions of metadata modeling, form design, and data acquisition. To date, 93,353 data items and 6,017 categories for 285 specific diseases have been created by medical experts, and information on over 25,000 patients has been collected. OpenEHR is an advanced two-level modeling method for medical data, but its idea of separating domain knowledge from technical concerns is not easy to realize. Moreover, it is difficult to reach an agreement on archetype definition. Therefore, we adopted simpler metadata modeling, and employed What-You-See-Is-What-You-Get (WYSIWYG) tools to further improve the usability of the system. Compared with archetype definition, our approach lowers the difficulty. Nevertheless, to build such a system, every participant should have some knowledge of both the medicine and information technology domains, as such interdisciplinary talent is necessary. Copyright © 2018 Elsevier B.V. All rights reserved.
Supramolecular Based Membrane Sensors
Ganjali, Mohammad Reza; Norouzi, Parviz; Rezapour, Morteza; Faridbod, Farnoush; Pourjavid, Mohammad Reza
2006-01-01
Supramolecular chemistry can be defined as a field of chemistry, which studies the complex multi-molecular species formed from molecular components that have relatively simpler structures. This field has been subject to extensive research over the past four decades. This review discusses classification of supramolecules and their application in design and construction of ion selective sensors.
NASA Astrophysics Data System (ADS)
Shen, C.; Fang, K.
2017-12-01
Deep Learning (DL) methods have made revolutionary strides in recent years. A core value proposition of DL is that abstract notions and patterns can be extracted purely from data, without the need for domain expertise. Process-based models (PBM), on the other hand, can be regarded as repositories of human knowledge or hypotheses about how systems function. Here, through computational examples, we argue that there is merit in integrating PBMs with DL due to the imbalance and lack of data in many situations, especially in hydrology. We trained a deep-in-time neural network, the Long Short-Term Memory (LSTM), to learn soil moisture dynamics from Soil Moisture Active Passive (SMAP) Level 3 product. We show that when PBM solutions are integrated into LSTM, the network is able to better generalize across regions. LSTM is able to better utilize PBM solutions than simpler statistical methods. Our results suggest PBMs have generalization value which should be carefully assessed and utilized. We also emphasize that when properly regularized, the deep network is robust and is of superior testing performance compared to simpler methods.
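A minimal sketch of such a network follows, assuming PyTorch: a small LSTM maps a window of forcing variables (optionally with the process-based-model output appended as an extra input channel) to soil moisture; shapes, inputs and training details are illustrative, not the study's actual configuration.

```python
import torch
import torch.nn as nn

# Hedged sketch: a small LSTM that maps a window of forcing variables (with the
# process-based-model output optionally appended as an extra input channel) to
# soil moisture. Shapes and training details are illustrative, not the actual
# SMAP/LSTM configuration used in the study.

class SoilMoistureLSTM(nn.Module):
    def __init__(self, n_inputs, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_inputs)
        out, _ = self.lstm(x)
        return self.head(out)             # soil moisture at every time step

model = SoilMoistureLSTM(n_inputs=6)      # e.g. precip, temp, rad, ... (+ PBM output)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(32, 90, 6)                # 32 pixels/basins, 90-day windows
y = torch.rand(32, 90, 1)                 # synthetic target soil moisture
for _ in range(5):                        # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("training loss:", float(loss))
```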
Modeling, simulation, and analysis of optical remote sensing systems
NASA Technical Reports Server (NTRS)
Kerekes, John Paul; Landgrebe, David A.
1989-01-01
Remote Sensing of the Earth's resources from space-based sensors has evolved in the past 20 years from a scientific experiment to a commonly used technological tool. The scientific applications and engineering aspects of remote sensing systems have been studied extensively. However, most of these studies have been aimed at understanding individual aspects of the remote sensing process while relatively few have studied their interrelations. A motivation for studying these interrelationships has arisen with the advent of highly sophisticated configurable sensors as part of the Earth Observing System (EOS) proposed by NASA for the 1990's. Two approaches to investigating remote sensing systems are developed. In one approach, detailed models of the scene, the sensor, and the processing aspects of the system are implemented in a discrete simulation. This approach is useful in creating simulated images with desired characteristics for use in sensor or processing algorithm development. A less complete, but computationally simpler method based on a parametric model of the system is also developed. In this analytical model the various informational classes are parameterized by their spectral mean vector and covariance matrix. These class statistics are modified by models for the atmosphere, the sensor, and processing algorithms and an estimate made of the resulting classification accuracy among the informational classes. Application of these models is made to the study of the proposed High Resolution Imaging Spectrometer (HRIS). The interrelationships among observational conditions, sensor effects, and processing choices are investigated with several interesting results.
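A minimal sketch of the parametric idea follows: each informational class is represented only by its mean vector and covariance matrix, and classification accuracy under a Gaussian maximum-likelihood rule is estimated by simulation; the two classes below are synthetic, and sensor or atmospheric effects would enter by modifying these class statistics.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hedged sketch of the parametric idea: each informational class is described
# only by its spectral mean vector and covariance matrix, and classification
# accuracy under a Gaussian maximum-likelihood rule is estimated by sampling.
# The two classes below are synthetic.

classes = {
    "vegetation": (np.array([0.10, 0.45]), np.array([[0.002, 0.001], [0.001, 0.004]])),
    "soil":       (np.array([0.25, 0.30]), np.array([[0.004, 0.002], [0.002, 0.003]])),
}

rng = np.random.default_rng(0)
correct = total = 0
for name, (mu, cov) in classes.items():
    samples = rng.multivariate_normal(mu, cov, size=2000)
    ll = np.column_stack([multivariate_normal(m, c).logpdf(samples)
                          for m, c in classes.values()])
    predicted = np.array(list(classes))[ll.argmax(axis=1)]
    correct += (predicted == name).sum()
    total += len(samples)
print("estimated overall accuracy:", correct / total)
```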
NASA Astrophysics Data System (ADS)
Rheinheimer, David E.; Bales, Roger C.; Oroza, Carlos A.; Lund, Jay R.; Viers, Joshua H.
2016-05-01
We assessed the potential value of hydrologic forecasting improvements for a snow-dominated high-elevation hydropower system in the Sierra Nevada of California, using a hydropower optimization model. To mimic different forecasting skill levels for inflow time series, rest-of-year inflows from regression-based forecasts were blended in different proportions with representative inflows from a spatially distributed hydrologic model. The statistical approach mimics the simpler, historical forecasting approach that is still widely used. Revenue was calculated using historical electricity prices, with perfect price foresight assumed. With current infrastructure and operations, perfect hydrologic forecasts increased annual hydropower revenue by $0.14 to $1.6 million, with lower values in dry years and higher values in wet years, or about $0.8 million (1.2%) on average, representing overall willingness-to-pay for perfect information. A second sensitivity analysis found a wider range of annual revenue gain or loss using different skill levels in snow measurement in the regression-based forecast, mimicking expected declines in skill as the climate warms and historical snow measurements no longer represent current conditions. The value of perfect forecasts was insensitive to storage capacity for small and large reservoirs, relative to average inflow, and modestly sensitive to storage capacity with medium (current) reservoir storage. The value of forecasts was highly sensitive to powerhouse capacity, particularly for the range of capacities in the northern Sierra Nevada. The approach can be extended to multireservoir, multipurpose systems to help guide investments in forecasting.
Systems biology perspectives on minimal and simpler cells.
Xavier, Joana C; Patil, Kiran Raosaheb; Rocha, Isabel
2014-09-01
The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
Ren, Qiang; Nagar, Jogender; Kang, Lei; Bian, Yusheng; Werner, Ping; Werner, Douglas H
2017-05-18
A highly efficient numerical approach for simulating the wideband optical response of nano-architectures comprised of Drude-Critical Points (DCP) media (e.g., gold and silver) is proposed and validated through comparing with commercial computational software. The kernel of this algorithm is the subdomain level discontinuous Galerkin time domain (DGTD) method, which can be viewed as a hybrid of the spectral-element time-domain method (SETD) and the finite-element time-domain (FETD) method. An hp-refinement technique is applied to decrease the Degrees-of-Freedom (DoFs) and computational requirements. The collocated E-J scheme facilitates solving the auxiliary equations by converting the inversions of matrices to simpler vector manipulations. A new hybrid time stepping approach, which couples the Runge-Kutta and Newmark methods, is proposed to solve the temporal auxiliary differential equations (ADEs) with a high degree of efficiency. The advantages of this new approach, in terms of computational resource overhead and accuracy, are validated through comparison with well-known commercial software for three diverse cases, which cover both near-field and far-field properties with plane wave and lumped port sources. The presented work provides the missing link between DCP dispersive models and FETD and/or SETD based algorithms. It is a competitive candidate for numerically studying the wideband plasmonic properties of DCP media.
Skipping the real world: Classification of PolSAR images without explicit feature extraction
NASA Astrophysics Data System (ADS)
Hänsch, Ronny; Hellwich, Olaf
2018-06-01
The typical processing chain for pixel-wise classification from PolSAR images starts with an optional preprocessing step (e.g. speckle reduction), continues with extracting features projecting the complex-valued data into the real domain (e.g. by polarimetric decompositions) which are then used as input for a machine-learning based classifier, and ends in an optional postprocessing (e.g. label smoothing). The extracted features are usually hand-crafted as well as preselected and represent (a somewhat arbitrary) projection from the complex to the real domain in order to fit the requirements of standard machine-learning approaches such as Support Vector Machines or Artificial Neural Networks. This paper proposes to adapt the internal node tests of Random Forests to work directly on the complex-valued PolSAR data, which makes any explicit feature extraction obsolete. This approach leads to a classification framework with a significantly decreased computation time and memory footprint since no image features have to be computed and stored beforehand. The experimental results on one fully-polarimetric and one dual-polarimetric dataset show that, despite the simpler approach, accuracy can be maintained (decreased by only less than 2 % for the fully-polarimetric dataset) or even improved (increased by roughly 9 % for the dual-polarimetric dataset).
Mass balance modelling of contaminants in river basins: a flexible matrix approach.
Warren, Christopher; Mackay, Don; Whelan, Mick; Fox, Kay
2005-12-01
A novel and flexible approach is described for simulating the behaviour of chemicals in river basins. A number (n) of river reaches are defined and their connectivity is described by entries in an n x n matrix. Changes in segmentation can be readily accommodated by altering the matrix entries, without the need for model revision. Two models are described. The simpler QMX-R model only considers advection and an overall loss due to the combined processes of volatilization, net transfer to sediment and degradation. The rate constant for the overall loss is derived from fugacity calculations for a single segment system. The more rigorous QMX-F model performs fugacity calculations for each segment and explicitly includes the processes of advection, evaporation, water-sediment exchange and degradation in both water and sediment. In this way chemical exposure in all compartments (including equilibrium concentrations in biota) can be estimated. Both models are designed to serve as intermediate-complexity exposure assessment tools for river basins with relatively low data requirements. By considering the spatially explicit nature of emission sources and the changes in concentration which occur with transport in the channel system, the approach offers significant advantages over simple one-segment simulations while being more readily applicable than more sophisticated, highly segmented, GIS-based models.
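A minimal sketch of the matrix formulation follows: reach connectivity is stored in an n x n matrix, each reach applies a lumped first-order loss over its travel time, and steady-state loads follow from a single linear solve; the network, emissions, flows and rate constants are illustrative.

```python
import numpy as np

# Hedged sketch of the matrix mass-balance idea: reach connectivity is held in
# an n x n matrix A (A[i, j] = 1 if reach j discharges into reach i), each reach
# applies a lumped first-order loss over its travel time, and steady-state loads
# follow from one linear solve. Network, flows and rate constants are illustrative.

n = 4
A = np.zeros((n, n))
A[2, 0] = A[2, 1] = 1.0        # reaches 0 and 1 join into reach 2
A[3, 2] = 1.0                  # reach 2 flows into reach 3 (the outlet)

E = np.array([5.0, 2.0, 0.0, 1.0])        # emissions into each reach (kg/day)
k = np.array([0.20, 0.20, 0.10, 0.10])    # overall loss rate constants (1/day)
tau = np.array([0.5, 0.8, 1.0, 2.0])      # travel times (days)
Q = np.array([2.0, 1.5, 3.5, 4.0])        # mean flows (m^3/s)

F = np.diag(np.exp(-k * tau))             # fraction surviving each reach
loads = np.linalg.solve(np.eye(n) - F @ A, F @ E)   # steady-state loads leaving each reach
conc = loads / (Q * 86400.0)              # kg/m^3, since loads are kg/day
print("loads:", loads)
print("concentrations:", conc)
```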
A rigorous and simpler method of image charges
NASA Astrophysics Data System (ADS)
Ladera, C. L.; Donoso, G.
2016-07-01
The method of image charges relies on the proven uniqueness of the solution of the Laplace differential equation for an electrostatic potential which satisfies some specified boundary conditions. Granted by that uniqueness, the method of images is rightly described as nothing but shrewdly guessing which and where image charges are to be placed to solve the given electrostatics problem. Here we present an alternative image charges method that is based not on guessing but on rigorous and simpler theoretical grounds, namely the constant potential inside any conductor and the application of powerful geometric symmetries. The aforementioned required uniqueness and, more importantly, guessing are therefore both altogether dispensed with. Our two new theoretical fundaments also allow the image charges method to be introduced in earlier physics courses for engineering and sciences students, instead of its present and usual introduction in electromagnetic theory courses that demand familiarity with the Laplace differential equation and its boundary conditions.
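For reference, the textbook configuration usually used to introduce the method is a point charge q at height d above an infinite grounded plane, solved by an image charge -q at the mirror position; the short check below evaluates the resulting two-charge potential on the plane.

```python
import numpy as np

# The textbook example behind the method: a point charge q at height d above an
# infinite grounded conducting plane (z = 0), together with an image charge -q
# at z = -d, gives a potential that vanishes on the plane. The check below
# evaluates the two-charge potential on the plane numerically.

k = 8.9875517923e9        # 1/(4*pi*eps0), N m^2 / C^2
q, d = 1.0e-9, 0.05       # charge (C) and height above the plane (m)

def potential(x, y, z):
    r_plus = np.sqrt(x**2 + y**2 + (z - d) ** 2)   # distance to the real charge
    r_minus = np.sqrt(x**2 + y**2 + (z + d) ** 2)  # distance to the image charge
    return k * q * (1.0 / r_plus - 1.0 / r_minus)

x, y = np.meshgrid(np.linspace(-0.2, 0.2, 5), np.linspace(-0.2, 0.2, 5))
print(np.max(np.abs(potential(x, y, 0.0))))   # ~0 everywhere on the grounded plane
```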
Heuristics for the Hodgkin-Huxley system.
Hoppensteadt, Frank
2013-09-01
Hodgkin and Huxley (HH) discovered that voltages control ionic currents in nerve membranes. This led them to describe electrical activity in a neuronal membrane patch in terms of an electronic circuit whose characteristics were determined using empirical data. Due to the complexity of this model, a variety of heuristics, including relaxation oscillator circuits and integrate-and-fire models, have been used to investigate activity in neurons, and these simpler models have been successful in suggesting experiments and explaining observations. Connections between most of the simpler models had not been made clear until recently. Shown here are connections between these heuristics and the full HH model. In particular, we study a new model (Type III circuit): It includes the van der Pol-based models; it can be approximated by a simple integrate-and-fire model; and it creates voltages and currents that correspond, respectively, to the h and V components of the HH system. Copyright © 2012 Elsevier Inc. All rights reserved.
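A minimal sketch of the integrate-and-fire heuristic mentioned above follows; the membrane parameters and input current are illustrative, not fitted to the Hodgkin-Huxley equations.

```python
import numpy as np

# Hedged sketch of the integrate-and-fire heuristic: a leaky membrane integrates
# an input current and emits a spike (with reset) when the voltage crosses a
# threshold. Parameters are illustrative, not fitted to the Hodgkin-Huxley model.

dt, T = 0.1, 200.0                    # ms
tau_m, R = 10.0, 1.0                  # membrane time constant (ms), resistance
V_rest, V_th, V_reset = -65.0, -50.0, -65.0   # mV
I = 20.0                              # constant input current (arbitrary units)

t = np.arange(0.0, T, dt)
V = np.full(t.shape, V_rest)
spikes = []
for i in range(1, len(t)):
    dV = (-(V[i - 1] - V_rest) + R * I) / tau_m
    V[i] = V[i - 1] + dt * dV
    if V[i] >= V_th:                  # threshold crossing: spike and reset
        spikes.append(t[i])
        V[i] = V_reset

print(f"{len(spikes)} spikes, mean rate = {1000.0 * len(spikes) / T:.1f} Hz")
```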
Domurat, Artur; Kowalczuk, Olga; Idzikowska, Katarzyna; Borzymowska, Zuzanna; Nowak-Przygodzka, Marta
2015-01-01
This paper has two aims. First, we investigate how often people make choices conforming to Bayes’ rule when natural sampling is applied. Second, we show that using Bayes’ rule is not necessary to make choices satisfying Bayes’ rule. Simpler methods, even fallacious heuristics, might prescribe correct choices reasonably often under specific circumstances. We considered elementary situations with binary sets of hypotheses and data. We adopted an ecological approach and prepared two-stage computer tasks resembling natural sampling. Probabilistic relations were inferred from a set of pictures, followed by a choice which was made to maximize the chance of a preferred outcome. Use of Bayes’ rule was deduced indirectly from choices. Study 1 used a stratified sample of N = 60 participants equally distributed with regard to gender and type of education (humanities vs. pure sciences). Choices satisfying Bayes’ rule were dominant. To investigate ways of making choices more directly, we replicated Study 1, adding a task with a verbal report. In Study 2 (N = 76) choices conforming to Bayes’ rule dominated again. However, the verbal reports revealed use of a new, non-inverse rule, which always renders correct choices, but is easier than Bayes’ rule to apply. It does not require inversion of conditions [transforming P(H) and P(D|H) into P(H|D)] when computing chances. Study 3 examined the efficiency of three fallacious heuristics (pre-Bayesian, representativeness, and evidence-only) in producing choices concordant with Bayes’ rule. Computer-simulated scenarios revealed that the heuristics produced correct choices reasonably often under specific base rates and likelihood ratios. Summing up we conclude that natural sampling results in most choices conforming to Bayes’ rule. However, people tend to replace Bayes’ rule with simpler methods, and even use of fallacious heuristics may be satisfactorily efficient. PMID:26347676
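As an illustration, the sketch below computes the Bayesian posterior from natural-sampling counts and contrasts it with one plausible reading of the "non-inverse rule": comparing the joint counts n(H, D) and n(not-H, D) directly, which yields the same choice without inverting any conditional probabilities; the counts are illustrative.

```python
# Hedged sketch: with natural-sampling counts, the choice that maximizes
# P(H | D) can be made either by Bayes' rule or by a simpler "non-inverse"
# comparison of joint frequencies, which never requires inverting conditions.
# Counts below are illustrative.

counts = {            # joint counts from a pictured sample of 100 cases
    ("H", "D"): 16,   # hypothesis true,  datum present
    ("H", "noD"): 4,
    ("notH", "D"): 24,
    ("notH", "noD"): 56,
}

# Bayes' rule: P(H | D) = P(D | H) P(H) / P(D)
total = sum(counts.values())
p_h = (counts[("H", "D")] + counts[("H", "noD")]) / total
p_d_given_h = counts[("H", "D")] / (counts[("H", "D")] + counts[("H", "noD")])
p_d = (counts[("H", "D")] + counts[("notH", "D")]) / total
p_h_given_d = p_d_given_h * p_h / p_d
print("P(H | D) =", p_h_given_d)

# Non-inverse rule: simply compare the joint counts n(H, D) and n(notH, D).
choice_bayes = "H" if p_h_given_d > 0.5 else "notH"
choice_simple = "H" if counts[("H", "D")] > counts[("notH", "D")] else "notH"
print(choice_bayes == choice_simple)   # the two rules prescribe the same choice
```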
Maydeu-Olivares, Alberto
2016-01-01
Nesselroade and Molenaar advocate the use of an idiographic filter approach. This is a fixed-effects approach, which may limit the number of individuals that can be simultaneously modeled, and it is not clear how to model the presence of subpopulations. Most important, Nesselroade and Molenaar's proposal appears to be best suited for modeling long time series on a few variables for a few individuals. Long time series are not common in psychological applications. Can it be applied to the usual longitudinal data we face? These are characterized by short time series (four to five points in time), hundreds of individuals, and dozens of variables. If so, what do we gain? Applied settings most often involve between-individual decisions. I conjecture that their approach will not outperform common, simpler, methods. However, when intraindividual decisions are involved, their approach may have an edge.
Ho Yeon, Deuk; Chandra Mohanty, Bhaskar; Lee, Seung Min; Soo Cho, Yong
2015-09-23
Here we report the highest energy conversion efficiency and good stability of PbS thin film-based depleted heterojunction solar cells, not involving PbS quantum dots. The PbS thin films were grown by the low cost chemical bath deposition (CBD) process at relatively low temperatures. Compared to the quantum dot solar cells which require critical and multistep complex procedures for surface passivation, the present approach, leveraging the facile modulation of the optoelectronic properties of the PbS films by the CBD process, offers a simpler route for optimization of PbS-based solar cells. Through an architectural modification, wherein two band-aligned junctions are stacked without any intervening layers, an enhancement of conversion efficiency by as much as 30% from 3.10 to 4.03% facilitated by absorption of a wider range of solar spectrum has been obtained. As an added advantage of the low band gap PbS stacked over a wide gap PbS, the devices show stability over a period of 10 days.
NUMERICAL FLOW AND TRANSPORT SIMULATIONS SUPPORTING THE SALTSTONE FACILITY PERFORMANCE ASSESSMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G.
2009-02-28
The Saltstone Disposal Facility Performance Assessment (PA) is being revised to incorporate requirements of Section 3116 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA), and updated data and understanding of vault performance since the 1992 PA (Cook and Fowler 1992) and related Special Analyses. A hybrid approach was chosen for modeling contaminant transport from vaults and future disposal cells to exposure points. A higher resolution, largely deterministic, analysis is performed on a best-estimate Base Case scenario using the PORFLOW numerical analysis code. A few additional sensitivity cases are simulated to examine alternative scenarios and parameter settings. Stochastic analysis is performed on a simpler representation of the SDF system using the GoldSim code to estimate uncertainty and sensitivity about the Base Case. This report describes development of PORFLOW models supporting the SDF PA, and presents sample results to illustrate model behaviors and define impacts relative to key facility performance objectives. The SDF PA document, when issued, should be consulted for a comprehensive presentation of results.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Himansu, Ananda; Loh, Ching-Yuen; Wang, Xiao-Yen; Yu, Shang-Tao
2003-01-01
This paper reports on a significant advance in the area of non-reflecting boundary conditions (NRBCs) for unsteady flow computations. As a part of the development of the space-time conservation element and solution element (CE/SE) method, sets of NRBCs for 1D Euler problems are developed without using any characteristics-based techniques. These conditions are much simpler than those commonly reported in the literature, yet so robust that they are applicable to subsonic, transonic and supersonic flows even in the presence of discontinuities. In addition, the straightforward multidimensional extensions of the present 1D NRBCs have been shown numerically to be equally simple and robust. The paper details the theoretical underpinning of these NRBCs, and explains their unique robustness and accuracy in terms of the conservation of space-time fluxes. Some numerical results for an extended Sod's shock-tube problem, illustrating the effectiveness of the present NRBCs are included, together with an associated simple Fortran computer program. As a preliminary to the present development, a review of the basic CE/SE schemes is also included.
Barimani, Shirin; Kleinebudde, Peter
2017-10-01
A multivariate analysis method, Science-Based Calibration (SBC), was used for the first time for endpoint determination of a tablet coating process using Raman data. Two types of tablet cores, placebo and caffeine cores, received a coating suspension comprising a polyvinyl alcohol-polyethylene glycol graft-copolymer and titanium dioxide to a maximum coating thickness of 80 µm. Raman spectroscopy was used as an in-line PAT tool. The spectra were acquired every minute and correlated to the amount of applied aqueous coating suspension. SBC was compared to another well-known multivariate analysis method, Partial Least Squares regression (PLS), and a simpler approach, Univariate Data Analysis (UVDA). All developed calibration models had coefficient of determination values (R²) higher than 0.99. The coating endpoints could be predicted with root mean square errors of prediction (RMSEP) less than 3.1% of the applied coating suspensions. Compared to PLS and UVDA, SBC proved to be an alternative multivariate calibration method with high predictive power. Copyright © 2017 Elsevier B.V. All rights reserved.
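A minimal sketch of the PLS comparator follows, assuming scikit-learn and synthetic spectra (SBC itself is not available in common open-source packages); spectra, coating amounts and the number of latent variables are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Hedged sketch of the PLS comparator described above: predict the amount of
# applied coating suspension from (here, synthetic) Raman spectra.

rng = np.random.default_rng(0)
n_spectra, n_wavenumbers = 120, 600
coating = np.linspace(0, 100, n_spectra)                  # % of suspension applied
signature = rng.normal(size=n_wavenumbers)                # coating-related band shape
spectra = (coating[:, None] * signature[None, :] * 0.01
           + rng.normal(0, 0.5, (n_spectra, n_wavenumbers)))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, coating, random_state=0)
pls = PLSRegression(n_components=3)
pls.fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
rmsep = np.sqrt(np.mean((pred - y_te) ** 2))
print("RMSEP (% of applied suspension):", round(rmsep, 2))
```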
Novel Phenotype Issues Raised in Cross-National Epidemiological Research on Drug Dependence
Anthony, James C.
2010-01-01
Stage-transition models based on the American Diagnostic and Statistical Manual (DSM) generally are applied in epidemiology and genetics research on drug dependence syndromes associated with cannabis, cocaine, and other internationally regulated drugs (IRD). Difficulties with DSM stage-transition models have surfaced during cross-national research intended to provide a truly global perspective, such as the work of the World Mental Health Surveys (WMHS) Consortium. Alternative simpler dependence-related phenotypes are possible, including population-level count process models for steps early and before coalescence of clinical features into a coherent syndrome (e.g., zero-inflated Poisson regression). Selected findings are reviewed, based on ZIP modeling of alcohol, tobacco, and IRD count processes, with an illustration that may stimulate new research on genetic susceptibility traits. The annual National Surveys on Drug Use and Health can be readily modified for this purpose, along the lines of a truly anonymous research approach that can help make NSDUH-type cross-national epidemiological surveys more useful in the context of subsequent genome wide association (GWAS) research and post-GWAS investigations with a truly global health perspective. PMID:20201862
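A minimal sketch of a zero-inflated Poisson count model follows, assuming statsmodels and simulated data in which "never users" contribute structural zeros; variable names and effect sizes are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# Hedged sketch: a zero-inflated Poisson model for a count of dependence-related
# clinical features, where many respondents contribute structural zeros (never
# users). Data are simulated; in practice the covariates would come from a
# survey such as the NSDUH.

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(15, 45, n)
never_user = rng.uniform(size=n) < 0.6                  # structural zeros
lam = np.exp(-1.0 + 0.03 * age)                         # Poisson mean for users
y = np.where(never_user, 0, rng.poisson(lam))

X = sm.add_constant(age)                                # count-model covariates
res = ZeroInflatedPoisson(y, X, exog_infl=sm.add_constant(age),
                          inflation='logit').fit(maxiter=200, disp=False)
print(res.summary())
```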
NASA Astrophysics Data System (ADS)
Moura, R. C.; Mengaldo, G.; Peiró, J.; Sherwin, S. J.
2017-02-01
We present estimates of spectral resolution power for under-resolved turbulent Euler flows obtained with high-order discontinuous Galerkin (DG) methods. The '1% rule' based on linear dispersion-diffusion analysis introduced by Moura et al. (2015) [10] is here adapted for 3D energy spectra and validated through the inviscid Taylor-Green vortex problem. The 1% rule estimates the wavenumber beyond which numerical diffusion induces an artificial dissipation range on measured energy spectra. As the original rule relies on standard upwinding, different Riemann solvers are tested. Very good agreement is found for solvers which treat the different physical waves in a consistent manner. Relatively good agreement is still found for simpler solvers. The latter however displayed spurious features attributed to the inconsistent treatment of different physical waves. It is argued that, in the limit of vanishing viscosity, such features might have a significant impact on robustness and solution quality. The estimates proposed are regarded as useful guidelines for no-model DG-based simulations of free turbulence at very high Reynolds numbers.
An approximate spin design criterion for monoplanes, 1 May 1939
NASA Technical Reports Server (NTRS)
Seidman, O.; Donlan, C. J.
1976-01-01
An approximate empirical criterion, based on the projected side area and the mass distribution of the airplane, was formulated. The British results were analyzed and applied to American designs. A simpler design criterion, based solely on the type and the dimensions of the tail, was developed; it is useful in a rapid estimation of whether a new design is likely to comply with the minimum requirements for safety in spinning.
NASA Astrophysics Data System (ADS)
Yager, Kevin; Albert, Thomas; Brower, Bernard V.; Pellechia, Matthew F.
2015-06-01
The domain of Geospatial Intelligence Analysis is rapidly shifting toward a new paradigm of Activity Based Intelligence (ABI) and information-based Tipping and Cueing. General requirements for an advanced ABIAA system present significant challenges in architectural design, computing resources, data volumes, workflow efficiency, data mining and analysis algorithms, and database structures. These sophisticated ABI software systems must include advanced algorithms that automatically flag activities of interest in less time and within larger data volumes than can be processed by human analysts. In doing this, they must also maintain the geospatial accuracy necessary for cross-correlation of multi-intelligence data sources. Historically, serial architectural workflows have been employed in ABIAA system design for tasking, collection, processing, exploitation, and dissemination. These simpler architectures may produce implementations that solve short term requirements; however, they have serious limitations that preclude them from being used effectively in an automated ABIAA system with multiple data sources. This paper discusses modern ABIAA architectural considerations providing an overview of an advanced ABIAA system and comparisons to legacy systems. It concludes with a recommended strategy and incremental approach to the research, development, and construction of a fully automated ABIAA system.
Electron Beam Freeform Fabrication for Cost Effective Near-Net Shape Manufacturing
NASA Technical Reports Server (NTRS)
Taminger, Karen M.; Hafley, Robert A.
2006-01-01
Manufacturing of structural metal parts directly from computer aided design (CAD) data has been investigated by numerous researchers over the past decade. Researchers at NASA Langley Research Center are developing a new solid freeform fabrication process, electron beam freeform fabrication (EBF3), as a rapid metal deposition process that works efficiently with a variety of weldable alloys. EBF3 deposits of 2219 aluminium and Ti-6Al-4V have exhibited a range of grain morphologies depending upon the deposition parameters. These materials have exhibited excellent tensile properties comparable to typical handbook data for wrought plate product after post-processing heat treatments. The EBF3 process is capable of bulk metal deposition at deposition rates in excess of 2500 cm3/hr (150 in3/hr) or finer detail at lower deposition rates, depending upon the desired application. This process offers the potential for rapidly adding structural details to simpler cast or forged structures rather than the conventional approach of machining large volumes of chips to produce a monolithic metallic structure. Selective addition of metal onto simpler blanks of material can have a significant effect on lead time reduction and lower material and machining costs.
ERIC Educational Resources Information Center
Fan, Yi; Lance, Charles E.
2017-01-01
The correlated trait-correlated method (CTCM) model for the analysis of multitrait-multimethod (MTMM) data is known to suffer convergence and admissibility (C&A) problems. We describe a little known and seldom applied reparameterized version of this model (CTCM-R) based on Rindskopf's reparameterization of the simpler confirmatory factor…
Mathematical Model for a Simplified Calculation of the Input Momentum Coefficient for AFC Purposes
NASA Astrophysics Data System (ADS)
Hirsch, Damian; Gharib, Morteza
2016-11-01
Active Flow Control (AFC) is an emerging technology that aims to enhance the aerodynamic performance of flight vehicles (e.g., to save fuel). A viable AFC system must consider the limited resources available on a plane for attaining performance goals. A higher performance goal (e.g., airplane incremental lift) demands a higher fluidic input (e.g., mass flow rate). Therefore, the key requirement for a successful and practical design is to minimize power input while maximizing performance to achieve design targets. One of the most used design parameters is the input momentum coefficient Cμ. The difficulty associated with Cμ lies in obtaining the parameters for its calculation. Two main approaches can be found in the literature, each with its own disadvantages (assumptions, difficult measurements). A new, much simpler calculation approach will be presented that is based on a mathematical model applicable to most jet designs (e.g., steady or sweeping jets). The model-incorporated assumptions will be justified theoretically as well as experimentally. Furthermore, the model's capabilities are exploited to give new insight into the AFC technology and its physical limitations. Supported by Boeing.
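For orientation, the conventional definition of the momentum coefficient is Cμ = ṁ·U_jet / (q∞·S_ref); the sketch below evaluates that textbook form with made-up numbers and does not reproduce the simplified model announced in the abstract.

```python
# Illustration of the conventional definition of the momentum coefficient
#   C_mu = (m_dot * U_jet) / (q_inf * S_ref).
# All numerical values are placeholders for the example, not measured data.
rho_air = 1.225          # kg/m^3, freestream density
U_inf = 40.0             # m/s, freestream velocity
S_ref = 0.5              # m^2, reference (wing) area
m_dot = 0.02             # kg/s, jet mass flow rate
U_jet = 150.0            # m/s, jet exit velocity (often the hard quantity to obtain)

q_inf = 0.5 * rho_air * U_inf ** 2          # freestream dynamic pressure
C_mu = (m_dot * U_jet) / (q_inf * S_ref)
print(f"C_mu = {C_mu:.4f}")
```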
NASA Astrophysics Data System (ADS)
Hu, Yanpu; Egbert, Gary; Ji, Yanju; Fang, Guangyou
2017-01-01
In this study, we apply fictitious wave domain (FWD) methods, based on the correspondence principle for the wave and diffusion fields, to finite difference (FD) modeling of transient electromagnetic (TEM) diffusion problems for geophysical applications. A novel complex frequency shifted perfectly matched layer (PML) boundary condition is adapted to the FWD to truncate the computational domain, with the maximum electromagnetic wave propagation velocity in the FWD used to set the absorbing parameters for the boundary layers. Using domains of varying spatial extent, we demonstrate that these boundary conditions offer significant improvements over simpler PML approaches, which can result in spurious reflections and large errors in the FWD solutions, especially for low frequencies and late times. In our development, resistive air layers are directly included in the FWD, allowing simulation of TEM responses in the presence of topography, as is commonly encountered in geophysical applications. We compare responses obtained with our new FD-FWD approach and with the spectral Lanczos decomposition method on 3-D resistivity models of varying complexity. The comparisons demonstrate that our absorbing boundary condition in the FWD for TEM diffusion problems works well even in complex high-contrast conductivity models.
Comparison of model propeller tests with airfoil theory
NASA Technical Reports Server (NTRS)
Durand, William F; Lesley, E P
1925-01-01
The purpose of the investigation covered by this report was the examination of the degree of approach which may be anticipated between laboratory tests on model airplane propellers and results computed by the airfoil theory, based on tests of airfoils representative of successive blade sections. It is known that the corrections for angle of attack, aspect ratio, speed, and interference rest either on experimental data or on somewhat uncertain theoretical assumptions. The general situation as regards these four sets of corrections is far from satisfactory, and while it is recognized that occasion exists for the consideration of such corrections, their determination in any given case is a matter of considerable uncertainty. There exists at the present time no theory generally accepted and sufficiently comprehensive to indicate the amount of such corrections, and the application to individual cases of the experimental data available is, at best, uncertain. While the results of this first phase of the investigation are less positive than had been hoped, the establishment of the general degree of approach between the two sets of results which might be anticipated on the basis of this simpler mode of application seems to have been desirable.
Detecting glaucomatous change in visual fields: Analysis with an optimization framework.
Yousefi, Siamak; Goldbaum, Michael H; Varnousfaderani, Ehsan S; Belghith, Akram; Jung, Tzyy-Ping; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher
2015-12-01
Detecting glaucomatous progression is an important aspect of glaucoma management. The assessment of longitudinal series of visual fields, measured using Standard Automated Perimetry (SAP), is considered the reference standard for this effort. We seek efficient techniques for determining progression from longitudinal visual fields by formulating the problem as an optimization framework, learned from a population of glaucoma data. The longitudinal data from each patient's eye were used in a convex optimization framework to find a vector that is representative of the progression direction of the sample population, as a whole. Post-hoc analysis of longitudinal visual fields across the derived vector led to optimal progression (change) detection. The proposed method was compared to recently described progression detection methods and to linear regression of instrument-defined global indices, and showed slightly higher sensitivities at the highest specificities than other methods (a clinically desirable result). The proposed approach is simpler, faster, and more efficient for detecting glaucomatous changes, compared to our previously proposed machine learning-based methods, although it provides somewhat less information. This approach has potential application in glaucoma clinics for patient monitoring and in research centers for classification of study participants. Copyright © 2015 Elsevier Inc. All rights reserved.
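The convex program itself is not given in the abstract; as a generic illustration of monitoring along a population-level direction, the sketch below uses the mean of per-eye change vectors as the direction and tracks the slope of each eye's projected series, with entirely synthetic visual-field data; it is not the authors' optimization framework.

```python
# Generic sketch of projecting longitudinal visual fields onto a population-level
# "progression direction" (here simply the mean normalized change vector, not the
# paper's convex program). All data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_eyes, n_visits, n_locs = 50, 8, 52               # 52 test locations per field
baseline = rng.normal(28, 2, (n_eyes, 1, n_locs))
true_dir = np.zeros(n_locs); true_dir[:10] = -1.0  # toy: loss concentrated in 10 locations
t = np.arange(n_visits)[None, :, None]
fields = baseline + 0.3 * t * true_dir + rng.normal(0, 1, (n_eyes, n_visits, n_locs))

changes = fields[:, -1, :] - fields[:, 0, :]       # per-eye change vectors
w = changes.mean(axis=0)
w /= np.linalg.norm(w)                             # population progression direction

proj = fields @ w                                  # (n_eyes, n_visits) projected series
slopes = np.polyfit(np.arange(n_visits), proj.T, 1)[0]
print("mean projected slope per visit:", slopes.mean())
```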
Optimal Control via Self-Generated Stochasticity
NASA Technical Reports Server (NTRS)
Zak, Michail
2011-01-01
The problem of finding global maxima of functionals has been examined. The mathematical roots of local maxima are the same as those for the much simpler problem of finding the global maximum of a multi-dimensional function. The second problem is instability: even if an optimal trajectory is found, there is no guarantee that it is stable. As a result, a fundamentally new approach to optimal control is introduced, based upon two new ideas. The first idea is to represent the functional to be maximized as the limit of a probability density governed by an appropriately selected Liouville equation. The corresponding ordinary differential equations (ODEs) then become stochastic, and the sample of the solution that has the largest value will have the highest probability of appearing in the ODE simulation. The main advantages of the stochastic approach are that it is not sensitive to local maxima, the function to be maximized need only be integrable and not necessarily differentiable, and global equality and inequality constraints do not cause any significant obstacles. The second idea is to remove possible instability of the optimal solution by equipping the control system with a self-stabilizing device. Applications of the proposed methodology include optimizing the performance of NASA spacecraft as well as robots.
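The Liouville-equation machinery is not reproduced here; the toy sketch below only illustrates the underlying intuition that drawing many stochastic samples and keeping the best one is insensitive to local maxima. The objective function is invented for the example.

```python
# Toy illustration of escaping local maxima by sampling stochastic candidates and
# keeping the best sample; not the report's Liouville-equation formulation.
import numpy as np

def f(x):
    # Multimodal objective: global maximum near x = 0, local maxima elsewhere.
    return np.exp(-x**2) + 0.5 * np.cos(3 * x)

rng = np.random.default_rng(0)
x = rng.uniform(-4, 4, size=5000)        # "stochastic" candidate states
best = x[np.argmax(f(x))]
print(f"best sample x = {best:.3f}, f = {f(best):.3f}")   # near the global maximum at 0
```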
NASA Technical Reports Server (NTRS)
Houbolt, John C; Kordes, Eldon E
1954-01-01
An analysis is made of the structural response to gusts of an airplane having the degrees of freedom of vertical motion and wing bending flexibility, and basic parameters are established. A convenient and accurate numerical solution of the response equations is developed for the case of discrete-gust encounter, an exact solution is made for the simpler case of continuous-sinusoidal-gust encounter, and the procedure is outlined for treating the more realistic condition of continuous random atmospheric turbulence, based on the methods of generalized harmonic analysis. Correlation studies between flight and calculated results are then given to evaluate the influence of wing bending flexibility on the structural response to gusts of two twin-engine transports and one four-engine bomber. It is shown that calculated results obtained by means of a discrete-gust approach reveal the general nature of the flexibility effects and lead to qualitative correlation with flight results. In contrast, calculations by means of the continuous-turbulence approach show good quantitative correlation with flight results and indicate a much greater degree of resolution of the flexibility effects.
Complexity in language learning and treatment.
Thompson, Cynthia K
2007-02-01
To introduce a Clinical Forum focused on the Complexity Account of Treatment Efficacy (C. K. Thompson, L. P. Shapiro, S. Kiran, & J. Sobecks, 2003), a counterintuitive but effective approach for treating language disorders. This approach espouses training complex structures to promote generalized improvement of simpler, linguistically related structures. Three articles are included, addressing complexity in treatment of phonology, lexical-semantics, and syntax. Complexity hierarchies based on models of normal language representation and processing are discussed in each language domain. In addition, each article presents single-subject controlled experimental studies examining the complexity effect. By counterbalancing treatment of complex and simple structures across participants, acquisition and generalization patterns are examined as they emerge. In all language domains, cascading generalization occurs from more to less complex structures; however, the opposite pattern is rarely seen. The results are robust, with replication within and across participants. The construct of complexity appears to be a general principle that is relevant to treating a range of language disorders in both children and adults. While challenging the long-standing clinical notion that treatment should begin with simple structures, mounting evidence points toward the facilitative effects of using more complex structures as a starting point for treatment.
Robotics and automation in Mars exploration
NASA Technical Reports Server (NTRS)
Bourke, Roger D.; Sturms, Francis M., Jr.; Golombek, Matthew P.; Gamber, R. T.
1992-01-01
A new approach to the exploration of Mars is examined which relies on the use of smaller and simpler vehicles. The new strategy involves the following principles: limiting science objectives to retrieval of rock samples from several different but geologically homogeneous areas; making use of emerging microspacecraft technologies to significantly reduce the mass of hardware elements; simplifying missions to the absolutely essential elements; and managing risk through the employment of many identical independent pieces some of which may fail. The emerging technologies and their applications to robotic Mars missions are discussed.
A novel iterative scheme and its application to differential equations.
Khan, Yasir; Naeem, F; Šmarda, Zdeněk
2014-01-01
The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier and repeated calculations in each iteration, respectively. Several examples are given to verify the reliability and efficiency of the method.
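As a point of reference for the correction functional being reformulated, here is a hedged sympy sketch of the classical VIM iteration y_{n+1}(t) = y_n(t) - ∫_0^t [y_n'(s) - f(y_n(s))] ds (Lagrange multiplier λ = -1 for a first-order ODE), applied to the illustrative problem y' = y, y(0) = 1, which is not taken from the paper.

```python
# Classical VIM iteration for y' = y, y(0) = 1; each pass adds one Taylor term of exp(t).
import sympy as sp

t, s = sp.symbols("t s")
f = lambda expr: expr            # right-hand side of y' = y
y = sp.Integer(1)                # initial approximation y_0(t) = y(0) = 1

for _ in range(5):
    ys = y.subs(t, s)                         # y_n written in the dummy variable s
    integrand = sp.diff(ys, s) - f(ys)        # residual y_n'(s) - f(y_n(s))
    y = sp.expand(y - sp.integrate(integrand, (s, 0, t)))   # lambda = -1 correction

print(y)   # 1 + t + t**2/2 + ..., the partial Taylor series of exp(t)
```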
PINPIN a-Si:H based structures for X-ray image detection using the laser scanning technique
NASA Astrophysics Data System (ADS)
Fernandes, M.; Vygranenko, Y.; Vieira, M.
2015-05-01
Conventional film-based X-ray imaging systems are being replaced by their digital equivalents. Different approaches are being followed, considering direct or indirect conversion, with the latter technique dominating. The typical indirect-conversion X-ray panel detector uses a phosphor for X-ray conversion coupled to a large-area array of amorphous silicon based optical sensors and a couple of switching thin film transistors (TFTs). The pixel information can then be read out by switching the corresponding line and column transistors, routing the signal to an external amplifier. In this work we follow an alternative approach, where the electrical switching performed by the TFT is replaced by optical scanning using a low-power laser beam and a sensing/switching PINPIN structure, thus resulting in a simpler device. The optically active device is a PINPIN array, sharing both front and back electrical contacts, deposited over a glass substrate. During X-ray exposure, each sensing-side photodiode collects photons generated by the scintillator screen (560 nm), charging its internal capacitance. Subsequently, a laser beam (445 nm) scans the switching diodes (back side), retrieving the stored charge sequentially and reconstructing the image. In this paper we present recent work on the optoelectronic characterization of the PINPIN structure to be incorporated in the X-ray image sensor. The results of the optoelectronic characterization of the device and the dependence on scanning beam parameters are presented and discussed. Preliminary results of line scans are also presented.
Key Topics for High-Lift Research: A Joint Wind Tunnel/Flight Test Approach
NASA Technical Reports Server (NTRS)
Fisher, David; Thomas, Flint O.; Nelson, Robert C.
1996-01-01
Future high-lift systems must achieve improved aerodynamic performance with simpler designs that involve fewer elements and reduced maintenance costs. To expeditiously achieve this, reliable CFD design tools are required. The development of useful CFD-based design tools for high lift systems requires increased attention to unresolved flow physics issues. The complex flow field over any multi-element airfoil may be broken down into certain generic component flows which are termed high-lift building block flows. In this report a broad spectrum of key flow field physics issues relevant to the design of improved high lift systems are considered. It is demonstrated that in-flight experiments utilizing the NASA Dryden Flight Test Fixture (which is essentially an instrumented ventral fin) carried on an F-15B support aircraft can provide a novel and cost effective method by which both Reynolds and Mach number effects associated with specific high lift building block flows can be investigated. These in-flight high lift building block flow experiments are most effective when performed in conjunction with coordinated ground based wind tunnel experiments in low speed facilities. For illustrative purposes three specific examples of in-flight high lift building block flow experiments capable of yielding a high payoff are described. The report concludes with a description of a joint wind tunnel/flight test approach to high lift aerodynamics research.
A review of surrogate models and their application to groundwater modeling
NASA Astrophysics Data System (ADS)
Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.
2015-08-01
The spatially and temporally variable parameters and inputs of complex groundwater models typically result in long runtimes, which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model that emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection-based, and hierarchical approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods, the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes or reducing the numerical resolution. In discussing the application of these methods to groundwater modeling, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks of the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods are yet to be fully applied in a groundwater modeling context.
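A minimal data-driven surrogate of the kind surveyed here, assuming a Gaussian-process emulator and a stand-in function in place of a real groundwater simulator; the kernel and design points are arbitrary illustrative choices.

```python
# Minimal data-driven surrogate: a Gaussian-process emulator fitted to a handful of
# runs of an "expensive" model. The expensive model here is a stand-in function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(k):
    # Placeholder for a slow groundwater simulation: head response vs. conductivity k.
    return np.sin(3 * k) / (1 + k)

k_train = np.linspace(0.1, 2.0, 8)[:, None]          # few affordable model runs
y_train = expensive_model(k_train).ravel()

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
surrogate.fit(k_train, y_train)

k_new = np.array([[0.7], [1.3]])
y_pred, y_std = surrogate.predict(k_new, return_std=True)
print(y_pred, y_std)    # cheap emulated outputs with an uncertainty estimate
```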
NASA Astrophysics Data System (ADS)
Shenoy, U. Sandhya; Shetty, A. Nityananda
2018-03-01
Enhancement of the thermal properties of conventional heat transfer fluids has become one of the important technical challenges. Since nanofluids offer promising help in this regard, the development of simpler and hassle-free routes for their synthesis is of utmost importance. The synthesis of nanofluids via a hassle-free route with greener chemicals is reported. The single-step chemical approach reported here overcomes the drawbacks of two-step procedures for the synthesis of nanofluids. The resulting Newtonian nanofluids contained cuboctahedral particles of cuprous oxide and exhibited a thermal conductivity of 2.852 W·m⁻¹·K⁻¹. Polyvinylpyrrolidone (PVP) used during the synthesis acted as a stabilizing agent, rendering the nanofluid stable for 9 weeks.
Disruptive innovation for social change.
Christensen, Clayton M; Baumann, Heiner; Ruggles, Rudy; Sadtler, Thomas M
2006-12-01
Countries, organizations, and individuals around the globe spend aggressively to solve social problems, but these efforts often fail to deliver. Misdirected investment is the primary reason for that failure. Most of the money earmarked for social initiatives goes to organizations that are structured to support specific groups of recipients, often with sophisticated solutions. Such organizations rarely reach the broader populations that could be served by simpler alternatives. There is, however, an effective way to get to those underserved populations. The authors call it "catalytic innovation." Based on Clayton Christensen's disruptive-innovation model, catalytic innovations challenge organizational incumbents by offering simpler, good-enough solutions aimed at underserved groups. Unlike disruptive innovations, though, catalytic innovations are focused on creating social change. Catalytic innovators are defined by five distinct qualities. First, they create social change through scaling and replication. Second, they meet a need that is either overserved (that is, the existing solution is more complex than necessary for many people) or not served at all. Third, the products and services they offer are simpler and cheaper than alternatives, but recipients view them as good enough. Fourth, they bring in resources in ways that initially seem unattractive to incumbents. And fifth, they are often ignored, put down, or even encouraged by existing organizations, which don't see the catalytic innovators' solutions as viable. As the authors show through examples in health care, education, and economic development, both nonprofit and for-profit groups are finding ways to create catalytic innovation that drives social change.
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2017-07-01
Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, biases are re-introduced at finer spatial scales by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine-scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
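For context, the bias-correction half of BCSD amounts to quantile mapping of GCM values onto the observed climatology; the sketch below shows that standard step on synthetic gamma-distributed precipitation and does not implement the paper's rank-based treatment of the disaggregation anomalies.

```python
# Sketch of the bias-correction step of BCSD: empirical quantile mapping of GCM
# precipitation onto the observed climatology. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=3.0, size=3000)     # "observed" monthly precipitation
gcm = rng.gamma(shape=2.0, scale=4.5, size=3000)     # biased "GCM" precipitation (too wet)

def quantile_map(x, model_clim, obs_clim):
    """Map each model value to the observed value at the same empirical quantile."""
    quantiles = np.searchsorted(np.sort(model_clim), x) / len(model_clim)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    return np.quantile(obs_clim, quantiles)

corrected = quantile_map(gcm, gcm, obs)
print(gcm.mean(), corrected.mean(), obs.mean())      # corrected mean ~ observed mean
```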
Preview Scheduled Model Predictive Control For Horizontal Axis Wind Turbines
NASA Astrophysics Data System (ADS)
Laks, Jason H.
This research investigates the use of model predictive control (MPC) in application to wind turbine operation from start-up to cut-out. The studies conducted are focused on the design of an MPC controller for a 650 kW, three-bladed horizontal-axis turbine that is in operation at the National Renewable Energy Laboratory's National Wind Technology Center outside of Golden, Colorado. This turbine is at the small end of utility-scale turbines, but it provides advanced instrumentation and control capabilities, and there is a good probability that the approach developed in simulation for this thesis will be field tested on the actual turbine. A contribution of this thesis is a method to combine the use of preview measurements with MPC while also providing regulation of turbine speed and cyclic blade loading. A common MPC technique provides integral-like control to achieve offset-free operation. At the same time, in wind turbine applications, multiple studies have developed "feed-forward" controls based on applying a gain to an estimate of the wind speed changes obtained from an observer incorporating a disturbance model. These approaches are based on a technique that can be referred to as disturbance accommodating control (DAC). In this thesis, it is shown that offset-free tracking MPC is equivalent to a DAC approach when the disturbance gain is computed to satisfy a regulator equation. Although the MPC literature has recognized that this approach provides "structurally stable" disturbance rejection and tracking, this step is typically not divorced from the MPC computations repeated at each sampling instant. The DAC formulation is conceptually simpler and essentially uncouples regulation considerations from MPC-related issues. This thesis provides a self-contained proof that the DAC formulation (an observer-controller and appropriate disturbance gain) provides structurally stable regulation.
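A minimal numerical sketch of the steady-state target calculation behind offset-free MPC/DAC, assuming a toy scalar plant rather than the 650 kW turbine model; the matrices, disturbance estimate, and reference are invented for illustration.

```python
# Steady-state target (regulator-equation) calculation used in offset-free MPC/DAC:
# given a disturbance estimate d_hat, solve
#   [A - I  B] [x_ss]   [-Bd * d_hat]
#   [  C    0] [u_ss] = [     r     ]
# Toy scalar system; all numbers are placeholders.
import numpy as np

A = np.array([[0.95]])        # discrete-time plant
B = np.array([[0.10]])
Bd = np.array([[0.05]])       # how the wind disturbance enters
C = np.array([[1.0]])

d_hat = 2.0                   # disturbance estimate from the observer
r = 1.0                       # speed reference

M = np.block([[A - np.eye(1), B],
              [C, np.zeros((1, 1))]])
rhs = np.concatenate([-Bd.ravel() * d_hat, [r]])
x_ss, u_ss = np.linalg.solve(M, rhs)
print(f"steady-state target: x_ss = {x_ss:.3f}, u_ss = {u_ss:.3f}")
```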
1986-12-31
…effective computation based on given primitives. An architecture is an abstract object-type, whose instances are computing systems. … explaining the language primitives on this basis. We explain how such a basis can be "simpler" than a general-purpose manual-programming language such as …
Methods for improving simulations of biological systems: systemic computation and fractal proteins
Bentley, Peter J.
2009-01-01
Modelling and simulation are becoming essential for new fields such as synthetic biology. Perhaps the most important aspect of modelling is to follow a clear design methodology that will help to highlight unwanted deficiencies. The use of tools designed to aid the modelling process can be of benefit in many situations. In this paper, the modelling approach called systemic computation (SC) is introduced. SC is an interaction-based language, which enables individual-based expression and modelling of biological systems, and the interactions between them. SC permits a precise description of a hypothetical mechanism to be written using an intuitive graph-based or a calculus-based notation. The same description can then be directly run as a simulation, merging the hypothetical mechanism and the simulation into the same entity. However, even when using well-designed modelling tools to produce good models, the best model is not always the most accurate one. Frequently, computational constraints or lack of data make it infeasible to model an aspect of biology. Simplification may provide one way forward, but with inevitable consequences of decreased accuracy. Instead of attempting to replace an element with a simpler approximation, it is sometimes possible to substitute the element with a different but functionally similar component. In the second part of this paper, this modelling approach is described and its advantages are summarized using an exemplar: the fractal protein model. Finally, the paper ends with a discussion of good biological modelling practice by presenting lessons learned from the use of SC and the fractal protein model. PMID:19324681
The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces
NASA Astrophysics Data System (ADS)
Vuik, C.; Saghir, A.; Boerstoel, G. P.
2000-08-01
Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.
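For readers unfamiliar with the accelerator, here is a plain GCR (generalized conjugate residual) iteration sketched in Python and applied to a tiny test matrix; the coupling to the SIMPLE(R) pressure-correction loop and the furnace models themselves are not reproduced.

```python
# A basic GCR Krylov iteration for A x = b, illustrated on a small test system.
import numpy as np

def gcr(A, b, x0=None, tol=1e-10, maxiter=50):
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    P, Q = [], []                      # search directions p_i and q_i = A p_i
    for _ in range(maxiter):
        p, q = r.copy(), A @ r
        for pi, qi in zip(P, Q):       # orthogonalize q against previous q_i
            beta = q @ qi
            q -= beta * qi
            p -= beta * pi
        nq = np.linalg.norm(q)
        p, q = p / nq, q / nq
        alpha = r @ q                  # residual-minimizing step
        x += alpha * p
        r -= alpha * q
        P.append(p); Q.append(q)
        if np.linalg.norm(r) < tol:
            break
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(gcr(A, b), np.linalg.solve(A, b))   # both should agree
```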
Simple systems that exhibit self-directed replication
NASA Technical Reports Server (NTRS)
Reggia, James A.; Armentrout, Steven L.; Chou, Hui-Hsien; Peng, Yun
1993-01-01
Biological experience and intuition suggest that self-replication is an inherently complex phenomenon, and early cellular automata models support that conception. More recently, simpler computational models of self-directed replication called sheathed loops have been developed. It is shown here that 'unsheathing' these structures and altering certain assumptions about the symmetry of their components leads to a family of nontrivial self-replicating structures, some substantially smaller and simpler than those previously reported. The dependence of replication time and transition function complexity on initial structure size, cell state symmetry, and neighborhood is examined. These results support the view that self-replication is not an inherently complex phenomenon but rather an emergent property arising from local interactions in systems that can be much simpler than is generally believed.
ERIC Educational Resources Information Center
Foo, Patrick; Warren, William H.; Duchon, Andrew; Tarr, Michael J.
2005-01-01
Do humans integrate experience on specific routes into metric survey knowledge of the environment, or do they depend on a simpler strategy of landmark navigation? The authors tested this question using a novel shortcut paradigm during walking in a virtual environment. The authors find that participants could not take successful shortcuts in a…
ERIC Educational Resources Information Center
Connolly, John J.; Glessner, Joseph T.; Hakonarson, Hakon
2013-01-01
Efforts to understand the causes of autism spectrum disorders (ASDs) have been hampered by genetic complexity and heterogeneity among individuals. One strategy for reducing complexity is to target endophenotypes, simpler biologically based measures that may involve fewer genes and constitute a more homogenous sample. A genome-wide association…
Deterministic compressive sampling for high-quality image reconstruction of ultrasound tomography.
Huy, Tran Quang; Tue, Huynh Huu; Long, Ton That; Duc-Tan, Tran
2017-05-25
Ultrasound tomography is a well-known diagnostic imaging modality developed for detecting very small tumors, with sizes below the wavelength of the incident pressure wave, without the ionizing radiation of current gold-standard X-ray mammography. Based on the inverse scattering technique, ultrasound tomography uses material properties such as sound contrast or attenuation to detect small targets. The Distorted Born Iterative Method (DBIM), based on the first-order Born approximation, is an efficient diffraction tomography approach. One of the challenges for high-quality reconstruction is obtaining many measurements from the available transmitters and receivers. Because biomedical images are often sparse, the compressed sensing (CS) technique can be applied effectively to ultrasound tomography, reducing the number of transmitters and receivers while maintaining high image reconstruction quality. Several existing CS studies use randomly distributed locations for the measurement system; however, this random configuration is relatively difficult to implement in practice. Instead, a methodology is needed that determines the locations of measurement devices deterministically. For this, we develop the novel DCS-DBIM algorithm, which is highly applicable in practice and is inspired by the deterministic compressed sensing (DCS) technique introduced by the authors a few years ago, with the image reconstruction process implemented using l1 regularization. Simulation results demonstrate the high performance of the proposed approach: the normalized error is reduced by approximately 90% compared with the conventional approach, and the new approach requires half the number of measurements and only two iterations. The universal image quality index is also evaluated to confirm the efficiency of the proposed approach. Numerical simulation results indicate that the CS and DCS techniques offer equivalent image reconstruction quality, with DCS being simpler to implement in practice. It would be a very promising approach in practical applications of modern biomedical imaging technology.
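The abstract does not spell out the solver, so the sketch below shows a generic l1-regularized recovery via iterative soft-thresholding (ISTA) on a synthetic sparse signal; the sensing matrix, regularization weight, and iteration count are illustrative assumptions rather than the DCS-DBIM setup.

```python
# Generic l1-regularized sparse recovery via ISTA: min ||A x - y||^2 + lam * ||x||_1.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                      # signal length, measurements, nonzeros
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
y = A @ x_true

lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):                      # iterative soft-thresholding
    grad = A.T @ (A @ x - y)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # small relative error
```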
NASA Astrophysics Data System (ADS)
Miorelli, Roberto; Reboud, Christophe
2018-04-01
Pulsed Eddy Current Testing (PECT) is a popular NonDestructive Testing (NDT) technique for applications such as corrosion monitoring in the oil and gas industry or rivet inspection in the aeronautic area. Its particularity is the use of a transient excitation, which allows more information to be retrieved from the piece than conventional harmonic ECT, in a simpler and cheaper way than multi-frequency ECT setups. Efficient modeling tools prove, as usual, very useful to optimize experimental sensors and devices or to evaluate their performance, for instance. This paper proposes an efficient simulation of PECT signals based on standard time-harmonic solvers and the use of an Adaptive Sparse Grid (ASG) algorithm. An adaptive sampling of the ECT signal spectrum is performed with this algorithm, then the complete spectrum is interpolated from this sparse representation and PECT signals are finally synthesized by means of an inverse Fourier transform. Simulation results corresponding to existing industrial configurations are presented and the performance of the strategy is discussed by comparison to reference results.
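A hedged sketch of the frequency-to-time synthesis step only: a handful of "harmonic solver" evaluations (here a first-order low-pass stand-in) are interpolated over the spectrum, multiplied by the excitation spectrum, and passed through an inverse FFT; uniform frequency sampling stands in for the adaptive sparse grid, and every parameter is an invented placeholder.

```python
# Synthesizing a pulsed (time-domain) response from a few time-harmonic evaluations.
import numpy as np

fs, n = 1.0e6, 4096                       # sampling rate (Hz), number of samples
freqs = np.fft.rfftfreq(n, d=1 / fs)

def harmonic_solver(f, fc=20e3):
    # Stand-in for one run of the time-harmonic ECT model at frequency f.
    return 1.0 / (1.0 + 1j * f / fc)

coarse_f = np.linspace(0, fs / 2, 40)             # few "expensive" solver calls
coarse_H = harmonic_solver(coarse_f)
H = np.interp(freqs, coarse_f, coarse_H.real) + 1j * np.interp(freqs, coarse_f, coarse_H.imag)

t = np.arange(n) / fs
pulse = (t < 2e-4).astype(float)                  # step-like excitation of 0.2 ms
response = np.fft.irfft(np.fft.rfft(pulse) * H, n=n)
print(response[:5])                               # start of the synthesized PECT-like signal
```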
Götschi, Thomas; de Nazelle, Audrey; Brand, Christian; Gerike, Regine
2017-09-01
This paper reviews the use of conceptual frameworks in research on active travel, such as walking and cycling. Generic framework features and a wide range of contents are identified and synthesized into a comprehensive framework of active travel behavior, as part of the Physical Activity through Sustainable Transport Approaches project (PASTA). PASTA is a European multinational, interdisciplinary research project on active travel and health. Along with an exponential growth in active travel research, a growing number of conceptual frameworks has been published since the early 2000s. Earlier frameworks are simpler and emphasize the distinction of environmental vs. individual factors, while more recently several studies have integrated travel behavior theories more thoroughly. Based on the reviewed frameworks and various behavioral theories, we propose the comprehensive PASTA conceptual framework of active travel behavior. We discuss how it can guide future research, such as data collection, data analysis, and modeling of active travel behavior, and present some examples from the PASTA project.
Magneto-hydrodynamically stable axisymmetric mirrors
NASA Astrophysics Data System (ADS)
Ryutov, D. D.; Berk, H. L.; Cohen, B. I.; Molvik, A. W.; Simonen, T. C.
2011-09-01
Making axisymmetric mirrors magnetohydrodynamically (MHD) stable opens up exciting opportunities for using mirror devices as neutron sources, fusion-fission hybrids, and pure-fusion reactors. This is also of interest from a general physics standpoint (as it seemingly contradicts well-established criteria of curvature-driven instabilities). The axial symmetry allows for much simpler and more reliable designs of mirror-based fusion facilities than the well-known quadrupole mirror configurations. In this tutorial, after a summary of classical results, several techniques for achieving MHD stabilization of the axisymmetric mirrors are considered, in particular: (1) employing the favorable field-line curvature in the end tanks; (2) using the line-tying effect; (3) controlling the radial potential distribution; (4) imposing a divertor configuration on the solenoidal magnetic field; and (5) affecting the plasma dynamics by the ponderomotive force. Some illuminative theoretical approaches for understanding axisymmetric mirror stability are described. The applicability of the various stabilization techniques to axisymmetric mirrors as neutron sources, hybrids, and pure-fusion reactors are discussed; and the constraints on the plasma parameters are formulated.
Endocavity Ultrasound Probe Manipulators
Stoianovici, Dan; Kim, Chunwoo; Schäfer, Felix; Huang, Chien-Ming; Zuo, Yihe; Petrisor, Doru; Han, Misop
2014-01-01
We developed two similar structure manipulators for medical endocavity ultrasound probes with 3 and 4 degrees of freedom (DoF). These robots allow scanning with ultrasound for 3-D imaging and enable robot-assisted image-guided procedures. Both robots use remote center of motion kinematics, characteristic of medical robots. The 4-DoF robot provides unrestricted manipulation of the endocavity probe. With the 3-DoF robot the insertion motion of the probe must be adjusted manually, but the device is simpler and may also be used to manipulate external-body probes. The robots enabled a novel surgical approach of using intraoperative image-based navigation during robot-assisted laparoscopic prostatectomy (RALP), performed with concurrent use of two robotic systems (Tandem, T-RALP). Thus far, a clinical trial for evaluation of safety and feasibility has been performed successfully on 46 patients. This paper describes the architecture and design of the robots, the two prototypes, control features related to safety, preclinical experiments, and the T-RALP procedure. PMID:24795525
Autonomous Guidance of Agile Small-scale Rotorcraft
NASA Technical Reports Server (NTRS)
Mettler, Bernard; Feron, Eric
2004-01-01
This report describes a guidance system for agile vehicles based on a hybrid closed-loop model of the vehicle dynamics. The hybrid model represents the vehicle dynamics through a combination of linear-time-invariant control modes and pre-programmed, finite-duration maneuvers. This particular hybrid structure can be realized through a control system that combines trim controllers and a maneuvering control logic. The former enable precise trajectory tracking, and the latter enables trajectories at the edge of the vehicle capabilities. The closed-loop model is much simpler than the full vehicle equations of motion, yet it can capture a broad range of dynamic behaviors. It also supports a consistent link between the physical layer and the decision-making layer. The trajectory generation was formulated as an optimization problem using mixed-integer linear programming. The optimization is solved in a receding-horizon fashion. Several techniques to improve the computational tractability were investigated. Simulation experiments using NASA Ames' R-50 model show that this approach fully exploits the vehicle's agility.
Active multilayered capsules for in vivo bone formation
Facca, S.; Cortez, C.; Mendoza-Palomares, C.; Messadeq, N.; Dierich, A.; Johnston, A. P. R.; Mainard, D.; Voegel, J.-C.; Caruso, F.; Benkirane-Jessel, N.
2010-01-01
Interest in the development of new sources of transplantable materials for the treatment of injury or disease has led to the convergence of tissue engineering with stem cell technology. Bone and joint disorders are expected to benefit from this new technology because of the low self-regenerating capacity of bone matrix secreting cells. Herein, the differentiation of stem cells to bone cells using active multilayered capsules is presented. The capsules are composed of poly-L-glutamic acid and poly-L-lysine with active growth factors embedded into the multilayered film. The bone induction from these active capsules incubated with embryonic stem cells was demonstrated in vitro. Herein, we report the unique demonstration of a multilayered capsule-based delivery system for inducing bone formation in vivo. This strategy is an alternative approach for in vivo bone formation. Strategies using simple chemistry to control complex biological processes would be particularly powerful, as they make production of therapeutic materials simpler and more easily controlled. PMID:20160118
Equation-of-motion coupled cluster method for the description of the high spin excited states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musiał, Monika, E-mail: musial@ich.us.edu.pl; Lupa, Łukasz; Kucharski, Stanisław A.
2016-04-21
The equation-of-motion (EOM) coupled cluster (CC) approach in the version applicable for excitation energy (EE) calculations has been formulated for high spin components. The EE-EOM-CC scheme based on the restricted Hartree-Fock reference and standard amplitude equations, as used in the Davidson diagonalization procedure, yields the singlet states. The triplet and higher spin components require separate amplitude equations. In the case of quintets, the relevant equations are much simpler and easier to solve. Out of 26 diagrammatic terms contributing to the R₁ and R₂ singlet equations, in the case of quintets only the R₂ operator survives, with 5 diagrammatic terms present. In addition, all terms involving three-body elements of the similarity-transformed Hamiltonian disappear. This indicates a substantial simplification of the theory. The implemented method has been applied to a pilot study of the excited states of the C₂ molecule and the quintet states of the C and Si atoms.
Pure quasi-P-wave calculation in transversely isotropic media using a hybrid method
NASA Astrophysics Data System (ADS)
Wu, Zedong; Liu, Hongwei; Alkhalifah, Tariq
2018-07-01
The acoustic approximation for anisotropic media is widely used in current industry imaging and inversion algorithms, mainly because P-waves constitute the majority of the energy recorded in seismic exploration. The resulting acoustic formulae tend to be simpler, leading to more efficient implementations, and depend on fewer medium parameters. However, conventional solutions of the acoustic wave equation with higher-order derivatives suffer from shear wave artefacts. Thus, we derive a new acoustic wave equation for wave propagation in transversely isotropic (TI) media, which is based on a partially separable approximation of the dispersion relation for TI media and is free of shear wave artefacts. Even though the resulting equation is not a partial differential equation, it is still a linear equation. Thus, we propose to implement this equation efficiently by combining the finite difference approximation with spectral evaluation of the space-independent parts. The resulting algorithm provides solutions without the constraint ε ≥ δ. Numerical tests demonstrate the effectiveness of the approach.
Solute partitioning in multi-component γ/γ' Co–Ni-base superalloys with near-zero lattice misfit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meher, S.; Carroll, L. J.; Pollock, T. M.
The addition of nickel to cobalt-base alloys enables alloys with a near-zero γ – γ' lattice misfit. The solute partitioning between ordered γ' precipitates and the disordered γ matrix has been investigated using atom probe tomography. The unique shift in solute partitioning in these alloys, as compared to that in simpler Co-base alloys, derives from changes in the site substitution of solutes as the relative amounts of Co and Ni change, highlighting new opportunities for the development of advanced tailored alloys.
Autonomous electrochemical biosensors: A new vision to direct methanol fuel cells.
Sales, M Goreti F; Brandão, Lúcia
2017-12-15
A new approach to biosensing devices is demonstrated, aiming at easier and simpler application in routine health care systems. Our methodology considers a new concept for the biosensor transducing event that simultaneously yields an equipment-free, user-friendly, cheap electrical biosensor. The use of the anode triple-phase boundary (TPB) layer of a passive direct methanol fuel cell (DMFC) as the biosensor transducer is herein proposed. For that, the ionomer present in the anode catalytic layer of the DMFC is partially replaced by an ionomer with molecular recognition capability, working as the biorecognition element of the biosensor. In this approach, fuel cell anode catalysts are modified with a molecularly imprinted polymer (plastic antibody) capable of protein recognition (ferritin is used as a model protein), inserted in a suitable membrane electrode assembly (MEA) and tested, as an initial proof-of-concept, in a non-passive fuel cell operation environment. The anchoring of the ionomer-based plastic antibody on the catalyst surface follows a simple one-step grafting-from approach through radical polymerization. Such modification increases fuel cell performance due to the proton conductivity and macroporosity characteristics of the polymer on the TPB. Finally, the response and selectivity of the bioreceptor inside the fuel cell showed a clear and selective signal from the biosensor. Moreover, such a pioneering transducing approach allowed amplification of the electrochemical response and increased biosensor sensitivity by 2 orders of magnitude when compared to a 3-electrode configuration system. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
A green approach for preparing anion exchange membrane based on cardo polyetherketone powders
NASA Astrophysics Data System (ADS)
Hu, Jue; Zhang, Chengxu; Zhang, Xiaodong; Chen, Longwei; Jiang, Lin; Meng, Yuedong; Wang, Xiangke
2014-12-01
Anion exchange membranes (AEMs) have attracted great attention due to their irreplaceable role in platinum-free fuel cell applications. The majority of AEM preparations have been performed in two steps: the grafting of functional groups and quaternization. Here, we adopted a simpler, more eco-friendly approach for the first time to prepare AEMs by atmospheric-pressure plasma-grafting. This approach enables the direct introduction of anion exchange groups (benzyltrimethylammonium groups) into the polymer matrix, overcoming the need for toxic chloromethyl ether and quaternization reagents. Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy and ¹H NMR spectroscopy results demonstrate that benzyltrimethylammonium groups have been successfully grafted into the cardo polyetherketone (PEK-C) matrix. Thermogravimetric analysis reveals that the plasma-grafting technique is a facile and non-destructive method able to improve the thermal stability of the polymer matrix due to the strong preservation of the PEK-C backbone structure and the cross-linking of the grafted side chains. The plasma-grafted PG-NOH membrane, which shows satisfactory alcohol resistance (ethanol permeability of 6.3 × 10⁻⁷ cm² s⁻¹), selectivity (1.2 × 10⁴ S s cm⁻³), thermal stability (safely used below 130 °C), chemical stability, anion conductivity (7.7 mS cm⁻¹ at 20 °C in deionized water) and mechanical properties, is promising for the construction of high-performance fuel cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Sahubar Ali Mohd. Nadhar, E-mail: sahubar@uum.edu.my; Ramli, Razamin, E-mail: razamin@uum.edu.my; Baten, M. D. Azizul, E-mail: baten-math@yahoo.com
Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable outputs (such as greenhouse gas emissions, nitrate leaching, effects on humans and organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, it has been found that the interval data approach is the most suitable to account for data uncertainty, as it is much simpler to model and needs less information regarding distributions and membership functions. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.
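A hedged sketch of a directional distance function DEA program of the general form used for good and bad outputs; the four farms and their data are made up, and the interval-data and climatic-factor extensions of the paper are not included.

```python
# Generic DDF DEA linear program:
#   max beta  s.t.  sum_j L_j * y_j >= y0 + beta * g_y,
#                   sum_j L_j * b_j <= b0 - beta * g_b,
#                   sum_j L_j * x_j <= x0,   L >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([10.0, 12.0, 8.0, 15.0])     # input (e.g. fertiliser) per farm (made up)
Y = np.array([20.0, 22.0, 18.0, 30.0])    # desirable output (rice yield)
B = np.array([5.0, 6.0, 4.0, 9.0])        # undesirable output (emission)

def ddf_score(j0, gy=1.0, gb=1.0):
    n = len(X)
    c = np.r_[-1.0, np.zeros(n)]                       # decision vars: [beta, L_1..L_n]
    A_ub = np.vstack([
        np.r_[gy, -Y],                                 # y0 + beta*gy - sum(L*Y) <= 0
        np.r_[gb,  B],                                 # sum(L*B) + beta*gb <= b0
        np.r_[0.0, X],                                 # sum(L*X) <= x0
    ])
    b_ub = np.r_[-Y[j0], B[j0], X[j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]                                    # beta = inefficiency (0 = efficient)

print([round(ddf_score(j), 3) for j in range(len(X))])
```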
Using leap motion to investigate the emergence of structure in speech and language.
Eryilmaz, Kerem; Little, Hannah
2017-10-01
In evolutionary linguistics, experiments using artificial signal spaces are being used to investigate the emergence of speech structure. These signal spaces need to be continuous, non-discretized spaces from which discrete units and patterns can emerge. They need to be dissimilar from, but comparable with, the vocal tract, in order to minimize interference from pre-existing linguistic knowledge, while informing us about language. This is a hard balance to strike. This article outlines a new approach that uses the Leap Motion, an infrared controller that can convert manual movement in 3D space into sound. The signal space using this approach is more flexible than signal spaces in previous attempts. Further, output data using this approach is simpler to arrange and analyze. The experimental interface was built using free, and mostly open-source, libraries in Python. We provide our source code for other researchers as open source.
NASA Astrophysics Data System (ADS)
Tisdell, C. C.
2017-08-01
Solution methods for exact differential equations via integrating factors have a rich history dating back to Euler (1740), and the ideas enjoy applications to thermodynamics and electromagnetism. Recently, Azevedo and Valentino presented an analysis of the generalized Bernoulli equation, constructing a general solution by linearizing the problem through a substitution. The purpose of this note is to present an alternative approach using 'exact methods', illustrating that a substitution and linearization of the problem is unnecessary. The ideas may be seen as forming a complementary and arguably simpler approach to that of Azevedo and Valentino, with the potential to be assimilated and adapted to the pedagogical needs of those learning and teaching exact differential equations in schools, colleges, universities and polytechnics. We illustrate how to apply the ideas through an analysis of the Gompertz equation, which is of interest in biomathematical models of tumour growth.
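As a quick companion to the worked analysis, the sympy sketch below solves the Gompertz equation dN/dt = rN ln(K/N) symbolically; it relies on dsolve rather than reproducing the note's integrating-factor manipulation, so it only serves to check the expected closed-form solution.

```python
# Symbolic check of the Gompertz equation dN/dt = r*N*ln(K/N).
import sympy as sp

t, r, K = sp.symbols("t r K", positive=True)
N = sp.Function("N")

ode = sp.Eq(N(t).diff(t), r * N(t) * sp.log(K / N(t)))
sol = sp.dsolve(ode, N(t))
print(sol)   # expected form: N(t) = K*exp(C1*exp(-r*t)), up to the naming of the constant
```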
Automatic discovery of cell types and microcircuitry from neural connectomics
Jonas, Eric; Kording, Konrad
2015-01-01
Neural connectomics has begun producing massive amounts of data, necessitating new analysis methods to discover the biological and computational structure. It has long been assumed that discovering neuron types and their relation to microcircuitry is crucial to understanding neural function. Here we developed a non-parametric Bayesian technique that identifies neuron types and microcircuitry patterns in connectomics data. It combines the information traditionally used by biologists in a principled and probabilistically coherent manner, including connectivity, cell body location, and the spatial distribution of synapses. We show that the approach recovers known neuron types in the retina and enables predictions of connectivity, better than simpler algorithms. It also can reveal interesting structure in the nervous system of Caenorhabditis elegans and an old man-made microprocessor. Our approach extracts structural meaning from connectomics, enabling new approaches of automatically deriving anatomical insights from these emerging datasets. DOI: http://dx.doi.org/10.7554/eLife.04250.001 PMID:25928186
Automatic discovery of cell types and microcircuitry from neural connectomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonas, Eric; Kording, Konrad
Neural connectomics has begun producing massive amounts of data, necessitating new analysis methods to discover the biological and computational structure. It has long been assumed that discovering neuron types and their relation to microcircuitry is crucial to understanding neural function. Here we developed a non-parametric Bayesian technique that identifies neuron types and microcircuitry patterns in connectomics data. It combines the information traditionally used by biologists in a principled and probabilistically coherent manner, including connectivity, cell body location, and the spatial distribution of synapses. We show that the approach recovers known neuron types in the retina and enables predictions of connectivity,more » better than simpler algorithms. It also can reveal interesting structure in the nervous system of Caenorhabditis elegans and an old man-made microprocessor. Our approach extracts structural meaning from connectomics, enabling new approaches of automatically deriving anatomical insights from these emerging datasets.« less
NASA Astrophysics Data System (ADS)
Price, Stanton R.; Murray, Bryce; Hu, Lequn; Anderson, Derek T.; Havens, Timothy C.; Luke, Robert H.; Keller, James M.
2016-05-01
Buried and above-ground explosive hazards are a serious threat to civilians and soldiers, and the automatic detection of such threats is highly desired. Many methods exist for explosive hazard detection, e.g., hand-held sensors, downward- and forward-looking vehicle-mounted platforms, etc. In addition, multiple sensors, such as radar and infrared (IR) imagery, are used to tackle this extreme problem. In this article, we explore the utility of feature- and decision-level fusion of learned features for forward-looking explosive hazard detection in IR imagery. Specifically, we investigate different ways to fuse learned iECO features before and after multiple kernel (MK) support vector machine (SVM) based classification. Three MK strategies are explored: fixed rule, heuristic, and optimization-based. Performance is assessed in the context of receiver operating characteristic (ROC) curves on data from a U.S. Army test site that contains multiple target and clutter types, burial depths, and times of day. The results reveal two interesting things. First, the different MK strategies appear to indicate that the different iECO individuals are all more-or-less important and there is no single dominant feature; this reinforces our hypothesis that iECO provides different ways to approach target detection. Second, we observe that while optimization-based MK is mathematically appealing, i.e., it connects the learning of the fusion to the underlying classification problem we are trying to solve, it appears to be highly susceptible to overfitting, and simpler fixed-rule and heuristic approaches help us realize more generalizable iECO solutions.
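To make the 'fixed rule' MK strategy concrete, here is a minimal, hypothetical sketch (not the authors' pipeline): several precomputed kernels, each built from a different feature set standing in for iECO descriptors, are averaged with equal weights and fed to a single precomputed-kernel SVM. The synthetic data, kernel choice, and gamma values are assumptions for illustration only.

```python
# A hedged illustration of fixed-rule multiple-kernel fusion: average several
# precomputed kernels from different feature sets, then train one SVM.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Three synthetic 'feature sets' standing in for different learned descriptors.
feature_sets = [rng.normal(size=(200, d)) for d in (5, 10, 20)]
y = rng.integers(0, 2, size=200)

# Fixed rule: unweighted average of the per-feature-set RBF kernels.
K = np.mean([rbf_kernel(X, gamma=1.0 / X.shape[1]) for X in feature_sets], axis=0)

clf = SVC(kernel='precomputed').fit(K, y)
print('training accuracy:', clf.score(K, y))
```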
NASA Technical Reports Server (NTRS)
Li, Jun; Koehne, Jessica; Chen, Hua; Cassell, Alan; Ng, Hou Tee; Ye, Qi; Han, Jie; Meyyappan, M.
2004-01-01
There is a strong need for faster, cheaper, and simpler methods for nucleic acid analysis in today's clinical tests. Nanotechnologies can potentially provide solutions to these requirements by integrating nanomaterials with biofunctionalities. Dramatic improvements in sensitivity and multiplexing can be achieved through a high degree of miniaturization. Here, we present our study on the development of an ultrasensitive, label-free electronic chip for DNA/RNA analysis based on carbon nanotube nanoelectrode arrays. A reliable nanoelectrode array based on vertically aligned multi-walled carbon nanotubes (MWNTs) embedded in a SiO2 matrix is fabricated using a bottom-up approach. Characteristic nanoelectrode behavior is observed with a low-density MWNT nanoelectrode array in measuring both bulk and surface-immobilized redox species. The open ends of the MWNTs are found to present properties similar to graphite edge-plane electrodes, with a wide potential window, flexible chemical functionalities, and good biocompatibility. A BRCA1-related oligonucleotide probe with 18 bases is covalently functionalized at the open ends of the MWNTs and specifically hybridized with an oligonucleotide target as well as a PCR amplicon. The guanine bases in the target molecules are employed as the signal moieties for the electrochemical measurements. A Ru(bpy)3(2+) mediator is used to further amplify the guanine oxidation signal. This technique has been employed for direct electrochemical detection of a label-free PCR amplicon through specific hybridization with the BRCA1 probe. The detection limit is estimated to be less than approximately 1000 DNA molecules, approaching the sensitivity limit of laser-based fluorescence techniques in DNA microarrays. This system provides a general electronic platform for rapid molecular diagnostics in applications requiring ultrahigh sensitivity, a high degree of miniaturization, simple sample preparation, and low-cost operation.
NASA Astrophysics Data System (ADS)
Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul
2017-11-01
In recent years, eco-efficiency, which considers the effect of the production process on the environment when determining the efficiency of firms, has gained traction and a lot of attention. Rice farming is one such production process, typically producing two types of outputs: economically desirable and environmentally undesirable. In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in the model to obtain an accurate estimate of a firm's efficiency. Numerous approaches have been used in the data envelopment analysis (DEA) literature to account for undesirable outputs, of which the directional distance function (DDF) approach is the most widely used, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, slack-based DDF DEA approaches consider output shortfalls and input excesses in determining efficiency. In situations where data uncertainty is present, a deterministic DEA model is not suitable, as the effects of uncertain data will not be considered. In this case, it has been found that the interval data approach is suitable for accounting for data uncertainty, as it is much simpler to model and needs less information about the underlying data distribution and membership function. The proposed model uses an enhanced DEA model based on the DDF approach and incorporates a slack-based measure to determine efficiency in the presence of undesirable factors and data uncertainty. The interval data approach was used to estimate the values of inputs, undesirable outputs, and desirable outputs. Two separate slack-based interval DEA models were constructed for optimistic and pessimistic scenarios. The developed model was used to determine the efficiency of rice farmers from Kepala Batas, Kedah. The obtained results were then compared to those obtained using a deterministic DDF DEA model. The study found that 15 out of 30 farmers are efficient in all cases. It is also found that the average efficiency value of all farmers in the deterministic case is always lower than in the optimistic scenario and higher than in the pessimistic scenario. The results agree with the hypothesis, since farmers operating under the optimistic scenario are in the best production situation, whereas under the pessimistic scenario they operate in the worst production situation. The results show that the proposed model can be applied when data uncertainty is present in the production environment.
Transmission-grating-based wavefront tilt sensor.
Iwata, Koichi; Fukuda, Hiroki; Moriwaki, Kousuke
2009-07-10
We propose a new type of tilt sensor. It consists of a grating and an image sensor. It detects the tilt of the collimated wavefront reflected from a plane mirror. Its principle is described and analyzed based on wave optics. Experimental results show its validity. Simulations of the ordinary autocollimator and the proposed tilt sensor show that the effect of noise on the measured angle is smaller for the latter. These results show a possibility of making a smaller and simpler tilt sensor.
NASA Astrophysics Data System (ADS)
Cisneros, Rafael; Gao, Rui; Ortega, Romeo; Husain, Iqbal
2016-10-01
The present paper proposes a maximum power extraction control for a wind system consisting of a turbine, a permanent magnet synchronous generator, a rectifier, a load, and one constant voltage source, which is used to form the DC bus. We propose a linear PI controller, based on passivity, whose stability is guaranteed under practically reasonable assumptions. PI structures are widely accepted in practice as they are easier to tune and simpler than other existing model-based methods. Realistic switching-based simulations have been performed to assess the performance of the proposed controller.
Efficient dual approach to distance metric learning.
Shen, Chunhua; Kim, Junae; Liu, Fayao; Wang, Lei; van den Hengel, Anton
2014-02-01
Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst-case complexity of solving an SDP problem involving a matrix variable of size D×D with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, interior-point methods can only practically solve problems with fewer than a few thousand variables. Because the number of variables is D(D+1)/2, this implies a limit on the problem size that can practically be solved of around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to approximately solve more general Frobenius norm regularized SDP problems.
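The O(D^3) cost quoted above is essentially that of an eigendecomposition. The sketch below (illustrative only, not the authors' algorithm) shows the basic building block a dual or projection-based approach can use in place of a full SDP solve: projecting a symmetric matrix onto the positive semidefinite cone by clipping negative eigenvalues.

```python
# A hedged sketch of the O(D^3) PSD-cone projection that can replace an SDP solve.
import numpy as np

def psd_projection(M):
    """Nearest PSD matrix in Frobenius norm: clip negative eigenvalues to zero."""
    M = 0.5 * (M + M.T)                    # symmetrise
    w, V = np.linalg.eigh(M)               # O(D^3) eigendecomposition
    return (V * np.clip(w, 0.0, None)) @ V.T

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 50))
P = psd_projection(A)
print(np.min(np.linalg.eigvalsh(P)) >= -1e-10)  # True: P is positive semidefinite
```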
Multiple imputation of missing covariates for the Cox proportional hazards cure model
Beesley, Lauren J; Bartlett, Jonathan W; Wolf, Gregory T; Taylor, Jeremy M G
2016-01-01
We explore several approaches for imputing partially observed covariates when the outcome of interest is a censored event time and when there is an underlying subset of the population that will never experience the event of interest. We call these subjects “cured,” and we consider the case where the data are modeled using a Cox proportional hazards (CPH) mixture cure model. We study covariate imputation approaches using fully conditional specification (FCS). We derive the exact conditional distribution and suggest a sampling scheme for imputing partially observed covariates in the CPH cure model setting. We also propose several approximations to the exact distribution that are simpler and more convenient to use for imputation. A simulation study demonstrates that the proposed imputation approaches outperform existing imputation approaches for survival data without a cure fraction in terms of bias in estimating CPH cure model parameters. We apply our multiple imputation techniques to a study of patients with head and neck cancer. PMID:27439726
Modeling human target acquisition in ground-to-air weapon systems
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Mohr, R. L.; Vikmanis, M.; Wei, K. C.
1982-01-01
The problems associated with formulating and validating mathematical models for describing and predicting human target acquisition response are considered. In particular, the extension of the human observer model to include the acquisition phase as well as the tracking segment is presented. Relationship of the Observer model structure to the more complex Standard Optimal Control model formulation and to the simpler Transfer Function/Noise representation is discussed. Problems pertinent to structural identifiability and the form of the parameterization are elucidated. A systematic approach toward the identification of the observer acquisition model parameters from ensemble tracking error data is presented.
NASA Astrophysics Data System (ADS)
Pocebneva, Irina; Belousov, Vadim; Fateeva, Irina
2018-03-01
This article provides a methodical description of resource-time analysis for a wide range of requirements imposed on resource consumption processes in scheduling tasks during the construction of high-rise buildings and facilities. The core of the proposed approach is the resource models to be determined. Generalized network models are the elements of those models, and their number can be too large to analyze each element individually. Therefore, the problem is to approximate the original resource model by simpler time models whose number is not very large.
Analyses of Third Order Bose-Einstein Correlation by Means of Coulomb Wave Function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biyajima, Minoru; Mizoguchi, Takuya; Suzuki, Naomichi
2006-04-11
In order to include a correction for the Coulomb interaction in Bose-Einstein correlations (BEC), wave functions for Coulomb scattering were introduced into the quantum optical approach to BEC in our previous work. If we formulate the amplitude written in terms of Coulomb wave functions according to the diagram for BEC in the plane wave formulation, the formula for 3π-BEC becomes simpler than that of our previous work. We re-analyze the raw 3π-BEC data of the NA44 and STAR Collaborations with this formula. Results are compared with the previous ones.
NASA Astrophysics Data System (ADS)
Holzapfel, Wilfried B.
2018-06-01
Thermodynamic modeling of fluids (liquids and gases) mostly uses series expansions, which diverge at low temperatures and do not fit the behavior of metastable quenched fluids (amorphous, glass-like solids). These divergences are removed in the present approach by the use of reasonable forms for the "cold" potential energy and for the thermal pressure of the fluid system. Both terms are related to the potential energy and to the thermal pressure of the crystalline phase in a coherent way, which leads to simpler and non-diverging series expansions for the thermal pressure and thermal energy of the fluid system. Data for solid and fluid argon are used to illustrate the potential of the present approach.
USDA-ARS?s Scientific Manuscript database
As baits, fermented food products are generally attractive to many types of insects, making it difficult to sort through nontarget insects to monitor a pest species of interest. We test the hypothesis that a chemically simpler and more defined attractant developed for a target insect is more specifi...
Searching for simplicity in the analysis of neurons and behavior
Stephens, Greg J.; Osborne, Leslie C.; Bialek, William
2011-01-01
What fascinates us about animal behavior is its richness and complexity, but understanding behavior and its neural basis requires a simpler description. Traditionally, simplification has been imposed by training animals to engage in a limited set of behaviors, by hand scoring behaviors into discrete classes, or by limiting the sensory experience of the organism. An alternative is to ask whether we can search through the dynamics of natural behaviors to find explicit evidence that these behaviors are simpler than they might have been. We review two mathematical approaches to simplification, dimensionality reduction and the maximum entropy method, and we draw on examples from different levels of biological organization, from the crawling behavior of Caenorhabditis elegans to the control of smooth pursuit eye movements in primates, and from the coding of natural scenes by networks of neurons in the retina to the rules of English spelling. In each case, we argue that the explicit search for simplicity uncovers new and unexpected features of the biological system and that the evidence for simplification gives us a language with which to phrase new questions for the next generation of experiments. The fact that similar mathematical structures succeed in taming the complexity of very different biological systems hints that there is something more general to be discovered. PMID:21383186
Multi-model inference for incorporating trophic and climate uncertainty into stock assessments
NASA Astrophysics Data System (ADS)
Ianelli, James; Holsman, Kirstin K.; Punt, André E.; Aydin, Kerim
2016-12-01
Ecosystem-based fisheries management (EBFM) approaches allow a broader and more extensive consideration of objectives than is typically possible with conventional single-species approaches. Ecosystem linkages may include trophic interactions and climate change effects on productivity for the relevant species within the system. Presently, models are evolving to include a comprehensive set of fishery and ecosystem information to address these broader management considerations. The increased scope of EBFM approaches is accompanied by a greater number of plausible models to describe the systems. This can lead to harvest recommendations and biological reference points that differ considerably among models. Model selection for projections (and specific catch recommendations) often occurs through a process that tends to adopt familiar, often simpler, models without considering those that incorporate more complex ecosystem information. Multi-model inference provides a framework that resolves this dilemma by providing a means of including information from alternative, often divergent models to inform biological reference points and possible catch consequences. We apply an example of this approach to data for three species of groundfish in the Bering Sea (walleye pollock, Pacific cod, and arrowtooth flounder) using three models: 1) an age-structured "conventional" single-species model, 2) an age-structured single-species model with temperature-specific weight at age, and 3) a temperature-specific multi-species stock assessment model. The latter two approaches also include consideration of alternative future climate scenarios, adding another dimension to evaluating model projection uncertainty. We show how Bayesian model-averaging methods can be used to incorporate such trophic and climate information to broaden single-species stock assessments by using an EBFM approach that may better characterize uncertainty.
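A minimal sketch of the model-averaging step described above follows; it is illustrative only, and the reference points and model-support values are invented placeholders, not Bering Sea estimates. The idea is simply to combine a quantity of interest across candidate models using normalised weights derived from each model's relative support.

```python
# A hedged sketch of Bayesian model averaging of a reference point across
# three candidate assessment models (values are hypothetical placeholders).
import numpy as np

# e.g. single-species, single-species + temperature, multi-species models
reference_points = np.array([120.0, 105.0, 90.0])        # e.g. thousand tonnes
log_model_support = np.array([-310.2, -308.9, -309.5])   # hypothetical log marginal likelihoods

w = np.exp(log_model_support - log_model_support.max())
w /= w.sum()                                              # normalised model weights
print('weights:', np.round(w, 3))
print('model-averaged reference point:', float(w @ reference_points))
```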
NASA Astrophysics Data System (ADS)
Minakov, A.; Medvedev, S.
2017-12-01
Analysis of lithospheric stresses is necessary to gain understanding of the forces that drive plate tectonics and intraplate deformations, and of the structure and strength of the lithosphere. A major source of lithospheric stresses is believed to lie in variations of surface topography and lithospheric density. The traditional approach to stress estimation is based on direct calculations of the Gravitational Potential Energy (GPE), the depth-integrated density moment of the lithospheric column. GPE is highly sensitive to the density structure, which, however, is often poorly constrained. The density structure of the lithosphere may be refined using methods of gravity modeling. However, the resulting density models suffer from the non-uniqueness of the inverse problem. An alternative approach is to estimate (depth-integrated) lithospheric stresses directly from satellite gravimetry data. Satellite gravity gradient measurements by the ESA GOCE mission provide a wealth of data for mapping lithospheric stresses if a link between the data and stresses or GPE can be established theoretically. The non-uniqueness of the interpretation of the sources of the gravity signal holds in this case as well. Therefore, the data analysis was tested for the North Atlantic region, where reliable additional constraints are supplied by both controlled-source and earthquake seismology. The study involves a comparison of three methods of stress modeling: (1) the traditional modeling approach using a thin sheet approximation; (2) the filtered geoid approach; and (3) the direct utilization of the gravity gradient tensor. Whereas the first two approaches calculate GPE and utilize computationally expensive finite element mechanical modeling to calculate stresses, approach (3) uses a much simpler numerical treatment but requires simplifying assumptions that are yet to be tested. The modeled orientations of principal stresses and stress magnitudes from each of the three methods are compared with the World Stress Map.
Improving the use of environmental diversity as a surrogate for species representation.
Albuquerque, Fabio; Beier, Paul
2018-01-01
The continuous p-median approach to environmental diversity (ED) is a reliable way to identify sites that efficiently represent species. A recently developed maximum dispersion (maxdisp) approach to ED is computationally simpler, does not require the user to reduce environmental space to two dimensions, and performed better than continuous p-median for datasets of South African animals. We tested whether maxdisp performs as well as continuous p-median for 12 datasets that included plants and other continents, and whether particular types of environmental variables produced consistently better models of ED. We selected 12 species inventories and atlases to span a broad range of taxa (plants, birds, mammals, reptiles, and amphibians), spatial extents, and resolutions. For each dataset, we used continuous p-median ED and maxdisp ED in combination with five sets of environmental variables (five combinations of temperature, precipitation, insolation, NDVI, and topographic variables) to select environmentally diverse sites. We used the species accumulation index (SAI) to evaluate the efficiency of ED in representing species for each approach and set of environmental variables. Maxdisp ED represented species better than continuous p-median ED in five of 12 biodiversity datasets, and about the same for the other seven biodiversity datasets. Efficiency of ED also varied with type of variables used to define environmental space, but no particular combination of variables consistently performed best. We conclude that maxdisp ED performs at least as well as continuous p-median ED, and has the advantage of faster and simpler computation. Surprisingly, using all 38 environmental variables was not consistently better than using subsets of variables, nor did any subset emerge as consistently best or worst; further work is needed to identify the best variables to define environmental space. Results can help ecologists and conservationists select sites for species representation and assist in conservation planning.
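The maxdisp idea of spreading selected sites as widely as possible in environmental space can be illustrated with a simple greedy farthest-point heuristic. The sketch below is a hedged stand-in, not the exact maxdisp ED formulation evaluated in the study; the random data and the use of Euclidean distance on standardised variables are assumptions for illustration.

```python
# A hedged sketch of dispersion-based site selection in environmental space:
# greedily pick sites far (in standardised environmental coordinates) from
# those already selected.
import numpy as np

def greedy_max_dispersion(env, k, seed=0):
    """env: (n_sites, n_env_vars) standardised variables; returns k site indices."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(env.shape[0]))]
    while len(chosen) < k:
        # distance of every site to its nearest already-chosen site
        d = np.min(np.linalg.norm(env[:, None, :] - env[None, chosen, :], axis=2), axis=1)
        d[chosen] = -np.inf                      # never re-pick a selected site
        chosen.append(int(np.argmax(d)))         # farthest from the current selection
    return chosen

env = np.random.default_rng(2).normal(size=(500, 5))   # 500 sites, 5 variables
print(greedy_max_dispersion(env, k=10))
```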
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petersen, Jakob; Pollak, Eli, E-mail: eli.pollak@weizmann.ac.il
2015-12-14
One of the challenges facing on-the-fly ab initio semiclassical time evolution is the large expense needed to converge the computation. In this paper, we suggest that a significant saving in computational effort may be achieved by employing a semiclassical initial value representation (SCIVR) of the quantum propagator based on the Heisenberg interaction representation. We formulate and test numerically a modification and simplification of the previous semiclassical interaction representation of Shao and Makri [J. Chem. Phys. 113, 3681 (2000)]. The formulation is based on the wavefunction form of the semiclassical propagation instead of the operator form, and so is simpler and cheaper to implement. The semiclassical interaction representation has the advantage that the phase and prefactor vary relatively slowly as compared to the "standard" SCIVR methods. This improves its convergence properties significantly. Using a one-dimensional model system, the approximation is compared with Herman-Kluk's frozen Gaussian and Heller's thawed Gaussian approximations. The convergence properties of the interaction representation approach are shown to be favorable and indicate that the interaction representation is a viable way of incorporating on-the-fly force field information within a semiclassical framework.
Cloud parallel processing of tandem mass spectrometry based proteomics data.
Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus
2012-10-05
Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, although not necessarily as needed to face the challenges of acquired big data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
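The data-decomposition strategy described above can be sketched in a few lines: split a list of spectra into chunks, run an unchanged search function on each chunk in parallel, and concatenate the results. The real workflow decomposes mzXML files, recomposes pepXML output, and wraps an external engine such as X!Tandem or SpectraST; the `search_engine` function below is a hypothetical stand-in, not part of their tools.

```python
# A hedged sketch of data decomposition for parallel spectrum identification.
from concurrent.futures import ProcessPoolExecutor

def search_engine(spectra_chunk):
    # Placeholder for a call to an external search engine on one chunk.
    return [f"id_for_{s}" for s in spectra_chunk]

def parallel_search(spectra, n_chunks=4):
    chunks = [spectra[i::n_chunks] for i in range(n_chunks)]   # simple round-robin split
    with ProcessPoolExecutor(max_workers=n_chunks) as pool:
        results = pool.map(search_engine, chunks)              # engine itself is untouched
    return [hit for chunk_result in results for hit in chunk_result]

if __name__ == "__main__":
    print(parallel_search([f"spectrum_{i}" for i in range(10)]))
```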
A nonlinear CDM based damage growth law for ductile materials
NASA Astrophysics Data System (ADS)
Gautam, Abhinav; Priya Ajit, K.; Sarkar, Prabir Kumar
2018-02-01
A nonlinear ductile damage growth criterion is proposed based on the continuum damage mechanics (CDM) approach. The model is derived in the framework of thermodynamically consistent CDM, assuming damage to be isotropic. In this study, the damage dissipation potential is also derived as a function of a varying strain hardening exponent in addition to the damage strain energy release rate density. Uniaxial tensile tests and load-unload cyclic tensile tests on AISI 1020 steel, AISI 1030 steel, and Al 2024 aluminum alloy are considered for the determination of their respective damage variable D and the other parameters required for the model. The experimental results are very closely predicted by the proposed model for each of the materials, with a deviation of 0%-3%. The model is also tested against the damage growth predictions of other models in the literature. The present model detects the state of damage quantitatively at any level of plastic strain and uses simpler material tests to find its parameters, so it should be useful in metal forming industries for assessing damage growth for a desired deformation level a priori. The superiority of the new model is clarified by the deviations in the predictability of test results by other models.
Reflections in computer modeling of rooms: Current approaches and possible extensions
NASA Astrophysics Data System (ADS)
Svensson, U. Peter
2005-09-01
Computer modeling of rooms is most commonly done with a calculation technique that is based on decomposing the sound field into separate reflection components. In a first step, a list of possible reflection paths is found, and in a second step, an impulse response is constructed from the list of reflections. Alternatively, the list of reflections is used to generate a simpler echogram, the energy decay as a function of time. A number of geometrical acoustics-based methods can handle specular reflections, diffuse reflections, edge diffraction, curved surfaces, and locally/non-locally reacting surfaces to various degrees. This presentation gives an overview of how reflections are handled in the image source method and in variants of the ray-tracing methods, which dominate commercial software today, as well as in the radiosity method and edge diffraction methods. The use of the recently standardized scattering and diffusion coefficients of surfaces is discussed. Possibilities for combining edge diffraction, surface scattering, and impedance boundaries are demonstrated for an example surface. Finally, the number of reflection paths becomes prohibitively high when all such combinations are included, as demonstrated for a simple concert hall model. [Work supported by the Acoustic Research Centre through NFR, Norway.]
Extrinsic Calibration of Camera Networks Based on Pedestrians
Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried
2016-01-01
In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080
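The orthogonal Procrustes step at the heart of the pairwise alignment can be sketched as a standard rigid-registration (Kabsch-type) computation: given corresponding 3D points (e.g. head and feet positions seen by two cameras), find the rotation and translation mapping one set onto the other. The code below is an illustration of that generic step only; the paper additionally wraps it in a RANSAC loop and refines the result by minimizing reprojection error.

```python
# A hedged sketch of rigid alignment of two corresponding 3D point sets.
import numpy as np

def procrustes_rigid(A, B):
    """A, B: (n, 3) corresponding points; returns R (3x3), t (3,) with B ~ A @ R.T + t."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

rng = np.random.default_rng(3)
A = rng.normal(size=(20, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true = Q * np.sign(np.linalg.det(Q))             # a proper rotation (det = +1)
B = A @ R_true.T + np.array([0.5, -1.0, 2.0])
R, t = procrustes_rigid(A, B)
print(np.allclose(A @ R.T + t, B))                 # True
```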
On the exactness of effective Floquet Hamiltonians employed in solid-state NMR spectroscopy
NASA Astrophysics Data System (ADS)
Garg, Rajat; Ramachandran, Ramesh
2017-05-01
Development of theoretical models based on analytic theory has remained an active pursuit in molecular spectroscopy for its utility both in the design of experiments as well as in the interpretation of spectroscopic data. In particular, the role of "Effective Hamiltonians" in the evolution of theoretical frameworks is well known across all forms of spectroscopy. Nevertheless, a constant revalidation of the approximations employed in the theoretical frameworks is necessitated by the constant improvements on the experimental front in addition to the complexity posed by the systems under study. Here in this article, we confine our discussion to the derivation of effective Floquet Hamiltonians based on the contact transformation procedure. While the importance of the effective Floquet Hamiltonians in the qualitative description of NMR experiments has been realized in simpler cases, its extension in quantifying spectral data deserves a cautious approach. With this objective, the validity of the approximations employed in the derivation of the effective Floquet Hamiltonians is re-examined through a comparison with exact numerical methods under differing experimental conditions. The limitations arising from the existing analytic methods are outlined along with remedial measures for improving the accuracy of the derived effective Floquet Hamiltonians.
NASA Technical Reports Server (NTRS)
Chang, S.-C.; Himansu, A.; Loh, C.-Y.; Wang, X.-Y.; Yu, S.-T.J.
2005-01-01
This paper reports on a significant advance in the area of nonreflecting boundary conditions (NRBCs) for unsteady flow computations. As a part of the development of the space-time conservation element and solution element (CE/SE) method, sets of NRBCs for 1D Euler problems are developed without using any characteristics-based techniques. These conditions are much simpler than those commonly reported in the literature, yet so robust that they are applicable to subsonic, transonic and supersonic flows even in the presence of discontinuities. In addition, the straightforward multidimensional extensions of the present 1D NRBCs have been shown numerically to be equally simple and robust. The paper details the theoretical underpinning of these NRBCs, and explains their unique robustness and accuracy in terms of the conservation of space-time fluxes. Some numerical results for an extended Sod's shock-tube problem, illustrating the effectiveness of the present NRBCs, are included, together with an associated simple Fortran computer program. As a preliminary to the present development, a review of the basic CE/SE schemes is also included.
Koopman Operator Framework for Time Series Modeling and Analysis
NASA Astrophysics Data System (ADS)
Surana, Amit
2018-01-01
We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator, which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be identified directly from data using techniques for computing Koopman spectral properties, without requiring explicit knowledge of the generative model. We also introduce different notions of distance on the space of such model forms, which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance, in conjunction with classical machine learning techniques, to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in a power grid application.
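One common way to estimate Koopman spectral properties directly from data is dynamic mode decomposition (DMD). The sketch below is a hedged illustration of that generic technique, not the author's specific model forms or distances; the test signal and truncation choices are assumptions.

```python
# A hedged sketch of exact DMD as a data-driven estimate of Koopman spectra.
import numpy as np

def dmd(X, r=None):
    """X: (n_features, n_snapshots) time series; returns eigenvalues and modes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    if r is not None:
        U, s, Vt = U[:, :r], s[:r], Vt[:r]
    A_tilde = U.conj().T @ X2 @ Vt.conj().T @ np.diag(1.0 / s)   # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vt.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Noisy oscillatory test signal; eigenvalues should lie near exp(+/- i*2*pi*dt)
# on the unit circle, reflecting the underlying oscillation.
t = np.linspace(0, 10, 400)
X = np.vstack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
X = X + 0.01 * np.random.default_rng(4).normal(size=X.shape)
eigvals, _ = dmd(X)
print(np.round(eigvals, 4))
```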
Teaching Weight-Gravity and Gravitation in Middle School. Testing a New Instructional Approach
NASA Astrophysics Data System (ADS)
Galili, Igal; Bar, Varda; Brosh, Yaffa
2016-12-01
This study deals with the school instruction of the concept of weight. The historical review reveals the major steps in changing the definition of weight, reflecting epistemological changes in physics. The latest change, drawing on the operation of weighing, has not been widely adopted in physics education. We compared the older instruction based on the gravitational definition of weight with the newer one based on the operational definition. The experimental teaching was applied in two versions, simpler and extended. The study examined the impact of this instruction on middle school students in a regular teaching environment. The experiment involved three groups (N = 486) of 14-year-old students (ninth grade). The assessment drew on a written questionnaire and personal interviews. The elicited schemes of conceptual knowledge allowed us to evaluate the impact on students' pertinent knowledge. The advantage of the new teaching manifested itself in a significant decrease in well-known misconceptions such as "space causes weightlessness," "weight is an unchanged property of the body considered," and "heavier objects fall faster". The twofold advantage, epistemological and conceptual, of the operational definition of weight supports the corresponding curricular changes for its adoption.
Miravitlles, Marc; Soler-Cataluña, Juan José; Calle, Myriam; Molina, Jesús; Almagro, Pere; Quintano, José Antonio; Trigueros, Juan Antonio; Cosío, Borja G; Casanova, Ciro; Antonio Riesco, Juan; Simonet, Pere; Rigau, David; Soriano, Joan B; Ancochea, Julio
2017-06-01
The clinical presentation of chronic obstructive pulmonary disease (COPD) varies widely, so treatment must be tailored according to the level of risk and phenotype. In 2012, the Spanish COPD Guidelines (GesEPOC) first established pharmacological treatment regimens based on clinical phenotypes. These regimens were subsequently adopted by other national guidelines, and since then, have been backed up by new evidence. In this 2017 update, the original severity classification has been replaced by a much simpler risk classification (low or high risk), on the basis of lung function, dyspnea grade, and history of exacerbations, while determination of clinical phenotype is recommended only in high-risk patients. The same clinical phenotypes have been maintained: non-exacerbator, asthma-COPD overlap (ACO), exacerbator with emphysema, and exacerbator with bronchitis. Pharmacological treatment of COPD is based on bronchodilators, the only treatment recommended in low-risk patients. High-risk patients will receive different drugs in addition to bronchodilators, depending on their clinical phenotype. GesEPOC reflects a more individualized approach to COPD treatment, according to patient clinical characteristics and level of risk or complexity. Copyright © 2017 SEPAR. Published by Elsevier España, S.L.U. All rights reserved.
Blood coagulation screening using a paper-based microfluidic lateral flow device.
Li, H; Han, D; Pauletti, G M; Steckl, A J
2014-10-21
A simple approach to the evaluation of blood coagulation using a microfluidic paper-based lateral flow assay (LFA) device for point-of-care (POC) and self-monitoring screening is reported. The device utilizes whole blood, without the need for prior separation of plasma from red blood cells (RBC). Experiments were performed using animal (rabbit) blood treated with trisodium citrate to prevent coagulation. CaCl2 solutions of varying concentrations are added to citrated blood, producing Ca(2+) ions to re-establish the coagulation cascade and mimic different blood coagulation abilities in vitro. Blood samples are dispensed into a paper-based LFA device consisting of sample pad, analytical membrane and wicking pad. The porous nature of the cellulose membrane separates the aqueous plasma component from the large blood cells. Since the viscosity of blood changes with its coagulation ability, the distance RBCs travel in the membrane in a given time can be related to the blood clotting time. The distance of the RBC front is found to decrease linearly with increasing CaCl2 concentration, with a travel rate decreasing from 3.25 mm min(-1) for no added CaCl2 to 2.2 mm min(-1) for 500 mM solution. Compared to conventional plasma clotting analyzers, the LFA device is much simpler and it provides a significantly larger linear range of measurement. Using the red colour of RBCs as a visible marker, this approach can be utilized to produce a simple and clear indicator of whether the blood condition is within the appropriate range for the patient's condition.
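The reported linear decrease of the RBC travel rate with CaCl2 concentration suggests a simple calibration curve. The hedged sketch below uses only the two rates quoted in the abstract (3.25 mm/min at 0 mM and 2.2 mm/min at 500 mM) and assumes linearity in between; it is an illustration of the idea, not the device's actual calibration procedure.

```python
# A hedged sketch: fit the reported linear rate-vs-concentration trend and
# invert it to estimate an effective CaCl2 concentration from a measured rate.
import numpy as np

conc = np.array([0.0, 500.0])        # mM CaCl2 (anchor points from the abstract)
rate = np.array([3.25, 2.2])         # mm/min RBC front travel rate
slope, intercept = np.polyfit(conc, rate, 1)

def estimate_conc(measured_rate_mm_per_min):
    """Invert the calibration line to estimate the effective CaCl2 concentration."""
    return (measured_rate_mm_per_min - intercept) / slope

print(round(estimate_conc(2.7), 1), 'mM')   # a rate between the two anchors
```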
Uchiyama, Ikuo
2008-10-31
Identifying the set of intrinsically conserved genes, or the genomic core, among related genomes is crucial for understanding prokaryotic genomes where horizontal gene transfers are common. Although core genome identification appears to be obvious among very closely related genomes, it becomes more difficult when more distantly related genomes are compared. Here, we consider the core structure as a set of sufficiently long segments in which gene orders are conserved so that they are likely to have been inherited mainly through vertical transfer, and developed a method for identifying the core structure by finding the order of pre-identified orthologous groups (OGs) that maximally retains the conserved gene orders. The method was applied to genome comparisons of two well-characterized families, Bacillaceae and Enterobacteriaceae, and identified their core structures comprising 1438 and 2125 OGs, respectively. The core sets contained most of the essential genes and their related genes, which were primarily included in the intersection of the two core sets comprising around 700 OGs. The definition of the genomic core based on gene order conservation was demonstrated to be more robust than the simpler approach based only on gene conservation. We also investigated the core structures in terms of G+C content homogeneity and phylogenetic congruence, and found that the core genes primarily exhibited the expected characteristic, i.e., being indigenous and sharing the same history, more than the non-core genes. The results demonstrate that our strategy of genome alignment based on gene order conservation can provide an effective approach to identify the genomic core among moderately related microbial genomes.
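A toy version of the gene-order idea described above can be sketched as follows; this is a hypothetical stand-in, not the author's algorithm. Given the ordered lists of pre-identified orthologous groups (OGs) along two genomes, it reports maximal runs of OGs that occur contiguously, in the same order and orientation, in both; real core-structure identification additionally handles inversions, rearrangements, and more than two genomes.

```python
# A hedged sketch: maximal same-orientation runs of orthologous groups (OGs)
# shared by two genomes, as a toy stand-in for conserved gene-order segments.
def conserved_segments(order_a, order_b, min_len=3):
    pos_b = {og: i for i, og in enumerate(order_b)}      # OG -> position in genome B
    segments, run = [], []
    for og in order_a:
        if og in pos_b and run and pos_b[og] == pos_b[run[-1]] + 1:
            run.append(og)                               # extends the current collinear run
        else:
            if len(run) >= min_len:
                segments.append(run)
            run = [og] if og in pos_b else []
    if len(run) >= min_len:
        segments.append(run)
    return segments

genome_a = ['og1', 'og2', 'og3', 'og9', 'og4', 'og5', 'og6', 'og7']
genome_b = ['og1', 'og2', 'og3', 'og4', 'og5', 'og6', 'og8', 'og7']
print(conserved_segments(genome_a, genome_b))   # [['og1','og2','og3'], ['og4','og5','og6']]
```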
A reductionist perspective on quantum statistical mechanics: Coarse-graining of path integrals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinitskiy, Anton V.; Voth, Gregory A., E-mail: gavoth@uchicago.edu
2015-09-07
Computational modeling of the condensed phase based on classical statistical mechanics has been rapidly developing over the last few decades and has yielded important information on various systems containing up to millions of atoms. However, if a system of interest contains important quantum effects, well-developed classical techniques cannot be used. One way of treating finite temperature quantum systems at equilibrium has been based on Feynman's imaginary time path integral approach and the ensuing quantum-classical isomorphism. This isomorphism is exact only in the limit of infinitely many classical quasiparticles representing each physical quantum particle. In this work, we present a reductionist perspective on this problem based on the emerging methodology of coarse-graining. This perspective allows for the representations of one quantum particle with only two classical-like quasiparticles and their conjugate momenta. One of these coupled quasiparticles is the centroid particle of the quantum path integral quasiparticle distribution. Only this quasiparticle feels the potential energy function. The other quasiparticle directly provides the observable averages of quantum mechanical operators. The theory offers a simplified perspective on quantum statistical mechanics, revealing its most reductionist connection to classical statistical physics. By doing so, it can facilitate a simpler representation of certain quantum effects in complex molecular environments.
The impact of genetics on future drug discovery in schizophrenia.
Matsumoto, Mitsuyuki; Walton, Noah M; Yamada, Hiroshi; Kondo, Yuji; Marek, Gerard J; Tajinda, Katsunori
2017-07-01
Failures of investigational new drugs (INDs) for schizophrenia have left huge unmet medical needs for patients. Given the recent lackluster results, it is imperative that new drug discovery approaches (and resultant drug candidates) target pathophysiological alterations that are shared in specific, stratified patient populations that are selected based on pre-identified biological signatures. One path to implementing this paradigm is achievable by leveraging recent advances in genetic information and technologies. Genome-wide exome sequencing and meta-analysis of single nucleotide polymorphism (SNP)-based association studies have already revealed rare deleterious variants and SNPs in patient populations. Areas covered: Herein, the authors review the impact that genetics have on the future of schizophrenia drug discovery. The high polygenicity of schizophrenia strongly indicates that this disease is biologically heterogeneous so the identification of unique subgroups (by patient stratification) is becoming increasingly necessary for future investigational new drugs. Expert opinion: The authors propose a pathophysiology-based stratification of genetically-defined subgroups that share deficits in particular biological pathways. Existing tools, including lower-cost genomic sequencing and advanced gene-editing technology render this strategy ever more feasible. Genetically complex psychiatric disorders such as schizophrenia may also benefit from synergistic research with simpler monogenic disorders that share perturbations in similar biological pathways.
Development of a Grid-Based Gyro-Kinetic Simulation Code
NASA Astrophysics Data System (ADS)
Lapillonne, Xavier; Brunetti, Maura; Tran, Trach-Minh; Brunner, Stephan
2006-10-01
A grid-based semi-Lagrangian code using cubic spline interpolation is being developed at CRPP for solving the electrostatic drift-kinetic equations [M. Brunetti et al., Comp. Phys. Comm. 163, 1 (2004)] in a cylindrical system. This 4-dimensional code, CYGNE, is part of a project with the long-term aim of studying microturbulence in toroidal fusion devices, in the more general frame of gyrokinetic equations. Towards their non-linear phase, the simulations from this code are subject to significant overshoot problems, reflected in the development of negative-value regions of the distribution function, which leads to poor energy conservation. This has motivated the study of alternative schemes. On the one hand, new time integration algorithms are considered in the semi-Lagrangian frame. On the other hand, fully Eulerian schemes, which separate time and space discretisation (method of lines), are investigated. In particular, the Essentially Non-Oscillatory (ENO) approach, constructed so as to minimize the overshoot problem, has been considered. All these methods have first been tested in the simpler case of the 2-dimensional guiding-center model for the Kelvin-Helmholtz instability, which makes it possible to address the specific issue of the E×B drift also met in the more complex gyrokinetic-type equations. Based on these preliminary studies, the most promising methods are being implemented and tested in CYGNE.
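The core semi-Lagrangian step with cubic-spline interpolation can be illustrated on the much simpler problem of 1D constant advection, df/dt + a df/dx = 0: trace characteristics back by a*dt and interpolate the previous solution at the departure points. This is a hedged, generic sketch (the actual code solves 4D drift-kinetic equations); the periodic domain, resolution, and time step are assumptions.

```python
# A hedged sketch of one semi-Lagrangian step for 1D advection on a periodic
# domain starting at x = 0, using cubic-spline interpolation.
import numpy as np
from scipy.interpolate import CubicSpline

def semi_lagrangian_step(f, x, a, dt):
    spline = CubicSpline(x, f, bc_type='periodic')
    x_dep = (x - a * dt) % (x[-1] - x[0])          # departure points of characteristics
    return spline(x_dep)

x = np.linspace(0.0, 1.0, 129)
f = np.exp(-200.0 * (x - 0.5) ** 2)                # narrow Gaussian pulse
f[-1] = f[0]                                       # enforce exact periodicity for the spline
for _ in range(100):                               # advect one full period (a*dt*100 = 1)
    f = semi_lagrangian_step(f, x, a=1.0, dt=0.01)
print(float(f.max()), float(f.min()))              # overshoot/undershoot can be inspected here
```

Inspecting the minimum after many steps shows the kind of negative-value overshoot that motivates ENO-type alternatives.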
Isotopic composition of atmospheric moisture from pan water evaporation measurements.
Devi, Pooja; Jain, Ashok Kumar; Rao, M Someshwer; Kumar, Bhishm
2015-01-01
A continuous and reliable time series of the stable isotopic composition of atmospheric moisture is an important requirement for the wider applicability of isotope mass balance methods in atmospheric and water balance studies. This requires routine sampling of atmospheric moisture by an appropriate technique and analysis of the moisture for its isotopic composition. We have, therefore, used a much simpler method based on an isotope mass balance approach to derive the isotopic composition of atmospheric moisture using a class-A drying evaporation pan. We carried out the study by collecting water samples from a class-A drying evaporation pan and also by collecting atmospheric moisture using the cryogenic trap method at the National Institute of Hydrology, Roorkee, India, during a pre-monsoon period. We compared the isotopic composition of atmospheric moisture obtained using the class-A drying evaporation pan method with that from the cryogenic trap method. The results obtained from the evaporation pan water compare well with the cryogenic-based method. Thus, the study establishes a cost-effective means of maintaining time series data of the isotopic composition of atmospheric moisture at meteorological observatories. The conclusions drawn in the present study are based on experiments conducted at Roorkee, India, and may be examined in other regions for their general applicability.
Gopalapillai, Yamini; Hale, Beverley A
2017-05-02
Simultaneous determinations of internal dose ([M]tiss) and external doses ([M]tot and {M2+} in solution) were conducted to study ternary mixture (Ni, Cu, Cd) chronic toxicity to Lemna minor in alkaline solution (pH 8.3). In addition, concentration addition (CA) based on internal dose was evaluated as a tool for risk assessment of metal mixtures. Multiple regression analysis of dose versus root growth inhibition, as well as saturation binding kinetics, provided insight into interactions. Multiple regressions were simpler for [M]tiss than for [M]tot and {M2+}, and, along with saturation kinetics at the internal biotic ligand(s) in the cytoplasm, they indicated that Ni, Cu and Cd competed for uptake into the plant, but once inside, only Cu and Cd shared a binding site. Copper inorganic complexes (hydroxides, carbonates) played a role in metal bioavailability in single-metal exposures but not in mixtures. Regardless of interactions, the current regulatory approach of using CA based on [M]tot can sufficiently predict mixture toxicity (∑TU close to 1), but CA based on [M]tiss was closest to unity across a range of doses. Internal dose integrates all metal-metal interactions in solution and during uptake into the organism, thereby providing a more direct metric describing toxicity.
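The concentration-addition check referred to above amounts to computing the sum of toxic units, ∑TU = Σ_i dose_i / EC50_i, with doses and EC50s expressed on the same basis (here internal tissue dose). The numbers in the sketch below are hypothetical placeholders, not data from the study.

```python
# A hedged sketch of the toxic-unit sum used in concentration addition.
doses_tiss = {'Ni': 12.0, 'Cu': 5.0, 'Cd': 0.8}     # e.g. ug per g tissue (hypothetical)
ec50_tiss = {'Ni': 30.0, 'Cu': 11.0, 'Cd': 2.5}     # single-metal EC50s, same units (hypothetical)

tu_sum = sum(doses_tiss[m] / ec50_tiss[m] for m in doses_tiss)
print(round(tu_sum, 2))   # values near 1 indicate additive mixture toxicity
```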
Tabassum, Shawana; Dong, Liang; Kumar, Ratnesh
2018-03-05
We present an effective yet simple approach for studying the dynamic variations in the optical properties (such as the refractive index (RI)) of graphene oxide (GO) when exposed to gases in the visible spectral region, using the thin-film interference method. The dynamic variation in the complex refractive index of GO in response to gas exposure is an important factor affecting the performance of GO-based gas sensors. In contrast to conventional ellipsometry, this method alleviates the need to select a dispersion model from a list of model choices, which is limiting if an applicable model is not known a priori. In addition, the method is computationally simpler and does not need to employ any functional approximations. A further advantage over ellipsometry is that no bulky optics is required, so the method can be easily integrated into the sensing system, thereby allowing reliable, simple, and dynamic evaluation of the optical performance of any GO-based gas sensor. In addition, the dynamically changing RI values of the GO layer derived with our method are corroborated by comparison with values obtained from ellipsometry.
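The thin-film interference method rests on the standard Airy reflectance of a single film on a substrate, which can be fitted to a measured visible-range reflectance spectrum to extract the film's complex refractive index. The sketch below is a generic, hedged illustration of that relation at normal incidence; the indices, thickness, and substrate value are placeholders, not the paper's GO parameters.

```python
# A hedged sketch of the Airy reflectance R = |r|^2 of an ambient/film/substrate
# stack at normal incidence, with an absorbing film (complex index).
import numpy as np

def film_reflectance(wavelength_nm, n_film, d_nm, n_ambient=1.0, n_substrate=1.45):
    r12 = (n_ambient - n_film) / (n_ambient + n_film)        # Fresnel, ambient/film
    r23 = (n_film - n_substrate) / (n_film + n_substrate)    # Fresnel, film/substrate
    beta = 2.0 * np.pi * n_film * d_nm / wavelength_nm       # phase thickness
    phase = np.exp(2j * beta)
    r = (r12 + r23 * phase) / (1.0 + r12 * r23 * phase)
    return np.abs(r) ** 2

wl = np.linspace(400.0, 700.0, 7)                             # visible range, nm
print(np.round(film_reflectance(wl, n_film=1.9 + 0.05j, d_nm=80.0), 4))
```

Fitting `n_film` (and possibly `d_nm`) so that this model matches the measured spectrum, wavelength by wavelength or over the full band, yields the dynamically changing RI without assuming a dispersion model.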
Comparison of methods for estimating the attributable risk in the context of survival analysis.
Gassama, Malamine; Bénichou, Jacques; Dartois, Laureen; Thiébaut, Anne C M
2017-01-23
The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating the AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on the estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation, and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than the other approaches. All methods showed satisfactory coverage except for the nonparametric methods, especially at the end of follow-up for a sample size of 1,000. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas the semiparametric and parametric approaches, which both relied on the proportional hazards assumption, performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. In practice, our study suggests using the semiparametric or parametric approaches to estimate AR as a function of time in cohort studies if the proportional hazards assumption appears appropriate.
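The 'simpler method' mentioned above combines an exposure prevalence with a hazard ratio; a common closed form for doing so is Levin's formula, shown in the hedged sketch below. The formula choice and the numbers are illustrative assumptions, not necessarily the paper's estimator or the E3N estimates.

```python
# A hedged sketch of a prevalence-plus-hazard-ratio attributable risk,
# using Levin's formula AR = p*(HR - 1) / (1 + p*(HR - 1)).
def attributable_risk(prevalence, hazard_ratio):
    excess = prevalence * (hazard_ratio - 1.0)
    return excess / (1.0 + excess)

print(round(attributable_risk(prevalence=0.40, hazard_ratio=1.3), 3))  # illustrative values
```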
Cost-Value Analysis and the SAVE: A Work in Progress, But an Option for Localised Decision Making?
Karnon, Jonathan; Partington, Andrew
2015-12-01
Cost-value analysis aims to address the limitations of the quality-adjusted life-year (QALY) by incorporating the strength of public concerns for fairness in the allocation of scarce health care resources. To date, the measurement of value has focused on equity weights to reflect societal preferences for the allocation of QALY gains. Another approach is to use a non-QALY-based measure of value, such as an outcome 'equivalent to saving the life of a young person' (a SAVE). This paper assesses the feasibility and validity of using the SAVE as a measure of value for the economic evaluation of health care technologies. A web-based person trade-off (PTO) survey was designed and implemented to estimate equivalent SAVEs for outcome events associated with the progression and treatment of early-stage breast cancer. The estimated equivalent SAVEs were applied to the outputs of an existing decision analytic model for early breast cancer. The web-based PTO survey was undertaken by 1094 respondents. Validation tests showed that 68% of eligible responses revealed consistent ordering of responses and 32% displayed ordinal transitivity, while 37% of respondents who showed consistency and ordinal transitivity approached cardinal transitivity. Using consistent and ordinally transitive responses, the mean incremental cost per SAVE gained was £3.72 million. Further research is required to improve the validity of the SAVE, which may include a simpler web-based survey format or a face-to-face format to facilitate more informed responses. A validated method for estimating equivalent SAVEs is unlikely to replace the QALY as the globally preferred measure of outcome, but the SAVE may provide a useful alternative for localized decision makers with relatively small, constrained budgets, for example in programme budgeting and marginal analysis.
Development and validation of a new soot formation model for gas turbine combustor simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Domenico, Massimiliano; Gerlinger, Peter; Aigner, Manfred
2010-02-15
In this paper a new soot formation model for gas turbine combustor simulations is presented. A sectional approach for the description of Polycyclic Aromatic Hydrocarbons (PAHs) and a two-equation model for soot particle dynamics are introduced. By including the PAH chemistry the formulation becomes more general, in that the soot formation is neither directly linked to the fuel nor to C₂-like species, as is the case in simpler soot models currently available for CFD applications. At the same time, the sectional approach for the PAHs keeps the required computational resources low compared to models based on a detailed description of the PAH kinetics. These features of the new model allow an accurate yet affordable calculation of soot in complex gas turbine combustion chambers. A careful model validation is presented for diffusion and partially premixed flames. Fuels ranging from methane to kerosene are investigated; thus, flames with different sooting characteristics are covered. Excellent agreement with experimental data is achieved for all configurations investigated. A fundamental feature of the new model is that, with a single set of constants, it is able to accurately describe the soot dynamics of different fuels at different operating conditions.
Single vs. dual color fire detection systems: operational tradeoffs
NASA Astrophysics Data System (ADS)
Danino, Meir; Danan, Yossef; Sinvani, Moshe
2017-10-01
In an attempt to provide reliable fire-plume detection, multinational cooperation and significant capital have been invested in the development of two major infrared (IR)-based fire detection alternatives: single-color IR (SCIR) and dual-color IR (DCIR). The false-alarm rate is expected to be high, not only because of real heat sources but mainly because of natural IR clutter, especially solar-reflection clutter. SCIR uses state-of-the-art technology and sophisticated algorithms to filter out threats from clutter. DCIR, on the other hand, aims to use an additional spectral-band measurement (acting as a guard) to allow the implementation of a simpler and more robust approach to the same task. In this paper we present the basics of the SCIR and DCIR architectures and the main differences between them. In addition, we present the results of a thorough study conducted to learn about the added value of the additional data available from the second spectral band. Here we consider the two CO2 bands, at 4-5 micron and 2.5-3 micron, as well as an off-peak (guard) band. The findings of this study also bear on the efficacy of missile warning systems (MWS) in terms of operational value. We also present a new tunable-filter approach for such a sensor.
Relative Navigation of Formation Flying Satellites
NASA Technical Reports Server (NTRS)
Long, Anne; Kelbel, David; Lee, Taesul; Leung, Dominic; Carpenter, Russell; Gramling, Cheryl; Bauer, Frank (Technical Monitor)
2002-01-01
The Guidance, Navigation, and Control Center (GNCC) at Goddard Space Flight Center (GSFC) has successfully developed high-accuracy autonomous satellite navigation systems using the National Aeronautics and Space Administration's (NASA's) space and ground communications systems and the Global Positioning System (GPS). In addition, an autonomous navigation system that uses celestial object sensor measurements is currently under development and has been successfully tested using real Sun and Earth horizon measurements. The GNCC has developed advanced spacecraft systems that provide autonomous navigation and control of formation flyers in near-Earth, high-Earth, and libration point orbits. To support this effort, the GNCC is assessing the relative navigation accuracy achievable for proposed formations using GPS, intersatellite crosslink, ground-to-satellite Doppler, and celestial object sensor measurements. This paper evaluates the performance of these relative navigation approaches for three proposed missions with two or more vehicles maintaining relatively tight formations. High-fidelity simulations were performed to quantify the absolute and relative navigation accuracy as a function of navigation algorithm and measurement type. Realistically simulated measurements were processed using the extended Kalman filter implemented in the GPS Enhanced Onboard Navigation System (GEONS) flight software developed by the GSFC GNCC. Solutions obtained by simultaneously estimating all satellites in the formation were compared with the results obtained using a simpler approach based on differencing independently estimated state vectors.
Health effects of indoor odorants.
Cone, J E; Shusterman, D
1991-01-01
People assess the quality of the air indoors primarily on the basis of its odors and on their perception of associated health risk. The major current contributors to indoor odorants are human occupant odors (body odor), environmental tobacco smoke, volatile building materials, bio-odorants (particularly mold and animal-derived materials), air fresheners, deodorants, and perfumes. These are most often present as complex mixtures, making measurement of the total odorant problem difficult. There is no current method of measuring human body odor, other than by human panel studies of expert judges of air quality. Human body odors have been quantitated in terms of the "olf" which is the amount of air pollution produced by the average person. Another quantitative unit of odorants is the "decipol," which is the perceived level of pollution produced by the average human ventilated by 10 L/sec of unpolluted air or its equivalent level of dissatisfaction from nonhuman air pollutants. The standard regulatory approach, focusing on individual constituents or chemicals, is not likely to be successful in adequately controlling odorants in indoor air. Besides the current approach of setting minimum ventilation standards to prevent health effects due to indoor air pollution, a standard based on the olf or decipol unit might be more efficacious as well as simpler to measure. PMID:1821378
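A minimal sketch of the decipol arithmetic implied by the definitions above (one olf diluted by 10 L/s of unpolluted air corresponds to one decipol); the function name and example values are ours, added only for illustration.

```python
def perceived_pollution_decipol(source_olf, ventilation_l_per_s):
    """Perceived air pollution in decipol: one olf ventilated by
    10 L/s of unpolluted air corresponds to one decipol."""
    return 10.0 * source_olf / ventilation_l_per_s

# One standard occupant (1 olf) in a space ventilated at 10 L/s -> 1 decipol
print(perceived_pollution_decipol(1.0, 10.0))
# The same occupant with only 5 L/s of outdoor air -> 2 decipol
print(perceived_pollution_decipol(1.0, 5.0))
```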
Learn the Lagrangian: A Vector-Valued RKHS Approach to Identifying Lagrangian Systems.
Cheng, Ching-An; Huang, Han-Pang
2016-12-01
We study the modeling of Lagrangian systems with multiple degrees of freedom. Based on system dynamics, canonical parametric models require ad hoc derivations and sometimes simplification for a computable solution; on the other hand, due to the lack of prior knowledge of the system's structure, modern nonparametric models in machine learning face the curse of dimensionality, especially in learning large systems. In this paper, we bridge this gap by unifying the theories of Lagrangian systems and vector-valued reproducing kernel Hilbert spaces. We reformulate Lagrangian systems with kernels that embed the governing Euler-Lagrange equation (the Lagrangian kernels) and show that these kernels span a subspace capturing the Lagrangian's projection as inverse dynamics. By this property, our model uses only inputs and outputs, as in machine learning, and inherits the structured form of system dynamics, thereby removing the need for mundane derivations for new systems as well as the generalization problem of learning from scratch. In effect, it learns the system's Lagrangian, a simpler task than directly learning the dynamics. To demonstrate, we applied the proposed kernel to identify robot inverse dynamics in simulations and experiments. Our results present a competitive novel approach to identifying Lagrangian systems, despite using only inputs and outputs.
The Effect of Illumination on Stereo DTM Quality: Simulations in Support of Europa Exploration
NASA Astrophysics Data System (ADS)
Kirk, R. L.; Howington-Kraus, E.; Hare, T. M.; Jorda, L.
2016-06-01
We have investigated how the quality of stereoscopically measured topography degrades with varying illumination, in particular the ranges of incidence angles and illumination differences over which useful digital topographic models (DTMs) can be recovered. Our approach is to make high-fidelity simulated image pairs of known topography and compare DTMs from stereoanalysis of these images with the input data. Well-known rules of thumb for horizontal resolution (>3-5 pixels) and matching precision (~0.2-0.3 pixels) are generally confirmed, but the best achievable resolution at high incidence angles is ~15 pixels, probably as a result of smoothing internal to the matching algorithm. Single-pass stereo imaging of Europa is likely to yield DTMs of consistent (optimal) quality for all incidence angles ≤85°, and certainly for incidence angles between 40° and 85°. Simulations with pairs of images in which the illumination is not consistent support the utility of shadow tip distance (STD) as a measure of illumination difference, but also suggest new and simpler criteria for evaluating the suitability of stereopairs based on illumination geometry. Our study was motivated by the needs of a mission to Europa, but the approach and (to first order) the results described here are relevant to a wide range of planetary investigations.
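A commonly used rule of thumb in planetary stereo work (not stated explicitly in this abstract) converts the matching precision quoted above into an expected vertical precision of the DTM via the image scale and the stereo convergence; the sketch below uses illustrative numbers, not values from the study.

```python
def expected_vertical_precision(matching_precision_px, gsd_m, base_to_height):
    """Rule-of-thumb vertical precision of a stereo DTM: matching precision
    (pixels) times ground sample distance, divided by the stereo
    base-to-height (parallax-to-height) ratio."""
    return matching_precision_px * gsd_m / base_to_height

# e.g. 0.25-pixel matching, 0.5 m/pixel images, base/height ratio of 0.4
print(expected_vertical_precision(0.25, 0.5, 0.4))  # ~0.31 m
```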
Feature Selection Methods for Zero-Shot Learning of Neural Activity.
Caceres, Carlos A; Roos, Matthew J; Rupp, Kyle M; Milsap, Griffin; Crone, Nathan E; Wolmetz, Michael E; Ratto, Christopher R
2017-01-01
Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
Natural learning in NLDA networks.
González, Ana; Dorronsoro, José R
2007-07-01
Non Linear Discriminant Analysis (NLDA) networks combine a standard Multilayer Perceptron (MLP) transfer function with the minimization of a Fisher analysis criterion. In this work we define natural-like gradients for NLDA network training. Instead of a more principled approach, which would require the definition of an appropriate Riemannian structure on the NLDA weight space, we follow a simpler procedure, based on the observation that the gradient of the NLDA criterion function J can be written as the expectation ∇J(W) = E[Z(X,W)] of a certain random vector Z; we then define I = E[Z(X,W)Z(X,W)^T] as the Fisher information matrix in this case. This definition of I formally coincides with that of the information matrix for the MLP or other squared-error functions; the NLDA criterion J, however, does not have this structure. Although very simple, the proposed approach shows much faster convergence than standard gradient descent, even when its higher per-iteration cost is taken into account. While the faster convergence of natural MLP batch training can also be explained in terms of its relationship with the Gauss-Newton minimization method, this is not the case for NLDA training, as we show analytically and numerically that the Hessian and information matrices are different.
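A minimal NumPy sketch of the natural-like gradient step implied by these definitions, assuming a user-supplied function z_fn(x, W) that returns the per-sample vector Z(X, W) for a flattened weight vector W; the ridge term and learning rate are our additions for numerical stability, not part of the paper.

```python
import numpy as np

def natural_gradient_step(W, X, z_fn, lr=0.1, ridge=1e-6):
    """One natural-gradient-style update: estimate grad J = E[Z] and
    I = E[Z Z^T] from a sample, then move along I^{-1} grad J."""
    Z = np.array([z_fn(x, W) for x in X])   # shape (n_samples, n_params)
    grad = Z.mean(axis=0)                   # estimate of E[Z] = grad J(W)
    I = (Z.T @ Z) / len(Z)                  # estimate of E[Z Z^T]
    step = np.linalg.solve(I + ridge * np.eye(len(grad)), grad)
    return W - lr * step
```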
Proteomic profiling of early degenerative retina of RCS rats.
Zhu, Zhi-Hong; Fu, Yan; Weng, Chuan-Huang; Zhao, Cong-Jian; Yin, Zheng-Qin
2017-01-01
The aim of this study was to identify the underlying cellular and molecular changes in retinitis pigmentosa (RP). Label-free quantification-based proteomic analysis, with the advantages of being more economical and involving simpler procedures, has been used with increasing frequency in modern biological research. Dystrophic RCS rats, the first laboratory animal model for the study of RP, show a pathological course similar to that of humans with the disease. Thus, we employed a comparative proteomics approach for in-depth proteome profiling of retinas from dystrophic RCS rats and non-dystrophic congenic controls using Linear Trap Quadrupole-Orbitrap MS/MS, to identify significant differentially expressed proteins (DEPs). Bioinformatics analyses, including Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway annotation and upstream regulatory analysis, were then performed on these retinal proteins. Finally, a Western blotting experiment was carried out to verify the difference in the abundance of the transcription factor E2F1. In this study, we identified a total of 2375 protein groups from the retinal protein samples of RCS rats and non-dystrophic congenic controls. Four hundred and thirty-four significant DEPs were selected by Student's t-test. Based on the results of the bioinformatics analysis, we identified mitochondrial dysfunction and the transcription factor E2F1 as key initiating factors in the early retinal degenerative process. We showed that mitochondrial dysfunction and the transcription factor E2F1 substantially contribute to the disease etiology of RP. The results suggest a new potential therapeutic approach for this retinal degenerative disease.
Solid-phase reductive amination for glycomic analysis.
Jiang, Kuan; Zhu, He; Xiao, Cong; Liu, Ding; Edmunds, Garrett; Wen, Liuqing; Ma, Cheng; Li, Jing; Wang, Peng George
2017-04-15
Reductive amination is an indispensable method for glycomic analysis, as it greatly facilitates glycan characterization and quantification by coupling functional tags to the reducing ends of glycans. However, the traditional in-solution derivatization approach for preparing reductively aminated glycans is tedious and time-consuming. Here, a simpler and more efficient strategy, termed solid-phase reductive amination, was investigated. The general concept underlying this new approach is to streamline glycan extraction, derivatization, and purification on non-porous graphitized carbon sorbents. Neutral and sialylated standard glycans were used to test the feasibility of the solid-phase method. As a result, almost complete labeling of those glycans with four common labels, aniline, 2-aminobenzamide (2-AB), 2-aminobenzoic acid (2-AA) and 2-amino-N-(2-aminoethyl)-benzamide (AEAB), was obtained, and negligible desialylation occurred during sample preparation. The labeled glycans derived from glycoproteins showed excellent reproducibility in high-performance liquid chromatography (HPLC) and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) analysis. Direct comparisons based on fluorescent absorbance and relative quantification using isotopic labeling demonstrated that the solid-phase strategy enabled a 20-30% increase in sample recovery. In short, the solid-phase strategy is simple, reproducible, efficient, and sensitive for glycan analysis. The method was also successfully applied to N-glycan profiling of HEK 293 cells with MALDI-TOF MS, demonstrating its attractive application in high-throughput analysis of the mammalian glycome. Published by Elsevier B.V.
Gershunov, A.; Barnett, T.P.; Cayan, D.R.; Tubbs, T.; Goddard, L.
2000-01-01
Three long-range forecasting methods have been evaluated for prediction and downscaling of seasonal and intraseasonal precipitation statistics in California. Full-statistical, hybrid dynamical-statistical, and full-dynamical approaches have been used to forecast El Niño-Southern Oscillation (ENSO)-related total precipitation, daily precipitation frequency, and average intensity anomalies during the January-March season. For El Niño winters, the hybrid approach emerges as the best performer, while La Niña forecasting skill is poor. The full-statistical forecasting method features reasonable forecasting skill for both La Niña and El Niño winters. The performance of the full-dynamical approach could not be evaluated as rigorously as that of the other two forecasting schemes. Although the full-dynamical forecasting approach is expected to outperform simpler forecasting schemes in the long run, evidence is presented to conclude that, at present, the full-dynamical forecasting approach is the least viable of the three, at least in California. The authors suggest that operational forecasting of any intraseasonal temperature, precipitation, or streamflow statistic derivable from the available records is possible now for ENSO-extreme years.
Should biomedical research be like Airbnb?
Bonazzi, Vivien R; Bourne, Philip E
2017-04-01
The thesis presented here is that biomedical research is based on the trusted exchange of services. That exchange would be conducted more efficiently if the trusted software platforms to exchange those services, if they exist, were more integrated. While simpler and narrower in scope than the services governing biomedical research, comparison to existing internet-based platforms, like Airbnb, can be informative. We illustrate how the analogy to internet-based platforms works and does not work and introduce The Commons, under active development at the National Institutes of Health (NIH) and elsewhere, as an example of the move towards platforms for research.
Shock-wave generation and bubble formation in the retina by lasers
NASA Astrophysics Data System (ADS)
Sun, Jinming; Gerstman, Bernard S.; Li, Bin
2000-06-01
The generation of shock waves and bubbles has been experimentally observed due to absorption of sub-nanosecond laser pulses by melanosomes, which are found in retinal pigment epithelium cells. Both the shock waves and bubbles may be the cause of retinal damage at threshold fluence levels. The theoretical modeling of shock wave parameters such as amplitude, and bubble size, is a complicated problem due to the non-linearity of the phenomena. We have used two different approaches for treating pressure variations in water: the Tait Equation and a full Equation Of State (EOS). The Tait Equation has the advantage of being developed specifically to model pressure variations in water and is therefore simpler, quicker computationally, and allows the liquid to sustain negative pressures. Its disadvantage is that it does not allow for a change of phase, which prevents modeling of bubbles and leads to non-physical behavior such as the sustaining of ridiculously large negative pressures. The full EOS treatment includes more of the true thermodynamic behavior, such as phase changes that produce bubbles and avoids the generation of large negative pressures. Its disadvantage is that the usual stable equilibrium EOS allows for no negative pressures at all, since tensile stress is unstable with respect to a transition to the vapor phase. In addition, the EOS treatment requires longer computational times. In this paper, we compare shock wave generation for various laser pulses using the two different mathematical approaches and determine the laser pulse regime for which the simpler Tait Equation can be used with confidence. We also present results of our full EOS treatment in which both shock waves and bubbles are simultaneously modeled.
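For concreteness, a minimal sketch of the Tait-form pressure-density relation for water mentioned above; the constants B and n are typical literature values (assumptions, not taken from this work), and, as noted in the abstract, the relation returns large negative (tensile) pressures because no phase change is modelled.

```python
def tait_pressure(rho, rho0=998.0, p0=1.013e5, B=3.0e8, n=7.15):
    """Tait-form relation for water: (p + B)/(p0 + B) = (rho/rho0)**n,
    solved for p. B and n are typical literature values; no phase change
    is modelled, so tensile (negative) pressures are allowed."""
    return (p0 + B) * (rho / rho0) ** n - B

# Small compression (+1%) and rarefaction (-1%) of water
print(tait_pressure(998.0 * 1.01))   # roughly +2e7 Pa
print(tait_pressure(998.0 * 0.99))   # roughly -2e7 Pa (tension, no cavitation)
```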
Selection of Worst-Case Pesticide Leaching Scenarios for Pesticide Registration
NASA Astrophysics Data System (ADS)
Vereecken, H.; Tiktak, A.; Boesten, J.; Vanderborght, J.
2010-12-01
The use of pesticides, fertilizers and manure in intensive agriculture may have a negative impact on the quality of ground- and surface water resources. Legislative action has been undertaken in many countries to protect surface and groundwater resources from contamination by surface-applied agrochemicals. Of particular concern are pesticides. The registration procedure plays an important role in the regulation of pesticide use in the European Union. In order to register a certain pesticide use, the notifier needs to prove that the use does not entail a risk of groundwater contamination. Therefore, leaching concentrations of the pesticide need to be assessed using model simulations for so-called worst-case scenarios. In the current procedure, a worst-case scenario represents a parameterized pesticide fate model for a certain soil and a certain time series of weather conditions that tries to represent all relevant processes, such as transient water flow, root water uptake, pesticide transport, sorption, decay and volatilisation, as accurately as possible. Since this model has been parameterized for only one soil and weather time series, it is uncertain whether it represents a worst-case condition for a certain pesticide use. We discuss an alternative approach that uses a simpler model requiring less detailed information about the soil and weather conditions but still representing the effect of soil and climate on pesticide leaching, using information that is available for the entire European Union. A comparison between the two approaches demonstrates that the higher precision that the detailed model provides for the prediction of pesticide leaching at a certain site is counteracted by its lower accuracy in representing a worst-case condition. The simpler model predicts leaching concentrations less precisely at a given site but has complete coverage of the area, so that it selects a worst-case condition more accurately.
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
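A minimal sketch of the pre-sampling simulation idea, assuming counts follow a negative binomial with mean mu and clumping parameter k estimated from pre-sampling data; this illustrates the general approach only, it is not the authors' software, and all names and values are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def samples_to_converge(mu, k, tol=0.10, n_max=200):
    """One simulated survey: smallest sample size from which the running
    mean stays within +/- tol of the true mean mu (capped at n_max)."""
    p = k / (k + mu)                      # NumPy's (n, p) parametrisation of NB
    counts = rng.negative_binomial(k, p, size=n_max)
    means = np.cumsum(counts) / np.arange(1, n_max + 1)
    bad = np.where(np.abs(means - mu) / mu > tol)[0]
    return 1 if bad.size == 0 else min(bad[-1] + 2, n_max)

# Average sample size needed over many simulated surveys
print(np.mean([samples_to_converge(mu=5.0, k=1.5) for _ in range(1000)]))
```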
Gorman, Jamie C; Crites, Michael J
2013-08-01
We report an experiment in which we investigated differential transfer between unimanual (one-handed), bimanual (two-handed), and intermanual (different people's hands) coordination modes. People perform manual tasks faster in some coordination modes than in others ("mode effects"). However, little is known about transfer between coordination modes. To investigate differential transfer, we draw hypotheses from two perspectives, information based and constraint based, on bimanual and interpersonal coordination and skill acquisition. Participants drove a teleoperated rover around a circular path in sets of two 2-min trials using two of the different coordination modes. Speed and variability of the rover's path were measured. Order of coordination modes was manipulated to examine differential transfer and mode effects. Differential transfer analyses revealed patterns of positive transfer from simpler (localized spatiotemporal constraints) to more complex (distributed spatiotemporal constraints) coordination modes, paired with negative transfer in the opposite direction. Mode effects indicated that intermanual performance was significantly faster than unimanual performance, and bimanual performance was intermediate. Importantly, all of these effects disappeared with practice. The observed patterns of differential transfer between coordination modes may be better accounted for by a constraint-based explanation of differential transfer than by an information-based one. Mode effects may be attributable to anticipatory movements based on dyads' access to mutual visual information. Although people may be faster using more complex coordination modes, when operators transition between modes, they may be more effective transitioning from simpler (e.g., bimanual) to more complex (e.g., intermanual) modes than vice versa. However, this difference may be critical only for novel or rarely practiced tasks.
About the mechanism of ERP-system pilot test
NASA Astrophysics Data System (ADS)
Mitkov, V. V.; Zimin, V. V.
2018-05-01
In this paper, the mathematical problem of defining the scope of an ERP-system pilot test is stated; it is a quadratic programming task. The solution procedure uses the method of network programming, based on a structurally similar network representation of the criterion and the constraints, which reduces the original problem to a sequence of simpler evaluation tasks. The evaluation tasks are solved by the method of dichotomous programming.
Handling Quality Requirements for Advanced Aircraft Design: Longitudinal Mode
1979-08-01
Air Force Flight Dynamics Laboratory, Air Force Wright Aeronautical Laboratories, Wright-Patterson Air Force Base
…phases of air-to-air combat, for example). This is far simpler than the general problem of control law definition. However, the results of such…
Quantification of Viral and Prokaryotic Production Rates in Benthic Ecosystems: A Methods Comparison
Rastelli, Eugenio; Dell’Anno, Antonio; Corinaldesi, Cinzia; Middelboe, Mathias; Noble, Rachel T.; Danovaro, Roberto
2016-01-01
Viruses profoundly influence benthic marine ecosystems by infecting and subsequently killing their prokaryotic hosts, thereby impacting the cycling of carbon and nutrients. Previous studies, based on different methodologies, have provided widely differing estimates of the impact of viruses on benthic prokaryotes. There has been no attempt so far to compare these independent approaches, including contextual comparisons among different approaches for sample manipulation (i.e., dilution or not of the sediments during incubations), between methods based on epifluorescence microscopy (EFM) or radiotracers, and between the use of different radiotracers. Therefore, it has been difficult to identify the most suitable methodologies and protocols to be used as standard approaches for the quantification of viral infections of prokaryotes. Here, we compared for the first time different methods for determining viral and prokaryotic production rates in marine sediments collected at two benthic sites differing in depth and environmental conditions. We used a highly replicated experimental design, testing the potential biases associated with the incubation of sediments as diluted or undiluted. In parallel, we also compared EFM counts with ³H-thymidine incubations for the determination of viral production rates, and the use of ³H-thymidine versus ³H-leucine radiotracers for the determination of prokaryotic production. We show here that, independent of sediment dilution, EFM-based values of viral production ranged from 1.4 to 4.6 × 10⁷ viruses g⁻¹ h⁻¹, and were similar but overall less variable compared to those obtained by the ³H-thymidine method (0.3 to 9.0 × 10⁷ viruses g⁻¹ h⁻¹). In addition, the prokaryotic production rates were not affected by sediment dilution, and the use of different radiotracers provided very consistent estimates (10.3–35.1 and 9.3–34.6 ng C g⁻¹ h⁻¹ using the ³H-thymidine or ³H-leucine method, respectively). These results indicated that viral lysis was responsible for the abatement of 55–81% of the prokaryotic heterotrophic production, corroborating previous findings of the major role of viruses in benthic deep-sea ecosystems. Moreover, our methodological comparison for the analysis of viral production in marine sediments suggests that microscopy-based approaches are simpler and more cost-effective than those based on radiotracers. These approaches also reduce time to results and overcome issues related to the generation of radioactive waste. PMID:27713739
On the convergence of the coupled-wave approach for lamellar diffraction gratings
NASA Technical Reports Server (NTRS)
Li, Lifeng; Haggans, Charles W.
1992-01-01
Among the many existing rigorous methods for analyzing diffraction of electromagnetic waves by diffraction gratings, the coupled-wave approach stands out because of its versatility and simplicity. It can be applied to volume gratings and surface relief gratings, and its numerical implementation is much simpler than that of other methods. In addition, its predictions have been experimentally validated in several cases. These facts explain the popularity of the coupled-wave approach among many optical engineers in the field of diffractive optics. However, a comprehensive analysis of the convergence of the model predictions has never been presented, although several authors have recently reported convergence difficulties with the model when it is used for metallic gratings in TM polarization. Herein, three points are made: (1) in the TM case, the coupled-wave approach converges much more slowly than the modal approach of Botten et al.; (2) the slow convergence is caused by the use of Fourier expansions for the permittivity and the fields in the grating region; and (3) this slow convergence is manifested in the slow convergence of the eigenvalues and the associated modal fields. The reader is assumed to be familiar with the mathematical formulations of the coupled-wave approach and the modal approach.
What do we gain from simplicity versus complexity in species distribution models?
Merow, Cory; Smith, Matthew J.; Edwards, Thomas C.; Guisan, Antoine; McMahon, Sean M.; Normand, Signe; Thuiller, Wilfried; Wuest, Rafael O.; Zimmermann, Niklaus E.; Elith, Jane
2014-01-01
Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence–environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence–environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building ‘under fit’ models, having insufficient flexibility to describe observed occurrence–environment relationships, we risk misunderstanding the factors shaping species distributions. By building ‘over fit’ models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing under fitting with over fitting and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species ranges.
ASSESSING THE INFLUENCE OF THE SOLAR ORBIT ON TERRESTRIAL BIODIVERSITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, F.; Bailer-Jones, C. A. L.
The terrestrial record shows a significant variation in the extinction and origination rates of species during the past half-billion years. Numerous studies have claimed an association between this variation and the motion of the Sun around the Galaxy, invoking the modulation of cosmic rays, gamma rays, and comet impact frequency as a cause of this biodiversity variation. However, some of these studies exhibit methodological problems, or were based on coarse assumptions (such as a strict periodicity of the solar orbit). Here we investigate this link in more detail, using a model of the Galaxy to reconstruct the solar orbit and thus to build a predictive model of the temporal variation of the extinction rate due to astronomical mechanisms. We compare these predictions, as well as those of various reference models, with paleontological data. Our approach involves Bayesian model comparison, which takes into account the uncertainties in the paleontological data as well as the distribution of solar orbits consistent with the uncertainties in the astronomical data. We find that various versions of the orbital model are not favored beyond simpler reference models. In particular, the distribution of mass extinction events can be explained just as well by a uniform random distribution as by any other model tested. Although our negative results on the orbital model are robust to changes in the Galaxy model, the Sun's coordinates, and the errors in the data, we also find that it would be very difficult to positively identify the orbital model even if it were the true one. (In contrast, we do find evidence against simpler periodic models.) Thus, while we cannot rule out there being some connection between solar motion and biodiversity variations on the Earth, we conclude that it is difficult to give convincing positive conclusions of such a connection using current data.
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-07-01
Ocean biogeochemistry (OBGC) models span a wide range of complexities from highly simplified, nutrient-restoring schemes, through nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, through to models that represent a broader trophic structure by grouping organisms as plankton functional types (PFT) based on their biogeochemical role (Dynamic Green Ocean Models; DGOM) and ecosystem models which group organisms by ecological function and trait. OBGC models are now integral components of Earth System Models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here, we present an inter-comparison of six OBGC models that were candidates for implementation within the next UK Earth System Model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the Nucleus for the European Modelling of the Ocean (NEMO) ocean general circulation model (GCM), and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform or underperform all other models across all metrics. Nonetheless, the simpler models that are easier to tune are broadly closer to observations across a number of fields, and thus offer a high-efficiency option for ESMs that prioritise high resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low resolution climate dynamics and high complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-12-01
Ocean biogeochemistry (OBGC) models span a wide variety of complexities, including highly simplified nutrient-restoring schemes, nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, models that represent a broader trophic structure by grouping organisms as plankton functional types (PFTs) based on their biogeochemical role (dynamic green ocean models) and ecosystem models that group organisms by ecological function and trait. OBGC models are now integral components of Earth system models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here we present an intercomparison of six OBGC models that were candidates for implementation within the next UK Earth system model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the ocean general circulation model Nucleus for European Modelling of the Ocean (NEMO) and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform all other models across all metrics. Nonetheless, the simpler models are broadly closer to observations across a number of fields and thus offer a high-efficiency option for ESMs that prioritise high-resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low-resolution climate dynamics and high-complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
Application of Calspan pitch rate control system to the Space Shuttle for approach and landing
NASA Technical Reports Server (NTRS)
Weingarten, N. C.; Chalk, C. R.
1983-01-01
A pitch rate control system designed for use in the shuttle during approach and landing was analyzed and compared with a revised control system developed by NASA and the existing OFT control system. The design concept control system uses filtered pitch rate feedback with proportional plus integral paths in the forward loop. Control system parameters were designed as a function of flight configuration. Analysis included time and frequency domain techniques. Results indicate that both the Calspan and NASA systems significantly improve the flying qualities of the shuttle over the OFT. Better attitude and flight path control and less time delay are the primary reasons. The Calspan system is preferred because of reduced time delay and simpler mechanization. Further testing of the improved flight control systems in an in-flight simulator is recommended.
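As a rough, discrete-time sketch of a filtered-pitch-rate command path with proportional plus integral forward loops (the general structure described above, not the actual Calspan design), consider the following; all gains, the filter time constant, and names are illustrative placeholders.

```python
class PitchRatePI:
    """Filtered pitch-rate feedback with proportional + integral forward paths.
    Gains and the filter time constant are illustrative placeholders."""
    def __init__(self, kp=1.0, ki=0.5, tau_filt=0.1, dt=0.02):
        self.kp, self.ki, self.tau, self.dt = kp, ki, tau_filt, dt
        self.q_filt = 0.0   # filtered pitch rate (rad/s)
        self.integ = 0.0    # integral of the rate error

    def step(self, q_cmd, q_meas):
        # first-order low-pass filter on the measured pitch rate
        alpha = self.dt / (self.tau + self.dt)
        self.q_filt += alpha * (q_meas - self.q_filt)
        err = q_cmd - self.q_filt
        self.integ += err * self.dt
        # elevator command from the proportional and integral paths
        return self.kp * err + self.ki * self.integ
```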
Khan, Wasim S; Hardingham, Timothy E
2012-01-01
Tissue is frequently damaged or lost through injury and disease. There has been increasing interest in stem cell applications and tissue engineering approaches in surgical practice to deal with damaged or lost tissue. Although there have been developments in almost all surgical disciplines, the greatest advances are being made in orthopaedics, especially in cartilage repair. This is due to many factors, including familiarity with bone marrow-derived mesenchymal stem cells and the fact that cartilage is a relatively simple tissue to engineer. Unfortunately, significant hurdles remain to be overcome in many areas before tissue engineering becomes more routinely used in clinical practice. In this paper we discuss the structure, function and embryology of cartilage and osteoarthritis. This is followed by a review of current treatment strategies for the repair of cartilage and the use of tissue engineering.
Nonperturbative dynamics of scalar field theories through the Feynman-Schwinger representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetin Savkli; Franz Gross; John Tjon
2004-04-01
In this paper we present a summary of results obtained for scalar field theories using the Feynman-Schwinger representation (FSR) approach. Specifically, scalar QED and χ²φ theories are considered. The motivation behind the applications discussed in this paper is to use the FSR method as a rigorous tool for testing the quality of commonly used approximations in field theory. Exact calculations in a quenched theory are presented for one-, two-, and three-body bound states. The results obtained indicate that some of the commonly used approximations, such as the Bethe-Salpeter ladder summation for bound states and the rainbow summation for one-body problems, produce significantly different results from those obtained with the FSR approach. We find that more accurate results can be obtained using other, simpler, approximation schemes.
Radiomic analysis in prediction of Human Papilloma Virus status.
Yu, Kaixian; Zhang, Youyi; Yu, Yang; Huang, Chao; Liu, Rongjie; Li, Tengfei; Yang, Liuqing; Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Zhu, Hongtu
2017-12-01
Human Papilloma Virus (HPV) has been associated with oropharyngeal cancer prognosis. Traditionally, HPV status has been determined through an invasive laboratory test. Recently, the rapid development of statistical image analysis techniques has enabled precise quantitative analysis of medical images. Quantitative analysis of Computed Tomography (CT) images provides a non-invasive way to assess HPV status for oropharynx cancer patients. We designed a statistical radiomics approach analyzing CT images to predict HPV status. Various radiomic features were extracted from CT scans and analyzed using statistical feature selection and prediction methods. Our approach ranked the highest in the 2016 Medical Image Computing and Computer Assisted Intervention (MICCAI) grand challenge: Oropharynx Cancer (OPC) Radiomics Challenge, Human Papilloma Virus (HPV) Status Prediction. Further analysis of the most relevant radiomic features distinguishing HPV positive and negative subjects suggested that HPV positive patients usually have smaller and simpler tumors.
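As a generic illustration of this kind of radiomics pipeline (feature extraction followed by statistical feature selection and prediction), here is a minimal scikit-learn sketch on synthetic stand-in data; it is not the authors' winning model, and the feature counts and classifier choice are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a radiomics table: rows = patients, columns = features
X, y = make_classification(n_samples=120, n_features=200, n_informative=10,
                           random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),                 # normalise feature ranges
    ("select", SelectKBest(f_classif, k=20)),    # keep the 20 most discriminative features
    ("clf", LogisticRegression(max_iter=1000)),  # simple binary classifier (HPV +/-)
])

auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC on synthetic data: {auc:.3f}")
```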
Virtual decoupling flight control via real-time trajectory synthesis and tracking
NASA Astrophysics Data System (ADS)
Zhang, Xuefu
The production of the General Aviation industry has declined in the past 25 years. Ironically, however, the increasing demand for air travel as a fast, safe, and high-quality mode of transportation has been far from satisfied. Addressing this demand shortfall with personal air transportation necessitates advanced systems for navigation, guidance, control, flight management, and flight traffic control. Among them, an effective decoupling flight control system will not only improve flight quality, safety, and simplicity, and increase air space usage, but also reduce expenses on pilot initial and recurrent training, and thus expand the current market and open new markets. Because of the formidable difficulties encountered in the actual decoupling of non-linear, time-variant, and highly coupled flight control systems through traditional approaches, a new approach, which essentially converts the decoupling problem into a real-time trajectory synthesis and tracking problem, is employed. The converted problem is then solved and a virtual decoupling effect is achieved. In this approach, a trajectory in inertial space can be predefined and dynamically modified based on the flight mission and the pilot's commands. A feedforward-feedback control architecture is constructed to guide the airplane along the trajectory as precisely as possible. Through this approach, the pilot has much simpler, virtually decoupled control of the airplane in terms of speed, flight path angle and horizontal radius of curvature. To verify and evaluate this approach, extensive computer simulations were performed. A large number of test cases were designed for flight control under different flight conditions. The simulation results show that our decoupling strategy is satisfactory and promising, and the research can therefore serve as a consolidated foundation for future practical applications.
Efficient discovery of risk patterns in medical data.
Li, Jiuyong; Fu, Ada Wai-chee; Fahey, Paul
2009-01-01
This paper studies the problem of efficiently discovering risk patterns in medical data. Risk patterns are defined by a statistical metric, relative risk, which has been widely used in epidemiological research. To avoid fruitless search in the complete exploration of risk patterns, we define the optimal risk pattern set to exclude superfluous patterns, i.e. complicated patterns with lower relative risk than their corresponding simpler forms. We prove that mining optimal risk pattern sets conforms to an anti-monotone property that supports an efficient mining algorithm. We propose an efficient algorithm for mining optimal risk pattern sets based on this property. We also propose a hierarchical structure to present discovered patterns for easy perusal by domain experts. The proposed approach is compared with two well-known rule discovery methods, decision tree and association rule mining, on benchmark data sets and applied to a real-world application. The proposed method discovers more and better-quality risk patterns than a decision tree approach; the decision tree method is not designed for such applications and is inadequate for pattern exploration. The proposed method does not discover a large number of uninteresting superfluous patterns as an association mining approach does, and it is more efficient than an association rule mining method. A real-world case study shows that the method reveals some interesting risk patterns to medical practitioners. The proposed method is an efficient approach to exploring risk patterns. It quickly identifies cohorts of patients that are vulnerable to a risk outcome from a large data set. The proposed method is useful for exploratory studies on large medical data sets to generate and refine hypotheses, and for designing medical surveillance systems.
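As a minimal illustration of the relative-risk metric that defines risk patterns here, the sketch below computes the relative risk of an outcome for records matching a candidate pattern versus those that do not; the column names and toy data are hypothetical, and this is not the mining algorithm itself.

```python
import pandas as pd

def relative_risk(df, pattern, outcome_col="outcome"):
    """Relative risk of the outcome for records matching `pattern`
    (a dict of column -> value) versus records that do not match it."""
    match = pd.Series(True, index=df.index)
    for col, val in pattern.items():
        match &= df[col] == val
    risk_in = df.loc[match, outcome_col].mean()    # P(outcome | pattern)
    risk_out = df.loc[~match, outcome_col].mean()  # P(outcome | not pattern)
    return risk_in / risk_out

df = pd.DataFrame({"smoker": [1, 1, 0, 0, 1, 0],
                   "age65":  [1, 0, 1, 0, 1, 0],
                   "outcome": [1, 0, 1, 0, 1, 0]})
print(relative_risk(df, {"smoker": 1}))  # 2.0 in this toy data
```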
Johnson, Blake; Runyon, Michael; Weekes, Anthony; Pearson, David
2018-01-01
Out-of-hospital cardiac arrest has high rates of morbidity and mortality, and a growing body of evidence is redefining our approach to the resuscitation of these high-risk patients. Team-focused cardiopulmonary resuscitation (TFCPR), most commonly deployed and described by prehospital care providers, is a focused approach to cardiac arrest care that emphasizes early defibrillation and high-quality, minimally interrupted chest compressions while de-emphasizing endotracheal intubation and intravenous drug administration. TFCPR is associated with statistically significant increases in survival to hospital admission, survival to hospital discharge, and survival with good neurologic outcome; however, the adoption of similar streamlined resuscitation approaches by emergency physicians has not been widely reported. In the absence of a deliberately streamlined approach, such as TFCPR, other advanced therapies and procedures that have not shown similar survival benefit may be prioritized at the expense of simpler evidence-based interventions. This review examines the current literature on cardiac arrest resuscitation. The recent prehospital success of TFCPR is highlighted, including the associated improvements in multiple patient-centered outcomes. The adaptability of TFCPR to the emergency department (ED) setting is also discussed in detail. Finally, we discuss advanced interventions frequently performed during ED cardiac arrest resuscitation that may interfere with early defibrillation and effective high-quality chest compressions. TFCPR has been associated with improved patient outcomes in the prehospital setting. The data are less compelling for other commonly used advanced resuscitation tools and procedures. Emergency physicians should consider incorporating the TFCPR approach into ED cardiac arrest resuscitation to optimize delivery of those interventions most associated with improved outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.
Spontaneous emergence of milling (vortex state) in a Vicsek-like model
NASA Astrophysics Data System (ADS)
Costanzo, A.; Hemelrijk, C. K.
2018-04-01
Collective motion is of interest to laymen and scientists in different fields. In groups of animals, many patterns of collective motion arise, such as polarized schools and mills (i.e. circular motion). Collective motion can be generated in computational models of different degrees of complexity. In these models, moving individuals coordinate with others nearby. In the more complex models, individuals attract each other, align their headings, and avoid collisions. Simpler models may include only one or two of these types of interactions. The collective pattern that interests us here is milling, which is observed in many animal species. It has been reproduced in the more complex models, but not in simpler models that are based only on alignment, such as the well-known Vicsek model. Our aim is to provide insight into the minimal conditions required for milling by making minimal modifications to the Vicsek model. Our results show that milling occurs when both the field of view and the maximal angular velocity are decreased. Remarkably, apart from milling, our minimal model also exhibits many of the other patterns of collective motion observed in animal groups.
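To make the minimal modification concrete, here is a small NumPy sketch of a Vicsek-like alignment model in which each individual sees only neighbours within a restricted field of view and can turn by at most a capped angle per step; all parameter values are illustrative and not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(pos, theta, L=20.0, r=1.0, v0=0.3, noise=0.1,
         fov=np.pi, max_turn=0.2):
    """One update of a Vicsek-like alignment model with a restricted field
    of view (half-angle fov/2 around the heading) and a capped turn angle."""
    n = len(theta)
    new_theta = theta.copy()
    for i in range(n):
        d = pos - pos[i]
        d -= L * np.round(d / L)                       # periodic boundaries
        dist = np.hypot(d[:, 0], d[:, 1])
        bearing = np.arctan2(d[:, 1], d[:, 0]) - theta[i]
        bearing = (bearing + np.pi) % (2 * np.pi) - np.pi
        vis = (dist < r) & ((np.abs(bearing) <= fov / 2) | (dist == 0.0))
        # average heading of visible neighbours (self is always included)
        avg = np.arctan2(np.sin(theta[vis]).mean(), np.cos(theta[vis]).mean())
        turn = (avg - theta[i] + noise * rng.uniform(-np.pi, np.pi)
                + np.pi) % (2 * np.pi) - np.pi
        new_theta[i] = theta[i] + np.clip(turn, -max_turn, max_turn)
    new_pos = (pos + v0 * np.column_stack([np.cos(new_theta),
                                           np.sin(new_theta)])) % L
    return new_pos, new_theta

pos = rng.uniform(0, 20, size=(100, 2))
theta = rng.uniform(-np.pi, np.pi, size=100)
for _ in range(500):
    pos, theta = step(pos, theta)
```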
On macromolecular refinement at subatomic resolution with interatomic scatterers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V., E-mail: pafonine@lbl.gov; Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2007-11-01
Modelling deformation electron density using interatomic scatterers is simpler than multipolar methods, produces comparable results at subatomic resolution and can easily be applied to macromolecules. A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the amount of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
Lack, Justin B; Cardeno, Charis M; Crepeau, Marc W; Taylor, William; Corbett-Detig, Russell B; Stevens, Kristian A; Langley, Charles H; Pool, John E
2015-04-01
Hundreds of wild-derived Drosophila melanogaster genomes have been published, but rigorous comparisons across data sets are precluded by differences in alignment methodology. The most common approach to reference-based genome assembly is a single round of alignment followed by quality filtering and variant detection. We evaluated variations and extensions of this approach and settled on an assembly strategy that utilizes two alignment programs and incorporates both substitutions and short indels to construct an updated reference for a second round of mapping prior to final variant detection. Utilizing this approach, we reassembled published D. melanogaster population genomic data sets and added unpublished genomes from several sub-Saharan populations. Most notably, we present aligned data from phase 3 of the Drosophila Population Genomics Project (DPGP3), which provides 197 genomes from a single ancestral range population of D. melanogaster (from Zambia). The large sample size, high genetic diversity, and potentially simpler demographic history of the DPGP3 sample will make this a highly valuable resource for fundamental population genetic research. The complete set of assemblies described here, termed the Drosophila Genome Nexus, presently comprises 623 consistently aligned genomes and is publicly available in multiple formats with supporting documentation and bioinformatic tools. This resource will greatly facilitate population genomic analysis in this model species by reducing the methodological differences between data sets. Copyright © 2015 by the Genetics Society of America.
Health surveillance for occupational asthma in the UK.
Fishwick, D; Sen, D; Barker, P; Codling, A; Fox, D; Naylor, S
2016-07-01
Periodic health surveillance (HS) of workers can identify early cases of occupational asthma, but information about its uptake and content in the UK is lacking. The aim of this study was to identify the overall levels of uptake and quality of HS for occupational asthma within three high-risk industry sectors in the UK. A telephone survey of employers and their occupational health (OH) professionals was carried out in three sectors with exposures potentially capable of causing occupational asthma (bakeries, woodworking and motor vehicle repair). A total of 457 organizations participated (31% response rate). About 77% employed <10 people, 17% between 10 and 50 and 6% >50. Risk assessments were common (67%) and 14% carried out some form of HS for occupational asthma, rising to 19% if only organizations reporting asthma hazards and risks were considered. HS was carried out both by in-house (31%) and external providers (69%). Organizational policies were often used to define HS approaches (80%), but were infrequently shared with the OH provider. OH providers described considerable variation in practice. Record keeping was universal, but worker-held records were not reported. HS tools were generally developed in-house. Lung function was commonly measured, but only limited interpretation was evident. Referral of workers to local specialist respiratory services was variable. This study provides new insights into the real world of HS for occupational asthma. We consider that future work could and should define simpler, more practical and evidence-based approaches to HS to ensure maximal consistency and use of high-quality approaches. © Crown copyright 2016.
Uncertainty in spatially explicit animal dispersal models
Mooij, Wolf M.; DeAngelis, Donald L.
2003-01-01
Uncertainty in estimates of survival of dispersing animals is a vexing difficulty in conservation biology. The current notion is that this uncertainty decreases the usefulness of spatially explicit population models in particular. We examined this problem by comparing dispersal models of three levels of complexity: (1) an event-based binomial model that considers only the occurrence of mortality or arrival, (2) a temporally explicit exponential model that employs mortality and arrival rates, and (3) a spatially explicit grid-walk model that simulates the movement of animals through an artificial landscape. Each model was fitted to the same set of field data. A first objective of the paper is to illustrate how the maximum-likelihood method can be used in all three cases to estimate the means and confidence limits for the relevant model parameters, given a particular set of data on dispersal survival. Using this framework we show that the structure of the uncertainty for all three models is strikingly similar. In fact, the results of our unified approach imply that spatially explicit dispersal models, which take advantage of information on landscape details, suffer less from uncertainty than do simpler models. Moreover, we show that the proposed strategy of model development safeguards one from error propagation in these more complex models. Finally, our approach shows that all models related to animal dispersal, ranging from simple to complex, can be related in a hierarchical fashion, so that the various approaches to modeling such dispersal can be viewed from a unified perspective.
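The maximum-likelihood framework described above can be illustrated for the simplest of the three models. The following is a minimal sketch, not the authors' code: it estimates the arrival probability of the event-based binomial model and derives a profile-likelihood confidence interval; the counts used are made-up illustrative numbers.

```python
"""Minimal sketch of maximum-likelihood estimation for an event-based binomial
dispersal model: k of n dispersers arrive, and we estimate the arrival probability p
with a likelihood-ratio (profile-likelihood) 95% confidence interval."""
import numpy as np
from scipy.stats import binom, chi2

n, k = 120, 78                      # hypothetical counts: dispersers released / arrived alive

def loglik(p):
    return binom.logpmf(k, n, p)    # binomial log-likelihood of the data

p_hat = k / n                       # the binomial MLE has a closed form
# Keep all p whose log-likelihood lies within chi2(1)/2 of the maximum
grid = np.linspace(1e-4, 1 - 1e-4, 2000)
keep = loglik(grid) >= loglik(p_hat) - chi2.ppf(0.95, df=1) / 2
ci = (grid[keep].min(), grid[keep].max())
print(f"p_hat = {p_hat:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```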
Quantum private query based on single-photon interference
NASA Astrophysics Data System (ADS)
Xu, Sheng-Wei; Sun, Ying; Lin, Song
2016-08-01
Quantum private query (QPQ) has become a research hotspot recently. In particular, the quantum key distribution (QKD)-based QPQ attracts much attention because of its practicality. Various QPQ protocols of this kind have been proposed, based on different technologies of quantum communications. Single-photon interference is one such technology, on which the well-known GV95 QKD protocol is based. In this paper, we propose two QPQ protocols based on single-photon interference. The first one is simpler and easier to realize, and the second one is loss tolerant and flexible, and more practical than the first one. Furthermore, we analyze both the user privacy and the database privacy in the proposed protocols.
Comparison of frequency-domain and time-domain rotorcraft vibration control methods
NASA Technical Reports Server (NTRS)
Gupta, N. K.
1984-01-01
Active control of rotor-induced vibration in rotorcraft has received significant attention recently. Two classes of techniques have been proposed. The more developed approach works with harmonic analysis of measured time histories and is called the frequency-domain approach. The more recent approach computes the control input directly using the measured time history data and is called the time-domain approach. The report summarizes the results of a theoretical investigation to compare the two approaches. Five specific areas were addressed: (1) techniques to derive models needed for control design (system identification methods), (2) robustness with respect to errors, (3) transient response, (4) susceptibility to noise, and (5) implementation difficulties. The system identification methods are more difficult for the time-domain models. The time-domain approach is more robust (e.g., has higher gain and phase margins) than the frequency-domain approach. It might thus be possible to avoid doing real-time system identification in the time-domain approach by storing models at a number of flight conditions. The most significant error source is the variation in open-loop vibrations caused by pilot inputs, maneuvers or gusts. The implementation requirements are similar except that the time-domain approach can be much simpler to implement if real-time system identification were not necessary.
An integrated microcombustor and photonic crystal emitter for thermophotovoltaics
NASA Astrophysics Data System (ADS)
Chan, Walker R.; Stelmakh, Veronika; Allmon, William R.; Waits, Christopher M.; Soljacic, Marin; Joannopoulos, John D.; Celanovic, Ivan
2016-11-01
Thermophotovoltaic (TPV) energy conversion is appealing for portable millimeter-scale generators because of its simplicity, but it relies on high temperatures. The performance and reliability of the high-temperature components, a microcombustor and a photonic crystal emitter, have proven challenging because they are subjected to 1000-1200°C and stresses arising from thermal expansion mismatches. In this paper, we adopt the industrial process of diffusion brazing to fabricate an integrated microcombustor and photonic crystal by bonding stacked metal layers. Diffusion brazing is simpler and faster than previous approaches of silicon MEMS and welded metal, and the end result is more robust.
Analytical methods to predict liquid congealing in ram air heat exchangers during cold operation
NASA Astrophysics Data System (ADS)
Coleman, Kenneth; Kosson, Robert
1989-07-01
Ram air heat exchangers used to cool liquids such as lube oils or Ethylene-Glycol/water solutions can be subject to congealing in very cold ambients, resulting in a loss of cooling capability. Two-dimensional, transient analytical models have been developed to explore this phenomenon with both continuous and staggered fin cores. Staggered fin predictions are compared to flight test data from the E-2C Allison T56 engine lube oil system during winter conditions. For simpler calculations, a viscosity ratio correction was introduced and found to provide reasonable cold ambient performance predictions for the staggered fin core, using a one-dimensional approach.
Wang, Shuo; Jeon, Oju; Shankles, Peter G.; ...
2016-02-03
Here, we present a simple microfluidic technique to in-situ photopolymerize (by 365 nm ultraviolet) monodisperse oxidized methacrylated alginate (OMA) microgels using a photoinitiator (VA-086). By this technique, we generated monodisperse spherical OMA beads and discoid non-spherical beads with better shape consistency than ionic crosslinking methods do. We found that a high monomer concentration (8 w/v %), a high photoinitiator concentration (1.5 w/v %) and absence of oxygen are critical factors to cure OMA microgels. This photopolymerizing method is an alternative to current methods to form alginate microgels and is a simpler approach to generate non-spherical alginate microgels.
Quantitative Diagnosis of Continuous-Valued, Steady-State Systems
NASA Technical Reports Server (NTRS)
Rouquette, N.
1995-01-01
Quantitative diagnosis involves numerically estimating the values of unobservable parameters that best explain the observed parameter values. We consider quantitative diagnosis for continuous, lumped-parameter, steady-state physical systems because such models are easy to construct and the diagnosis problem is considerably simpler than that for corresponding dynamic models. To further tackle the difficulties of numerically inverting a simulation model to compute a diagnosis, we propose to decompose a physical system model in terms of feedback loops. This decomposition reduces the dimension of the problem and consequently decreases the diagnosis search space. We illustrate this approach on a model of a thermal control system studied in earlier research.
NASA Technical Reports Server (NTRS)
Martini, W. R.
1980-01-01
Four fully disclosed reference engines and five design methods are discussed. So far, the agreement between theory and experiment is about as good for the simpler calculation methods as it is for the more complicated methods, that is, within 20%. For the simpler methods, a single adjustable constant can be used to reduce the error in predicting power output and efficiency over the entire operating map to less than 10%.
Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe
2013-11-01
In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it has a better performance in driving the oxidation ditch than the original one, with higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. The improved momentum source term approach to simulate the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four turbulence models were investigated with the approach, including the standard k - ɛ model, RNG k - ɛ model, realizable k - ɛ model, and Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame (MRF) approach and the sliding mesh (SM) approach. Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than that of MRF and close to that of SM. It is also found that the momentum source term approach has lower computational expenses, is simpler to preprocess, and is easier to use.
A New SEYHAN's Approach in Case of Heterogeneity of Regression Slopes in ANCOVA.
Ankarali, Handan; Cangur, Sengul; Ankarali, Seyit
2018-06-01
In this study, when the assumptions of linearity and homogeneity of regression slopes of conventional ANCOVA are not met, a new approach named SEYHAN has been suggested so that conventional ANCOVA can be used instead of robust or nonlinear ANCOVA. The proposed SEYHAN's approach involves transformation of the continuous covariate into a categorical structure when the relationship between covariate and dependent variable is nonlinear and the regression slopes are not homogeneous. A simulated data set was used to explain SEYHAN's approach. In this approach, after the MARS method was used for categorization of the covariate, we performed conventional ANCOVA in each subgroup constituted according to the knot values, as well as an analysis of variance with a two-factor model. The first model is simpler than the second model, which includes an interaction term. Since the model with the interaction effect uses more subjects, the power of the test also increases and the existing significant difference is revealed better. With the help of this approach, linearity and homogeneity of regression slopes are no longer a problem for data analysis with the conventional linear ANCOVA model. It can be used quickly and efficiently in the presence of one or more covariates.
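The workflow above, categorize the covariate at data-driven knot values and then fit a factorial model with interaction, can be sketched as follows. This is a hedged illustration with simulated data and assumed knot values (a MARS implementation such as py-earth would normally supply the knots); it is not the authors' code.

```python
"""Sketch of a SEYHAN-style analysis: cut the continuous covariate at (assumed)
knot values and fit a two-factor model with interaction by ordinary ANOVA."""
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "group": rng.choice(["control", "treatment"], size=n),
    "x": rng.uniform(0, 10, size=n),          # continuous covariate
})
# Nonlinear covariate effect plus a group effect (illustrative data only)
df["y"] = 2.0 + 0.5 * np.sin(df["x"]) + (df["group"] == "treatment") * 1.0 + rng.normal(0, 0.5, n)

knots = [3.3, 6.7]                             # assumed knots from a MARS-type fit
df["x_cat"] = pd.cut(df["x"], bins=[-np.inf, *knots, np.inf], labels=["low", "mid", "high"])

model = smf.ols("y ~ C(group) * C(x_cat)", data=df).fit()   # two-factor model with interaction
print(sm.stats.anova_lm(model, typ=2))
```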
NASA Technical Reports Server (NTRS)
Burger, R. A.; Moraal, H.; Webb, G. M.
1985-01-01
It is shown that there is a simpler way to derive the average guiding center drift of a distribution of particles than via the so-called single particle analysis. Based on this derivation it is shown that the entire drift formalism can be considerably simplified, and that results for low order anisotropies are more generally valid than is usually appreciated. This drift analysis leads to a natural alternative derivation of the drift velocity along a neutral sheet.
Initialization of distributed spacecraft for precision formation flying
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Scharf, D. P.; Ploen, S. R.
2003-01-01
In this paper we present a solution to the formation initialization (FI) problem for N distributed spacecraft located in deep space. Our solution to the FI problem is based on a three-stage sky search procedure that reduces the FI problem for N spacecraft to the simpler problem of initializing a set of sub-formations. We demonstrate our FI algorithm in simulation using NASA's five spacecraft Terrestrial Planet Finder mission as an example.
JPRS Report, Science & Technology Europe
1992-08-12
Head on Chip Industry, Plans [Heinrich von Pierer Interview; Bonn DIE WELT, 15 Jun 92] Swiss Contraves Develops High-Density Multichip Module... Investment costs are low because the method is based on low pressure, 0.1-1.0 MPa, during injection. This permits the use of simpler molds and... For example, ABS Pumpen AG [ABS Pumps German Stock Corporation] in Lohmar needed 1.5 years to define the "Ceramic Components for Friction
Matlab-Excel Interface for OpenDSS
DOE Office of Scientific and Technical Information (OSTI.GOV)
The software allows users of the OpenDSS grid modeling software to access their load flow models using a GUI interface developed in MATLAB. The circuit definitions are entered into a Microsoft Excel spreadsheet which makes circuit creation and editing a much simpler process than the basic text-based editors used in the native OpenDSS interface. Plot tools have been developed which can be accessed through a MATLAB GUI once the desired parameters have been simulated.
CFD-Based Design of Turbopump Inlet Duct for Reduced Dynamic Loads
NASA Technical Reports Server (NTRS)
Rothermel, Jeffry; Dorney, Suzanne M.; Dorney, Daniel J.
2003-01-01
Numerical simulations have been completed for a variety of designs for a 90 deg elbow duct. The objective is to identify a design that minimizes the dynamic load entering a LOX turbopump located at the elbow exit. Designs simulated to date indicate that simpler duct geometries result in lower losses. Benchmark simulations have verified that the compressible flow codes used in this study are applicable to these incompressible flow simulations.
CFD-based Design of LOX Pump Inlet Duct for Reduced Dynamic Loads
NASA Technical Reports Server (NTRS)
Rothermel, Jeffry; Dorney, Daniel J.; Dorney, Suzanne M.
2003-01-01
Numerical simulations have been completed for a variety of designs for a 90 deg elbow duct. The objective is to identify a design that minimizes the dynamic load entering a LOX turbopump located at the elbow exit. Designs simulated to date indicate that simpler duct geometries result in lower losses. Benchmark simulations have verified that the compressible flow code used in this study is applicable to these incompressible flow simulations.
Simulating Complex Satellites and a Space-Based Surveillance Sensor Simulation
2009-09-01
high-resolution imagery (Fig. 1). Thus other means for characterizing satellites will need to be developed. Research into non-resolvable space object...computing power and time. The second way, which we are using here, is to create simpler models of satellite bodies and use albedo-area calculations...their position, movement, size, and physical features. However, there are many satellites in orbit that are simply too small or too far away to resolve by
Kollikkathara, Naushad; Feng, Huan; Yu, Danlin
2010-11-01
As planning for sustainable municipal solid waste management has to address several inter-connected issues such as landfill capacity, environmental impacts and financial expenditure, it becomes increasingly necessary to understand the dynamic nature of their interactions. A system dynamics approach designed here attempts to address some of these issues by fitting a model framework for Newark urban region in the US, and running a forecast simulation. The dynamic system developed in this study incorporates the complexity of the waste generation and management process to some extent which is achieved through a combination of simpler sub-processes that are linked together to form a whole. The impact of decision options on the generation of waste in the city, on the remaining landfill capacity of the state, and on the economic cost or benefit actualized by different waste processing options are explored through this approach, providing valuable insights into the urban waste-management process. Copyright © 2010 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kollikkathara, Naushad, E-mail: naushadkp@gmail.co; Feng Huan; Yu Danlin
2010-11-15
As planning for sustainable municipal solid waste management has to address several inter-connected issues such as landfill capacity, environmental impacts and financial expenditure, it becomes increasingly necessary to understand the dynamic nature of their interactions. A system dynamics approach designed here attempts to address some of these issues by fitting a model framework for Newark urban region in the US, and running a forecast simulation. The dynamic system developed in this study incorporates the complexity of the waste generation and management process to some extent which is achieved through a combination of simpler sub-processes that are linked together to form a whole. The impact of decision options on the generation of waste in the city, on the remaining landfill capacity of the state, and on the economic cost or benefit actualized by different waste processing options are explored through this approach, providing valuable insights into the urban waste-management process.
Lazic, Stanley E
2008-07-21
Analysis of variance (ANOVA) is a common statistical technique in physiological research, and often one or more of the independent/predictor variables such as dose, time, or age, can be treated as a continuous, rather than a categorical variable during analysis - even if subjects were randomly assigned to treatment groups. While this is not common, there are a number of advantages of such an approach, including greater statistical power due to increased precision, a simpler and more informative interpretation of the results, greater parsimony, and transformation of the predictor variable is possible. An example is given from an experiment where rats were randomly assigned to receive either 0, 60, 180, or 240 mg/L of fluoxetine in their drinking water, with performance on the forced swim test as the outcome measure. Dose was treated as either a categorical or continuous variable during analysis, with the latter analysis leading to a more powerful test (p = 0.021 vs. p = 0.159). This will be true in general, and the reasons for this are discussed. There are many advantages to treating variables as continuous numeric variables if the data allow this, and this should be employed more often in experimental biology. Failure to use the optimal analysis runs the risk of missing significant effects or relationships.
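The contrast described above, dose entered as a categorical factor versus as a continuous numeric predictor, is easy to demonstrate. The sketch below uses made-up data rather than the fluoxetine experiment, and simply shows the two model specifications side by side.

```python
"""Illustrative comparison: the same outcome modeled with dose as a categorical
factor (one-way ANOVA) and with dose as a continuous numeric predictor."""
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
dose = np.repeat([0, 60, 180, 240], 12)                         # four groups, n = 12 each
immobility = 100 - 0.05 * dose + rng.normal(0, 15, dose.size)   # hypothetical outcome
df = pd.DataFrame({"dose": dose, "immobility": immobility})

categorical = smf.ols("immobility ~ C(dose)", data=df).fit()    # dose as factor: 3 df for dose
continuous = smf.ols("immobility ~ dose", data=df).fit()        # dose as number: 1 df, a single slope

print(sm.stats.anova_lm(categorical, typ=2))
print(continuous.summary().tables[1])   # one slope: more precise and simpler to interpret
```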
Photolysis rates in correlated overlapping cloud fields: Cloud-J 7.3
Prather, M. J.
2015-05-27
A new approach for modeling photolysis rates (J values) in atmospheres with fractional cloud cover has been developed and implemented as Cloud-J – a multi-scattering eight-stream radiative transfer model for solar radiation based on Fast-J. Using observed statistics for the vertical correlation of cloud layers, Cloud-J 7.3 provides a practical and accurate method for modeling atmospheric chemistry. The combination of the new maximum-correlated cloud groups with the integration over all cloud combinations represented by four quadrature atmospheres produces mean J values in an atmospheric column with root-mean-square errors of 4% or less compared with 10–20% errors using simpler approximations. Cloud-J is practical for chemistry-climate models, requiring only an average of 2.8 Fast-J calls per atmosphere, vs. hundreds of calls with the correlated cloud groups, or 1 call with the simplest cloud approximations. Another improvement in modeling J values, the treatment of volatile organic compounds with pressure-dependent cross sections, is also incorporated into Cloud-J.
Photolysis rates in correlated overlapping cloud fields: Cloud-J 7.3c
Prather, M. J.
2015-08-14
A new approach for modeling photolysis rates (J values) in atmospheres with fractional cloud cover has been developed and is implemented as Cloud-J – a multi-scattering eight-stream radiative transfer model for solar radiation based on Fast-J. Using observations of the vertical correlation of cloud layers, Cloud-J 7.3c provides a practical and accurate method for modeling atmospheric chemistry. The combination of the new maximum-correlated cloud groups with the integration over all cloud combinations by four quadrature atmospheres produces mean J values in an atmospheric column with root mean square (rms) errors of 4 % or less compared with 10–20 % errors using simpler approximations. Cloud-J is practical for chemistry–climate models, requiring only an average of 2.8 Fast-J calls per atmosphere vs. hundreds of calls with the correlated cloud groups, or 1 call with the simplest cloud approximations. Another improvement in modeling J values, the treatment of volatile organic compounds with pressure-dependent cross sections, is also incorporated into Cloud-J.
The influence of liquid/vapor phase change onto the Nusselt number
NASA Astrophysics Data System (ADS)
Popescu, Elena-Roxana; Colin, Catherine; Tanguy, Sebastien
2017-11-01
In spite of its significant interest in various fields, there is currently very little information on how an external flow modifies the evaporation or the condensation of a liquid surface. Although most applications involve turbulent flows, the simpler configuration where a laminar superheated or subcooled vapor flow shears a saturated liquid interface has still never been solved. Based on a numerical approach, we propose to characterize the interaction between a laminar boundary layer of a superheated or subcooled vapor flow and a static liquid pool at saturation temperature. By performing a full set of simulations sweeping the parameter space, correlations are proposed for the first time for the Nusselt number depending on the dimensionless numbers that characterize both vaporization and condensation. As expected, the Nusselt number decreases or increases in the configurations involving respectively vaporization or condensation. More unexpected is the behaviour of the friction of the vapor flow on the liquid pool, for which we report that it is weakly affected by the phase change, despite the important variation of the local flow structure due to evaporation or condensation.
BRST Quantization of the Proca Model Based on the BFT and the BFV Formalism
NASA Astrophysics Data System (ADS)
Kim, Yong-Wan; Park, Mu-In; Park, Young-Jai; Yoon, Sean J.
The BRST quantization of the Abelian Proca model is performed using the Batalin-Fradkin-Tyutin and the Batalin-Fradkin-Vilkovisky formalism. First, the BFT Hamiltonian method is applied in order to systematically convert a second class constraint system of the model into an effectively first class one by introducing new fields. In finding the involutive Hamiltonian we adopt a new approach which is simpler than the usual one. We also show that in our model the Dirac brackets of the phase space variables in the original second class constraint system are exactly the same as the Poisson brackets of the corresponding modified fields in the extended phase space due to the linear character of the constraints, as compared with the Dirac or Faddeev-Jackiw formalisms. Then, according to the BFV formalism we obtain that the desired resulting Lagrangian preserving BRST symmetry in the standard local gauge fixing procedure naturally includes the Stückelberg scalar related to the explicit gauge symmetry breaking effect due to the presence of the mass term. We also analyze the nonstandard nonlocal gauge fixing procedure.
Finite element analysis of 6 large PMMA skull reconstructions: A multi-criteria evaluation approach
Ridwan-Pramana, Angela; Marcián, Petr; Borák, Libor; Narra, Nathaniel; Forouzanfar, Tymour; Wolff, Jan
2017-01-01
In this study 6 pre-operative designs for PMMA based reconstructions of cranial defects were evaluated for their mechanical robustness using finite element modeling. Clinical experience and engineering principles were employed to create multiple plan options, which were subsequently computationally analyzed for mechanically relevant parameters under 50N loads: stress, strain and deformation in various components of the assembly. The factors assessed were: defect size, location and shape. The major variable in the cranioplasty assembly design was the arrangement of the fixation plates. An additional study variable introduced was the location of the 50N load within the implant area. It was found that in smaller defects it was simpler to design a symmetric distribution of plates, and under limited variability in load location it was possible to design an optimal assembly for the expected loads. However, for very large defects with complex shapes, the variability in the load locations introduces complications to the intuitive design of the optimal assembly. The study shows that it can be beneficial to incorporate multi-design computational analyses to decide upon the most optimal plan for a clinical case. PMID:28609471
A wet/wet differential pressure sensor for measuring vertical hydraulic gradient.
Fritz, Brad G; Mackley, Rob D
2010-01-01
Vertical hydraulic gradient is commonly measured in rivers, lakes, and streams for studies of groundwater-surface water interaction. While a number of methods with subtle differences have been applied, these methods can generally be separated into two categories: measuring surface water elevation and pressure in the subsurface separately, or making direct measurements of the head difference with a manometer. Making separate head measurements allows for the use of electronic pressure sensors, providing large datasets that are particularly useful when the vertical hydraulic gradient fluctuates over time. On the other hand, using a manometer-based method provides an easier and more rapid measurement with a simpler computation to calculate the vertical hydraulic gradient. In this study, we evaluated a wet/wet differential pressure sensor for use in measuring vertical hydraulic gradient. This approach combines the advantage of high-temporal-frequency measurements obtained with instrumented piezometers with the simplicity and reduced potential for human-induced error obtained with a manometer board method. Our results showed that the wet/wet differential pressure sensor provided results comparable to more traditional methods, making it an acceptable method for future use.
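For context, the quantity being measured reduces to a short calculation: the differential pressure converts to a head difference, which is divided by the vertical separation of the measurement points. The numbers in the sketch below are illustrative assumptions, not values from the study.

```python
"""Minimal sketch of the vertical-hydraulic-gradient calculation: a wet/wet
differential pressure sensor reports the pressure difference between the surface
water and a piezometer screened dz meters below the bed."""

RHO_WATER = 999.7   # kg/m^3, approximate fresh water density near 10 degC
G = 9.81            # m/s^2

def vertical_hydraulic_gradient(dp_pa: float, dz_m: float) -> float:
    """VHG = dh/dz, with dh obtained from the differential pressure dp = rho*g*dh."""
    dh = dp_pa / (RHO_WATER * G)
    return dh / dz_m

# Example: 50 Pa differential across a piezometer screened 0.6 m below the bed
print(vertical_hydraulic_gradient(50.0, 0.6))   # ~0.0085, upward gradient if positive
```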
Feasibility of Screening Adolescents for Suicide Risk in “Real-World” High School Settings
Hallfors, Denise; Brodish, Paul H.; Khatapoush, Shereen; Sanchez, Victoria; Cho, Hyunsan; Steckler, Allan
2006-01-01
Objectives. We evaluated the feasibility of a population-based approach to preventing adolescent suicide. Methods. A total of 1323 students in 10 high schools completed the Suicide Risk Screen. Screening results, student follow-up, staff feedback, and school responses were assessed. Results. Overall, 29% of the participants were rated as at risk of suicide. As a result of this overwhelming percentage, school staffs chose to discontinue the screening after 2 semesters. In further analyses, about half of the students identified were deemed at high risk on the basis of high levels of depression, suicidal ideation, or suicidal behavior. Priority rankings evidenced good construct validity on correlates such as drug use, hopelessness, and perceived family support. Conclusions. A simpler, more specific screening instrument than the Suicide Risk Screen would identify approximately 11% of urban high school youths for assessment, offering high school officials an important opportunity to identify young people at the greatest levels of need and to target scarce health resources. Our experiences from this study show that lack of feasibility testing greatly contributes to the gap between science and practice. PMID:16380568
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marquette, Ian, E-mail: i.marquette@uq.edu.au; Quesne, Christiane, E-mail: cquesne@ulb.ac.be
2015-06-15
We extend the construction of 2D superintegrable Hamiltonians with separation of variables in spherical coordinates using combinations of shift, ladder, and supercharge operators to models involving rational extensions of the two-parameter Lissajous systems on the sphere. These new families of superintegrable systems with integrals of arbitrary order are connected with Jacobi exceptional orthogonal polynomials of type I (or II) and supersymmetric quantum mechanics. Moreover, we present an algebraic derivation of the degenerate energy spectrum for the one- and two-parameter Lissajous systems and the rationally extended models. These results are based on finitely generated polynomial algebras, Casimir operators, realizations as deformed oscillator algebras, and finite-dimensional unitary representations. Such results have only been established so far for 2D superintegrable systems separable in Cartesian coordinates, which are related to a class of polynomial algebras that display a simpler structure. We also point out how the structure function of these deformed oscillator algebras is directly related with the generalized Heisenberg algebras spanned by the nonpolynomial integrals.
VISdish: A new tool for canting and shape-measuring solar-dish facets.
Montecchi, Marco; Cara, Giuseppe; Benedetti, Arcangelo
2017-06-01
Solar dishes allow us to obtain highly concentrated solar fluxes used to produce electricity or feed thermal processes/storage. For practical reasons, the reflecting surface is composed of a number of facets. After the dish assembly, facet-canting is an important task for improving the concentration of solar radiation around the focus-point, as well as the capture ratio at the receiver placed there. Finally, the flux profile should be measured or evaluated to verify the concentration quality. All these tasks can be achieved by the new tool we developed at ENEA, named VISdish. The instrument is based on the visual inspection system (VIS) approach and can work in two modes: canting and shape-measurement. The shape data are entered in a simulation software for evaluating the flux profile and concentration quality. With respect to prior methods, VISdish offers several advantages: (i) simpler data processing, because the light point-source and its reflections are unambiguously related, (ii) higher accuracy. The instrument functionality is illustrated through the preliminary experimental results obtained on the dish recently installed in ENEA-Casaccia in the framework of the E.U. project OMSoP.
Fundamental Principles of Network Formation among Preschool Children
Schaefer, David R.; Light, John M.; Fabes, Richard A.; Hanish, Laura D.; Martin, Carol Lynn
2009-01-01
The goal of this research was to investigate the origins of social networks by examining the formation of children’s peer relationships in 11 preschool classes throughout the school year. We investigated whether several fundamental processes of relationship formation were evident at this age, including reciprocity, popularity, and triadic closure effects. We expected these mechanisms to change in importance over time as the network crystallizes, allowing more complex structures to evolve from simpler ones in a process we refer to as structural cascading. We analyzed intensive longitudinal observational data of children’s interactions using the SIENA actor-based model. We found evidence that reciprocity, popularity, and triadic closure all shaped the formation of preschool children’s networks. The influence of reciprocity remained consistent, whereas popularity and triadic closure became increasingly important over the course of the school year. Interactions between age and endogenous network effects were nonsignificant, suggesting that these network formation processes were not moderated by age in this sample of young children. We discuss the implications of our longitudinal network approach and findings for the study of early network developmental processes. PMID:20161606
Statistical models of global Langmuir mixing
NASA Astrophysics Data System (ADS)
Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean
2017-05-01
The effects of Langmuir mixing on surface ocean mixing may be parameterized by applying an enhancement factor which depends on wave, wind, and ocean state to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but with significant computational and code development expenses. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on the empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but at significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.
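To make the enhancement-factor idea concrete, the sketch below computes a turbulent Langmuir number from the friction velocity and surface Stokes drift and applies one published-style empirical fit to scale the KPP turbulent velocity scale. The specific coefficient (0.080) and the input values are illustrative assumptions, not the parameterization evaluated in the paper.

```python
"""Sketch of a Langmuir enhancement factor for the KPP turbulent velocity scale."""
import numpy as np

def langmuir_number(u_star: float, u_stokes: float) -> float:
    """Turbulent Langmuir number La_t = sqrt(u*/u_s), from friction velocity and
    surface Stokes drift (both in m/s)."""
    return np.sqrt(u_star / u_stokes)

def enhancement_factor(la_t: float) -> float:
    """Multiplier applied to the KPP velocity scale; coefficient is an assumed,
    illustrative empirical value, not the paper's fit."""
    return np.sqrt(1.0 + 0.080 * la_t ** -4)

la = langmuir_number(u_star=0.01, u_stokes=0.068)   # La_t ~ 0.38, a typical open-ocean value
print(enhancement_factor(la))                       # ~2.2x enhancement in this example
```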
Basu, Partha; Meheus, Filip; Chami, Youssef; Hariprasad, Roopa; Zhao, Fanghui; Sankaranarayanan, Rengaswamy
2017-07-01
Management algorithms for screen-positive women in cervical cancer prevention programs have undergone substantial changes in recent years. The WHO strongly recommends human papillomavirus (HPV) testing for primary screening, if affordable, or if not, then visual inspection with acetic acid (VIA), and promotes treatment directly following screening through the screen-and-treat approach (one or two clinic visits). While VIA-positive women can be offered immediate ablative treatment based on certain eligibility criteria, HPV-positive women need to undergo subsequent VIA to determine their eligibility. Simpler ablative methods of treatment such as cryotherapy and thermal coagulation have been demonstrated to be effective and to have excellent safety profiles, and these have become integral parts of new management algorithms. The challenges faced by low-resource countries are many and include, from the management perspective, identifying an affordable point-of-care HPV detection test, minimizing over-treatment, and installing an effective information system to ensure high compliance to treatment and follow-up. © 2017 The Authors. International Journal of Gynecology & Obstetrics published by John Wiley & Sons Ltd on behalf of International Federation of Gynecology and Obstetrics.
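The management flow summarized above (HPV testing where affordable, VIA otherwise, with VIA triage of HPV-positive women before ablative treatment) can be written as a simple decision function. This is a hedged, simplified illustration of the described flow, not a clinical protocol; the return strings are placeholders.

```python
"""Simplified sketch of a screen-and-treat decision flow for cervical cancer prevention."""

def screen_and_treat(hpv_available, hpv_positive=None, via_positive=None,
                     eligible_for_ablation=False):
    if hpv_available:
        if not hpv_positive:
            return "routine rescreening interval"
        # HPV-positive women undergo VIA to determine eligibility for ablation
        if via_positive and eligible_for_ablation:
            return "immediate ablative treatment (cryotherapy or thermal coagulation)"
        return "refer for further evaluation/treatment"
    # No affordable HPV test: VIA is the primary screen
    if via_positive and eligible_for_ablation:
        return "immediate ablative treatment (cryotherapy or thermal coagulation)"
    if via_positive:
        return "refer for further evaluation/treatment"
    return "routine rescreening interval"

print(screen_and_treat(hpv_available=True, hpv_positive=True,
                       via_positive=True, eligible_for_ablation=True))
```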
A new numerical algorithm for the analytic continuation of Green's functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Natoli, V.D.; Cohen, M.H.; Fornberg, B.
1996-06-01
The need to calculate the spectral properties of a Hermitian operator H frequently arises in the technical sciences. A common approach to its solution involves the construction of the Green's function operator G(z) = [z - H]^(-1) in the complex z plane. For example, the energy spectrum and other physical properties of condensed matter systems can often be elegantly and naturally expressed in terms of the Kohn-Sham Green's functions. However, the nonanalyticity of resolvents on the real axis makes them difficult to compute and manipulate. The Herglotz property of a Green's function allows one to calculate it along an arc with a small but finite imaginary part, i.e., G(x + iy), and then to continue it to the real axis to determine quantities of physical interest. In the past, finite-difference techniques have been used for this continuation. We present here a fundamentally new algorithm based on the fast Fourier transform which is both simpler and more effective. 14 refs., 9 figs.
NASA Astrophysics Data System (ADS)
Rosenow, Phil; Tonner, Ralf
2016-05-01
The extent of hydrogen coverage of the Si(001) c(4 × 2) surface in the presence of hydrogen gas has been studied with dispersion corrected density functional theory. Electronic energy contributions are well described using a hybrid functional. The temperature dependence of the coverage in thermodynamic equilibrium was studied computing the phonon spectrum in a supercell approach. As an approximation to these demanding computations, an interpolated phonon approach was found to give comparable accuracy. The simpler ab initio thermodynamic approach is not accurate enough for the system studied, even if corrections by the Einstein model for surface vibrations are considered. The onset of H2 desorption from the fully hydrogenated surface is predicted to occur at temperatures around 750 K. Strong changes in hydrogen coverage are found between 1000 and 1200 K in good agreement with previous reflectance anisotropy spectroscopy experiments. These findings allow a rational choice for the surface state in the computational treatment of chemical reactions under typical metal organic vapor phase epitaxy conditions on Si(001).
Vieira, J; Cunha, M C
2011-01-01
This article describes a method of solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and having better starting points to improve solution efficiency. The set of nonlinear constraints (referred to as complicating constraints) which makes the solution of the model rather complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by directly solving the complete model in one single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This potential gain in efficiency of the two-step solution approach can be extremely important for work in progress, and it can be particularly useful for cases where the computation time would be a critical factor for having an optimized solution in due time.
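The two-step strategy described above, solving a relaxed model without the complicating constraints and then warm-starting the full model from that solution, is easy to illustrate on a toy problem. The objective and constraints below are made-up stand-ins, not the conjunctive-use water-resources model.

```python
"""Toy illustration of a two-step nonlinear optimization with a warm start."""
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2                 # smooth cost to minimize

linear_cons = {"type": "ineq", "fun": lambda x: 5.0 - (x[0] + x[1])}      # x0 + x1 <= 5
complicating_cons = {"type": "ineq", "fun": lambda x: x[0] * x[1] - 1.0}  # nonlinear: x0*x1 >= 1

# Step 1: relaxed model (complicating constraint left out) from a cold start
step1 = minimize(objective, x0=np.zeros(2), constraints=[linear_cons], method="SLSQP")

# Step 2: full model warm-started at the step-1 solution
step2 = minimize(objective, x0=step1.x, constraints=[linear_cons, complicating_cons],
                 method="SLSQP")
print(step1.x, step2.x)
```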
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenow, Phil; Tonner, Ralf, E-mail: tonner@chemie.uni-marburg.de
2016-05-28
The extent of hydrogen coverage of the Si(001) c(4 × 2) surface in the presence of hydrogen gas has been studied with dispersion corrected density functional theory. Electronic energy contributions are well described using a hybrid functional. The temperature dependence of the coverage in thermodynamic equilibrium was studied computing the phonon spectrum in a supercell approach. As an approximation to these demanding computations, an interpolated phonon approach was found to give comparable accuracy. The simpler ab initio thermodynamic approach is not accurate enough for the system studied, even if corrections by the Einstein model for surface vibrations are considered. The onset of H2 desorption from the fully hydrogenated surface is predicted to occur at temperatures around 750 K. Strong changes in hydrogen coverage are found between 1000 and 1200 K in good agreement with previous reflectance anisotropy spectroscopy experiments. These findings allow a rational choice for the surface state in the computational treatment of chemical reactions under typical metal organic vapor phase epitaxy conditions on Si(001).
Progress Toward Attractive Stellarators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neilson, G H; Brown, T G; Gates, D A
The quasi-axisymmetric stellarator (QAS) concept offers a promising path to a more compact stellarator reactor, closer in linear dimensions to tokamak reactors than previous stellarator designs. Concept improvements are needed, however, to make it more maintainable and more compatible with high plant availability. Using the ARIES-CS design as a starting point, compact stellarator designs with improved maintenance characteristics have been developed. While the ARIES-CS features a through-the-port maintenance scheme, we have investigated configuration changes to enable a sector-maintenance approach, as envisioned for example in ARIES AT. Three approaches are reported. The first is to make tradeoffs within the QAS design space, giving greater emphasis to maintainability criteria. The second approach is to improve the optimization tools to more accurately and efficiently target the physics properties of importance. The third is to employ a hybrid coil topology, so that the plasma shaping functions of the main coils are shared more optimally, either with passive conductors made of high-temperature superconductor or with local compensation coils, allowing the main coils to become simpler. Optimization tools are being improved to test these approaches.
Biologically optimized helium ion plans: calculation approach and its in vitro validation
NASA Astrophysics Data System (ADS)
Mairani, A.; Dokic, I.; Magro, G.; Tessonnier, T.; Kamp, F.; Carlson, D. J.; Ciocca, M.; Cerutti, F.; Sala, P. R.; Ferrari, A.; Böhlen, T. T.; Jäkel, O.; Parodi, K.; Debus, J.; Abdollahi, A.; Haberer, T.
2016-06-01
Treatment planning studies on the biological effect of raster-scanned helium ion beams should be performed, together with their experimental verification, before their clinical application at the Heidelberg Ion Beam Therapy Center (HIT). For this purpose, we introduce a novel calculation approach based on integrating data-driven biological models in our Monte Carlo treatment planning (MCTP) tool. Dealing with a mixed radiation field, the biological effect of the primary 4He ion beams, of the secondary 3He and 4He (Z = 2) fragments and of the produced protons, deuterons and tritons (Z = 1) has to be taken into account. A spread-out Bragg peak (SOBP) in water, representative of a clinically-relevant scenario, has been biologically optimized with the MCTP and then delivered at HIT. Predictions of cell survival and RBE for a tumor cell line, characterized by (α/β)_ph = 5.4 Gy, have been successfully compared against measured clonogenic survival data. The mean absolute survival variation (μ_ΔS) between model predictions and experimental data was 5.3% ± 0.9%. A sensitivity study, i.e. quantifying the variation of the estimations for the studied plan as a function of the applied phenomenological modelling approach, has been performed. The feasibility of a simpler biological modelling based on dose-averaged LET (linear energy transfer) has been tested. Moreover, comparisons with biophysical models such as the local effect model (LEM) and the repair-misrepair-fixation (RMF) model were performed. μ_ΔS values for the LEM and the RMF model were, respectively, 4.5% ± 0.8% and 5.8% ± 1.1%. The satisfactory agreement found in this work for the studied SOBP, representative of a clinically-relevant scenario, suggests that the introduced approach could be applied for an accurate estimation of the biological effect for helium ion radiotherapy.
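Survival and RBE comparisons of this kind rest on linear-quadratic bookkeeping. The sketch below uses the photon α/β of 5.4 Gy quoted above, but the absolute α values, the ion-beam parameters and the dose are made-up assumptions for illustration only; it is not the paper's data-driven model.

```python
"""Linear-quadratic survival and an iso-survival RBE, as an illustrative sketch."""
import numpy as np
from scipy.optimize import brentq

alpha_ph, beta_ph = 0.135, 0.025   # photon LQ parameters; alpha/beta = 5.4 Gy (alpha assumed)
alpha_ion, beta_ion = 0.27, 0.025  # assumed helium-beam LQ parameters at one SOBP position

def survival(dose, alpha, beta):
    return np.exp(-(alpha * dose + beta * dose ** 2))

d_ion = 2.0                                  # Gy, physical ion dose per fraction (assumed)
s_target = survival(d_ion, alpha_ion, beta_ion)

# RBE = photon dose giving the same survival, divided by the ion dose
d_photon = brentq(lambda d: survival(d, alpha_ph, beta_ph) - s_target, 0.0, 50.0)
print(f"S = {s_target:.3f}, RBE = {d_photon / d_ion:.2f}")
```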
Phillips, Charles D; Hawes, Catherine; Lieberman, Trudy; Koren, Mary Jane
2007-06-25
Nursing home performance measurement systems are practically ubiquitous. The vast majority of these systems aspire to rank order all nursing homes based on quantitative measures of quality. However, the ability of such systems to identify homes differing in quality is hampered by the multidimensional nature of nursing homes and their residents. As a result, the authors doubt the ability of many nursing home performance systems to truly help consumers differentiate among homes providing different levels of quality. We also argue that, for consumers, performance measurement models are better at identifying problem facilities than potentially good homes. In response to these concerns we present a proposal for a less ambitious approach to nursing home performance measurement than previously used. We believe consumers can make better-informed choices using a simpler system designed to pinpoint poor-quality nursing homes, rather than one designed to rank hundreds of facilities based on differences in quality-of-care indicators that are of questionable importance. The suggested performance model is based on five principles used in the development of the Consumers Union 2006 Nursing Home Quality Monitor. We can best serve policy-makers and consumers by eschewing nursing home reporting systems that present information about all the facilities in a city, a state, or the nation on a website or in a report. We argue for greater modesty in our efforts and a focus on identifying only the potentially poorest or best homes. In the end, however, it is important to remember that information from any performance measurement website or report is no substitute for multiple visits to a home at different times of the day to personally assess quality.
Oliveira, Tiago Roux; Costa, Luiz Rennó; Catunda, João Marcos Yamasaki; Pino, Alexandre Visintainer; Barbosa, William; Souza, Márcio Nogueira de
2017-06-01
This paper addresses the application of the sliding mode approach to control the arm movements by artificial recruitment of muscles using Neuromuscular Electrical Stimulation (NMES). Such a technique allows the activation of motor nerves using surface electrodes. The goal of the proposed control system is to move the upper limbs of subjects through electrical stimulation to achieve a desired elbow angular displacement. Since the human neuro-motor system has individual characteristics, being time-varying, nonlinear and subject to uncertainties, the use of advanced robust control schemes may represent a better solution than classical Proportional-Integral (PI) controllers and model-based approaches, being simpler than more sophisticated strategies using fuzzy logic or neural networks usually applied in this control problem. The objective is the introduction of a new time-scaling-based sliding mode control (SMC) strategy for NMES and its experimental evaluation. The main qualitative advantages of the proposed controller via the time-scaling procedure are its independence of the knowledge of the plant relative degree and the design/tuning simplicity. The developed sliding mode strategy allows for chattering alleviation due to the impact of the integrator in smoothing the control signal. In addition, no differentiator is applied to construct the sliding surface. The stability analysis of the closed-loop system is also carried out by using singular perturbation methods. Experimental results are conducted with healthy volunteers as well as stroke patients. Quantitative results show a reduction of 45% in terms of root mean square (RMS) error (from 5.9° to [Formula: see text]) in comparison with the PI control scheme, which is similar to that obtained in the literature. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
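The chattering-alleviation idea mentioned above, integrating the discontinuous switching term so that the applied input stays smooth, can be illustrated on a toy plant. The first-order dynamics, gains and target angle in this sketch are illustrative assumptions; it is a generic integrated sliding/relay law, not the authors' NMES controller.

```python
"""Toy simulation: a relay/sliding-type law whose discontinuous term is integrated,
so the applied input (here a stand-in for stimulation intensity) stays smooth."""
import numpy as np

dt, T = 0.001, 5.0
t = np.arange(0.0, T, dt)
theta_ref = 30.0 * (t > 0.5)              # desired angle step (degrees)

a, b = 2.0, 4.0                           # assumed plant: theta_dot = -a*theta + b*u
k = 60.0                                  # switching gain

theta, u = 0.0, 0.0
history = np.zeros_like(t)
for i in range(t.size):
    e = theta - theta_ref[i]              # tracking error defines the surface s = e
    u += dt * (-k * np.sign(e))           # integrate the discontinuous term -> smooth u
    u = np.clip(u, 0.0, 100.0)            # the input is bounded
    theta += dt * (-a * theta + b * u)    # plant update (explicit Euler)
    history[i] = theta

print(f"angle at t = {T:.0f} s: {history[-1]:.1f} deg (target {theta_ref[-1]:.0f} deg)")
```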
3D on-chip microscopy of optically cleared tissue
NASA Astrophysics Data System (ADS)
Zhang, Yibo; Shin, Yoonjung; Sung, Kevin; Yang, Sam; Chen, Harrison; Wang, Hongda; Teng, Da; Rivenson, Yair; Kulkarni, Rajan P.; Ozcan, Aydogan
2018-02-01
Traditional pathology relies on tissue biopsy, micro-sectioning, immunohistochemistry and microscopic imaging, which are relatively expensive and labor-intensive, and therefore are less accessible in resource-limited areas. Low-cost tissue clearing techniques, such as the simplified CLARITY method (SCM), are promising to potentially reduce the cost of disease diagnosis by providing 3D imaging and phenotyping of thicker tissue samples with simpler preparation steps. However, the mainstream imaging approach for cleared tissue, fluorescence microscopy, suffers from high cost, photobleaching and signal fading. As an alternative approach to fluorescence, here we demonstrate 3D imaging of SCM-cleared tissue using on-chip holography, which is based on pixel-super-resolution and multi-height phase recovery algorithms to digitally compute the sample's amplitude and phase images at various z-slices/depths through the sample. The tissue clearing procedures and the lens-free imaging system were jointly optimized to find the best illumination wavelength, tissue thickness, staining solution pH, and the number of hologram heights to maximize the imaged tissue volume and minimize the amount of acquired data, while maintaining a high contrast-to-noise ratio for the imaged cells. After this optimization, we achieved 3D imaging of a 200-μm thick cleared mouse brain tissue over a field-of-view of <20 mm2, and the resulting 3D z-stack agrees well with the images acquired with a scanning lens-based microscope (20×, 0.75 NA). Moreover, the lens-free microscope achieves an order-of-magnitude better data efficiency compared to its lens-based counterparts for volumetric imaging of samples. The presented low-cost and high-throughput lens-free tissue imaging technique enabled by CLARITY can be used in various biomedical applications in low-resource settings.
Fulop, Naomi J; Ramsay, Angus I G; Perry, Catherine; Boaden, Ruth J; McKevitt, Christopher; Rudd, Anthony G; Turner, Simon J; Tyrrell, Pippa J; Wolfe, Charles D A; Morris, Stephen
2016-06-03
Implementing major system change in healthcare is not well understood. This gap may be addressed by analysing change in terms of interrelated components identified in the implementation literature, including decision to change, intervention selection, implementation approaches, implementation outcomes, and intervention outcomes. We conducted a qualitative study of two cases of major system change: the centralisation of acute stroke services in Manchester and London, which were associated with significantly different implementation outcomes (fidelity to referral pathway) and intervention outcomes (provision of evidence-based care, patient mortality). We interviewed stakeholders at national, pan-regional, and service-levels (n = 125) and analysed 653 documents. Using a framework developed for this study from the implementation science literature, we examined factors influencing implementation approaches; how these approaches interacted with the models selected to influence implementation outcomes; and their relationship to intervention outcomes. London and Manchester's differing implementation outcomes were influenced by the different service models selected and implementation approaches used. Fidelity to the referral pathway was higher in London, where a 'simpler', more inclusive model was used, implemented with a 'big bang' launch and 'hands-on' facilitation by stroke clinical networks. In contrast, a phased approach of a more complex pathway was used in Manchester, and the network acted more as a platform to share learning. Service development occurred more uniformly in London, where service specifications were linked to financial incentives, and achieving standards was a condition of service launch, in contrast to Manchester. 'Hands-on' network facilitation, in the form of dedicated project management support, contributed to achievement of these standards in London; such facilitation processes were less evident in Manchester. Using acute stroke service centralisation in London and Manchester as an example, interaction between model selected and implementation approaches significantly influenced fidelity to the model. The contrasting implementation outcomes may have affected differences in provision of evidence-based care and patient mortality. The framework used in this analysis may support planning and evaluating major system changes, but would benefit from application in different healthcare contexts.
NASA Astrophysics Data System (ADS)
Li, Yutong; Wang, Yuxin; Duffy, Alex H. B.
2014-11-01
Computer-based conceptual design for routine design has made great strides, yet non-routine design has not been given due attention, and it is still poorly automated. Considering that the function-behavior-structure (FBS) model is widely used for modeling the conceptual design process, a computer-based creativity enhanced conceptual design model (CECD) for non-routine design of mechanical systems is presented. In the model, the leaf functions in the FBS model are decomposed into and represented with fine-grain basic operation actions (BOA), and the corresponding BOA set in the function domain is then constructed. Choosing building blocks from the database, and expressing their multiple functions with BOAs, the BOA set in the structure domain is formed. Through rule-based dynamic partition of the BOA set in the function domain, many variants of regenerated functional schemes are generated. For enhancing the capability to introduce new design variables into the conceptual design process, and dig out more innovative physical structure schemes, the indirect function-structure matching strategy based on reconstructing the combined structure schemes is adopted. By adjusting the tightness of the partition rules and the granularity of the divided BOA subsets, and making full use of the main function and secondary functions of each basic structure in the process of reconstructing the physical structures, new design variables and variants are introduced into the physical structure scheme reconstructing process, and a great number of simpler physical structure schemes to accomplish the overall function organically are figured out. The creativity enhanced conceptual design model presented has a dominant capability in introducing new design variables in the function domain and digging out simpler physical structures to accomplish the overall function, therefore it can be utilized to solve non-routine conceptual design problems.
A simple exposure-time theory for all time-nonlocal transport formulations and beyond.
NASA Astrophysics Data System (ADS)
Ginn, T. R.; Schreyer, L. G.
2016-12-01
Anomalous transport, or better put, anomalous non-transport, of solutes or flowing water or suspended colloids or bacteria etc. has been the subject of intense analyses, with multiple formulations appearing in scientific literature from hydrology to geomorphology to chemical engineering, to environmental microbiology to mathematical physics. Primary focus has recently been on time-nonlocal mass conservation formulations such as multirate mass transfer, fractional-time advection-dispersion, continuous-time random walks, and dual porosity modeling approaches, that employ a convolution with a memory function to reflect respective conceptual models of delays in transport. These approaches are effective or "proxy" ones that do not always distinguish transport from immobilization delays, are generally without connection to measurable physicochemical properties, and involve variously fractional calculus, inverse Laplace or Fourier transformations, and/or complex stochastic notions including assumptions of stationarity or ergodicity at the observation scale. Here we show a much simpler approach to time-nonlocal (non-)transport that is free of all these things, and is based on expressing the memory function in terms of a rate of mobilization of immobilized mass that is a function of the contiguous time immobilized. Our approach treats mass transfer completely independently from the transport process, and it allows specification of actual immobilization mechanisms or delays. To our surprise we found that for all practical purposes any memory function can be expressed this way, including all of those associated with the multi-rate mass transfer approaches, original power-law, different truncated power-laws, fractional-derivative, etc. More intriguing is the fact that the exposure-time approach can be used to construct heretofore unseen memory functions, e.g., forms that generate oscillating tails of breakthrough curves such as may occur in sediment transport, forms for delay-differential equations, and so on. Because the exposure-time approach is both simple and localized, it provides a promising platform for launching forays into non-Markovian and/or nonlinear processes and into upscaling age-dependent multicomponent reaction systems.
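The core idea above can be sketched numerically: specify a mobilization rate k(tau) that depends on the contiguous time tau spent immobilized and build the resulting immobile residence-time density f(tau) = k(tau) * exp(-integral of k from 0 to tau). The particular rate function and parameters below are illustrative assumptions, not the authors' calibration; a rate decaying like c/(tau0 + tau) yields the power-law-tailed behavior usually imposed directly through a memory function.

```python
"""Sketch: residence-time density generated by an exposure-time-dependent mobilization rate."""
import numpy as np

tau = np.linspace(0.0, 200.0, 20001)
c, tau0 = 0.7, 1.0
k = c / (tau0 + tau)                      # mobilization rate vs. contiguous immobile time

# Cumulative integral of k by the trapezoid rule, then the density f(tau)
cum = np.concatenate(([0.0], np.cumsum(0.5 * (k[1:] + k[:-1]) * np.diff(tau))))
f = k * np.exp(-cum)                      # immobile residence-time (memory-related) density

# Tail check: the log-log slope should approach -(1 + c) for tau >> tau0
i, j = 10000, 20000
slope = (np.log(f[j]) - np.log(f[i])) / (np.log(tau[j]) - np.log(tau[i]))
print(f"late-time log-log slope ~ {slope:.2f} (expected about {-(1 + c):.2f})")
```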
Comparison of the efficiency control of mycotoxins by some optical immune biosensors
NASA Astrophysics Data System (ADS)
Slyshyk, N. F.; Starodub, N. F.
2013-11-01
The efficiency of patulin control was compared for optical biosensors based on surface plasmon resonance (SPR) and on nano-porous silicon (nPS). In the latter case, the intensity of the immune reaction was registered by measuring the level of chemiluminescence (ChL) or the photocurrent of the nPS. The sensitivity of mycotoxin determination with the first type of immune biosensor was 0.05-10 mg/L. Approximately the same sensitivity, as well as overall analysis time, was demonstrated by the immune biosensor based on nPS. Nevertheless, the latter type of biosensor was technically simpler and the cost of analysis was lower. The nPS-based immune biosensor is therefore recommended for wide screening applications, and the SPR-based one for additional control or verification of preliminary results. In this article, special attention is given to the conditions of sample preparation for analysis, in particular mycotoxin extraction from potato and some juices. Moreover, the efficiency of the above-mentioned immune biosensors was compared with the ELISA method, a traditional approach to mycotoxin determination. From the investigation and discussion of the obtained data, it was concluded that both types of immune biosensors are able to fulfill modern practical demands with respect to the sensitivity, rapidity, simplicity, and cost of analysis.
Towards a climate-dependent paradigm of ammonia emission and deposition
Sutton, Mark A.; Reis, Stefan; Riddick, Stuart N.; Dragosits, Ulrike; Nemitz, Eiko; Theobald, Mark R.; Tang, Y. Sim; Braban, Christine F.; Vieno, Massimo; Dore, Anthony J.; Mitchell, Robert F.; Wanless, Sarah; Daunt, Francis; Fowler, David; Blackall, Trevor D.; Milford, Celia; Flechard, Chris R.; Loubet, Benjamin; Massad, Raia; Cellier, Pierre; Personne, Erwan; Coheur, Pierre F.; Clarisse, Lieven; Van Damme, Martin; Ngadi, Yasmine; Clerbaux, Cathy; Skjøth, Carsten Ambelas; Geels, Camilla; Hertel, Ole; Wichink Kruit, Roy J.; Pinder, Robert W.; Bash, Jesse O.; Walker, John T.; Simpson, David; Horváth, László; Misselbrook, Tom H.; Bleeker, Albert; Dentener, Frank; de Vries, Wim
2013-01-01
Existing descriptions of bi-directional ammonia (NH3) land–atmosphere exchange incorporate temperature and moisture controls, and are beginning to be used in regional chemical transport models. However, such models have typically applied simpler emission factors to upscale the main NH3 emission terms. While this approach has successfully simulated the main spatial patterns on local to global scales, it fails to address the environment- and climate-dependence of emissions. To handle these issues, we outline the basis for a new modelling paradigm where both NH3 emissions and deposition are calculated online according to diurnal, seasonal and spatial differences in meteorology. We show how measurements reveal a strong, but complex pattern of climatic dependence, which is increasingly being characterized using ground-based NH3 monitoring and satellite observations, while advances in process-based modelling are illustrated for agricultural and natural sources, including a global application for seabird colonies. A future architecture for NH3 emission–deposition modelling is proposed that integrates the spatio-temporal interactions, and provides the necessary foundation to assess the consequences of climate change. Based on available measurements, a first empirical estimate suggests that 5°C warming would increase emissions by 42 per cent (28–67%). Together with increased anthropogenic activity, global NH3 emissions may increase from 65 (45–85) Tg N in 2008 to reach 132 (89–179) Tg by 2100. PMID:23713128
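As a back-of-envelope check of the quoted climate sensitivity, the short sketch below (an illustration of ours, not the authors' model) converts the central estimate of a 42 per cent emission increase per 5°C of warming into an equivalent per-degree growth rate, assuming a uniform exponential temperature response.

```python
# Back-of-envelope check of the quoted NH3 climate sensitivity.
# Assumption (ours, for illustration): emissions respond exponentially to temperature,
# E(T + dT) = E(T) * (1 + f)**(dT / 5), with f = 0.42 per 5 degrees C of warming.

def per_degree_rate(increase_per_5c: float) -> float:
    """Equivalent fractional increase per degree C for a uniform exponential response."""
    return (1.0 + increase_per_5c) ** (1.0 / 5.0) - 1.0

for f in (0.28, 0.42, 0.67):  # lower bound, central estimate, upper bound from the abstract
    print(f"{f:.0%} per 5 C  ->  {per_degree_rate(f):.1%} per degree C")
# The central estimate works out to roughly 7% per degree C of warming.
```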
A group evolving-based framework with perturbations for link prediction
NASA Astrophysics Data System (ADS)
Si, Cuiqi; Jiao, Licheng; Wu, Jianshe; Zhao, Jin
2017-06-01
Link prediction is a ubiquitous task in many fields that uses partially observed information to predict the absence or presence of links between node pairs. The study of group evolution provides reasonable explanations of the behaviors of nodes, the relations between nodes, and community formation in a network. Possible events in group evolution include continuing, growing, splitting, forming and so on. The changes observed in networks are to some extent the result of these events. In this work, we present a group-evolution-based characterization of nodes' behavioral patterns, via which we can estimate the probability that they will interact. In general, the primary aim of this paper is to offer a minimal toy model for detecting missing links based on the evolution of groups and to give a simpler explanation of the rationale behind the model. We first introduce perturbations into networks to obtain stable cluster structures, and these stable clusters determine the stability of each node. Then fluctuations, another node behavior, are estimated from the participation of each node in its own group. Finally, we demonstrate that such characteristics allow us to predict link existence and propose a model for link prediction that outperforms many classical methods, with decreasing computational time at large scales. Encouraging experimental results obtained on real networks show that our approach can effectively predict missing links in networks, and even when nearly 40% of the edges are missing, it retains stable performance.
Chaining for Flexible and High-Performance Key-Value Systems
2012-09-01
store that is fault tolerant, achieves high performance and availability, and offers strong data consistency? We present a new replication protocol...effective high performance data access and analytics, many sites use simpler data model "NoSQL" systems. These systems store and retrieve data only by...DRAM, Flash, and disk-based storage; can act as an unreliable cache or a durable store; and can offer strong or weak data consistency. The value of
Mechanical alloying of a hydrogenation catalyst used for the remediation of contaminated compounds
NASA Technical Reports Server (NTRS)
Quinn, Jacqueline W. (Inventor); Geiger, Cherie L. (Inventor); Aitken, Brian S. (Inventor); Clausen, Christian A. (Inventor)
2012-01-01
A hydrogenation catalyst including a base material coated with a catalytic metal is made using mechanical milling techniques. The hydrogenation catalysts are used as an excellent catalyst for the dehalogenation of contaminated compounds and the remediation of other industrial compounds. Preferably, the hydrogenation catalyst is a bimetallic particle including zero-valent metal particles coated with a catalytic material. The mechanical milling technique is simpler and cheaper than previously used methods for producing hydrogenation catalysts.
VizieR Online Data Catalog: Bayesian method for detecting stellar flares (Pitkin+, 2014)
NASA Astrophysics Data System (ADS)
Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.
2015-05-01
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N. (1 data file).
A Bayesian method for detecting stellar flares
NASA Astrophysics Data System (ADS)
Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.
2014-12-01
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of `quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
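The signal model described above (a half-Gaussian rise followed by an exponential decay, on top of a polynomial background) is simple enough to write down directly. The sketch below is our illustrative parameterization, assuming amplitude, peak time, rise width, and decay timescale as parameter names rather than the authors' exact ones.

```python
import numpy as np

def flare_model(t, amplitude, t_peak, sigma_rise, tau_decay, poly_coeffs=()):
    """Illustrative flare light-curve model: half-Gaussian rise, exponential decay,
    plus a polynomial background (coefficients given in increasing order of power)."""
    t = np.asarray(t, dtype=float)
    flare = np.where(
        t < t_peak,
        amplitude * np.exp(-0.5 * ((t - t_peak) / sigma_rise) ** 2),  # half-Gaussian rise
        amplitude * np.exp(-(t - t_peak) / tau_decay),                # exponential decay
    )
    background = np.polynomial.polynomial.polyval(t, list(poly_coeffs)) if poly_coeffs else 0.0
    return flare + background

# Example: a flare peaking at t = 2.0 days on a gently sloping background.
t = np.linspace(0.0, 10.0, 500)
lc = flare_model(t, amplitude=1.0, t_peak=2.0, sigma_rise=0.05, tau_decay=0.5,
                 poly_coeffs=(1.0, 0.01))
```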
Novel Antigens for enterotoxigenic Escherichia coli (ETEC) Vaccines
Fleckenstein, James M.; Sheikh, Alaullah; Qadri, Firdausi
2014-01-01
Enterotoxigenic Escherichia coli (ETEC) are the most common bacterial pathogens causing diarrhea in developing countries, where they cause hundreds of thousands of deaths, mostly in children. These organisms are a leading cause of diarrheal illness in travelers to endemic countries. ETEC pathogenesis, and consequently vaccine approaches, have largely focused on plasmid-encoded enterotoxins or fimbrial colonization factors. To date these approaches have not yielded a broadly protective vaccine. However, recent studies suggest that ETEC pathogenesis is more complex than previously appreciated and involves additional plasmid- and chromosomally-encoded virulence molecules that can be targeted in vaccines. Here, we review recent novel antigen discovery efforts, the potential contribution of these proteins to the molecular pathogenesis of ETEC and to protective immunity, and the potential implications for the development of next-generation vaccines for these important pathogens. These proteins may help to improve the effectiveness of future vaccines by making them simpler and possibly broadly protective because of their conserved nature. PMID:24702311
NASA Astrophysics Data System (ADS)
Chen, Gongdai; Deng, Hongchang; Yuan, Libo
2018-07-01
Aiming at a more compact, flexible, and simpler core-to-fiber coupling approach, optimal combinations of two graded refractive index (GRIN) lenses have been demonstrated for the interconnection between a twin-core single-mode fiber and two single-core single-mode fibers. The optimal two-lens combinations achieve efficient core-to-fiber separating coupling and allow the fibers and lenses to be assembled coaxially. Finally, axial deviations and transverse displacements of the components are discussed; the latter increase the coupling loss more significantly. The gap length between the two lenses is designed to be fine-tuned to compensate for the transverse displacement, and the good linear compensation relationship aids device manufacturing. This approach has potential applications in low-coupling-loss, low-crosstalk devices without sophisticated alignment and adjustment, and enables channel separation for multicore fibers.
A decomposition approach to the design of a multiferroic memory bit
NASA Astrophysics Data System (ADS)
Acevedo, Ruben; Liang, Cheng-Yen; Carman, Gregory P.; Sepulveda, Abdon E.
2017-06-01
The objective of this paper is to present a methodology for the design of a memory bit to minimize the energy required to write data at the bit level. By straining a ferromagnetic nickel nano-dot by means of a piezoelectric substrate, its magnetization vector rotates between two stable states defined as a 1 and 0 for digital memory. The memory bit geometry, actuation mechanism and voltage control law were used as design variables. The approach used was to decompose the overall design process into simpler sub-problems whose structure can be exploited for a more efficient solution. This method minimizes the number of fully dynamic coupled finite element analyses required to converge to a near optimal design, thus decreasing the computational time for the design process. An in-plane sample design problem is presented to illustrate the advantages and flexibility of the procedure.
Macrostructure from Microstructure: Generating Whole Systems from Ego Networks
Smith, Jeffrey A.
2014-01-01
This paper presents a new simulation method to make global network inference from sampled data. The proposed simulation method takes sampled ego network data and uses Exponential Random Graph Models (ERGM) to reconstruct the features of the true, unknown network. After describing the method, the paper presents two validity checks of the approach: the first uses the 20 largest Add Health networks while the second uses the Sociology Coauthorship network in the 1990's. For each test, I take random ego network samples from the known networks and use my method to make global network inference. I find that my method successfully reproduces the properties of the networks, such as distance and main component size. The results also suggest that simpler, baseline models provide considerably worse estimates for most network properties. I end the paper by discussing the bounds/limitations of ego network sampling. I also discuss possible extensions to the proposed approach. PMID:25339783
An issue of literacy on pediatric arterial hypertension
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena; Romana, Andreia; Simão, Carla
2017-11-01
Arterial hypertension in pediatric age is a public health problem whose prevalence has increased significantly over time. Pediatric arterial hypertension (PAH) is under-diagnosed in most cases; it is a highly prevalent disease that appears without notice, with multiple consequences for children's health and for the adults they will become. Children's caregivers and close family must know of the existence of PAH, the negative consequences associated with it, and the risk factors and, finally, must practice prevention. In [12, 13] a statistical data analysis can be found that uses a simpler questionnaire, introduced in [4], with the aim of a preliminary study of PAH caregivers' awareness. A continuation of that analysis is detailed in [14]. An extended version of the questionnaire was built, applied to a distinct population, and filled in online. The statistical approach is partially reproduced in the present work. Several statistical models were estimated using different approaches, namely multivariate analysis (factor analysis), as well as methods adequate for the kind of data under study.
Crystallization of bovine insulin on a flow-free droplet-based platform
NASA Astrophysics Data System (ADS)
Chen, Fengjuan; Du, Guanru; Yin, Di; Yin, Ruixue; Zhang, Hongbo; Zhang, Wenjun; Yang, Shih-Mo
2017-03-01
Crystallization is an important process in the pharmaceutical manufacturing industry. In this work, we report a study to create zinc-free crystals of bovine insulin on a flow-free droplet-based platform we previously developed. The benefit of this platform is its promise to create a single type of crystal in a simpler and more stable environment and with high throughput. The experimental results show that the bovine insulin forms a rhombic dodecahedral shape and that the coefficient of variation (CV) of the crystal size is less than 5%. These results are very promising for insulin production.
Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids
NASA Technical Reports Server (NTRS)
Liu, Yen; Vinokur, Marcel; Wang, Z. J.
2004-01-01
A new, high-order, conservative, and efficient method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and Spectral Volume (SV) methods, but while those methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. A discussion of the Discontinuous Spectral Difference (SD) method, the locations of the unknowns and flux points, and numerical results are also presented.
Recent New Ideas and Directions for Space-Based Nulling Interferometry
NASA Technical Reports Server (NTRS)
Serabyn, Eugene (Gene)
2004-01-01
This document is composed of two viewgraph presentations. The first is entitled "Recent New Ideas and Directions for Space-Based Nulling Interferometry." It reviews our understanding of interferometry compared to a year or so ago: (1) simpler options have been identified; (2) a degree of flexibility is possible, allowing switching (or degradation) between some options; (3) it is not necessary to define every component to the exclusion of all other possibilities; and (4) MIR fibers are becoming a reality. The second, entitled "The Fiber Nuller," reviews the idea of combining beams in a fiber instead of at a beamsplitter.
The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava
2016-08-01
This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different-order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structures. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and over a previously used, simpler balancing method.
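To make the load-balancing idea concrete, here is a minimal sketch (ours, not PSC's actual implementation) of assigning 2-D patches to ranks by ordering them along a Morton (Z-order) space-filling curve and then splitting the ordered list into contiguous chunks of roughly equal weight; all names and the greedy splitting rule are illustrative assumptions.

```python
def morton_index(ix: int, iy: int, bits: int = 16) -> int:
    """Interleave the bits of the patch coordinates to get a Z-order (Morton) index."""
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (2 * b)
        code |= ((iy >> b) & 1) << (2 * b + 1)
    return code

def balance_patches(patches, weights, n_ranks):
    """Assign patches (given as (ix, iy) grid coordinates) to ranks along the Morton
    curve so that each rank gets a contiguous curve segment of similar total weight."""
    order = sorted(range(len(patches)), key=lambda i: morton_index(*patches[i]))
    target = sum(weights) / n_ranks
    assignment, rank, acc = [0] * len(patches), 0, 0.0
    for i in order:
        if acc >= target and rank < n_ranks - 1:  # close this rank's segment
            rank, acc = rank + 1, 0.0
        assignment[i] = rank
        acc += weights[i]
    return assignment

# Example: a 4x4 grid of patches, the left half weighted twice as heavily, split over 4 ranks.
patches = [(ix, iy) for iy in range(4) for ix in range(4)]
weights = [2.0 if ix < 2 else 1.0 for (ix, iy) in patches]
print(balance_patches(patches, weights, n_ranks=4))
```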
Precision Laser Development for Interferometric Space Missions NGO, SGO, and GRACE Follow-On
NASA Technical Reports Server (NTRS)
Numata, Kenji; Camp, Jordan
2011-01-01
Optical fiber and semiconductor laser technologies have evolved dramatically over the last decade due to the increased demands from optical communications. We are developing a laser (master oscillator) and optical amplifier based on those technologies for interferometric space missions, including the gravitational-wave missions NGO/SGO (formerly LISA) and the climate monitoring mission GRACE Follow-On, by fully utilizing mature wave-guided optics technologies. In space, where a simpler and more reliable system is preferred, wave-guided components are advantageous over bulk, crystal-based, free-space lasers such as the NPRO (Nonplanar Ring Oscillator) and bulk-crystal amplifiers.
Monnard, Pierre-Alain
2016-01-01
Cellular life is based on interacting polymer networks that serve as catalysts, genetic information and structural molecules. The complexity of the DNA, RNA and protein biochemistry suggests that it must have been preceded by simpler systems. The RNA world hypothesis proposes RNA as the prime candidate for such a primal system. Even though this proposition has gained currency, its investigations have highlighted several challenges with respect to bulk aqueous media: (1) the synthesis of RNA monomers is difficult; (2) efficient pathways for monomer polymerization into functional RNAs and their subsequent, sequence-specific replication remain elusive; and (3) the evolution of the RNA function towards cellular metabolism in isolation is questionable in view of the chemical mixtures expected on the early Earth. This review will address the question of the possible roles of heterogeneous media and catalysis as drivers for the emergence of RNA-based polymer networks. We will show that this approach to non-enzymatic polymerizations of RNA from monomers and RNA evolution can not only solve some issues encountered during reactions in bulk aqueous solutions, but may also explain the co-emergence of the various polymers indispensable for life in complex mixtures and their organization into primitive networks. PMID:27827919
Change-based threat detection in urban environments with a forward-looking camera
NASA Astrophysics Data System (ADS)
Morton, Kenneth, Jr.; Ratto, Christopher; Malof, Jordan; Gunter, Michael; Collins, Leslie; Torrione, Peter
2012-06-01
Roadside explosive threats continue to pose a significant risk to soldiers and civilians in conflict areas around the world. These objects are easy to manufacture and procure, but due to their ad hoc nature, they are difficult to reliably detect using standard sensing technologies. Although large roadside explosive hazards may be difficult to conceal in rural environments, urban settings provide a much more complicated background where seemingly innocuous objects (e.g., piles of trash, roadside debris) may be used to obscure threats. Since direct detection of all innocuous objects would flag too many objects to be of use, techniques must be employed to reduce the number of alarms generated and highlight only a limited subset of possibly threatening regions for the user. In this work, change detection techniques are used to reduce false alarm rates and increase detection capabilities for possible threat identification in urban environments. The proposed model leverages data from multiple video streams collected over the same regions by first applying video alignment and then using various distance metrics to detect changes based on image keypoints in the video streams. Data collected at an urban warfare simulation range at an Eastern US test site were used to evaluate the proposed approach, and significant reductions in false alarm rates compared to simpler techniques are illustrated.
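The following fragment illustrates only the general flavor of keypoint-based change scoring between two pre-aligned frames; it is a sketch of ours using ORB features in OpenCV, not the authors' pipeline or distance metrics, and the threshold and parameter values are assumptions.

```python
import cv2
import numpy as np

def change_score(frame_ref: np.ndarray, frame_new: np.ndarray, max_hamming: int = 40) -> float:
    """Crude illustrative change score between two aligned grayscale frames:
    the fraction of reference keypoints with no good descriptor match in the new frame."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(frame_ref, None)
    kp2, des2 = orb.detectAndCompute(frame_new, None)
    if des1 is None or des2 is None or len(kp1) == 0:
        return 1.0  # nothing to compare: treat as maximal change
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    good = [m for m in matches if m.distance < max_hamming]
    return 1.0 - len(good) / len(kp1)

# Regions whose score exceeds a chosen threshold would be flagged for closer inspection.
```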
Chiarelli, Antonio M.; Maclin, Edward L.; Low, Kathy A.; Fantini, Sergio; Fabiani, Monica; Gratton, Gabriele
2017-01-01
Near infrared (NIR) light has been widely used for measuring changes in hemoglobin concentration in the human brain (functional NIR spectroscopy, fNIRS). fNIRS is based on the differential measurement and estimation of absorption perturbations, which, in turn, are based on correctly estimating the absolute parameters of light propagation. To do so, it is essential to accurately characterize the baseline optical properties of tissue (absorption and reduced scattering coefficients). However, because of the diffusive properties of the medium, separate determination of absorption and scattering across the head is challenging. The effective attenuation coefficient (EAC), which is proportional to the geometric mean of absorption and reduced scattering coefficients, can be estimated in a simpler fashion by multidistance light decay measurements. EAC mapping could be of interest for the scientific community because of its absolute information content, and because light propagation is governed by the EAC for source–detector distances exceeding 1 cm, which sense depths extending beyond the scalp and skull layers. Here, we report an EAC mapping procedure that can be applied to standard fNIRS recordings, yielding topographic maps with 2- to 3-cm resolution. Application to human data indicates the importance of venous sinuses in determining regional EAC variations, a factor often overlooked. PMID:28466026
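As an illustration of the multidistance idea, the sketch below estimates an effective attenuation coefficient from a simulated decay curve, assuming the commonly used diffusion-theory approximation in which the detected intensity falls roughly as exp(-EAC·r)/r². This is a stand-in of ours, not the authors' mapping procedure, and the optical-property values are synthetic.

```python
import numpy as np

def eac_from_multidistance(r_cm: np.ndarray, intensity: np.ndarray) -> float:
    """Estimate the effective attenuation coefficient (EAC, 1/cm) from multidistance
    CW measurements, assuming I(r) ~ exp(-EAC * r) / r**2, so that
    ln(r^2 * I) is linear in r with slope -EAC."""
    y = np.log(r_cm ** 2 * intensity)
    slope, _intercept = np.polyfit(r_cm, y, 1)
    return -slope

# Synthetic example (illustrative values only): mu_a = 0.1 /cm, mu_s' = 10 /cm.
mu_a, mu_sp = 0.1, 10.0
eac_true = np.sqrt(3.0 * mu_a * (mu_a + mu_sp))   # ~1.74 /cm
r = np.linspace(1.0, 4.0, 7)                      # source-detector distances in cm
signal = np.exp(-eac_true * r) / r ** 2
print(eac_from_multidistance(r, signal))          # recovers ~1.74 /cm
```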
Energy-Efficient ZigBee-Based Wireless Sensor Network for Track Bicycle Performance Monitoring
Gharghan, Sadik K.; Nordin, Rosdiadee; Ismail, Mahamod
2014-01-01
In a wireless sensor network (WSN), saving power is a vital requirement. In this paper, a simple point-to-point bike WSN was considered. The data of bike parameters, speed and cadence, were monitored and transmitted via a wireless communication based on the ZigBee protocol. Since the bike parameters are monitored and transmitted on every bike wheel rotation, this means the sensor node does not sleep for a long time, causing power consumption to rise. Therefore, a newly proposed algorithm, known as the Redundancy and Converged Data (RCD) algorithm, was implemented for this application to put the sensor node into sleep mode while maintaining the performance measurements. This is achieved by minimizing the data packets transmitted as much as possible and fusing the data of speed and cadence by utilizing the correlation measurements between them to minimize the number of sensor nodes in the network to one node, which results in reduced power consumption, cost, and size, in addition to simpler hardware implementation. Execution of the proposed RCD algorithm shows that this approach can reduce the current consumption to 1.69 mA, and save 95% of the sensor node energy. Also, the comparison results with different wireless standard technologies demonstrate minimal current consumption in the sensor node. PMID:25153141
Graham, Stephen M
2017-01-01
The treatment of infection with Mycobacterium tuberculosis in young children is supported by universal policy based on strong rationale and evidence of effectiveness, but has rarely been implemented in tuberculosis endemic countries. Areas covered: This review highlights a number of important recent developments that provide an unprecedented opportunity to close the policy-practice gap, as well as ongoing needs to facilitate implementation under programmatic conditions and scale-up. Expert commentary: The WHO's End TB Strategy and Stop TB Partnership's Plan to End TB provide ambitious targets for prevention at a time when National Tuberculosis Programs in tuberculosis endemic countries are increasing attention to the challenges of management and prevention of tuberculosis disease in children. This opportunity is greatly enhanced by recent evidence of the effectiveness of shorter, simpler and safer regimens to treat tuberculosis infection. The scale of the challenge for implementation will require a decentralized, integrated, community-based approach. An accurate and low-cost point-of-care test for tuberculous infection would be a major advance to support such implementation. Specific guidance for the treatment of infection in young child contacts of multidrug-resistant tuberculosis cases is a major current need while awaiting further evidence.
Dental radiographic indicators, a key to age estimation
Panchbhai, AS
2011-01-01
Objective The present review article is aimed at describing the radiological methods utilized for human age identification. Methods The application and importance of radiological methods in human age assessment was discussed through the literature survey. Results Following a literature search, 46 articles were included in the study and the relevant information is depicted in the article. Dental tissue is often preserved indefinitely after death. Implementation of radiography is based on the assessment of the extent of calcification of teeth and in turn the degree of formation of crown and root structures, along with the sequence and the stages of eruption. Several radiological techniques can be used to assist in both individual and general identification, including determination of gender, ethnic group and age. The radiographic method is a simpler and cheaper method of age identification compared with histological and biochemical methods. Radiographic and tomographic images have become an essential aid for human identification in forensic dentistry, particularly with the refinement of techniques and the incorporation of information technology resources. Conclusion Based on an appropriate knowledge of the available methods, forensic dentists can choose the most appropriate one, since the validity of age estimation crucially depends on the method used and its proper application. The multifactorial approach will lead to optimum age assessment. The legal requirements also have to be considered. PMID:21493876
Theory and methodology for utilizing genes as biomarkers to determine potential biological mixtures.
Shrestha, Sadeep; Smith, Michael W; Beaty, Terri H; Strathdee, Steffanie A
2005-01-01
Genetically determined mixture information can be used as a surrogate for physical or behavioral characteristics in epidemiological studies examining research questions related to socially stigmatized behaviors and horizontally transmitted infections. A new measure, the probability of mixture discrimination (PMD), was developed to aid mixture analysis that estimates the ability to differentiate single from multiple genomes in biological mixtures. Four autosomal short tandem repeats (STRs) were identified, genotyped and evaluated in African American, European American, Hispanic, and Chinese individuals to estimate PMD. Theoretical PMD frameworks were also developed for autosomal and sex-linked (X and Y) STR markers in potential male/male, male/female and female/female mixtures. Autosomal STRs genetically determine the presence of multiple genomes in mixture samples of unknown genders with more power than the apparently simpler X and Y chromosome STRs. Evaluation of four autosomal STR loci enables the detection of mixtures of DNA from multiple sources with above 99% probability in all four racial/ethnic populations. The genetic-based approach has applications in epidemiology that provide viable alternatives to survey-based study designs. The analysis of genes as biomarkers can be used as a gold standard for validating measurements from self-reported behaviors that tend to be sensitive or socially stigmatizing, such as those involving sex and drugs.
Reverse engineering and analysis of large genome-scale gene networks
Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas
2013-01-01
Reverse engineering the whole-genome networks of complex multicellular organisms continues to remain a challenge. While simpler models easily scale to large numbers of genes and gene expression datasets, more accurate models are compute intensive, limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed the Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) a B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) the development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software tool, Gene Network Analyzer (GeNA), for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
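For readers unfamiliar with MI-based network inference, here is a deliberately simplified stand-in: a histogram-based mutual information estimate with a permutation test. It illustrates the general idea only and is not TINGe's B-spline formulation, its direct permutation-testing algorithm, or its parallel implementation; all names and defaults are assumptions.

```python
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 10) -> float:
    """Histogram-based mutual information estimate between two expression profiles
    (a simple stand-in for a B-spline formulation)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def permutation_pvalue(x, y, n_perm: int = 200, rng=None) -> float:
    """Permutation test: how often does shuffling one profile give MI at least as large?"""
    rng = np.random.default_rng(rng)
    observed = mutual_information(x, y)
    null = [mutual_information(x, rng.permutation(y)) for _ in range(n_perm)]
    return (1 + sum(m >= observed for m in null)) / (1 + n_perm)
```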
Bayes factors for the linear ballistic accumulator model of decision-making.
Evans, Nathan J; Brown, Scott D
2018-04-01
Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method to make inferences about different versions of the models, which assume that different parameters cause the observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute-force integration, we exploit general-purpose graphics processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
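The LBA likelihood itself is involved, so the sketch below illustrates only the core idea of brute-force Monte-Carlo integration of the marginal likelihood over the prior, using a toy Gaussian stand-in likelihood; the real method applies the same idea with the LBA likelihood on GPUs. Names, priors, and the toy data are assumptions for illustration.

```python
import numpy as np

def log_marginal_likelihood(log_likelihood, prior_sampler, data, n_samples=100_000, rng=None):
    """Monte-Carlo estimate of log p(data | model) = log E_prior[ p(data | theta) ],
    averaging the likelihood over draws from the prior (log-sum-exp for stability)."""
    rng = np.random.default_rng(rng)
    logs = np.array([log_likelihood(data, theta)
                     for theta in prior_sampler(n_samples, rng)])
    m = logs.max()
    return m + np.log(np.mean(np.exp(logs - m)))

# Toy stand-in (not the LBA likelihood): Gaussian data with unknown mean, N(0, 1) prior.
def log_lik(data, mu):
    return float(np.sum(-0.5 * (data - mu) ** 2 - 0.5 * np.log(2 * np.pi)))

def prior(n, rng):
    return rng.normal(0.0, 1.0, size=n)

data = np.array([0.3, -0.1, 0.4, 0.2])
# Bayes factor of the free-mean model against a point-null model with mu fixed at 0.
log_bf = log_marginal_likelihood(log_lik, prior, data) - log_lik(data, 0.0)
print(np.exp(log_bf))
```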
NASA Astrophysics Data System (ADS)
Toman, Blaza; Nelson, Michael A.; Bedner, Mary
2017-06-01
Chemical measurement methods are designed to promote accurate knowledge of a measurand or system. As such, these methods often allow elicitation of latent sources of variability and correlation in experimental data. They typically implement measurement equations that support quantification of effects associated with calibration standards and other known or observed parametric variables. Additionally, multiple samples and calibrants are usually analyzed to assess the accuracy of the measurement procedure and repeatability by the analyst. Thus, a realistic assessment of uncertainty for most chemical measurement methods is not purely bottom-up (based on the measurement equation) or top-down (based on the experimental design), but inherently contains elements of both. Confidence in results must be rigorously evaluated for the sources of variability in all of the bottom-up and top-down elements. This type of analysis presents unique challenges due to various statistical correlations among the outputs of measurement equations. One approach is to use a Bayesian hierarchical (BH) model, which is intrinsically rigorous and thus a straightforward method for use with complex experimental designs, particularly when correlations among data are numerous and difficult to elucidate or explicitly quantify. In simpler cases, careful analysis using GUM Supplement 1 (MC) methods augmented with random-effects meta-analysis yields results similar to a full BH model analysis. In this article we describe both approaches to rigorous uncertainty evaluation, using as examples measurements of 25-hydroxyvitamin D3 in solution reference materials via liquid chromatography with UV absorbance detection (LC-UV) and liquid chromatography with mass spectrometric detection using isotope dilution (LC-IDMS).
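To illustrate the "simpler cases" route, here is a minimal GUM-Supplement-1-style Monte Carlo propagation sketch for a generic single-point calibration measurement equation. The equation, the distributions, and all numerical values are assumptions for illustration, not the LC-UV or LC-IDMS models of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Illustrative measurement equation for a single-point calibration:
#   c_sample = c_standard * (A_sample / A_standard) * dilution_factor
c_std = rng.normal(1.000, 0.004, N)    # standard concentration, relative std. unc. 0.4%
a_smp = rng.normal(0.985, 0.006, N)    # sample peak area (arbitrary units)
a_std = rng.normal(1.000, 0.006, N)    # standard peak area
dil   = rng.uniform(0.998, 1.002, N)   # dilution factor, rectangular distribution

c_sample = c_std * (a_smp / a_std) * dil

print(f"estimate             = {c_sample.mean():.4f}")
print(f"standard uncertainty = {c_sample.std(ddof=1):.4f}")
print("95% coverage interval =", np.percentile(c_sample, [2.5, 97.5]).round(4))
```

A full Bayesian hierarchical analysis would additionally place explicit priors and between-run random effects on these quantities rather than propagating fixed distributions.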
Searching for the definition of macrosomia through an outcome-based approach.
Ye, Jiangfeng; Zhang, Lin; Chen, Yan; Fang, Fang; Luo, ZhongCheng; Zhang, Jun
2014-01-01
Macrosomia has been defined in various ways by obstetricians and researchers. The purpose of the present study was to search for a definition of macrosomia through an outcome-based approach. In a study of 30,831,694 singleton term live births and 38,053 stillbirths in the U.S. Linked Birth-Infant Death Cohort datasets (1995-2004), we compared the occurrence of stillbirth, neonatal death, and 5-min Apgar score less than four in subgroups of birthweight (4000-4099 g, 4100-4199 g, 4200-4299 g, 4300-4399 g, 4400-4499 g, 4500-4999 g vs. reference group 3500-4000 g) and of birthweight percentile for gestational age (90th-94th percentile, 95th-96th, and ≥ 97th percentile, vs. reference group 75th-90th percentile). There was no significant increase in adverse perinatal outcomes until birthweight exceeded the 97th percentile. Weight-specific odds ratios (ORs) rose substantially, to 2, when birthweight exceeded 4500 g in Whites. In Blacks and Hispanics, the adjusted ORs (aORs) exceeded 2 for 5-min Apgar less than four when birthweight exceeded 4300 g. For vaginal deliveries, the aORs of perinatal morbidity and mortality were larger for most of the subgroups, but the patterns remained the same. A birthweight greater than 4500 g in Whites, or 4300 g in Blacks and Hispanics, regardless of gestational age, is the optimal threshold to define macrosomia. A birthweight greater than the 97th percentile for a given gestational age, irrespective of race, is also reasonable for defining macrosomia. The former may be more clinically useful and simpler to apply.
Close-range laser scanning in forests: towards physically based semantics across scales.
Morsdorf, F; Kükenbrink, D; Schneider, F D; Abegg, M; Schaepman, M E
2018-04-06
Laser scanning with its unique measurement concept holds the potential to revolutionize the way we assess and quantify three-dimensional vegetation structure. Modern laser systems used at close range, be it on terrestrial, mobile or unmanned aerial platforms, provide dense and accurate three-dimensional data whose information just waits to be harvested. However, the transformation of such data to information is not as straightforward as for airborne and space-borne approaches, where typically empirical models are built using ground truth of target variables. Simpler variables, such as diameter at breast height, can be readily derived and validated. More complex variables, e.g. leaf area index, need a thorough understanding and consideration of the physical particularities of the measurement process and semantic labelling of the point cloud. Quantified structural models provide a framework for such labelling by deriving stem and branch architecture, a basis for many of the more complex structural variables. The physical information of the laser scanning process is still underused and we show how it could play a vital role in conjunction with three-dimensional radiative transfer models to shape the information retrieval methods of the future. Using such a combined forward and physically based approach will make methods robust and transferable. In addition, it avoids replacing observer bias from field inventories with instrument bias from different laser instruments. Still, an intensive dialogue with the users of the derived information is mandatory to potentially re-design structural concepts and variables so that they profit most of the rich data that close-range laser scanning provides.
Proteomic profiling of early degenerative retina of RCS rats
Zhu, Zhi-Hong; Fu, Yan; Weng, Chuan-Huang; Zhao, Cong-Jian; Yin, Zheng-Qin
2017-01-01
AIM To identify the underlying cellular and molecular changes in retinitis pigmentosa (RP). METHODS Label-free quantification-based proteomics analysis, with the advantages of being more economical and involving simpler procedures, has been used with increasing frequency in modern biological research. Dystrophic RCS rats, the first laboratory animal model for the study of RP, show a pathological course similar to that of humans with the disease. We therefore employed a comparative proteomics approach for in-depth proteome profiling of retinas from dystrophic RCS rats and non-dystrophic congenic controls through Linear Trap Quadrupole-Orbitrap MS/MS, to identify the significantly differentially expressed proteins (DEPs). Bioinformatics analyses, including Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway annotation and upstream regulatory analysis, were then performed on these retinal proteins. Finally, a Western blotting experiment was carried out to verify the difference in the abundance of the transcription factor E2F1. RESULTS In this study, we identified a total of 2375 protein groups from the retinal protein samples of RCS rats and non-dystrophic congenic controls. Four hundred and thirty-four significant DEPs were selected by Student's t-test. Based on the results of the bioinformatics analysis, we identified mitochondrial dysfunction and the transcription factor E2F1 as key initiation factors in the early retinal degenerative process. CONCLUSION We showed that mitochondrial dysfunction and the transcription factor E2F1 contribute substantially to the disease etiology of RP. The results provide a new potential therapeutic approach for this retinal degenerative disease. PMID:28730077
Rupp, K; Jungemann, C; Hong, S-M; Bina, M; Grasser, T; Jüngel, A
The Boltzmann transport equation is commonly considered to be the best semi-classical description of carrier transport in semiconductors, providing precise information about the distribution of carriers with respect to time (one dimension), location (three dimensions), and momentum (three dimensions). However, numerical solutions for the seven-dimensional carrier distribution functions are very demanding. The most common solution approach is the stochastic Monte Carlo method, because the gigabytes of memory required by deterministic direct solution approaches have not been available until recently. As a remedy, the higher accuracy provided by solutions of the Boltzmann transport equation is often exchanged for lower computational expense by using simpler models based on macroscopic quantities such as carrier density and mean carrier velocity. Recent developments in the deterministic spherical harmonics expansion method have reduced the computational cost of solving the Boltzmann transport equation, enabling the computation of carrier distribution functions even for spatially three-dimensional device simulations within minutes to hours. We summarize recent progress for the spherical harmonics expansion method and show that small currents, reasonable execution times, and rare events such as low-frequency noise, which are all hard or even impossible to simulate with the established Monte Carlo method, can be handled in a straightforward manner. The applicability of the method to important practical applications is demonstrated for noise simulation, small-signal analysis, hot-carrier degradation, and avalanche breakdown.
The hypergraph regularity method and its applications
Rödl, V.; Nagle, B.; Skokan, J.; Schacht, M.; Kohayakawa, Y.
2005-01-01
Szemerédi's regularity lemma asserts that every graph can be decomposed into relatively few random-like subgraphs. This random-like behavior enables one to find and enumerate subgraphs of a given isomorphism type, yielding the so-called counting lemma for graphs. The combined application of these two lemmas is known as the regularity method for graphs and has proved useful in graph theory, combinatorial geometry, combinatorial number theory, and theoretical computer science. Here, we report on recent advances in the regularity method for k-uniform hypergraphs, for arbitrary k ≥ 2. This method, purely combinatorial in nature, gives alternative proofs of density theorems originally due to E. Szemerédi, H. Furstenberg, and Y. Katznelson. Further results in extremal combinatorics also have been obtained with this approach. The two main components of the regularity method for k-uniform hypergraphs, the regularity lemma and the counting lemma, have been obtained recently: Rödl and Skokan (based on earlier work of Frankl and Rödl) generalized Szemerédi's regularity lemma to k-uniform hypergraphs, and Nagle, Rödl, and Schacht succeeded in proving a counting lemma accompanying the Rödl–Skokan hypergraph regularity lemma. The counting lemma is proved by reducing the counting problem to a simpler one previously investigated by Kohayakawa, Rödl, and Skokan. Similar results were obtained independently by W. T. Gowers, following a different approach. PMID:15919821
3D printing of nano- and micro-structures
NASA Astrophysics Data System (ADS)
Ramasamy, Mouli; Varadan, Vijay K.
2016-04-01
Additive manufacturing or 3D printing techniques are being vigorously investigated as a replacement for traditional and conventional fabrication methods, to bring forth cost- and time-effective approaches. The introduction of 3D printing has led to the printing of micro- and nanoscale structures, including tissues and organelles, bioelectric sensors and devices, artificial bones and transplants, microfluidic devices, batteries and various other biomaterials. Various microfabrication processes have been developed to fabricate micro components and assemblies at lab scale. 3D fabrication processes that can accommodate the functional and geometrical requirements to realize complicated structures are becoming feasible through advances in additive manufacturing. This advancement could lead to simpler development mechanisms for novel components and devices exhibiting complex features. For instance, the development of microstructure electrodes that can penetrate the epidermis of the skin to collect biopotential signals may prove more effective than electrodes that measure the signal from the skin's surface. The micro- and nanostructures will have to possess extraordinary material and mechanical properties for their dexterity in these applications. A substantial amount of research is being pursued on stretchable and flexible devices based on PDMA, textiles, and organic electronics. Despite the numerous advantages these substrates and techniques offer on their own, 3D printing enables a multi-dimensional approach towards finer and more complex applications. This review emphasizes the use of 3D printing to fabricate micro- and nanostructures that can be applied to human healthcare.
Feature Selection Methods for Zero-Shot Learning of Neural Activity
Caceres, Carlos A.; Roos, Matthew J.; Rupp, Kyle M.; Milsap, Griffin; Crone, Nathan E.; Wolmetz, Michael E.; Ratto, Christopher R.
2017-01-01
Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy. PMID:28690513
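As an illustration of what a correlation-based stability criterion can look like, the sketch below scores each feature by how consistently its response profile across stimuli reproduces between odd and even repetitions and keeps the top-k. This split-half formulation is one simple way of ours to operationalize "stability"; the exact criterion used in the cited studies may differ, and the array layout is an assumption.

```python
import numpy as np

def stability_scores(responses: np.ndarray) -> np.ndarray:
    """responses: array of shape (n_repetitions, n_stimuli, n_features).
    Score each feature by the correlation of its stimulus profile between the
    averages of odd and even repetitions (one simple notion of 'stability')."""
    odd = responses[0::2].mean(axis=0)    # (n_stimuli, n_features)
    even = responses[1::2].mean(axis=0)
    odd_c = odd - odd.mean(axis=0)
    even_c = even - even.mean(axis=0)
    num = (odd_c * even_c).sum(axis=0)
    den = np.sqrt((odd_c ** 2).sum(axis=0) * (even_c ** 2).sum(axis=0)) + 1e-12
    return num / den

def select_stable_features(responses: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k most stable features."""
    return np.argsort(stability_scores(responses))[::-1][:k]
```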
Maji, Somnath; Agarwal, Tarun; Das, Joyjyoti; Maiti, Tapas Kumar
2018-06-01
The present study delineates a relatively simpler approach for the fabrication of a macroporous three-dimensional scaffold for bone tissue engineering. The novelty of the work is to obtain a scaffold with macroporosity (interconnected networks) through a combined approach of high-stirring-induced foaming of the gelatin/carboxymethyl chitosan (CMC)/nano-hydroxyapatite (nHAp) matrix followed by freeze drying. The fabricated macroporous (SGC) scaffold had a greater pore size, higher porosity, higher water retention capacity, a slow and sustained enzymatic degradation rate, and higher compressive strength compared to the non-macroporous (NGC) scaffold prepared by conventional freeze-drying methodology. The biological studies revealed increased viability, proliferation, and differentiation, as well as higher mineralization, of differentiated human Wharton's jelly MSC microtissue (wjhMSC-MT) on the SGC scaffold compared to the NGC scaffold. RT-PCR also showed enhanced expression levels of collagen type I, osteocalcin and Runx2 when seeded on SGC. μCT and histological analysis further revealed penetration of cellular spheroids to a greater depth in the SGC scaffold than in the NGC scaffold. Furthermore, the effect of cryopreservation on microtissue survival on the three-dimensional construct revealed significantly higher viability upon revival in the macroporous SGC scaffolds. These results together suggest that high-stirring-based macroporous scaffolds could have potential application in bone tissue engineering. Copyright © 2018 Elsevier Ltd. All rights reserved.
Insights into dietary flavonoids as molecular templates for the design of anti-platelet drugs
Wright, Bernice; Spencer, Jeremy P.E.; Lovegrove, Julie A.; Gibbins, Jonathan M.
2013-01-01
Flavonoids are low-molecular weight, aromatic compounds derived from fruits, vegetables, and other plant components. The consumption of these phytochemicals has been reported to be associated with reduced cardiovascular disease (CVD) risk, attributed to their anti-inflammatory, anti-proliferative, and anti-thrombotic actions. Flavonoids exert these effects by a number of mechanisms which include attenuation of kinase activity mediated at the cell-receptor level and/or within cells, and are characterized as broad-spectrum kinase inhibitors. Therefore, flavonoid therapy for CVD is potentially complex; the use of these compounds as molecular templates for the design of selective and potent small-molecule inhibitors may be a simpler approach to treat this condition. Flavonoids as templates for drug design are, however, poorly exploited despite the development of analogues based on the flavonol, isoflavonone, and isoflavanone subgroups. Further exploitation of this family of compounds is warranted due to a structural diversity that presents great scope for creating novel kinase inhibitors. The use of computational methodologies to define the flavonoid pharmacophore together with biological investigations of their effects on kinase activity, in appropriate cellular systems, is the current approach to characterize key structural features that will inform drug design. This focussed review highlights the potential of flavonoids to guide the design of clinically safer, more selective, and potent small-molecule inhibitors of cell signalling, applicable to anti-platelet therapy. PMID:23024269
Regulation of Silk Material Structure by Temperature-Controlled Water Vapor Annealing
Hu, Xiao; Shmelev, Karen; Sun, Lin; Gil, Eun-Seok; Park, Sang-Hyug; Cebe, Peggy; Kaplan, David L.
2011-01-01
We present a simple and effective method to obtain refined control of the molecular structure of silk biomaterials through physical temperature-controlled water vapor annealing (TCWVA). The silk materials can be prepared with control of crystallinity, from a low content using conditions at 4°C (alpha-helix dominated silk I structure), to highest content of ~60% crystallinity at 100°C (beta-sheet dominated silk II structure). This new physical approach covers the range of structures previously reported to govern crystallization during the fabrication of silk materials, yet offers a simpler, green chemistry, approach with tight control of reproducibility. The transition kinetics, thermal, mechanical, and biodegradation properties of the silk films prepared at different temperatures were investigated and compared by Fourier transform infrared spectroscopy (FTIR), differential scanning calorimetry (DSC), uniaxial tensile studies, and enzymatic degradation studies. The results revealed that this new physical processing method accurately controls structure, in turn providing control of mechanical properties, thermal stability, enzyme degradation rate, and human mesenchymal stem cell interactions. The mechanistic basis for the control is through the temperature controlled regulation of water vapor, to control crystallization. Control of silk structure via TCWVA represents a significant improvement in the fabrication of silk-based biomaterials, where control of structure-property relationships is key to regulating material properties. This new approach to control crystallization also provides an entirely new green approach, avoiding common methods which use organic solvents (methanol, ethanol) or organic acids. The method described here for silk proteins would also be universal for many other structural proteins (and likely other biopolymers), where water controls chain interactions related to material properties. PMID:21425769
Arheiam, Arheiam; Brown, Stephen L; Higham, Susan M; Albadri, Sondos; Harris, Rebecca V
2016-12-01
Diet diaries are recommended for dentists to monitor children's sugar consumption. Diaries provide multifaceted dietary information, but patients respond better to simpler advice. We explore how dentists integrate information from diet diaries to deliver useable advice to patients. As part of a questionnaire study of general dental practitioners (GDPs) in Northwest England, we asked dentists to specify the advice they would give a hypothetical patient based upon a diet diary case vignette. A sequential mixed method approach was used for data analysis: an initial inductive content analysis (ICA) to develop a coding system to capture the complexity of dietary assessment and delivered advice. Using these codes, a quantitative analysis was conducted to examine correspondences between identified dietary problems and advice given. From these correspondences, we inferred how dentists reduced problems to give simple advice. A total of 229 dentists' responses were analysed. ICA on 40 questionnaires identified two distinct approaches to developing diet advice: a summative approach (summary of issues into an all-encompassing message) and a selective approach (selection of a main message). In the quantitative analysis of all responses, raw frequencies indicated that dentists saw more problems than they advised on and provided highly specific advice on a restricted number of problems (e.g. not eating sugars before bedtime, 50.7%, or avoiding harmful items, 42.4%, rather than simply reducing the amount of sugar, 9.2%). Binary logistic regression models indicate that dentists provided specific advice that was tailored to the key problems that they identified. Dentists provided specific recommendations to address what they felt were key problems, whilst not intervening to address other problems that they may have felt were less pressing. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
First, Michael B
2010-11-01
Work is currently under way on the Diagnostic and Statistical Manual of Mental Disorders (DSM), Fifth Edition, due to be published by the American Psychiatric Association in 2013. Dissatisfaction with the current categorical descriptive approach has led to aspirations for a paradigm shift for DSM-5. A historical review of past revisions of the DSM was performed. Efforts undertaken before the start of the DSM-5 development process to conduct a state-of-the science review and set a research agenda were examined to determine if results supported a paradigm shift for DSM-5. Proposals to supplement DSM-5 categorical diagnosis with dimensional assessments are reviewed and critiqued. DSM revisions have alternated between paradigm shifts (the first edition of the DSM in 1952 and DSM-III in 1980) and incremental improvements (DSM-II in 1968, DSM-III-R in 1987, and DSM-IV in 1994). The results of the review of the DSM-5 research planning initiatives suggest that despite the scientific advances that have occurred since the descriptive approach was first introduced in 1980, the field lacks a sufficiently deep understanding of mental disorders to justify abandoning the descriptive approach in favour of a more etiologically based alternative. Proposals to add severity and cross-cutting dimensions throughout DSM-5 are neither paradigm shifting, given that simpler versions of such dimensions are already a component of DSM-IV, nor likely to be used by busy clinicians without evidence that they improve clinical outcomes. Despite initial aspirations that DSM would undergo a paradigm shift with this revision, DSM-5 will continue to adopt a descriptive categorical approach, albeit with a greatly expanded dimensional component.
Edwards, D. L.; Saleh, A. A.; Greenspan, S. L.
2015-01-01
Summary We performed a systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for DXA-determined osteoporosis or low bone density. Commonly evaluated risk instruments showed high sensitivity approaching or exceeding 90 % at particular thresholds within various populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. Introduction The purpose of the study is to systematically review the performance of clinical risk assessment instruments for screening for dual-energy X-ray absorptiometry (DXA)-determined osteoporosis or low bone density. Methods Systematic review and meta-analysis were performed. Multiple literature sources were searched, and data extracted and analyzed from included references. Results One hundred eight references met inclusion criteria. Studies assessed many instruments in 34 countries, most commonly the Osteoporosis Self-Assessment Tool (OST), the Simple Calculated Osteoporosis Risk Estimation (SCORE) instrument, the Osteoporosis Self-Assessment Tool for Asians (OSTA), the Osteoporosis Risk Assessment Instrument (ORAI), and body weight criteria. Meta-analyses of studies evaluating OST using a cutoff threshold of <1 to identify US postmenopausal women with osteoporosis at the femoral neck provided summary sensitivity and specificity estimates of 89 % (95%CI 82–96 %) and 41 % (95%CI 23–59 %), respectively. Meta-analyses of studies evaluating OST using a cutoff threshold of 3 to identify US men with osteoporosis at the femoral neck, total hip, or lumbar spine provided summary sensitivity and specificity estimates of 88 % (95%CI 79–97 %) and 55 % (95%CI 42–68 %), respectively. Frequently evaluated instruments each had thresholds and populations for which sensitivity for osteoporosis or low bone mass detection approached or exceeded 90 % but always with a trade-off of relatively low specificity. Conclusions Commonly evaluated clinical risk assessment instruments each showed high sensitivity approaching or exceeding 90 % for identifying individuals with DXA-determined osteoporosis or low BMD at certain thresholds in different populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. PMID:25644147
Pharmacological Approaches for Treatment-resistant Bipolar Disorder
Poon, Shi Hui; Sim, Kang; Baldessarini, Ross J.
2015-01-01
Bipolar disorder is prevalent, with high risks of disability, substance abuse and premature mortality. Treatment responses typically are incomplete, especially for depressive components, so that many cases can be considered “treatment resistant.” We reviewed reports on experimental treatments for such patients: there is a striking paucity of such research, mainly involving small incompletely controlled trials of add-on treatment, and findings remain preliminary. Encouraging results have been reported by adding aripiprazole, bupropion, clozapine, ketamine, memantine, pramipexole, pregabalin, and perhaps tri-iodothyronine in resistant manic or depressive phases. The urgency of incomplete responses in such a severe illness underscores the need for more systematic, simpler, and better controlled studies in more homogeneous samples of patients. PMID:26467409
A refined analysis of composite laminates. [theory of statics and dynamics]
NASA Technical Reports Server (NTRS)
Srinivas, S.
1973-01-01
The purpose of this paper is to develop a sufficiently accurate analysis, which is much simpler than exact three-dimensional analysis, for statics and dynamics of composite laminates. The governing differential equations and boundary conditions are derived by following a variational approach. The displacements are assumed piecewise linear across the thickness and the effects of transverse shear deformations and rotary inertia are included. A procedure for obtaining the general solution of the above governing differential equations in the form of hyperbolic-trigonometric series is given. The accuracy of the present theory is assessed by obtaining results for free vibrations and flexure of simply supported rectangular laminates and comparing them with results from exact three-dimensional analysis.
Long-Range Pre-Thermal Time Crystals
NASA Astrophysics Data System (ADS)
Machado, Francisco; Else, Dominic V.; Nayak, Chetan; Yao, Norman
Driven quantum systems have recently enabled the realization of a discrete time crystal - an intrinsically out-of-equilibrium phase of matter that spontaneously breaks time translation symmetry. One strategy to prevent the drive-induced, runaway heating of the time crystal phase is the presence of strong disorder leading to many-body localization. A simpler disorder-less approach is to work in the pre-thermal regime where time crystalline order can persist to long times, before ultimately being destroyed by thermalization. In this talk, we will consider the interplay between long-range interactions, dimensionality, and pre-thermal time-translation symmetry breaking. As an example, we will consider the phase diagram of a 1D long-range pre-thermal time crystal.
Active Control of Solar Array Dynamics During Spacecraft Maneuvers
NASA Technical Reports Server (NTRS)
Ross, Brant A.; Woo, Nelson; Kraft, Thomas G.; Blandino, Joseph R.
2016-01-01
Recent NASA mission plans require spacecraft to undergo potentially significant maneuvers (or dynamic loading events) with large solar arrays deployed. Therefore there is an increased need to understand and possibly control the nonlinear dynamics in the spacecraft system during such maneuvers. The development of a nonlinear controller is described. The utility of using a nonlinear controller to reduce forces and motion in a solar array wing during a loading event is demonstrated. The result is dramatic reductions in system forces and motion during a 10 second loading event. A motion curve derived from the simulation with the closed loop controller is used to obtain similar benefits with a simpler motion control approach.
Fock expansion of multimode pure Gaussian states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cariolaro, Gianfranco; Pierobon, Gianfranco, E-mail: gianfranco.pierobon@unipd.it
2015-12-15
The Fock expansion of multimode pure Gaussian states is derived starting from their representation as displaced and squeezed multimode vacuum states. The approach is new and appears to be simpler and more general than previous ones starting from the phase-space representation given by the characteristic or Wigner function. Fock expansion is performed in terms of easily evaluable two-variable Hermite–Kampé de Fériet polynomials. A relatively simple and compact expression for the joint statistical distribution of the photon numbers in the different modes is obtained. In particular, this result enables one to give a simple characterization of separable and entangled states, as shown for two-mode and three-mode Gaussian states.
Self-driven cooling loop for a large superconducting magnet in space
NASA Technical Reports Server (NTRS)
Mord, A. J.; Snyder, H. A.
1992-01-01
Pressurized cooling loops in which superfluid helium circulation is driven by the heat being removed have been previously demonstrated in laboratory tests. A simpler and lighter version which eliminates a heat exchanger by mixing the returning fluid directly with the superfluid helium bath was analyzed. A carefully designed flow restriction must be used to prevent boiling in this low-pressure system. A candidate design for Astromag is shown that can keep the magnet below 2.0 K during magnet charging. This gives a greater margin against accidental quench than approaches that allow the coolant to warm above the lambda point. A detailed analysis of one candidate design is presented.
Kokaram, Anil C
2004-03-01
Image sequence restoration has been steadily gaining in importance with the increasing prevalence of visual digital media. The demand for content increases the pressure on archives to automate their restoration activities for preservation of the cultural heritage that they hold. There are many defects that affect archived visual material and one central issue is that of Dirt and Sparkle, or "Blotches." Research in archive restoration has been conducted for more than a decade and this paper places that material in context to highlight the advances made during that time. The paper also presents a new and simpler Bayesian framework that achieves joint processing of noise, missing data, and occlusion.
Mirus, B.B.; Ebel, B.A.; Heppner, C.S.; Loague, K.
2011-01-01
Concept development simulation with distributed, physics-based models provides a quantitative approach for investigating runoff generation processes across environmental conditions. Disparities within data sets employed to design and parameterize boundary value problems used in heuristic simulation inevitably introduce various levels of bias. The objective was to evaluate the impact of boundary value problem complexity on process representation for different runoff generation mechanisms. The comprehensive physics-based hydrologic response model InHM has been employed to generate base case simulations for four well-characterized catchments. The C3 and CB catchments are located within steep, forested environments dominated by subsurface stormflow; the TW and R5 catchments are located in gently sloping rangeland environments dominated by Dunne and Horton overland flows. Observational details are well captured within all four of the base case simulations, but the characterization of soil depth, permeability, rainfall intensity, and evapotranspiration differs for each. These differences are investigated through the conversion of each base case into a reduced case scenario, all sharing the same level of complexity. Evaluation of how individual boundary value problem characteristics impact simulated runoff generation processes is facilitated by quantitative analysis of integrated and distributed responses at high spatial and temporal resolution. Generally, the base case reduction causes moderate changes in discharge and runoff patterns, with the dominant process remaining unchanged. Moderate differences between the base and reduced cases highlight the importance of detailed field observations for parameterizing and evaluating physics-based models. Overall, similarities between the base and reduced cases indicate that the simpler boundary value problems may be useful for concept development simulation to investigate fundamental controls on the spectrum of runoff generation mechanisms. Copyright 2011 by the American Geophysical Union.
Charging and coagulation of radioactive and nonradioactive particles in the atmosphere
Kim, Yong-ha; Yiacoumi, Sotira; Nenes, Athanasios; ...
2016-01-01
Charging and coagulation influence one another and impact the particle charge and size distributions in the atmosphere. However, few investigations to date have focused on the coagulation kinetics of atmospheric particles accumulating charge. This study presents three approaches to include mutual effects of charging and coagulation on the microphysical evolution of atmospheric particles such as radioactive particles. The first approach employs ion balance, charge balance, and a bivariate population balance model (PBM) to comprehensively calculate both charge accumulation and coagulation rates of particles. The second approach involves a much simpler description of charging, and uses a monovariate PBM and subsequent effects of charge on particle coagulation. The third approach is further simplified assuming that particles instantaneously reach their steady-state charge distributions. It is found that compared to the other two approaches, the first approach can accurately predict time-dependent changes in the size and charge distributions of particles over a wide size range covering from the free molecule to continuum regimes. The other two approaches can reliably predict both charge accumulation and coagulation rates for particles larger than about 0.04 micrometers and atmospherically relevant conditions. These approaches are applied to investigate coagulation kinetics of particles accumulating charge in a radioactive neutralizer, the urban atmosphere, and an atmospheric system containing radioactive particles. Limitations of the approaches are discussed.
Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
2016-12-08
In neutron multiplicity counting one may fit a curve by minimizing an objective function, $\chi^2_n$. The objective function includes the inverse of an n by n matrix of covariances, W. The inverse of the W matrix has a closed-form solution. In addition, $W^{-1}$ is a tri-diagonal matrix. The closed form and tridiagonal nature allow for a simpler expression of the objective function $\chi^2_n$. Minimization of this simpler expression will provide the optimal parameters for the fitted curve.
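A hedged sketch of how a tridiagonal $W^{-1}$ simplifies the fit: the quadratic form $\chi^2_n = r^T W^{-1} r$ reduces to two O(n) sums over the residuals. The fitted curve, the diagonals of $W^{-1}$, and all numerical values below are placeholders, not the closed form derived in the report.

```python
import numpy as np
from scipy.optimize import minimize

def chi2_tridiag(r, d, e):
    """chi^2_n = r^T W^{-1} r for a tridiagonal W^{-1} with main diagonal d and
    first off-diagonal e; costs O(n) instead of forming the full n x n matrix."""
    return np.sum(d * r**2) + 2.0 * np.sum(e * r[:-1] * r[1:])

def model(x, theta):
    # Placeholder fitted curve (exponential approach to an asymptote); the actual
    # multiplicity curve from the report is not reproduced here.
    a, b = theta
    return a * (1.0 - np.exp(-b * x))

def objective(theta, x, y, d, e):
    return chi2_tridiag(y - model(x, theta), d, e)

# Illustrative data and an assumed (diagonally dominant) tridiagonal W^{-1}.
rng = np.random.default_rng(1)
x = np.linspace(0.1, 5.0, 40)
y = 2.0 * (1.0 - np.exp(-1.3 * x)) + 0.05 * rng.normal(size=x.size)
d = np.full(x.size, 400.0)        # main diagonal of W^{-1} (placeholder)
e = np.full(x.size - 1, -150.0)   # off-diagonal of W^{-1} (placeholder)
fit = minimize(objective, x0=[1.0, 1.0], args=(x, y, d, e))
print(fit.x)  # optimal curve parameters
```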
Bosse, Stefan
2015-01-01
Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550
The effects of hillslope-scale variability in burn severity on post-fire sediment delivery
NASA Astrophysics Data System (ADS)
Quinn, Dylan; Brooks, Erin; Dobre, Mariana; Lew, Roger; Robichaud, Peter; Elliot, William
2017-04-01
With the increasing frequency of wildfire and the costs associated with managing the burned landscapes, there is an increasing need for decision support tools that can be used to assess the effectiveness of targeted post-fire management strategies. The susceptibility of landscapes to post-fire soil erosion and runoff has been closely linked with the severity of the wildfire. Wildfire severity maps are often spatially complex and largely dependent upon total vegetative biomass, fuel moisture patterns, direction of burn, wind patterns, and other factors. The decision to apply targeted treatment to a specific landscape and the amount of resources dedicated to treating a landscape should ideally be based on the potential for excessive sediment delivery from a particular hillslope. Recent work has suggested that the delivery of sediment to a downstream water body from a hillslope will be highly influenced by the distribution of wildfire severity across a hillslope and that models that do not capture this hillslope-scale variability would not provide reliable sediment and runoff predictions. In this project we compare detailed (10 m) grid-based model predictions to lumped and semi-lumped hillslope approaches where hydrologic parameters are fixed based on hillslope-scale averaging techniques. We use the watershed-scale version of the process-based Water Erosion Prediction Project (WEPP) model and its GIS interface, GeoWEPP, to simulate the fire impacts on runoff and sediment delivery using burn severity maps at a watershed scale. The flowpath option in WEPP allows for the most detailed representation of wildfire severity patterns (10 m), but depending upon the size of the watershed, simulations are time consuming and computationally demanding. The hillslope version is a simpler approach which assigns wildfire severity based on the severity level that is assigned to the majority of the hillslope area. In the third approach we divided hillslopes into overland flow elements (OFEs) and assigned representative input values on a finer scale within single hillslopes. Each of these approaches was compared for several large wildfires in the mountainous ranges of central Idaho, USA. Simulations indicated that predictions based on lumped hillslope modeling over-predict sediment transport by as much as 4.8x in areas of high to moderate burn severity. Annual sediment yield within the simulated watersheds ranged from 1.7 tonnes/ha to 6.8 tonnes/ha. The disparity between the sediment yields simulated with these approaches was attributed to the hydrologic connectivity of the burn patterns within the hillslope. High infiltration rates between high-severity sites can greatly reduce the delivery of sediment. This research underlines the importance of accurately representing soil burn severity along individual hillslopes in hydrologic models and the need for modeling approaches to capture this variability to reliably simulate soil erosion.
Estimating the system price of redox flow batteries for grid storage
NASA Astrophysics Data System (ADS)
Ha, Seungbum; Gallagher, Kevin G.
2015-11-01
Low-cost energy storage systems are required to support extensive deployment of intermittent renewable energy on the electricity grid. Redox flow batteries have potential advantages to meet the stringent cost target for grid applications as compared to more traditional batteries based on an enclosed architecture. However, the manufacturing process and therefore potential high-volume production price of redox flow batteries is largely unquantified. We present a comprehensive assessment of a prospective production process for aqueous all vanadium flow battery and nonaqueous lithium polysulfide flow battery. The estimated investment and variable costs are translated to fixed expenses, profit, and warranty as a function of production volume. When compared to lithium-ion batteries, redox flow batteries are estimated to exhibit lower costs of manufacture, here calculated as the unit price less materials costs, owing to their simpler reactor (cell) design, lower required area, and thus simpler manufacturing process. Redox flow batteries are also projected to achieve the majority of manufacturing scale benefits at lower production volumes as compared to lithium-ion. However, this advantage is offset due to the dramatically lower present production volume of flow batteries compared to competitive technologies such as lithium-ion.
Tufto, Jarle
2010-01-01
Domesticated species frequently spread their genes into populations of wild relatives through interbreeding. The domestication process often involves artificial selection for economically desirable traits. This can lead to an indirect response in unknown correlated traits and a reduction in fitness of domesticated individuals in the wild. Previous models for the effect of gene flow from domesticated species to wild relatives have assumed that evolution occurs in one dimension. Here, I develop a quantitative genetic model for the balance between migration and multivariate stabilizing selection. Different forms of correlational selection consistent with a given observed ratio between average fitness of domesticated and wild individuals offset the phenotypic means at migration-selection balance away from predictions based on simpler one-dimensional models. For almost all parameter values, correlational selection leads to a reduction in the migration load. For ridge selection, this reduction arises because the distance the immigrants deviate from the local optimum is, in effect, reduced. For realistic parameter values, however, the effect of correlational selection on the load is small, suggesting that simpler one-dimensional models may still be adequate in terms of predicting mean population fitness and viability.
Formal Compiler Implementation in a Logical Framework
2003-04-29
variable set [], we omit the brackets and use the simpler notation v. MetaPRL is a tactic-based prover that uses OCaml [20] as its meta-language. When a...rewrite is defined in MetaPRL, the framework creates an OCaml expression that can be used to apply the rewrite. Code to guide the application of...rewrites is written in OCaml, using a rich set of primitives provided by MetaPRL. MetaPRL automates the construction of most guidance code; we describe
Two improved coherent optical feedback systems for optical information processing
NASA Technical Reports Server (NTRS)
Lee, S. H.; Bartholomew, B.; Cederquist, J.
1976-01-01
Coherent optical feedback systems are Fabry-Perot interferometers modified to perform optical information processing. Two new systems based on plane parallel and confocal Fabry-Perot interferometers are introduced. The plane parallel system can be used for contrast control, intensity level selection, and image thresholding. The confocal system can be used for image restoration and solving partial differential equations. These devices are simpler and less expensive than previous systems. Experimental results are presented to demonstrate their potential for optical information processing.
An ODE-Based Wall Model for Turbulent Flow Simulations
NASA Technical Reports Server (NTRS)
Berger, Marsha J.; Aftosmis, Michael J.
2017-01-01
Fully automated meshing for Reynolds-Averaged Navier-Stokes simulations: mesh generation for complex geometry continues to be the biggest bottleneck in the RANS simulation process; fully automated Cartesian methods are routinely used for inviscid simulations about arbitrarily complex geometry; these methods lack an obvious and robust way to achieve near-wall anisotropy. Goal: extend these methods for RANS simulation without sacrificing automation, at an affordable cost. Note: nothing here is limited to Cartesian methods, and much becomes simpler in a body-fitted setting.
Analysis of accelerated motion in the theory of relativity
NASA Technical Reports Server (NTRS)
Jones, R. T.
1976-01-01
Conventional treatments of accelerated motion in the theory of relativity have led to certain difficulties of interpretation. Certain reversals in the apparent gravitational field of an accelerated body may be avoided by simpler analysis based on the use of restricted conformal transformations. In the conformal theory the velocity of light remains constant even for experimenters in accelerated motion. The problem considered is that of rectilinear motion with a variable velocity. The motion takes place along the x or x' axis of two coordinate systems.
NASA Technical Reports Server (NTRS)
1985-01-01
Research at Langley on skin friction drag was described in Tech Briefs. 3M engineers suggested to Langley that grooves molded into a lightweight plastic film with adhesive backing and pressed on an airplane would be simpler than cutting grooves directly onto the surface. Boeing became involved and tested the "riblet" on an Olympic rowing shell; the US won a silver medal. Based on the riblet-like projections on shark skin, the technology may provide a 5 percent fuel saving for airplanes. Product is no longer commercially available.
A Volunteer Computing Project for Solving Geoacoustic Inversion Problems
NASA Astrophysics Data System (ADS)
Zaikin, Oleg; Petrov, Pavel; Posypkin, Mikhail; Bulavintsev, Vadim; Kurochkin, Ilya
2017-12-01
A volunteer computing project aimed at solving computationally hard inverse problems in underwater acoustics is described. This project was used to study the possibility of sound speed profile reconstruction in a shallow-water waveguide using a dispersion-based geoacoustic inversion scheme. The computational capabilities provided by the project allowed us to investigate the accuracy of the inversion for different mesh sizes of the sound speed profile discretization grid. This problem is well suited to volunteer computing because it can be easily decomposed into independent simpler subproblems.
An Estimation of the Logarithmic Timescale in Ergodic Dynamics
NASA Astrophysics Data System (ADS)
Gomez, Ignacio S.
An estimation of the logarithmic timescale in quantum systems having an ergodic dynamics in the semiclassical limit is presented. The estimation is based on an extension of Krieger's finite generator theorem for discretized σ-algebras and on the time rescaling property of the Kolmogorov-Sinai entropy. The results are in agreement with those obtained in the literature but with a simpler mathematics and within the context of ergodic theory. Moreover, some consequences of the Poincaré recurrence theorem are also explored.
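As a rough orientation only, logarithmic-timescale estimates of this kind are usually quoted in the generic form below; the precise constants and the paper's definition of the quasiclassical parameter are not reproduced here, so this is an assumption-laden sketch rather than the paper's result.

```latex
% Hedged sketch: generic form of a logarithmic-timescale estimate, where h_KS is
% the Kolmogorov-Sinai entropy of the underlying classical dynamics and q is an
% effective quasiclassical parameter (e.g., a characteristic action in units of
% hbar). Constants and the paper's precise definitions are not reproduced.
\tau_{\mathrm{log}} \sim \frac{1}{h_{\mathrm{KS}}}\,\ln q
```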
A VME-based software trigger system using UNIX processors
NASA Astrophysics Data System (ADS)
Atmur, Robert; Connor, David F.; Molzon, William
1997-02-01
We have constructed a distributed computing platform with eight processors to assemble and filter data from digitization crates. The filtered data were transported to a tape-writing UNIX computer via ethernet. Each processor ran a UNIX operating system and was installed in its own VME crate. Each VME crate contained dual-port memories which interfaced with the digitizers. Using standard hardware and software (VME and UNIX) allows us to select from a wide variety of non-proprietary products and makes upgrades simpler, if they are necessary.
The algorithms for rational spline interpolation of surfaces
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1986-01-01
Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.
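The tensor-product structure shared by both algorithms can be sketched as follows. The univariate rational spline with tension is not reproduced here; a standard cubic spline stands in for it, so this is an illustration of the surface-interpolation structure only, not of the tension-parameter control itself.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def tensor_product_interp(x, y, z, xq, yq):
    """Tensor-product surface interpolation: spline each row of z along y,
    evaluate at yq, then spline the resulting column along x and evaluate at xq.
    CubicSpline stands in for the univariate rational spline with tension."""
    col = np.array([float(CubicSpline(y, z[i, :])(yq)) for i in range(len(x))])
    return float(CubicSpline(x, col)(xq))

# Terrain-like example on a 20 x 20 grid.
x = np.linspace(0.0, 1.0, 20)
y = np.linspace(0.0, 1.0, 20)
z = np.sin(3.0 * x)[:, None] * np.cos(2.0 * y)[None, :]
print(tensor_product_interp(x, y, z, xq=0.37, yq=0.62))
```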
Simulation of a navigator algorithm for a low-cost GPS receiver
NASA Technical Reports Server (NTRS)
Hodge, W. F.
1980-01-01
The analytical structure of an existing navigator algorithm for a low cost global positioning system receiver is described in detail to facilitate its implementation on in-house digital computers and real-time simulators. The material presented includes a simulation of GPS pseudorange measurements, based on a two-body representation of the NAVSTAR spacecraft orbits, and a four component model of the receiver bias errors. A simpler test for loss of pseudorange measurements due to spacecraft shielding is also noted.
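A minimal sketch of the kind of pseudorange simulation described: geometric range plus a clock-bias term plus noise, with a crude above-the-horizon test standing in for the simpler shielding check. Satellite positions would come from the two-body NAVSTAR propagation in the full simulation; the positions, noise level, and bias values used here are placeholders, not the report's four-component bias model.

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def pseudorange(sat_pos_m, rx_pos_m, clock_bias_s, sigma_m=3.0, rng=None):
    """Simulated pseudorange: geometric range + c * receiver clock bias + noise.
    In the full simulation the satellite position would come from the two-body
    NAVSTAR propagation; here it is simply passed in."""
    if rng is None:
        rng = np.random.default_rng()
    rho = np.linalg.norm(np.asarray(sat_pos_m, float) - np.asarray(rx_pos_m, float))
    return rho + C * clock_bias_s + rng.normal(0.0, sigma_m)

def visible(sat_pos_m, rx_pos_m):
    """Crude shielding test: usable only if the satellite lies above the local
    horizon at the receiver (spherical-Earth approximation)."""
    rx = np.asarray(rx_pos_m, float)
    los = np.asarray(sat_pos_m, float) - rx
    return float(np.dot(los, rx / np.linalg.norm(rx))) > 0.0

rx = np.array([6378137.0, 0.0, 0.0])          # receiver on the equator (ECEF, m)
sat = np.array([15600e3, 10760e3, 18610e3])   # illustrative GPS-like position (m)
if visible(sat, rx):
    print(pseudorange(sat, rx, clock_bias_s=1e-4, rng=np.random.default_rng(7)))
```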
Concept of multiple-cell cavity for axion dark matter search
NASA Astrophysics Data System (ADS)
Jeong, Junu; Youn, SungWoo; Ahn, Saebyeok; Kim, Jihn E.; Semertzidis, Yannis K.
2018-02-01
In cavity-based axion dark matter search experiments exploring high mass regions, multiple-cavity design is under consideration as a method to increase the detection volume within a given magnet bore. We introduce a new idea, referred to as a multiple-cell cavity, which provides various benefits including a larger detection volume, simpler experimental setup, and easier phase-matching mechanism. We present the characteristics of this concept and demonstrate the experimental feasibility with an example of a double-cell cavity.
Planar dielectric waveguides in rotation are optical fibers: comparison with the classical model.
Peña García, Antonio; Pérez-Ocón, Francisco; Jiménez, José Ramón
2008-01-21
A novel and simpler method to calculate the main parameters in fiber optics is presented. This method is based on a planar dielectric waveguide in rotation and, as an example, it is applied to calculate the turning points and the inner caustic in an optical fiber with a parabolic refractive index. It is shown that the solution found using this method agrees with the standard (and more complex) method, whose solutions for these points are also summarized in this paper.
NASA Technical Reports Server (NTRS)
Sarracino, Marcello
1941-01-01
The present article deals with what is considered to be a simpler and more accurate method of determining, from the results of bench tests under approved rating conditions, the power at altitude of a supercharged aircraft engine, without application of correction formulas. The method of calculating the characteristics at altitude of supercharged engines, based on the consumption of air, is a more satisfactory and accurate procedure, especially at low boost pressures.
Highly Physical Solar Radiation Pressure Modeling During Penumbra Transitions
NASA Astrophysics Data System (ADS)
Robertson, Robert V.
Solar radiation pressure (SRP) is one of the major non-gravitational forces acting on spacecraft. Acceleration by radiation pressure depends on the radiation flux; on spacecraft shape, attitude, and mass; and on the optical properties of the spacecraft surfaces. Precise modeling of SRP is needed for dynamic satellite orbit determination, space mission design and control, and processing of data from space-based science instruments. During Earth penumbra transitions, sunlight passes through Earth's lower atmosphere and, in the process, its path, intensity, spectral composition, and shape are significantly affected. This dissertation presents a new method for highly physical SRP modeling in Earth's penumbra called Solar radiation pressure with Oblateness and Lower Atmospheric Absorption, Refraction, and Scattering (SOLAARS). The fundamental geometry and approach mirror past work, where the solar radiation field is modeled using a number of light rays, rather than treating the Sun as a single point source. This dissertation aims to clarify this approach, simplify its implementation, and model previously overlooked factors. The complex geometries involved in modeling penumbra solar radiation fields are described in a more intuitive and complete way to simplify implementation. Atmospheric effects due to solar radiation passing through the troposphere and stratosphere are modeled, and the results are tabulated to significantly reduce computational cost. SOLAARS includes new, more efficient and accurate approaches to modeling atmospheric effects which allow us to consider the spatial and temporal variability in lower atmospheric conditions. A new approach to modeling the influence of Earth's polar flattening draws on past work to provide a relatively simple but accurate method for this important effect. Previous penumbra SRP models tend to lie at two extremes of complexity and computational cost, and so the significant improvement in accuracy provided by the complex models has often been lost in the interest of convenience and efficiency. This dissertation presents a simple model which provides an accurate alternative to the full, high precision SOLAARS model with reduced complexity and computational cost. This simpler method is based on curve fitting to results of the full SOLAARS model and is called SOLAARS Curve Fit (SOLAARS-CF). Both the high precision SOLAARS model and the simpler SOLAARS-CF model are applied to the Gravity Recovery and Climate Experiment (GRACE) satellites. Modeling results are compared to the sub-nm/s² precision GRACE accelerometer data and the results of a traditional penumbra SRP model. These comparisons illustrate the improved accuracy of the SOLAARS and SOLAARS-CF models. A sensitivity analysis for the GRACE orbit illustrates the influence of various input parameters and features of the SOLAARS model on the results. The SOLAARS-CF model is applied to a study of penumbra SRP and the Earth flyby anomaly. Beyond the value of its results to the scientific community, this study provides an application example where the computational efficiency of the simplified SOLAARS-CF model is necessary. The Earth flyby anomaly is an open question in orbit determination which has gone unsolved for over 20 years. This study quantifies the influence of penumbra SRP modeling errors on the observed anomalies from the Galileo, Cassini, and Rosetta Earth flybys. The results of this study prove that penumbra SRP is not an explanation for or a significant contributor to the Earth flyby anomaly.
Comparing capacity value estimation techniques for photovoltaic solar power
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-09-28
In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intense reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
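A hedged sketch of a capacity-factor-style approximation: average the plant's capacity factor over the highest-load hours, weighting each hour by its load. The weighting scheme, the number of top hours, and the toy data below are assumptions for illustration, not the exact method evaluated in the paper.

```python
import numpy as np

def weighted_capacity_factor(pv_output_mw, load_mw, capacity_mw, top_hours=100):
    """Approximate capacity value as a load-weighted average capacity factor over
    the highest-load hours; the weighting (proportional to each hour's load within
    the top set) is illustrative, not necessarily the paper's exact scheme."""
    pv = np.asarray(pv_output_mw, float)
    load = np.asarray(load_mw, float)
    idx = np.argsort(load)[-top_hours:]       # highest-load hours of the year
    w = load[idx] / load[idx].sum()           # assumed per-hour weights
    return float(np.sum(w * pv[idx] / capacity_mw))

# Toy year of hourly data for a 100 MW plant: load and PV output both peak near noon.
rng = np.random.default_rng(2)
hours = np.arange(8760)
load = 5000 + 1500 * np.sin(2 * np.pi * (hours % 24) / 24 - np.pi / 2) + rng.normal(0, 100, 8760)
pv = np.clip(100 * np.sin(np.pi * (hours % 24) / 24), 0, None)
print(weighted_capacity_factor(pv, load, capacity_mw=100))
```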
DOE Office of Scientific and Technical Information (OSTI.GOV)
Traino, A. C.; Xhafa, B.; Sezione di Fisica Medica, U.O. Fisica Sanitaria, Azienda Ospedaliero-Universitaria Pisana, via Roma n. 67, Pisa 56125
2009-04-15
One of the major challenges to the more widespread use of individualized, dosimetry-based radioiodine treatment of Graves' disease is the development of a reasonably fast, simple, and cost-effective method to measure thyroidal ¹³¹I kinetics in patients. Even though the fixed activity administration method does not optimize the therapy, often giving too high or too low a dose to the gland, it provides effective treatment for almost 80% of patients without consuming excessive time and resources. In this article two simple methods for the evaluation of the kinetics of ¹³¹I in the thyroid gland are presented and discussed. The first is based on two measurements 4 and 24 h after a diagnostic ¹³¹I administration and the second on one measurement 4 h after such an administration and a linear correlation between this measurement and the maximum uptake in the thyroid. The thyroid absorbed dose calculated by each of the two methods is compared to that calculated by a more complete ¹³¹I kinetics evaluation, based on seven thyroid uptake measurements for 35 patients at various times after the therapy administration. There are differences in the thyroid absorbed doses between those derived by each of the two simpler methods and the "reference" value (derived by more complete uptake measurements following the therapeutic ¹³¹I administration), with 20% median and 40% 90th-percentile differences for the first method (i.e., based on two thyroid uptake measurements at 4 and 24 h after ¹³¹I administration) and 25% median and 45% 90th-percentile differences for the second method (i.e., based on one measurement at 4 h post-administration). Predictably, although relatively fast and convenient, neither of these simpler methods appears to be as accurate as thyroid dose estimates based on more complete kinetic data.
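The flavor of such simplified kinetics can be sketched as follows: an effective half-life from two post-peak uptake measurements, a cumulated activity from the peak uptake and effective half-life, and a MIRD-style conversion to absorbed dose. The dose factor and all numbers below are illustrative assumptions, not values from the article, and which measurement times actually lie on the clearance phase depends on the patient.

```python
import numpy as np

def effective_half_life_h(u_t1, u_t2, t1_h, t2_h):
    """Effective half-life (h) from two fractional uptakes measured on the
    clearance (post-peak) phase, assuming mono-exponential washout."""
    lam = np.log(u_t1 / u_t2) / (t2_h - t1_h)
    return np.log(2.0) / lam

def thyroid_dose_gy(a0_mbq, u_max, t_eff_h, mass_g, dose_factor_gy_g_per_mbq_h):
    """MIRD-style estimate: cumulated activity = A0 * u_max * t_eff / ln 2,
    converted to absorbed dose with a per-mass dose factor."""
    cumulated_mbq_h = a0_mbq * u_max * t_eff_h / np.log(2.0)
    return cumulated_mbq_h * dose_factor_gy_g_per_mbq_h / mass_g

# Illustrative numbers only: 60% peak uptake, clearance sampled at 24 h and 96 h,
# 400 MBq administered, 30 g gland. The dose factor (~0.11 Gy*g per MBq*h, roughly
# the mean 131I beta energy deposited per decay) is an assumed order-of-magnitude
# value, NOT a constant taken from the article.
t_eff = effective_half_life_h(u_t1=0.60, u_t2=0.35, t1_h=24.0, t2_h=96.0)
print(t_eff, thyroid_dose_gy(400.0, 0.60, t_eff, 30.0, 0.11))
```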
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Kunkun, E-mail: ktg@illinois.edu; Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence; Congedo, Pietro M.
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation remains compact, containing only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
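The stepwise-regression ingredient can be illustrated with a generic forward-selection sketch over a polynomial candidate basis. The actual PDD construction (ANOVA component functions, truncated dimensionality, active dimensions, orthonormalization with respect to the input measure) is not reproduced, and the candidate set below is deliberately limited to univariate Legendre terms.

```python
import numpy as np
from numpy.polynomial import legendre

def candidate_basis(X, max_degree):
    """Constant plus univariate Legendre terms per input dimension; a deliberately
    reduced stand-in for the full PDD candidate set (interaction terms omitted)."""
    n, d = X.shape
    cols, labels = [np.ones(n)], [("const", 0)]
    for j in range(d):
        for deg in range(1, max_degree + 1):
            c = np.zeros(deg + 1)
            c[deg] = 1.0
            cols.append(legendre.legval(X[:, j], c))
            labels.append((j, deg))
    return np.column_stack(cols), labels

def forward_stepwise(Phi, y, max_terms):
    """Greedy forward selection: repeatedly add the basis column that most
    reduces the least-squares residual."""
    selected = [0]                                  # always keep the constant term
    for _ in range(max_terms):
        best = None
        for k in range(Phi.shape[1]):
            if k in selected:
                continue
            cols = selected + [k]
            coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
            resid = np.sum((y - Phi[:, cols] @ coef) ** 2)
            if best is None or resid < best[1]:
                best = (k, resid)
        selected.append(best[0])
    coef, *_ = np.linalg.lstsq(Phi[:, selected], y, rcond=None)
    return selected, coef

# Toy deterministic model sampled on [-1, 1]^3 with noise.
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 3))
y = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)
Phi, labels = candidate_basis(X, max_degree=3)
selected, coef = forward_stepwise(Phi, y, max_terms=4)
print([labels[k] for k in selected])  # retained (dimension, degree) terms
```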
Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe
2013-01-01
In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it has a better performance in driving the oxidation ditch than the original one, with higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. The improved momentum source term approach to simulate the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four turbulence models were investigated with the approach, including the standard k−ɛ model, RNG k−ɛ model, realizable k−ɛ model, and Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame approach (MRF) and sliding mesh approach (SM). Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than that of MRF and close to that of SM. It is also found that the momentum source term approach has lower computational expenses, is simpler to preprocess, and is easier to use. PMID:24302850
NASA Astrophysics Data System (ADS)
Mettot, Clément; Sipp, Denis; Bézard, Hervé
2014-04-01
This article presents a quasi-laminar stability approach to identify the dominant low frequencies in high-Reynolds-number flows and to design passive control means to shift these frequencies. The approach is based on a global linear stability analysis of mean-flows, which correspond to the time-average of the unsteady flows. Contrary to the previous work by Meliga et al. ["Sensitivity of 2-D turbulent flow past a D-shaped cylinder using global stability," Phys. Fluids 24, 061701 (2012)], we use the linearized Navier-Stokes equations based solely on the molecular viscosity (leaving aside any turbulence model and any eddy viscosity) to extract the least stable direct and adjoint global modes of the flow. Then, we compute the frequency sensitivity maps of these modes, so as to predict beforehand where a small control cylinder optimally shifts the frequency of the flow. In the case of the D-shaped cylinder studied by Parezanović and Cadot [J. Fluid Mech. 693, 115 (2012)], we show that the present approach captures the frequency of the flow well and accurately recovers the frequency control maps obtained experimentally. The results are close to those already obtained by Meliga et al., who used a more complex approach in which turbulence models played a central role. The present approach is simpler and may be applied to a broader range of flows since it is tractable as soon as mean-flows — which can be obtained either numerically from simulations (Direct Numerical Simulation (DNS), Large Eddy Simulation (LES), unsteady Reynolds-Averaged Navier-Stokes (RANS), steady RANS) or from experimental measurements (Particle Image Velocimetry - PIV) — are available. We also discuss how the influence of the control cylinder on the mean-flow may be more accurately predicted by determining an eddy viscosity from numerical simulations or experimental measurements. From a technical point of view, we finally show how an existing compressible numerical simulation code may be used in a black-box manner to extract the global modes and sensitivity maps.
MOSES: A Matlab-based open-source stochastic epidemic simulator.
Varol, Huseyin Atakan
2016-08-01
This paper presents an open-source stochastic epidemic simulator. A Discrete Time Markov Chain based simulator is implemented in Matlab. The simulator, capable of simulating the SEQIJR (susceptible, exposed, quarantined, infected, isolated and recovered) model, can be reduced to simpler models by setting some of the parameters (transition probabilities) to zero. Similarly, it can be extended to more complicated models by editing the source code. It is designed to be used for testing different control algorithms to contain epidemics. The simulator is also designed to be compatible with a network-based epidemic simulator and can be used in the network-based scheme for the simulation of a node. Simulations show the capability of reproducing different epidemic model behaviors successfully in a computationally efficient manner.
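A minimal discrete-time Markov chain sketch in the SEQIJR spirit (written in Python rather than Matlab): each individual carries a state in {S, E, Q, I, J, R} and moves between states with per-step probabilities, and setting the quarantine and isolation probabilities to zero reduces the model to plain SEIR, mirroring the reduction described. The transition structure and parameter names are assumptions, not the MOSES source.

```python
import numpy as np

S, E, Q, I, J, R = range(6)

def simulate_seqijr(n, steps, p, rng=None):
    """Discrete-time Markov chain over states S,E,Q,I,J,R for n individuals.
    The infection probability scales with the current infectious fraction.
    Setting p['s_to_q'] = p['i_to_j'] = 0 reduces the model to plain SEIR."""
    if rng is None:
        rng = np.random.default_rng()
    state = np.zeros(n, dtype=int)
    state[rng.choice(n, size=max(1, n // 100), replace=False)] = I  # seed infections
    history = []
    for _ in range(steps):
        infectious = np.mean((state == I) | (state == J))
        u1, u2 = rng.random(n), rng.random(n)
        new = state.copy()
        is_s, is_e, is_q = (state == S), (state == E), (state == Q)
        is_i, is_j = (state == I), (state == J)
        new[is_s & (u1 < p["s_to_q"])] = Q
        new[is_s & (u1 >= p["s_to_q"]) & (u2 < p["beta"] * infectious)] = E
        new[is_e & (u1 < p["e_to_i"])] = I
        new[is_q & (u1 < p["q_to_r"])] = R
        new[is_i & (u1 < p["i_to_j"])] = J
        new[is_i & (u1 >= p["i_to_j"]) & (u2 < p["i_to_r"])] = R
        new[is_j & (u1 < p["j_to_r"])] = R
        state = new
        history.append(np.bincount(state, minlength=6))
    return np.array(history)

params = dict(beta=0.6, s_to_q=0.0, e_to_i=0.2, q_to_r=0.1, i_to_j=0.0, i_to_r=0.1, j_to_r=0.1)
counts = simulate_seqijr(n=1000, steps=100, p=params, rng=np.random.default_rng(4))
print(counts[-1])  # final counts of S, E, Q, I, J, R
```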
Validity of a simpler definition of major depressive disorder.
Zimmerman, Mark; Galione, Janine N; Chelminski, Iwona; Young, Diane; Dalrymple, Kristy; Witt, Caren Francione
2010-10-01
In previous reports from the Rhode Island Methods to Improve Diagnostic Assessment and Services project, we developed a briefer definition of major depressive disorder (MDD), and found high levels of agreement between the simplified and DSM-IV definitions of MDD. The goal of the present study was to examine the validity of the simpler definition of MDD. We hypothesized that compared to patients with adjustment disorder, patients with MDD would be more severely depressed, have poorer psychosocial functioning, have greater suicidal ideation at the time of the intake evaluation, and have an increased morbid risk for depression in their first-degree family members. We compared 1,486 patients who met the symptom criteria for current MDD according to either DSM-IV or the simpler definition to 145 patients with a current diagnosis of adjustment disorder with depressed mood or depressed and anxious mood. The patients with MDD were more severely depressed, more likely to have missed time from work due to psychiatric reasons, reported higher levels of suicidal ideation, and had a significantly higher morbid risk for depression in their first-degree family members. Both definitions of MDD were valid. The simpler definition of MDD was as valid as the DSM-IV definition. This new definition offers two advantages over the DSM-IV definition: it is briefer and therefore more likely to be recalled and applied in clinical practice, and it is free of somatic symptoms, thereby making it easier to apply with medically ill patients. Depression and Anxiety, 2010. © 2010 Wiley-Liss, Inc.
Torres-Montúfar, Alejandro; Borsch, Thomas; Ochoterena, Helga
2018-05-01
The conceptualization and coding of characters is a difficult issue in phylogenetic systematics, no matter which inference method is used when reconstructing phylogenetic trees or if the characters are just mapped onto a specific tree. Complex characters are groups of features that can be divided into simpler hierarchical characters (reductive coding), although the implied hierarchical relational information may change depending on the type of coding (composite vs. reductive). Up to now, there is no common agreement to either code characters as complex or simple. Phylogeneticists have discussed which coding method is best but have not incorporated the heuristic process of reciprocal illumination to evaluate the coding. Composite coding allows to test whether 1) several characters were linked resulting in a structure described as a complex character or trait or 2) independently evolving characters resulted in the configuration incorrectly interpreted as a complex character. We propose that complex characters or character states should be decomposed iteratively into simpler characters when the original homology hypothesis is not corroborated by a phylogenetic analysis, and the character or character state is retrieved as homoplastic. We tested this approach using the case of fruit types within subfamily Cinchonoideae (Rubiaceae). The iterative reductive coding of characters associated with drupes allowed us to unthread fruit evolution within Cinchonoideae. Our results show that drupes and berries are not homologous. As a consequence, a more precise ontology for the Cinchonoideae drupes is required.
The effects of deep network topology on mortality prediction.
Hao Du; Ghassemi, Mohammad M; Mengling Feng
2016-08-01
Deep learning has achieved remarkable results in the areas of computer vision, speech recognition, natural language processing and, most recently, even playing Go. The application of deep learning to problems in healthcare, however, has gained attention only in recent years, and its ultimate place at the bedside remains a topic of skeptical discussion. While there is a growing academic interest in the application of Machine Learning (ML) techniques to clinical problems, many in the clinical community see little incentive to upgrade from simpler methods, such as logistic regression, to deep learning. Logistic regression, after all, provides odds ratios, p-values and confidence intervals that allow for ease of interpretation, while deep nets are often seen as 'black boxes' which are difficult to understand and, as of yet, have not demonstrated performance levels far exceeding their simpler counterparts. If deep learning is ever to take a place at the bedside, it will require studies which (1) showcase the performance of deep-learning methods relative to other approaches and (2) interpret the relationships between network structure, model performance, features and outcomes. We have chosen these two requirements as the goal of this study. In our investigation, we utilized a publicly available EMR dataset of over 32,000 intensive care unit patients and trained a Deep Belief Network (DBN) to predict patient mortality at discharge. Utilizing an evolutionary algorithm, we demonstrate automated topology selection for DBNs. We demonstrate that with the correct topology selection, DBNs can achieve better prediction performance compared to several benchmarking methods.
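The evolutionary topology selection can be sketched generically: genomes are hidden-layer size tuples and fitness is cross-validated predictive performance. In the sketch below, scikit-learn's MLPClassifier stands in for the Deep Belief Network, and the selection/mutation scheme, population size, and data are placeholders rather than the study's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def fitness(topology, X, y):
    """Cross-validated AUC of a feed-forward network with the given hidden-layer
    sizes; MLPClassifier is a stand-in for the DBN used in the study."""
    clf = MLPClassifier(hidden_layer_sizes=topology, max_iter=200, random_state=0)
    return cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()

def evolve_topology(X, y, generations=4, pop_size=6, rng=None):
    """Tiny genetic algorithm over topologies: keep the best half, mutate layer
    widths to produce children, repeat. The scheme is illustrative only."""
    if rng is None:
        rng = np.random.default_rng(0)
    pop = [tuple(int(v) for v in rng.integers(8, 65, size=rng.integers(1, 4)))
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda t: fitness(t, X, y), reverse=True)
        parents = ranked[: pop_size // 2]
        children = [tuple(max(4, int(s) + int(rng.integers(-8, 9))) for s in p)
                    for p in parents]
        pop = parents + children
    return max(pop, key=lambda t: fitness(t, X, y))

X, y = make_classification(n_samples=300, n_features=20, random_state=1)
print(evolve_topology(X, y))  # best hidden-layer sizes found
```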
Numerical modeling of reflux solar receivers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, R.E. Jr.
1993-05-01
Using reflux solar receivers to collect solar energy for dish-Stirling electric power generation systems is presently being investigated by several organizations, including Sandia National Laboratories, Albuquerque, N. Mex. In support of this program, Sandia has developed two numerical models describing the thermal performance of pool-boiler and heat-pipe reflux receivers. Both models are applicable to axisymmetric geometries and they both consider the radiative and convective energy transfer within the receiver cavity, the conductive and convective energy transfer from the receiver housing, and the energy transfer to the receiver working fluid. The primary difference between the models is the level of detail in modeling the heat conduction through the receiver walls. The more detailed model uses a two-dimensional finite control volume method, whereas the simpler model uses a one-dimensional thermal resistance approach. The numerical modeling concepts presented are applicable to conventional tube-type solar receivers, as well as to reflux receivers. Good agreement between the two models is demonstrated by comparing the predicted and measured performance of a pool-boiler reflux receiver being tested at Sandia. For design operating conditions, the receiver thermal efficiencies agree within 1 percent and the average receiver cavity temperature within 1.3 percent. The thermal efficiency and receiver temperatures predicted by the simpler thermal resistance model agree well with experimental data from on-sun tests of the Sandia reflux pool-boiler receiver. An analysis of these comparisons identifies several plausible explanations for the differences between the predicted results and the experimental data.
Variational Principles, Occam Razor and Simplicity Paradox
NASA Astrophysics Data System (ADS)
Berezin, Alexander A.
2004-05-01
Variational minimum principles (VMP) refer to energy (statics, Thomson and Earnshaw theorems in electrostatics), action (Maupertuis, Euler, Lagrange, Hamilton), light (Fermat), quantum paths (Feynman), etc. Historically, VMP appeal to some economy in nature, similarly to the Occam Razor Parsimony (ORP) principle. Versions of ORP are the "best world" (Leibniz), Panglossianism (Voltaire), and the "most interesting world" (Dyson). Conceptually, VMP exemplify the curious fact that an infinite set is often simpler than its subsets (e.g., the set of all integers is simpler than the set of primes). The algorithmically very simple number 0.1234567... (the Champernowne constant) contains the Library of Babel of "all books" (Borges) and codes (infinitely many times) everything countably possible. Likewise, the full Megaverse (Everett, Deutsch, Guth, Linde) is simpler than our specific ("Big Bang") universe. Dynamically, VMP imply memory effects akin to hysteresis. Similar ideas are "water memory" (Benveniste, Josephson) and isotopic biology (Berezin). Paradoxically, while ORP calls for economy (simplicity), the unfolding of ORP in VMP seemingly works in the opposite direction, allowing for complexity emergence (e.g., symmetry breaking in the Jahn-Teller effect). Metaphysical extrapolation of this complementarity may lead to an "it-from-bit" (Wheeler) reflection on why there is something rather than nothing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, Jack; Nutaro, James; Shankar, Mallikarjun
2015-10-30
An agent-based simulation model hierarchy emulating disease states and behaviors critical to the progression of type 2 diabetes was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. This model hierarchy, which mimics diabetes progression over an aggregated U.S. population, was disaggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease states in the system dynamics model, while estimating impacts of an elderliness factor, an obesity factor and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms that spread over each agent's social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model, which required numerous time series inputs to make its predictions. All three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translating complex system dynamics models into agent-based model alternatives that are both conceptually simpler and capable of capturing the main effects of complex local agent-agent interactions.
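The behavioral component described above (individual attitude combined with social norms diffusing over an agent's network) can be sketched in generic form as follows. This is an illustrative toy, not the authors' DEVS implementation; the network size, weights and update rule are assumptions.

```python
import random

class Agent:
    def __init__(self, attitude):
        self.attitude = attitude        # individual attitude toward healthy behavior, in [0, 1]
        self.norm = random.random()     # perceived social norm, updated from neighbors
        self.neighbors = []

    def update_norm(self):
        # Social norms diffuse as the mean of the neighbors' current norms.
        if self.neighbors:
            self.norm = sum(n.norm for n in self.neighbors) / len(self.neighbors)

    def acts_healthily(self, w_attitude=0.5):
        # Intention as a weighted combination of attitude and perceived norm
        # (the weight is an illustrative assumption, not a calibrated parameter).
        intention = w_attitude * self.attitude + (1 - w_attitude) * self.norm
        return random.random() < intention

# Tiny example network of 10 agents with 3 random neighbors each.
agents = [Agent(random.random()) for _ in range(10)]
for a in agents:
    a.neighbors = random.sample([x for x in agents if x is not a], 3)

for step in range(5):
    for a in agents:
        a.update_norm()
    healthy = sum(a.acts_healthily() for a in agents)
    print(f"step {step}: {healthy}/10 agents act healthily")
```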
A robust data-driven approach for gene ontology annotation.
Li, Yanpeng; Yu, Hong
2014-01-01
Gene ontology (GO) and GO annotation are important resources for biological information management and knowledge discovery, but the speed of manual annotation has become a major bottleneck of database curation. The BioCreative IV GO annotation task aims to evaluate the performance of systems that automatically assign GO terms to genes based on the narrative sentences in biomedical literature. This article presents our work in this task as well as the experimental results obtained after the competition. For the evidence sentence extraction subtask, we built a binary classifier to identify evidence sentences using the reference distance estimator (RDE), a recently proposed semi-supervised learning method that learns new features from around 10 million unlabeled sentences, achieving an F1 of 19.3% in exact match and 32.5% in relaxed match. In the post-submission experiment, we obtained 22.1% and 35.7% F1 performance by incorporating bigram features in RDE learning. In both development and test sets, the RDE-based method achieved over 20% relative improvement in F1 and AUC performance against classical supervised learning methods, e.g. support vector machines and logistic regression. For the GO term prediction subtask, we developed an information retrieval-based method to retrieve the GO term most relevant to each evidence sentence, using a ranking function that combined cosine similarity and the frequency of GO terms in documents, and a filtering method based on high-level GO classes. The best performance of our submitted runs was 7.8% F1 and 22.2% hierarchy F1. We found that the incorporation of frequency information and hierarchy filtering substantially improved the performance. In the post-submission evaluation, we obtained a 10.6% F1 using a simpler setting. Overall, the experimental analysis showed our approaches were robust in both tasks. © The Author(s) 2014. Published by Oxford University Press.
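The abstract does not give the exact ranking function, so the sketch below simply assumes a product of TF-IDF cosine similarity and a log-frequency prior; the GO terms and frequency counts are toy data, not from the study.

```python
import math
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy candidate GO terms with hypothetical corpus frequencies.
go_terms = {
    "GO:0006915 apoptotic process": 120,
    "GO:0008283 cell population proliferation": 80,
    "GO:0006954 inflammatory response": 45,
}

def rank_go_terms(evidence_sentence, go_terms):
    texts = [evidence_sentence] + list(go_terms.keys())
    tfidf = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    scores = {}
    for (term, freq), sim in zip(go_terms.items(), sims):
        # Assumed combination: cosine similarity weighted by a log-frequency prior.
        scores[term] = sim * math.log(1 + freq)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_go_terms("The protein regulates the apoptotic process in tumour cells", go_terms))
```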
NASA Astrophysics Data System (ADS)
Erpenbeck, John
Universities and other higher education institutions are predominantly organizations that convey knowledge rather than develop competences, even though developing competences is the often verbally proclaimed but only rarely achieved goal. There can be two reasons for this discrepancy. First, conveying informational as well as subject-specific and specialized knowledge can, even today, be planned, assessed, and checked much more easily than conveying competences, an approach to teaching that requires new patterns of thought and action. Teachers and learners, assistants and assessing staff, and especially the actors and planners concerned with questions of educational politics therefore form a "conspiracy of assessors," which has chosen the simpler and seemingly safer approach. This approach, however, seems to ignore future developments. Second, conveying competences needs different forms of learning and teaching than conveying knowledge. Central here is the acquisition (interiorization) of rules, assessments, and results of assessments (= values) and norms in the form of the learners' own emotions and motivations. Becoming emotionally labilized is pivotal to this appropriation. Emotional labilization also provides a criterion for assessing the effectiveness of Web 2.0 instruments for developing competences.
Use Hierarchical Storage and Analysis to Exploit Intrinsic Parallelism
NASA Astrophysics Data System (ADS)
Zender, C. S.; Wang, W.; Vicente, P.
2013-12-01
Big Data is an ugly name for the scientific opportunities and challenges created by the growing wealth of geoscience data. How do we weave large, disparate datasets together to best reveal their underlying properties, to exploit their strengths and minimize their weaknesses, and to continually aggregate more information than the world knew yesterday and less than we will learn tomorrow? Data analytics techniques (statistics, data mining, machine learning, etc.) can accelerate pattern recognition and discovery. However, researchers must often, prior to analysis, organize multiple related datasets into a coherent framework. Hierarchical organization permits entire datasets to be stored in nested groups that reflect their intrinsic relationships and similarities. Hierarchical data can be simpler and faster to analyze when operators are coded to automatically parallelize processing over isomorphic storage units, i.e., groups. The newest generation of netCDF Operators (NCO) embodies this hierarchical approach, while still supporting traditional analysis approaches. We will use NCO to demonstrate the trade-offs involved in processing a prototypical Big Data application (analysis of CMIP5 datasets) using hierarchical and traditional analysis approaches.
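NCO itself is a command-line toolkit; the Python sketch below only illustrates the underlying idea of treating isomorphic netCDF groups as independent, embarrassingly parallel work units. The file name, group layout and variable name are hypothetical.

```python
from multiprocessing import Pool
import numpy as np
from netCDF4 import Dataset

FILE = "cmip5_ensemble.nc"   # hypothetical hierarchical file with one group per model

def group_mean(group_name):
    # Each worker opens the file, descends into its group, and reduces one variable.
    with Dataset(FILE) as ds:
        tas = ds.groups[group_name].variables["tas"][:]   # hypothetical variable name
        return group_name, float(np.mean(tas))

if __name__ == "__main__":
    with Dataset(FILE) as ds:
        names = list(ds.groups)          # isomorphic groups become the units of parallelism
    with Pool() as pool:
        for name, mean in pool.map(group_mean, names):
            print(f"{name}: mean tas = {mean:.2f} K")
```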
Acetabular fractures: anatomic and clinical considerations.
Lawrence, David A; Menn, Kirsten; Baumgaertner, Michael; Haims, Andrew H
2013-09-01
Classifying acetabular fractures can be an intimidating topic. However, it is helpful to remember that there are only three basic types of acetabular fractures: column fractures, transverse fractures, and wall fractures. Within this framework, acetabular fractures are classified into two broad categories: elementary or associated fractures. We will review the osseous anatomy of the pelvis and provide systematic approaches for reviewing both radiographs and CT scans to effectively evaluate the acetabulum. Although acetabular fracture classification may seem intimidating, the descriptions and distinctions discussed and shown in this article hopefully make the topic simpler to understand. Approach the task by recalling that there are only three basic types of acetabular fractures: column fractures (coronally oriented on CT images), transverse fractures (sagittally oriented on CT images), and wall fractures (obliquely oriented on CT images). We have provided systematic approaches for reviewing both conventional radiographs and CT scans to effectively assess the acetabulum. The clinical implications of the different fracture patterns have also been reviewed because it is critically important to include pertinent information for our clinical colleagues to provide the most efficient and timely clinical care.
Batten, W M J; Harrison, M E; Bahaj, A S
2013-02-28
The actuator disc-RANS model has been widely used in wind and tidal energy to predict the wake of a horizontal axis turbine. The model is appropriate where large-scale effects of the turbine on a flow are of interest, for example, when considering environmental impacts or arrays of devices. The accuracy of the model for modelling the wake of tidal stream turbines has not been demonstrated, and flow predictions presented in the literature for similar modelled scenarios vary significantly. This paper compares the results of the actuator disc-RANS model, where the turbine forces have been derived using a blade-element approach, to experimental data measured in the wake of a scaled turbine. It also compares the results with those of a simpler uniform actuator disc model. The comparisons show that the model is accurate and can predict up to 94 per cent of the variation in the experimental velocity data measured on the centreline of the wake, demonstrating that the actuator disc-RANS model is an accurate approach for modelling a turbine wake, and a conservative approach for predicting performance and loads. It can therefore be applied to similar scenarios with confidence.
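For reference, the simpler uniform actuator disc mentioned above reduces to classical one-dimensional momentum theory. The sketch below gives only that textbook relation, not the paper's blade-element RANS coupling; the inflow speed and induction factor are illustrative values.

```python
def uniform_actuator_disc(U_inf, a):
    """Classical 1-D momentum theory for a uniform actuator disc.
    U_inf : free-stream velocity [m/s]; a : axial induction factor (0 < a < 0.5)."""
    Ct = 4.0 * a * (1.0 - a)          # thrust coefficient
    Cp = 4.0 * a * (1.0 - a) ** 2     # power coefficient (Betz limit at a = 1/3)
    U_disc = U_inf * (1.0 - a)        # velocity at the disc plane
    U_wake = U_inf * (1.0 - 2.0 * a)  # idealized far-wake velocity
    return Ct, Cp, U_disc, U_wake

# Tidal-stream-like inflow at Betz-optimal induction (illustrative numbers only).
print(uniform_actuator_disc(U_inf=2.0, a=1.0 / 3.0))
```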
Reja, Abu Hena Hasanoor; De, Abhishek; Biswas, Supratik; Chattopadhyay, Amitabha; Chatterjee, Gobinda; Bhattacharya, Basudev; Sarda, Aarti; Aggarwal, Ishad
2013-01-01
The diagnosis of pure neural leprosy (PNL) has remained subjective because of over-dependence on clinical expertise and the lack of a simple yet reliable diagnostic tool. The criteria for diagnosis proposed by Jardim et al. are not routinely applied by clinicians in developing countries, as they involve invasive nerve biopsy and sophisticated anti-PGL-1 detection. We conducted a study using fine needle aspiration cytology (FNAC) coupled with Ziehl-Neelsen (ZN) staining and multiplex polymerase chain reaction (PCR) specific for M. leprae for an objective diagnosis of PNL, which may be simpler and yet reliable. The aim of the study was to couple FNAC with ZN staining and multiplex PCR to diagnose pure neural leprosy patients rapidly, in a simpler yet reliable way. Thirteen patients with PNL, as diagnosed by two independent consultants, were included as cases, and 5 patients without PNL were taken as controls. Fine needle aspiration was done on the affected nerve, and aspirates were evaluated by cytology, ZN staining and multiplex PCR. Out of the 13 cases where fine needle aspiration was done, M. leprae could be detected in the nerve tissue aspirates in 5 cases (38.4%) with conventional acid-fast staining and in 11 cases (84.6%) with multiplex PCR. On cytological examination of the aspirates, only 3 (23%) cases showed specific epithelioid cells, whereas 8 (61.5%) cases showed non-specific inflammation, and 2 (15.3%) cases had no inflammatory cells. Our study demonstrates that, for the laboratory diagnosis of PNL, FNAC in combination with ZN staining for acid-fast bacilli (AFB) and multiplex PCR can provide a rapid and definitive diagnosis for the majority of cases. FNAC is a less invasive, outpatient-based and simpler technique than the invasive nerve biopsy procedure. Thus, this study may point the way toward easy and reliable diagnosis of PNL.
Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher
2017-05-18
Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.
RchyOptimyx: Cellular Hierarchy Optimization for Flow Cytometry
Aghaeepour, Nima; Jalali, Adrin; O’Neill, Kieran; Chattopadhyay, Pratip K.; Roederer, Mario; Hoos, Holger H.; Brinkman, Ryan R.
2013-01-01
Analysis of high-dimensional flow cytometry datasets can reveal novel cell populations with poorly understood biology. Following discovery, characterization of these populations in terms of the critical markers involved is an important step, as this can help both to better understand the biology of these populations and to design simpler marker panels to identify them on simpler instruments and with fewer reagents (i.e., in resource-poor or highly regulated clinical settings). However, current panel-design tools work exclusively from technical parameters (e.g., instrument configurations, spectral overlap, and reagent availability) rather than the biological characteristics of the target cell populations. To address this shortcoming, we developed RchyOptimyx (cellular hieraRCHY OPTIMization), a computational tool that constructs cellular hierarchies by combining automated gating with dynamic programming and graph theory to provide the best gating strategies to identify a target population to a desired level of purity or correlation with a clinical outcome, using the simplest possible marker panels. RchyOptimyx can assess and graphically present the trade-offs between marker choice and population specificity in high-dimensional flow or mass cytometry datasets. We present three proof-of-concept use cases for RchyOptimyx that involve 1) designing a panel of surface markers for identification of rare populations that are primarily characterized using their intracellular signature; 2) simplifying the gating strategy for identification of a target cell population; and 3) identifying a non-redundant marker set to identify a target cell population. PMID:23044634
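To make the graph-plus-dynamic-programming idea concrete, the sketch below builds a directed graph of marker subsets and finds a cheapest path from "no markers" to a full panel. It is not RchyOptimyx's actual scoring or hierarchy construction: the marker names are arbitrary and the purity function is a placeholder that would, in practice, come from automated gating of the cytometry data.

```python
import itertools
import networkx as nx

MARKERS = ["CD3", "CD4", "CD127", "CD25"]   # hypothetical panel

def purity(subset):
    # Placeholder: pretend purity of the target population grows with panel size.
    return len(subset) / len(MARKERS)

# Nodes are marker subsets; each edge adds one marker and is cheap when that
# marker buys a large gain in purity (so cheapest paths prefer informative markers).
G = nx.DiGraph()
for r in range(len(MARKERS) + 1):
    for subset in itertools.combinations(MARKERS, r):
        G.add_node(frozenset(subset))
for node in list(G.nodes):
    for m in MARKERS:
        if m not in node:
            child = node | {m}
            gain = purity(child) - purity(node)
            G.add_edge(node, child, weight=1.0 - gain)

start, goal = frozenset(), frozenset(MARKERS)
path = nx.shortest_path(G, start, goal, weight="weight")
print([sorted(p) for p in path])   # one candidate ordering of markers to add
```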
CLIMLAB: a Python-based software toolkit for interactive, process-oriented climate modeling
NASA Astrophysics Data System (ADS)
Rose, B. E. J.
2015-12-01
Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g. radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and the freedom to tinker with climate models (whether simple or complex) are invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models together into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics. The IPython notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates and Python is an increasingly important language in STEM fields. However, CLIMLAB is also well suited to be deployed as a computational back-end for a graphical gaming environment based on earth-system modeling.
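A minimal sketch of the kind of interactive workflow CLIMLAB targets is shown below. The constructor and method names follow the CLIMLAB documentation as best recalled here and should be treated as assumptions to verify against the current release.

```python
# Sketch of an interactive CLIMLAB session (names assumed from the CLIMLAB docs).
import climlab

ebm = climlab.EBM()                 # zonal-mean energy balance model, default parameters
print(ebm)                          # CLIMLAB processes print a summary of their sub-processes
ebm.integrate_years(5)              # step the model forward toward equilibrium
print(ebm.Ts.min(), ebm.Ts.max())   # inspect the zonal-mean surface temperature field
```

The point of the example is the workflow, not the numbers: a student can swap or reconfigure sub-processes and re-run the few lines above to see how the equilibrium changes.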
Venous thromboembolism prevention guidelines for medical inpatients: mind the (implementation) gap.
Maynard, Greg; Jenkins, Ian H; Merli, Geno J
2013-10-01
Hospital-associated nonsurgical venous thromboembolism (VTE) is an important problem addressed by new guidelines from the American College of Physicians (ACP) and American College of Chest Physicians (AT9). Narrative review and critique. Both guidelines discount asymptomatic VTE outcomes and caution against overprophylaxis, but have different methodologies and estimates of risk/benefit. Guideline complexity and lack of consensus on VTE risk assessment contribute to an implementation gap. Methods to estimate prophylaxis benefit have significant limitations because major trials included mostly screening-detected events. AT9 relies on a single Italian cohort study to conclude that those with a Padua score ≥4 have a very high VTE risk, whereas patients with a score <4 (60% of patients) have a very small risk. However, the cohort population has less comorbidity than US inpatients, and over 1% of patients with a score of 3 suffered pulmonary emboli. The ACP guideline does not endorse any risk-assessment model. AT9 includes the Padua model and Caprini point-based system for nonsurgical inpatients and surgical inpatients, respectively, but there is no evidence they are more effective than simpler risk-assessment models. New VTE prevention guidelines provide varied guidance on important issues including risk assessment. If Padua is used, a threshold of 3, as well as 4, should be considered. Simpler VTE risk-assessment models may be superior to complicated point-based models in environments without sophisticated clinical decision support. © 2013 Society of Hospital Medicine.
Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.
Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja
2015-06-01
Measurement uncertainty is a metrological concept to quantify the variability of measurement results. There are two approaches to estimating measurement uncertainty. In this study, we sought to provide practical and detailed examples of, and to compare, the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated the measurement uncertainty of the concentration of glucose according to the CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach: we identified the sources of uncertainty, made an uncertainty budget, and assessed the measurement functions; we then determined the uncertainties of each element and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months; we then estimated and corrected systematic bias using a certified reference material for glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k=2). Those at the high glucose concentration (12.77 mmol/L) by the bottom-up and top-down approaches were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k=2). We presented practical and detailed examples for estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches are approximately equivalent and interchangeable, and concluded that clinical laboratories can determine measurement uncertainty by the simpler top-down approach.
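A generic illustration of the top-down calculation follows: long-term imprecision from IQC data is combined with the uncertainty of the bias estimated against a certified reference material, then expanded with a coverage factor k=2. The numbers are made up and the formula is a common textbook form, not the study's exact procedure.

```python
import math

def top_down_uncertainty(iqc_sd, crm_certified, crm_mean, crm_u_cert, n_crm, k=2.0):
    """Generic top-down estimate: combine long-term imprecision (IQC SD) with the
    uncertainty of the bias estimated against a certified reference material."""
    bias = crm_mean - crm_certified
    u_bias = math.sqrt(crm_u_cert**2 + (iqc_sd / math.sqrt(n_crm))**2)
    u_combined = math.sqrt(iqc_sd**2 + u_bias**2)
    return bias, k * u_combined          # expanded uncertainty with coverage factor k

# Illustrative glucose numbers in mmol/L (not the study's data).
bias, U = top_down_uncertainty(iqc_sd=0.08, crm_certified=5.50,
                               crm_mean=5.55, crm_u_cert=0.03, n_crm=10)
print(f"bias = {bias:.3f} mmol/L, expanded uncertainty = ±{U:.2f} mmol/L (k=2)")
```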
Assessment of intraocular pressure sensing using an implanted reflective flexible membrane
NASA Astrophysics Data System (ADS)
Nazarov, Andrey; Knyazer, Boris; Lifshitz, Tova; Schvartzman, Mark; Abdulhalim, Ibrahim
2017-04-01
Glaucoma is a neurodegenerative condition that is the leading cause of irreversible blindness worldwide. Elevated intraocular pressure (IOP) is the main cause for the development of the disease. The symptoms of this form, such as deterioration of vision and scotomas (loss of visual fields), appear in the latter stages of the disease. Therefore, an IOP monitoring device is needed for better, simpler, and faster diagnosis and to enable a fast treatment response. We present a theoretical assessment as well as preliminary experimental results of a simple approach for easy, optical IOP self-monitoring. It is based on a polydimethylsiloxane membrane coated with a reflective layer and a Hartmann-Shack wavefront sensor. A nearly linear correlation is found between membrane deformation and the Zernike coefficients representing defocus and primary spherical aberration, with high sensitivity and negligible dependence on the measurement distance. The proposed device is expected to provide an accurate IOP measurement resolution of less than ±0.2 mm Hg with a pressure dependence on working distance of <0.7 mm Hg/cm for a thick membrane; the corresponding values for a thin membrane are ±0.45 mm Hg and <0.6 mm Hg/cm, respectively, at typical IOP values of up to 40 mm Hg.
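Because the reported relation between membrane deformation and the Zernike defocus term is nearly linear, a simple calibration fit suffices to convert wavefront measurements into pressure readings. The sketch below uses entirely hypothetical calibration data; it only illustrates the fitting step, not the authors' device.

```python
import numpy as np

# Hypothetical calibration: Zernike defocus coefficient (um) at known chamber
# pressures (mm Hg); values are illustrative placeholders only.
pressure = np.array([10, 15, 20, 25, 30, 35, 40], dtype=float)
defocus  = np.array([0.21, 0.33, 0.44, 0.57, 0.68, 0.80, 0.92])

slope, intercept = np.polyfit(defocus, pressure, 1)   # exploit the near-linear relation

def iop_from_defocus(c_defocus):
    return slope * c_defocus + intercept

print(f"estimated IOP at defocus 0.50 um: {iop_from_defocus(0.50):.1f} mm Hg")
```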
Forty-five degree backscattering-mode nonlinear absorption imaging in turbid media.
Cui, Liping; Knox, Wayne H
2010-01-01
Two-color nonlinear absorption imaging has been previously demonstrated with endogenous contrast of hemoglobin and melanin in turbid media using transmission-mode detection and a dual-laser technology approach. For clinical applications, it would be generally preferable to use backscattering mode detection and a simpler single-laser technology. We demonstrate that imaging in backscattering mode in turbid media using nonlinear absorption can be obtained with as little as 1-mW average power per beam with a single laser source. Images have been achieved with a detector receiving backscattered light at a 45-deg angle relative to the incoming beams' direction. We obtain images of capillary tube phantoms with resolution as high as 20 μm and penetration depth up to 0.9 mm for a 300-μm tube at SNR approximately 1 in calibrated scattering solutions. Simulation results of the backscattering and detection process using nonimaging optics are demonstrated. A Monte Carlo-based method shows that the nonlinear signal drops exponentially as the depth increases, which agrees well with our experimental results. Simulation also shows that with our current detection method, only 2% of the signal is typically collected with a 5-mm-radius detector.
SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.
Zenke, Friedemann; Ganguli, Surya
2018-06-01
The vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
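The core trick in surrogate-gradient rules of this kind is to replace the non-existent derivative of the spike nonlinearity with a smooth surrogate that peaks near threshold. The sketch below shows a fast-sigmoid-style surrogate of the general form used in SuperSpike; the steepness and threshold values are illustrative, not the paper's.

```python
import numpy as np

def surrogate_grad(u, theta=1.0, beta=10.0):
    """Fast-sigmoid surrogate for the derivative of the spiking nonlinearity,
    of the general form used in surrogate-gradient rules such as SuperSpike
    (beta and theta here are illustrative values, not the paper's settings)."""
    return 1.0 / (1.0 + beta * np.abs(u - theta)) ** 2

# The surrogate is largest when the membrane potential u is near threshold theta,
# which is what lets error gradients flow through otherwise non-differentiable spikes.
u = np.linspace(0.0, 2.0, 5)
print(surrogate_grad(u))
```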
NASA Astrophysics Data System (ADS)
Magnusson, Robert; Yoon, Jae Woong; Amin, Mohammad Shyiq; Khaleque, Tanzina; Uddin, Mohammad Jalal
2014-03-01
For selected device concepts that are members of an evolving class of photonic devices enabled by guided-mode resonance (GMR) effects, we review physics of operation, design, fabrication, and characterization. We summarize the application potential of this field and provide new and emerging aspects. Our chosen examples include resonance elements with extremely wide reflection bands. Thus, in a multilevel structure with conformal germanium (Ge) films, reflectance exceeds 99% for spectral widths approaching 1100 nm. A simpler design, incorporating a partially etched single Ge layer on a glass substrate, exhibits a high-reflectance bandwidth close to 900 nm. We present a couple of interesting new device concepts enabled by GMRs coexisting with the Rayleigh anomaly. Our example Rayleigh reflector exhibits a wideband high-efficiency flattop spectrum and extremely rapid angular transitions. Moreover, we show that it is possible to fashion transmission filters by excitation of leaky resonant modes at the Rayleigh anomaly in a subwavelength nanograting. A unique transmission spectrum results, which is tightly delimited in angle and wavelength as experimentally demonstrated. We update our application list with new developments including GMR-based coherent perfect absorbers, multiparametric biosensors, and omnidirectional wideband absorbers.
A New Model Based on Adaptation of the External Loop to Compensate the Hysteresis of Tactile Sensors
Sánchez-Durán, José A.; Vidal-Verdú, Fernando; Oballe-Peinado, Óscar; Castellanos-Ramos, Julián; Hidalgo-López, José A.
2015-01-01
This paper presents a novel method to compensate for hysteresis nonlinearities observed in the response of a tactile sensor. The External Loop Adaptation Method (ELAM) performs a piecewise linear mapping of the experimentally measured external curves of the hysteresis loop to obtain all possible internal cycles. The optimal division of the input interval where the curve is approximated is provided by an error-minimization algorithm. This process is carried out offline and provides parameters to compute the split point in real time. A different linear transformation is then performed at the left and right of this point, and a more precise fitting is achieved. The models obtained with the ELAM method are compared with those obtained from three other approaches. The results show that the ELAM method achieves a more accurate fitting. Moreover, the mathematical operations involved are simpler and therefore easier to implement in devices such as Field Programmable Gate Arrays (FPGAs) for real-time applications. Furthermore, the method needs to identify fewer parameters and requires no previous selection process of operators or functions. Finally, the method can be applied to other sensors or actuators with complex hysteresis loop shapes. PMID:26501279
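A bare-bones version of the piecewise-linear mapping idea is sketched below: the measured ascending and descending external branches are used as lookup tables for the current loading direction. This is only the starting point; ELAM additionally adapts the mapping around a computed split point to reproduce internal cycles, which is not shown here, and the branch data are hypothetical.

```python
import numpy as np

# Hypothetical measured external loop of a tactile sensor: raw sensor output (x)
# versus applied force (y) for the ascending and descending branches.
x_up,   y_up   = np.array([0, 2, 5, 8, 10.0]), np.array([0, 1.0, 3.5, 7.0, 10.0])
x_down, y_down = np.array([0, 2, 5, 8, 10.0]), np.array([0, 2.5, 5.5, 8.5, 10.0])

def compensate(x, loading=True):
    """Piecewise-linear mapping of raw output to force using the external branch
    matching the current loading direction (a simplification of ELAM, which also
    adapts the mapping in real time to reproduce internal cycles)."""
    xs, ys = (x_up, y_up) if loading else (x_down, y_down)
    return np.interp(x, xs, ys)

print(compensate(6.0, loading=True), compensate(6.0, loading=False))
```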
Manyi-Loh, Christy E.; Mamphweli, Sampson N.; Meyer, Edson L.; Okoh, Anthony I.; Makaka, Golden; Simon, Michael
2013-01-01
With an ever-increasing population, a vast array of biomass wastes rich in organic and inorganic nutrients as well as pathogenic microorganisms will result from diversified human, industrial and agricultural activities. Anaerobic digestion is applauded as one of the best ways to properly handle and manage these wastes. Animal wastes have been recognized as suitable substrates for the anaerobic digestion process, a natural biological process in which complex organic materials are broken down into simpler molecules in the absence of oxygen by the concerted activities of four sets of metabolically linked microorganisms. This process occurs in an airtight chamber (biodigester) via four stages carried out by hydrolytic, acidogenic, acetogenic and methanogenic microorganisms. The microbial population and structure can be identified by the combined use of culture-based, microscopic and molecular techniques. Overall, the process is affected by biodigester design, operational factors and manure characteristics. The purpose of anaerobic digestion is the production of a renewable energy source (biogas) and an odor-free, nutrient-rich fertilizer. Conversely, if animal wastes are accidentally released into the environment, they can cause a drastic chain of environmental and public health complications. PMID:24048207
Virtual view image synthesis for eye-contact in TV conversation system
NASA Astrophysics Data System (ADS)
Murayama, Daisuke; Kimura, Keiichi; Hosaka, Tadaaki; Hamamoto, Takayuki; Shibuhisa, Nao; Tanaka, Seiichi; Sato, Shunichi; Saito, Sakae
2010-02-01
Eye-contact plays an important role in human communications in the sense that it can convey unspoken information. However, it is highly difficult to realize eye-contact in teleconferencing systems because of camera configurations. Conventional methods to overcome this difficulty mainly resorted to space-consuming optical devices such as half mirrors. In this paper, we propose an alternative approach to achieve eye-contact using techniques of arbitrary view image synthesis. In our method, multiple images captured by real cameras are converted to the virtual viewpoint (the center of the display) by homography, and evaluation of matching errors among these projected images provides the depth map and the virtual image. Furthermore, we also propose a simpler version of this method that uses a single camera to save computational costs, in which only one real image is transformed to the virtual viewpoint based on the hypothesis that the subject is located at a predetermined distance. In this simple implementation, eye regions are generated separately by comparison with pre-captured frontal face images. Experimental results of both methods show that the synthesized virtual images achieve eye-contact favorably.
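The single-camera variant amounts to re-projecting one camera frame to the display-center viewpoint with a planar homography computed from calibration and an assumed subject distance. The sketch below shows only that warping step with OpenCV; the homography matrix and the synthetic input frame are placeholders, not the authors' calibration.

```python
import cv2
import numpy as np

# Placeholder homography mapping the real camera view to the virtual viewpoint at
# the display center; in practice it would come from calibration plus an assumed
# subject distance.
H = np.array([[1.0, 0.02, -15.0],
              [0.0, 1.05,  -8.0],
              [0.0, 0.0001,  1.0]])

# Synthetic stand-in for a captured camera frame (a real system would grab video).
frame = np.full((480, 640, 3), 128, dtype=np.uint8)
h, w = frame.shape[:2]

virtual = cv2.warpPerspective(frame, H, (w, h))   # re-project to the virtual viewpoint
cv2.imwrite("virtual_view.png", virtual)

# With several real cameras, per-pixel matching error between the warped views
# (e.g. cv2.absdiff over candidate depths) would be evaluated to build the depth map.
```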