Spitzer Space Telescope proposal process
NASA Astrophysics Data System (ADS)
Laine, S.; Silbermann, N. A.; Rebull, L. M.; Storrie-Lombardi, L. J.
2006-06-01
This paper discusses the Spitzer Space Telescope General Observer proposal process. Proposals, consisting of the scientific justification, basic contact information for the observer, and observation requests, are submitted electronically using a client-server Java package called Spot. The Spitzer Science Center (SSC) uses a one-phase proposal submission process, meaning that for most proposals fully planned observations are submitted at the time of submission, not months after acceptance. Ample documentation and tools, including an email-based Helpdesk, are available on SSC web pages to support proposal preparation. Upon submission, proposals are immediately ingested into a database, which can be queried at the SSC for program information, statistics, etc. at any time. Large proposals are checked for technical feasibility, and all proposals are checked for duplication against already approved observations. Output from these tasks is made available to the Time Allocation Committee (TAC) members. At the review meeting, web-based software is used to record reviewer comments and keep track of the voted scores. After the meeting, another Java-based web tool, Griffin, is used to track the approved programs as they go through technical reviews, duplication checks, and minor modifications before the observations are released for scheduling. In addition to detailing the proposal process, lessons learned from the first two General Observer proposal calls are discussed.
Generalized Structured Component Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun; Takane, Yoshio
2004-01-01
We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…
Zhu, Hong; Xu, Xiaohan; Ahn, Chul
2017-01-01
Paired experimental designs are widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the collected data: some study units contribute complete pairs of observations, while others contribute only pre- or post-intervention observations. Statistical inference for paired experimental designs with incomplete observations of continuous outcomes has been extensively studied in the literature; however, sample size methods for such designs are scarce. We derive a closed-form sample size formula based on the generalized estimating equation (GEE) approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of the mixed structure of the observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible enough to accommodate different missing patterns, magnitudes of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed GEE sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method leads to a more accurate sample size estimate than the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the GEE sample size formula. A real application example is presented for illustration.
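As a concrete anchor for the complete-observation special case mentioned in the abstract, here is a minimal sketch of the standard normal-approximation sample size for a paired t-test; this is not the paper's GEE formula, and the function name and defaults are illustrative:

```python
from math import ceil
from statistics import NormalDist

def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.80):
    """Pairs needed to detect a mean paired difference `delta` when the
    differences have standard deviation `sd_diff`:
    n = ((z_{1-alpha/2} + z_{power}) * sd_diff / delta) ** 2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(((z_a + z_b) * sd_diff / delta) ** 2)

# e.g. detecting a half-standard-deviation shift at 80% power
n = paired_sample_size(delta=0.5, sd_diff=1.0)  # -> 32 pairs
```

The paper's contribution is the correction to this number when some units contribute only one member of the pair.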
Local and gauge invariant observables in gravity
NASA Astrophysics Data System (ADS)
Khavkine, Igor
2015-09-01
It is well known that general relativity (GR) does not possess any non-trivial local (in a precise standard sense) and diffeomorphism invariant observable. We propose a generalized notion of local observables, which retains the most important properties that follow from the standard definition of locality, yet is flexible enough to admit a large class of diffeomorphism invariant observables in GR. The generalization comes at a small price: the domain of definition of a generalized local observable may not cover the entire phase space of GR, and two such observables may have distinct domains. However, the subset of metrics on which generalized local observables can be defined is in a sense generic (its open interior is non-empty in the Whitney strong topology). Moreover, generalized local gauge invariant observables are sufficient to separate diffeomorphism orbits on this admissible subset of the phase space. Connecting the construction with the notion of differential invariants gives a general scheme for defining generalized local gauge invariant observables in arbitrary gauge theories, which happens to agree with well-known results for Maxwell and Yang-Mills theories.
Using HST: From proposal to science
NASA Technical Reports Server (NTRS)
Shames, P.
1991-01-01
The following subject areas are covered: a short history; uses of the STScI network (general communication, science collaboration, functional activities, internal data management, and external data access); proposal/observation handling; DMF access; and future uses and requirements.
The SIRTF Legacy Observing Program
NASA Astrophysics Data System (ADS)
Greenhouse, M. A.; Leisawitz, D.; Gehrz, R. D.; Clemens, D. P.; Force, Sirtf Community Task
1997-12-01
Legacy Observations and General Observations (GO) are separate categories in which SIRTF observing time will be allocated through peer-reviewed community proposals. The Legacy Program will embrace several projects, each headed by a Legacy Principal Investigator. Legacy Observations are distinguished from General Observations by the following three criteria: [1] the project is a large, coherent investigation whose scientific goals cannot be met by a number of smaller, uncoordinated projects; [2] the data will be of both general and lasting importance to the broad astronomical community and of immediate utility in motivating and planning follow-on GO investigations with SIRTF; and [3] the data (unprocessed, fully processed, and at intermediate steps in processing) will be placed in a public database immediately, with no proprietary period. The goals of the SIRTF Legacy program are: [1] to enable community use of SIRTF for large coherent survey observations, [2] to provide prompt community access to SIRTF survey data, and [3] to enable GO program observations based on Legacy program results. A likely attribute (but not a requirement) of Legacy projects is that they may involve hundreds, and perhaps thousands, of hours of observing time. It is anticipated that as much as 6000 hours of telescope time will be allocated through the Legacy program. To meet Legacy program goal [3], allocation of as much as 70% of SIRTF's first year on orbit to Legacy projects may be necessary, and the observing phase of the Legacy program will be completed during the following year. A Legacy call for proposals will be issued one year prior to launch or sooner, and will be open to all scientists and science topics. In this poster, we display Legacy program definition and schedule items that will be of interest to those intending to propose under this unique opportunity.
Wideband Motion Control by Position and Acceleration Input Based Disturbance Observer
NASA Astrophysics Data System (ADS)
Irie, Kouhei; Katsura, Seiichiro; Ohishi, Kiyoshi
The disturbance observer can observe and suppress the disturbance torque within its bandwidth. Motion systems are spreading throughout society and are increasingly required to interact with unknown environments. Such haptic motion requires a much wider bandwidth. However, since the conventional disturbance observer obtains the acceleration response from the second-order derivative of the position response, its bandwidth is limited by derivative noise. This paper proposes a novel structure for a disturbance observer. The proposed disturbance observer uses an acceleration sensor to enlarge the bandwidth. Generally, the bandwidth of an acceleration sensor extends from 1 Hz to more than 1 kHz. To cover the DC range, the conventional position-sensor-based disturbance observer is integrated. Thus, the performance of the proposed Position and Acceleration input based Disturbance Observer (PADO) is superior to the conventional one. The PADO is applied to position control (infinite stiffness) and force control (zero stiffness). Numerical and experimental results show the viability of the proposed method.
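The idea of combining a DC-accurate position path with a wideband acceleration sensor can be sketched as a complementary filter. This is an illustrative reconstruction of the PADO concept, not the authors' exact observer; the cutoff `fc` and the first-order discretization are assumptions:

```python
import numpy as np

def fuse_acceleration(pos, acc_sensor, dt, fc=1.0):
    """Low-pass the acceleration obtained by double-differentiating the
    position response (reliable near DC, noisy at high frequency) and
    high-pass the accelerometer signal (reliable above roughly fc), then sum."""
    acc_pos = np.gradient(np.gradient(pos, dt), dt)
    alpha = dt / (dt + 1.0 / (2 * np.pi * fc))  # first-order filter coefficient
    lp = 0.0                   # low-pass state (position path)
    hp = 0.0                   # high-pass state (sensor path)
    prev = acc_sensor[0]
    fused = []
    for ap, asen in zip(acc_pos, acc_sensor):
        lp = lp + alpha * (ap - lp)
        hp = (1 - alpha) * (hp + asen - prev)
        prev = asen
        fused.append(lp + hp)
    return np.array(fused)
```

At low frequency the position path dominates, covering the accelerometer's missing DC response; at high frequency the sensor path dominates, which is what widens the usable observer bandwidth.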
Observational viability and stability of nonlocal cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deser, S.; Woodard, R.P., E-mail: deser@brandeis.edu, E-mail: woodard@phys.ufl.edu
2013-11-01
We show that the nonlocal gravity models, proposed to explain current cosmic acceleration without dark energy, pass two essential tests. First, they can be defined so as not to alter the observationally correct general relativity predictions for gravitationally bound systems. Second, they are stable and ghost-free, with no additional excitations beyond those of general relativity; in this they differ from their ghostful localized versions. The systems' initial value constraints are the same as in general relativity, and our nonlocal modifications never convert the original gravitons into ghosts.
Fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, R.
1986-01-01
A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility, and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary, and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time, as would be required for batch processing techniques such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the additional advantage that the observations need not be stored. The algorithm also yields a real-time confidence measure for the accuracy of the estimator.
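A minimal sketch of the recursive least squares machinery the abstract describes, specialized for brevity to estimating the in-phase and quadrature amplitudes of a sinusoid at a known frequency; the paper's algorithm is more general and acquires the frequency itself, so treat the names and the fixed-frequency simplification as assumptions:

```python
import numpy as np

def rls_sinusoid(samples, freq, dt, lam=1.0):
    """Recursively fit y[n] ~ a*cos(w n dt) + b*sin(w n dt).
    Each observation refines the estimate in place, so the samples
    never need to be stored -- the property highlighted above.
    `lam` is a forgetting factor (1.0 = ordinary least squares)."""
    w = 2 * np.pi * freq
    theta = np.zeros(2)        # current estimate [a, b]
    P = np.eye(2) * 1e6        # large initial covariance (diffuse prior)
    for n, y in enumerate(samples):
        phi = np.array([np.cos(w * n * dt), np.sin(w * n * dt)])
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + k * (y - phi @ theta)  # innovation update
        P = (P - np.outer(k, phi) @ P) / lam   # covariance update
    return theta
```

The diagonal of `P` shrinks as observations accumulate and provides the kind of running confidence measure the abstract mentions.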
Search For Debris Disks Around A Few Radio Pulsars
NASA Astrophysics Data System (ADS)
Wang, Zhongxiang; Kaplan, David; Kaspi, Victoria
2007-05-01
We propose to observe 7 radio pulsars with Spitzer/IRAC at 4.5 and 8.0 microns, in an effort to probe the general existence of debris disks around isolated neutron stars. Such disks, probably formed from fallback or pushback material left over from supernova explosions, have been suggested to be associated with various phenomena seen in radio pulsars. Recently, new evidence for such a disk around an isolated young neutron star was found in Spitzer observations of an X-ray pulsar. If they exist, the disks could be illuminated by energy output from the central pulsars and thus be generally detectable in the infrared by IRAC. We have selected 40 relatively young, energetic pulsars from the most recent pulsar catalogue as the preliminary targets for our ground-based near-IR imaging survey. Based on the results from the survey observations, 7 pulsars are further selected because of their relatively sparse fields and low estimated extinction. Combined with our near-IR images, Spitzer/IRAC observations will allow us to unambiguously identify disks if they are detected at the source positions. The Spitzer observation program we propose here probably represents the best test we can perform of the general existence of disks around radio pulsars.
Search automation of the generalized method of device operational characteristics improvement
NASA Astrophysics Data System (ADS)
Petrova, I. Yu; Puchkova, A. A.; Zaripova, V. M.
2017-01-01
The article presents brief results of an analysis of existing methods for finding the closest patents, which can be applied to determine generalized methods of improving device operational characteristics. The most widespread clustering algorithms, and metrics for determining the degree of proximity between two documents, are reviewed. The article proposes a technique for determining generalized methods; it has two implementation variants and consists of 7 steps. This technique has been implemented in the “Patents search” subsystem of the “Intellect” system. The article also gives an example of the use of the proposed technique.
Do People Use Their Implicit Theories of Creativity as General Theories?
ERIC Educational Resources Information Center
Lee, Hong; Kim, Jungsik; Ryu, Yeonjae; Song, Seokjong
2015-01-01
This study examines whether people use the general implicit theories of creativity or not when applying them to themselves and others. On the basis of the actor-observer asymmetry theory, the authors propose that conception of creativity would be differently constructed depending on the targets of attention: general, self, and other. Three studies…
ERIC Educational Resources Information Center
Zhong, Zhenshan; Sun, Mengyao
2018-01-01
The power of general education curriculum comes from the enduring classics. The authors apply research methods such as questionnaire survey, interview, and observation to investigate the state of general education curriculum implementation at N University and analyze problems faced by incorporating classics. Based on this, the authors propose that…
Extending the Applicability of the Generalized Likelihood Function for Zero-Inflated Data Series
NASA Astrophysics Data System (ADS)
Oliveira, Debora Y.; Chaffe, Pedro L. B.; Sá, João. H. M.
2018-03-01
Proper uncertainty estimation for data series with a high proportion of zero and near zero observations has been a challenge in hydrologic studies. This technical note proposes a modification to the Generalized Likelihood function that accounts for zero inflation of the error distribution (ZI-GL). We compare the performance of the proposed ZI-GL with the original Generalized Likelihood function using the entire data series (GL) and by simply suppressing zero observations (GLy>0). These approaches were applied to two interception modeling examples characterized by data series with a significant number of zeros. The ZI-GL produced better uncertainty ranges than the GL as measured by the precision, reliability and volumetric bias metrics. The comparison between ZI-GL and GLy>0 highlights the need for further improvement in the treatment of residuals from near zero simulations when a linear heteroscedastic error model is considered. Aside from the interception modeling examples illustrated herein, the proposed ZI-GL may be useful for other hydrologic studies, such as for the modeling of the runoff generation in hillslopes and ephemeral catchments.
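One simple way to realize the zero-inflation idea is a mixture likelihood: a point mass at zero with probability `pi0` and a Gaussian for the nonzero residuals. This is an illustrative stand-in, not the paper's exact ZI-GL (which also handles heteroscedastic errors), and the function name and tolerance are ours:

```python
import numpy as np

def zi_gaussian_loglik(residuals, pi0, sigma, tol=1e-9):
    """Log-likelihood of a zero-inflated Gaussian error model:
    a residual is exactly zero with probability pi0, otherwise
    it is drawn from N(0, sigma^2)."""
    r = np.asarray(residuals, dtype=float)
    is_zero = np.abs(r) < tol
    n0 = int(is_zero.sum())
    gauss = -0.5 * np.log(2 * np.pi * sigma**2) - r[~is_zero]**2 / (2 * sigma**2)
    return n0 * np.log(pi0) + (len(r) - n0) * np.log(1 - pi0) + gauss.sum()
```

Suppressing the zeros (the GLy>0 approach) corresponds to dropping the first two terms, which discards the information carried by the frequency of zeros in the series.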
Observations of two-phase flow patterns in a horizontal circular channel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, M.E.; Weinandy, J.J.; Christensen, R.N.
1999-01-01
Horizontal two-phase flow patterns were observed in a transparent circular channel (1.90 cm I.D.) using adiabatic mixtures of air and water. Visual identification of the flow regimes was supplemented with photographic data, and the results were plotted on the flow regime map proposed by Breber et al. for condensation applications. The results indicate general consistency between the observations and the predictions of the map and, by providing data for fluids and conditions different from those used to develop the map, support its general applicability.
A Hilbert Space Representation of Generalized Observables and Measurement Processes in the ESR Model
NASA Astrophysics Data System (ADS)
Sozzo, Sandro; Garola, Claudio
2010-12-01
The extended semantic realism (ESR) model recently worked out by one of the authors embodies the mathematical formalism of standard (Hilbert space) quantum mechanics in a noncontextual framework, reinterpreting quantum probabilities as conditional instead of absolute. We provide here a Hilbert space representation of the generalized observables introduced by the ESR model that satisfy a simple physical condition, propose a generalization of the projection postulate, and suggest a possible mathematical description of the measurement process in terms of evolution of the compound system made up of the measured system and the measuring apparatus.
Ouari, Kamel; Rekioua, Toufik; Ouhrouche, Mohand
2014-01-01
To make wind power generation truly cost-effective and reliable, advanced control techniques must be used. In this paper, we develop a new control strategy for DFIG-based wind turbines using a nonlinear generalized predictive control (NGPC) approach. The proposed control law is based on two loops: an NGPC-based torque-current control loop generating the rotor reference voltage, and an NGPC-based speed control loop that provides the torque reference. To enhance the robustness of the controller, a disturbance observer is designed to estimate the aerodynamic torque, which is treated as an unknown perturbation. Finally, a real-time simulation is carried out to illustrate the performance of the proposed controller.
1973-01-01
This chart describes the Skylab student experiment X-Ray Stellar Classes, proposed by Joe Reihs of Baton Rouge, Louisiana. This experiment utilized Skylab's X-Ray Spectrographic Telescope to observe and determine the general characteristics and location of x-ray sources. In March 1972, NASA and the National Science Teachers Association selected 25 experiment proposals for flight on Skylab. Science advisors from the Marshall Space Flight Center assisted the students in developing the proposals for flight on Skylab.
Hubble Space Telescope cycle 5 call for proposals
NASA Technical Reports Server (NTRS)
Bond, Howard E. (Editor)
1994-01-01
This document invites and supports participation by the international astronomical community in the HST General Observer and Archival Research programs. These documents contain the basic procedural and technical information required for HST proposal preparation and submission, including applicable deadlines. The telescope and its instruments were built under the auspices of NASA and the European Space Agency.
Quantum-Like Bayesian Networks for Modeling Decision Making
Moreira, Catarina; Wichert, Andreas
2016-01-01
In this work, we explore an alternative quantum structure for performing quantum probabilistic inferences to accommodate the paradoxical findings of the Sure Thing Principle. We propose a Quantum-Like Bayesian Network, which replaces classical probabilities with quantum probability amplitudes. However, since this approach suffers from an exponential growth in the number of quantum parameters, we also propose a similarity heuristic that automatically fits the quantum parameters through vector similarities. This makes the proposed model general and predictive, in contrast to the current state-of-the-art models, which cannot be generalized to more complex decision scenarios and only provide an explanatory account of the observed paradoxes. In the end, the proposed model is a nonparametric method for estimating inference effects from a statistical point of view. It is a statistical model that is simpler than the quantum dynamic and quantum-like models previously proposed in the literature. We tested the proposed network on several empirical data sets from the literature, mainly from the Prisoner's Dilemma game and the Two Stage Gambling game. The results show that the proposed Quantum-Like Bayesian Network is a general method that can accommodate violations of the laws of classical probability theory and make accurate predictions regarding human decision-making in these scenarios. PMID:26858669
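The core quantum-like ingredient can be stated in a few lines: when probabilities come from amplitudes, the law of total probability acquires an interference term. A minimal sketch, where the phase parameter `theta` plays the role of the free quantum parameter that the paper's similarity heuristic fits automatically:

```python
import math

def quantum_total_probability(p1, p2, theta):
    """Quantum-like total probability over two mutually exclusive paths:
    the classical sum p1 + p2 plus the interference term
    2*sqrt(p1*p2)*cos(theta). theta = pi/2 recovers classical theory;
    other values reproduce Sure-Thing-Principle violations."""
    return p1 + p2 + 2 * math.sqrt(p1 * p2) * math.cos(theta)
```

For example, with `p1 = 0.3` and `p2 = 0.2`, `theta = pi/2` gives the classical 0.5, while `theta > pi/2` suppresses the total probability, mimicking the underweighting observed in two-stage gambling experiments.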
Generalization of Entropy Based Divergence Measures for Symbolic Sequence Analysis
Ré, Miguel A.; Azad, Rajeev K.
2014-01-01
Entropy based measures have been frequently used in symbolic sequence analysis. A symmetrized and smoothed form of Kullback-Leibler divergence or relative entropy, the Jensen-Shannon divergence (JSD), is of particular interest because of its sharing properties with families of other divergence measures and its interpretability in different domains including statistical physics, information theory and mathematical statistics. The uniqueness and versatility of this measure arise because of a number of attributes including generalization to any number of probability distributions and association of weights to the distributions. Furthermore, its entropic formulation allows its generalization in different statistical frameworks, such as, non-extensive Tsallis statistics and higher order Markovian statistics. We revisit these generalizations and propose a new generalization of JSD in the integrated Tsallis and Markovian statistical framework. We show that this generalization can be interpreted in terms of mutual information. We also investigate the performance of different JSD generalizations in deconstructing chimeric DNA sequences assembled from bacterial genomes including that of E. coli, S. enterica typhi, Y. pestis and H. influenzae. Our results show that the JSD generalizations bring in more pronounced improvements when the sequences being compared are from phylogenetically proximal organisms, which are often difficult to distinguish because of their compositional similarity. While small but noticeable improvements were observed with the Tsallis statistical JSD generalization, relatively large improvements were observed with the Markovian generalization. In contrast, the proposed Tsallis-Markovian generalization yielded more pronounced improvements relative to the Tsallis and Markovian generalizations, specifically when the sequences being compared arose from phylogenetically proximal organisms. PMID:24728338
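For reference, the basic Shannon, two-distribution Jensen-Shannon divergence that the paper generalizes can be computed directly; the weighting argument `w` reflects the association-of-weights attribute mentioned above, and the function name and defaults are ours:

```python
import numpy as np

def jensen_shannon(p, q, w=0.5, base=2):
    """Weighted JSD(p, q) = H(w*p + (1-w)*q) - w*H(p) - (1-w)*H(q),
    where H is Shannon entropy in the given base."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    def H(d):
        d = d[d > 0]                     # 0*log(0) taken as 0
        return -(d * np.log(d) / np.log(base)).sum()
    m = w * p + (1 - w) * q
    return H(m) - w * H(p) - (1 - w) * H(q)
```

In base 2 the value lies in [0, 1]: identical distributions give 0, and distributions with disjoint support (at `w = 0.5`) give 1. The Tsallis and Markovian generalizations discussed in the paper replace H with the Tsallis entropy and with higher-order conditional entropies, respectively.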
Observability of nonlinear dynamics: normalized results and a time-series approach.
Aguirre, Luis A; Bastos, Saulo B; Alves, Marcela A; Letellier, Christophe
2008-03-01
This paper investigates the observability of nonlinear dynamical systems. Two difficulties associated with previous studies are dealt with. First, a normalized degree of observability is defined, which permits the comparison of different systems, something that was not generally possible before. Second, a time-series approach based on omnidirectional nonlinear correlation functions is proposed to rank a set of time series of a system in terms of their potential use for reconstructing the original dynamics, without requiring knowledge of the system equations. The two approaches proposed in this paper, together with a former method, were applied to five benchmark systems, and an overall agreement of over 92% was found.
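For linear systems the classical counterpart of a normalized observability degree is easy to state, which helps fix ideas; the nonlinear, time-series-based measures in the paper are considerably more involved. A sketch, where the singular-value-ratio normalization is one common convention and not necessarily the paper's:

```python
import numpy as np

def observability_index(A, C):
    """Build the observability matrix O = [C; CA; ...; CA^(n-1)] and
    return sigma_min/sigma_max of O: 0 means unobservable, values
    near 1 mean all states are seen comparably well."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    s = np.linalg.svd(O, compute_uv=False)
    return s[-1] / s[0]
```

Because the result is normalized, it can be compared across different systems and across different choices of measured variable, which is the point of the paper's first contribution.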
Cataclysmic variables and related objects
NASA Technical Reports Server (NTRS)
Hack, Margherita; Ladous, Constanze; Jordan, Stuart D. (Editor); Thomas, Richard N. (Editor); Goldberg, Leo; Pecker, Jean-Claude
1993-01-01
This volume begins with an introductory chapter on general properties of cataclysmic variables. Chapters 2 through 5 of Part 1 are devoted to observations and interpretation of dwarf novae and nova-like stars. Chapters 6 through 10, Part 2, discuss the general observational properties of classical and recurrent novae, the theoretical models, and the characteristics and models for some well observed classical novae and recurrent novae. Chapters 11 through 14 of Part 3 are devoted to an overview of the observations of symbiotic stars, to a description of the various models proposed for explaining the symbiotic phenomenon, and to a discussion of a few selected objects, respectively. Chapter 15 briefly examines the many unsolved problems posed by the observations of the different classes of cataclysmic variables and symbiotic stars.
SOFIA general investigator science program
NASA Astrophysics Data System (ADS)
Young, Erick T.; Andersson, B.-G.; Becklin, Eric E.; Reach, William T.; Sankrit, Ravi; Zinnecker, Hans; Krabbe, Alfred
2014-07-01
SOFIA is a joint project between NASA and DLR, the German Aerospace Center, to provide the worldwide astronomical community with an observatory that offers unique capabilities from visible to far-infrared wavelengths. SOFIA consists of a 2.7-m telescope mounted in a highly modified Boeing 747-SP aircraft, a suite of instruments, and the scientific and operational infrastructure to support the observing program. This paper describes the current status of the observatory and details the General Investigator program. The observatory has recently completed major development activities, and it has transitioned into full operational status. Under the General Investigator program, astronomers submit proposals that are peer reviewed for observation on the facility. We describe the results from the first two cycles of the General Investigator program. We also describe some of the new observational capabilities that will be available for Cycle 3, which will begin in 2015.
VLBI observations of Infrared-Faint Radio Sources
NASA Astrophysics Data System (ADS)
Middelberg, Enno; Phillips, Chris; Norris, Ray; Tingay, Steven
2006-10-01
We propose to observe a small sample of radio sources from the ATLAS project (ATLAS = Australia Telescope Large Area Survey) with the LBA, to determine their compactness and map their structures. The sample consists of three radio sources with no counterpart in the co-located SWIRE survey (3.6 μm to 160 μm), carried out with the Spitzer Space Telescope. This rare class of sources, dubbed Infrared-Faint Radio Sources, or IFRS, is inconsistent with current galaxy evolution models. VLBI observations are an essential way to obtain further clues on what these objects are and why they are hidden from infrared observations: we will map their structure to test whether they resemble core-jet or double-lobed morphologies, and we will measure the flux densities on long baselines, to determine their compactness. Previous snapshot-style LBA observations of two other IFRS yielded no detections, hence we propose to use disk-based recording with 512 Mbps where possible, for highest sensitivity. With the observations proposed here, we will increase the number of VLBI-observed IFRS from two to five, soon allowing us to draw general conclusions about this intriguing new class of objects.
NASA Astrophysics Data System (ADS)
Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L.; Soares, Edward J.; Lemahieu, Ignace; Glick, Stephen J.
2006-06-01
In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. With this approach, the basis functions are not the detector sensitivity functions, as in the natural pixel case, but uniform parallel strips; the backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the system matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from solution to a square-pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first block row needs to be stored. Data were generated using a fast Monte Carlo simulator based on ray tracing. The proposed method was compared to a listmode MLEM algorithm, which used ray tracing for forward and backprojection. Comparison of the algorithms on different phantoms showed that improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, for the same resolution, a lower noise level is present in this reconstruction. A numerical observer study showed that the proposed method exhibited increased performance compared to a standard listmode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed.
It was observed that major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing. The correct modelling was the major cause of improved contrast for the same background noise. Less important factors were the choice of the algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).
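The MLEM baseline referred to above is the standard multiplicative EM update; here is a generic dense-matrix sketch (the paper's natural-pixel system matrix and listmode handling are more elaborate):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """MLEM iteration: lambda <- lambda * A^T(y / (A lambda)) / (A^T 1).
    A is the (nonnegative) system matrix, y the measured counts."""
    lam = np.ones(A.shape[1])          # flat initial image
    sens = A.sum(axis=0)               # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ lam                 # forward projection
        lam = lam * (A.T @ (y / proj)) / sens
    return lam
```

The abstract's main point fits this picture: the quality of the reconstruction hinges far more on how accurately `A` models the detector response than on the choice between MLEM and ART or between pixel types.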
A frequency-domain approach to improve ANNs generalization quality via proper initialization.
Chaari, Majdi; Fekih, Afef; Seibi, Abdennour C; Hmida, Jalel Ben
2018-08-01
The ability to train a network without memorizing the input/output data, thereby allowing a good predictive performance when applied to unseen data, is paramount in ANN applications. In this paper, we propose a frequency-domain approach to evaluate the network initialization in terms of quality of training, i.e., generalization capabilities. As an alternative to the conventional time-domain methods, the proposed approach eliminates the approximate nature of network validation using an excess of unseen data. The benefits of the proposed approach are demonstrated using two numerical examples, where two trained networks performed similarly on the training and the validation data sets, yet they revealed a significant difference in prediction accuracy when tested using a different data set. This observation is of utmost importance in modeling applications requiring a high degree of accuracy. The efficiency of the proposed approach is further demonstrated on a real-world problem, where unlike other initialization methods, a more conclusive assessment of generalization is achieved. On the practical front, subtle methodological and implementational facets are addressed to ensure reproducibility and pinpoint the limitations of the proposed approach.
Modeling Answer Change Behavior: An Application of a Generalized Item Response Tree Model
ERIC Educational Resources Information Center
Jeon, Minjeong; De Boeck, Paul; van der Linden, Wim
2017-01-01
We present a novel application of a generalized item response tree model to investigate test takers' answer change behavior. The model allows us to simultaneously model the observed patterns of the initial and final responses after an answer change as a function of a set of latent traits and item parameters. The proposed application is illustrated…
NASA Astrophysics Data System (ADS)
Costantini, Mario; Malvarosa, Fabio; Minati, Federico
2010-03-01
Phase unwrapping and integration of finite differences are key problems in several technical fields. In SAR interferometry and in differential and persistent scatterer interferometry, digital elevation models and displacement measurements can be obtained after unambiguously determining the phase values and reconstructing the mean velocities and elevations of the observed targets, which can be performed by integrating differential estimates of these quantities (finite differences between neighboring points). In this paper we propose a general formulation for robust and efficient integration of finite differences and phase unwrapping, which includes standard techniques as sub-cases. The proposed approach allows obtaining more reliable and accurate solutions by exploiting redundant differential estimates (not only between nearest neighboring points) and multi-dimensional information (e.g. multi-temporal, multi-frequency, multi-baseline observations), or external data (e.g. GPS measurements). The proposed approach requires the solution of linear or quadratic programming problems, for which computationally efficient algorithms exist. Validation tests on real SAR data confirm the validity of the method, which has been integrated in our production chain and successfully used in large-scale production.
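The quadratic-programming sub-case of this integration step reduces to linear least squares. The sketch below is a generic illustration with hypothetical inputs, not the paper's implementation (weights, L1/linear-programming costs and phase ambiguities are omitted): redundant difference estimates d[k] ≈ x[j] − x[i] over a graph of point pairs are integrated with the gauge fixed at the first node.

```python
import numpy as np

def integrate_differences(n, edges, d):
    """Least-squares integration of redundant difference estimates
    d[k] ~ x[j] - x[i] for (i, j) = edges[k]; gauge fixed by x[0] = 0."""
    m = len(edges)
    A = np.zeros((m, n))
    for k, (i, j) in enumerate(edges):
        A[k, i], A[k, j] = -1.0, 1.0      # row k encodes x_j - x_i
    # drop column 0 to impose x[0] = 0, then solve in the least-squares sense
    x_rest, *_ = np.linalg.lstsq(A[:, 1:], np.asarray(d, float), rcond=None)
    return np.concatenate([[0.0], x_rest])
```

The redundant edge (0, 2) in the usage below is exactly the kind of non-nearest-neighbor estimate the abstract says improves robustness.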
Measures of dependence for multivariate Lévy distributions
NASA Astrophysics Data System (ADS)
Boland, J.; Hurd, T. R.; Pivato, M.; Seco, L.
2001-02-01
Recent statistical analysis of a number of financial databases is summarized. Increasing agreement is found that logarithmic equity returns show a certain type of asymptotic behavior of the largest events, namely that the probability density functions have power law tails with an exponent α≈3.0. This behavior does not vary much over different stock exchanges or over time, despite large variations in trading environments. The present paper proposes a class of multivariate distributions which generalizes the observed qualities of univariate time series. A new consequence of the proposed class is the "spectral measure" which completely characterizes the multivariate dependences of the extreme tails of the distribution. This measure on the unit sphere in M-dimensions, in principle completely general, can be determined empirically by looking at extreme events. If it can be observed and determined, it will prove to be of importance for scenario generation in portfolio risk management.
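The tail exponent α ≈ 3.0 cited above is typically estimated from the largest order statistics of a return series; the Hill estimator is the standard tool. The sketch below is generic, not the paper's procedure:

```python
import numpy as np

def hill_tail_exponent(returns, k):
    """Hill estimator of the power-law tail exponent alpha, computed
    from the k largest order statistics of |returns|."""
    x = np.sort(np.abs(np.asarray(returns, float)))[::-1]
    # mean log-excess over the k-th largest value, inverted to give alpha
    logs = np.log(x[:k]) - np.log(x[k])
    return 1.0 / np.mean(logs)
```

On exact Pareto data with α = 3 the estimator recovers the exponent closely; on real returns the choice of k trades bias against variance.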
Bremsstrahlung function, leading Lüscher correction at weak coupling and localization
NASA Astrophysics Data System (ADS)
Bonini, Marisa; Griguolo, Luca; Preti, Michelangelo; Seminara, Domenico
2016-02-01
We discuss the near-BPS expansion of the generalized cusp anomalous dimension with L units of R-charge. Integrability provides an exact solution, obtained by solving a general TBA equation in the appropriate limit: we propose here an alternative method based on supersymmetric localization. The basic idea is to relate the computation to the vacuum expectation value of certain 1/8 BPS Wilson loops with local operator insertions along the contour. These observables localize on a two-dimensional gauge theory on S^2, opening the possibility of exact calculations. As a test of our proposal, we reproduce the leading Lüscher correction at weak coupling to the generalized cusp anomalous dimension. This result is also checked against a genuine Feynman diagram approach in N = 4 Super Yang-Mills theory.
Hubble space telescope: The GO and GTO observing programs, version 3.0
NASA Technical Reports Server (NTRS)
Downes, Ron
1992-01-01
A portion of the observing time with the Hubble Space Telescope (HST) was awarded by NASA to scientists involved in the development of the HST and its instruments. These scientists are the Guaranteed Time Observers (GTO's). Observing time was also awarded to General Observers (GO's) on the basis of the proposal reviews in 1989 and 1991. The majority of the 1989 programs have been completed during 'Cycle 1', while the 1991 programs will be completed during 'Cycle 2', nominally a 12-month period beginning July 1992. This document presents abstracts of these GO and GTO programs, and detailed listings of the specific targets and exposures contained in them. These programs and exposures are protected by NASA policy, as detailed in the HST Call for Proposals (CP), and are not to be duplicated by new programs.
Characterizing Exoplanet Atmospheres with the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Greene, Tom
2017-01-01
The James Webb Space Telescope (JWST) will have numerous modes for acquiring photometry and spectra of stars, planets, galaxies, and other astronomical objects over wavelengths of 0.6 - 28 microns. Several of these modes are well-suited for observing atomic and molecular features in the atmospheres of transiting or spatially resolved exoplanets. I will present basic information on JWST capabilities, highlight modes that are well-suited for observing exoplanets, and give examples of what may be learned from JWST observations. This will include simulated spectra and expected retrieved chemical abundance, composition, equilibrium, and thermal information and uncertainties. JWST Cycle 1 general observer proposals are expected to be due in March 2018 with launch in October 2018, and the greater scientific community is encouraged to propose investigations to study exoplanet atmospheres and other topics.
Guidelines for reporting evaluations based on observational methodology.
Portell, Mariona; Anguera, M Teresa; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana
2015-01-01
Observational methodology is one of the most suitable research designs for evaluating fidelity of implementation, especially in complex interventions. However, the conduct and reporting of observational studies is hampered by the absence of specific guidelines, such as those that exist for other evaluation designs. This lack of specific guidance poses a threat to the quality and transparency of these studies and also constitutes a considerable publication hurdle. The aim of this study thus was to draw up a set of proposed guidelines for reporting evaluations based on observational methodology. The guidelines were developed by triangulating three sources of information: observational studies performed in different fields by experts in observational methodology, reporting guidelines for general studies and studies with similar designs to observational studies, and proposals from experts in observational methodology at scientific meetings. We produced a list of guidelines grouped into three domains: intervention and expected outcomes, methods, and results. The result is a useful, carefully crafted set of simple guidelines for conducting and reporting observational studies in the field of program evaluation.
Polynomial fuzzy observer designs: a sum-of-squares approach.
Tanaka, Kazuo; Ohtake, Hiroshi; Seo, Toshiaki; Tanaka, Motoyasu; Wang, Hua O
2012-10-01
This paper presents a sum-of-squares (SOS) approach to polynomial fuzzy observer designs for three classes of polynomial fuzzy systems. The proposed SOS-based framework provides a number of innovations and improvements over the existing linear matrix inequality (LMI)-based approaches to Takagi-Sugeno (T-S) fuzzy controller and observer designs. First, we briefly summarize previous results with respect to a polynomial fuzzy system that is a more general representation of the well-known T-S fuzzy system. Next, we propose polynomial fuzzy observers to estimate states in three classes of polynomial fuzzy systems and derive SOS conditions to design polynomial fuzzy controllers and observers. A remarkable feature of the SOS design conditions for the first two classes (Classes I and II) is that they realize the so-called separation principle: the polynomial fuzzy controller and observer for each class can be designed separately while still guaranteeing the stability of the overall control system and convergence of the state-estimation error (via the observer) to zero. Although the separation principle does not hold for the last class (Class III), we propose an algorithm to design a polynomial fuzzy controller and observer that guarantee the stability of the overall control system and drive the state-estimation error (via the observer) to zero. All the design conditions in the proposed approach can be represented in terms of SOS and are symbolically and numerically solved via the recently developed SOSTOOLS and a semidefinite-program solver, respectively. To illustrate the validity and applicability of the proposed approach, three design examples are provided. The examples demonstrate the advantages of the SOS-based approach over the existing LMI approaches to T-S fuzzy observer designs.
A general class of multinomial mixture models for anuran calling survey data
Royle, J. Andrew; Link, W.A.
2005-01-01
We propose a general framework for modeling anuran abundance using data collected from commonly used calling surveys. The data generated from calling surveys are indices of calling intensity (vocalization of males) that do not have a precise link to actual population size and are sensitive to factors that influence anuran behavior. We formulate a model for calling-index data in terms of the maximum potential calling index that could be observed at a site (the 'latent abundance class'), given its underlying breeding population, and we focus attention on estimating the distribution of this latent abundance class. A critical consideration in estimating the latent structure is imperfect detection, which causes the observed abundance index to be less than or equal to the latent abundance class. We specify a multinomial sampling model for the observed abundance index that is conditional on the latent abundance class. Estimation of the latent abundance class distribution is based on the marginal likelihood of the index data, having integrated over the latent class distribution. We apply the proposed modeling framework to data collected as part of the North American Amphibian Monitoring Program (NAAMP).
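The marginal likelihood described above can be sketched numerically. In this illustrative fragment (function name and detection matrix are made up, not the authors' NAAMP model), the latent abundance class N has prior pi, and the observed calling index y follows a detection matrix that assigns zero probability to indices above the latent class:

```python
import numpy as np

def marginal_loglik(counts, pi, det):
    """Marginal log-likelihood of observed calling-index counts.
    pi[N]    : prior over latent abundance class N = 0..K
    det[N,y] : P(observe index y | latent class N); det[N, y] = 0 for y > N,
               so the observed index never exceeds the latent class
    counts[y]: number of survey visits on which index y was recorded.
    Assumes every observable index has positive marginal probability."""
    p_obs = pi @ det                  # P(observe index y), latent class summed out
    return float(np.sum(counts * np.log(p_obs)))
```

Estimation in the paper maximizes exactly this kind of integrated (marginal) likelihood over the parameters of pi and the detection process.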
Is the firewall consistent? Gedanken experiments on black hole complementarity and firewall proposal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, Dong-il; Lee, Bum-Hoon; Yeom, Dong-han, E-mail: dongil.j.hwang@gmail.com, E-mail: bhl@sogang.ac.kr, E-mail: innocent.yeom@gmail.com
2013-01-01
In this paper, we discuss the black hole complementarity and the firewall proposal at length. Black hole complementarity is inevitable if we assume the following five things: unitarity, the entropy-area formula, the existence of an information observer, semi-classical quantum field theory for an asymptotic observer, and general relativity for an in-falling observer. However, large N rescaling and the AMPS argument show that black hole complementarity is inconsistent. To salvage the basic philosophy of black hole complementarity, AMPS introduced a firewall around the horizon. According to large N rescaling, the firewall should be located close to the apparent horizon. We investigate the consistency of the firewall with two critical conditions: the firewall should be near the time-like apparent horizon and it should not affect the future infinity. Concerning this, we have introduced a gravitational collapse with a false vacuum lump which can generate a spacetime structure with disconnected apparent horizons. This reveals a situation in which there is a firewall outside of the event horizon, while the apparent horizon is absent. Therefore, the firewall, if it exists, not only modifies general relativity for an in-falling observer, but also modifies the semi-classical quantum field theory for an asymptotic observer.
Is the firewall consistent? Gedanken experiments on black hole complementarity and firewall proposal
NASA Astrophysics Data System (ADS)
Hwang, Dong-il; Lee, Bum-Hoon; Yeom, Dong-han
2013-01-01
In this paper, we discuss the black hole complementarity and the firewall proposal at length. Black hole complementarity is inevitable if we assume the following five things: unitarity, the entropy-area formula, the existence of an information observer, semi-classical quantum field theory for an asymptotic observer, and general relativity for an in-falling observer. However, large N rescaling and the AMPS argument show that black hole complementarity is inconsistent. To salvage the basic philosophy of black hole complementarity, AMPS introduced a firewall around the horizon. According to large N rescaling, the firewall should be located close to the apparent horizon. We investigate the consistency of the firewall with two critical conditions: the firewall should be near the time-like apparent horizon and it should not affect the future infinity. Concerning this, we have introduced a gravitational collapse with a false vacuum lump which can generate a spacetime structure with disconnected apparent horizons. This reveals a situation in which there is a firewall outside of the event horizon, while the apparent horizon is absent. Therefore, the firewall, if it exists, not only modifies general relativity for an in-falling observer, but also modifies the semi-classical quantum field theory for an asymptotic observer.
NASA Astrophysics Data System (ADS)
Yin, Shui-qing; Wang, Zhonglei; Zhu, Zhengyuan; Zou, Xu-kai; Wang, Wen-ting
2018-07-01
Extreme precipitation can cause flooding and may result in great economic losses and deaths. The return level is a commonly used measure of extreme precipitation events and is required for hydrological engineering designs, including those of sewerage systems, dams, reservoirs and bridges. In this paper, we propose a two-step method to estimate the return level and its uncertainty for a study region. In the first step, we use the generalized extreme value distribution, the L-moment method and the stationary bootstrap to estimate the return level and its uncertainty at each site with observations. In the second step, a spatial model incorporating the heterogeneous measurement errors and covariates is trained to estimate return levels at sites with no observations and to improve the estimates at sites with limited information. The proposed method is applied to the daily rainfall data from 273 weather stations in the Haihe river basin of North China. We compare the proposed method with two alternatives: the first is based on the ordinary Kriging method without measurement error, and the second smooths the estimated location and scale parameters of the generalized extreme value distribution by the universal Kriging method. Results show that the proposed method outperforms its counterparts. We also propose a novel approach to assess the two-step method by comparing it with the at-site estimation method using observation records of progressively reduced length. Estimates of the 2-, 5-, 10-, 20-, 50- and 100-year return level maps and the corresponding uncertainties are provided for the Haihe river basin, and a comparison with those released by the Hydrology Bureau of the Ministry of Water Resources of China is made.
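The first step (a GEV fit by L-moments followed by a return-level quantile) is standard enough to sketch. The following uses Hosking's approximation for the shape parameter and is illustrative only, not the paper's code; the stationary bootstrap and the spatial second step are omitted.

```python
import numpy as np
from math import gamma, log

def sample_lmoments(x):
    """First three sample L-moments (unbiased estimators)."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0

def gev_return_level(annual_maxima, T):
    """T-year return level from a GEV fitted by L-moments, using
    Hosking's rational approximation for the shape parameter k."""
    l1, l2, l3 = sample_lmoments(annual_maxima)
    t3 = l3 / l2                               # L-skewness
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c * c            # shape (Hosking convention)
    alpha = l2 * k / ((1 - 2.0 ** (-k)) * gamma(1 + k))   # scale
    xi = l1 - alpha * (1 - gamma(1 + k)) / k              # location
    F = 1.0 - 1.0 / T                          # non-exceedance probability
    return xi + alpha * (1.0 - (-log(F)) ** k) / k
```

For standard Gumbel maxima (the k → 0 limit of the GEV) the true 100-year level is −ln(−ln 0.99) ≈ 4.60, which the fit recovers on a long simulated record.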
A new polytopic approach for the unknown input functional observer design
NASA Astrophysics Data System (ADS)
Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed
2018-03-01
In this paper, a constructive procedure to design Functional Unknown Input Observers for nonlinear continuous-time systems is proposed under the Polytopic Takagi-Sugeno framework. An equivalent representation for the nonlinear model is achieved using the sector nonlinearity transformation. Applying Lyapunov theory and the L2 attenuation, linear matrix inequality conditions are deduced, which are solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input, as in the linear case, is used. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables, full state estimation with full- and reduced-order cases) are considered and it is shown that the proposed conditions correspond to those presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a Quadrotor Aerial Robots Landing and a Waste Water Treatment Plant. Both systems are highly nonlinear and represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.
NASA Astrophysics Data System (ADS)
Jiang, Shu-Han; Xu, Zhen-Peng; Su, Hong-Yi; Pati, Arun Kumar; Chen, Jing-Ling
2018-01-01
Here, we present the most general framework for n-particle Hardy's paradoxes, which includes Hardy's original one and Cereceda's extension as special cases. Remarkably, for any n ≥ 3, we demonstrate that there always exist generalized paradoxes (with success probability as high as 1/2^(n-1)) that are stronger than the previous ones in showing the conflict of quantum mechanics with local realism. An experimental proposal to observe the stronger paradox is also presented for the case of three qubits. Furthermore, from these paradoxes we can construct the most general Hardy's inequalities, which enable us to detect Bell's nonlocality for more quantum states.
NASA Astrophysics Data System (ADS)
Kalayeh, Mahdi M.; Marin, Thibault; Pretorius, P. Hendrik; Wernick, Miles N.; Yang, Yongyi; Brankov, Jovan G.
2011-03-01
In this paper, we present a numerical observer for image quality assessment, aiming to predict human observer accuracy in a cardiac perfusion defect detection task for single-photon emission computed tomography (SPECT). In medical imaging, image quality should be assessed by evaluating the human observer accuracy for a specific diagnostic task. This approach is known as task-based assessment. Such evaluations are important for optimizing and testing imaging devices and algorithms. Unfortunately, human observer studies with expert readers are costly and time-consuming. To address this problem, numerical observers have been developed as a surrogate for human readers to predict human diagnostic performance. The channelized Hotelling observer (CHO) with internal noise model has been found to predict human performance well in some situations, but does not always generalize well to unseen data. We have argued in the past that finding a model to predict human observers could be viewed as a machine learning problem. Following this approach, in this paper we propose a channelized relevance vector machine (CRVM) to predict human diagnostic scores in a detection task. We have previously used channelized support vector machines (CSVM) to predict human scores and have shown that this approach offers better and more robust predictions than the classical CHO method. The comparison of the proposed CRVM with our previously introduced CSVM method suggests that CRVM can achieve similar generalization accuracy, while dramatically reducing model complexity and computation time.
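The baseline CHO computation that the proposed CRVM is compared against can be sketched as follows. This is a generic illustration with random channels and synthetic images, not the authors' implementation or their cardiac-SPECT channel set:

```python
import numpy as np

def cho_test_statistics(U, g_signal, g_noise):
    """Channelized Hotelling observer: project images onto a small set of
    channels, build the Hotelling template from the class means and the
    pooled channel covariance, and return the scalar test statistic for
    every image.  U: (n_channels, n_pixels); g_*: (n_images, n_pixels)."""
    v_s, v_n = g_signal @ U.T, g_noise @ U.T       # channel outputs
    dv = v_s.mean(0) - v_n.mean(0)                 # mean signal in channel space
    S = 0.5 * (np.cov(v_s, rowvar=False) + np.cov(v_n, rowvar=False))
    w = np.linalg.solve(S, dv)                     # Hotelling template
    return v_s @ w, v_n @ w
```

The resulting test statistics can be fed to an ROC analysis; the papers discussed here replace the fixed Hotelling template with a learned (SVM/RVM) decision function.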
Deformed exponentials and portfolio selection
NASA Astrophysics Data System (ADS)
Rodrigues, Ana Flávia P.; Guerreiro, Igor M.; Cavalcante, Charles Casimiro
In this paper, we present a method for portfolio selection based on deformed exponentials, generalizing methods that rely on Gaussianity of the returns, such as the Markowitz model. The proposed method generalizes the idea of optimizing mean-variance and mean-divergence models and allows more accurate modeling in situations where heavy-tailed distributions are needed to describe the returns at a given time instant, such as those observed in economic crises. Numerical results show that the proposed method outperforms the Markowitz portfolio in cumulated returns, with a good convergence rate of the asset weights, which are found by means of a natural gradient algorithm.
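The deformed exponential behind such generalizations is typically the Tsallis q-exponential, which recovers the ordinary exponential as q → 1 and produces power-law (heavy) tails otherwise. A minimal sketch (illustrative; the paper's specific deformation and the portfolio optimization itself are not reproduced here):

```python
import numpy as np

def exp_q(x, q):
    """Tsallis q-deformed exponential:
    exp_q(x) = [1 + (1-q) x]_+ ** (1/(1-q)), with exp_1(x) = exp(x)."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)   # cutoff keeps the base >= 0
    return base ** (1.0 / (1.0 - q))
```

For q = 2 this gives exp_2(x) = 1/(1 − x), whose slow decay for large negative x is what lets q-deformed densities capture heavy-tailed returns.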
Relative size perception at a distance is best at eye level
NASA Technical Reports Server (NTRS)
Bertamini, M.; Yang, T. L.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)
1998-01-01
Relative size judgments were collected for two objects at 30.5 m and 23.8 m from the observer in order to assess how performance depends on the relationship between the size of the objects and the eye level of the observer. In three experiments in an indoor hallway and in one experiment outdoors, accuracy was higher for objects in the neighborhood of eye level. We consider these results in the light of two hypotheses. One proposes that observers localize the horizon as a reference for judging relative size; the other proposes that observers perceive the general neighborhood of the horizon and then employ a height-in-visual-field heuristic. The finding that relative size judgments are best around the horizon implies that information that is independent of distance perception is used in perceiving size.
Observed Altruism in Dental Students: An Experiment Using the Ultimatum Game.
Crutchfield, Parker A S; Jarvis, Justin S; Olson, Terry L; Wilson, Matthew S
2017-11-01
The conventional wisdom in dental and medical education is that dental and medical students experience "ethical erosion" over the duration of dental and medical school. There is some evidence for this claim, but in the case of dental education the evidence consists entirely of survey research, which does not measure behavior. The aim of this study was to measure the altruistic behavior of dental students in order to fill the significant gap in knowledge of how students are disposed to behave, rather than how they are disposed to think. To test the altruistic behavior of dental students, the authors conducted a field experiment using the Ultimatum Game, a two-player game used in economics to observe social behavior. In the game, the "proposer" is given a pot of resources, typically money, to split with the "responder." The proposer proposes a split of the pot to the responder. If the responder accepts the proposed split, both participants keep the amounts offered. If the proposal is rejected, then neither participant receives anything. In this study, the students played the proposer, and the responder was a fictional individual, although the students believed they were playing the computerized game with a real person. In fall 2015, dental students from each of the four years at one university played the game. All 160 students were invited to participate, and 136 did so, for a response rate of 85%. The results showed that the students exhibited greater levels of altruism than the general population typically does. The students' altruism was at its highest in year four and was associated with the socioeconomic status of the responder. This result raises the possibility that if a decreasing ability to behave altruistically is observed during dental school, it may not be due to a general disposition of students, but rather some factor specific to the educational environment.
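The payoff rule of the game is simple to state in code. A minimal sketch (illustrative only, not the software used in the study):

```python
def play_ultimatum(pot, proposer_offer_frac, responder_threshold):
    """One round of the Ultimatum Game.  The proposer offers a fraction of
    the pot; the responder accepts iff the offered amount meets their
    minimum acceptable fraction of the pot.  Returns the pair
    (proposer_payoff, responder_payoff); a rejection leaves both with 0."""
    offer = pot * proposer_offer_frac
    if offer >= responder_threshold * pot:
        return pot - offer, offer
    return 0.0, 0.0
```

A purely self-interested proposer would offer the smallest acceptable amount; offers well above that threshold, as observed in the study, are the behavioral signature of altruism.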
Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S
2016-06-01
We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data of multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates the nested structure in time series data. Explicitly taking the nested structure into account, the proposed method allows investigating subject-wise variability of the loadings and path coefficients by looking at the variance estimates of the corresponding random effects, as well as fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to the multi-subject functional neuroimaging data for brain connectivity analysis, where time series data-level measurements are nested within subjects.
The ESO Observing Programmes Committee
NASA Astrophysics Data System (ADS)
Westerlund, B. E.
1982-06-01
Since 1978 the ESO Observing Programmes Committee (OPC) has "the function to inspect and rank the proposals made for observing programmes at La Silla, and thereby to advise the Director General on the distribution of observing time". The members (one from each member country) and their alternates are nominated by the respective national committees for five-year terms (not immediately renewable). The terms are staggered so that each year one or two persons are replaced. The Chairman is appointed annually by the Council. He is invited to attend Council meetings and to report to its members.
Generalized superradiant assembly for nanophotonic thermal emitters
NASA Astrophysics Data System (ADS)
Mallawaarachchi, Sudaraka; Gunapala, Sarath D.; Stockman, Mark I.; Premaratne, Malin
2018-03-01
Superradiance explains the collective enhancement of emission observed when nanophotonic emitters are arranged within subwavelength proximity and perfect symmetry. Thermal superradiant emitter assemblies with variable photon far-field coupling rates are known to be capable of outperforming their conventional, nonsuperradiant counterparts. However, due to the inability to account for assemblies comprising emitters with various materials and dimensional configurations, existing thermal superradiant models are inadequate and incongruent. In this paper, a generalized thermal superradiant assembly for nanophotonic emitters is developed from first principles. Spectral analysis shows that not only does the proposed model outperform existing models in power delivery, but it also displays unforeseen and startling emission characteristics. These electromagnetically-induced-transparency-like (EIT-like) and superscattering-like characteristics are reported here for a superradiant assembly, and the effects escalate as the emitters become increasingly disparate. The fact that the EIT-like characteristics are in close agreement with a recent experimental observation involving the superradiant decay of qubits strongly bolsters the validity of the proposed model.
Distributed multi-sensor particle filter for bearings-only tracking
NASA Astrophysics Data System (ADS)
Zhang, Jungen; Ji, Hongbing
2012-02-01
In this article, the classical bearings-only tracking (BOT) problem for a single target is addressed, which belongs to the general class of non-linear filtering problems. Because the radial distance of the target is poorly observable, algorithms based on sequential Monte Carlo (particle filtering, PF) methods generally show instability and filter divergence. A new stable distributed multi-sensor PF method is proposed for BOT. The sensors process their measurements at their sites using a hierarchical PF approach, which transforms the BOT problem from Cartesian coordinates to logarithmic polar coordinates and separates the observable components from the unobservable components of the target. In the fusion centre, the target state can be estimated by utilising the multi-sensor optimal information fusion rule. Furthermore, the computation of a theoretical Cramér-Rao lower bound is given for the multi-sensor BOT problem. Simulation results illustrate that the proposed tracking method can provide better performance than the traditional PF method.
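A bootstrap particle filter for the bearings-only problem, of the kind the article takes as its baseline, can be sketched as follows. Everything here is illustrative: the state model, noise levels and function names are assumptions, and the article's hierarchical log-polar parameterization and fusion rule are not reproduced.

```python
import numpy as np

def bearings_only_pf(bearings, sensor_pos, n_particles=5000, dt=1.0,
                     sigma_b=0.05, sigma_a=0.05, rng=None):
    """Minimal bootstrap particle filter for a single constant-velocity
    target observed through noisy bearings from known sensor positions.
    State per particle: [x, y, vx, vy].  Returns the posterior-mean track."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # diffuse prior over position and velocity
    p = rng.normal([0.0, 0.0, 1.0, 1.0], [10.0, 10.0, 1.0, 1.0],
                   size=(n_particles, 4))
    track = []
    for z, s in zip(bearings, sensor_pos):
        # propagate: constant velocity plus small acceleration noise
        p[:, :2] += dt * p[:, 2:]
        p[:, 2:] += sigma_a * rng.normal(size=(n_particles, 2))
        # weight by the bearing likelihood, with a wrapped angle difference
        pred = np.arctan2(p[:, 1] - s[1], p[:, 0] - s[0])
        err = np.angle(np.exp(1j * (z - pred)))
        w = np.exp(-0.5 * (err / sigma_b) ** 2) + 1e-300
        w /= w.sum()
        track.append(w @ p)
        # multinomial resampling
        p = p[rng.choice(n_particles, size=n_particles, p=w)]
    return np.array(track)
```

With a single static sensor the range stays unobservable; as in the article, sensor motion (or multiple sensors) is what makes the full state recoverable.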
Equivalent Theories and Changing Hamiltonian Observables in General Relativity
NASA Astrophysics Data System (ADS)
Pitts, J. Brian
2018-03-01
Change and local spatial variation are missing in Hamiltonian general relativity according to the most common definition of observables as having 0 Poisson bracket with all first-class constraints. But other definitions of observables have been proposed. In pursuit of Hamiltonian-Lagrangian equivalence, Pons, Salisbury and Sundermeyer use the Anderson-Bergmann-Castellani gauge generator G, a tuned sum of first-class constraints. Kuchař waived the 0 Poisson bracket condition for the Hamiltonian constraint to achieve changing observables. A systematic combination of the two reforms might use the gauge generator but permit non-zero Lie derivative Poisson brackets for the external gauge symmetry of General Relativity. Fortunately one can test definitions of observables by calculation using two formulations of a theory, one without gauge freedom and one with gauge freedom. The formulations, being empirically equivalent, must have equivalent observables. For de Broglie-Proca non-gauge massive electromagnetism, all constraints are second-class, so everything is observable. Demanding equivalent observables from gauge Stueckelberg-Utiyama electromagnetism, one finds that the usual definition fails while the Pons-Salisbury-Sundermeyer definition with G succeeds. This definition does not readily yield change in GR, however. Should the external gauge freedom of General Relativity share with internal gauge symmetries the 0 Poisson bracket (invariance), or is covariance (a transformation rule) sufficient? A graviton mass breaks the gauge symmetry (general covariance), but it can be restored by parametrization with clock fields. By requiring equivalent observables, one can test whether observables should have 0 or the Lie derivative as the Poisson bracket with the gauge generator G. The latter definition is vindicated by calculation. While this conclusion has been reported previously, here the calculation is given in some detail.
Equivalent Theories and Changing Hamiltonian Observables in General Relativity
NASA Astrophysics Data System (ADS)
Pitts, J. Brian
2018-05-01
Change and local spatial variation are missing in Hamiltonian general relativity according to the most common definition of observables as having 0 Poisson bracket with all first-class constraints. But other definitions of observables have been proposed. In pursuit of Hamiltonian-Lagrangian equivalence, Pons, Salisbury and Sundermeyer use the Anderson-Bergmann-Castellani gauge generator G, a tuned sum of first-class constraints. Kuchař waived the 0 Poisson bracket condition for the Hamiltonian constraint to achieve changing observables. A systematic combination of the two reforms might use the gauge generator but permit non-zero Lie derivative Poisson brackets for the external gauge symmetry of General Relativity. Fortunately one can test definitions of observables by calculation using two formulations of a theory, one without gauge freedom and one with gauge freedom. The formulations, being empirically equivalent, must have equivalent observables. For de Broglie-Proca non-gauge massive electromagnetism, all constraints are second-class, so everything is observable. Demanding equivalent observables from gauge Stueckelberg-Utiyama electromagnetism, one finds that the usual definition fails while the Pons-Salisbury-Sundermeyer definition with G succeeds. This definition does not readily yield change in GR, however. Should the external gauge freedom of General Relativity share with internal gauge symmetries the 0 Poisson bracket (invariance), or is covariance (a transformation rule) sufficient? A graviton mass breaks the gauge symmetry (general covariance), but it can be restored by parametrization with clock fields. By requiring equivalent observables, one can test whether observables should have 0 or the Lie derivative as the Poisson bracket with the gauge generator G. The latter definition is vindicated by calculation. While this conclusion has been reported previously, here the calculation is given in some detail.
Generalized quantum interference of correlated photon pairs.
Kim, Heonoh; Lee, Sang Min; Moon, Han Seb
2015-05-07
Superposition and indistinguishability between probability amplitudes have played an essential role in observing quantum interference effects of correlated photons. The Hong-Ou-Mandel interference and interferences of the path-entangled photon number state are of special interest in the field of quantum information technologies. However, a fully generalized two-photon quantum interferometric scheme accounting for the Hong-Ou-Mandel scheme and path-entangled photon number states has not yet been proposed. Here we report the experimental demonstrations of the generalized two-photon interferometry with both the interferometric properties of the Hong-Ou-Mandel effect and the fully unfolded version of the path-entangled photon number state using photon-pair sources, which are independently generated by spontaneous parametric down-conversion. Our experimental scheme explains two-photon interference fringes revealing single- and two-photon coherence properties in a single interferometer setup. Using the proposed interferometric measurement, it is possible to directly estimate the joint spectral intensity of a photon pair source.
On the Composition of Risk Preference and Belief
ERIC Educational Resources Information Center
Wakker, Peter P.
2004-01-01
Prospect theory assumes nonadditive decision weights for preferences over risky gambles. Such decision weights generalize additive probabilities. This article proposes a decomposition of decision weights into a component reflecting risk attitude and a new component depending on belief. The decomposition is based on an observable preference…
Lu, Ji; Pan, Junhao; Zhang, Qiang; Dubé, Laurette; Ip, Edward H
2015-01-01
With intensively collected longitudinal data, recent advances in the experience-sampling method (ESM) benefit social science empirical research, but also pose important methodological challenges. As traditional statistical models are not generally well equipped to analyze a system of variables that contain feedback loops, this paper proposes the utility of an extended hidden Markov model to model the reciprocal relationship between momentary emotion and eating behavior. This paper revisited an ESM data set (Lu, Huet, & Dube, 2011) that observed 160 participants' food consumption and momentary emotions 6 times per day over 10 days. Focusing the analyses on the feedback loop between mood and meal-healthiness decisions, the proposed reciprocal Markov model (RMM) can accommodate both hidden ("general" emotional states: positive vs. negative state) and observed states (meal: healthier, same or less healthy than usual) without presuming independence between observations or smooth trajectories of mood or behavior changes. The results of the RMM analyses illustrated the reciprocal chains of meal consumption and mood, as well as the effects of contextual factors that moderate the interrelationship between eating and emotion. A simulation experiment that generated data consistent with the empirical study further demonstrated that the procedure is promising in terms of recovering the parameters.
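The feedback structure described in this abstract can be sketched as a toy simulation: a hidden two-state mood chain whose transition probabilities depend on the last observed meal choice, which is in turn emitted from the current mood. All probabilities below are illustrative placeholders, not the fitted values of the reciprocal Markov model.

```python
import random

# Hypothetical RMM-style feedback sketch: mood (0 = negative, 1 = positive)
# and meal choice (0 = less healthy, 1 = same, 2 = healthier) influence
# each other in a loop. All numbers are made up for illustration.
MOOD_TRANS = {  # P(next mood | current mood, last meal)
    (0, 0): [0.8, 0.2], (0, 1): [0.6, 0.4], (0, 2): [0.4, 0.6],
    (1, 0): [0.4, 0.6], (1, 1): [0.2, 0.8], (1, 2): [0.1, 0.9],
}
MEAL_EMIT = {0: [0.5, 0.3, 0.2], 1: [0.2, 0.3, 0.5]}  # P(meal | mood)

def simulate(n_steps, seed=0):
    """Simulate coupled mood/meal trajectories with the feedback loop."""
    rng = random.Random(seed)
    mood, meal = 1, 1
    moods, meals = [], []
    for _ in range(n_steps):
        # mood transition depends on both previous mood and previous meal
        mood = rng.choices([0, 1], weights=MOOD_TRANS[(mood, meal)])[0]
        # meal choice is emitted from the current (hidden) mood
        meal = rng.choices([0, 1, 2], weights=MEAL_EMIT[mood])[0]
        moods.append(mood)
        meals.append(meal)
    return moods, meals
```

Fitting such a model to data (rather than simulating from it) requires the forward-backward machinery of the actual RMM, which is not reproduced here.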
Eolian Dust and the Origin of Sedimentary Chert
Cecil, C. Blaine
2004-01-01
This paper proposes an alternative model for the primary source of silica contained in bedded sedimentary chert. The proposed model is derived from three principal observations: (1) eolian processes in warm-arid climates produce copious amounts of highly reactive fine-grained quartz particles (dust), (2) eolian processes in warm-arid climates export enormous quantities of quartzose dust to marine environments, and (3) bedded sedimentary cherts generally occur in marine strata that were deposited in warm-arid paleoclimates where dust was a potential source of silica. An empirical integration of these observations suggests that eolian dust was both the primary and the predominant source of silica for most bedded sedimentary cherts.
Nonlocal symmetries and Bäcklund transformations for the self-dual Yang-Mills system
NASA Astrophysics Data System (ADS)
Papachristou, C. J.; Harrison, B. Kent
1988-01-01
The observation is made that generalized evolutionary isovectors of the self-dual Yang-Mills equation, obtained by ``verticalization'' of the geometrical isovectors derived in a previous paper [J. Math. Phys. 28, 1261 (1987)], generate Bäcklund transformations for the self-dual system. In particular, new Bäcklund transformations are obtained by ``verticalizing'' the generators of point transformations on the solution manifold. A geometric ansatz for the derivation of such (generally nonlocal) symmetries is proposed.
NASA Astrophysics Data System (ADS)
Weng, B. S.; Yan, D. H.; Wang, H.; Liu, J. H.; Yang, Z. Y.; Qin, T. L.; Yin, J.
2015-08-01
Drought is first a resource issue and, as it develops, evolves into a disaster issue. Drought events usually occur in a determinate yet random manner. Drought has become one of the major factors affecting sustainable socioeconomic development. In this paper, we propose the generalized drought assessment index (GDAI), based on water resources systems, for assessing drought events. The GDAI considers water supply and water demand using a distributed hydrological model. We demonstrate the use of the proposed index in the Dongliao River basin in northeastern China. The results simulated by the GDAI are compared to observed drought disaster records in the Dongliao River basin. In addition, the temporal distribution of drought events and the spatial distribution of drought frequency from the GDAI are compared with those of traditional approaches (i.e., the standard precipitation index, the Palmer drought severity index and the rate of water deficit index). Then, generalized drought times, generalized drought duration, and generalized drought severity were calculated by the theory of runs. Applying run analysis at various drought levels (i.e., mild drought, moderate drought, severe drought, and extreme drought) during the period 1960-2010 shows that the centers of gravity of all drought levels lie in the middle reaches of the Dongliao River basin and shift over time. The proposed methodology may help water managers in water-stressed regions to quantify the impact of drought and, consequently, to make decisions for coping with drought.
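The run-based statistics mentioned above (drought times, duration, severity) follow from the theory of runs: a drought event is a maximal run of index values below a threshold, its duration is the run length, and its severity is the accumulated deficit. A minimal sketch, assuming a generic index series rather than the GDAI itself:

```python
def drought_runs(index, threshold):
    """Theory-of-runs summary of a drought index series.

    Returns a list of (duration, severity) pairs, one per drought event,
    where an event is a maximal run of values below `threshold` and
    severity is the accumulated deficit below the threshold.
    Illustrative only; the GDAI itself requires a distributed
    hydrological model of supply and demand."""
    events = []
    duration, severity = 0, 0.0
    for x in index:
        if x < threshold:
            duration += 1
            severity += threshold - x
        elif duration:
            events.append((duration, severity))
            duration, severity = 0, 0.0
    if duration:  # close a run that reaches the end of the series
        events.append((duration, severity))
    return events
```

The number of drought events is then `len(events)`, and different thresholds yield the mild/moderate/severe/extreme levels.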
NASA Technical Reports Server (NTRS)
Johnson, R. A.; Wehrly, T.
1976-01-01
Population models for dependence between two angular measurements and for dependence between an angular and a linear observation are proposed. The method of canonical correlations first leads to new population and sample measures of dependence in this latter situation. An example relating wind direction to the level of a pollutant is given. Next, applied to pairs of angular measurements, the method yields previously proposed sample measures in some special cases and a new sample measure in general.
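For the angular-linear case, the canonical-correlation construction reduces to embedding the angle as (cos θ, sin θ) and computing a squared multiple correlation with the linear variable. The sketch below implements the standard form of this measure (as in Johnson-Wehrly/Mardia); it is a generic illustration, not the paper's full treatment of angular-angular dependence.

```python
import math

def pearson(a, b):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def circular_linear_r2(theta, x):
    """Squared correlation between a linear variable x and an angle theta
    (radians), via the (cos, sin) embedding of the angle."""
    c = [math.cos(t) for t in theta]
    s = [math.sin(t) for t in theta]
    rxc, rxs, rcs = pearson(x, c), pearson(x, s), pearson(c, s)
    return (rxc**2 + rxs**2 - 2 * rxc * rxs * rcs) / (1 - rcs**2)
```

In the pollutant example of the abstract, `theta` would be wind direction and `x` the pollutant level.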
Physical concepts in the development of constitutive equations
NASA Technical Reports Server (NTRS)
Cassenti, B. N.
1985-01-01
Proposed viscoplastic material models include in their formulation observed material response but do not generally incorporate principles from thermodynamics, statistical mechanics, and quantum mechanics. Numerous hypotheses about material response have been made based on first principles, and many of these hypotheses have been tested experimentally. The proposed viscoplastic theories must be checked against these hypotheses and their experimental basis. The physics of thermodynamics, statistical mechanics and quantum mechanics, and the effects of defects, are reviewed for their application to the development of constitutive laws.
How does new evidence change our estimates of probabilities? Carnap's formula revisited
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik; Quintana, Chris
1992-01-01
The formula originally proposed by R. Carnap in his analysis of induction is reviewed and its natural generalization is presented. A situation is considered where the probability of a certain event must be determined without using standard statistical methods, due to a lack of observations.
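Carnap's original rule (his λ-continuum of inductive methods) is simple to state in code; the paper's generalization is not reproduced here, so the sketch below shows only the classical formula.

```python
def carnap(counts, i, lam):
    """Carnap's continuum of inductive methods.

    After observing `counts` over k categories (N observations total),
    the probability that the next observation falls in category i is
        (n_i + lam/k) / (N + lam).
    lam = k recovers Laplace's rule of succession; lam -> 0 approaches
    the straight frequency rule; large lam stays near the uniform prior."""
    k = len(counts)
    n = sum(counts)
    return (counts[i] + lam / k) / (n + lam)
```

With no observations at all, the formula returns the uniform prior 1/k, which is exactly the "lack of observations" regime the abstract mentions.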
Online Variational Bayesian Filtering-Based Mobile Target Tracking in Wireless Sensor Networks
Zhou, Bingpeng; Chen, Qingchun; Li, Tiffany Jing; Xiao, Pei
2014-01-01
The received signal strength (RSS)-based online tracking for a mobile node in wireless sensor networks (WSNs) is investigated in this paper. Firstly, a multi-layer dynamic Bayesian network (MDBN) is introduced to characterize the target mobility with either directional or undirected movement. In particular, it is proposed to employ the Wishart distribution to approximate the randomness of the time-varying RSS measurement precision due to the target movement. It is shown that the proposed MDBN offers a more general analysis model by incorporating the underlying statistical information of both the target movement and observations, which can be utilized to improve the online tracking capability by exploiting Bayesian statistics. Secondly, based on the MDBN model, a mean-field variational Bayesian filtering (VBF) algorithm is developed to realize the online tracking of a mobile target in the presence of nonlinear observations and time-varying RSS precision, wherein the traditional Bayesian filtering scheme cannot be directly employed. Thirdly, a joint optimization between the real-time velocity and its prior expectation is proposed to enable online velocity tracking in the proposed online tracking scheme. Finally, the associated Bayesian Cramer–Rao Lower Bound (BCRLB) analysis and numerical simulations are conducted. Our analysis unveils that, by exploiting the potential state information via the general MDBN model, the proposed VBF algorithm provides a promising solution to the online tracking of a mobile node in WSNs. In addition, it is shown that the final tracking accuracy linearly scales with its expectation when the RSS measurement precision is time-varying. PMID:25393784
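As a much simplified stand-in for the paper's variational Bayesian filter, the sketch below performs one Bayesian update of a position distribution on a 1-D grid from a single RSS reading, using a log-distance path-loss model. The sensor geometry, path-loss parameters, and noise level are all hypothetical; the actual VBF additionally tracks velocity and a time-varying RSS precision.

```python
import math

def rss_model(d, p0=-40.0, n=2.0):
    """Log-distance path-loss model: predicted RSS (dBm) at distance d,
    with p0 the RSS at 1 m and n the (assumed) path-loss exponent."""
    return p0 - 10 * n * math.log10(d)

def grid_bayes_update(prior, cells, sensor, rss, sigma=2.0):
    """One Bayesian filtering step on a 1-D position grid: weight each
    cell by the Gaussian likelihood of the observed RSS and renormalize."""
    post = []
    for p, c in zip(prior, cells):
        d = abs(c - sensor)
        ll = math.exp(-0.5 * ((rss - rss_model(d)) / sigma) ** 2)
        post.append(p * ll)
    s = sum(post)
    return [v / s for v in post]
```

Repeating the update over time, interleaved with a motion model, gives a (grid-based) Bayesian tracker; the variational approach replaces the grid with parametric factorized posteriors.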
NASA Astrophysics Data System (ADS)
Yamakoshi, Yoshiki; Yamamoto, Atsushi; Kasahara, Toshihiro; Iijima, Tomohiro; Yuminaka, Yasushi
2015-07-01
We have proposed a quantitative shear wave imaging technique based on continuous shear wave excitation. The shear wave wavefront is observed directly by color flow imaging using a general-purpose ultrasonic imaging system. In this study, the proposed method is applied in vivo, and shear wave maps, namely, the shear wave phase map, which shows the shear wave propagation inside the medium, and the shear wave velocity map, are obtained for the skeletal muscle in the shoulder. To excite the shear wave inside the skeletal muscle of the shoulder, a hybrid ultrasonic wave transducer, which combines a small vibrator with an ultrasonic wave probe, is adopted. The shear wave velocity of the supraspinatus muscle measured by the proposed method is 4.11 ± 0.06 m/s (N = 4). This value is consistent with those obtained by the acoustic radiation force impulse method.
Inhibition of return in static but not necessarily in dynamic search.
Wang, Zhiguo; Zhang, Kan; Klein, Raymond M
2010-01-01
If and when search involves the serial inspection of items by covert or overt attention, its efficiency would be enhanced by a mechanism that would discourage re-inspections of items or regions of the display that had already been examined. Klein (1988, 2000; Klein & Dukewich, 2006) proposed that inhibition of return (IOR) might be such a mechanism. The present experiments explored this proposal by combining a dynamic search task (Horowitz & Wolfe, 1998, 2003) with a probe-detection task. IOR was observed when search was most efficient (static and slower dynamic search). IOR was not observed when search performance was less efficient (fast dynamic search). These findings are consistent with the "foraging facilitator" proposal of IOR and are unpredicted by theories of search that assume parallel accumulation of information across the array (plus noise) as a general explanation for the effect of set size upon search performance.
Super resolution for astronomical observations
NASA Astrophysics Data System (ADS)
Li, Zhan; Peng, Qingyu; Bhanu, Bir; Zhang, Qingfeng; He, Haifeng
2018-05-01
In order to obtain detailed information from multiple telescope observations a general blind super-resolution (SR) reconstruction approach for astronomical images is proposed in this paper. A pixel-reliability-based SR reconstruction algorithm is described and implemented, where the developed process incorporates flat field correction, automatic star searching and centering, iterative star matching, and sub-pixel image registration. Images captured by the 1-m telescope at Yunnan Observatory are used to test the proposed technique. The results of these experiments indicate that, following SR reconstruction, faint stars are more distinct, bright stars have sharper profiles, and the backgrounds have higher details; thus these results benefit from the high-precision star centering and image registration provided by the developed method. Application of the proposed approach not only provides more opportunities for new discoveries from astronomical image sequences, but will also contribute to enhancing the capabilities of most spatial or ground-based telescopes.
Uno, Hajime; Tian, Lu; Claggett, Brian; Wei, L J
2015-12-10
With censored event time observations, the logrank test is the most popular tool for testing the equality of two underlying survival distributions. Although this test is asymptotically distribution free, it may not be powerful when the proportional hazards assumption is violated. Various other novel testing procedures have been proposed, which generally are derived by assuming a class of specific alternative hypotheses with respect to the hazard functions. The test considered by Pepe and Fleming (1989) is based on a linear combination of weighted differences of the two Kaplan-Meier curves over time and is a natural tool to assess the difference of two survival functions directly. In this article, we take a similar approach but choose weights that are proportional to the observed standardized difference of the estimated survival curves at each time point. The new proposal automatically makes weighting adjustments empirically. The new test statistic is aimed at a one-sided general alternative hypothesis and is distributed with a short right tail under the null hypothesis but with a heavy tail under the alternative. The results from extensive numerical studies demonstrate that the new procedure performs well under various general alternatives, with the caveat of a minor inflation of the type I error rate when the sample size or the number of observed events is small. The survival data from a recent cancer comparative study are utilized to illustrate the implementation of the procedure.
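The building block of such tests is the Kaplan-Meier estimate of each arm's survival curve, from which weighted differences are accumulated over time. A minimal single-sample estimator is sketched below; the adaptive weighting scheme of the proposed test is not reproduced.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates at each distinct event time.

    `events[i]` is 1 for an observed event, 0 for right censoring.
    Returns a list of (time, S(t)) pairs at the event times, where
    S(t) is the product over event times s <= t of (1 - d_s / n_s)."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    out = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = at = 0  # events and total subjects leaving at time t
        while i < len(data) and data[i][0] == t:
            at += 1
            d += data[i][1]
            i += 1
        if d:  # censorings alone do not change the curve
            s *= 1 - d / n_at_risk
            out.append((t, s))
        n_at_risk -= at
    return out
```

A Pepe-Fleming-style statistic then integrates a weighted difference of two such curves over the common time grid.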
NASA Astrophysics Data System (ADS)
Hasegawa, K.; Lim, C. S.; Ogure, K.
2003-09-01
We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.
Observable consequences of zero-point energy
NASA Astrophysics Data System (ADS)
Sen, Siddhartha; Gupta, Kumar S.
2017-12-01
Spectral line widths, the Lamb shift and the Casimir effect are generally accepted to be observable consequences of the zero-point electromagnetic (ZPEM) fields. A new class of observable consequences of ZPEM field at the mesoscopic scale were recently proposed and observed. Here, we extend this class of observable effects and predict that mesoscopic water layers should have a high value for its solid-liquid phase transition temperature, as illustrated by water inside a single-walled carbon nanotube (CNT). For this case, our analysis predicts that the phase transition temperature scales inversely with the square of the effective radius available for the water flow within the CNT.
Transparent communication strategy on GMOs: will it change public opinion?
Sinemus, Kristina; Egelhofer, Marc
2007-09-01
Innovation is central to economic growth; however, new technologies need to be widely accepted by the general public and society as a whole. Biotechnology in general, and the use of genetic engineering in food production in particular, are viewed critically by the European public and perceived as "risky", and a transatlantic divide between European and US citizens has been observed. This review investigates the reasons for those differing perceptions and proposes new strategies for communicating the benefits of biotechnology in agriculture to a broader public. When analyzing the dialogue process that has taken place between the public, scientists, governmental organizations and industry, the question arises of what was done differently in Europe, in order to propose new, more successful and efficient communication strategies for the future.
Applying Nyquist's method for stability determination to solar wind observations
NASA Astrophysics Data System (ADS)
Klein, Kristopher G.; Kasper, Justin C.; Korreck, K. E.; Stevens, Michael L.
2017-10-01
The role instabilities play in governing the evolution of solar and astrophysical plasmas is a matter of considerable scientific interest. The large number of sources of free energy accessible to such nearly collisionless plasmas makes general modeling of unstable behavior, accounting for the temperatures, densities, anisotropies, and relative drifts of a large number of populations, analytically difficult. We therefore seek a general method of stability determination that may be automated for future analysis of solar wind observations. This work describes an efficient application of the Nyquist instability method to the Vlasov dispersion relation appropriate for hot, collisionless, magnetized plasmas, including the solar wind. The algorithm recovers the familiar proton temperature anisotropy instabilities, as well as instabilities that had been previously identified using fits extracted from in situ observations in Gary et al. (2016). Future proposed applications of this method are discussed.
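The core of the Nyquist method is the argument principle: the number of unstable roots of the dispersion relation equals the winding number of the dispersion function about the origin as its argument traverses a contour enclosing the unstable region. A generic sketch for an analytic function on a circular contour is given below; the plasma application substitutes the Vlasov dielectric function and a contour bounding the unstable half-plane.

```python
import cmath

def winding_number(f, center, radius, n=4096):
    """Count zeros of an analytic function f inside a circular contour.

    By the argument principle, the winding number of f(z) about the
    origin along the contour equals the number of enclosed zeros
    (assuming f has no poles and does not vanish on the contour)."""
    total = 0.0
    prev = cmath.phase(f(center + radius))
    for k in range(1, n + 1):
        z = center + radius * cmath.exp(2j * cmath.pi * k / n)
        cur = cmath.phase(f(z))
        d = cur - prev
        # unwrap the phase jump across the branch cut at +/- pi
        if d > cmath.pi:
            d -= 2 * cmath.pi
        elif d < -cmath.pi:
            d += 2 * cmath.pi
        total += d
        prev = cur
    return round(total / (2 * cmath.pi))
```

Because only a winding number is needed, the test can be automated over large observational data sets without root-finding, which is the efficiency argument made in the abstract.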
SIMULATION OF DESCENDING MULTIPLE SUPRA-ARCADE RECONNECTION OUTFLOWS IN SOLAR FLARES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cecere, M.; Schneiter, M.; Costa, A.
After recent Atmospheric Imaging Assembly observations by Savage, McKenzie, and Reeves, we revisit the scenario proposed by us in previous papers. We have shown that sunward, generally dark plasma features that originated above posteruption flare arcades are consistent with a scenario where plasma voids (which we identify as supra-arcade reconnection outflows, SAROs) generate the bouncing and interfering of shocks and expansion waves upstream of an initial localized deposition of energy that is collimated in the magnetic field direction. In this paper, we analyze the multiple production and interaction of SAROs and the individual structures that make them relatively stable features while moving. We compare our results with observations and with the scenarios proposed by other authors.
A lunar far-side very low frequency array
NASA Technical Reports Server (NTRS)
Burns, Jack O. (Editor); Duric, Nebojsa (Editor); Johnson, Stewart (Editor); Taylor, G. Jeffrey (Editor)
1989-01-01
Papers were presented to consider very low frequency (VLF) radio astronomical observations from the moon. In part 1, the environment in which a lunar VLF radio array would function is described. Part 2 is a review of previous and proposed low-frequency observatories. The science that could be conducted with a lunar VLF array is described in part 3. The design of a lunar VLF array and site selection criteria are considered, respectively, in parts 4 and 5. Part 6 is a proposal for precursor lunar VLF observations. Finally, part 7 is a summary and statement of conclusions, with suggestions for future science and engineering studies. The workshop concluded with a general consensus on the scientific goals and preliminary design for a lunar VLF array.
A generalized target theory and its applications.
Zhao, Lei; Mi, Dong; Hu, Bei; Sun, Yeqing
2015-09-28
Different radiobiological models have been proposed to estimate the cell-killing effects, which are very important in radiotherapy and radiation risk assessment. However, most applied models have their own scopes of application. In this work, by generalizing the relationship between "hit" and "survival" in traditional target theory with Yager negation operator in Fuzzy mathematics, we propose a generalized target model of radiation-induced cell inactivation that takes into account both cellular repair effects and indirect effects of radiation. The simulation results of the model and the rethinking of "the number of targets in a cell" and "the number of hits per target" suggest that it is only necessary to investigate the generalized single-hit single-target (GSHST) in the present theoretical frame. Analysis shows that the GSHST model can be reduced to the linear quadratic model and multitarget model in the low-dose and high-dose regions, respectively. The fitting results show that the GSHST model agrees well with the usual experimental observations. In addition, the present model can be used to effectively predict cellular repair capacity, radiosensitivity, target size, especially the biologically effective dose for the treatment planning in clinical applications.
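The limiting behaviors described above can be illustrated with the classical survival models that the GSHST model interpolates between. The forms below are the textbook single-hit, multitarget, and linear-quadratic expressions with illustrative parameters; they are not the GSHST model itself.

```python
import math

def single_hit(D, D0):
    """Classical single-hit single-target survival: S = exp(-D/D0)."""
    return math.exp(-D / D0)

def multi_target(D, D0, n):
    """Classical multitarget survival: S = 1 - (1 - exp(-D/D0))**n.
    The cell survives unless all n targets are hit, which produces the
    low-dose shoulder; at high dose S ~ n*exp(-D/D0)."""
    return 1 - (1 - math.exp(-D / D0)) ** n

def linear_quadratic(D, alpha, beta):
    """Linear-quadratic survival, the usual clinical low-dose model:
    S = exp(-alpha*D - beta*D**2)."""
    return math.exp(-alpha * D - beta * D * D)
```

The abstract's claim that GSHST reduces to the linear-quadratic form at low dose and the multitarget form at high dose can be pictured by comparing these curves over the corresponding dose ranges.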
Straddling Interdisciplinary Seams: Working Safely in the Field, Living Dangerously With a Model
NASA Astrophysics Data System (ADS)
Light, B.; Roberts, A.
2016-12-01
Many excellent proposals for observational work have included language detailing how the proposers will appropriately archive their data and publish their results in peer-reviewed literature so that they may be readily available to the modeling community for parameterization development. While such division of labor may be both practical and inevitable, the assimilation of observational results and the development of observationally-based parameterizations of physical processes require care and feeding. Key questions include: (1) Is an existing parameterization accurate, consistent, and general? If not, it may be ripe for additional physics. (2) Do there exist functional working relationships between human modeler and human observationalist? If not, one or more may need to be initiated and cultivated. (3) If empirical observation and model development are a chicken/egg problem, how, given our lack of prescience and foreknowledge, can we better design observational science plans to meet the eventual demands of model parameterization? (4) Will the addition of new physics "break" the model? If so, then the addition may be imperative. In the context of these questions, we will make retrospective and forward-looking assessments of a now-decade-old numerical parameterization to treat the partitioning of solar energy at the Earth's surface where sea ice is present. While this so called "Delta-Eddington Albedo Parameterization" is currently employed in the widely-used Los Alamos Sea Ice Model (CICE) and appears to be standing the tests of accuracy, consistency, and generality, we will highlight some ideas for its ongoing development and improvement.
Edwards, Darrin C.; Metz, Charles E.
2012-01-01
Although a fully general extension of ROC analysis to classification tasks with more than two classes has yet to be developed, the potential benefits to be gained from a practical performance evaluation methodology for classification tasks with three classes have motivated a number of research groups to propose methods based on constrained or simplified observer or data models. Here we consider an ideal observer in a task with underlying data drawn from three univariate normal distributions. We investigate the behavior of the resulting ideal observer’s decision variables and ROC surface. In particular, we show that the pair of ideal observer decision variables is constrained to a parametric curve in two-dimensional likelihood ratio space, and that the decision boundary line segments used by the ideal observer can intersect this curve in at most six places. From this, we further show that the resulting ROC surface has at most four degrees of freedom at any point, and not the five that would be required, in general, for a surface in a six-dimensional space to be non-degenerate. In light of the difficulties we have previously pointed out in generalizing the well-known area under the ROC curve performance metric to tasks with three or more classes, the problem of developing a suitable and fully general performance metric for classification tasks with three or more classes remains unsolved. PMID:23162165
USDA-ARS?s Scientific Manuscript database
The feedback between soil moisture and precipitation has long been a topic of interest due to its potential for improving weather and seasonal forecasts. The generally proposed mechanism assumes a control of soil moisture on precipitation via the partitioning of the surface fluxes (the Evaporative F...
USDA-ARS?s Scientific Manuscript database
If not properly accounted for, auto-correlated errors in observations can lead to inaccurate results in soil moisture data analysis and reanalysis. Here, we propose a more generalized form of the triple collocation algorithm (GTC) capable of decomposing the total error variance of remotely-sensed surf...
An Optimal Algorithm towards Successive Location Privacy in Sensor Networks with Dynamic Programming
NASA Astrophysics Data System (ADS)
Zhao, Baokang; Wang, Dan; Shao, Zili; Cao, Jiannong; Chan, Keith C. C.; Su, Jinshu
In wireless sensor networks, preserving location privacy under successive inference attacks is extremely critical. Although this problem is NP-complete in general cases, we propose a dynamic programming based algorithm and prove it is optimal in special cases where the correlation only exists between p immediate adjacent observations.
Data-Driven Engineering of Social Dynamics: Pattern Matching and Profit Maximization
Peng, Huan-Kai; Lee, Hao-Chih; Pan, Jia-Yu; Marculescu, Radu
2016-01-01
In this paper, we define a new problem related to social media, namely, the data-driven engineering of social dynamics. More precisely, given a set of observations from the past, we aim at finding the best short-term intervention that can lead to predefined long-term outcomes. Toward this end, we propose a general formulation that covers two useful engineering tasks as special cases, namely, pattern matching and profit maximization. By incorporating a deep learning model, we derive a solution using convex relaxation and quadratic-programming transformation. Moreover, we propose a data-driven evaluation method in place of the expensive field experiments. Using a Twitter dataset, we demonstrate the effectiveness of our dynamics engineering approach for both pattern matching and profit maximization, and study the multifaceted interplay among several important factors of dynamics engineering, such as solution validity, pattern-matching accuracy, and intervention cost. Finally, the method we propose is general enough to work with multi-dimensional time series, so it can potentially be used in many other applications. PMID:26771830
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model, accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error, whereas naive procedures that ignore such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.
Stone, Anna
2012-05-01
A centre-surround attentional mechanism was proposed by Carr and Dagenbach (Journal of Experimental Psychology: Learning, Memory, and Cognition 16: 341-350, 1990) to account for their observations of negative semantic priming from hard-to-perceive primes. Their mechanism cannot account for the observation of negative semantic priming when primes are clearly visible. Three experiments (Ns = 30, 46, and 30) used a familiarity decision with names of famous people, preceded by a prime name with the same occupation as the target or with a different occupation. Negative semantic priming was observed at a 150- or 200-ms SOA, with positive priming at shorter (50-ms) and longer (1,000-ms) SOAs. In Experiment 3, we verified that the primes were easily recognisable in the priming task at an SOA that yielded negative semantic priming, which cannot be predicted by the original centre-surround mechanism. A modified version is proposed in which centre-surround inhibition is a normal, automatically invoked aspect of the semantic processing of visually presented famous names, which explains the transient negative semantic priming.
Decision making generalized by a cumulative probability weighting function
NASA Astrophysics Data System (ADS)
dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto
2018-01-01
Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller reward, but more immediate, and a larger one, delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in their probability of receipt. In Economics, the expected utility theory (EUT) and the discounted utility theory (DUT) are traditionally accepted normative models for describing, respectively, probabilistic and intertemporal decision making. A large number of experiments confirmed that the linearity assumed by the EUT does not explain some observed behaviors, such as nonlinear preference, risk-seeking and loss aversion. These observations led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that probabilities are transformed into decision weights by means of a (cumulative) probability weighting function, w(p). We obtain in this article a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already established in the literature, including discount models that consider effects of psychophysical perception. We also propose a new generalized function for the functional form of w. The limiting cases of this function encompass some parametric forms already proposed in the literature. Far beyond a mere generalization, our function allows the interpretation of probabilistic decision-making theories based on the assumption that individuals behave similarly in the face of probabilities and delays, and is supported by phenomenological models.
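Two widely used parametric forms of the weighting function w(p) in this literature are the Tversky-Kahneman and Prelec functions; both produce the inverse-S shape (overweighting small probabilities, underweighting large ones). The sketch below uses commonly cited parameter values and is not the generalized function derived in the paper.

```python
import math

def w_tversky_kahneman(p, gamma=0.61):
    """Tversky-Kahneman (1992) weighting function:
    w(p) = p**g / (p**g + (1-p)**g)**(1/g)."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

def w_prelec(p, alpha=0.65):
    """Prelec (1998) weighting function: w(p) = exp(-(-ln p)**alpha)."""
    if p == 0:
        return 0.0
    return math.exp(-((-math.log(p)) ** alpha))
```

Both satisfy w(0) = 0 and w(1) = 1 while bending the interior of the probability scale, which is the nonlinear transformation the NEUT models require.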
NASA Astrophysics Data System (ADS)
Gassara, H.; El Hajjaji, A.; Chaabane, M.
2017-07-01
This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns a special case where the polynomial matrices do not depend on the estimated state variables. The second one is the general case where the polynomial matrices could depend on unmeasurable system states that will be estimated. For the latter case, two design procedures are proposed. The first one gives the polynomial fuzzy controller and observer gains in two steps. In the second procedure, the designed gains are obtained using a single-step approach to overcome the drawback of the two-step procedure. The obtained conditions are presented in terms of sums of squares (SOS), which can be solved via SOSTOOLS and a semidefinite program solver. Illustrative examples show the validity and applicability of the proposed results.
FAST TRACK COMMUNICATION: Novel mechanism for nanoscale catalysis
NASA Astrophysics Data System (ADS)
Msezane, Alfred Z.; Felfli, Zineb; Sokolovski, Dmitri
2010-10-01
The interplay between Regge resonances and Ramsauer-Townsend minima in the electron elastic total cross sections for Au and Pd atoms, together with their large electron affinities, is proposed as the fundamental atomic mechanism responsible for the observed exceptional catalytic properties of Au nanoparticles. It also explains why the combination Au-Pd possesses an even higher catalytic activity than Au or Pd separately when catalyzing H2O2, consistent with recent experiments. The investigation uses the recent complex angular momentum description of electron scattering from neutral atoms, on which the proposed mechanism is based.
Generalized quantum interference of correlated photon pairs
Kim, Heonoh; Lee, Sang Min; Moon, Han Seb
2015-01-01
Superposition and indistinguishability between probability amplitudes have played an essential role in observing quantum interference effects of correlated photons. The Hong-Ou-Mandel interference and interferences of the path-entangled photon number state are of special interest in the field of quantum information technologies. However, a fully generalized two-photon quantum interferometric scheme accounting for the Hong-Ou-Mandel scheme and path-entangled photon number states has not yet been proposed. Here we report the experimental demonstration of generalized two-photon interferometry with both the interferometric properties of the Hong-Ou-Mandel effect and the fully unfolded version of the path-entangled photon number state, using photon-pair sources independently generated by spontaneous parametric down-conversion. Our experimental scheme exhibits two-photon interference fringes revealing single- and two-photon coherence properties in a single interferometer setup. Using the proposed interferometric measurement, it is possible to directly estimate the joint spectral intensity of a photon pair source. PMID:25951143
Portell, Mariona; Anguera, M Teresa; Hernández-Mendo, Antonio; Jonsson, Gudberg K
2015-01-01
Contextual factors are crucial for evaluative research in psychology, as they provide insights into what works, for whom, in what circumstances, in what respects, and why. Studying behavior in context, however, poses numerous methodological challenges. Although a comprehensive framework for classifying methods seeking to quantify biopsychosocial aspects in everyday contexts was recently proposed, this framework does not contemplate contributions from observational methodology. The aim of this paper is to justify and propose a more general framework that includes observational methodology approaches. Our analysis is rooted in two general concepts: ecological validity and methodological complementarity. We performed a narrative review of the literature on research methods and techniques for studying daily life and describe their shared properties and requirements (collection of data in real time, on repeated occasions, and in natural settings) and classification criteria (e.g., variables of interest and level of participant involvement in the data collection process). We provide several examples that illustrate why, despite their higher costs, studies of behavior and experience in everyday contexts offer insights that complement findings provided by other methodological approaches. We urge that observational methodology be included in classifications of research methods and techniques for studying everyday behavior and advocate a renewed commitment to prioritizing ecological validity in behavioral research seeking to quantify biopsychosocial aspects. PMID:26089708
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szankowski, Piotr; Trippenbach, Marek; Infeld, Eryk
We introduce a class of solitonlike entities in spinor three-component Bose-Einstein condensates. These entities generalize well-known solitons. For special values of coupling constants, the system considered is completely integrable and supports N soliton solutions. The one-soliton solutions can be generalized to systems with different values of coupling constants. However, they no longer interact elastically. When two so-generalized solitons collide, a spin component oscillation is observed in both emerging entities. We propose to call these newfound entities oscillatons. They propagate without dispersion and retain their character after collisions. We derive an exact mathematical model for oscillatons and show that the well-known one-soliton solutions are a particular case.
General flat four-dimensional world pictures and clock systems
NASA Technical Reports Server (NTRS)
Hsu, J. P.; Underwood, J. A.
1978-01-01
We explore the mathematical structure and the physical implications of a general four-dimensional symmetry framework which is consistent with the Poincare-Einstein principle of relativity for physical laws and with experiments. In particular, we discuss a four-dimensional framework in which all observers in different frames use one and the same grid of clocks. The general framework includes special relativity and a recently proposed new four-dimensional symmetry with a nonuniversal light speed as two special simple cases. The connection between the properties of light propagation and the convention concerning clock systems is also discussed, and is seen to be nonunique within the four-dimensional framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cimpoesu, Dorin, E-mail: cdorin@uaic.ro; Stoleriu, Laurentiu; Stancu, Alexandru
2013-12-14
We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because the nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of the SW model, the concept of the critical curve, and its geometric interpretation. We compare the results obtained with our model with full micromagnetic simulations in order to evaluate the performance and limits of our approach.
Generalized probabilistic scale space for image restoration.
Wong, Alexander; Mishra, Akshaya K
2010-10-01
A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.
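The probabilistic scale space itself is not specified in the abstract. As background, here is a minimal numpy sketch of the classical deterministic linear scale space that such formulations extend, shown in 1-D: coarser scales are produced by Gaussian smoothing, and fine-scale detail decreases monotonically with scale. This is the baseline construction, not the authors' sampling-based generalization.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Discrete 1-D Gaussian kernel, normalized to sum to 1."""
    if radius is None:
        radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def scale_space(signal, sigmas):
    """Stack of progressively smoothed versions of `signal`."""
    return [np.convolve(signal, gaussian_kernel(s), mode="same") for s in sigmas]

rng = np.random.default_rng(0)
noisy = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.3 * rng.standard_normal(256)
levels = scale_space(noisy, sigmas=[1.0, 2.0, 4.0, 8.0])

# Fluctuation (variance of the first difference) shrinks as scale grows.
detail = [np.var(np.diff(level)) for level in levels]
assert all(a > b for a, b in zip(detail, detail[1:]))
```

A probabilistic extension in the spirit of the paper would replace the single deterministic smoothing with realizations drawn under noise and observation models.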
Wang, Chang; Qi, Fei; Shi, Guangming; Wang, Xiaotian
2013-01-01
Deployment is a critical issue affecting the quality of service of camera networks. The deployment aims at adopting the least number of cameras to cover the whole scene, which may have obstacles to occlude the line of sight, with expected observation quality. This is generally formulated as a non-convex optimization problem, which is hard to solve in polynomial time. In this paper, we propose an efficient convex solution for deployment optimizing the observation quality based on a novel anisotropic sensing model of cameras, which provides a reliable measurement of the observation quality. The deployment is formulated as the selection of a subset of nodes from a redundant initial deployment with numerous cameras, which is an ℓ0 minimization problem. Then, we relax this non-convex optimization to a convex ℓ1 minimization employing the sparse representation. Therefore, the high quality deployment is efficiently obtained via convex optimization. Simulation results confirm the effectiveness of the proposed camera deployment algorithms. PMID:23989826
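The paper's anisotropic sensing model is not reproduced in the abstract. As a hypothetical toy instance of the ℓ0-to-ℓ1 relaxation it describes: given a binary coverage matrix C (rows are scene points, columns are candidate cameras), selecting the fewest cameras that cover every point is relaxed to minimizing the ℓ1 norm subject to coverage constraints, which is a linear program. The matrix below is random and made feasible by hand; it stands in for the paper's quality-based constraints.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_points, n_cams = 30, 12
# Hypothetical coverage matrix: C[i, j] = 1 if camera j sees point i.
C = (rng.random((n_points, n_cams)) < 0.4).astype(float)
C[:, 0] = 1.0  # ensure feasibility: camera 0 sees every point

# Relaxed problem: minimize sum(x)  s.t.  C @ x >= 1,  0 <= x <= 1.
res = linprog(c=np.ones(n_cams),
              A_ub=-C, b_ub=-np.ones(n_points),
              bounds=[(0.0, 1.0)] * n_cams)

assert res.success
assert np.all(C @ res.x >= 1.0 - 1e-6)   # every point covered
# The optimum is exactly 1 here, since camera 0 alone is a full cover and
# any feasible x must satisfy sum(x) >= C[i, :] @ x >= 1 for each point i.
assert abs(res.fun - 1.0) < 1e-6
```

In practice the fractional solution is thresholded or rounded back to a camera subset; the paper's formulation additionally weights coverage by observation quality.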
NASA Astrophysics Data System (ADS)
Caporale, F.; Celiberto, F. G.; Chachamis, G.; Gómez, D. Gordo; Vera, A. Sabio
2017-04-01
Recently, a new family of observables consisting of azimuthal-angle generalized ratios was proposed in a kinematical setup that resembles the usual Mueller-Navelet jets but with an additional tagged jet in the central region of rapidity. Nontagged minijet activity between the three jets can significantly affect the azimuthal-angle orientation of the jets and is accounted for by the introduction of two Balitsky-Fadin-Kuraev-Lipatov (BFKL) gluon Green functions. Here, we calculate the presumably most relevant higher-order corrections to the observables by now convoluting the three leading-order jet vertices with two gluon Green functions at next-to-leading logarithmic approximation. The corrections appear to be mostly moderate, giving us confidence that the recently proposed observables are an excellent way to probe the BFKL dynamics at the LHC. Furthermore, we allow the jets to take values in different rapidity bins in various configurations, so that a comparison between our predictions and the experimental data is a straightforward task.
Equivalent theories redefine Hamiltonian observables to exhibit change in general relativity
NASA Astrophysics Data System (ADS)
Pitts, J. Brian
2017-03-01
Change and local spatial variation are missing in canonical General Relativity's observables as usually defined, an aspect of the problem of time. Definitions can be tested using equivalent formulations of a theory, non-gauge and gauge, because they must have equivalent observables and everything is observable in the non-gauge formulation. Taking an observable from the non-gauge formulation and finding the equivalent in the gauge formulation, one requires that the equivalent be an observable, thus constraining definitions. For massive photons, the de Broglie-Proca non-gauge formulation observable A_μ is equivalent to the Stueckelberg-Utiyama gauge formulation quantity A_μ + ∂_μφ, which must therefore be an observable. To achieve that result, observables must have a vanishing Poisson bracket not with each first-class constraint, but with the Rosenfeld-Anderson-Bergmann-Castellani gauge generator G, a tuned sum of first-class constraints, in accord with the Pons-Salisbury-Sundermeyer definition of observables. The definition for external gauge symmetries can be tested using massive gravity, where one can install gauge freedom by parametrization with clock fields X^A. The non-gauge observable g^{μν} has the gauge equivalent X^A_{,μ} g^{μν} X^B_{,ν}. The Poisson bracket of X^A_{,μ} g^{μν} X^B_{,ν} with G turns out to be not 0 but a Lie derivative. This non-zero Poisson bracket refines and systematizes Kuchař's proposal to relax the vanishing Poisson bracket condition with the Hamiltonian constraint. Thus observables need covariance, not invariance, in relation to external gauge symmetries. The Lagrangian and Hamiltonian for massive gravity are those of General Relativity + Λ + 4 scalars, so the same definition of observables applies to General Relativity. Local fields such as g_{μν} are observables. Thus observables change. Requiring equivalent observables for equivalent theories also recovers Hamiltonian-Lagrangian equivalence.
Structural Equation Models in a Redundancy Analysis Framework With Covariates.
Lovaglio, Pietro Giorgio; Vittadini, Giorgio
2014-01-01
A recent method to specify and fit structural equation models in the Redundancy Analysis framework, based on so-called Extended Redundancy Analysis (ERA), has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA, we present a simulation study with small samples. Moreover, we present an application estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.
Observed reflectivities and liquid water content for marine stratocumulus
NASA Technical Reports Server (NTRS)
Coakley, J. A., Jr.; Snider, J. B.
1989-01-01
Simultaneous observations of cloud liquid water content and cloud reflectivity are used to verify their parametric relationship in a manner consistent with simple parameterizations often used in general-circulation climate models. The column amount of cloud liquid water was measured with a microwave radiometer on San Nicolas Island, as described by Hogg et al. (1983). Cloud reflectivity was obtained through spatial coherence analysis of AVHRR imagery data, following Coakley and Baldwin (1984) and Coakley and Beckner (1988). The dependence of the observed reflectivity on the observed liquid water is discussed, and this empirical relationship is compared with the parameterization proposed by Stephens (1978).
NASA Technical Reports Server (NTRS)
Bourras, D.; Eymard, L.; Liu, W. T.
2000-01-01
The turbulent latent and sensible heat fluxes are necessary to study the heat budget of the upper ocean or to initialize ocean general circulation models. To retrieve the latent heat flux from satellite observations, authors mostly use a bulk approximation of the flux whose parameters are derived from different instruments. In this paper, an approach based on artificial neural networks is proposed and compared to the bulk method on a global data set and three local data sets.
Carpenter, G.B.; Cardinell, A.P.; Francois, D.K.; Good, L.K.; Lewis, R.L.; Stiles, N.T.
1982-01-01
Analysis of high-resolution geophysical data collected over 540 blocks tentatively selected for leasing in proposed OCS Oil and Gas Lease Sale 52 (Georges Bank) revealed a number of potential geologic hazards to oil and gas exploration and development activities: evidence of mass movements and shallow gas deposits on the continental slope. No potential hazards were observed on the continental shelf or rise. Other geology-related problems, termed constraints because they pose a relatively low degree of risk and can be routinely dealt with using existing technology, have been observed on the continental shelf. Constraints identified in the proposed sale area are erosion, sand waves, filled channels, and deep faults. Piston cores were collected for geotechnical analysis at selected locations on the continental slope in the proposed lease sale area. The core locations were selected to provide information on slope stability and to establish the general geotechnical properties of the sediments. Preliminary results of a testing program suggest that the surficial sediment cover is stable with respect to mass movement.
Extraterrestrial intelligence - An observational approach
NASA Technical Reports Server (NTRS)
Murray, B.; Gulkis, S.; Edelson, R. E.
1978-01-01
The article surveys present and proposed search techniques for extraterrestrial intelligence in terms of technological requirements. It is proposed that computer systems used along with existing antennas may be utilized to search for radio signals over a broad frequency range. A general search within the electromagnetic spectrum would explore frequency, received power flux, spatial location, and modulation. Previous SETI projects (beginning in 1960) are briefly described. An observation project is proposed in which the earth's rotational motion would scan the antenna beam along one declination circle in 24 hours. The 15-degree beam width would yield a mapping of 75% of the sky in an 8-day period if the beam were shifted 15 degrees per day. With the proposed instrument parameters, a sensitivity of about 10^-21 W/m^2 is achieved at a 0-degree declination and 1.5 GHz. In a second phase, a 26-m antenna would yield an HPBW of 0.8 degrees at 1 GHz and 0.03 degrees at 25 GHz. It is noted that the described technology would provide secondary benefits for radio astronomy, radio communications, and other fields.
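The quoted beamwidths are consistent with the standard parabolic-dish rule of thumb HPBW ≈ k·λ/D degrees with k ≈ 70 (the exact factor depends on the illumination taper, so this is an approximation rather than the authors' calculation). A quick check against the figures in the abstract:

```python
C_LIGHT = 2.998e8  # speed of light, m/s

def hpbw_deg(diameter_m, freq_hz, k=70.0):
    """Half-power beamwidth (degrees) of a dish: rule of thumb k * lambda / D."""
    return k * (C_LIGHT / freq_hz) / diameter_m

# 26 m antenna: ~0.8 deg at 1 GHz and ~0.03 deg at 25 GHz, as in the abstract.
assert abs(hpbw_deg(26, 1e9) - 0.8) < 0.05
assert abs(hpbw_deg(26, 25e9) - 0.03) < 0.01
```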
NASA Technical Reports Server (NTRS)
Fuerst, Steven V.; Mizuno, Yosuke; Nishikawa, Ken-Ichi; Wu, Kinwah
2007-01-01
We have calculated the emission from relativistic flows in black hole systems using a fully general relativistic radiative transfer, with flow structures obtained by general relativistic magnetohydrodynamic simulations. We consider thermal free-free emission and thermal synchrotron emission. Bright filament-like features are found protruding (visually) from the accretion disk surface, which are enhancements of synchrotron emission when the magnetic field is roughly aligned with the line-of-sight in the co-moving frame. The features move back and forth as the accretion flow evolves, but their visibility and morphology are robust. We propose that variations and location drifts of the features are responsible for certain X-ray quasi-periodic oscillations (QPOs) observed in black-hole X-ray binaries.
Social inheritance can explain the structure of animal social networks
Ilany, Amiyaal; Akçay, Erol
2016-01-01
The social network structure of animal populations has major implications for survival, reproductive success, sexual selection and pathogen transmission of individuals. But as of yet, no general theory of social network structure exists that can explain the diversity of social networks observed in nature, and serve as a null model for detecting species and population-specific factors. Here we propose a simple and generally applicable model of social network structure. We consider the emergence of network structure as a result of social inheritance, in which newborns are likely to bond with maternal contacts, and via forming bonds randomly. We compare model output with data from several species, showing that it can generate networks with properties such as those observed in real social systems. Our model demonstrates that important observed properties of social networks, including heritability of network position or assortative associations, can be understood as consequences of social inheritance. PMID:27352101
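A minimal simulation of the kind of model described can be written in a few lines. The parameter names p_n (bond with a maternal contact) and p_r (bond with any other individual) are hypothetical, and the assumption that a newborn always bonds with its mother is an illustrative simplification, not taken from the paper:

```python
import random

def social_inheritance_step(adj, p_n=0.8, p_r=0.01, rng=random):
    """One birth-death step on an undirected network stored as {node: set}."""
    dead = rng.choice(sorted(adj))          # one individual dies
    for nb in adj.pop(dead):
        adj[nb].discard(dead)
    mother = rng.choice(sorted(adj))        # one individual reproduces
    newborn = max(adj) + 1
    adj[newborn] = set()
    links = {mother}                        # assumption: always bond with mother
    for other in list(adj):
        if other in (newborn, mother):
            continue
        p = p_n if other in adj[mother] else p_r   # social inheritance
        if rng.random() < p:
            links.add(other)
    for other in links:
        adj[newborn].add(other)
        adj[other].add(newborn)

rng = random.Random(42)
adj = {i: set() for i in range(30)}
adj[0] |= {1, 2}; adj[1] |= {0}; adj[2] |= {0}   # seed a few bonds
for _ in range(500):
    social_inheritance_step(adj, rng=rng)

assert len(adj) == 30                                  # constant population
assert all(a in adj[b] for a in adj for b in adj[a])   # edges stay symmetric
```

Varying p_n relative to p_r controls how strongly network structure is inherited, which is the mechanism the paper compares against empirical social networks.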
ERIC Educational Resources Information Center
White, Peter A.
2009-01-01
Many kinds of common and easily observed causal relations exhibit property transmission, which is a tendency for the causal object to impose its own properties on the effect object. It is proposed that property transmission becomes a general and readily available hypothesis used to make interpretations and judgments about causal questions under…
On the verification of intransitive noninterference in multilevel security.
Ben Hadj-Alouane, Nejib; Lafrance, Stéphane; Lin, Feng; Mullins, John; Yeddes, Mohamed Moez
2005-10-01
We propose an algorithmic approach to the problem of verification of the property of intransitive noninterference (INI), using tools and concepts of discrete event systems (DES). INI can be used to characterize and solve several important security problems in multilevel security systems. In a previous work, we have established the notion of iP-observability, which precisely captures the property of INI. We have also developed an algorithm for checking iP-observability by indirectly checking P-observability for systems with at most three security levels. In this paper, we generalize the results for systems with any finite number of security levels by developing a direct method for checking iP-observability, based on an insightful observation that the iP function is a left congruence in terms of relations on formal languages. To demonstrate the applicability of our approach, we propose a formal method to detect denial of service vulnerabilities in security protocols based on INI. This method is illustrated using the TCP/IP protocol. The work extends the theory of supervisory control of DES to a new application domain.
NASA Technical Reports Server (NTRS)
Steiner, B.; Kuriyama, M.; Dobbyn, R. C.; Laor, U.; Larson, D.; Brown, M.
1988-01-01
Novel, streak-like disruption features restricted to the plane of diffraction have recently been observed in images obtained by synchrotron radiation diffraction from undoped, semi-insulating gallium arsenide crystals. These features were identified as ensembles of very thin platelets or interfaces lying in (110) planes, and a structural model consisting of antiphase domain boundaries was proposed. We report here the other principal features observed in high resolution monochromatic synchrotron radiation diffraction images: (quasi) cellular structure; linear, very low-angle subgrain boundaries in (110) directions, and surface stripes in a (110) direction. In addition, we report systematic differences in the acceptance angle for images involving various diffraction vectors. When these observations are considered together, a unifying picture emerges. The presence of ensembles of thin (110) antiphase platelet regions or boundaries is generally consistent not only with the streak-like diffraction features but with the other features reported here as well. For the formation of such regions we propose two mechanisms, operating in parallel, that appear to be consistent with the various defect features observed by a variety of techniques.
Xing, Junliang; Ai, Haizhou; Liu, Liwei; Lao, Shihong
2011-06-01
Multiple object tracking (MOT) is a very challenging task yet of fundamental importance for many practical applications. In this paper, we focus on the problem of tracking multiple players in sports video which is even more difficult due to the abrupt movements of players and their complex interactions. To handle the difficulties in this problem, we present a new MOT algorithm which contributes both in the observation modeling level and in the tracking strategy level. For the observation modeling, we develop a progressive observation modeling process that is able to provide strong tracking observations and greatly facilitate the tracking task. For the tracking strategy, we propose a dual-mode two-way Bayesian inference approach which dynamically switches between an offline general model and an online dedicated model to deal with single isolated object tracking and multiple occluded object tracking integrally by forward filtering and backward smoothing. Extensive experiments on different kinds of sports videos, including football, basketball, as well as hockey, demonstrate the effectiveness and efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
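The abstract's point that a quasi-Newton iteration avoids repeatedly re-assembling and re-factorizing the tangent system can be illustrated in miniature with Broyden's method, a generic quasi-Newton root finder (not the authors' balancing-domain-decomposition solver); the two-equation residual below is an arbitrary stand-in for an equilibrium residual:

```python
import numpy as np

def residual(x):
    """Illustrative nonlinear system standing in for an equilibrium residual."""
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

def fd_jacobian(f, x, h=1e-7):
    """Finite-difference Jacobian, assembled once at the starting point."""
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (f(xp) - fx) / h
    return J

def broyden(f, x0, max_iter=50, tol=1e-10):
    """Broyden's 'good' method: after the initial assembly, the Jacobian
    approximation is maintained by rank-one updates, never recomputed."""
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(f, x)
    fx = f(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        s = np.linalg.solve(B, -fx)
        x = x + s
        fx_new = f(x)
        B += np.outer(fx_new - fx - B @ s, s) / (s @ s)  # rank-one update
        fx = fx_new
    return x

root = broyden(residual, [1.0, -1.5])
assert np.linalg.norm(residual(root)) < 1e-8
```

In the paper's setting the same idea removes the inner Newton loop around each preconditioned interface solve; here it simply removes repeated Jacobian evaluations.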
Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.
Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen
2015-05-01
Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently conducted GenIMS study.
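Any multiple-imputation estimator, including the one described here, pools its per-imputation fits with Rubin's rules. A minimal sketch of just the pooling step, with made-up per-imputation estimates (the numbers are illustrative, not from the paper):

```python
def rubin_pool(estimates, variances):
    """Combine M point estimates and their within-imputation variances
    using Rubin's rules: total variance = within + (1 + 1/M) * between."""
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled estimate
    wbar = sum(variances) / m                              # within-imputation
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    total = wbar + (1.0 + 1.0 / m) * b
    return qbar, total

# Hypothetical regression coefficient fitted on M = 5 imputed data sets.
qbar, total = rubin_pool([0.52, 0.48, 0.55, 0.50, 0.45],
                         [0.010, 0.011, 0.009, 0.010, 0.012])
assert abs(qbar - 0.50) < 1e-9
assert total > 0.0104   # total variance exceeds the mean within variance
```

The between-imputation term is what distinguishes proper multiple imputation from single imputation, which understates uncertainty by ignoring it.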
Jaman, Ajmery; Latif, Mahbub A H M; Bari, Wasimul; Wahed, Abdus S
2016-05-20
In generalized estimating equations (GEE), the correlation between the repeated observations on a subject is specified with a working correlation matrix. Correct specification of the working correlation structure ensures efficient estimators of the regression coefficients. Among the criteria used in practice for selecting a working correlation structure, Rotnitzky-Jewell, the Quasi Information Criterion (QIC), and the Correlation Information Criterion (CIC) are based on the fact that if the assumed working correlation structure is correct, then the model-based (naive) and the sandwich (robust) covariance estimators of the regression coefficient estimators should be close to each other. The sandwich covariance estimator, used in defining the Rotnitzky-Jewell, QIC, and CIC criteria, is biased downward and has a larger variability than the corresponding model-based covariance estimator. Motivated by this fact, a new criterion is proposed in this paper based on the bias-corrected sandwich covariance estimator for selecting an appropriate working correlation structure in GEE. A comparison of the proposed and the competing criteria is shown using simulation studies with correlated binary responses. The results revealed that the proposed criterion generally performs better than the competing criteria. An example of selecting the appropriate working correlation structure has also been shown using data from the Madras Schizophrenia Study.
Xia, Youshen; Kamel, Mohamed S
2007-06-01
Identification of a general nonlinear noisy system, viewed as estimation of a predictor function, is studied in this article. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused by using an optimal fusion technique, and the optimal fused data are then incorporated in a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the proposed measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm can converge globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA and spatiotemporal signal modeling. Experimental results show that the proposed measurement fusion method can provide a more accurate model.
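The abstract does not spell out its optimal fusion technique. As a hypothetical stand-in, here is the textbook inverse-variance-weighted fusion of independent noisy measurements, which minimizes the variance of the fused estimate; it illustrates why fusing before estimation reduces the noise error:

```python
import numpy as np

def fuse(measurements, variances):
    """Inverse-variance weighted average of independent measurements."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(measurements, dtype=float)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

rng = np.random.default_rng(7)
truth, var_a, var_b = 3.0, 0.5, 2.0
sq_errors = []
for _ in range(2000):
    za = truth + rng.normal(0.0, np.sqrt(var_a))   # sensor A
    zb = truth + rng.normal(0.0, np.sqrt(var_b))   # sensor B
    fused, fused_var = fuse([za, zb], [var_a, var_b])
    sq_errors.append((fused - truth) ** 2)

assert fused_var < min(var_a, var_b)   # analytic guarantee: 0.4 < 0.5
assert np.mean(sq_errors) < var_a      # empirically beats the best sensor
```

The fused variance 1 / (1/var_a + 1/var_b) is always below that of the best single measurement, which mirrors the paper's claim that the fused estimate has a smaller mean square error than the plain LS-SVM estimate.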
Prevalence of anal symptoms in general practice: a prospective study.
Tournu, Géraldine; Abramowitz, Laurent; Couffignal, Camille; Juguet, Frédéric; Sénéjoux, Agnès; Berger, Stéphane; Wiart, Anne-Laure; Bernard, Marc; Provost, Françoise; Pillant-Le Moult, Hélène; Bouchard, Dominique; Aubert, Jean-Pierre
2017-08-03
Anal disorders are largely underestimated in general practice. Studies have shown that patients conceal anal symptoms, leading to late diagnosis and treatment. Management by general practitioners is poorly described. The aim of this study is to assess the prevalence of anal symptoms and their management in general practice. In this prospective, observational, national study set in France, all adult patients consulting their general practitioner during two days of consultations were included. Anal symptoms, whether spontaneously revealed or not, were systematically collected and assessed. For symptomatic patients, the obstacles to anal examination were evaluated. The general practitioner's diagnosis was collected, and a proctologist visit was systematically proposed in case of anal symptoms. If the proctologist was consulted, his or her diagnosis was collected. From October 2014 to April 2015, 1061 patients were included by 57 general practitioners. The prevalence of anal symptoms was 15.6% (95% CI: 14-18). However, 85% of these patients did not spontaneously share their symptoms with their doctors, despite a discomfort rating of 3 out of 10 (range 1-5). Although 65% of patients agreed to an anal examination, it was not proposed in 45% of cases with anal symptoms. Performing the examination was associated with a significantly higher diagnosis rate of 76% versus 20% (p < 0.001). Concealment of anal symptoms by patients is common in general practice, despite their impact on quality of life. Anal examination is seldom performed. Improved training of general practitioners is required to break the taboo.
The Principle of General Tovariance
NASA Astrophysics Data System (ADS)
Heunen, C.; Landsman, N. P.; Spitters, B.
2008-06-01
We tentatively propose two guiding principles for the construction of theories of physics, which should be satisfied by a possible future theory of quantum gravity. These principles are inspired by those that led Einstein to his theory of general relativity, viz. his principle of general covariance and his equivalence principle, as well as by the two mysterious dogmas of Bohr's interpretation of quantum mechanics, i.e. his doctrine of classical concepts and his principle of complementarity. An appropriate mathematical language for combining these ideas is topos theory, a framework earlier proposed for physics by Isham and collaborators. Our principle of general tovariance states that any mathematical structure appearing in the laws of physics must be definable in an arbitrary topos (with natural numbers object) and must be preserved under so-called geometric morphisms. This principle identifies geometric logic as the mathematical language of physics and restricts the constructions and theorems to those valid in intuitionism: neither Aristotle's principle of the excluded third nor Zermelo's Axiom of Choice may be invoked. Subsequently, our equivalence principle states that any algebra of observables (initially defined in the topos Sets) is empirically equivalent to a commutative one in some other topos.
NASA Astrophysics Data System (ADS)
Avanaki, Ali R. N.; Espig, Kathryn; Knippel, Eddie; Kimpe, Tom R. L.; Xthona, Albert; Maidment, Andrew D. A.
2016-03-01
In this paper, we specify a notion of background tissue complexity (BTC) as perceived by a human observer that is suited for use with model observers. This notion of BTC is a function of image location and lesion shape and size. We propose four unsupervised BTC estimators based on: (i) perceived pre- and post-lesion similarity of images, (ii) lesion border analysis (LBA; conspicuous lesion should be brighter than its surround), (iii) tissue anomaly detection, and (iv) mammogram density measurement. The latter two are existing methods we adapt for location- and lesion-dependent BTC estimation. To validate the BTC estimators, we ask human observers to measure BTC as the visibility threshold amplitude of an inserted lesion at specified locations in a mammogram. Both human-measured and computationally estimated BTC varied with lesion shape (from circular to oval), size (from small circular to larger circular), and location (different points across a mammogram). BTCs measured by different human observers are correlated (ρ=0.67). BTC estimators are highly correlated to each other (0.84
No black holes: A gravitational gauge theory possibility
NASA Astrophysics Data System (ADS)
Chang, David B.; Johnson, Harold H.
1980-06-01
The most general lowest order lagrangian that can be formed from gauge-derived vierbein invariants is constrained by the hypothesis that the speed of light as measured by conventional rods and clocks of atomic constitution is independent of direction in a gravitational field. It is shown that the standard weak field observational tests of general relativity serve to eliminate all possible combinations of parameters in this constrained lagrangian except two. One parameter choice gives the isotropic Schwarzschild black hole metric of the general theory of relativity. The other allowable choice leads to an exponential metric of the class proposed by Yilmaz, corresponding in strong fields to large red shifts without black hole formation.
Caldwell, Robert R
2011-12-28
The challenge to understand the physical origin of the cosmic acceleration is framed as a problem of gravitation. Specifically, does the relationship between stress-energy and space-time curvature differ on large scales from the predictions of general relativity? In this article, we describe efforts to model and test a generalized relationship between the matter and the metric using cosmological observations. Late-time tracers of large-scale structure, including the cosmic microwave background, weak gravitational lensing, and clustering are shown to provide good tests of the proposed solution. Current data are very close to providing a critical test, leaving only a small window in parameter space in the case that the generalized relationship is scale free above galactic scales.
A review of the generalized uncertainty principle.
Tawfik, Abdel Nasser; Diab, Abdel Magied
2015-12-01
Based on string theory, black hole physics, doubly special relativity and some 'thought' experiments, minimal distance and/or maximum momentum are proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, the space noncommutativity, the Lorentz invariance violation, and the quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in the understanding of recent PLANCK observations of cosmic inflation. The higher-order GUP approaches predict minimal length uncertainty with and without maximum momenta. Possible arguments against the GUP are discussed; for instance, the concern about its compatibility with the equivalence principles, the universality of gravitational redshift and the free fall and law of reciprocal action are addressed.
Properties of added variable plots in Cox's regression model.
Lindkvist, M
2000-03-01
The added variable plot is useful for examining the effect of a covariate in regression models. The plot provides information regarding the inclusion of a covariate, and is useful in identifying influential observations on the parameter estimates. Hall et al. (1996) proposed a plot for Cox's proportional hazards model derived by regarding the Cox model as a generalized linear model. This paper proves and discusses properties of this plot. These properties make the plot a valuable tool in model evaluation. Quantities considered include parameter estimates, residuals, leverage, case influence measures and correspondence to previously proposed residuals and diagnostics.
Transformation of Didactic Intensions by Teachers: The Case of Geometrical Optics in Grade 8.
ERIC Educational Resources Information Center
Hirn, Colette; Viennot, Laurence
2000-01-01
Investigates the idea that teachers are not passive transmitters, and that some general trends can be found in the way they transform proposed strategies. Presents the case of elementary optics in grade 8 in France in which four sets of data--interviews before teaching, logbooks, assessment tasks, and video-recorded class observations--lead to…
Research into Practice: Visualising the Molecular World for a Deep Understanding of Chemistry
ERIC Educational Resources Information Center
Tasker, Roy
2014-01-01
Why is chemistry so difficult? A seminal paper by Johnstone (1982) offered an explanation for why science in general, and chemistry in particular, is so difficult to learn. He proposed that an expert in chemistry thinks at three levels; the macro (referred to as the observational level in this article), the sub-micro (referred to as the molecular…
Characterization of measurements in quantum communication. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chan, V. W. S.
1975-01-01
A characterization of quantum measurements by operator-valued measures is presented. The generalized measurements include simultaneous approximate measurement of noncommuting observables. This characterization is suitable for solving problems in quantum communication. Two realizations of such measurements are discussed. The first is by adjoining an apparatus to the system under observation and performing a measurement corresponding to a self-adjoint operator in the tensor-product Hilbert space of the system and apparatus spaces. The second realization is by performing, on the system alone, sequential measurements that correspond to self-adjoint operators, basing the choice of each measurement on the outcomes of previous measurements. Simultaneous generalized measurements are found to be equivalent to a single finer grain generalized measurement, and hence it is sufficient to consider the set of single measurements. An alternative characterization of generalized measurement is proposed. It is shown to be equivalent to the characterization by operator-valued measures, but it is potentially more suitable for the treatment of estimation problems. Finally, a study of the interaction between the information-carrying system and a measurement apparatus provides clues for the physical realizations of abstractly characterized quantum measurements.
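As illustrative background (not taken from the thesis itself), a generalized measurement of the kind characterized above is described by a positive operator-valued measure (POVM): a set of positive semidefinite operators that sum to the identity, with outcome probabilities given by the Born rule. A minimal numpy sketch, using a hypothetical two-outcome "unsharp" qubit measurement as the example:

```python
import numpy as np

# Hypothetical unsharp measurement of the qubit observable sigma_z:
# E0 and E1 are positive semidefinite and sum to the identity (a POVM).
eta = 0.8  # sharpness parameter (an assumption for illustration)
E0 = np.array([[(1 + eta) / 2, 0.0],
               [0.0, (1 - eta) / 2]])
E1 = np.eye(2) - E0

# Born rule: outcome probabilities for a state rho are p_k = Tr(E_k rho).
rho = np.array([[0.5, 0.5],
                [0.5, 0.5]])  # the |+><+| state
p0 = np.trace(E0 @ rho).real
p1 = np.trace(E1 @ rho).real
```

Taking eta < 1 makes the measurement an approximate one, which is the regime in which noncommuting observables can be measured simultaneously.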
Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.
Zhang, Yue; Berhane, Kiros
2016-01-01
We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family for taking into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to the latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children Health Study (CHS) to jointly model questionnaire based asthma state and multiple lung function measurements in order to gain better insight about the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.
Generalized framework for testing gravity with gravitational-wave propagation. I. Formulation
NASA Astrophysics Data System (ADS)
Nishizawa, Atsushi
2018-05-01
The direct detection of gravitational waves (GWs) from merging binary black holes and neutron stars marks the beginning of a new era in gravitational physics, and it brings forth new opportunities to test theories of gravity. To this end, it is crucial to search for anomalous deviations from general relativity in a model-independent way, irrespective of gravity theories, GW sources, and background spacetimes. In this paper, we propose a new universal framework for testing gravity with GWs, based on the generalized propagation of a GW in an effective field theory that describes modification of gravity at cosmological scales. Then, we perform a parameter estimation study, showing how well the future observation of GWs can constrain the model parameters in the generalized models of GW propagation.
Analysis of cohort studies with multivariate and partially observed disease classification data.
Chatterjee, Nilanjan; Sinha, Samiran; Diver, W Ryan; Feigelson, Heather Spencer
2010-09-01
Complex diseases like cancers can often be classified into subtypes using various pathological and molecular traits of the disease. In this article, we develop methods for analysis of disease incidence in cohort studies incorporating data on multiple disease traits using a two-stage semiparametric Cox proportional hazards regression model that allows one to examine the heterogeneity in the effect of the covariates by the levels of the different disease traits. For inference in the presence of missing disease traits, we propose a generalization of an estimating equation approach for handling missing cause of failure in competing-risk data. We prove asymptotic unbiasedness of the estimating equation method under a general missing-at-random assumption and propose a novel influence-function-based sandwich variance estimator. The methods are illustrated using simulation studies and a real data application involving the Cancer Prevention Study II nutrition cohort.
Rank-preserving regression: a more robust rank regression model against outliers.
Chen, Tian; Kowalski, Jeanne; Chen, Rui; Wu, Pan; Zhang, Hui; Feng, Changyong; Tu, Xin M
2016-08-30
Mean-based semi-parametric regression models such as the popular generalized estimating equations are widely used to improve robustness of inference over parametric models. Unfortunately, such models are quite sensitive to outlying observations. The Wilcoxon-score-based rank regression (RR) provides more robust estimates over generalized estimating equations against outliers. However, the RR and its extensions do not sufficiently address missing data arising in longitudinal studies. In this paper, we propose a new approach to address outliers under a different framework based on the functional response models. This functional-response-model-based alternative not only addresses limitations of the RR and its extensions for longitudinal data, but, with its rank-preserving property, even provides more robust estimates than these alternatives. The proposed approach is illustrated with both real and simulated data. Copyright © 2016 John Wiley & Sons, Ltd.
Optimal control of large space structures via generalized inverse matrix
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Fang, Xiaowen
1987-01-01
Independent Modal Space Control (IMSC) is a control scheme that decouples the space structure into n independent second-order subsystems according to n controlled modes and controls each mode independently. It is well-known that the IMSC eliminates control and observation spillover caused when the conventional coupled modal control scheme is employed. The independent control of each mode requires that the number of actuators be equal to the number of modelled modes, which is very high for a faithful modeling of large space structures. A control scheme is proposed that allows one to use a reduced number of actuators to control all modeled modes suboptimally. In particular, the method of generalized inverse matrices is employed to implement the actuators such that the eigenvalues of the closed-loop system are as close as possible to those specified by the optimal IMSC. Computer simulation of the proposed control scheme on a simply supported beam is given.
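The generalized-inverse step in a scheme of this kind can be illustrated in a few lines: given a desired vector of modal control forces and an actuator influence matrix with fewer actuators than controlled modes, the Moore-Penrose inverse yields the least-squares actuator commands. A minimal sketch with a made-up influence matrix (not the paper's beam model):

```python
import numpy as np

# Hypothetical actuator influence matrix: 4 controlled modes, 2 actuators.
B = np.array([[1.0, 0.2],
              [0.5, 1.0],
              [0.3, 0.7],
              [0.8, 0.1]])
f_desired = np.array([1.0, -0.5, 0.2, 0.4])  # modal forces specified by IMSC

# Least-squares actuator commands via the generalized (Moore-Penrose) inverse.
u = np.linalg.pinv(B) @ f_desired
f_achieved = B @ u  # closest achievable modal force vector
```

Because the system is overdetermined, `f_achieved` only approximates `f_desired`; the residual is orthogonal to the range of B, which is the sense in which the reduced actuator set controls all modes suboptimally.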
Application of remote sensors in coastal zone observations
NASA Technical Reports Server (NTRS)
Caillat, J. M.; Elachi, C.; Brown, W. E., Jr.
1975-01-01
A review of processes taking place along coastlines and their biological consideration led to the determination of the elements which are required in the study of coastal structures and which are needed for better utilization of the resources from the oceans. The processes considered include waves, currents, and their influence on the erosion of coastal structures. Biological considerations include coastal fisheries, estuaries, and tidal marshes. Various remote sensors were analyzed for the information which they can provide and sites were proposed where a general ocean-observation plan could be tested.
No need for dark matter in galaxy clusters within Galileon theory
NASA Astrophysics Data System (ADS)
Salzano, Vincenzo; Mota, David F.; Dabrowski, Mariusz P.; Capozziello, Salvatore
2016-10-01
Modified gravity theories with a screening mechanism have acquired much interest recently in the quest for a viable alternative to General Relativity on cosmological scales, given their intrinsic property of being able to pass Solar System scale tests and, at the same time, to possibly drive universe acceleration on much larger scales. Here, we explore the possibility that the same screening mechanism, or its breaking at a certain astrophysical scale, might be responsible of those gravitational effects which, in the context of general relativity, are generally attributed to Dark Matter. We consider a recently proposed extension of covariant Galileon models in the so-called ``beyond Horndeski'' scenario, where a breaking of the Vainshtein mechanism is possible and, thus, some peculiar observational signatures should be detectable and make it distinguishable from general relativity. We apply this model to a sample of clusters of galaxies observed under the CLASH survey, using both new data from gravitational lensing events and archival data from X-ray intra-cluster hot gas observations. In particular, we use the latter to model the gas density, and then use it as the only ingredient in the matter clusters' budget to calculate the expected lensing convergence map. Results show that, in the context of this extended Galileon, the assumption of having only gas and no Dark Matter at all in the clusters is able to match observations. We also obtain narrow and very interesting bounds on the parameters which characterize this model. In particular, we find that, at least for one of them, the general relativity limit is excluded at 2σ confidence level, thus making this model clearly statistically different and competitive with respect to general relativity.
NASA Technical Reports Server (NTRS)
Bernard, L. C.
1973-01-01
Whistler mode waves that propagate through the magnetosphere exchange energy with energetic electrons by wave-particle interaction mechanisms. Using linear theory, a detailed investigation is presented of the resulting amplitude variations of the wave as it propagates. Arbitrary wave frequency and direction of propagation are considered. A general class of electron distributions that are nonseparable in particle energy and pitch-angle is proposed. It is found that the proposed distribution model is consistent with available whistler and particle observations. This model yields insignificant amplitude variation over a large frequency band, a feature commonly observed in whistler data. This feature implies a certain equilibrium between waves and particles in the magnetosphere over a wide spread of particle energy, and is relevant to plasma injection experiments and to monitoring the distribution of energetic electrons in the magnetosphere.
Solar Prominence Fine Structure and Dynamics
NASA Astrophysics Data System (ADS)
Berger, Thomas
2014-01-01
We review recent observational and theoretical results on the fine structure and dynamics of solar prominences, beginning with an overview of prominence classifications, the proposal of a possible new ``funnel prominence'' classification, and a discussion of the recent ``solar tornado'' findings. We then focus on quiescent prominences to review formation, down-flow dynamics, and the ``prominence bubble'' phenomenon. We show new observations of the prominence bubble Rayleigh-Taylor instability triggered by a Kelvin-Helmholtz shear flow instability occurring along the bubble boundary. Finally we review recent studies on plasma composition of bubbles, emphasizing that differential emission measure (DEM) analysis offers a more quantitative analysis than photometric comparisons. In conclusion, we discuss the relation of prominences to coronal magnetic flux ropes, proposing that prominences can be understood as partially ionized condensations of plasma forming the return flow of a general magneto-thermal convection in the corona.
Distributed finite-time containment control for double-integrator multiagent systems.
Wang, Xiangyu; Li, Shihua; Shi, Peng
2014-09-01
In this paper, the distributed finite-time containment control problem for double-integrator multiagent systems with multiple leaders and external disturbances is discussed. In the presence of multiple dynamic leaders, by utilizing the homogeneous control technique, a distributed finite-time observer is developed for the followers to estimate the weighted average of the leaders' velocities at first. Then, based on the estimates and the generalized adding a power integrator approach, distributed finite-time containment control algorithms are designed to guarantee that the states of the followers converge to the dynamic convex hull spanned by those of the leaders in finite time. Moreover, as a special case of multiple dynamic leaders with zero velocities, the proposed containment control algorithms also work for the case of multiple stationary leaders without using the distributed observer. Simulations demonstrate the effectiveness of the proposed control algorithms.
Electronic evaluation for video commercials by impression index.
Kong, Wanzeng; Zhao, Xinxin; Hu, Sanqing; Vecchiato, Giovanni; Babiloni, Fabio
2013-12-01
How to evaluate the effect of commercials is an important question in neuromarketing. In this paper, we proposed an electronic way to evaluate the influence of video commercials on consumers by an impression index. The impression index combines both the memorization and attention indices while consumers observe video commercials, by tracking EEG activity. It extracts features from scalp EEG to evaluate the effectiveness of video commercials in the time-frequency-space domain. The general global field power was used as an impression index for evaluating video commercial scenes as time series. Results of the experiment demonstrate that the proposed approach is able to track variations of cerebral activity related to cognitive tasks such as observing video commercials, and helps to judge from EEG signals whether a scene in a video commercial is impressive or not.
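In its standard form, global field power is simply the spatial standard deviation of the scalp potentials at each time sample. A minimal sketch of that computation on a synthetic channels-by-time array (an illustration only; the paper's exact pipeline is not specified in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 1000))  # 32 channels x 1000 time samples (synthetic)

# Global field power: standard deviation across channels at each time point.
gfp = eeg.std(axis=0)

# Peaks in the GFP time series mark moments of strong, spatially structured
# activity, which can then be aligned with individual commercial scenes.
```

The result is one value per time sample, so it can be plotted directly against the video timeline.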
Guo, Ying; Manatunga, Amita K
2009-03-01
Assessing agreement is often of interest in clinical studies to evaluate the similarity of measurements produced by different raters or methods on the same subjects. We present a modified weighted kappa coefficient to measure agreement between bivariate discrete survival times. The proposed kappa coefficient accommodates censoring by redistributing the mass of censored observations within the grid where the unobserved events may potentially happen. A generalized modified weighted kappa is proposed for multivariate discrete survival times. We estimate the modified kappa coefficients nonparametrically through a multivariate survival function estimator. The asymptotic properties of the kappa estimators are established and the performance of the estimators is examined through simulation studies of bivariate and trivariate survival times. We illustrate the application of the modified kappa coefficient in the presence of censored observations with data from a prostate cancer study.
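For background, the complete-data weighted kappa that such estimators generalize has a simple closed form: observed and chance-expected weighted agreement are compared through the confusion matrix. A minimal sketch with linear agreement weights (this is the standard Cohen weighted kappa, not the censoring-adjusted estimator of the paper):

```python
import numpy as np

def weighted_kappa(r1, r2, k):
    """Cohen's weighted kappa with linear agreement weights for ratings 0..k-1."""
    n = len(r1)
    conf = np.zeros((k, k))
    for a, b in zip(r1, r2):
        conf[a, b] += 1
    conf /= n
    i, j = np.indices((k, k))
    w = 1.0 - np.abs(i - j) / (k - 1)                     # linear agreement weights
    po = (w * conf).sum()                                 # observed weighted agreement
    pe = (w * np.outer(conf.sum(1), conf.sum(0))).sum()   # chance-expected agreement
    return (po - pe) / (1 - pe)
```

Perfect agreement gives kappa = 1; agreement no better than chance gives values near or below zero. The censoring adjustment in the paper replaces the hard cell counts with redistributed mass for censored observations.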
Enhancement of magnetocaloric effect in the Gd2Al phase by Co alloying
Huang, Z. Y.; Fu, H.; Hadimani, R. L.; ...
2014-11-14
Determining whether metals nucleate homogeneously on graphite: A case study with copper
Appy, David; Lei, Huaping; Han, Yong; ...
2014-11-05
In this study, we observe that Cu clusters grow on surface terraces of graphite as a result of physical vapor deposition in ultrahigh vacuum. We show that the observation is incompatible with a variety of models incorporating homogeneous nucleation and calculations of atomic-scale energetics. An alternative explanation, ion-mediated heterogeneous nucleation, is proposed and validated, both with theory and experiment. This serves as a case study in identifying when and whether the simple, common observation of metal clusters on carbon-rich surfaces can be interpreted in terms of homogeneous nucleation. We describe a general approach for making system-specific and laboratory-specific predictions.
A new method of passive modifications for partial frequency assignment of general structures
NASA Astrophysics Data System (ADS)
Belotti, Roberto; Ouyang, Huajiang; Richiedei, Dario
2018-01-01
The assignment of a subset of natural frequencies to vibrating systems can be conveniently achieved by means of suitable structural modifications. It has been observed that such an approach usually leads to the undesired change of the unassigned natural frequencies, which is a phenomenon known as frequency spill-over. Such an issue has been dealt with in the literature only in simple specific cases. In this paper, a new and general method is proposed that aims to assign a subset of natural frequencies with low spill-over. The optimal structural modifications are determined through a three-step procedure that considers both the prescribed eigenvalues and the feasibility constraints, assuring that the obtained solution is physically realizable. The proposed method is therefore applicable to very general vibrating systems, such as those obtained through the finite element method. The numerical difficulties that may occur as a result of employing the method are also carefully addressed. Finally, the capabilities of the method are validated in three test-cases in which both lumped and distributed parameters are modified to obtain the desired eigenvalues.
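The basic effect exploited by such methods, that a structural modification shifts the natural frequencies obtained from the generalized eigenproblem K v = ω² M v, can be seen in a toy two-degree-of-freedom example (a made-up lumped system, not the paper's assignment procedure):

```python
import numpy as np

# Toy 2-DOF spring-mass chain (illustrative values, not from the paper).
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])  # stiffness matrix
M = np.eye(2)               # original mass matrix

def natural_freqs(K, M):
    # Solve K v = w^2 M v by reducing to a standard symmetric eigenproblem
    # via the Cholesky factor of M.
    L_inv = np.linalg.inv(np.linalg.cholesky(M))
    lam = np.linalg.eigvalsh(L_inv @ K @ L_inv.T)
    return np.sqrt(lam)

w_before = natural_freqs(K, M)
M_mod = M + np.diag([0.5, 0.0])  # structural modification: add mass at DOF 1
w_after = natural_freqs(K, M_mod)
```

Adding mass lowers every natural frequency here; the difficulty addressed by the paper is choosing modifications that move only the targeted frequencies while keeping the spill-over on the remaining ones small.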
NASA Astrophysics Data System (ADS)
Parand, K.; Latifi, S.; Moayeri, M. M.; Delkhosh, M.
2018-05-01
In this study, we have constructed a new numerical approach for solving the time-dependent linear and nonlinear Fokker-Planck equations. The time variable is discretized with the Crank-Nicolson method and, for the space variable, a numerical method based on Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) collocation is applied. This leads to solving the equation in a series of time steps, where at each time step the problem is reduced to a system of algebraic equations, which greatly simplifies the problem. One can observe that the proposed method is simple and accurate. Indeed, one of its merits is that it is derivative-free: by proposing a formula for the derivative matrices, the difficulty arising in their calculation is overcome, and the method does not need to compute the generalized Lagrange basis and matrices explicitly, since they have the Kronecker property. Linear and nonlinear Fokker-Planck equations are given as examples and the results amply demonstrate that the presented method is valid, effective, and reliable, and does not require any restrictive assumptions for the nonlinear terms.
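The Crank-Nicolson time-stepping half of such a scheme can be sketched on its own. Below, finite differences stand in for the paper's GLJGL collocation, and a constant-coefficient diffusion equation u_t = D u_xx stands in for the Fokker-Planck operator; both substitutions are for brevity only:

```python
import numpy as np

# Crank-Nicolson for u_t = D u_xx on [0, 1] with zero Dirichlet boundaries.
D, nx, dt, steps = 1.0, 50, 1e-3, 100
h = 1.0 / nx
x = np.linspace(h, 1 - h, nx - 1)          # interior nodes
A = (np.diag(-2.0 * np.ones(nx - 1)) +
     np.diag(np.ones(nx - 2), 1) +
     np.diag(np.ones(nx - 2), -1)) / h**2  # discrete Laplacian

I = np.eye(nx - 1)
lhs = I - 0.5 * dt * D * A                  # implicit half of the step
rhs = I + 0.5 * dt * D * A                  # explicit half of the step
u = np.sin(np.pi * x)                       # initial condition

for _ in range(steps):                      # each step solves a linear system
    u = np.linalg.solve(lhs, rhs @ u)

# Exact solution of the model problem, for comparison.
exact = np.exp(-D * np.pi**2 * dt * steps) * np.sin(np.pi * x)
```

Each time step reduces to an algebraic system, exactly as described in the abstract; in the paper the system instead comes from collocation at the Gauss-Lobatto nodes.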
NASA Astrophysics Data System (ADS)
Simon, M.; Dolinar, S.
2005-08-01
A means is proposed for realizing the generalized split-symbol moments estimator (SSME) of signal-to-noise ratio (SNR), i.e., one whose implementation on the average allows for a number of subdivisions (observables), 2L, per symbol beyond the conventional value of two, with other than an integer value of L. In theory, the generalized SSME was previously shown to yield optimum performance for a given true SNR, R, when L=R/sqrt(2) and thus, in general, the resulting estimator was referred to as the fictitious SSME. Here we present a time-multiplexed version of the SSME that allows it to achieve its optimum value of L as above (to the extent that it can be computed as the average of a sum of integers) at each value of SNR and as such turns fiction into non-fiction. Also proposed is an adaptive algorithm that allows the SSME to rapidly converge to its optimum value of L when in fact one has no a priori information about the true value of SNR.
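The conventional two-subdivision SSME that this work generalizes can be sketched directly: each symbol is split into two half-symbol accumulations, and the ratio of the mean-square sum to the mean-square difference estimates the SNR. A minimal simulation sketch, assuming BPSK symbols in AWGN (an assumption for illustration; the SNR convention R = m²/(2σ²) is likewise chosen for this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sym, snr_true = 200_000, 1.0             # true symbol SNR R = m^2 / (2 sigma^2)
m, sigma2 = 1.0, 1.0 / (2 * snr_true)      # signal amplitude and noise variance

d = rng.choice([-1.0, 1.0], n_sym)         # BPSK symbols
# Half-symbol observables: half the signal and half the noise variance each.
y1 = d * m / 2 + rng.normal(0, np.sqrt(sigma2 / 2), n_sym)
y2 = d * m / 2 + rng.normal(0, np.sqrt(sigma2 / 2), n_sym)

u_plus, u_minus = y1 + y2, y1 - y2         # signal-plus-noise and noise-only stats
snr_est = (np.mean(u_plus**2) / np.mean(u_minus**2) - 1) / 2
```

The generalization in the paper replaces the fixed two subdivisions with 2L observables per symbol and adapts L toward its optimum value as the SNR estimate evolves.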
Simultaneous Chandra/EHT/NuSTAR Monitoring of Sgr A* Flares
NASA Astrophysics Data System (ADS)
Garmire, Gordon
2017-09-01
EHT will observe SgrA* at 0.85 mm during the period 2017 April 5-14 UT. These will be the first mm VLBI observations with sufficient effective area and angular resolution to produce time-resolved images of the event horizon of a black hole, enabling tests of general relativity in the strong gravity regime and a search for structural variability, especially during flares. Chandra Flight Ops has identified windows on four dates when Chandra can observe SgrA* uninterrupted for 33 ks simultaneous with EHT. NuSTAR will coordinate to observe simultaneously in these windows. This Cycle 19 observation will cover one of the four windows. The other three will be covered by splitting 100 ks of Cycle 18 time currently in ObsIDs 19726 and 19727 into three observations (Proposal 18620742).
On the soft supersymmetry-breaking parameters in gauge-mediated models
NASA Astrophysics Data System (ADS)
Wagner, C. E. M.
1998-09-01
Gauge mediation of supersymmetry breaking in the observable sector is an attractive idea, which naturally alleviates the flavor changing neutral current problem of supersymmetric theories. Quite generally, however, the number and quantum number of the messengers are not known; nor is their characteristic mass scale determined by the theory. Using the recently proposed method to extract supersymmetry-breaking parameters from wave-function renormalization, we derived general formulae for the soft supersymmetry-breaking parameters in the observable sector, valid in the small and moderate tan β regimes, for the case of split messengers. The full leading-order effects of top Yukawa and gauge couplings on the soft supersymmetry-breaking parameters are included. We give a simple interpretation of the general formulae in terms of the renormalization group evolution of the soft supersymmetry-breaking parameters. As a by-product of this analysis, the one-loop renormalization group evolution of the soft supersymmetry-breaking parameters is obtained for arbitrary boundary conditions of the scalar and gaugino mass parameters at high energies.
Barnett, Adrian G; Herbert, Danielle L; Campbell, Megan; Daly, Naomi; Roberts, Jason A; Mudge, Alison; Graves, Nicholas
2015-02-07
Despite the widely recognised importance of sustainable health care systems, health services research remains generally underfunded in Australia. The Australian Centre for Health Services Innovation (AusHSI) is funding health services research in the state of Queensland. AusHSI has developed a streamlined protocol for applying for and awarding funding using a short proposal and accelerated peer review. An observational study of proposals for four health services research funding rounds from May 2012 to November 2013 was conducted. A short proposal of less than 1,200 words was submitted using a secure web-based portal. The primary outcome measures are: time spent preparing proposals; a simplified scoring of grant proposals (reject, revise or accept for interview) by a scientific review committee; and progressing from submission to funding outcomes within eight weeks. Proposals outside of health services research were deemed ineligible. There were 228 eligible proposals across the 4 funding rounds: 29% to 79% were shortlisted and 9% to 32% were accepted for interview. Success rates increased from 6% (in 2012) to 16% (in 2013) of eligible proposals. Applicants were notified of the outcomes within two weeks of the interview, which was a maximum of eight weeks after the submission deadline. Applicants spent 7 days on average preparing their proposal. Applicants with a ranking of reject or revise received written feedback and suggested improvements for their proposals, and resubmissions comprised one third of the 2013 rounds. The AusHSI funding scheme is a streamlined application process that has simplified the allocation of health services research funding for both applicants and peer reviewers. The AusHSI process has minimised the time from submission to notification of funding outcomes.
Is Prediction Possible in General Relativity?
NASA Astrophysics Data System (ADS)
Manchak, John Byron
2008-04-01
Here we briefly review the concept of “prediction” within the context of classical relativity theory. We prove a theorem asserting that one may predict one’s own future only in a closed universe. We then question whether prediction is possible at all (even in closed universes). We note that interest in prediction has stemmed from considering the epistemological predicament of the observer. We argue that the definitions of prediction found thus far in the literature do not fully appreciate this predicament. We propose a more adequate alternative and show that, under this definition, prediction is essentially impossible in general relativity.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-14
... Environmental Impact Statement for the Proposed Replacement General Aviation Airport, Mesquite, Clark County, NV... Environmental Impact Statement (EIS) for a proposed Replacement General Aviation (GA) Airport in Mesquite, Clark... General Aviation (GA) Airport, for the City of Mesquite in eastern Clark County, Nevada. The City [[Page...
Murray, Kara; McKenzie, Karen; Kelleher, Michael
2016-10-01
The importance of non-technical skills (NTS) to patient outcomes is increasingly being recognised; however, there is limited research into how such skills can be taught to, and evaluated in, student nurses in relation to ward rounds. This pilot study describes an evaluation of an NTS framework that could potentially be used to measure the ward round skills of student nurses. The study used an observational design. Potential key NTS were identified from the existing literature and NTS taxonomies. The proposed framework was then used to evaluate whether the identified NTS were evident in a series of ward round simulations that final year general nursing students undertook as part of their training. Finally, the views of a small group of qualified nurse educators, qualified nurses and general nursing students were sought about whether the identified NTS were important and relevant to practice. The proposed NTS framework included seven categories: Communication, Decision Making, Situational Awareness, Teamwork, Task Management, Student Initiative, and Responsiveness to Patient. All were rated as important and relevant to practice. The pilot study suggests that the proposed NTS framework could be used as a means of evaluating student nurse competencies in respect of many of the non-technical skills required for a successful ward round. Further work is required to establish the validity of the framework in educational settings and to determine the extent to which it is of use in a non-simulated ward round setting. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Xu, Yu-Lin
The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem, and are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and are either linear in the adjustment parameters or linearized by first-order Taylor expansion, is inadequate for our orbit problem. D. C. Brown proposed an algorithm solving a more general least squares adjustment problem, in which, however, the scalar residual function is still constructed by first-order approximation. More recently, a completely general solution was published by W. H. Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied to our problem. The normal equations were first solved by Newton's scheme. Practical examples show that this converges fast if the observational errors are sufficiently small and the initial approximate solution is sufficiently accurate, and that it fails otherwise. Newton's method was therefore modified to yield a definitive solution in cases where the normal approach fails, by combining it with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme can always lead to a final solution.
The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also considered. The definition of efficiency is revised.
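The modified Newton scheme described above (Newton-type iterations with a steepest-descent fallback when the normal equations fail) can be sketched for an ordinary nonlinear least-squares problem. The exponential model and data below are illustrative stand-ins, not the binary-orbit condition equations, and the sketch omits correlated observations:

```python
import numpy as np

def residuals(p, t, y):
    a, b = p
    return y - a * np.exp(b * t)

def jacobian(p, t, y):
    a, b = p
    e = np.exp(b * t)
    # derivatives of the residual y - a*exp(b*t) w.r.t. (a, b)
    return np.column_stack([-e, -a * t * e])

def fit(t, y, p0, tol=1e-10, max_iter=200):
    """Gauss-Newton with a steepest-descent fallback and backtracking
    line search, in the spirit of the modified Newton scheme above."""
    p = np.asarray(p0, dtype=float)
    cost = 0.5 * np.sum(residuals(p, t, y) ** 2)
    for _ in range(max_iter):
        r = residuals(p, t, y)
        J = jacobian(p, t, y)
        g = J.T @ r                              # gradient of the cost
        if np.linalg.norm(g) < tol:
            break
        try:
            step = np.linalg.solve(J.T @ J, -g)  # normal equations
        except np.linalg.LinAlgError:
            step = -g                            # singular: descend instead
        improved = False
        for direction in (step, -g):             # fall back if Newton fails
            alpha = 1.0
            while alpha > 1e-12:
                cand = p + alpha * direction
                c = 0.5 * np.sum(residuals(cand, t, y) ** 2)
                if c < cost:
                    p, cost, improved = cand, c, True
                    break
                alpha *= 0.5
            if improved:
                break
        if not improved:
            break
    return p

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 40)
y = 2.0 * np.exp(0.7 * t) + 0.01 * rng.standard_normal(40)
p_hat = fit(t, y, p0=[1.0, 0.1])
```

The line search guarantees a monotone cost decrease, which is what rescues the iteration when a pure Newton step diverges from a poor starting point.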
Keshavarzi, Sareh; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Pakfetrat, Maryam
2012-01-01
BACKGROUND. In many studies with longitudinal data, time-dependent covariates can only be measured intermittently (not at all observation times), which presents difficulties for standard statistical analyses. This situation is common in medical studies, and methods that deal with this challenge would be useful. METHODS. In this study, we applied seemingly unrelated regression (SUR)-based models with respect to each observation time in longitudinal data with intermittently observed time-dependent covariates, and compared these models with mixed-effect regression models (MRMs) under three classic imputation procedures. Simulation studies were performed to compare the sampling properties of the estimated coefficients for the different modeling choices. RESULTS. In general, the proposed models performed well in the presence of intermittently observed time-dependent covariates. However, when we considered only the observed values of the covariate without any imputation, the resulting biases were greater. The performance of the proposed SUR-based models was nearly identical to that of MRMs with classic imputation methods, with approximately equal bias and MSE. CONCLUSION. The simulation study suggests that the SUR-based models work as efficiently as MRMs in the case of intermittently observed time-dependent covariates, and can therefore be used as an alternative to MRMs.
Wilkin, John L.; Rosenfeld, Leslie; Allen, Arthur; Baltes, Rebecca; Baptista, Antonio; He, Ruoying; Hogan, Patrick; Kurapov, Alexander; Mehra, Avichal; Quintrell, Josie; Schwab, David; Signell, Richard; Smith, Jane
2017-01-01
This paper outlines strategies that would advance coastal ocean modelling, analysis and prediction as a complement to the observing and data management activities of the coastal components of the US Integrated Ocean Observing System (IOOS®) and the Global Ocean Observing System (GOOS). The views presented are the consensus of a group of US-based researchers with a cross-section of coastal oceanography and ocean modelling expertise and community representation drawn from Regional and US Federal partners in IOOS. Priorities for research and development are suggested that would enhance the value of IOOS observations through model-based synthesis, deliver better model-based information products, and assist the design, evaluation, and operation of the observing system itself. The proposed priorities are: model coupling, data assimilation, nearshore processes, cyberinfrastructure and model skill assessment, modelling for observing system design, evaluation and operation, ensemble prediction, and fast predictors. Approaches are suggested to accomplish substantial progress in a 3–8-year timeframe. In addition, the group proposes steps to promote collaboration between research and operations groups in Regional Associations, US Federal Agencies, and the international ocean research community in general that would foster coordination on scientific and technical issues, and strengthen federal–academic partnerships benefiting IOOS stakeholders and end users.
Queue observing at the Observatoire du Mont-Mégantic 1.6-m telescope
NASA Astrophysics Data System (ADS)
Artigau, Étienne; Lamontagne, Robert; Doyon, René; Malo, Lison
2010-07-01
Queue planning of observations and service observing are generally seen as specific to large, world-class astronomical observatories that draw proposals from a large community. One of the common grievances, justified or not, against queue planning and service observing is the fear of training a generation of astronomers without hands-on observing experience. At the Observatoire du Mont-Mégantic (OMM) 1.6-m telescope, we are developing a student-run service observing program. Queue planning and service observing are used as training tools to expose students to a variety of scientific projects and instruments beyond what they would normally use for their own research projects. The queue mode at the OMM specifically targets relatively shallow observations that can be completed in less than a few hours and are too short to justify a multi-night classical observing run.
Ray-tracing in pseudo-complex General Relativity
NASA Astrophysics Data System (ADS)
Schönenbach, T.; Caspar, G.; Hess, P. O.; Boller, T.; Müller, A.; Schäfer, M.; Greiner, W.
2014-07-01
Motivated by possible observations of the black hole candidates in the centre of our Galaxy and in the galaxy M87, ray-tracing methods are applied to both standard General Relativity (GR) and a recently proposed extension, pseudo-complex GR (pc-GR). The correction terms of the investigated pc-GR model lead to slower orbital motions close to massive objects. The concept of an innermost stable circular orbit is also modified in the pc-GR model, allowing particles to get closer to the central object than in GR for most values of the spin parameter a. Thus, the accretion disc surrounding a massive object is brighter in pc-GR than in GR. Iron Kα emission-line profiles are also calculated, as these are good observables for regions of strong gravity. Differences between the two theories are pointed out.
Hernandez, J E; Epstein, L D; Rodriguez, M H; Rodriguez, A D; Rejmankova, E; Roberts, D R
1997-03-01
We propose the use of generalized tree models (GTMs) to analyze data from entomological field studies. Generalized tree models can be used to characterize environments with different mosquito breeding capacity. A GTM simultaneously analyzes a set of predictor variables (e.g., vegetation coverage) in relation to a response variable (e.g., counts of Anopheles albimanus larvae), and how this relation varies with respect to a set of criterion variables (e.g., presence of predators). The algorithm produces a treelike graphical display with its root at the top and two branches stemming down from each node. At each node, conditions on the values of the predictors partition the observations into subgroups (environments) in which the relation between response and criterion variables is most homogeneous.
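The node-partitioning step that GTMs share with ordinary regression trees can be sketched as follows. The data, the variance-reduction criterion and the 0.5 coverage threshold are invented for illustration; a full GTM would instead score the homogeneity of the response-criterion relation within each child node:

```python
import numpy as np

def best_split(x, y):
    """Find the cut point x <= c that most reduces the summed
    within-node variance of y (one node-splitting step of a
    regression tree)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_cut, best_score = None, np.var(ys) * len(ys)
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue  # cannot cut between identical values
        left, right = ys[:i], ys[i:]
        score = np.var(left) * len(left) + np.var(right) * len(right)
        if score < best_score:
            best_cut, best_score = (xs[i - 1] + xs[i]) / 2.0, score
    return best_cut, best_score

# Illustrative data: larval counts jump when vegetation coverage
# exceeds 0.5, so the split should be found near that threshold.
rng = np.random.default_rng(1)
veg = rng.uniform(0.0, 1.0, 200)
counts = np.where(veg > 0.5, 20, 2) + rng.poisson(1, 200)
cut, _ = best_split(veg, counts.astype(float))
```

Applied recursively to each resulting subgroup, this yields the treelike display with two branches per node described above.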
Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen
2014-01-01
Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.
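The censored-covariate imputation step can be sketched with a deliberately simplified model. Estimating the biomarker distribution from the observed values alone, and the normality assumption, are simplifications made for this sketch; the paper's procedure ties the imputation model to an accelerated failure time model and a flexible error distribution:

```python
import random
import statistics

def impute_below_limit(observed, n_censored, limit, m=5, seed=0):
    """Draw m completed sets for biomarker values censored below a
    known detection limit, assuming an (approximately) normal
    biomarker. Rejection sampling keeps only draws consistent with
    the censoring (v < limit)."""
    rng = random.Random(seed)
    mu = statistics.fmean(observed)
    sd = statistics.stdev(observed)
    imputed = []
    for _ in range(m):
        draws = []
        while len(draws) < n_censored:
            v = rng.gauss(mu, sd)
            if v < limit:          # consistent with "below detection limit"
                draws.append(v)
        imputed.append(draws)
    return imputed

# hypothetical biomarker values above a detection limit of 0.5
sets = impute_below_limit([1.2, 0.8, 1.5, 2.0, 0.6, 1.1],
                          n_censored=4, limit=0.5)
```

Each of the m completed data sets would then be analyzed with the survival model, and the estimates combined across imputations in the usual multiple-imputation fashion.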
Positive state observer for the automatic control of the depth of anesthesia-Clinical results.
Nogueira, Filipa N; Mendonça, T; Rocha, P
2016-09-13
The depth of anesthesia (DoA) is a crucial feature in general anesthesia. Nowadays the DoA is usually evaluated by the bispectral index (BIS). Depending on the surgical procedure, different reference levels for the BIS may be clinically required. This can be achieved by the simultaneous administration of an analgesic (e.g. remifentanil) and a hypnotic (e.g. propofol). As a contribution to the effort of automating drug delivery in general anesthesia, a positive state observer is designed in this paper for the implementation of a control scheme proposed for the automatic administration of propofol and remifentanil, in order to track a desired level for the BIS. It is proved, and illustrated by simulations, that the controller-observer scheme performs very well. The scheme was implemented, tested and evaluated both by means of simulations and for a set of patients during surgical procedures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Sugihara, Masahiro
2010-01-01
In survival analysis, treatment effects are commonly evaluated based on survival curves and hazard ratios as causal treatment effects. In observational studies, these estimates may be biased due to confounding factors. The inverse probability of treatment weighted (IPTW) method based on the propensity score is one approach used to adjust for confounding between binary treatment groups. As a generalization of this methodology, we developed an exact formula for an IPTW log-rank test based on the generalized propensity score for survival data. This makes it possible to compare group differences of IPTW Kaplan-Meier estimators of survival curves using an IPTW log-rank test for multi-valued treatments. The hazard ratio, as a causal treatment effect, can also be estimated using the IPTW approach. If the treatments correspond to ordered levels of a treatment, the proposed method can easily be extended to the analysis of treatment-effect patterns with contrast statistics. In this paper, the proposed method is illustrated with data from the Kyushu Lipid Intervention Study (KLIS), which investigated the primary preventive effects of pravastatin on coronary heart disease (CHD). The results suggested that pravastatin treatment reduces the risk of CHD and that compliance with pravastatin treatment is important for the prevention of CHD. (c) 2009 John Wiley & Sons, Ltd.
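A minimal sketch of the binary-treatment special case follows, assuming a known propensity score, complete follow-up and synthetic data with no true treatment effect; the paper's exact formula covers multi-valued treatments with an estimated generalized propensity score:

```python
import numpy as np

def iptw_logrank(time, event, group, weight):
    """Weighted two-sample log-rank statistic: at each event time,
    compare the weighted observed events in group 1 with their
    expectation under the null, using IPTW weights."""
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        died = (time == t) & (event == 1)
        n = weight[at_risk].sum()
        n1 = weight[at_risk & (group == 1)].sum()
        d = weight[died].sum()
        d1 = weight[died & (group == 1)].sum()
        obs_minus_exp += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return obs_minus_exp / np.sqrt(var)

rng = np.random.default_rng(2)
n = 400
x = rng.standard_normal(n)                 # confounder
p = 1 / (1 + np.exp(-x))                   # true propensity score
g = (rng.uniform(size=n) < p).astype(int)  # confounded treatment
time = rng.exponential(1.0 / np.exp(0.5 * x), size=n)  # no effect of g
event = np.ones(n, dtype=int)
# stabilized IPTW weights from the (here: oracle) propensity score
w = np.where(g == 1, g.mean() / p, (1 - g.mean()) / (1 - p))
z = iptw_logrank(time, event, g, w)
```

Setting all weights to one recovers the ordinary log-rank statistic; the IPTW weights reweight each risk set toward the confounder distribution of the whole sample.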
Risk prediction for myocardial infarction via generalized functional regression models.
Ieva, Francesca; Paganoni, Anna M
2016-08-01
In this paper, we propose a generalized functional linear regression model for a binary outcome indicating the presence/absence of a cardiac disease, with multivariate functional data among the relevant predictors. In particular, the motivating aim is the analysis of electrocardiographic traces of patients whose pre-hospital electrocardiogram (ECG) has been sent to the 118 Dispatch Center of Milan (the Italian toll-free number for emergencies) by life support personnel of the basic rescue units. The statistical analysis starts with a preprocessing of the ECGs, treated as multivariate functional data. The signals are reconstructed from noisy observations, and the biological variability is then removed by a nonlinear registration procedure based on landmarks. In order to perform a data-driven dimensional reduction, a multivariate functional principal component analysis is carried out on the variance-covariance matrix of the reconstructed and registered ECGs and their first derivatives. We use the scores of the principal component decomposition as covariates in a generalized linear model to predict the presence of the disease in a new patient. Hence, a new semi-automatic diagnostic procedure is proposed to estimate the risk of infarction (in the case of interest, the probability of being affected by Left Bundle Branch Block). The performance of this classification method is evaluated and compared with other methods proposed in the literature. Finally, the robustness of the procedure is checked via leave-j-out techniques. © The Author(s) 2013.
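The scores-into-GLM pipeline can be sketched on synthetic curves. Plain PCA of discretized curves stands in for the multivariate functional PCA of registered ECGs and their derivatives, and the curves, class effect and noise levels below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 50)
n = 200
labels = rng.integers(0, 2, n)
# synthetic curves: the class shifts the amplitude of one component
curves = (np.outer(1 + 0.8 * labels, np.sin(2 * np.pi * t))
          + 0.3 * np.outer(rng.standard_normal(n), np.cos(2 * np.pi * t))
          + 0.05 * rng.standard_normal((n, 50)))

# PCA of the curves (a stand-in for multivariate functional PCA)
centered = curves - curves.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T            # leading two PC scores

# logistic regression on the scores, fitted by plain gradient ascent
X = np.column_stack([np.ones(n), scores])
beta = np.zeros(3)
for _ in range(2000):
    prob = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (labels - prob) / n
pred = (1 / (1 + np.exp(-X @ beta)) > 0.5).astype(int)
accuracy = (pred == labels).mean()
```

A new patient's curve would be centered with the training mean, projected onto the same components, and scored with the fitted coefficients.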
NASA Astrophysics Data System (ADS)
Si, H.; Koketsu, K.; Miyake, H.; Ibrahim, R.
2016-12-01
During the two major earthquakes that occurred in Kumamoto Prefecture, at 21:26 on 14 April 2016 (Mw 6.2, GCMT) and at 1:25 on 16 April 2016 (Mw 7.0, GCMT), a large number of strong ground motions were recorded, including some very close to the surface fault. In this study, we discuss the attenuation characteristics of the strong ground motions observed during these earthquakes. The data used in this study were mainly recorded by K-NET, KiK-net, Osaka University, JMA and Kumamoto Prefecture. The 5% damped acceleration response spectra (GMRotI50) are calculated based on the method proposed by Boore et al. (2006). PGA and PGV are defined as the larger of the two horizontal components. The PGA, PGV, and GMRotI50 data were corrected to the bedrock (Vs of 1.5 km/s) based on the method proposed by Si et al. (2016), using the average shear wave velocity (Vs30) and the thickness of sediments over the bedrock. The thickness is estimated from the velocity structure model provided by J-SHIS. We use a source model proposed by Koketsu et al. (2016) to calculate the fault distance and the median distance (MED), which is defined as the closest distance from a station to the median line of the fault plane (Si et al., 2014). We compared the observed PGAs, PGVs, and GMRotI50s with the GMPEs developed in Japan using MED (Si et al., 2014). The predictions by the GMPEs are generally consistent with the observations during the two Kumamoto earthquakes.
The comparison also indicated that: (1) strong motion records from the earthquake of April 14 are generally consistent with the predictions by the GMPE; however, at periods of 0.5 to 2 s, several records close to the fault plane, including the KiK-net station Mashiki (KMMH16), show larger amplitudes than the predictions; (2) for the earthquake of April 16, the PGAs and GMRotI50s at periods from 0.1 s to 0.4 s at short distances from the fault plane are slightly smaller than the predictions. On the other hand, the PGVs and GMRotI50s at periods longer than 2.5 s with MED larger than about 100 km are generally larger than the predictions, indicating weaker attenuation.
Calibration and validation of a general infiltration model
NASA Astrophysics Data System (ADS)
Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.
1999-08-01
A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method, and that its parameter So is equivalent to the potential maximum retention of the SCS-CN method, which in turn is found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe the infiltration rate with a time-varying curve number.
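The SCS-CN relation to which the parameter So is shown to be equivalent can be stated compactly. The snippet below computes direct runoff from the standard curve-number equations in metric units; it is the SCS-CN method itself, not the five-parameter general model, whose full functional form the abstract does not reproduce:

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff Q (mm) from rainfall P (mm) via the SCS-CN method.
    S = 25400/CN - 254 is the potential maximum retention in mm, the
    quantity identified above with the general model's parameter So."""
    s = 25400.0 / cn - 254.0        # potential maximum retention, mm
    ia = ia_ratio * s               # initial abstraction
    if p_mm <= ia:
        return 0.0                  # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_cn_runoff(100.0, 80.0)      # 100 mm storm on a CN = 80 catchment
```

A "time-varying curve number", as in the finding above, would amount to letting cn (equivalently S, i.e. So) evolve as storage fills during the event.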
NASA Astrophysics Data System (ADS)
Lv, Chao; Zheng, Lianqing; Yang, Wei
2012-01-01
Molecular dynamics sampling can be enhanced by promoting potential energy fluctuations, for instance with a Hamiltonian modified by the addition of a potential-energy-dependent biasing term. To overcome the diffusion sampling issue, namely that enlarging event-irrelevant energy fluctuations can destroy sampling efficiency, the essential energy space random walk (EESRW) approach was proposed earlier. To more effectively accelerate the sampling of solute conformations in aqueous environments, in the current work we generalized the EESRW method to a two-dimension-EESRW (2D-EESRW) strategy. Specifically, the essential internal energy component of a focused region and the essential interaction energy component between the focused region and the environmental region are employed to define the two-dimensional essential energy space. This proposal is motivated by the general observation that the two essential energy components have distinctive interplays in different conformational events. Model studies on the alanine dipeptide and the aspartate-arginine peptide demonstrate sampling improvement over the original one-dimension-EESRW strategy; at the same biasing level, the generalization allows more effective acceleration of the sampling of conformational transitions in aqueous solution. The 2D-EESRW generalization is readily extended to higher-dimension schemes and can be employed in more advanced enhanced-sampling schemes, such as the recent orthogonal space random walk method.
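The flavor of a random walk in a two-dimensional energy space can be conveyed with a toy model. A Wang-Landau-style flat-histogram bias on a small grid stands in for the EESRW biasing machinery, and the double-well "energy surface" is invented; this is not the 2D-EESRW algorithm itself:

```python
import math
import random

random.seed(6)
N = 20  # 20 x 20 grid standing in for the 2D essential energy space

def energy(i, j):
    # invented double well along each axis (minima at 1/4 and 3/4)
    u = (i / (N - 1) - 0.5, j / (N - 1) - 0.5)
    return sum(8 * (4 * v * v - 0.25) ** 2 for v in u)

bias = [[0.0] * N for _ in range(N)]
hist = [[0] * N for _ in range(N)]
i = j = 0
f = 1.0                               # additive bias increment
for _ in range(100000):
    di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    i2 = min(max(i + di, 0), N - 1)
    j2 = min(max(j + dj, 0), N - 1)
    dE = (energy(i2, j2) + bias[i2][j2]) - (energy(i, j) + bias[i][j])
    if dE <= 0 or random.random() < math.exp(-dE):
        i, j = i2, j2                 # Metropolis step on biased energy
    bias[i][j] += f                   # push walker away from visited states
    hist[i][j] += 1

visited = sum(1 for row in hist for h in row if h > 0)
```

The adaptive bias flattens the sampled histogram so that both wells along each axis are crossed repeatedly, which is the qualitative effect the 2D biasing aims at for conformational transitions.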
Interaction Models for Functional Regression.
Usset, Joseph; Staicu, Ana-Maria; Maity, Arnab
2016-02-01
A functional regression model with a scalar response and multiple functional predictors is proposed that accommodates two-way interactions in addition to their main effects. The proposed estimation procedure models the main effects using penalized regression splines, and the interaction effect by a tensor product basis. Extensions to generalized linear models and data observed on sparse grids or with measurement error are presented. A hypothesis testing procedure for the functional interaction effect is described. The proposed method can be easily implemented through existing software. Numerical studies show that fitting an additive model in the presence of interaction leads to both poor estimation performance and lost prediction power, while fitting an interaction model where there is in fact no interaction leads to negligible losses. The methodology is illustrated on the AneuRisk65 study data.
Reducing murder to manslaughter: whose job?
Griew, E
1986-01-01
This paper compares two versions of the diminished responsibility defence, which reduces murder to manslaughter: the present statutory formulation and a proposed reformulation. The comparison confirms that evidence such as psychiatrists are commonly invited to give in murder cases takes them beyond their proper role. Paradoxically, although the two formulations mean essentially the same thing, the proposed change of wording must have the practical effect of subduing the psychiatrist's evidence. This conclusion leads to speculation about why psychiatrists are at present allowed so large a function in diminished responsibility cases and to some general observations about the role of the expert in relation to those of judge and jury. PMID:3959035
Zhang, Zhi-Hui; Yang, Guang-Hong
2017-05-01
This paper provides a novel event-triggered fault detection (FD) scheme for discrete-time linear systems. First, an event-triggered interval observer is proposed to generate upper and lower residuals by taking into account the influence of the disturbances and the event error. Second, the robustness of the residual interval against the disturbances and the sensitivity to faults are improved by introducing l1 and H∞ performance indices. Third, dilated linear matrix inequalities are used to decouple the Lyapunov matrices from the system matrices. The nonnegativity conditions for the estimation error variables are presented with the aid of slack matrix variables. This technique allows a more general Lyapunov function to be considered. Furthermore, the FD decision scheme is proposed by monitoring whether the zero value belongs to the residual interval. It is shown that the communication burden is reduced by the event-triggering mechanism while the FD performance is still guaranteed. Finally, simulation results demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
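The residual-interval idea can be sketched for a scalar plant, assuming a known disturbance bound and an observer gain chosen so the error dynamics are order-preserving; the paper's LMI-based design, l1/H∞ shaping and event-triggering mechanism are all omitted, and the plant numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
a, c, l = 0.9, 1.0, 0.4          # scalar plant and observer gain;
                                 # a - l*c = 0.5 >= 0 keeps the error
w_bar = 0.05                     # dynamics order-preserving
x, xhat, e_bar = 1.0, 0.0, 1.0   # e_bar bounds |x - xhat| at time 0
alarms = []
for k in range(120):
    w = rng.uniform(-w_bar, w_bar)        # bounded disturbance
    fault = 0.6 if k >= 80 else 0.0       # additive sensor fault
    y = c * x + fault
    r = y - c * xhat                      # residual
    # fault-free, the residual is confined to [-c*e_bar, c*e_bar];
    # a fault is declared when it leaves that interval
    alarms.append(abs(r) > c * e_bar)
    xhat = a * xhat + l * r               # Luenberger update
    e_bar = (a - l * c) * e_bar + w_bar   # propagate the error bound
    x = a * x + w
```

Because |x - xhat| stays inside the propagated bound whenever no fault acts, the detector raises no false alarms before step 80 and fires as soon as the fault pushes the residual outside the interval.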
Digital Biomass Accumulation Using High-Throughput Plant Phenotype Data Analysis.
Rahaman, Md Matiur; Ahsan, Md Asif; Gillani, Zeeshan; Chen, Ming
2017-09-01
Biomass is an important phenotypic trait in functional ecology and growth analysis. Typical methods for measuring biomass are destructive, and they require numerous individuals to be cultivated for repeated measurements. With the advent of image-based high-throughput plant phenotyping facilities, non-destructive biomass measuring methods have attempted to overcome this problem, and estimating the biomass of individual plants from their digital images is becoming more important. In this paper, we propose an approach to biomass estimation based on image-derived phenotypic traits. Several image-based biomass studies treat plant biomass simply as a linear function of the projected plant area in images. We instead model plant volume as a function of plant area, plant compactness, and plant age, generalizing the linear biomass model. The obtained results confirm the proposed model, which explains most of the observed variance in image-derived biomass estimation. Moreover, only a small difference was observed between actual and estimated digital biomass, which indicates that the proposed approach estimates digital biomass accurately.
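The gain from adding compactness and age to an area-only model can be checked on synthetic data. The power-law ground truth and all coefficients below are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
area = rng.uniform(10.0, 100.0, n)       # projected plant area (a.u.)
compactness = rng.uniform(0.3, 0.9, n)
age = rng.uniform(5.0, 40.0, n)          # days after sowing
# invented ground truth: biomass depends on all three traits
biomass = (0.02 * area**1.1 * compactness**0.5 * age**0.3
           * np.exp(0.05 * rng.standard_normal(n)))

def r_squared(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - np.var(y - X @ coef) / np.var(y)

# log-linear fits: area alone versus area + compactness + age
y = np.log(biomass)
area_only = np.column_stack([np.ones(n), np.log(area)])
full = np.column_stack([np.ones(n), np.log(area),
                        np.log(compactness), np.log(age)])
r2_area, r2_full = r_squared(area_only, y), r_squared(full, y)
```

On such data the full model recovers nearly all the variance while the area-only model leaves the compactness- and age-driven share unexplained, mirroring the argument for the generalized model.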
Wang, Xun; Sun, Beibei; Liu, Boyang; Fu, Yaping; Zheng, Pan
2017-01-01
Experimental design focuses on describing or explaining the multifactorial interactions that are hypothesized to reflect the variation. The design introduces conditions that may directly affect the variation, where particular conditions are purposely selected for observation. Combinatorial design theory deals with the existence, construction and properties of systems of finite sets whose arrangements satisfy generalized concepts of balance and/or symmetry. In this work, borrowing the concept of "balance" from combinatorial design theory, a novel method for designing multifactorial bio-chemical experiments is proposed, in which balanced templates from combinatorial design are used to select the conditions for observation. Balanced experimental data that cover all the influencing factors can thus be obtained for further processing, for example as a training set for machine-learning models. Finally, software based on the proposed method is developed for designing experiments that cover the influencing factors a specified number of times.
Ridges on Europa: Origin by Incremental Ice-Wedging
NASA Technical Reports Server (NTRS)
Melosh, H. J.; Turtle, E. P.
2004-01-01
The surface of Europa is covered by ridges that display a variety of morphologies. The most common type is characterized by a double ridge divided by an axial trough. These ridges are, in general, narrow (typically only a few km across) and remarkably linear. They are up to a few hundred meters high, and the inner and outer slopes appear to stand at the angle of repose. A number of diverse mechanisms have been proposed to explain the formation of these ubiquitous features, although none can fully account for all of their observed characteristics. We propose a different formation theory in which accumulation of material within cracks that open during the extensional phase of the tidal cycle prevents complete closure of the cracks during the tidal cycle's compressional phase. This accumulation deforms the surrounding ice and, in time, results in the growth of a landform remarkably similar to the ridges observed on Europa.
Can chaos be observed in quantum gravity?
NASA Astrophysics Data System (ADS)
Dittrich, Bianca; Höhn, Philipp A.; Koslowski, Tim A.; Nelson, Mike I.
2017-06-01
Full general relativity is almost certainly 'chaotic'. We argue that this entails a notion of non-integrability: a generic general relativistic model, at least when coupled to cosmologically interesting matter, likely possesses neither differentiable Dirac observables nor a reduced phase space. It follows that the standard notion of observable has to be extended to include non-differentiable or even discontinuous generalized observables. These cannot carry Poisson-algebraic structures and do not admit a standard quantization; one thus faces a quantum representation problem of gravitational observables. This has deep consequences for a quantum theory of gravity, which we investigate in a simple model for a system with Hamiltonian constraint that fails to be completely integrable. We show that basing the quantization on standard topology precludes a semiclassical limit and can even prohibit any solutions to the quantum constraints. Our proposed solution to this problem is to refine topology such that a complete set of Dirac observables becomes continuous. In the toy model, it turns out that a refinement to a polymer-type topology, as e.g. used in loop gravity, is sufficient. Basing quantization of the toy model on this finer topology, we find a complete set of quantum Dirac observables and a suitable semiclassical limit. This strategy is applicable to realistic candidate theories of quantum gravity and thereby suggests a solution to a long-standing problem which implies ramifications for the very concept of quantization. Our work reveals a qualitatively novel facet of chaos in physics and opens up a new avenue of research on chaos in gravity which hints at deep insights into the structure of quantum gravity.
Hyper-Spectral Image Analysis With Partially Latent Regression and Spatial Markov Dependencies
NASA Astrophysics Data System (ADS)
Deleforge, Antoine; Forbes, Florence; Ba, Sileye; Horaud, Radu
2015-09-01
Hyper-spectral data can be analyzed to recover physical properties at large planetary scales. This involves solving inverse problems, which can be addressed with machine learning: once a relationship between physical parameters and spectra has been established in a data-driven fashion, the learned relationship can be used to estimate physical parameters for new hyper-spectral observations. Within this framework, we propose a spatially-constrained and partially-latent regression method which maps high-dimensional inputs (hyper-spectral images) onto low-dimensional responses (physical parameters such as the local chemical composition of the soil). The proposed regression model has two key features. First, it combines a Gaussian mixture of locally-linear mappings (GLLiM) with a partially-latent response model. While the former makes high-dimensional regression tractable, the latter makes it possible to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. Second, spatial constraints are introduced through a Markov random field (MRF) prior which provides a spatial structure to the Gaussian-mixture hidden variables. Experiments conducted on a database of remotely sensed observations collected from the planet Mars by the Mars Express orbiter demonstrate the effectiveness of the proposed model.
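The locally-linear inverse-regression idea can be conveyed with a crude stand-in: cluster the high-dimensional "spectra", then fit one linear map from spectrum to parameter per cluster. GLLiM instead fits the mixture jointly by EM, with covariances and partially-latent responses, and the forward model below is invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 600, 12
# invented forward model: a scalar physical parameter x generates a
# d-dimensional "spectrum" y through a piecewise-linear map plus noise
x = rng.uniform(-1.0, 1.0, n)
b0, b1 = rng.standard_normal((2, d))
amp = np.where(x < 0, 1 + 2 * x, 1 - x)
y = (amp[:, None] * b0 + 0.3 * x[:, None] * b1
     + 0.02 * rng.standard_normal((n, d)))

# plain K-means partition of the spectra
K = 4
centers = y[rng.choice(n, K, replace=False)].copy()
for _ in range(20):
    lab = np.argmin(((y[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    for k in range(K):
        if (lab == k).any():
            centers[k] = y[lab == k].mean(axis=0)

# one linear inverse map y -> x per cluster
pred = np.empty(n)
for k in range(K):
    m = lab == k
    if not m.any():
        continue
    A = np.column_stack([np.ones(m.sum()), y[m]])
    coef, *_ = np.linalg.lstsq(A, x[m], rcond=None)
    pred[m] = A @ coef
rmse = np.sqrt(np.mean((pred - x) ** 2))
```

A new spectrum would be assigned to its nearest cluster and passed through that cluster's linear map, which is the "mixture of locally-linear mappings" prediction in its simplest form.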
Advanced Earth Observation System Instrumentation Study (AEOSIS)
NASA Technical Reports Server (NTRS)
White, R.; Grant, F.; Malchow, H.; Walker, B.
1975-01-01
Various types of measurements were studied for estimating the orbit and/or attitude of an Earth Observation Satellite. An investigation was made into the use of known ground targets in the earth sensor imagery, in combination with onboard star sightings and/or range and range rate measurements by ground tracking stations or tracking satellites (TDRSS), to estimate satellite attitude, orbital ephemeris, and gyro bias drift. Generalized measurement equations were derived for star measurements with a particular type of star tracker, and for landmark measurements with a multispectral scanner being proposed for an advanced Earth Observation Satellite. The use of infra-red horizon measurements to estimate the attitude and gyro bias drift of a geosynchronous satellite was explored.
Estimating long-term multivariate progression from short-term data.
Donohue, Michael C; Jacqmin-Gadda, Hélène; Le Goff, Mélanie; Thomas, Ronald G; Raman, Rema; Gamst, Anthony C; Beckett, Laurel A; Jack, Clifford R; Weiner, Michael W; Dartigues, Jean-François; Aisen, Paul S
2014-10-01
Diseases that progress slowly are often studied by observing cohorts at different stages of disease for short periods of time. The Alzheimer's Disease Neuroimaging Initiative (ADNI) follows elders with various degrees of cognitive impairment, from normal to impaired. The study includes a rich panel of novel cognitive tests, biomarkers, and brain images collected every 6 months for as long as 6 years. The relative timing of the observations with respect to disease pathology is unknown. We propose a general semiparametric model and iterative estimation procedure to estimate simultaneously the pathological timing and long-term growth curves. The resulting estimates of long-term progression are fine-tuned using cognitive trajectories derived from the long-term "Personnes Agées Quid" study. We demonstrate with simulations that the method can recover long-term disease trends from short-term observations. The method also estimates temporal ordering of individuals with respect to disease pathology, providing subject-specific prognostic estimates of the time until onset of symptoms. When the method is applied to ADNI data, the estimated growth curves are in general agreement with prevailing theories of the Alzheimer's disease cascade. Other data sets with common outcome measures can be combined using the proposed algorithm. Software to fit the model and reproduce results with the statistical software R is available as the grace package. ADNI data can be downloaded from the Laboratory of NeuroImaging. Copyright © 2014 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Quantum probability and quantum decision-making.
Yukalov, V I; Sornette, D
2016-01-13
A rigorous general definition of quantum probability is given, which is valid not only for elementary events but also for composite events, for operationally testable measurements as well as for inconclusive measurements, and for non-commuting observables in addition to commuting observables. Our proposed definition of quantum probability makes it possible to describe quantum measurements and quantum decision-making on the same common mathematical footing. Conditions are formulated for the case when quantum decision theory reduces to its classical counterpart and for the situation where the use of quantum decision theory is necessary. © 2015 The Author(s).
NASA Astrophysics Data System (ADS)
Koelsch, Stefan
2015-06-01
I am happy about each commentator's contribution [1-27], about the wealth of the kind and generally positive comments, and the many interesting and enriching remarks, observations, and extensions. In the following, I will summarize some major points of the comments, and relate them to the Quartet Theory (henceforth QT) proposed in the target article [28].
A Non-Gaussian Stock Price Model: Options, Credit and a Multi-Timescale Memory
NASA Astrophysics Data System (ADS)
Borland, L.
We review a recently proposed model of stock prices, based on a statistical feedback model that results in a non-Gaussian distribution of price changes. Applications to option pricing and the pricing of debt are discussed. A generalization to account for feedback effects over multiple timescales is also presented. This model reproduces most of the stylized facts (i.e. statistical anomalies) observed in real financial markets.
Study of metabolism and energetics in hypogravity: Degenerative effects of prolonged hypogravity
NASA Technical Reports Server (NTRS)
Siegel, S. M.
1976-01-01
The role of gravity in the formation of rigid, lignified plant cell walls, and hence in the development of the erect land plant body, is examined. An experiment was proposed with the general hypothesis that a chosen plant, a dwarf marigold, would display degenerative changes in mechanical supportive systems under hypogravity because normal lignin-cellulose wall structure fails to develop. Observational and experimental results are given.
f(R) gravity modifications: from the action to the data
NASA Astrophysics Data System (ADS)
Lazkoz, Ruth; Ortiz-Baños, María; Salzano, Vincenzo
2018-03-01
It is a very well established matter nowadays that many modified gravity models can offer a sound alternative to General Relativity for the description of the accelerated expansion of the universe. But it is equally well known that no clear and sharp discrimination between any alternative theory and the classical one has been found so far. In this work, we attempt to formulate a different approach, starting from the general class of f(R) theories as test probes: we try to reformulate f(R) Lagrangian terms as explicit functions of the redshift, i.e., as f(z). In this context, the f(z) corresponding to the consensus cosmological model, the ΛCDM model, can be written as a polynomial including just a constant and a third-order term. Starting from this result, we propose various polynomial parameterizations f(z), including new terms which would allow for deviations from ΛCDM, and we thoroughly compare them with observational data. While on the one hand we find no statistical preference for our proposals (even if some of them are as good as ΛCDM under Bayesian evidence comparison), we think that our novel approach could provide a different perspective for the development of new and observationally reliable alternative models of gravity.
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2015-01-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions, without resorting to direct estimation of the propensity score or the outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
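The calibration-weight construction can be illustrated with one member of the class named in the abstract, exponential tilting: weights proportional to exp(λ·x) are chosen so that the weighted covariate moments of one group match a target (e.g. the combined-sample means). A minimal numpy sketch (the data, dimensions and Newton solver are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical treated-group covariates and target moments (e.g. the
# covariate means of the combined sample).
x = rng.normal(loc=0.5, scale=1.0, size=(500, 3))
m = np.zeros(3)

# Exponential tilting: weights w_i ∝ exp(lam·x_i) chosen so that the
# weighted treated covariate means equal the target m. This solves the
# convex dual f(lam) = log sum_i exp(lam·x_i) - lam·m by Newton's method.
lam = np.zeros(3)
for _ in range(50):
    a = x @ lam
    w = np.exp(a - a.max())           # numerically stabilized weights
    w /= w.sum()
    grad = x.T @ w - m                # weighted mean minus target
    xc = x - x.T @ w                  # covariates centered at weighted mean
    H = (xc * w[:, None]).T @ xc      # weighted covariance = Hessian
    lam -= np.linalg.solve(H, grad)

a = x @ lam
w = np.exp(a - a.max())
w /= w.sum()
print(np.abs(x.T @ w - m).max())      # ≈ 0: exact covariate balance
```

Empirical likelihood and generalized regression replace the exponential form with other calibration functions, but the moment-balancing constraint is the same.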
Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Obaidat, Mohammad S
2015-11-01
In order to access a remote medical server, patients generally use a smart card to log in to the server. It has been observed that most user (patient) authentication protocols suffer from smart card stolen attack, meaning that an attacker can mount several common attacks after extracting the smart card information. Recently, Lu et al. proposed a session key agreement protocol between the patient and the remote medical server and claimed that the protocol is secure against the relevant security attacks. However, this paper presents several security attacks on Lu et al.'s protocol, such as identity trace attack, new smart card issue attack, patient impersonation attack and medical server impersonation attack. In order to fix the mentioned security pitfalls, including the smart card stolen attack, this paper proposes an efficient remote mutual authentication protocol using a smart card. We have then simulated the proposed protocol using the widely-accepted AVISPA simulation tool, whose results confirm that the protocol is secure against active and passive attacks, including replay and man-in-the-middle attacks. Moreover, rigorous security analysis shows that the proposed protocol provides strong protection against the relevant security attacks, including the smart card stolen attack. We compare the proposed scheme with several related schemes in terms of computation cost and communication cost as well as security functionalities, and observe that the proposed scheme compares favorably with the related existing schemes.
Lu, Tsui-Shan; Longnecker, Matthew P.; Zhou, Haibo
2016-01-01
Outcome-dependent sampling (ODS) is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known examples are the case-control design for a binary response, the case-cohort design for failure-time data, and the general ODS design for a continuous response. While substantial work has been done for the univariate response case, statistical inference and design for ODS with multivariate outcomes remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (Multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the Multivariate-ODS design is semiparametric, with all the underlying distributions of covariates modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and derive its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the Multivariate-ODS or the estimator from a simple random sample with the same sample size. The Multivariate-ODS design, together with the proposed estimator, provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of PCB exposure with hearing loss in children born to the Collaborative Perinatal Study.
NASA Astrophysics Data System (ADS)
Gasperini, Paolo; Lolli, Barbara
2014-01-01
We show that both the argument proposed by Wason et al., namely that the conversion of magnitudes from one scale (e.g. Ms or mb) to another (e.g. Mw) using the coefficients computed by the general orthogonal regression method (Fuller) is biased if the observed values of the predictor (independent) variable are used in the conversion equation, and the methodology they suggest for estimating the supposedly true values of the predictor variable are wrong for a number of theoretical and empirical reasons. Hence, we advise against the use of their methodology for magnitude conversions.
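For context, the general orthogonal (Deming) regression estimator under discussion has a closed form once the ratio of the error variances on the two scales is fixed. A hedged numpy sketch on simulated magnitudes (the scales, noise levels and conversion coefficients here are hypothetical, not taken from either paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated true magnitudes and noisy observations on two scales.
xi = rng.uniform(4.0, 8.0, size=4000)
x = xi + 0.2 * rng.normal(size=xi.size)               # e.g. observed Ms
y = 1.0 + 0.8 * xi + 0.2 * rng.normal(size=xi.size)   # e.g. observed Mw

def gor_fit(x, y, eta=1.0):
    """General orthogonal (Deming) regression; eta is the ratio of the
    error variances var(err_y)/var(err_x). eta=1: orthogonal regression."""
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y)[0, 1]
    b1 = (syy - eta * sxx
          + np.sqrt((syy - eta * sxx) ** 2 + 4 * eta * sxy ** 2)) / (2 * sxy)
    b0 = y.mean() - b1 * x.mean()
    return b0, b1

b0, b1 = gor_fit(x, y)
print(b0, b1)   # close to the true (1.0, 0.8)
```

Note that the estimator is fit from the observed (noisy) pairs; the dispute summarized above concerns whether the observed predictor values may then be plugged into the fitted conversion equation.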
Explaining quantum correlations through evolution of causal models
NASA Astrophysics Data System (ADS)
Harper, Robin; Chapman, Robert J.; Ferrie, Christopher; Granade, Christopher; Kueng, Richard; Naoumenko, Daniel; Flammia, Steven T.; Peruzzo, Alberto
2017-04-01
We propose a framework for the systematic and quantitative generalization of Bell's theorem using causal networks. We first consider the multiobjective optimization problem of matching observed data while minimizing the causal effect of nonlocal variables and prove an inequality for the optimal region that both strengthens and generalizes Bell's theorem. To solve the optimization problem (rather than simply bound it), we develop a genetic algorithm that treats causal networks as individuals. By applying our algorithm to a photonic Bell experiment, we demonstrate the trade-off between the quantitative relaxation of one or more local causality assumptions and the ability of data to match quantum correlations.
Proposed new test of spin effects in general relativity.
O'Connell, R F
2004-08-20
The recent discovery of a double-pulsar PSR J0737-3039A/B provides an opportunity of unequivocally observing, for the first time, spin effects in general relativity. Existing efforts involve detection of the precession of the spinning body itself. However, for a close binary system, spin effects on the orbit may also be discernible. Not only do they add to the advance of the periastron (by an amount which is small compared to the conventional contribution) but they also give rise to a precession of the orbit about the spin direction. The measurement of such an effect would also give information on the moment of inertia of pulsars.
Massive parallelization of serial inference algorithms for a complex generalized linear model
Suchard, Marc A.; Simpson, Shawn E.; Zorych, Ivan; Ryan, Patrick; Madigan, David
2014-01-01
Following a series of high-profile drug safety disasters in recent years, many countries are redoubling their efforts to ensure the safety of licensed medical products. Large-scale observational databases such as claims databases or electronic health record systems are attracting particular attention in this regard, but present significant methodological and computational concerns. In this paper we show how high-performance statistical computation, including graphics processing units (relatively inexpensive, highly parallel computing devices), can enable complex methods in large databases. We focus on optimization and massive parallelization of cyclic coordinate descent approaches to fit a conditioned generalized linear model involving tens of millions of observations and thousands of predictors in a Bayesian context. We find orders-of-magnitude improvement in overall run-time. Coordinate descent approaches are ubiquitous in high-dimensional statistics, and the algorithms we propose open up exciting new methodological possibilities with the potential to significantly improve drug safety.
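The serial kernel being parallelized can be sketched as cyclic coordinate descent with a one-dimensional Newton update per coordinate for an L2-penalized (Gaussian-prior) logistic model. This is an illustrative numpy reconstruction of the general technique, not the authors' GPU code, and the data and penalty are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated logistic-regression data.
n, p = 5000, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.0, -1.0, 0.5]
prob = 1.0 / (1.0 + np.exp(-(X @ beta_true)))
y = (rng.uniform(size=n) < prob).astype(float)

def ccd_logistic(X, y, lam=1.0, sweeps=30):
    """Cyclic coordinate descent for L2-penalized logistic regression:
    one-dimensional Newton update per coordinate, cycling over columns."""
    n, p = X.shape
    beta = np.zeros(p)
    eta = X @ beta
    for _ in range(sweeps):
        for j in range(p):
            mu = 1.0 / (1.0 + np.exp(-eta))
            g = X[:, j] @ (mu - y) + lam * beta[j]        # 1-D gradient
            h = X[:, j] ** 2 @ (mu * (1 - mu)) + lam      # 1-D Hessian
            step = g / h
            beta[j] -= step
            eta -= step * X[:, j]                         # keep Xβ in sync
    return beta

beta_hat = ccd_logistic(X, y)
print(np.round(beta_hat[:3], 2))   # near the true (1.0, -1.0, 0.5)
```

The key to the massive parallelization described in the abstract is that the inner sums over observations (for g and h) are embarrassingly parallel reductions, which is what maps well onto GPUs.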
Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias
2015-01-01
The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure, i.e. the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is available. In this article, we propose an alternative method that makes it possible to incorporate prior knowledge without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared with an artificial example and with analyses of fixation durations during reading.
An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
Li, Weixuan; Lin, Guang
2015-03-21
Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.
Model diagnostics in reduced-rank estimation
Chen, Kun
2016-01-01
Reduced-rank methods are very popular in high-dimensional multivariate analysis for conducting simultaneous dimension reduction and model estimation. However, the commonly-used reduced-rank methods are not robust, as the underlying reduced-rank structure can be easily distorted by only a few data outliers. Anomalies are bound to exist in big data problems, and in some applications they themselves could be of primary interest. While naive residual analysis is often inadequate for outlier detection due to potential masking and swamping, robust reduced-rank estimation approaches can be computationally demanding. Under Stein's unbiased risk estimation framework, we propose a set of tools, including leverage scores and generalized information scores, to perform model diagnostics and outlier detection in large-scale reduced-rank estimation. The leverage scores give an exact decomposition of the so-called model degrees of freedom to the observation level, which leads to an exact decomposition of many commonly-used information criteria; the resulting quantities are thus named information scores of the observations. The proposed information score approach provides a principled way of combining the residuals and leverage scores for anomaly detection. Simulation studies confirm that the proposed diagnostic tools work well. A pattern recognition example with handwritten digit images and a time series analysis example with monthly U.S. macroeconomic data further demonstrate the efficacy of the proposed approaches.
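The idea of decomposing model degrees of freedom to the observation level is easiest to see in the classical full-rank linear model, where the leverage scores are the hat-matrix diagonal and sum exactly to the model df. The following is only the OLS analogue of the paper's reduced-rank scores, with a hypothetical contaminated design:

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear-model design with one gross, high-leverage observation.
n, p = 100, 4
X = rng.normal(size=(n, p))
X[0] *= 8.0                                   # contaminated row

# Diagonal of the hat matrix: the leverage of each observation.
H = X @ np.linalg.solve(X.T @ X, X.T)
lev = np.diag(H)

# Leverages decompose the model degrees of freedom observation by
# observation: their sum equals trace(H) = p.
print(lev.sum())      # ≈ 4 (= p)
print(lev.argmax())   # the outlying row carries the largest share
```

In the reduced-rank setting the hat operator is nonlinear in the data, which is why the paper derives the decomposition through Stein's unbiased risk estimation rather than from a fixed projection matrix.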
NASA Astrophysics Data System (ADS)
Shababi, Homa; Chung, Won Sang
2018-04-01
In this paper, using the new type of D-dimensional nonperturbative Generalized Uncertainty Principle (GUP), which predicts both a minimal length uncertainty and a maximal observable momentum, we first obtain the maximally localized states and relate them to [P. Pedram, Phys. Lett. B 714, 317 (2012)]. Then, in the context of our proposed GUP and using the generalized Schrödinger equation, we solve some important problems, including the particle in a box and the one-dimensional hydrogen atom. Next, applying the modified Bohr-Sommerfeld quantization, we obtain the energy spectra of the quantum harmonic oscillator and the quantum bouncer. Finally, as an example, we investigate some statistical properties of a free particle, including the partition function and internal energy, in the presence of the mentioned GUP.
The generalized drift flux approach: Identification of the void-drift closure law
NASA Technical Reports Server (NTRS)
Boure, J. A.
1989-01-01
The main characteristics and the potential advantages of generalized drift flux models are presented. In particular, it is stressed that the issues of the propagation properties and the mathematical nature (hyperbolic or not) of the model, as well as the problem of closure, are easier to tackle than in two-fluid models. The problem of identifying the differential void-drift closure law inherent to generalized drift flux models is then addressed. Such a void-drift closure, based on wave properties, is proposed for bubbly flows. It involves a drift relaxation time which is of the order of 0.25 s. It is observed that, although wave properties provide essential closure validity tests, they do not represent an easily usable source of quantitative information on the closure laws.
Cosmological power spectrum in a noncommutative spacetime
NASA Astrophysics Data System (ADS)
Kothari, Rahul; Rath, Pranati K.; Jain, Pankaj
2016-09-01
We propose a generalized star product that deviates from the standard one when the fields are considered at different spacetime points by introducing a form factor in the standard star product. We also introduce a recursive definition by which we calculate the explicit form of the generalized star product at any number of spacetime points. We show that our generalized star product is associative and cyclic at linear order. As a special case, we demonstrate that our recursive approach can be used to prove the associativity of standard star products for same or different spacetime points. The introduction of a form factor has no effect on the standard Lagrangian density in a noncommutative spacetime because it reduces to the standard star product when spacetime points become the same. We show that the generalized star product leads to physically consistent results and can fit the observed data on hemispherical anisotropy in the cosmic microwave background radiation.
Universal effect of dynamical reinforcement learning mechanism in spatial evolutionary games
NASA Astrophysics Data System (ADS)
Zhang, Hai-Feng; Wu, Zhi-Xi; Wang, Bing-Hong
2012-06-01
One of the prototypical mechanisms in understanding the ubiquitous cooperation in social dilemma situations is the win-stay, lose-shift rule. In this work, a generalized win-stay, lose-shift learning model, i.e. a reinforcement learning model with a dynamic aspiration level, is proposed to describe how humans adapt their social behaviors based on their social experiences. In the model, the players incorporate the information of the outcomes in previous rounds with time-dependent aspiration payoffs to regulate the probability of choosing cooperation. By investigating such a reinforcement learning rule in the spatial prisoner's dilemma game and the public goods game, the most noteworthy finding is that moderate greediness (i.e. a moderate aspiration level) best favors the development and organization of collective cooperation. The generality of this observation is tested against different regulation strengths and different types of interaction network as well. We also make comparisons with two recently proposed models to highlight the importance of the mechanism of an adaptive aspiration level in supporting cooperation in structured populations.
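A toy two-player version of such an aspiration-driven win-stay, lose-shift rule can be sketched as follows. The payoff values, learning rate and habituation rate are hypothetical, and the spatial network structure of the paper's model is omitted:

```python
import numpy as np

rng = np.random.default_rng(6)

# Prisoner's dilemma payoffs: R(eward), S(ucker), T(emptation), P(unishment).
R, S, T, P = 1.0, -0.5, 1.5, 0.0
payoff = {("C", "C"): (R, R), ("C", "D"): (S, T),
          ("D", "C"): (T, S), ("D", "D"): (P, P)}

p = np.array([0.5, 0.5])    # each player's probability of cooperating
asp = np.array([0.5, 0.5])  # time-dependent aspiration levels
alpha, h = 0.2, 0.1         # learning rate, aspiration habituation rate

for t in range(2000):
    acts = ["C" if rng.uniform() < pi else "D" for pi in p]
    pay = payoff[(acts[0], acts[1])]
    for i in range(2):
        # win-stay/lose-shift: reinforce the chosen action if the payoff
        # exceeded the aspiration, weaken it otherwise
        stimulus = np.tanh(pay[i] - asp[i])
        d = alpha * stimulus if acts[i] == "C" else -alpha * stimulus
        p[i] = np.clip(p[i] + d, 0.01, 0.99)
        # aspiration tracks a running average of experienced payoffs
        asp[i] = (1 - h) * asp[i] + h * pay[i]

print(np.round(p, 2), np.round(asp, 2))
```

The habituation rate h plays the role of the "greediness" regulation: it controls how quickly the aspiration chases recent payoffs, which in the spatial versions of the model determines whether cooperation can organize.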
Bayesian Correction for Misclassification in Multilevel Count Data Models.
Nelson, Tyler; Song, Joon Jin; Chin, Yoo-Mi; Stamey, James D
2018-01-01
Covariate misclassification is well known to yield biased estimates in single level regression models. The impact on hierarchical count models has been less studied. A fully Bayesian approach to modeling both the misclassified covariate and the hierarchical response is proposed. Models with a single diagnostic test and with multiple diagnostic tests are considered. Simulation studies show the ability of the proposed model to appropriately account for the misclassification by reducing bias and improving performance of interval estimators. A real data example further demonstrated the consequences of ignoring the misclassification. Ignoring misclassification yielded a model that indicated there was a significant, positive impact on the number of children of females who observed spousal abuse between their parents. When the misclassification was accounted for, the relationship switched to negative, but not significant. Ignoring misclassification in standard linear and generalized linear models is well known to lead to biased results. We provide an approach to extend misclassification modeling to the important area of hierarchical generalized linear models.
A Solution to the Cosmic Conundrum including Cosmological Constant and Dark Energy Problems
NASA Astrophysics Data System (ADS)
Singh, A.
2009-12-01
A comprehensive solution to the cosmic conundrum is presented that also resolves key paradoxes of quantum mechanics and relativity. A simple mathematical model, the Gravity Nullification model (GNM), is proposed that integrates the missing physics of the spontaneous relativistic conversion of mass to energy into the existing physics theories, specifically a simplified general theory of relativity. Mechanistic mathematical expressions are derived for a relativistic universe expansion, which predict both the observed linear Hubble expansion in the nearby universe and the accelerating expansion exhibited by the supernova observations. The integrated model addresses the key questions haunting physics and Big Bang cosmology. It also provides a fresh perspective on the misconceived birth and evolution of the universe, especially the creation and dissolution of matter. The proposed model eliminates singularities from existing models and the need for the incredible and unverifiable assumptions including the superluminous inflation scenario, multiple universes, multiple dimensions, Anthropic principle, and quantum gravity. GNM predicts the observed features of the universe without any explicit consideration of time as a governing parameter.
Measurement Matrix Design for Phase Retrieval Based on Mutual Information
NASA Astrophysics Data System (ADS)
Shlezinger, Nir; Dabora, Ron; Eldar, Yonina C.
2018-01-01
In phase retrieval problems, a signal of interest (SOI) is reconstructed based on the magnitude of a linear transformation of the SOI observed with additive noise. The linear transform is typically referred to as a measurement matrix. Many works on phase retrieval assume that the measurement matrix is a random Gaussian matrix, which, in the noiseless scenario with sufficiently many measurements, guarantees invertibility of the transformation between the SOI and the observations, up to an inherent phase ambiguity. However, in many practical applications, the measurement matrix corresponds to an underlying physical setup, and is therefore deterministic, possibly with structural constraints. In this work we study the design of deterministic measurement matrices, based on maximizing the mutual information between the SOI and the observations. We characterize necessary conditions for the optimality of a measurement matrix, and analytically obtain the optimal matrix in the low signal-to-noise ratio regime. Practical methods for designing general measurement matrices and masked Fourier measurements are proposed. Simulation tests demonstrate the performance gain achieved by the proposed techniques compared to random Gaussian measurements for various phase recovery algorithms.
NASA Astrophysics Data System (ADS)
Tayebi, A.; Shekari, Y.; Heydari, M. H.
2017-07-01
Several physical phenomena, such as the transport of pollutants, energy, particles, and many other quantities, can be described by the well-known convection-diffusion equation, which combines the diffusion and advection equations. In this paper, this equation is generalized with the concept of variable-order fractional derivatives; the generalized equation is called the variable-order time fractional advection-diffusion equation (V-OTFA-DE). An accurate and robust meshless method based on the moving least squares (MLS) approximation and a finite difference scheme is proposed for its numerical solution on two-dimensional (2-D) arbitrary domains. A finite difference technique with a θ-weighted scheme is employed in the time domain, and the MLS approximation in the space domain, to obtain appropriate semi-discrete solutions. Since the newly developed method is a meshless approach, it does not require any background mesh structure for the problem under consideration, and the numerical solutions are constructed entirely from a set of scattered nodes. The proposed method is validated on three examples, including two benchmark problems and an applied problem of pollutant distribution in the atmosphere. In all cases, the obtained results show that the proposed method is accurate and robust. Moreover, a remarkable property, a so-called positive scheme, is observed for the proposed method in solving concentration transport phenomena.
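The θ-weighted time stepping is a standard ingredient; a minimal 1-D sketch (plain finite differences in space, standing in for the paper's 2-D MLS discretization; the grid sizes and diffusivity are illustrative) looks like:

```python
import numpy as np

# theta-weighted time stepping for u_t = D u_xx on [0, 1] with Dirichlet BCs.
# theta = 0: explicit Euler, theta = 0.5: Crank-Nicolson, theta = 1: implicit Euler.
def theta_step(u, D, dx, dt, theta):
    n = len(u)
    L = np.zeros((n, n))           # discrete Laplacian (interior rows only,
    for i in range(1, n - 1):      # so boundary values are held fixed)
        L[i, i - 1] = L[i, i + 1] = 1.0 / dx**2
        L[i, i] = -2.0 / dx**2
    I = np.eye(n)
    # (I - dt*theta*D*L) u_new = (I + dt*(1 - theta)*D*L) u_old
    return np.linalg.solve(I - dt * theta * D * L,
                           (I + dt * (1 - theta) * D * L) @ u)

x = np.linspace(0, 1, 51)
u = np.sin(np.pi * x)              # exact solution decays like exp(-pi^2 D t)
for _ in range(100):               # integrate to t = 0.1
    u = theta_step(u, D=0.1, dx=x[1] - x[0], dt=0.001, theta=0.5)
```

θ = 0.5 gives the unconditionally stable, second-order Crank-Nicolson scheme; θ = 1 recovers fully implicit Euler.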
Updating the OMERACT filter: core areas as a basis for defining core outcome sets.
Kirwan, John R; Boers, Maarten; Hewlett, Sarah; Beaton, Dorcas; Bingham, Clifton O; Choy, Ernest; Conaghan, Philip G; D'Agostino, Maria-Antonietta; Dougados, Maxime; Furst, Daniel E; Guillemin, Francis; Gossec, Laure; van der Heijde, Désirée M; Kloppenburg, Margreet; Kvien, Tore K; Landewé, Robert B M; Mackie, Sarah L; Matteson, Eric L; Mease, Philip J; Merkel, Peter A; Ostergaard, Mikkel; Saketkoo, Lesley Ann; Simon, Lee; Singh, Jasvinder A; Strand, Vibeke; Tugwell, Peter
2014-05-01
The Outcome Measures in Rheumatology (OMERACT) Filter provides guidelines for the development and validation of outcome measures for use in clinical research. The "Truth" section of the OMERACT Filter presupposes an explicit framework for identifying the relevant core outcomes that are universal to all studies of the effects of interventions. There is no published outline for instrument choice or development that is aimed at measuring outcome, derived from broad consensus on its underlying philosophy, or that includes a structured and documented critique. Therefore, a new proposal for defining core areas of measurement ("Filter 2.0 Core Areas of Measurement") was presented at OMERACT 11 to explore areas of consensus and to consider whether already endorsed core outcome sets fit into this newly proposed framework. Discussion groups critically reviewed the extent to which case studies of current OMERACT Working Groups complied with or negated the proposed framework, whether these observations had a more general application, and what issues remained to be resolved. Although there was broad acceptance of the framework in general, several important areas of construction, presentation, and clarity of the framework were questioned. The discussion groups and subsequent feedback highlighted 20 such issues. These issues will require resolution to reach consensus on accepting the proposed Filter 2.0 framework of Core Areas as the basis for the selection of Core Outcome Domains and hence appropriate Core Outcome Sets for clinical trials.
Regional variations in the observed morphology and activity of martian linear gullies
NASA Astrophysics Data System (ADS)
Morales, Kimberly Marie; Diniega, Serina; Austria, Mia; Ochoa, Vincent; HiRISE Science and Instrument Team
2017-10-01
The formation mechanism for martian linear gullies has been much debated, because these features have been suggested as possible evidence of liquid water on Mars. This class of dune gullies is defined by long (up to 2 km), narrow channels that are relatively uniform in width and range in sinuosity index. Unlike other gullies on Earth and Mars that end in depositional aprons, linear gullies end in circular depressions referred to as terminal pits. This morphological difference, along with the difficulty of identifying a source of water to form these features, has led to several 'dry' hypotheses. Recent observations of the morphology, distribution, and present-day activity of linear gullies suggest that they could be formed by subliming blocks of seasonal CO2 ice ("dry ice") sliding downslope on dune faces. In our study, we aimed to further constrain the possible mechanism(s) responsible for the formation of linear gullies by using HiRISE images to collect morphological data and track seasonal activity over the last four Mars years within three southern-hemisphere regions: Hellespontus (~45°S, 40°E), Aonia Terra (~50°S, 290°E), and Jeans (~70°S, 155°E). General similarities in these observations were consistent with the proposed formation process (sliding CO2 blocks), while differences were correlated with regional environmental conditions related to latitude or the general geologic setting. This presentation describes the observed regional differences in linear gully morphology and activity, and investigates how environmental factors such as surface properties and local levels of frost may explain these variations while still supporting the proposed model. Determining the mechanism that forms these features can improve our understanding of both the climatic and geological processes that shape the martian surface.
Multisetting Greenberger-Horne-Zeilinger paradoxes
NASA Astrophysics Data System (ADS)
Tang, Weidong; Yu, Sixia; Oh, C. H.
2017-01-01
The Greenberger-Horne-Zeilinger (GHZ) paradox provides an all-versus-nothing test of quantum nonlocality. In most of the GHZ paradoxes known so far, each observer is allowed to measure only two alternative observables. Here we present a general construction for GHZ paradoxes in which each observer measures more than two observables, given that the system is prepared in the n-qudit GHZ state. By doing so we are able to construct a multisetting GHZ paradox for the n-qubit GHZ state, with n being arbitrary, that is genuinely n-partite; i.e., no GHZ paradox exists when restricted to a subset of the observers for a given set of Mermin observables. Our result fills the gap left by the absence of a genuine GHZ paradox for the GHZ state of an even number of qubits, in particular the four-qubit GHZ state used in GHZ's original proposal.
Accelerating assimilation development for new observing systems using EFSO
NASA Astrophysics Data System (ADS)
Lien, Guo-Yuan; Hotta, Daisuke; Kalnay, Eugenia; Miyoshi, Takemasa; Chen, Tse-Chun
2018-03-01
To successfully assimilate data from a new observing system, it is necessary to develop appropriate data selection strategies, assimilating only the generally useful data. This development work is usually done by trial and error using observing system experiments (OSEs), which are very time and resource consuming. This study proposes a new, efficient methodology to accelerate the development using ensemble forecast sensitivity to observations (EFSO). First, non-cycled assimilation of the new observation data is conducted to compute EFSO diagnostics for each observation within a large sample. Second, the average EFSO conditionally sampled in terms of various factors is computed. Third, potential data selection criteria are designed based on the non-cycled EFSO statistics, and tested in cycled OSEs to verify the actual assimilation impact. The usefulness of this method is demonstrated with the assimilation of satellite precipitation data. It is shown that the EFSO-based method can efficiently suggest data selection criteria that significantly improve the assimilation results.
Stochastic search, optimization and regression with energy applications
NASA Astrophysics Data System (ADS)
Hannah, Lauren A.
Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression, and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem, which avoids the sequential decision process associated with the multi-stage version. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values, which depend on the selected portfolio, to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet process mixtures of generalized linear models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response.
We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate DP-GLM on several data sets, comparing it to modern methods of nonparametric regression like CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable which affects the shape of the objective function. Currently, there is no general purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel based weights and more generally that nonparametric estimation methods provide good solutions to otherwise intractable problems.
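The state-dependent stochastic search idea, weighting observed state-outcome pairs by their proximity to a query state, can be sketched on a toy newsvendor problem (kernel weights only; the prices, demand model, and bandwidth below are illustrative inventions, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(3)
# State-dependent newsvendor: demand depends on an observed exogenous state s.
S = rng.uniform(0, 1, 2000)                        # historical states
demand = 50 + 40 * S + 5 * rng.normal(size=2000)   # historical demand outcomes

def kernel_newsvendor(query_s, states, demands, price=3.0, cost=1.0, h=0.05):
    # Kernel weights concentrate the empirical demand distribution near the
    # query state; the optimal order is the critical-fractile quantile of
    # that weighted distribution.
    w = np.exp(-0.5 * ((states - query_s) / h) ** 2)
    w /= w.sum()
    q = (price - cost) / price                     # critical fractile (here 2/3)
    idx = np.argsort(demands)
    return np.interp(q, np.cumsum(w[idx]), demands[idx])

order = kernel_newsvendor(0.5, S, demand)          # order quantity for state 0.5
```

Dirichlet process-based weights, the thesis's alternative, would replace the Gaussian kernel weights `w` while leaving the weighted-quantile step unchanged.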
An empirical model to forecast solar wind velocity through statistical modeling
NASA Astrophysics Data System (ADS)
Gao, Y.; Ridley, A. J.
2013-12-01
The accurate prediction of the solar wind velocity has been a major challenge in the space weather community. Previous studies proposed many empirical and semi-empirical models to forecast the solar wind velocity based on either historical observations, e.g. the persistence model, or instantaneous observations of the Sun, e.g. the Wang-Sheeley-Arge model. In this study, we use the one-minute WIND data from January 1995 to August 2012 to investigate and compare the performances of four models often used in the literature, here referred to as the null model, the persistence model, the one-solar-rotation-ago model, and the Wang-Sheeley-Arge model. It is found that, measured by root mean square error, the persistence model gives the most accurate predictions within two days. Beyond two days, the Wang-Sheeley-Arge model serves as the best model, though it only slightly outperforms the null model and the one-solar-rotation-ago model. Finally, we apply least-squares regression to linearly combine the null model, the persistence model, and the one-solar-rotation-ago model into a 'general persistence model'. By comparing its performance against the four aforementioned models, it is found that the general persistence model outperforms the other four models within five days. Due to its great simplicity and superb performance, we believe that the general persistence model can serve as a benchmark in the forecast of solar wind velocity and has the potential to be modified to arrive at better models.
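The final blending step is ordinary least-squares regression on component forecasts; a sketch with synthetic data follows (the three predictors are simplified stand-ins for the paper's null, persistence, and one-rotation-ago models, and the toy "solar wind" series is invented):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
# Toy solar-wind-speed series with a quasi-periodic component plus noise.
v = 400 + 50 * np.sin(np.arange(T) / 27.0) + 10 * rng.normal(size=T)

# Three simple component predictors (illustrative stand-ins):
null_model = np.full(T, v.mean())       # climatological mean
persistence = np.roll(v, 1)             # value one step ago (wraps at t=0; toy only)
rotation_ago = np.roll(v, 27)           # value one "solar rotation" ago

# Least-squares blend: find weights minimizing squared forecast error.
X = np.column_stack([null_model, persistence, rotation_ago])
w, *_ = np.linalg.lstsq(X, v, rcond=None)
combined = X @ w

def rmse(pred, truth):
    return np.sqrt(np.mean((pred - truth) ** 2))

# On the fitting sample, the blend cannot do worse than any single component,
# since each component is itself a feasible weight vector.
assert rmse(combined, v) <= rmse(persistence, v) + 1e-9
```

In practice the weights would be fit on a training period and verified out of sample, as the paper does over 1995-2012.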
Code of Federal Regulations, 2010 CFR
2010-07-01
... What information concerning a proposed disposal must a disposal agency provide to the Attorney General to determine the applicability of antitrust laws? The disposal agency must promptly provide the Attorney General...
Observations of the Eclipsing Millisecond Pulsar
NASA Astrophysics Data System (ADS)
Bookbinder, Jay
1990-12-01
Fruchter et al. (1988a) have recently discovered a 1.6 msec pulsar (PSR 1957+20) in a 9.2 hour eclipsing binary system. The unusual behavior of the dispersion measure as a function of orbital phase, and the disappearance of the pulsar signal for 50 minutes during each orbit, imply that the eclipses are due to a pulsar-induced wind flowing off of the companion. The optical counterpart is a 21st magnitude object which varies in intensity over the binary period; accurate ground-based observations are prevented by the proximity (0.7") of a 20th magnitude K dwarf. We propose to observe the optical counterpart in a two-part study. First, the WF/PC will provide accurate multicolor photometry, enabling us to determine uncontaminated magnitudes and colors both at maximum (anti-eclipse) and at minimum (eclipse). Second, we propose to observe the expected UV line emission with FOS, allowing for an initial determination of the temperature and density structure and abundances of the wind that is being ablated from the companion. Study of this unique system holds enormous potential for the understanding of the radiation field of a millisecond pulsar and the evolution of LMXRBs and MSPs in general. We expect these observations to place very significant constraints on models of this unique object.
NASA Astrophysics Data System (ADS)
Mitchell, Myles A.; He, Jian-hua; Arnold, Christian; Li, Baojiu
2018-06-01
We propose a new framework for testing gravity using cluster observations, which aims to provide an unbiased constraint on modified gravity models from Sunyaev-Zel'dovich (SZ) and X-ray cluster counts and the cluster gas fraction, among other possible observables. Focusing on a popular f(R) model of gravity, we propose a novel procedure to recalibrate mass scaling relations from Λ cold dark matter (ΛCDM) to f(R) gravity for SZ and X-ray cluster observables. We find that the complicated modified gravity effects can be simply modelled as a dependence on a combination of the background scalar field and redshift, fR(z)/(1 + z), regardless of the f(R) model parameter. By employing a large suite of N-body simulations, we demonstrate that a theoretically derived tanh fitting formula is in excellent agreement with the dynamical mass enhancement of dark matter haloes for a large range of background field parameters and redshifts. Our framework is sufficiently flexible to allow for tests of other models and inclusion of further observables, and the one-parameter description of the dynamical mass enhancement can have important implications for the theoretical modelling of observables and for practical tests of gravity.
Location- and lesion-dependent estimation of mammographic background tissue complexity
Avanaki, Ali; Espig, Kathryn; Kimpe, Tom
2017-01-01
We specify a notion of perceived background tissue complexity (BTC) that varies with lesion shape, lesion size, and lesion location in the image. We propose four unsupervised BTC estimators based on: perceived pre and postlesion similarity of images, lesion border analysis (LBA; conspicuous lesion should be brighter than its surround), tissue anomaly detection, and local energy. The latter two are existing methods adapted for location- and lesion-dependent BTC estimation. For evaluation, we ask human observers to measure BTC (threshold visibility amplitude of a given lesion inserted) at specified locations in a mammogram. As expected, both human measured and computationally estimated BTC vary with lesion shape, size, and location. BTCs measured by different human observers are correlated (ρ=0.67). BTC estimators are correlated to each other (0.84<ρ<0.95) and less so to human observers (ρ≤0.81). With change in lesion shape or size, LBA estimated BTC changes in the same direction as human measured BTC. Proposed estimators can be generalized to other modalities (e.g., breast tomosynthesis) and used as-is or customized to a specific human observer, to construct BTC-aware model observers with applications, such as optimization of contrast-enhanced medical imaging systems and creation of a diversified image dataset with characteristics of a desired population. PMID:28097214
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
A general framework of noise suppression in material decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu
Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance–covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance–covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements.
On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors' method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction and reduces noise level without noticeable resolution loss.
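The core estimator, a data-fidelity term weighted by the inverse noise variance plus a smoothness penalty, can be sketched in one dimension (a toy stand-in, not the authors' full DECT pipeline; the noise level, penalty weight, and phantom below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
truth = np.where(np.arange(n) > 100, 1.0, 0.2)   # toy 1-D "material image"
noisy = truth + 0.3 * rng.normal(size=n)         # decomposition amplifies noise

# Penalized least squares: min_x  W * ||x - noisy||^2 + lam * ||D x||^2,
# where W plays the role of the inverse noise variance (a scalar here; a full
# variance-covariance matrix in the paper's BLUE-style formulation).
W = 1.0 / 0.3**2
D = np.diff(np.eye(n), axis=0)                   # first-difference (smoothness) operator
lam = 5.0
# Quadratic objective => closed-form solution of the normal equations:
x = np.linalg.solve(W * np.eye(n) + lam * D.T @ D, W * noisy)

# The smoothed estimate should be closer to the truth than the raw noisy input.
assert np.mean((x - truth) ** 2) < np.mean((noisy - truth) ** 2)
```

In the paper the analogous problem is solved iteratively on 2-D images with a per-pixel covariance-based weight, but the structure of the objective is the same.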
Mathematically guided approaches to distinguish models of periodic patterning
Hiscock, Tom W.; Megason, Sean G.
2015-01-01
How periodic patterns are generated is an open question. A number of mechanisms have been proposed – most famously, Turing's reaction-diffusion model. However, many theoretical and experimental studies focus on the Turing mechanism while ignoring other possible mechanisms. Here, we use a general model of periodic patterning to show that different types of mechanism (molecular, cellular, mechanical) can generate qualitatively similar final patterns. Observation of final patterns is therefore not sufficient to favour one mechanism over others. However, we propose that a mathematical approach can help to guide the design of experiments that can distinguish between different mechanisms, and illustrate the potential value of this approach with specific biological examples. PMID:25605777
Life Cycle Cost Assessments for Military Transatmospheric Vehicles,
1997-01-01
earth orbit (GEO) that fall within the Titan-IV heavy launch vehicle (HLV) class are outside the practical design limits for a marketable RLV SSTO ... information is from the RAND-hosted TAV Workshop. Three SSTO concepts for X-33 were proposed during Phase I, all with either different takeoff or landing ... 1996 indicated some observed general differences in vehicles depending on the launch and landing modes: • Single stage to orbit (SSTO) TAVs for
Titan 4 TPS Replacement Implementation Study
NASA Technical Reports Server (NTRS)
Jackson, Charles H.
1996-01-01
This final report documents the overall progress of the study. It is a general discussion of the documents reviewed, recommendations, trips taken, findings/observations, and proposed corrective actions. In addition, cost data for the contract is addressed. The normal abstract and executive summary provided with most final reports are also provided as a part of this report. A conclusion section addresses the relative completeness of the Titan 4 TPSR project and this contract.
A 2D MTF approach to evaluate and guide dynamic imaging developments.
Chao, Tzu-Cheng; Chung, Hsiao-Wen; Hoge, W Scott; Madore, Bruno
2010-02-01
As the number and complexity of partially sampled dynamic imaging methods continue to increase, reliable strategies to evaluate performance may prove most useful. In the present work, an analytical framework to evaluate given reconstruction methods is presented. A perturbation algorithm allows the proposed evaluation scheme to perform robustly without requiring knowledge about the inner workings of the method being evaluated. A main output of the evaluation process consists of a two-dimensional modulation transfer function, an easy-to-interpret visual rendering of a method's ability to capture all combinations of spatial and temporal frequencies. Approaches to evaluate noise properties and artifact content at all spatial and temporal frequencies are also proposed. One fully sampled phantom and three fully sampled cardiac cine datasets were subsampled (R = 4 and 8) and reconstructed with the different methods tested here. A hybrid method, which combines the main advantageous features observed in our assessments, was proposed and tested in a cardiac cine application, with acceleration factors of 3.5 and 6.3 (skip factors of 4 and 8, respectively). This approach combines features from methods such as k-t sensitivity encoding (k-t SENSE); unaliasing by Fourier encoding the overlaps in the temporal dimension combined with sensitivity encoding (UNFOLD-SENSE); generalized autocalibrating partially parallel acquisition (GRAPPA); sensitivity profiles from an array of coils for encoding and reconstruction in parallel (SPACE RIP); self- and hybrid-referenced UNFOLD and GRAPPA; and GRAPPA-enhanced sensitivity maps for SENSE reconstructions.
Gravity-wave spectra in the atmosphere observed by MST radar, part 4.2B
NASA Technical Reports Server (NTRS)
Scheffler, A. O.; Liu, C. H.
1984-01-01
A universal spectrum of atmospheric buoyancy waves is proposed based on data from radiosonde, Doppler navigation, hot-wire anemometer, and Jimsphere balloon measurements. The possible existence of such a universal spectrum clearly will have significant impact on several areas in the study of middle atmosphere dynamics, such as the parameterization of sub-grid scale gravity waves in global circulation models and the transport of trace constituents and heat in the middle atmosphere. Therefore, it is important to examine more global wind data with temporal and spatial resolutions suitable for the investigation of the wave spectra. Mesosphere-stratosphere-troposphere (MST) radar observations offer an excellent opportunity for such studies. It is important to realize that radar measures the line-of-sight velocity, which, in general, contains a combination of the vertical and horizontal components of the wave-associated particle velocity. Starting from a general oblique radar observation configuration and applying the dispersion relation for gravity waves, the spectrum of the observed fluctuations in the line-of-sight gravity-wave spectrum is investigated through a filter function. The consequence of the filter function for data analysis is discussed.
Very high energy gamma ray astronomy
NASA Technical Reports Server (NTRS)
Grindlay, J. E.
1976-01-01
Recent results in ground based very high energy gamma ray astronomy are reviewed. The various modes of the atmospheric Cerenkov technique are described, and the importance of cosmic ray rejection methods is stressed. The positive detections of the Crab pulsar that suggest a very flat spectrum and time-variable pulse phase are discussed. Observations of other pulsars (particularly Vela) suggest these features may be general. Evidence that a 4.8 hr modulated effect was detected from Cyg X-3 is strengthened in that the exact period originally proposed agrees well with a recent determination of the X-ray period. The southern sky observations are reviewed, and the significance of the detection of an active galaxy (NGC 5128) is considered for source models and future observations.
Supersymmetric leptogenesis with a light hidden sector
NASA Astrophysics Data System (ADS)
De Simone, Andrea; Garny, Mathias; Ibarra, Alejandro; Weniger, Christoph
2010-07-01
Supersymmetric scenarios incorporating thermal leptogenesis as the origin of the observed matter-antimatter asymmetry generically predict abundances of the primordial elements which are in conflict with observations. In this paper we propose a simple way to circumvent this tension and naturally accommodate both thermal leptogenesis and primordial nucleosynthesis. We postulate the existence of a light hidden sector, coupled very weakly to the Minimal Supersymmetric Standard Model, which opens up new decay channels for the next-to-lightest supersymmetric particle, thus diluting its abundance during nucleosynthesis. We present a general model-independent analysis of this mechanism as well as two concrete realizations, and describe the relevant cosmological and astrophysical bounds and implications for this dark matter scenario. Possible experimental signatures at colliders and in cosmic-ray observations are also discussed.
Suggested notation conventions for rotational seismology
Evans, J.R.
2009-01-01
We note substantial inconsistency among authors discussing rotational motions observed with inertial seismic sensors (and much more so in the broader topic of rotational phenomena). Working from physics and other precedents, we propose standard terminology and a preferred reference frame for inertial sensors (Fig. 1) that may be consistently used in discussions of both finite and infinitesimal observed rotational and translational motions in seismology and earthquake engineering. The scope of this article is limited to observations because there are significant differences in the analysis of finite and infinitesimal rotations, though such discussions should remain compatible with those presented here where possible. We recommend the general use of the notation conventions presented in this tutorial, and we recommend that any deviations or alternatives be explicitly defined.
Multi-fidelity Gaussian process regression for prediction of random fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parussini, L.; Venturi, D., E-mail: venturi@ucsc.edu; Perdikaris, P.
We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
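The recursive idea above can be illustrated with a deliberately simplified two-fidelity sketch (not the authors' implementation): a GP is fit to many cheap low-fidelity samples, a second GP is fit to the discrepancy at the few high-fidelity points, and the two posterior means are summed. The functions, sample sizes and length-scales here are invented for illustration.

```python
import numpy as np

def rbf(X1, X2, ell):
    # squared-exponential covariance between two 1-D point sets
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(Xtr, ytr, Xte, ell, jitter=1e-6):
    # posterior mean of a zero-mean GP conditioned on (Xtr, ytr)
    K = rbf(Xtr, Xtr, ell) + jitter * np.eye(len(Xtr))
    return rbf(Xte, Xtr, ell) @ np.linalg.solve(K, ytr)

# hypothetical surrogate hierarchy: a cheap biased model and the "true" field
f_lo = lambda x: np.sin(8 * x)            # low fidelity, many samples
f_hi = lambda x: np.sin(8 * x) + x        # high fidelity, few samples

X_lo = np.linspace(0, 1, 25)
X_hi = np.linspace(0, 1, 5)
Xte = np.linspace(0, 1, 101)

# recursive step: fit the low-fidelity GP, then a GP on the discrepancy
mu_lo_at_hi = gp_predict(X_lo, f_lo(X_lo), X_hi, ell=0.2)
delta = f_hi(X_hi) - mu_lo_at_hi          # discrepancy data at HF locations

mu_mf = gp_predict(X_lo, f_lo(X_lo), Xte, ell=0.2) \
      + gp_predict(X_hi, delta, Xte, ell=0.5)

err_mf = np.max(np.abs(mu_mf - f_hi(Xte)))
err_hf_only = np.max(np.abs(gp_predict(X_hi, f_hi(X_hi), Xte, ell=0.2) - f_hi(Xte)))
```

The five expensive samples alone cannot resolve the oscillation, but combined with the cheap samples through the discrepancy GP they recover the high-fidelity field accurately.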
Efficiently detecting outlying behavior in video-game players.
Kim, Young Bin; Kang, Shin Jin; Lee, Sang Hyeok; Jung, Jang Young; Kam, Hyeong Ryeol; Lee, Jung; Kim, Young Sun; Lee, Joonsoo; Kim, Chang Hun
2015-01-01
In this paper, we propose a method for automatically detecting the times during which game players exhibit specific behavior, such as when players commonly show excitement, concentration, immersion, and surprise. The proposed method detects such outlying behavior based on the game players' characteristics. These characteristics are captured non-invasively in a general game environment. In this paper, cameras were used to analyze observed data such as facial expressions and player movements. Moreover, multimodal data from the game players (i.e., data regarding adjustments to the volume and the use of the keyboard and mouse) was used to analyze high-dimensional game-player data. A support vector machine was used to efficiently detect outlying behaviors. We verified the effectiveness of the proposed method using games from several genres. The recall rate of the outlying behavior pre-identified by industry experts was approximately 70%. The proposed method can also be used for feedback analysis of various interactive content provided in PC environments.
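The paper's detector is a support vector machine over multimodal player features; as a minimal stand-in for that pipeline, the following sketch flags outlying time windows of hypothetical player features using a Mahalanobis-distance threshold fit on baseline behavior. All feature names and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic per-second feature vectors: [movement, keypresses, volume change]
normal = rng.normal([1.0, 5.0, 0.0], [0.3, 1.0, 0.1], size=(500, 3))
excited = rng.normal([3.0, 12.0, 1.5], [0.3, 1.0, 0.1], size=(10, 3))  # outlying bursts
X = np.vstack([normal, excited])

mu = normal.mean(axis=0)            # fit the detector on baseline behaviour only
cov = np.cov(normal.T)
inv = np.linalg.inv(cov)

# squared Mahalanobis distance of every window from baseline behaviour
d2 = np.einsum('ij,jk,ik->i', X - mu, inv, X - mu)
flagged = d2 > 16.27                # chi-squared(3 dof) 99.9% cutoff

recall = flagged[500:].mean()       # fraction of outlying windows caught
false_rate = flagged[:500].mean()   # fraction of normal windows misflagged
```

A one-class SVM (as in the paper) would replace the Mahalanobis threshold with a learned nonlinear boundary, but the windowing-and-thresholding structure is the same.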
Tropical geometry of statistical models.
Pachter, Lior; Sturmfels, Bernd
2004-11-16
This article presents a unified mathematical framework for inference in graphical models, building on the observation that graphical models are algebraic varieties. From this geometric viewpoint, observations generated from a model are coordinates of a point in the variety, and the sum-product algorithm is an efficient tool for evaluating specific coordinates. Here, we address the question of how the solutions to various inference problems depend on the model parameters. The proposed answer is expressed in terms of tropical algebraic geometry. The Newton polytope of a statistical model plays a key role. Our results are applied to the hidden Markov model and the general Markov model on a binary tree.
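The tropicalization at the heart of this framework can be seen concretely in a toy hidden Markov model: replacing the (sum, product) semiring by the tropical (max, +) semiring in the same forward recursion turns marginal-likelihood evaluation (sum-product) into MAP path scoring (Viterbi). The transition and emission numbers below are hypothetical.

```python
import numpy as np
from itertools import product

# toy 2-state HMM (all numbers hypothetical)
logpi = np.log([0.6, 0.4])                     # initial distribution
logA = np.log([[0.7, 0.3], [0.4, 0.6]])        # transitions
logB = np.log([[0.9, 0.1], [0.2, 0.8]])        # emissions P(obs | state)
obs = [0, 1, 1, 0]

def forward(op):
    # one forward recursion, generic in the semiring "addition" op:
    # op = logsumexp -> log-likelihood (sum-product); op = max -> tropical/Viterbi
    v = logpi + logB[:, obs[0]]
    for o in obs[1:]:
        v = np.array([op(v + logA[:, j]) + logB[j, o] for j in range(2)])
    return op(v)

logsumexp = lambda a: np.log(np.sum(np.exp(a)))
loglik = forward(logsumexp)
viterbi = forward(np.max)                      # the tropicalized algorithm

# brute force: the tropical value is the log-probability of the best hidden path
best = max(logpi[s[0]]
           + sum(logA[s[t - 1], s[t]] for t in range(1, len(obs)))
           + sum(logB[s[t], obs[t]] for t in range(len(obs)))
           for s in product(range(2), repeat=len(obs)))
```

The recursion is identical in both cases; only the semiring changes, which is the algebraic observation the article develops into a general theory.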
Spin Self-Rephasing and Very Long Coherence Times in a Trapped Atomic Ensemble
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, C.; Reinhard, F.; Schneider, T.
2010-07-09
We perform Ramsey spectroscopy on the ground state of ultracold ⁸⁷Rb atoms magnetically trapped on a chip in the Knudsen regime. Field inhomogeneities over the sample should limit the 1/e contrast decay time to about 3 s, while decay times of 58±12 s are actually observed. We explain this surprising result by a spin self-rephasing mechanism induced by the identical spin rotation effect originating from particle indistinguishability. We propose a theory of this synchronization mechanism and obtain good agreement with the experimental observations. The effect is general and may appear in other physical systems.
Lu, Tsui-Shan; Longnecker, Matthew P; Zhou, Haibo
2017-03-15
An outcome-dependent sampling (ODS) scheme is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known such designs are the case-control design for binary responses, the case-cohort design for failure-time data, and the general ODS design for a continuous response. While substantial work has been carried out for the univariate-response case, statistical inference and design for ODS with multivariate cases remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the multivariate-ODS design is semiparametric, with all the underlying distributions of covariates modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and establish its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the multivariate-ODS or the estimator from a simple random sample with the same sample size. The multivariate-ODS design together with the proposed estimator provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of polychlorinated biphenyl exposure with hearing loss in children born to the Collaborative Perinatal Study. Copyright © 2016 John Wiley & Sons, Ltd.
Sun, Wenchao; Ishidaira, Hiroshi; Bastola, Satish; Yu, Jingshan
2015-05-01
A lack of observational data for calibration constrains applications of hydrological models to estimate daily time series of streamflow. Recent improvements in remote sensing enable detection of river water-surface width from satellite observations, making possible the tracking of streamflow from space. In this study, a method calibrating hydrological models using river width derived from remote sensing is demonstrated through application to the ungauged Irrawaddy Basin in Myanmar. Generalized likelihood uncertainty estimation (GLUE) is selected as a tool for automatic calibration and uncertainty analysis. Of 50,000 randomly generated parameter sets, 997 are identified as behavioral, based on comparing model simulations with satellite observations. The uncertainty band of the streamflow simulation spans most of the 10-year average monthly observed streamflow for moderate and high flow conditions. The Nash-Sutcliffe efficiency is 95.7% for the simulated streamflow at the 50% quantile. These results indicate that application to the target basin is generally successful. Beyond evaluating the method in a basin lacking streamflow data, difficulties and possible solutions for applications in the real world are addressed to promote future use of the proposed method in more ungauged basins. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
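A minimal GLUE-style sketch (not the study's model or data): a toy rainfall-runoff model is run under Monte Carlo parameter sampling, parameter sets whose Nash-Sutcliffe efficiency against synthetic observations exceeds a threshold are kept as "behavioral", and their spread gives an uncertainty band on the parameter. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def reservoir(rain, k):
    # toy linear-reservoir runoff model: a fraction k of storage drains each step
    s, q = 0.0, np.empty(len(rain))
    for t, r in enumerate(rain):
        s += r
        q[t] = k * s
        s -= q[t]
    return q

rain = rng.exponential(2.0, size=200)
obs = reservoir(rain, 0.35) * rng.normal(1.0, 0.05, size=200)  # synthetic "observed" flow

# GLUE: Monte Carlo parameter sampling; keep "behavioral" sets above a skill threshold
ks = rng.uniform(0.01, 0.99, size=3000)
nse = np.array([1.0 - np.sum((reservoir(rain, k) - obs) ** 2)
                      / np.sum((obs - obs.mean()) ** 2) for k in ks])
behavioral = ks[nse > 0.7]
lo, hi = behavioral.min(), behavioral.max()   # simple uncertainty band on k
```

In the study the likelihood measure compares simulated flow against satellite-derived river widths rather than gauged flow, but the sampling-and-thresholding logic is the same.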
14 CFR 151.111 - Advance planning proposals: General.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Engineering Proposals § 151.111 Advance planning proposals: General. (a) Each advance planning and engineering... application, under §§ 151.21(c) and 151.27, or both. (c) Each proposal must relate to planning and engineering... “Airport Activity Statistics of Certificated Route Air Carriers” (published jointly by FAA and the Civil...
Physically detached 'compact groups'
NASA Technical Reports Server (NTRS)
Hernquist, Lars; Katz, Neal; Weinberg, David H.
1995-01-01
A small fraction of galaxies appear to reside in dense compact groups, whose inferred crossing times are much shorter than a Hubble time. These short crossing times have led to considerable disagreement among researchers attempting to deduce the dynamical state of these systems. In this paper, we suggest that many of the observed groups are not physically bound but are chance projections of galaxies well separated along the line of sight. Unlike earlier similar proposals, ours does not require that the galaxies in the compact group be members of a more diffuse, but physically bound entity. The probability of physically separated galaxies projecting into an apparent compact group is nonnegligible if most galaxies are distributed in thin filaments. We illustrate this general point with a specific example: a simulation of a cold dark matter universe, in which hydrodynamic effects are included to identify galaxies. The simulated galaxy distribution is filamentary and end-on views of these filaments produce apparent galaxy associations that have sizes and velocity dispersions similar to those of observed compact groups. The frequency of such projections is sufficient, in principle, to explain the observed space density of groups in the Hickson catalog. We discuss the implications of our proposal for the formation and evolution of groups and elliptical galaxies. The proposal can be tested by using redshift-independent distance estimators to measure the line-of-sight spatial extent of nearby compact groups.
Virtual Observatory Interfaces to the Chandra Data Archive
NASA Astrophysics Data System (ADS)
Tibbetts, M.; Harbo, P.; Van Stone, D.; Zografou, P.
2014-05-01
The Chandra Data Archive (CDA) plays a central role in the operation of the Chandra X-ray Center (CXC) by providing access to Chandra data. Proprietary interfaces have been the backbone of the CDA throughout the Chandra mission. While these interfaces continue to provide the depth and breadth of mission specific access Chandra users expect, the CXC has been adding Virtual Observatory (VO) interfaces to the Chandra proposal catalog and observation catalog. VO interfaces provide standards-based access to Chandra data through simple positional queries or more complex queries using the Astronomical Data Query Language. Recent development at the CDA has generalized our existing VO services to create a suite of services that can be configured to provide VO interfaces to any dataset. This approach uses a thin web service layer for the individual VO interfaces, a middle-tier query component which is shared among the VO interfaces for parsing, scheduling, and executing queries, and existing web services for file and data access. The CXC VO services provide Simple Cone Search (SCS), Simple Image Access (SIA), and Table Access Protocol (TAP) implementations for both the Chandra proposal and observation catalogs within the existing archive architecture. Our work with the Chandra proposal and observation catalogs, as well as additional datasets beyond the CDA, illustrates how we can provide configurable VO services to extend core archive functionality.
NASA Astrophysics Data System (ADS)
Dobaria, Archana S.; Coble, Kimberly A.; Alejandra, Le; Berryhill, Katie; McLin, Kevin M.; Cominsky, Lynn R.
2018-06-01
As part of a general education undergraduate astronomy course at a minority-serving university in the Midwestern US, students completed an observing project with the Global Telescope Network (GTN), in which they participated in realistic practices used by professional astronomers, including proposal writing and peer review. First, students went through the process of applying for telescope time by choosing an astronomical object and writing an observing proposal. Then they performed an NSF-style review of classmates’ proposals, including written peer reviews and a review panel. After obtaining images from GTN telescopes, students presented their projects and findings in front of the class. This study investigates students’ experiences and perceived impacts of participation in the project. The data analyzed include an essay assignment [N = 59] administered over seven semesters and individual interviews [N = 8] collected over two semesters. Students were prompted to address what they liked, disliked, or would change about the project experience. These data were coded iteratively into nine categories. A Kruskal-Wallis (KW) test was used to determine that essay results from different semesters could be combined. We find that students expressed an overall strong positive affect, increased perception of self-efficacy, enjoyment of the experience of peer review, and an appreciation for being able to use real scientific tools and to take on the role of astronomers, as well as a small number of dislikes, such as real-world constraints on observing.
Grant, William B
2009-01-01
The ultraviolet-B (UVB)-vitamin D-cancer hypothesis was proposed in 1980. Since then, several ecological and observational studies have examined the hypothesis, in addition to one good randomized, controlled trial. Also, the mechanisms whereby vitamin D reduces the risk of cancer have been elucidated. This report aims to examine the evidence to date with respect to the criteria for causality in a biological system first proposed by Robert Koch and later systematized by A. Bradford Hill. The criteria of most relevance are strength of association, consistency, biological gradient, plausibility/mechanisms and experimental verification. Results for several cancers generally satisfy these criteria. Results for breast and colorectal cancer satisfy the criteria best, but there is also good evidence that other cancers do as well, including bladder, esophageal, gallbladder, gastric, ovarian, rectal, renal and uterine corpus cancer, as well as Hodgkin's and non-Hodgkin's lymphoma. Several cancers have mixed findings with respect to UVB and/or vitamin D, including pancreatic and prostate cancer and melanoma. Even for these, the benefit of vitamin D seems reasonably strong. Although ecological and observational studies are not generally regarded as able to provide convincing evidence of causality, the fact that humanity has always existed with vitamin D from solar UVB irradiance means that there is a wealth of evidence to be harvested using the ecological and observational approaches. Nonetheless, additional randomized, controlled trials are warranted to further examine the link between vitamin D and cancer incidence, survival and mortality.
NASA Astrophysics Data System (ADS)
Chakravarty, G. K.; Mohanty, S.; Lambiase, G.
Cosmological and astrophysical observations lead to the emerging picture of a universe that is spatially flat and presently undergoing an accelerated expansion. The observations supporting this picture come from a range of measurements encompassing estimates of galaxy cluster masses, the Hubble diagram derived from type-Ia supernovae observations, the measurements of Cosmic Microwave Background radiation anisotropies, etc. The present accelerated expansion of the universe can be explained by admitting the existence of a cosmic fluid with negative pressure. In the simplest scenario, this unknown component of the universe, the Dark Energy, is represented by the cosmological constant (Λ), and accounts for about 70% of the global energy budget of the universe. The remaining 30% consists of a small fraction of baryons (4%), with the rest being Cold Dark Matter (CDM). The Lambda Cold Dark Matter (ΛCDM) model, i.e. General Relativity with a cosmological constant, is in good agreement with observations and can be taken as a first step towards a new standard cosmological model. However, despite this satisfying agreement, the ΛCDM model presents inconsistencies and shortcomings, and therefore theories beyond Einstein’s General Relativity are called for. Many extensions of Einstein’s theory of gravity have been studied and proposed, with motivations ranging from the quest for a quantum theory of gravity to the explanation of observational anomalies at solar-system, galactic and cosmological scales. These extensions include adding higher powers of the Ricci curvature R, coupling the Ricci curvature with scalar fields, and generalized functions of R. In addition, when viewed from the perspective of Supergravity (SUGRA), many of these theories may originate from the same SUGRA theory, but interpreted in different frames. SUGRA therefore serves as a good framework for organizing and generalizing theories of gravity beyond General Relativity.
All these theories, when applied to inflation (a rapid expansion of the early universe in which primordial gravitational waves might be generated and might still be detectable by the imprint they left or by the ripples that persist today), can have distinct signatures in the Cosmic Microwave Background radiation temperature and polarization anisotropies. We give a review of ΛCDM cosmology and survey the theories of gravity beyond Einstein’s General Relativity, especially those which arise from SUGRA, study the consequences of these theories in the context of inflation, and put bounds on the theories and the parameters therein from observational experiments such as PLANCK and Keck/BICEP. The possibility of testing these theories in the near future in CMB observations, together with new data coming from colliders like the LHC, provides a unique opportunity for constructing verifiable models of particle physics and General Relativity.
NASA Astrophysics Data System (ADS)
Dat, Tran Huy; Takeda, Kazuya; Itakura, Fumitada
We present a multichannel speech enhancement method based on MAP speech spectral magnitude estimation using a generalized gamma model of the speech prior distribution, where the model parameters are adapted from actual noisy speech in a frame-by-frame manner. The utilization of a more general prior distribution with its online adaptive estimation is shown to be effective for speech spectral estimation in noisy environments. Furthermore, the multichannel information in terms of cross-channel statistics is shown to be useful for better adapting the prior distribution parameters to the actual observation, resulting in better performance of the speech enhancement algorithm. We tested the proposed algorithm on an in-car speech database and obtained significant improvements in speech recognition performance, particularly under non-stationary noise conditions such as music, air-conditioning and open windows.
Atmospheric Diabatic Heating in Different Weather States and the General Circulation
NASA Technical Reports Server (NTRS)
Rossow, William B.; Zhang, Yuanchong; Tselioudis, George
2016-01-01
Analysis of multiple global satellite products identifies distinctive weather states of the atmosphere from the mesoscale pattern of cloud properties and quantifies the associated diabatic heating/cooling by radiative flux divergence, precipitation, and surface sensible heat flux. The results show that the forcing for the atmospheric general circulation is a very dynamic process, varying strongly at weather space-time scales, comprising relatively infrequent, strong heating events by ''stormy'' weather and more nearly continuous, weak cooling by ''fair'' weather. Such behavior undercuts the value of analyses of time-averaged energy exchanges in observations or numerical models. It is proposed that an analysis of the joint time-related variations of the global weather states and the general circulation on weather space-time scales might be used to establish useful ''feedback like'' relationships between cloud processes and the large-scale circulation.
A universal test for gravitational decoherence
Pfister, C.; Kaniewski, J.; Tomamichel, M.; Mantri, A.; Schmucker, R.; McMahon, N.; Milburn, G.; Wehner, S.
2016-01-01
Quantum mechanics and the theory of gravity are presently not compatible. A particular question is whether gravity causes decoherence. Several models for gravitational decoherence have been proposed, not all of which can be described quantum mechanically. Since quantum mechanics may need to be modified, one may question the use of quantum mechanics as a calculational tool to draw conclusions from the data of experiments concerning gravity. Here we propose a general method to estimate gravitational decoherence in an experiment that allows us to draw conclusions in any physical theory where the no-signalling principle holds, even if quantum mechanics needs to be modified. As an example, we propose a concrete experiment using optomechanics. Our work raises the interesting question whether other properties of nature could similarly be established from experimental observations alone—that is, without already having a rather well-formed theory of nature to make sense of experimental data. PMID:27694976
Statistical analysis of loopy belief propagation in random fields
NASA Astrophysics Data System (ADS)
Yasuda, Muneki; Kataoka, Shun; Tanaka, Kazuyuki
2015-10-01
Loopy belief propagation (LBP), which is equivalent to the Bethe approximation in statistical mechanics, is a message-passing-type inference method that is widely used to analyze systems based on Markov random fields (MRFs). In this paper, we propose a message-passing-type method to analytically evaluate the quenched average of LBP in random fields by using the replica cluster variation method. The proposed analytical method is applicable to general pairwise MRFs with random fields whose distributions differ from each other and can give the quenched averages of the Bethe free energies over random fields, which are consistent with numerical results. The order of its computational cost is equivalent to that of standard LBP. In the latter part of this paper, we describe the application of the proposed method to Bayesian image restoration, in which we observed that our theoretical results are in good agreement with the numerical results for natural images.
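A minimal LBP sketch on a pairwise MRF (ordinary loopy belief propagation, not the replica cluster variation analysis developed in the paper): synchronous message passing on a four-node cycle with local fields, with the resulting belief checked against brute-force enumeration. The coupling and field values are invented.

```python
import numpy as np
from itertools import product

# 4-node cycle MRF over spins x_i in {-1,+1}: p(x) ~ exp(sum J x_i x_j + sum h_i x_i)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
J = 0.5
h = np.array([0.3, -0.2, 0.1, 0.4])           # local fields, one per node
spins = np.array([-1.0, 1.0])

nbrs = {i: [] for i in range(4)}
for a, b in edges:
    nbrs[a].append(b)
    nbrs[b].append(a)

msgs = {(i, j): np.ones(2) / 2 for i in range(4) for j in nbrs[i]}
for _ in range(200):                          # iterate messages to a fixed point
    upd = {}
    for (i, j) in msgs:
        inc = np.ones(2)                      # product of messages into i, except from j
        for k in nbrs[i]:
            if k != j:
                inc *= msgs[(k, i)]
        m = np.array([np.sum(np.exp(h[i] * spins + J * spins * spins[b]) * inc)
                      for b in range(2)])
        upd[(i, j)] = m / m.sum()
    msgs = upd

b0 = np.exp(h[0] * spins)                     # LBP belief (approx. marginal) at node 0
for k in nbrs[0]:
    b0 *= msgs[(k, 0)]
b0 /= b0.sum()

# exact marginal by brute-force enumeration over the 16 configurations
w = {x: np.exp(sum(J * x[a] * x[b] for a, b in edges) + np.dot(h, x))
     for x in product([-1.0, 1.0], repeat=4)}
p0 = sum(v for x, v in w.items() if x[0] == 1.0) / sum(w.values())
```

On a single loop LBP converges and its beliefs closely track the exact marginals; the paper's contribution is to average such fixed points analytically over random realizations of the fields.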
Hashida, Masahiro; Kamezaki, Ryousuke; Goto, Makoto; Shiraishi, Junji
2017-03-01
The ability to predict hazards in possible situations in a general X-ray examination room, depicted in scenes created for Kiken-Yochi training (KYT), is quantified by use of free-response receiver-operating characteristic (FROC) analysis to determine whether the total number of years of clinical experience, involvement in general X-ray examinations, occupation, and training each have an impact on hazard prediction ability. Twenty-three radiological technologists (RTs) (years of experience: 2-28), four nurses (years of experience: 15-19), and six RT students observed 53 KYT scenes: 26 scenes with hazardous points (points that might cause injury to patients) and 27 scenes without such points. Based on the results of these observations, we calculated the alternative free-response receiver-operating characteristic (AFROC) curve and the figure of merit (FOM) to quantify hazard prediction ability. The results showed that the total number of years of clinical experience did not have any impact on hazard prediction ability, whereas recent experience with general X-ray examinations greatly influenced this ability. In addition, the hazard prediction ability varied depending on the occupations of the observers while they were observing the same KYT scenes. The hazard prediction ability of the radiologic technology students improved after they had undergone patient safety training. The proposed method with an FROC observer study enabled the quantification and evaluation of hazard prediction capability, and the application of this approach to clinical practice may help to ensure the safety of examinations and treatment in the radiology department.
Observation of fractional Chern insulators in a van der Waals heterostructure
NASA Astrophysics Data System (ADS)
Spanton, Eric M.; Zibrov, Alexander A.; Zhou, Haoxin; Taniguchi, Takashi; Watanabe, Kenji; Zaletel, Michael P.; Young, Andrea F.
2018-04-01
Topologically ordered phases are characterized by long-range quantum entanglement and fractional statistics rather than by symmetry breaking. First observed in a fractionally filled continuum Landau level, topological order has since been proposed to arise more generally at fractional fillings of topologically nontrivial Chern bands. Here we report the observation of gapped states at fractional fillings of Harper-Hofstadter bands arising from the interplay of a magnetic field and a superlattice potential in a bilayer graphene–hexagonal boron nitride heterostructure. We observed phases at fractional filling of bands with Chern indices C = −1, ±2, and ±3. Some of these phases, in the C = −1 and C = 2 bands, are characterized by fractional Hall conductance—that is, they are known as fractional Chern insulators and constitute an example of topological order beyond Landau levels.
Earth Observing System (EOS) Advanced Microwave Sounding Unit-A (AMSU-A) Spares Program Plan
NASA Technical Reports Server (NTRS)
Chapman, Weldon
1994-01-01
This plan specifies the spare components to be provided for the EOS/AMSU-A instrument and the general spares philosophy for their procurement. It also addresses key components not recommended for spares, as well as the schedule and method for obtaining the spares. The selected spares list was generated based on component criticality, reliability, repairability, and availability. An alternative spares list is also proposed based on more stringent fiscal constraints.
Sakai, Kenshi; Upadhyaya, Shrinivasa K; Andrade-Sanchez, Pedro; Sviridova, Nina V
2017-03-01
Real-world processes are often combinations of deterministic and stochastic processes. Soil failure observed during farm tillage is one example of this phenomenon. In this paper, we investigate the nonlinear features of soil failure patterns in a farm tillage process. We demonstrate emerging determinism in soil failure patterns from stochastic processes under specific soil conditions. We normalize the deterministic nonlinear prediction to account for autocorrelation and propose it as a robust way of extracting a nonlinear dynamical system from noise-contaminated motion. Soil is a typical granular material, and the results obtained here are expected to be applicable to granular materials in general. From the global scale to the nano scale, granular materials feature in seismology, geotechnology, soil mechanics, and particle technology, and the results and discussions presented here are applicable across these wide research areas. The proposed method and our findings are useful with respect to the application of nonlinear dynamics to investigate complex motions generated from granular materials.
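One standard way to expose determinism in a noisy-looking series, in the spirit of the nonlinear prediction used here (the paper's exact normalization is not reproduced), is nearest-neighbour forecasting in a delay embedding: a deterministic series remains predictable while a shuffled surrogate with the same distribution does not. A chaotic logistic map stands in for the measured soil-failure motion.

```python
import numpy as np

rng = np.random.default_rng(2)

def logistic(n, r=3.9, x0=0.3):
    # deterministic chaotic map standing in for the measured motion (assumed example)
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        x[t + 1] = r * x[t] * (1.0 - x[t])
    return x

def nn_skill(x, m=2, horizon=1):
    # nearest-neighbour prediction in a delay embedding; returns the correlation
    # between predicted and true future values (high => deterministic structure)
    N = len(x) - m - horizon + 1
    emb = np.column_stack([x[i:i + N] for i in range(m)])
    target = x[m - 1 + horizon: m - 1 + horizon + N]
    preds = np.empty(N)
    for t in range(N):
        d = np.linalg.norm(emb - emb[t], axis=1)
        d[t] = np.inf                   # do not predict a point from itself
        preds[t] = target[np.argmin(d)]
    return np.corrcoef(preds, target)[0, 1]

chaos = logistic(600)
noise = rng.permutation(chaos)          # same histogram, dynamics destroyed

skill_chaos = nn_skill(chaos)
skill_noise = nn_skill(noise)
```

The gap between the two skill values is the signature of determinism; the paper's contribution is a normalization that keeps this comparison robust when the series is strongly autocorrelated.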
Self-consistent description of a system of interacting phonons
NASA Astrophysics Data System (ADS)
Poluektov, Yu. M.
2015-11-01
A method for the self-consistent description of phonon systems is proposed. This method generalizes the Debye model to account for phonon-phonon interaction. The notion of "self-consistent" phonons is introduced; their speed depends on the temperature and is determined by solving a non-linear equation. The Debye energy is also a function of the temperature within the framework of the proposed approach. The thermodynamics of a "self-consistent" phonon gas is constructed. It is shown that at low temperatures the cubic temperature dependence of the specific heat acquires an additional term proportional to the seventh power of the temperature. This appears to explain why the cubic law for the specific heat is observed only at relatively low temperatures. At high temperatures, the theory predicts a linear-in-temperature deviation from the Dulong-Petit law, which is observed experimentally. A modification of the melting criteria is considered to account for the phonon-phonon interaction.
Relativistic Newtonian dynamics
NASA Astrophysics Data System (ADS)
Friedman, Yaakov; Mendel Steiner, Joseph
2017-05-01
A new Relativistic Newtonian Dynamics (RND) for motion under a conservative force, capable of describing non-classical behavior in astronomy, is proposed. The rotor experiments using Mössbauer spectroscopy with synchrotron radiation, described in the paper, indicate the influence of non-gravitational acceleration or potential energy on time. Similarly, the observed precession of Mercury and the periastron advance of binaries can be explained by the influence of gravitational potential energy on spacetime. The proposed RND incorporates the influence of potential energy on spacetime into Newton’s dynamics. The effect of this influence on time intervals, space increments and velocities is described explicitly through the concept of an escape trajectory. For an attracting conservative static potential we derive the RND energy conservation law and the dynamics equation for the motion of objects with non-zero mass and of massless particles. These equations are subsequently simplified for motion under a central force. Without the need to curve spacetime, this model accurately predicts the four non-classical observations in astronomy used to test General Relativity.
The Service Programme of the Isaac Newton Group of Telescopes
NASA Astrophysics Data System (ADS)
Méndez, J.
2013-05-01
The Service Programme of the Isaac Newton Group of Telescopes (Roque de los Muchachos Observatory, La Palma, Spain) aims at providing astronomers with a rapid and flexible tool for obtaining small sets of observations, of up to 8 hours, on the William Herschel Telescope. This can be used, for instance, to try new ideas or to complement a regular observing programme allocated on the ING telescopes. Proposals are accepted from principal investigators working at an institution located in the United Kingdom, the Netherlands or Spain, but also from elsewhere, regardless of the nationality of the host institution. The monthly deadline for application submission is midnight on the last day of each month, but urgent requests submitted at any time can also be accepted. Proposals are generally withdrawn from the scheme after a one-year period. In this poster we provide an overview of the programme and some statistics. More information can be obtained at http://www.ing.iac.es/astronomy/service/.
Likić, Vladimir A
2009-01-01
Gas chromatography-mass spectrometry (GC-MS) is a widely used analytical technique for the identification and quantification of trace chemicals in complex mixtures. When complex samples are analyzed by GC-MS it is common to observe co-elution of two or more components, resulting in an overlap of signal peaks observed in the total ion chromatogram. In such situations manual signal analysis is often the most reliable means for the extraction of pure component signals; however, a systematic manual analysis over a number of samples is both tedious and prone to error. In the past 30 years a number of computational approaches have been proposed to assist in the extraction of pure signals from co-eluting GC-MS components. These include empirical methods, comparison with library spectra, eigenvalue analysis, regression and others. However, to date no approach has been recognized as the best, nor accepted as a standard. This situation hampers general GC-MS capabilities, and in particular has implications for the development of robust, high-throughput GC-MS analytical protocols required in metabolic profiling and biomarker discovery. Here we first discuss the nature of GC-MS data, and then review some of the approaches proposed for the extraction of pure signals from co-eluting components. We summarize and classify different approaches to this problem, and examine why so many approaches proposed in the past have failed to live up to their full promise. Finally, we give some thoughts on future developments in this field, and suggest that the progress in general computing capabilities attained in the past two decades has opened new horizons for tackling this important problem. PMID:19818154
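As a minimal illustration of the regression-type approaches the review surveys, the sketch below (not from the paper; all peak parameters are invented) deconvolves two co-eluting components by fitting a two-Gaussian elution model to a synthetic total ion chromatogram:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma):
    """Idealised single-component elution profile."""
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def two_components(t, a1, m1, s1, a2, m2, s2):
    """Total ion signal of two co-eluting components."""
    return gaussian(t, a1, m1, s1) + gaussian(t, a2, m2, s2)

t = np.linspace(0.0, 10.0, 500)
rng = np.random.default_rng(0)
# Synthetic co-eluting pair: retention times 4.0 and 4.8 overlap strongly.
clean = two_components(t, 1.0, 4.0, 0.35, 0.6, 4.8, 0.35)
observed = clean + rng.normal(0.0, 0.01, t.size)

# Fit the two-component model; initial guesses straddle the merged peak.
p0 = [0.8, 3.8, 0.3, 0.5, 5.0, 0.3]
(a1, m1, s1, a2, m2, s2), _ = curve_fit(two_components, t, observed, p0=p0)
print(round(m1, 2), round(m2, 2))  # recovered retention times
```

Real deconvolution must also handle unknown peak counts and non-Gaussian tailing, which is precisely where the surveyed methods diverge.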
Attention trees and semantic paths
NASA Astrophysics Data System (ADS)
Giusti, Christian; Pieroni, Goffredo G.; Pieroni, Laura
2007-02-01
In the last few decades several techniques for image content extraction, often based on segmentation, have been proposed. It has been suggested that, under the assumption of very general image content, segmentation becomes unstable and classification becomes unreliable. According to recent psychological theories, certain image regions attract the attention of human observers more than others, and the main meaning of the image generally appears concentrated in those regions. Initially, the regions attracting our attention are perceived as a whole and hypotheses on their content are formulated; subsequently, the components of those regions are carefully analyzed and a more precise interpretation is reached. It is interesting to observe that an image decomposition process performed according to these psychological visual attention theories might present advantages with respect to a traditional segmentation approach. In this paper we propose an automatic procedure generating an image decomposition based on the detection of visual attention regions. A new clustering algorithm taking advantage of Delaunay-Voronoi diagrams is proposed to achieve the decomposition. By applying the algorithm recursively, starting from the whole image, the image is transformed into a tree of related meaningful regions (the Attention Tree). Subsequently, a semantic interpretation of the leaf nodes is carried out by using a structure of neural networks (Neural Tree) assisted by a knowledge base (Ontology Net). Starting from the leaf nodes, paths toward the root node across the Attention Tree are attempted. The task of a path is to relate the semantics of each child-parent node pair and, consequently, to merge the corresponding image regions. The relationship detected in this way between two tree nodes extends the interpreted image area at each step of the path.
The construction of several Attention Trees has been performed and partial results will be shown.
Detection of crossover time scales in multifractal detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Ge, Erjia; Leung, Yee
2013-04-01
Fractal analysis is employed in this paper as a scale-based method for identifying the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) separating distinct regimes with different fractal scaling behaviors. A common method of analysis is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s), however, is relatively subjective, since it has been made without rigorous statistical procedures and has generally relied on eyeballing or subjective observation. Crossover time scales determined in this way may be spurious and problematic, and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe the multi-scaling behaviors of fractals. Through regression analysis and statistical inference, we can (1) identify crossover time scales that cannot be detected by eyeballing, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish a statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall in Hong Kong. Through the proposed model, we can gain a deeper understanding of fractals in general and a statistical approach to identifying multi-scaling behavior under MF-DFA in particular.
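One simple way to make crossover detection objective, in the spirit of the proposed scaling-identification regression model (this is an illustrative simplification, not the authors' exact procedure), is a grid search over two-segment straight-line fits to the log-log fluctuation function:

```python
import numpy as np

def fit_two_segment(log_s, log_f):
    """Grid-search a single crossover scale: fit separate straight lines
    left and right of each candidate breakpoint and keep the split with
    the smallest total squared error."""
    best_sse, best_k = np.inf, None
    for k in range(3, len(log_s) - 3):  # require >= 3 points per segment
        sse = 0.0
        for sl in (slice(None, k), slice(k, None)):
            x, y = log_s[sl], log_f[sl]
            coef = np.polyfit(x, y, 1)
            sse += np.sum((np.polyval(coef, x) - y) ** 2)
        if sse < best_sse:
            best_sse, best_k = sse, k
    return best_k

# Synthetic log-log fluctuation function: slope 0.8 then 0.4,
# with a crossover at log10(s) = 2.
log_s = np.linspace(1.0, 3.0, 41)
log_f = np.where(log_s < 2.0, 0.8 * log_s, 0.4 * log_s + 0.8)
rng = np.random.default_rng(1)
log_f = log_f + rng.normal(0.0, 0.01, log_s.size)

k = fit_two_segment(log_s, log_f)
crossover = log_s[k]
print(crossover)
```

The paper's contribution goes further by attaching confidence intervals and significance tests to the detected breakpoints, which a plain grid search does not provide.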
BOOK REVIEW Dark Energy: Theory and Observations Dark Energy: Theory and Observations
NASA Astrophysics Data System (ADS)
Faraoni, Valerio
2011-02-01
The 1998 discovery of what seems to be an acceleration of the cosmic expansion was made using type Ia supernovae and was later confirmed by other cosmological observations. It has made a huge impact on cosmology, prompting theoreticians to explain the observations and introducing the concept of dark energy into modern physics. A vast literature on dark energy and its alternatives has appeared since then, and this is the first comprehensive book devoted to the subject. The book is addressed to an advanced audience of graduate students and researchers in cosmology. Although it contains forty-four fully solved problems and the first three chapters are rather introductory, they do not constitute a self-contained course in cosmology, and the book assumes graduate-level knowledge of cosmology and general relativity. The fourth chapter focuses on observations, while the rest of the book addresses the various classes of models proposed, including the cosmological constant, quintessence, k-essence, phantom energy, coupled dark energy, etc. The title should not lead the reader to believe that only dark energy models are addressed: the authors devote two chapters to conceptually very different alternatives to dark energy, including f(R) and Gauss-Bonnet gravity, braneworld and void models, and the backreaction of inhomogeneities on the cosmic dynamics. Two chapters contain a general discussion of non-linear cosmological perturbations and of statistical methods widely applicable in cosmology. The final chapter outlines future perspectives and the most likely lines of observational research on dark energy. Overall, this book is carefully drafted, well presented, and does a good job of organizing the information available in the vast literature. The reader is pointed to the essential references and guided in a balanced way through the various proposals aimed at explaining the cosmological observations.
Not all classes of models are treated in great detail, as expected from a volume covering an estimated four thousand papers. This much needed volume fills a gap in the literature and is a must-have in the library of young and seasoned researchers alike.
Geomechanical Modeling of Gas Hydrate Bearing Sediments
NASA Astrophysics Data System (ADS)
Sanchez, M. J.; Gai, X., Sr.
2015-12-01
This contribution focuses on an advanced geomechanical model for methane hydrate-bearing soils, based on concepts of elasto-plasticity for strain hardening/softening soils, that incorporates bonding and damage effects. The core of the proposed model includes a hierarchical single surface critical state framework, sub-loading concepts for modeling the plastic strains generally observed inside the yield surface, and a hydrate enhancement factor to account for the cementing effects of hydrates present in sediments. The proposed framework has been validated against recently published experiments involving both synthetic and natural hydrate-bearing soils, different sediment types (i.e., different hydrate saturations and hydrate morphologies), and different confinement conditions. The performance of the model in these case studies was very satisfactory.
NASA Astrophysics Data System (ADS)
Dell'Acqua, Fabio; Iannelli, Gianni Cristian; Kerekes, John; Lisini, Gianni; Moser, Gabriele; Ricardi, Niccolo; Pierce, Leland
2016-08-01
The issue of homogeneity in the performance assessment of proposed information-extraction algorithms is felt in the Earth Observation (EO) domain as elsewhere. Different authors propose different datasets to test their algorithms, and it is frequently difficult for the reader to assess which is better for a specific application, given a variability in test sets that makes direct comparison of, e.g., accuracy values less meaningful than one would desire. With our work, we make a modest contribution toward easing the problem by making it possible to automatically distribute a limited set of possible "standard" open datasets, together with some ground-truth information, and to automatically assess the processing results provided by users.
NASA Astrophysics Data System (ADS)
Xu, Xueping; Han, Qinkai; Chu, Fulei
2018-03-01
The electromagnetic vibration of electrical machines with an eccentric rotor has been extensively investigated, but magnetic saturation has often been neglected. Moreover, rub impact between the rotor and stator is inevitable when the amplitude of the rotor vibration exceeds the air gap. This paper aims to propose a general electromagnetic excitation model for electrical machines. First, a general model which takes magnetic saturation and rub impact into consideration is proposed and validated by the finite element method and against reference results. The dynamic equations of a Jeffcott rotor system with electromagnetic excitation and mass imbalance are presented. Then, the effects of pole-pair number and rubbing parameters on vibration amplitude are studied, and approaches to restraining the amplitude are put forward. Finally, the influences of mass eccentricity, resultant magnetomotive force (MMF), stiffness coefficient, damping coefficient, contact stiffness and friction coefficient on the stability of the rotor system are investigated through Floquet theory. An amplitude jumping phenomenon is observed in a synchronous generator for different pole-pair numbers. Changes in the design parameters can alter the stability states of the rotor system, and the range of parameter values forms the zone of stability, which provides helpful guidance for the design and application of electrical machines.
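The Floquet analysis mentioned above rests on computing the monodromy matrix of the periodically excited system over one forcing period. The sketch below (a hypothetical stand-in using the Mathieu equation rather than the paper's rotor equations) shows the generic recipe: integrate from unit initial conditions over one period and check whether all Floquet multipliers stay inside the unit circle:

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_multipliers(delta, eps):
    """Eigenvalues of the monodromy matrix of the Mathieu equation
    x'' + (delta + eps*cos(t)) x = 0 over one excitation period."""
    T = 2.0 * np.pi

    def rhs(t, y):
        return [y[1], -(delta + eps * np.cos(t)) * y[0]]

    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):  # columns of the fundamental matrix
        sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.linalg.eigvals(np.column_stack(cols))

# Away from a parametric resonance the multipliers stay on the unit circle;
# inside the first resonance tongue (delta near 1/4) one escapes outside.
stable = np.max(np.abs(floquet_multipliers(1.5, 0.1))) <= 1.0 + 1e-4
unstable = np.max(np.abs(floquet_multipliers(0.25, 0.2))) > 1.0 + 1e-4
print(stable, unstable)
```

The same multiplier criterion, applied while sweeping a design parameter, is what traces out the stability zones described in the abstract.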
Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li
2014-01-01
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Building on structural mean models, considerable work has recently been devoted to consistent estimation of the causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments, which has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When multiple genetic variants are available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent owing to the log-linear approximation of the logistic function. Optimality of these estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
Testing deformation hypotheses by constraints on a time series of geodetic observations
NASA Astrophysics Data System (ADS)
Velsink, Hiddo
2018-01-01
In geodetic deformation analysis, observations are used to identify form and size changes of a geodetic network, representing objects on the earth's surface. The network points are monitored, often continuously, because of suspected deformations. A deformation may affect many points during many epochs. The problem is that the best description of the deformation is, in general, unknown. To find it, different hypothesised deformation models have to be tested systematically for agreement with the observations. The tests have to be capable of stating, with a certain probability, the size of detectable deformations, and they have to be datum invariant. A statistical criterion is needed to find the best deformation model. Existing methods do not fulfil these requirements. Here we propose a method that formulates the different hypotheses as sets of constraints on the parameters of a least-squares adjustment model. The constraints can relate to subsets of epochs and to subsets of points, thus combining time series analysis and congruence model analysis. The constraints are formulated as nonstochastic observations in an adjustment model of observation equations. This gives an easy way to test the constraints and to obtain a quality description. The proposed method aims at providing a good discriminating method to find the best description of a deformation and is expected to improve the quality of geodetic deformation analysis. We demonstrate the method with an elaborate example.
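The device of writing constraints as nonstochastic observations can be seen in a toy adjustment. The sketch below (entirely illustrative; coordinates and weights are invented) imposes the no-deformation hypothesis x1 = x2 as a heavily weighted pseudo-observation row and compares the constrained and free squared residuals, whose difference would feed the hypothesis test:

```python
import numpy as np

# Toy adjustment: one coordinate observed twice in each of two epochs,
# parameter vector x = [x_epoch1, x_epoch2].
A = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])
y = np.array([10.00, 10.02, 10.30, 10.28])  # epoch 2 appears shifted

# Hypothesis "no deformation": x1 = x2, written as the nonstochastic
# observation row [1, -1] with target 0 and a very large weight.
C, c, w = np.array([[1.0, -1.0]]), np.array([0.0]), 1e8
A_aug = np.vstack([A, np.sqrt(w) * C])
y_aug = np.concatenate([y, np.sqrt(w) * c])
x_con = np.linalg.lstsq(A_aug, y_aug, rcond=None)[0]

# Unconstrained fit for comparison; the increase in squared residuals
# under the constraint is the raw material of the deformation test.
x_free = np.linalg.lstsq(A, y, rcond=None)[0]
sse_con = np.sum((A @ x_con - y) ** 2)
sse_free = np.sum((A @ x_free - y) ** 2)
print(x_free, x_con, sse_con - sse_free)
```

In the paper's framework the pseudo-observations come with a proper stochastic model rather than an ad hoc large weight, which is what makes rigorous testing and quality description possible.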
Magnetospheric accretion models for T Tauri stars. 1: Balmer line profiles without rotation
NASA Technical Reports Server (NTRS)
Hartmann, Lee; Hewett, Robert; Calvet, Nuria
1994-01-01
We argue that the strong emission lines of T Tauri stars are generally produced in infalling envelopes. Simple models of infall constrained to a dipolar magnetic field geometry explain many peculiarities of observed line profiles that are difficult, if not impossible, to reproduce with wind models. Radiative transfer effects explain why certain lines can appear quite symmetric while other lines simultaneously exhibit inverse P Cygni profiles, without recourse to complicated velocity fields. The success of the infall models in accounting for qualitative features of observed line profiles supports the proposal that stellar magnetospheres disrupt disk accretion in T Tauri stars, that true boundary layers are not usually present in T Tauri stars, and that the observed 'blue veiling' emission arises from the base of the magnetospheric accretion column.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-22
... Administrative Procedure Act (APA), or any other law, to publish general notice of proposed rulemaking.'' The RFA... NPDES general permits are permits, not rulemakings, under the APA and thus not subject to APA rulemaking...
7 CFR 1700.4 - Public comments on proposed rules.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 11 2011-01-01 2011-01-01 false Public comments on proposed rules. 1700.4 Section 1700.4 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE GENERAL INFORMATION General § 1700.4 Public comments on proposed rules. RUS...
Muthukumar, P; Balasubramaniam, P; Ratnavelu, K
2017-07-26
This paper proposes a generalized robust synchronization method for fractional order dynamical systems of different dimensions with mismatched fractional derivatives, in the presence of function uncertainty and external disturbance, by designing a sliding mode controller. Based on the proposed generalized robust synchronization criterion, a novel audio cryptosystem is proposed for sending or sharing voice messages secretly via an insecure channel. Numerical examples are given to verify the effectiveness of the proposed theory. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Homogeneity of a Global Multisatellite Soil Moisture Climate Data Record
NASA Technical Reports Server (NTRS)
Su, Chun-Hsu; Ryu, Dongryeol; Dorigo, Wouter; Zwieback, Simon; Gruber, Alexander; Albergel, Clement; Reichle, Rolf H.; Wagner, Wolfgang
2016-01-01
Climate Data Records (CDRs) that blend multiple satellite products are invaluable for climate studies, trend analysis and risk assessments. Knowledge of any inhomogeneities in a CDR is therefore critical for making correct inferences. This work proposes a methodology to identify the spatiotemporal extent of inhomogeneities in a 36-year, global multisatellite soil moisture CDR that result from changes in the observing systems. Inhomogeneities are detected at up to 24 percent of the tested pixels, with spatial extent varying with satellite changeover times. Nevertheless, the contiguous periods without inhomogeneities at changeover times are generally longer than 10 years. Although the inhomogeneities have a measurable impact on the derived trends, these trends are similar to those observed in ground data and land surface reanalysis, with an average error of less than 0.003 cubic meters per cubic meter per year. These results strengthen the basis for using the product in long-term studies and demonstrate the necessity of homogeneity testing of multisatellite CDRs in general.
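A minimal version of such a homogeneity test at a known satellite changeover time might compare segments of a candidate-minus-reference difference series (the paper's detection scheme is more elaborate; this sketch with invented numbers just shows the break-test idea):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Candidate-minus-reference soil moisture differences with a small bias
# jump introduced at a known satellite changeover date.
n, changeover = 240, 120            # monthly values, break at index 120
diff = rng.normal(0.0, 0.02, n)
diff[changeover:] += 0.015          # 0.015 m^3/m^3 inhomogeneity

# Two-sample test on the segments before and after the changeover;
# a small p-value flags this pixel as inhomogeneous at that break.
t_stat, p_value = stats.ttest_ind(diff[:changeover], diff[changeover:])
inhomogeneous = p_value < 0.01
print(p_value, inhomogeneous)
```

Repeating such a test pixel by pixel at each changeover date is what yields maps of the spatial extent of inhomogeneities like those described above.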
Assessment of corneal properties based on statistical modeling of OCT speckle.
Jesus, Danilo A; Iskander, D Robert
2017-01-01
A new approach to assessing the properties of the corneal micro-structure in vivo, based on the statistical modeling of speckle obtained from Optical Coherence Tomography (OCT), is presented. A number of statistical models were proposed to fit the corneal speckle data obtained from raw OCT images. Short-term changes in corneal properties were studied by inducing corneal swelling, whereas age-related changes were observed by analyzing data from sixty-five subjects aged between twenty-four and seventy-three years. The Generalized Gamma distribution was shown to be the best model, in terms of Akaike's Information Criterion, for fitting the OCT corneal speckle. Its parameters showed statistically significant differences (Kruskal-Wallis, p < 0.001) for short-term and age-related corneal changes. In addition, it was observed that age-related changes influence the corneal biomechanical behaviour when corneal swelling is induced. This study shows that the Generalized Gamma distribution can be utilized to model corneal speckle in OCT in vivo, providing complementary quantified information where the micro-structure of the corneal tissue is of the essence.
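The model-selection step can be reproduced in outline with SciPy's `gengamma` distribution: fit several candidate speckle models by maximum likelihood and rank them by AIC (synthetic amplitudes stand in for real OCT speckle here; the parameter values are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Stand-in for OCT speckle amplitudes: Generalized Gamma samples.
speckle = stats.gengamma.rvs(a=2.0, c=1.5, scale=0.1, size=2000,
                             random_state=rng)

def aic(dist, data, floc=0.0):
    """Fit by maximum likelihood (location pinned at 0) and return AIC."""
    params = dist.fit(data, floc=floc)
    loglik = np.sum(dist.logpdf(data, *params))
    k = len(params) - 1            # floc was fixed, not estimated
    return 2 * k - 2 * loglik

candidates = {"gengamma": stats.gengamma,
              "gamma": stats.gamma,
              "rayleigh": stats.rayleigh}
scores = {name: aic(d, speckle) for name, d in candidates.items()}
best = min(scores, key=scores.get)
print(best, {k: round(v, 1) for k, v in scores.items()})
```

The study's comparison works the same way, with the Generalized Gamma's extra shape parameter paying for itself in likelihood despite the AIC penalty.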
The generalized accessibility and spectral gap of lower hybrid waves in tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Hironori
1994-03-01
The generalized accessibility of lower hybrid waves, primarily in the current drive regime of tokamak plasmas, which may include shifting, either upward or downward, of the parallel refractive index (n∥), is investigated based upon a cold plasma dispersion relation and various geometrical constraint (G.C.) relations imposed on the behavior of n∥. It is shown that n∥ upshifting can be bounded and insufficient to bridge a large spectral gap to cause wave damping, depending upon whether the G.C. relation allows the oblique resonance to occur. The traditional n∥ upshifting mechanism caused by the pitch angle of magnetic field lines is shown to lead to contradictions with experimental observations. An upshifting mechanism brought about by the density gradient along field lines is proposed, which is not inconsistent with experimental observations and provides plausible explanations for some unresolved issues of lower hybrid wave theory, including the generation of 'seed electrons.'
NASA Astrophysics Data System (ADS)
Bania, Piotr; Baranowski, Jerzy
2018-02-01
Quantisation of signals is a ubiquitous property of digital processing. In many cases, it introduces significant difficulties in state estimation and, in consequence, control. Popular approaches either do not properly address the problem of system disturbances or lead to biased estimates. Our intention was to find a method of state estimation for stochastic systems with quantised, discrete-time observations that is free of these drawbacks. We have formulated a general form of the optimal filter derived from a solution of the Fokker-Planck equation. We then propose an approximation method based on Galerkin projections. We illustrate the approach for the Ornstein-Uhlenbeck process, and derive analytic formulae for the approximated optimal filter, also extending the results to a variant with control. Operation is illustrated with numerical experiments and compared with the classical discrete-continuous Kalman filter. The results of the comparison are substantially in favour of our approach, with over 20 times lower mean squared error. The proposed filter is especially effective for signal amplitudes comparable to the quantisation thresholds. Additionally, it was observed that for high orders of approximation, the state estimate is very close to the true process value. The results open possibilities for further analysis, especially for more complex processes.
Loop models of low coronal structures observed by the Normal Incidence X-Ray Telescope (NIXT)
NASA Technical Reports Server (NTRS)
Peres, G.; Reale, F.; Golub, L.
1994-01-01
The X-ray pictures obtained with the Normal Incidence X-Ray Telescope (NIXT), apart from the ubiquitous coronal loops well known from previous X-ray observations, show a new and peculiar morphology: in many active regions there are wide and apparently low-lying areas of intense emission which resemble H alpha plages. By means of hydrostatic models of coronal arches, we analyze the distribution of temperature, density, emission measure, and plasma emissivity in the spectral band to which NIXT is sensitive, and we show that the above morphology can be explained by the characteristics of high pressure loops having a thin region of high surface brightness at the base. We therefore propose that this finding might help to identify high-pressure X-ray emitting coronal regions in NIXT images, and it is in principle applicable to any imaging instrument which has high sensitivity to 10^4 - 10^6 K plasma within a narrow coronal-temperature passband. As a more general result of this study, we propose that the comparison of NIXT observations with models of stationary loops might provide a new diagnostic: the determination of the loop plasma pressure from measurements of brightness distribution along the loop.
Effects of Dissociation/Recombination on the Day–Night Temperature Contrasts of Ultra-hot Jupiters
NASA Astrophysics Data System (ADS)
Komacek, Thaddeus D.; Tan, Xianyu
2018-05-01
Secondary eclipse observations of ultra-hot Jupiters have found evidence that hydrogen is dissociated on their daysides. Additionally, full-phase light curve observations of ultra-hot Jupiters show a smaller day-night emitted flux contrast than that expected from previous theory. Recently, it was proposed by Bell & Cowan (2018) that the heat intake to dissociate hydrogen and heat release due to recombination of dissociated hydrogen can affect the atmospheric circulation of ultra-hot Jupiters. In this work, we add cooling/heating due to dissociation/recombination into the analytic theory of Komacek & Showman (2016) and Zhang & Showman (2017) for the dayside-nightside temperature contrasts of hot Jupiters. We find that at high values of incident stellar flux, the day-night temperature contrast of ultra-hot Jupiters may decrease with increasing incident stellar flux due to dissociation/recombination, the opposite of that expected without including the effects of dissociation/recombination. We propose that a combination of a greater number of full-phase light curve observations of ultra-hot Jupiters and future General Circulation Models that include the effects of dissociation/recombination could determine in detail how the atmospheric circulation of ultra-hot Jupiters differs from that of cooler planets.
The distribution of density in supersonic turbulence
NASA Astrophysics Data System (ADS)
Squire, Jonathan; Hopkins, Philip F.
2017-11-01
We propose a model for the statistics of the mass density in supersonic turbulence, which plays a crucial role in star formation and the physics of the interstellar medium (ISM). The model is derived by considering the density to be arranged as a collection of strong shocks of width ˜ M^{-2}, where M is the turbulent Mach number. With two physically motivated parameters, the model predicts all density statistics for M>1 turbulence: the density probability distribution and its intermittency (deviation from lognormality), the density variance-Mach number relation, power spectra and structure functions. For the proposed model parameters, reasonable agreement is seen between model predictions and numerical simulations, albeit within the large uncertainties associated with current simulation results. More generally, the model could provide a useful framework for more detailed analysis of future simulations and observational data. Due to the simple physical motivations for the model in terms of shocks, it is straightforward to generalize to more complex physical processes, which will be helpful in future more detailed applications to the ISM. We see good qualitative agreement between such extensions and recent simulations of non-isothermal turbulence.
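For reference, the lognormal baseline that the shock model's intermittency is measured against can be written down directly: sigma_s^2 = ln(1 + b^2 M^2) for s = ln(rho/rho0), with the mean fixed by mass conservation. A quick numerical check (b = 0.4 is a typical, assumed forcing parameter, not a value from this paper):

```python
import numpy as np

def lognormal_density_pdf(s, mach, b=0.4):
    """Lognormal model for s = ln(rho/rho0): sigma_s^2 = ln(1 + b^2 M^2),
    with mean s0 = -sigma_s^2/2 fixed by mass conservation <rho> = rho0."""
    var = np.log(1.0 + (b * mach) ** 2)
    s0 = -0.5 * var
    return np.exp(-((s - s0) ** 2) / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

s = np.linspace(-15.0, 10.0, 20001)
ds = s[1] - s[0]
pdf = lognormal_density_pdf(s, mach=10.0)

norm = pdf.sum() * ds                    # should integrate to ~1
mean_rho = (np.exp(s) * pdf).sum() * ds  # mass conservation: <e^s> ~ 1
print(round(norm, 4), round(mean_rho, 4))
```

The shock-based model of the paper predicts systematic departures from this lognormal form, which is exactly what its intermittency statistics quantify.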
Tzallas, A T; Karvelis, P S; Katsis, C D; Fotiadis, D I; Giannopoulos, S; Konitsiotis, S
2006-01-01
The aim of the paper is to analyze transient events in inter-ictal EEG recordings and classify epileptic activity into focal or generalized epilepsy using an automated method. A two-stage approach is proposed. In the first stage the observed transient events of a single channel are classified into four categories: epileptic spike (ES), muscle activity (EMG), eye blinking activity (EOG), and sharp alpha activity (SAA). The process is based on an artificial neural network. Different artificial neural network architectures were tried, and the network having the lowest error was selected using the hold-out approach. In the second stage a knowledge-based system is used to produce a diagnosis of focal or generalized epileptic activity. The classification of transient events achieved high overall accuracy (84.48%), while the knowledge-based system for epilepsy diagnosis correctly classified nine out of ten cases. The proposed method is advantageous since it effectively detects and classifies the undesirable activity into appropriate categories and produces a final outcome related to the existence of epilepsy.
A framework for the direct evaluation of large deviations in non-Markovian processes
NASA Astrophysics Data System (ADS)
Cavallaro, Massimo; Harris, Rosemary J.
2016-11-01
We propose a general framework to simulate stochastic trajectories with arbitrarily long memory dependence and efficiently evaluate large deviation functions associated to time-extensive observables. This extends the ‘cloning’ procedure of Giardiná et al (2006 Phys. Rev. Lett. 96 120603) to non-Markovian systems. We demonstrate the validity of this method by testing non-Markovian variants of an ion-channel model and the totally asymmetric exclusion process, recovering results obtainable by other means.
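The reweighting idea at the heart of the cloning procedure can be shown in a stripped-down setting with i.i.d. increments, where the scaled cumulant generating function has a closed form to compare against (this omits the resampling of clone histories that the non-Markovian extension actually requires):

```python
import numpy as np

def scgf_estimate(k, p=0.3, n_clones=20000, n_steps=200, seed=0):
    """Sketch of the reweighting step behind 'cloning': estimate
    lambda(k) = (1/T) ln <exp(k * sum_t x_t)> for i.i.d. Bernoulli(p)
    increments x_t. Each step, every clone draws an increment and is
    weighted by exp(k*x); the running log of the mean weight accumulates
    the large-deviation function. (In the non-Markovian method, clones
    also carry and copy their full histories when resampled by weight.)"""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_steps):
        x = (rng.random(n_clones) < p).astype(float)
        total += np.log(np.mean(np.exp(k * x)))
    return total / n_steps

k = 1.0
est = scgf_estimate(k)
exact = np.log(0.3 * np.exp(k) + 0.7)  # closed form for i.i.d. increments
print(round(est, 4), round(exact, 4))
```

For correlated or history-dependent dynamics no closed form exists, which is where the clone-resampling machinery of the paper earns its keep.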
Surface singularities in Eddington-inspired Born-Infeld gravity.
Pani, Paolo; Sotiriou, Thomas P
2012-12-21
Eddington-inspired Born-Infeld gravity was recently proposed as an alternative to general relativity that offers a resolution of spacetime singularities. The theory differs from Einstein's gravity only inside matter due to nondynamical degrees of freedom, and it is compatible with all current observations. We show that the theory is reminiscent of Palatini f(R) gravity and that it shares the same pathologies, such as curvature singularities at the surface of polytropic stars and unacceptable Newtonian limit. This casts serious doubt on its viability.
Becker, Cinda
2005-02-21
HHS' inspector general's office recently had some good news for four hospitals--they can use gain-sharing programs to help cut spending on devices. Joane Goodroe, left, is the consultant who helped the facilities devise proposals with safeguards that could satisfy the feds' worries about violating antikickback laws. "This is about paying physicians to take on the extra job of reducing costs," Goodroe says.
A generalized model via random walks for information filtering
NASA Astrophysics Data System (ADS)
Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng
2016-08-01
There may exist a simple general mechanism lurking beneath the collaborative filtering and interdisciplinary physics approaches that have been successfully applied to online e-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of a random walk on bipartite networks. By taking degree information into account, the generalized model can recover collaborative filtering, the interdisciplinary physics approaches, and numerous extensions of them. Furthermore, we analyze the generalized model with single and hybrid degree information in the random-walk process on bipartite networks, and propose a strategy that uses hybrid degree information for objects of different popularity to achieve promising recommendation precision.
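The object-to-user-to-object random walk underlying such models is a few lines of linear algebra. The sketch below (a generic mass-diffusion scorer with an assumed degree exponent `theta`; not the paper's exact hybrid) scores uncollected objects for one user of a toy bipartite network:

```python
import numpy as np

# Toy user-object bipartite adjacency (rows: users, columns: objects).
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

k_user = A.sum(axis=1)   # user degrees
k_obj = A.sum(axis=0)    # object degrees

def recommend(A, u, theta=1.0):
    """Two-step random walk: resource flows from user u's objects to
    users, then back to objects. theta tunes the object-degree
    normalisation (theta=1 gives a mass-diffusion/ProbS-like rule;
    hybrid methods vary this exponent with object popularity)."""
    f = A[u]                              # initial resource on objects
    to_users = A @ (f / k_obj ** theta)   # objects -> users
    scores = A.T @ (to_users / k_user)    # users -> objects
    scores[A[u] > 0] = 0.0                # drop already-collected objects
    return scores

scores = recommend(A, u=0)
best = int(np.argmax(scores))
print(scores, best)
```

Making `theta` depend on object degree, roughly as the proposed strategy does, is what trades popularity bias against accuracy.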
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-31
... the conditions of the Beaufort general permit are stringent enough to comply with State water quality... conditions than what is proposed in the Beaufort general permit to ensure compliance with State water quality... Clean Water Act (CWA or ``the Act''), 33 U.S.C. 1342. State Certification of Beaufort General Permit...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 28 Judicial Administration 1 2010-07-01 2010-07-01 false How does the Attorney General provide an... PROGRAMS AND ACTIVITIES § 30.8 How does the Attorney General provide an opportunity to comment on proposed... Attorney General gives state processes or directly affected state, areawide, regional, and local officials...
Merging K-means with hierarchical clustering for identifying general-shaped groups.
Peterson, Anna D; Ghosh, Arka P; Maitra, Ranjan
2018-01-01
Clustering partitions a dataset such that observations placed together in a group are similar to each other but different from those in other groups. Hierarchical and K-means clustering are two such approaches but have different strengths and weaknesses. For instance, hierarchical clustering identifies groups in a tree-like structure but suffers from computational complexity in large datasets, while K-means clustering is efficient but designed to identify homogeneous, spherically shaped clusters. We present a hybrid non-parametric clustering approach that amalgamates the two methods to identify general-shaped clusters and that can be applied to larger datasets. Specifically, we first partition the dataset into spherical groups using K-means. We next merge these groups using hierarchical methods with a data-driven distance measure as a stopping criterion. Our proposal has the potential to reveal groups with general shapes and structure in a dataset. We demonstrate good performance on several simulated and real datasets.
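The two-stage idea can be sketched with SciPy alone: over-partition with K-means, then merge the cell centroids hierarchically (here with single linkage and a fixed cut at two groups, where the paper uses a data-driven stopping rule):

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
# Two elongated, non-spherical groups that one-shot K-means (K=2)
# would tend to split badly across the long axis.
g1 = np.column_stack([rng.uniform(0, 10, 300), rng.normal(0.0, 0.3, 300)])
g2 = np.column_stack([rng.uniform(0, 10, 300), rng.normal(5.0, 0.3, 300)])
X = np.vstack([g1, g2])
truth = np.repeat([0, 1], 300)

# Stage 1: over-partition into many small spherical cells.
centroids, cell = kmeans2(X, k=12, minit="++", seed=3)

# Stage 2: merge cell centroids with a hierarchical method, cutting the
# tree at two groups.
merged = fcluster(linkage(centroids, method="single"), t=2,
                  criterion="maxclust") - 1
labels = merged[cell]

# Agreement with ground truth, up to a label swap.
acc = max(np.mean(labels == truth), np.mean(labels != truth))
print(acc)
```

Merging centroids rather than points is what keeps the hierarchical stage cheap enough for large datasets.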
NASA Astrophysics Data System (ADS)
Liu, Xiao-Ming; Jiang, Jun; Hong, Ling; Tang, Dafeng
In this paper, a new method of Generalized Cell Mapping with Sampling-Adaptive Interpolation (GCMSAI) is presented in order to enhance the efficiency of computing the one-step probability transition matrix of the Generalized Cell Mapping method (GCM). Integrations over one mapping step are replaced by sampling-adaptive interpolations of third order. An explicit formula for the interpolation error is derived, enabling a sampling-adaptive control that switches back to integration whenever needed to preserve computational accuracy. By applying the proposed method to a two-dimensional forced damped pendulum system, global bifurcations are investigated with observations of boundary metamorphoses, including full-to-partial and partial-to-partial transitions as well as the birth of a fully Wada boundary. Moreover, GCMSAI requires only one-thirtieth to one-fiftieth of the computational time of the previous GCM.
Mir, Aamir; Golden, Barbara L
2016-02-02
The crystal structure of the hammerhead ribozyme bound to the pentavalent transition state analogue vanadate reveals significant rearrangements relative to the previously determined structures. The active site contracts, bringing G10.1 closer to the cleavage site and repositioning a divalent metal ion such that it could, ultimately, interact directly with the scissile phosphate. This ion could also position a water molecule to serve as a general acid in the cleavage reaction. A second divalent ion is observed coordinated to O6 of G12. This metal ion is well-placed to help tune the pKa of G12. On the basis of this crystal structure as well as a wealth of biochemical studies, we propose a mechanism in which G12 serves as the general base and a magnesium-bound water serves as a general acid.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mir, Aamir; Golden, Barbara L.
2015-11-09
The crystal structure of the hammerhead ribozyme bound to the pentavalent transition state analogue vanadate reveals significant rearrangements relative to the previously determined structures. The active site contracts, bringing G10.1 closer to the cleavage site and repositioning a divalent metal ion such that it could, ultimately, interact directly with the scissile phosphate. This ion could also position a water molecule to serve as a general acid in the cleavage reaction. A second divalent ion is observed coordinated to O6 of G12. This metal ion is well-placed to help tune the pKa of G12. Finally, on the basis of this crystal structure as well as a wealth of biochemical studies, we propose a mechanism in which G12 serves as the general base and a magnesium-bound water serves as a general acid.
Vollono, Catello; Testani, Elisa; Losurdo, Anna; Mazza, Salvatore; Della Marca, Giacomo
2013-06-10
We discuss the hypothesis proposed by Engstrom and coworkers that migraineurs have a relative sleep deprivation, which lowers the pain threshold and predisposes to attacks. Previous data indicate that migraineurs have a reduction of Cyclic Alternating Pattern (CAP), an essential mechanism of NREM sleep regulation that dampens the effect of incoming disruptive stimuli and protects sleep. The modifications of CAP observed in migraineurs are similar to those observed in patients with impaired arousal (narcolepsy) and after sleep deprivation. The impairment of this mechanism makes migraineurs more vulnerable to stimuli triggering attacks during sleep, and represents part of a more general vulnerability to incoming stimuli.
NASA Technical Reports Server (NTRS)
Lien, Guo-Yuan; Kalnay, Eugenia; Miyoshi, Takemasa; Huffman, George J.
2016-01-01
Assimilation of satellite precipitation data into numerical models presents several difficulties, with two of the most important being the non-Gaussian error distributions associated with precipitation, and large model and observation errors. As a result, improving the model forecast beyond a few hours by assimilating precipitation has been found to be difficult. To identify the challenges and propose practical solutions to assimilation of precipitation, statistics are calculated for global precipitation in a low-resolution NCEP Global Forecast System (GFS) model and the TRMM Multisatellite Precipitation Analysis (TMPA). The samples are constructed using the same model with the same forecast period, observation variables, and resolution as in the follow-on GFS-TMPA precipitation assimilation experiments presented in the companion paper. The statistical results indicate that the T62 and T126 GFS models generally have positive bias in precipitation compared to the TMPA observations, and that the simulation of the marine stratocumulus precipitation is not realistic in the T62 GFS model. It is necessary to apply to precipitation either the commonly used logarithm transformation or the newly proposed Gaussian transformation to obtain a better relationship between the model and observational precipitation. When the Gaussian transformations are separately applied to the model and observational precipitation, they serve as a bias correction that corrects the amplitude-dependent biases. In addition, using a spatially and/or temporally averaged precipitation variable, such as the 6-h accumulated precipitation, should be advantageous for precipitation assimilation.
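A rank-based (empirical-CDF) Gaussian transformation of the kind discussed can be sketched as follows; the zero-inflated toy rainfall sample and the tie-handling convention are illustrative assumptions, not the exact transformation used in the study:

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_transform(x):
    """Map a sample to standard-normal space via its empirical CDF.

    A simplified stand-in for the proposed Gaussian transformation:
    ties (e.g. the many zero-rain values) share an average rank, so all
    zeros map to a single Gaussian value, mimicking the special handling
    that zero precipitation requires.
    """
    n = len(x)
    cdf = rankdata(x, method="average") / (n + 1)   # values in (0, 1)
    return norm.ppf(cdf)

rng = np.random.default_rng(1)
# Skewed, zero-inflated toy "precipitation": 60% dry, 40% exponential rain.
rain = np.where(rng.random(1000) < 0.6, 0.0, rng.exponential(5.0, 1000))

g = gaussian_transform(rain)
print(g.mean(), g.std())   # mean near 0; the zero ties compress the spread
```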
Fiave, Prosper Agbesi; Sharma, Saloni; Jastorff, Jan; Nelissen, Koen
2018-05-19
Mirror neurons are generally described as a neural substrate hosting shared representations of actions, by simulating or 'mirroring' the actions of others onto the observer's own motor system. Since single neuron recordings are rarely feasible in humans, it has been argued that cross-modal multi-variate pattern analysis (MVPA) of non-invasive fMRI data is a suitable technique to investigate common coding of observed and executed actions, allowing researchers to infer the presence of mirror neurons in the human brain. In an effort to close the gap between monkey electrophysiology and human fMRI data with respect to the mirror neuron system, here we tested this proposal for the first time in the monkey. Rhesus monkeys either performed reach-and-grasp or reach-and-touch motor acts with their right hand in the dark or observed videos of human actors performing similar motor acts. Unimodal decoding showed that both executed or observed motor acts could be decoded from numerous brain regions. Specific portions of rostral parietal, premotor and motor cortices, previously shown to house mirror neurons, in addition to somatosensory regions, yielded significant asymmetric action-specific cross-modal decoding. These results validate the use of cross-modal multi-variate fMRI analyses to probe the representations of own and others' actions in the primate brain and support the proposed mapping of others' actions onto the observer's own motor cortices. Copyright © 2018 Elsevier Inc. All rights reserved.
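The cross-modal decoding logic (train a classifier on patterns from one modality, test it on the other) can be sketched on synthetic data; the voxel patterns, the linear SVM, and the modality-offset model are illustrative assumptions, not the study's pipeline:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)

def patterns(action, modality, n=40, voxels=50):
    """Synthetic voxel patterns: a shared action code plus a
    modality-specific offset (purely illustrative data)."""
    base = np.zeros(voxels)
    base[action::2] = 1.0                            # action-specific code
    shift = 0.5 if modality == "observe" else -0.5   # modality nuisance signal
    return base + shift + 0.3 * rng.normal(size=(n, voxels))

# Train on observed actions, test on executed actions (cross-modal MVPA).
X_train = np.vstack([patterns(0, "observe"), patterns(1, "observe")])
y_train = np.array([0] * 40 + [1] * 40)
X_test = np.vstack([patterns(0, "execute"), patterns(1, "execute")])
y_test = y_train.copy()

clf = SVC(kernel="linear").fit(X_train, y_train)
print(clf.score(X_test, y_test))   # above-chance transfer => shared action code
```

Above-chance cross-modal accuracy here follows because the action code is shared across modalities while the modality offset is orthogonal to it, which is exactly the inference such analyses aim to support.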
General cognitive principles for learning structure in time and space.
Goldstein, Michael H; Waterfall, Heidi R; Lotem, Arnon; Halpern, Joseph Y; Schwade, Jennifer A; Onnis, Luca; Edelman, Shimon
2010-06-01
How are hierarchically structured sequences of objects, events or actions learned from experience and represented in the brain? When several streams of regularities present themselves, which will be learned and which ignored? Can statistical regularities take effect on their own, or are additional factors such as behavioral outcomes expected to influence statistical learning? Answers to these questions are starting to emerge through a convergence of findings from naturalistic observations, behavioral experiments, neurobiological studies, and computational analyses and simulations. We propose that a small set of principles are at work in every situation that involves learning of structure from patterns of experience and outline a general framework that accounts for such learning. (c) 2010 Elsevier Ltd. All rights reserved.
Pozsgay, B; Mestyán, M; Werner, M A; Kormos, M; Zaránd, G; Takács, G
2014-09-12
We study the nonequilibrium time evolution of the spin-1/2 anisotropic Heisenberg (XXZ) spin chain, with a choice of dimer product and Néel states as initial states. We investigate numerically various short-ranged spin correlators in the long-time limit and find that they deviate significantly from predictions based on the generalized Gibbs ensemble (GGE) hypotheses. By computing the asymptotic spin correlators within the recently proposed quench-action formalism [Phys. Rev. Lett. 110, 257203 (2013)], however, we find excellent agreement with the numerical data. We, therefore, conclude that the GGE cannot give a complete description even of local observables, while the quench-action formalism correctly captures the steady state in this case.
Study on sampling of continuous linear system based on generalized Fourier transform
NASA Astrophysics Data System (ADS)
Li, Huiguang
2003-09-01
In the research of signals and systems, a signal's spectrum and a system's frequency characteristic can be studied through the Fourier Transform (FT) and the Laplace Transform (LT). However, some singular signals, such as the impulse function and the signum signal, satisfy neither Riemann nor Lebesgue integration; in mathematics they are called generalized functions. This paper introduces a new definition, the Generalized Fourier Transform (GFT), and discusses generalized functions, the Fourier transform, and the Laplace transform within a unified framework. When a continuous linear system is sampled, the paper proposes a new method to judge whether the spectrum will overlap after the generalized Fourier transform. Causal and non-causal systems are studied, and a sampling method that maintains the system's dynamic performance is presented. The results apply to both ordinary sampling and non-Nyquist sampling, and they also have practical significance for research on the discretization of continuous linear systems and on non-Nyquist sampling of signals and systems. In particular, the condition for ensuring controllability and observability of MIMO continuous systems in references 13 and 14 is an application example of this paper's results.
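The spectrum-overlap (aliasing) question for sampled systems can be illustrated numerically: a 6 Hz tone sampled at 10 Hz, below its 12 Hz Nyquist rate, folds to 4 Hz (the signal and rates are illustrative, not from the paper).

```python
import numpy as np

# Sampling a 6 Hz cosine at fs = 10 Hz: the spectrum overlaps and the
# tone appears at the aliased frequency |6 - 10| = 4 Hz.
fs, f0, n = 10.0, 6.0, 100
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f0 * t)

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak = freqs[np.argmax(spec)]
print(peak)   # 4.0, the aliased frequency
```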
Regression analysis of sparse asynchronous longitudinal data.
Cao, Hongyuan; Zeng, Donglin; Fine, Jason P
2015-09-01
We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies show that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus.
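The kernel-weighting idea can be sketched for the simplest case (identity link, time-invariant coefficients): pair every response time with every covariate time within a subject and downweight mismatched pairs with a kernel. The kernel choice, bandwidth, and simulated processes are illustrative assumptions.

```python
import numpy as np

def kernel_ls(resp_times, Y, cov_times, X, h=0.5):
    """Kernel-weighted least squares for asynchronous observations.

    A toy version of kernel-weighted estimating equations: an
    Epanechnikov kernel in the time mismatch (t - s)/h weights each
    response/covariate pair (identity link, time-invariant coefficients).
    """
    num = np.zeros((2, 2))
    vec = np.zeros(2)
    for t, y in zip(resp_times, Y):
        for s, x in zip(cov_times, X):
            u = (t - s) / h
            w = max(0.0, 0.75 * (1 - u * u))   # Epanechnikov kernel weight
            z = np.array([1.0, x])             # intercept + covariate
            num += w * np.outer(z, z)
            vec += w * y * z
    return np.linalg.solve(num, vec)           # (intercept, slope)

rng = np.random.default_rng(2)
cov_times = np.sort(rng.uniform(0, 10, 40))
X = np.sin(cov_times) + 0.1 * rng.normal(size=40)   # covariate process
resp_times = np.sort(rng.uniform(0, 10, 30))        # mismatched response times
# Responses driven by the latent process with intercept 1 and slope 2.
Y = 1.0 + 2.0 * np.sin(resp_times) + 0.1 * rng.normal(size=30)

print(kernel_ls(resp_times, Y, cov_times, X))  # close to (1, 2)
```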
High and Dry: Trading Water Vapor, Fuel and Observing Time for SOFIA
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Kurklu, Elif
2005-01-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) is NASA's next generation airborne astronomical observatory. The facility consists of a 747-SP modified to accommodate a 2.5 meter telescope. SOFIA is expected to fly an average of 140 science flights per year over its 20-year lifetime, and will commence operations in early 2005. The SOFIA telescope is mounted aft of the wings on the port side of the aircraft and is articulated through a range of 20 deg to 60 deg of elevation. A significant problem in future SOFIA operations is that of scheduling Facility Instrument (FI) flights in support of the SOFIA General Investigator (GI) program. GIs are expected to propose small numbers of observations, and many observations must be grouped together to make up single flights. Approximately 70 GI flights per year are expected, with 5-15 observations per flight.
NASA Astrophysics Data System (ADS)
Faulk, Sean P.; Mitchell, Jonathan L.; Moon, Seulgi; Lora, Juan Manuel
2016-10-01
Titan's zonal-mean precipitation behavior has been widely investigated using general circulation models (GCMs), but the spatial and temporal variability of rainfall in Titan's active hydrologic cycle is less well understood. We conduct statistical analyses of rainfall, diagnosed from GCM simulations of Titan's atmosphere, to determine storm intensity and frequency. Intense storms of methane have been proposed to be critical for enabling mechanical erosion of Titan's surface, as indicated by observations of dendritic valley networks. Using precipitation outputs from the Titan Atmospheric Model (TAM), a GCM shown to realistically simulate many features of Titan's atmosphere, we quantify the precipitation variability within eight separate latitude bins for a variety of initial surface liquid distributions. We find that while the overall wettest regions are indeed the poles, the most intense rainfall generally occurs in the high mid-latitudes, between 45 and 67.5 degrees, consistent with recent geomorphological observations of alluvial fans concentrated at those latitudes. We also find that precipitation rates necessary for surface erosion, as estimated by Perron et al. (2006) J. Geophys. Res. 111, E11001, frequently occur at all latitudes, with recurrence intervals of less than one Titan year. Such analysis is crucial to understanding the complex interaction between Titan's atmosphere and surface and to defining the influence of precipitation on observed geomorphology.
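Recurrence intervals of threshold-exceeding storms, of the sort compared against the erosion thresholds of Perron et al., can be estimated from a rainfall series with a short helper; the toy series and episode-counting convention are illustrative assumptions:

```python
import numpy as np

def recurrence_interval(precip, dt, threshold):
    """Average time between independent exceedances of `threshold`.

    precip: precipitation-rate series sampled every `dt` time units.
    Contiguous runs above the threshold count as single storm events,
    a simplified stand-in for the storm statistics described above.
    """
    wet = precip > threshold
    starts = np.count_nonzero(wet[1:] & ~wet[:-1]) + int(wet[0])
    total_time = len(precip) * dt
    return np.inf if starts == 0 else total_time / starts

rng = np.random.default_rng(5)
series = rng.exponential(0.5, 10_000)     # toy rainfall rates
print(recurrence_interval(series, dt=1.0, threshold=3.0))
```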
Food parenting measurement issues: working group consensus report.
Hughes, Sheryl O; Frankel, Leslie A; Beltran, Alicia; Hodges, Eric; Hoerr, Sharon; Lumeng, Julie; Tovar, Alison; Kremers, Stef
2013-08-01
Childhood obesity is a growing problem. As more researchers become involved in the study of parenting influences on childhood obesity, there appears to be a lack of agreement regarding the most important parenting constructs of interest, definitions of those constructs, and measurement of those constructs in a consistent manner across studies. This article aims to summarize findings from a working group that convened specifically to discuss measurement issues related to parental influences on childhood obesity. Six subgroups were formed to address key measurement issues. The conceptualization subgroup proposed to define and distinguish constructs of general parenting styles, feeding styles, and food parenting practices with the goal of understanding interrelating levels of parental influence on child eating behaviors. The observational subgroup identified the need to map constructs for use in coding direct observations and create observational measures that can capture the bidirectional effects of parent-child interactions. The self-regulation subgroup proposed an operational definition of child self-regulation of energy intake and suggested future measures of self-regulation across different stages of development. The translational/community involvement subgroup proposed the involvement of community in the development of surveys so that measures adequately reflect cultural understanding and practices of the community. The qualitative methods subgroup proposed qualitative methods as a way to better understand the breadth of food parenting practices and motivations for the use of such practices. The longitudinal subgroup stressed the importance of food parenting measures sensitive to change for use in longitudinal studies. In the creation of new measures, it is important to consider cultural sensitivity and context-specific food parenting domains. Moderating variables such as child temperament and child food preferences should be considered in models.
Food Parenting Measurement Issues: Working Group Consensus Report
Frankel, Leslie A.; Beltran, Alicia; Hodges, Eric; Hoerr, Sharon; Lumeng, Julie; Tovar, Alison; Kremers, Stef
2013-01-01
PMID:23944928
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-25
... Statement and Public Meetings for the General Electric-Hitachi Global Laser Enrichment, LLC Proposed Laser... the proposed General Electric-Hitachi (GEH) Global Laser Enrichment (GLE) Uranium Enrichment Facility... to locate the facility on the existing General Electric Company (GE) site near Wilmington, North...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-12
...] Environmental Impact Statement for Proposed General Management Plan, Pinnacles National Monument, San Benito and... Environmental Impact Statement. SUMMARY: The National Park Service is terminating the preparation of an Environmental Impact Statement (EIS) for the General Management Plan, Pinnacles National Monument, California. A...
Spatial Searching for Solar Physics Data
NASA Astrophysics Data System (ADS)
Hourcle, Joseph; Spencer, J. L.; The VSO Team
2013-07-01
The Virtual Solar Observatory allows searching across many collections of solar physics data, but does not yet allow a researcher to search based on the location and extent of the observation, other than by selecting general categories such as full disk or off limb. High-resolution instruments that observe only a portion of the solar disk require greater specificity than is currently available. We believe that finer-grained spatial searching will allow improved access to data from existing instruments such as TRACE, XRT, and SOT, as well as from upcoming missions such as ATST and IRIS. Our proposed solution should also help scientists to search on the field of view of full-disk images taken from outside the Sun-Earth line, such as STEREO/EUVI and observations from the upcoming Solar Orbiter and Solar Probe Plus missions. We present our current work on cataloging sub-field images for spatial searching so that researchers can more easily search for observations of a given feature of interest, with the intent of soliciting information about researchers' requirements and recommendations for further improvements.
The proposed general practice descriptors--will they influence preventive medicine?
Moorhead, R G
1989-01-01
The proposed descriptor bill to change Medicare rebates for general practice patients could benefit preventive medicine in general practice. This seems possible through rewarding practitioners who spend more time with their patients and through the positive effects of continuing medical education. However, future governments could whittle away any rewards for these practitioners, and the auditing of general practices could become a method of political control of Australian general practice.
Science with the Space Infrared Telescope Facility
NASA Technical Reports Server (NTRS)
Roellig, Thomas L.
2003-01-01
The Space Infrared Telescope Facility (SIRTF), the fourth and final member of NASA's series of Great Observatories, is scheduled to launch on April 15, 2003. Together with the Hubble Space Telescope, the Compton Gamma Ray Observatory, and the Chandra X-ray Observatory, this series of observatories offers observational capabilities across the electromagnetic spectrum from the infrared to high-energy gamma rays. SIRTF is based on three focal plane instruments - an infrared spectrograph and two infrared imagers - coupled to a superfluid-helium cooled telescope to achieve unprecedented sensitivity from 3 to 180 microns. Although SIRTF is a powerful general-purpose infrared observatory, its design was based on the capability to address four broad science themes: (1) understanding the structure and composition of the early universe, (2) understanding the nature of brown dwarfs and super-planets, (3) probing protostellar, protoplanetary, and planetary debris disk systems, and (4) understanding the origin and structure of ultraluminous infrared galaxies and active galactic nuclei. This talk will address the design and capabilities of the SIRTF observatory, provide an overview of some of the initial science investigations planned by the SIRTF Guaranteed Time Observers, and give a brief overview of the General Observer proposal process.
A model for AGN variability on multiple time-scales
NASA Astrophysics Data System (ADS)
Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.
2018-05-01
We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/L_Edd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/L_Edd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
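The PDF+PSD light-curve simulation can be sketched by combining a Timmer and Konig style Gaussian realisation of a power-law PSD with a rank remapping onto draws from a target PDF; the lognormal parameters and the single remapping pass (rather than a full iterative amplitude adjustment) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 4096, 2.0          # samples, power-law PSD slope (P ~ f**-beta)

# 1. Gaussian series with a power-law PSD: random phases, f**(-beta/2)
#    amplitudes, inverse FFT (Timmer & Konig style).
freqs = np.fft.rfftfreq(n, d=1.0)[1:]
amp = freqs ** (-beta / 2)
phases = rng.uniform(0, 2 * np.pi, len(freqs))
spec = np.concatenate([[0], amp * np.exp(1j * phases)])
g = np.fft.irfft(spec, n)

# 2. Impose a target PDF (lognormal Eddington ratios, parameters
#    illustrative) by rank remapping: the value with rank r in the
#    Gaussian series receives the rank-r target value. This keeps the
#    PDF exactly and the PSD shape approximately.
target = np.sort(rng.lognormal(mean=-2.0, sigma=1.0, size=n))
lc = np.empty(n)
lc[np.argsort(g)] = target

print(lc.min(), lc.max())   # strictly positive, heavy-tailed light curve
```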
Adaptive Sequential Monte Carlo for Multiple Changepoint Analysis
Heard, Nicholas A.; Turcotte, Melissa J. M.
2016-05-21
Process monitoring and control requires detection of structural changes in a data stream in real time. This paper introduces an efficient sequential Monte Carlo algorithm designed for learning unknown changepoints in continuous time. The method is intuitively simple: new changepoints for the latest window of data are proposed by conditioning only on data observed since the most recent estimated changepoint, as these observations carry most of the information about the current state of the process. The proposed method shows improved performance over the current state of the art. Another advantage of the proposed algorithm is that it can be made adaptive, varying the number of particles according to the apparent local complexity of the target changepoint probability distribution. This saves valuable computing time when changes in the changepoint distribution are negligible, and enables re-balancing of the importance weights of existing particles when a significant change in the target distribution is encountered. The plain and adaptive versions of the method are illustrated using the canonical continuous time changepoint problem of inferring the intensity of an inhomogeneous Poisson process, although the method is generally applicable to any changepoint problem. Performance is demonstrated using both conjugate and non-conjugate Bayesian models for the intensity. Lastly, appendices to the article are available online, illustrating the method on other models and applications.
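A compact sketch of the core idea: each particle is a candidate time of the most recent changepoint, and its weight is updated using only the data observed since that time. The discrete-time Poisson counts, Gamma prior, hazard rate, and weight-based pruning are illustrative stand-ins for the paper's continuous-time model and adaptive particle scheme.

```python
import numpy as np
from scipy.special import gammaln

def nb_loglik(s, n, a=1.0, b=1.0):
    """Log marginal likelihood (up to shared factorial terms) of n Poisson
    counts with sum s under a Gamma(a, b) prior on the rate."""
    return gammaln(a + s) - gammaln(a) + a * np.log(b) - (a + s) * np.log(b + n)

def detect_changepoints(y, hazard=0.05, max_particles=50):
    """Particle filter over the time of the most recent changepoint."""
    particles = {0: 0.0}                 # last-changepoint time -> log weight
    map_history = []
    for t in range(1, len(y) + 1):
        new = {}
        for tau, lw in particles.items():
            # predictive log-prob of y[t-1], conditioning only on the
            # data observed since this particle's candidate changepoint
            pred = nb_loglik(y[tau:t].sum(), t - tau) \
                 - nb_loglik(y[tau:t - 1].sum(), t - tau - 1)
            new[tau] = lw + pred + np.log(1 - hazard)
        if t > 1:                        # propose a changepoint just before t
            new[t - 1] = np.logaddexp.reduce(list(particles.values())) \
                + nb_loglik(y[t - 1], 1) + np.log(hazard)
        lws = np.array(list(new.values()))
        lws -= np.logaddexp.reduce(lws)  # normalize
        # prune low-weight particles: a crude adaptive particle count
        keep = sorted(zip(lws, new.keys()), reverse=True)[:max_particles]
        particles = {tau: lw for lw, tau in keep}
        map_history.append(max(particles, key=particles.get))
    return map_history                   # MAP last-changepoint time per step

rng = np.random.default_rng(4)
y = np.concatenate([rng.poisson(2.0, 100), rng.poisson(10.0, 100)])
map_cp = detect_changepoints(y)
print(map_cp[-1])   # near 100, where the simulated rate jumps
```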
Silicification in Grasses: Variation between Different Cell Types
Kumar, Santosh; Soukup, Milan; Elbaum, Rivka
2017-01-01
Plants take up silicon as mono-silicic acid, which is released to soil by the weathering of silicate minerals. Silicic acid can be taken up by plant roots passively or actively, and later it is deposited in its polymerized form as amorphous hydrated silica. Major silica depositions in grasses occur in root endodermis, leaf epidermal cells, and outer epidermal cells of inflorescence bracts. Debates are rife about the mechanism of silica deposition, and two contrasting scenarios are often proposed to explain it. According to the passive mode of silicification, silica deposition is a result of silicic acid condensation due to dehydration, such as during transpirational loss of water from the aboveground organs. In general, silicification and transpiration are positively correlated, and continued silicification is sometimes observed after cell and tissue maturity. The other mode of silicification proposes the involvement of some biological factors, and is based on observations that silicification is not necessarily coupled with transpiration. Here, we review evidence for both mechanisms of silicification, and propose that the deposition mechanism is specific to the cell type. Considering all the cell types together, our conclusion is that grass silica deposition can be divided into three modes: spontaneous cell wall silicification, directed cell wall silicification, and directed paramural silicification in silica cells. PMID:28400787
Popularity Modeling for Mobile Apps: A Sequential Approach.
Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong
2015-07-01
The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on a hidden Markov model (HMM) for modeling the popularity information of mobile Apps in support of mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite-based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
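A toy discrete HMM over popularity observations illustrates how sequence likelihoods from such a model can flag anomalous popularity patterns (as in the spam and fraud detection applications mentioned); all parameters here are invented for illustration, not those of the PHMM:

```python
import numpy as np

# Toy popularity HMM: hidden adoption phases emit discretized chart-rank
# bands (0 = top, 1 = mid, 2 = low).
pi = np.array([0.8, 0.2])            # initial phase probabilities
A = np.array([[0.9, 0.1],            # phase transition matrix
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],       # P(rank band | phase)
              [0.1, 0.3, 0.6]])

def forward_loglik(obs):
    """Scaled forward algorithm: log-likelihood of an observation sequence."""
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    ll = np.log(s)
    alpha = alpha / s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        ll += np.log(s)                # accumulate scaling factors
        alpha = alpha / s
    return ll

steady = forward_loglik([0, 0, 0, 0])  # plausible sustained top ranking
jumpy = forward_loglik([0, 2, 0, 2])   # erratic pattern, e.g. ranking fraud
print(steady, jumpy)                   # the erratic sequence is less likely
```

Scoring sequences against a fitted model in this way is one simple route to the anomaly-detection uses the abstract lists.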
A Decentralized VPN Service over Generalized Mobile Ad-Hoc Networks
NASA Astrophysics Data System (ADS)
Fujita, Sho; Shima, Keiichi; Uo, Yojiro; Esaki, Hiroshi
We present a decentralized VPN service that can be built over generalized mobile ad-hoc networks (Generalized MANETs), in which topologies can be represented as time-varying directed multigraphs. We address wireless ad-hoc networks and overlay ad-hoc networks as instances of Generalized MANETs. We first propose an architecture that operates on various kinds of networks through a single set of operations. Then, we design and implement a decentralized VPN service on the proposed architecture. Through the development and operation of a prototype system, we found that the proposed architecture makes the VPN service applicable to each instance of Generalized MANETs, and that the VPN service allows unmodified applications to operate on these networks.
Solar System Observing with the Space Infrared Telescope Facility (SIRTF)
NASA Technical Reports Server (NTRS)
Cleve, J. Van; Meadows, V. S.; Stansberry, J.
2003-01-01
SIRTF is NASA's Space Infrared Telescope Facility. Currently planned for launch on 15 Apr 2003, it is the final element in NASA's Great Observatories Program. SIRTF has an 85 cm diameter f/12 lightweight beryllium telescope, cooled to less than 5.5 K. It is diffraction-limited at 6.5 microns, and has wavelength coverage from 3-180 microns. Its estimated lifetime (limited by cryogen) is 2.5 years at minimum, with a goal of 5+ years. SIRTF has three instruments, IRAC, IRS, and MIPS. IRAC (InfraRed Array Camera) provides simultaneous images at wavelengths of 3.6, 4.5, 5.8, and 8.0 microns. IRS (InfraRed Spectrograph) has 4 modules providing low-resolution (R=60-120) spectra from 5.3 to 40 microns, high-resolution (R=600) spectra from 10 to 37 microns, and an autonomous target acquisition system (PeakUp) which includes small-field imaging at 15 microns. MIPS (Multiband Imaging Photometer for SIRTF) does imaging photometry at 24, 70, and 160 microns and low-resolution (R=15-25) spectroscopy (SED) between 55 and 96 microns. The SIRTF Guaranteed Time Observers (GTOs) are planning to observe Outer Solar System satellites and planets, extinct comets and low-albedo asteroids, Centaurs and Kuiper Belt Objects, cometary dust trails, and a few active short-period comets. The GTO programs are listed in detail in the SIRTF Reserved Observations Catalog (ROC). We would like to emphasize that there remain many interesting subjects for the General Observers (GO). Proposal success for the planetary observer community in the first SIRTF GO proposal cycle (GO-1) determines expectations for future GO calls and Solar System use of SIRTF, so we would like to promote a strong set of planetary GO-1 proposals. Towards that end, we present this poster, and we will convene a Solar System GO workshop 3.5 months after launch.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-06
... NUCLEAR REGULATORY COMMISSION [NRC-2009-0157] General Electric-Hitachi Global Laser Enrichment... Impact Statement (EIS) for the proposed General Electric- Hitachi Global Laser Enrichment, LLC (GLE... issue a license to GLE, pursuant to Title 10 of the Code of Federal Regulations (10 CFR) parts 30, 40...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-23
... the Attorney General, Department of Justice. ACTION: Proposed rule. SUMMARY: This rule proposes to... jurisdiction within the tribe's Indian country, and for the Attorney General to decide whether to consent to... request, and after consultation between the tribe and the Attorney General and consent to Federal...
Constraints on dark matter from intergalactic radiation
NASA Technical Reports Server (NTRS)
Overduin, J. M.; Wesson, P. S.
1992-01-01
Several of the dark matter candidates that have been proposed are believed to be unstable to decay, which would contribute photons to the radiation field between galaxies. The main candidates of this type are light neutrinos and axions, primordial mini-black holes, and a nonzero 'vacuum' energy. All of these can be constrained in nature by observational data on the extragalactic background light and the microwave background radiation. Black holes and the vacuum can be ruled out as significant contributors to the 'missing mass'. Light axions are also unlikely candidates; however, those with extremely small rest energies (the so-called 'invisible' axions) remain feasible. Light neutrinos, like those proposed by Sciama, are marginally viable. In general, we believe that the intergalactic radiation field is an important way of constraining all types of dark matter.
Constitutive acoustic-emission elastic-stress behavior of magnesium alloy
NASA Technical Reports Server (NTRS)
Williams, J. H., Jr.; Emerson, G. P.
1977-01-01
Repeated loading and unloading of a magnesium alloy below the macroscopic yield stress result in continuous acoustic emissions which are generally repeatable for a given specimen and which are reproducible between different specimens having the same load history. An acoustic emission Bauschinger strain model is proposed to describe the unloading emission behavior. For the limited range of stress examined, loading and unloading stress delays of the order of 50 MN/sq m are observed, and they appear to be dependent upon the direction of loading, the stress rate, and the stress history. The stress delay is hypothesized to be the manifestation of an effective friction stress. The existence of acoustic emission elastic stress constitutive relations is concluded, which provides support for a previously proposed concept for the monitoring of elastic stresses by acoustic emission.
NASA Astrophysics Data System (ADS)
Leveuf, Louis; Navrátil, Libor; Le Saux, Vincent; Marco, Yann; Olhagaray, Jérôme; Leclercq, Sylvain
2018-01-01
A constitutive model for the cyclic behaviour of short carbon fibre-reinforced thermoplastics for aeronautical applications is proposed. First, an extended experimental database is generated in order to highlight the specificities of the studied material. This database is composed of complex tests and is used to design a relevant constitutive model able to capture the cyclic behaviour of the material. A general 3D formulation of the model is then proposed, and an identification strategy is defined to identify its parameters. Finally, a validation of the identification is performed by challenging the model's predictions against tests that were not used for the identification. An excellent agreement between the numerical results and the experimental data is observed, revealing the capabilities of the model.
NASA Technical Reports Server (NTRS)
1974-01-01
A number of general studies that were proposed for the PPEPL-SHUTTLE program are considered in qualitative detail from both the theoretical and practical points of view. The selection of experimental programs was restricted to those which may be considered active as opposed to refinements of the passive observational programs done previously. It is concluded that, while these new studies were scientifically worthwhile and could be performed in principle, in most cases insufficient attention was paid to the practical details of the experiments. Several specific areas of study, stressing in particular the practical feasibility of the proposed experiments, are recommended. In addition, recommendations are made for further theoretical study, where appropriate. For Vol. 1, see N74-28169; for Vol. 2, see N74-28170.
Cryptanalysis of SFLASH with Slightly Modified Parameters
NASA Astrophysics Data System (ADS)
Dubois, Vivien; Fouque, Pierre-Alain; Stern, Jacques
SFLASH is a signature scheme which belongs to a family of multivariate schemes proposed by Patarin et al. in 1998 [9]. The SFLASH scheme itself was designed in 2001 [8] and was selected in 2003 by the NESSIE European Consortium [6] as the best known solution for implementation on low cost smart cards. In this paper, we show that slight modifications of the parameters of SFLASH within the general family initially proposed render the scheme insecure. The attack uses simple linear algebra and makes it possible to forge a signature for an arbitrary message in a matter of minutes for practical parameters, using only the public key. Although SFLASH itself is not amenable to our attack, it is worrying to observe that no rationale was ever offered for this "lucky" choice of parameters.
Exponentiated power Lindley distribution.
Ashour, Samir K; Eltehiwy, Mahmoud A
2015-11-01
A new generalization of the Lindley distribution was recently proposed by Ghitany et al. [1], called the power Lindley distribution. Another generalization of the Lindley distribution was introduced by Nadarajah et al. [2], named the generalized Lindley distribution. This paper proposes a further generalization of the Lindley distribution which subsumes both. We refer to this new generalization as the exponentiated power Lindley distribution. The new distribution is important since it contains as special sub-models some widely known distributions in addition to the above two models, such as the Lindley distribution among many others. It also provides more flexibility to analyze complex real data sets. We study some statistical properties of the new distribution. We discuss maximum likelihood estimation of the distribution parameters. Least squares estimation is also used to evaluate the parameters. Three algorithms are proposed for generating random data from the proposed distribution. An application of the model to a real data set is analyzed using the new distribution, which shows that the exponentiated power Lindley distribution can be used quite effectively in analyzing real lifetime data.
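The abstract does not spell out the three generation algorithms, but since the exponentiated power Lindley CDF is simply the power Lindley CDF of Ghitany et al. [1] raised to the power α, one of them can plausibly be sketched as inverse-transform sampling. The bisection inversion below is our illustrative choice, not necessarily the authors' algorithm:

```python
import math
import random

def pl_cdf(x, theta, beta):
    # CDF of the power Lindley distribution (Ghitany et al. [1]):
    # F(x) = 1 - (1 + theta*x**beta / (theta + 1)) * exp(-theta*x**beta)
    t = theta * x ** beta
    return 1.0 - (1.0 + t / (theta + 1.0)) * math.exp(-t)

def epl_cdf(x, theta, beta, alpha):
    # Exponentiating the power Lindley CDF gives the new distribution.
    return pl_cdf(x, theta, beta) ** alpha

def epl_sample(theta, beta, alpha, rng=random.random):
    # Inverse-transform sampling: F(x)**alpha = u  <=>  F(x) = u**(1/alpha),
    # inverted numerically by bracketing and bisection.
    u = rng() ** (1.0 / alpha)
    lo, hi = 0.0, 1.0
    while pl_cdf(hi, theta, beta) < u:   # grow the bracket until it contains the root
        hi *= 2.0
    for _ in range(80):                  # bisect to high precision
        mid = 0.5 * (lo + hi)
        if pl_cdf(mid, theta, beta) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Setting α = 1 recovers the power Lindley distribution, and β = 1 in turn recovers the classical Lindley distribution, which is the nesting the abstract describes.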
Impact localization on composite structures using time difference and MUSIC approach
NASA Astrophysics Data System (ADS)
Zhong, Yongteng; Xiang, Jiawei
2017-05-01
A 1-D uniform linear array (ULA) suffers from the half-plane mirror effect, which does not allow discriminating between a target placed above the array and a target placed below it. This paper presents time difference (TD) and multiple signal classification (MUSIC) based omni-directional impact localization on a large stiffened composite structure using an improved linear array, which is able to perform omni-directional 360° localization. This array contains 2M+3 PZT sensors, where 2M+1 PZT sensors are arranged as a uniform linear array and the other two PZT sensors are placed above and below the array. First, the arrival times of the impact signals observed by these two extra sensors are determined using the wavelet transform. By comparing the two arrival times, the general direction range of the impact source, 0° to 180° or 180° to 360°, can be decided. Then, a two-dimensional multiple signal classification (2D-MUSIC) based spatial spectrum formula using the uniform linear array is applied for impact localization within that direction range. When the arrival time of the impact signal observed by the upper PZT equals that of the lower PZT, the direction lies on the x axis (0° or 180°), and a time-difference-based MUSIC method is presented to locate the impact position. To verify the proposed approach, it is applied to a composite structure. The localization results are in good agreement with the actual impact positions.
Towards General Evaluation of Intelligent Systems: Lessons Learned from Reproducing AIQ Test Results
NASA Astrophysics Data System (ADS)
Vadinský, Ondřej
2018-03-01
This paper attempts to replicate the results of evaluating several artificial agents using the Algorithmic Intelligence Quotient test originally reported by Legg and Veness. Three experiments were conducted: one using default settings, one in which the action space was varied, and one in which the observation space was varied. While the performance of freq, Q0, Qλ, and HLQλ corresponded well with the original results, the resulting values differed when using MC-AIXI. Varying the observation space seems to have no qualitative impact on the results as reported, while (contrary to the original results) varying the action space seems to have some impact. An analysis of the impact of modifying the parameters of MC-AIXI on its performance in the default settings was carried out with the help of data mining techniques used to identify highly performing configurations. Overall, the Algorithmic Intelligence Quotient test seems to be reliable; however, as a general artificial intelligence evaluation method it has several limits. The test is dependent on the chosen reference machine and also sensitive to changes to its settings. It brings out some differences among agents; however, since these are limited in size, the test setting may not yet be sufficiently complex. A demanding parameter sweep is needed to thoroughly evaluate configurable agents, which, together with the test format, further highlights the computational requirements of an agent. These and other issues are discussed in the paper along with proposals suggesting how to alleviate them. An implementation of some of the proposals is also demonstrated.
The string-junction picture of multiquark states: an update
NASA Astrophysics Data System (ADS)
Rossi, G. C.; Veneziano, G.
2016-06-01
We recall and update, both theoretically and phenomenologically, our (nearly) forty-year-old proposal of a string junction as a necessary complement to the conventional classification of hadrons based just on their quark-antiquark constituents. In that proposal single (though in general metastable) hadronic states are associated with "irreducible" gauge-invariant operators consisting of Wilson lines (visualized as strings of color flux tubes) that may either end on a quark or an antiquark, or annihilate in triplets at a junction J or an anti-junction J̄. For the junction-free sector (ordinary qq̄ mesons and glueballs) the picture is supported by large-N (number of colors) considerations as well as by a lattice strong-coupling expansion. Both imply the famous OZI rule suppressing quark-antiquark annihilation diagrams. For hadrons with J and/or J̄ constituents the same expansions support our proposal, including its generalization of the OZI rule to the suppression of J-J̄ annihilation diagrams. Such a rule implies that hadrons with junctions are "mesophobic" and thus unusually narrow if they are below threshold for decaying into as many baryons as their total number of junctions (two for a tetraquark, three for a pentaquark). Experimental support for our claim, based on the observation that narrow multiquark states typically lie below (well above) the relevant baryonic (mesonic) thresholds, will be presented.
Updating the OMERACT filter: implications for patient-reported outcomes.
Kirwan, John R; Bartlett, Susan J; Beaton, Dorcas E; Boers, Maarten; Bosworth, Ailsa; Brooks, Peter M; Choy, Ernest; de Wit, Maarten; Guillemin, Francis; Hewlett, Sarah; Kvien, Tore K; Landewé, Robert B; Leong, Amye L; Lyddiatt, Anne; March, Lyn; May, James; Montie, Pamela Lesley; Nikaï, Enkeleida; Richards, Pam; Voshaar, Marieke M J H; Smeets, Wilma; Strand, Vibeke; Tugwell, Peter; Gossec, Laure
2014-05-01
At a previous Outcome Measures in Rheumatology (OMERACT) meeting, participants reflected on the underlying methods of patient-reported outcome (PRO) instrument development. The participants requested proposals for more explicit instrument development protocols that would contribute to an enhanced version of the "Truth" statement in the OMERACT Filter, a widely used guide for outcome validation. In the present OMERACT session, we explored to what extent these new Filter 2.0 proposals were practicable, feasible, and already being applied. Following overview presentations, discussion groups critically reviewed the extent to which case studies of current OMERACT Working Groups complied with or negated the proposed PRO development framework, whether these observations had a more general application, and what issues remained to be resolved. Several aspects of PRO development were recognized as particularly important, and the need to directly involve patients at every stage of an iterative PRO development program was endorsed. This included recognition that patients contribute as partners in the research and not merely as subjects. Correct communication of concepts with the words used in questionnaires was central to their performance as measuring instruments, and ensuring this understanding crossed cultural and linguistic boundaries was important in international studies or comparisons. Participants recognized, endorsed, and were generally already putting into practice the principles of PRO development presented in the plenary session. Further work is needed on some existing instruments and on establishing widespread good practice for working in close collaboration with patients.
Wormhole and entanglement (non-)detection in the ER=EPR correspondence
Bao, Ning; Pollack, Jason; Remmen, Grant N.
2015-11-19
The recently proposed ER=EPR correspondence postulates the existence of wormholes (Einstein-Rosen bridges) between entangled states (such as EPR pairs). Entanglement is famously known to be unobservable in quantum mechanics, in that there exists no observable (or, equivalently, projector) that can accurately pick out whether a generic state is entangled. Many features of the geometry of spacetime, however, are observables, so one might worry that the presence or absence of a wormhole could identify an entangled state in ER=EPR, violating quantum mechanics, specifically, the property of state-independence of observables. In this note, we establish that this cannot occur: there is no measurement in general relativity that unambiguously detects the presence of a generic wormhole geometry. Furthermore, this statement is the ER=EPR dual of the undetectability of entanglement.
Human mobility in a continuum approach.
Simini, Filippo; Maritan, Amos; Néda, Zoltán
2013-01-01
Human mobility is investigated using a continuum approach that makes it possible to calculate the probability of observing a trip to any arbitrary region, and the fluxes between any two regions. The considered description offers a general and unified framework, in which previously proposed mobility models like the gravity model, the intervening opportunities model, and the recently introduced radiation model naturally emerge as special cases. A new form of the radiation model is derived and its validity is investigated using observational data offered by commuting trips obtained from the United States census data set, and the mobility fluxes extracted from mobile phone data collected in a western European country. The new modeling paradigm offered by this description suggests that the complex topological features observed in large mobility and transportation networks may be the result of a simple stochastic process taking place on an inhomogeneous landscape.
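The radiation model mentioned in this abstract has a well-known parameter-free closed form (Simini et al.): the expected flux from origin i to destination j depends only on the origin population, the destination population, and the population living within the circle of radius r_ij around the origin. A minimal sketch (variable names are ours):

```python
def radiation_flux(T_i, m_i, n_j, s_ij):
    """Expected flux from origin i to destination j under the radiation model:

        T_ij = T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij))

    T_i  : total number of trips (e.g., commuters) leaving origin i
    m_i  : population of the origin
    n_j  : population of the destination
    s_ij : population within radius r_ij of i, excluding i and j themselves
    """
    return T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij))
```

Note that distance enters only through the intervening population s_ij; there is no fitted exponent or distance-decay constant, which is what distinguishes it from the gravity model it generalizes away from.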
NASA Astrophysics Data System (ADS)
Ortega-Rodríguez, M.; Solís-Sánchez, H.; López-Barquero, V.; Matamoros-Alvarado, B.; Venegas-Li, A.
2014-06-01
We propose a simple toy model to explain the 2:3:6 quasi-periodic oscillation (QPO) structure in GRS 1915+105 and, more generally, the 2:3 QPO structure in XTE J1550-564, GRO J1655-40 and H1743-322. The model exploits the onset of subharmonics in the context of discoseismology. We suggest that the observed frequencies may be the consequence of a resonance between a fundamental g mode and an unobservable p wave. The results include the prediction that, as better data become available, a QPO with a frequency of twice the higher twin frequency and a large quality factor will be observed in twin peak sources, as it might already have been observed in the especially active GRS 1915+105.
A more powerful test based on ratio distribution for retention noninferiority hypothesis.
Deng, Ling; Chen, Gang
2013-03-11
Rothmann et al. (2003) proposed a method for the statistical inference of the fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined as a ratio of the new treatment effect versus the control effect in the context of a time to event endpoint. One of the major concerns using this method in the design of an NI trial is that with a limited sample size, the power of the study is usually very low. This can make an NI trial infeasible, particularly when using a time to event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under the null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption of Rothmann's test, that the observed control effect is always positive, that is, that the observed hazard ratio for placebo over the control is greater than 1, is no longer necessary. Without assuming equal variance under the null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced by using the proposed ratio test for a fraction retention NI hypothesis.
NASA Astrophysics Data System (ADS)
Faherty, Jacqueline; Cruz, Kelle; Rice, Emily; Gagne, Jonathan; Marley, Mark; Gizis, John
2018-05-01
Emerging as an important insight into cool-temperature atmospheric physics is evidence for a correlation between enhanced clouds and youth. With this Spitzer Cycle 14 large GO program, we propose to obtain qualifying evidence for this hypothesis using an age-calibrated sample of brown dwarf-exoplanet analogs recently discovered and characterized by team members. Using Spitzer's unparalleled ability to conduct uninterrupted, high-cadence observations over numerous hours, we will examine the periodic brightness variations at 3.5 microns, where clouds are thought to be most disruptive to emergent flux. Compared to older sources, theory predicts that younger or lower-surface gravity objects will have cooler brightness temperatures at 3.5 microns and larger peak-to-peak amplitude variations due to higher altitude, more turbulent clouds. Therefore we propose to obtain light curves for 26 sources that span L3-L8 spectral types (Teff 2500-1700 K), 20-130 Myr ages, and predicted 8-30 MJup masses. Comparing to the variability trends and statistics of field (3-5 Gyr) equivalents currently being monitored by Spitzer, we will have unequivocal evidence for (or against) the turbulent atmospheric nature of younger sources. Coupling this Spitzer dataset with the multitude of spectral information we have on each source, the light curves obtained through this proposal will form the definitive library of data for investigating atmosphere dynamics (rotation rates, winds, storms, changing cloud structures) in young giant exoplanets and brown dwarfs.
Optimality in Data Assimilation
NASA Astrophysics Data System (ADS)
Nearing, Grey; Yatheendradas, Soni
2016-04-01
It costs a lot more to develop and launch an earth-observing satellite than it does to build a data assimilation system. As such, we propose that it is important to understand the efficiency of our assimilation algorithms at extracting information from remote sensing retrievals. To address this, we propose that it is necessary to adopt a completely general definition of "optimality" that explicitly acknowledges all differences between the parametric constraints of our assimilation algorithm (e.g., Gaussianity, partial linearity, Markovian updates) and the true nature of the environmental system and observing system. In fact, it is not only possible, but incredibly straightforward, to measure the optimality (in this more general sense) of any data assimilation algorithm as applied to any intended model or natural system. We measure the information content of remote sensing data conditional on the fact that we are already running a model, and then measure the actual information extracted by data assimilation. The ratio of the two is an efficiency metric, and optimality is defined as occurring when the data assimilation algorithm is perfectly efficient at extracting information from the retrievals. We measure the information content of the remote sensing data in a way that, unlike triple collocation, does not rely on any a priori presumed relationship (e.g., linear) between the retrieval and the ground truth, but, like triple collocation, is insensitive to the spatial mismatch between point-based measurements and grid-scale retrievals. This theory and method are therefore suitable for use with both dense and sparse validation networks. Additionally, the method we propose is *constructive* in the sense that it provides guidance on how to improve data assimilation systems.
All data assimilation strategies can be reduced to approximations of Bayes' law, and we measure the fractions of total information loss that are due to individual assumptions or approximations in the prior (i.e., the model uncertainty distribution), and in the likelihood (i.e., the observation operator and observation uncertainty distribution). In this way, we can directly identify the parts of a data assimilation algorithm that contribute most to assimilation error in a way that (unlike traditional DA performance metrics) considers nonlinearity in the model and observation and non-optimality in the fit between filter assumptions and the real system. To reiterate, the method we propose is theoretically rigorous but also dead-to-rights simple, and can be implemented in no more than a few hours by a competent programmer. We use this to show that careful applications of the Ensemble Kalman Filter use substantially less than half of the information contained in remote sensing soil moisture retrievals (LPRM, AMSR-E, SMOS, and SMOPS). We propose that this finding may explain some of the results from several recent large-scale experiments that show lower-than-expected value to assimilating soil moisture retrievals into land surface models forced by high-quality precipitation data. Our results have important implications for the SMAP mission because over half of the SMAP-affiliated "early adopters" plan to use the EnKF as their primary method for extracting information from SMAP retrievals.
A Generalized Quantum-Inspired Decision Making Model for Intelligent Agent
Loo, Chu Kiong
2014-01-01
A novel decision-making model for intelligent agents using a quantum-inspired approach is proposed. A formal, generalized solution to the problem is given. Mathematically, the proposed model is capable of modeling higher dimensional decision problems than previous research. Four experiments are conducted, and both the empirical results and the proposed model's results are given for each experiment. The experiments showed that the results of the proposed model agree with the empirical results perfectly. The proposed model provides a new direction for researchers addressing the cognitive basis of intelligent agent design. PMID:24778580
Simplified Interval Observer Scheme: A New Approach for Fault Diagnosis in Instruments
Martínez-Sibaja, Albino; Astorga-Zaragoza, Carlos M.; Alvarado-Lassman, Alejandro; Posada-Gómez, Rubén; Aguila-Rodríguez, Gerardo; Rodríguez-Jarquin, José P.; Adam-Medina, Manuel
2011-01-01
There are different schemes based on observers to detect and isolate faults in dynamic processes. In the case of fault diagnosis in instruments (FDI) there are different diagnosis schemes based on the number of observers: the Simplified Observer Scheme (SOS) requires only one observer, uses all the inputs and only one output, and detects faults in one sensor; the Dedicated Observer Scheme (DOS) again uses all the inputs and just one output per observer, but employs a bank of observers capable of locating multiple faults in sensors; and the Generalized Observer Scheme (GOS) involves a reduced bank of observers, where each observer uses all the inputs and m-1 outputs, and allows the localization of unique faults. This work proposes a new scheme named the Simplified Interval Observer (SIOS-FDI), which does not require the measurement of any input and, with just one output, allows the detection of unique faults in sensors. Because it does not require any input, it greatly simplifies fault diagnosis in processes in which it is difficult to measure all the inputs, as in the case of biological reactors. PMID:22346593
NASA Astrophysics Data System (ADS)
He, Xin
2017-03-01
The ideal observer is widely used in imaging system optimization. One practical question remains open: do the ideal and human observers have the same preferences in system optimization and evaluation? Based on the ideal observer's mathematical properties proposed by Barrett et al. and the empirical properties of human observers investigated by Myers et al., I attempt to pursue general rules regarding the applicability of the ideal observer in system optimization. In particular, in software optimization, the ideal observer pursues data conservation while humans pursue data presentation or perception. In hardware optimization, the ideal observer pursues a system with the maximum total information, while humans pursue a system with the maximum selected (e.g., certain frequency bands) information. These different objectives may result in different system optimizations between human and ideal observers. Thus, an ideal observer optimized system is not necessarily optimal for humans. I cite empirical evidence in search and detection tasks, in hardware and software evaluation, in X-ray CT, pinhole imaging, as well as emission computed tomography to corroborate the claims. (Disclaimer: the views expressed in this work do not necessarily represent those of the FDA)
Automatic movie skimming with general tempo analysis
NASA Astrophysics Data System (ADS)
Lee, Shih-Hung; Yeh, Chia-Hung; Kuo, C. C. J.
2003-11-01
In this research, story units are extracted by general tempo analysis, including the tempos of audio and visual information. Although many schemes have been proposed to successfully segment video data into shots using basic low-level features, how to group shots into meaningful units called story units is still a challenging problem. By focusing on a certain type of video such as sports or news, we can explore models with specific application domain knowledge. For movie content, many heuristic rules based on audiovisual clues have been proposed with limited success. We propose a method to extract story units using general tempo analysis. Experimental results are given to demonstrate the feasibility and efficiency of the proposed technique.
Seok, Junhee; Seon Kang, Yeong
2015-01-01
Mutual information, a general measure of the relatedness between two random variables, has been actively used in the analysis of biomedical data. The mutual information between two discrete variables is conventionally calculated by their joint probabilities estimated from the frequency of observed samples in each combination of variable categories. However, this conventional approach is no longer efficient for discrete variables with many categories, which can easily be found in large-scale biomedical data such as diagnosis codes, drug compounds, and genotypes. Here, we propose a method to provide stable estimations for the mutual information between discrete variables with many categories. Simulation studies showed that the proposed method reduced the estimation errors 45-fold and improved the correlation coefficients with the true values 99-fold, compared with the conventional calculation of mutual information. The proposed method was also demonstrated through a case study of diagnostic data in electronic health records. This method is expected to be useful in the analysis of various biomedical data with discrete variables. PMID:26046461
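For concreteness, the conventional plug-in estimator this abstract improves upon can be sketched as follows (a generic textbook illustration, not the authors' code): mutual information is computed directly from the empirical joint and marginal frequencies.

```python
from collections import Counter
from math import log

def plugin_mi(xs, ys):
    """Plug-in estimate (in nats) of the mutual information between two
    paired discrete samples, using empirical joint/marginal frequencies:
        I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) * p(y)) )
    """
    n = len(xs)
    pxy = Counter(zip(xs, ys))          # joint counts
    px, py = Counter(xs), Counter(ys)   # marginal counts
    # p(x,y)/(p(x)p(y)) = (c/n) / ((px/n)*(py/n)) = c*n / (px*py)
    return sum((c / n) * log((c * n) / (px[x] * py[y]))
               for (x, y), c in pxy.items())
```

With many categories most joint cells hold zero or one sample, so these frequency estimates, and hence the MI estimate, become unstable; that sparsity is the failure mode the proposed method addresses.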
Fractional Poisson-Nernst-Planck Model for Ion Channels I: Basic Formulations and Algorithms.
Chen, Duan
2017-11-01
In this work, we propose a fractional Poisson-Nernst-Planck model to describe ion permeation in gated ion channels. Due to the intrinsic conformational changes, crowdedness in narrow channel pores, and binding and trapping introduced by functioning units of channel proteins, ionic transport in the channel exhibits power-law-like anomalous diffusion dynamics. We start from a continuous-time random walk model for a single ion and use a long-tailed density distribution function for the particle jump waiting time to derive the fractional Fokker-Planck equation. It is then generalized to the macroscopic fractional Poisson-Nernst-Planck model for ionic concentrations. Necessary computational algorithms are designed to implement numerical simulations for the proposed model, and the dynamics of the gating current is investigated. Numerical simulations show that the fractional PNP model provides a qualitatively more reasonable match to the profile of gating currents from experimental observations. Meanwhile, the proposed model motivates new challenges in terms of mathematical modeling and computations.
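The abstract does not give the discretization, but fractional Fokker-Planck/PNP equations of this kind are commonly discretized with Grünwald-Letnikov weights w_k = (-1)^k C(α, k), computable by a simple recurrence. The sketch below illustrates that standard ingredient only; it is not the authors' algorithm:

```python
def gl_weights(alpha, n):
    """First n Grünwald-Letnikov weights w_k = (-1)**k * binom(alpha, k),
    used to approximate a fractional derivative of order alpha on a grid
    of spacing h:
        D^alpha f(x) ~ h**(-alpha) * sum_k w_k * f(x - k*h)
    Recurrence: w_0 = 1,  w_k = w_{k-1} * (1 - (alpha + 1) / k).
    """
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w
```

For integer α the weights reduce to the signed binomial coefficients of the classical finite difference (e.g., [1, -2, 1] for α = 2), so the scheme degenerates gracefully to ordinary diffusion.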
A novel method for unsteady flow field segmentation based on stochastic similarity of direction
NASA Astrophysics Data System (ADS)
Omata, Noriyasu; Shirayama, Susumu
2018-04-01
Recent developments in fluid dynamics research have opened up the possibility for the detailed quantitative understanding of unsteady flow fields. However, the visualization techniques currently in use generally provide only qualitative insights. A method for dividing the flow field into physically relevant regions of interest can help researchers quantify unsteady fluid behaviors. Most methods at present compare the trajectories of virtual Lagrangian particles. The time-invariant features of an unsteady flow are also frequently of interest, but the Lagrangian specification only reveals time-variant features. To address these challenges, we propose a novel method for the time-invariant spatial segmentation of an unsteady flow field. This segmentation method does not require Lagrangian particle tracking but instead quantitatively compares the stochastic models of the direction of the flow at each observed point. The proposed method is validated with several clustering tests for 3D flows past a sphere. Results show that the proposed method reveals the time-invariant, physically relevant structures of an unsteady flow.
A novel application of artificial neural network for wind speed estimation
NASA Astrophysics Data System (ADS)
Fang, Da; Wang, Jianzhou
2017-05-01
Providing accurate multi-step wind speed estimation models is of increasing significance because of the important technical and economic impacts of wind speed on power grid security and environmental benefits. In this study, combined strategies for wind speed forecasting are proposed based on an intelligent data processing system using artificial neural networks (ANNs). A generalized regression neural network and an Elman neural network are employed to form two hybrid models. The approach uses one ANN to model the samples, achieving data denoising and assimilation, and applies the other to predict wind speed from the pre-processed samples. The proposed method is demonstrated in terms of the predictive improvements of the hybrid models over a single ANN and a typical forecasting method. To give sufficient cases for the study, four observation sites with monthly average wind speeds over four given years in Western China were used to test the models. Multiple evaluation methods demonstrated that the proposed method provides a promising alternative technique for monthly average wind speed estimation.
NASA Astrophysics Data System (ADS)
Sisodia, Mitali; Shukla, Abhishek; Thapliyal, Kishore; Pathak, Anirban
2017-12-01
An explicit scheme (quantum circuit) is designed for the teleportation of an n-qubit quantum state. It is established that the proposed scheme requires an optimal amount of quantum resources, whereas larger amounts of quantum resources have been used in a large number of recently reported teleportation schemes for quantum states that can be viewed as special cases of the general n-qubit state considered here. A trade-off between our knowledge about the quantum state to be teleported and the amount of quantum resources required for the task is observed. A proof-of-principle experimental realization of the proposed scheme (for a 2-qubit state) is also performed using the 5-qubit superconductivity-based IBM quantum computer. The experimental results show that the state has been teleported with high fidelity. The relevance of the proposed teleportation scheme is also discussed in the context of controlled, bidirectional, and bidirectional controlled state teleportation.
Proposal of a micromagnetic standard problem for ferromagnetic resonance simulations
NASA Astrophysics Data System (ADS)
Baker, Alexander; Beg, Marijan; Ashton, Gregory; Albert, Maximilian; Chernyshenko, Dmitri; Wang, Weiwei; Zhang, Shilei; Bisotti, Marc-Antonio; Franchin, Matteo; Hu, Chun Lian; Stamps, Robert; Hesjedal, Thorsten; Fangohr, Hans
2017-01-01
Nowadays, micromagnetic simulations are a common tool for studying a wide range of magnetic phenomena, including ferromagnetic resonance. A technique for evaluating the reliability and validity of different micromagnetic simulation tools is the simulation of proposed standard problems. We propose a new standard problem by providing a detailed specification and analysis of a sufficiently simple problem. By analyzing the magnetization dynamics in a thin permalloy square sample, triggered by a well-defined excitation, we obtain the ferromagnetic resonance spectrum and identify the resonance modes via Fourier transform. Simulations are performed using both finite-difference and finite-element numerical methods, with the OOMMF and Nmag simulators, respectively. We report the effects of initial conditions and simulation parameters on the character of the observed resonance modes for this standard problem. We provide detailed instructions and code to assist in using the results for the evaluation of new simulator tools, and to help with the numerical calculation of ferromagnetic resonance spectra and modes in general.
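The analysis pipeline described (excite the sample, record the magnetization ringdown, and Fourier transform it to locate resonance modes) can be mimicked on a synthetic signal. The frequency and damping below are arbitrary stand-ins for simulator output, and a real analysis would use an FFT library rather than this naive transform:

```python
import math

def ringdown(f0, damping, dt, n):
    """Synthetic magnetization ringdown: damped oscillation at frequency f0."""
    return [math.exp(-damping * k * dt) * math.cos(2 * math.pi * f0 * k * dt)
            for k in range(n)]

def power_spectrum(signal, dt):
    """Naive discrete Fourier transform; returns (frequency, power) pairs."""
    n = len(signal)
    spec = []
    for m in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * m * k / n) for k, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * m * k / n) for k, s in enumerate(signal))
        spec.append((m / (n * dt), re * re + im * im))
    return spec

sig = ringdown(f0=8.0, damping=0.5, dt=0.01, n=512)
peak_freq = max(power_spectrum(sig, dt=0.01), key=lambda p: p[1])[0]
# peak_freq recovers approximately the 8.0 excitation frequency
```

In the standard problem itself, the time series would come from OOMMF or Nmag output, and the spectrum would typically show several modes rather than a single damped line.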
Dual gait generative models for human motion estimation from a single camera.
Zhang, Xin; Fan, Guoliang
2010-08-01
This paper presents a general gait representation framework for video-based human motion estimation. Specifically, we want to estimate the kinematics of an unknown gait from image sequences taken by a single camera. This approach involves two generative models, called the kinematic gait generative model (KGGM) and the visual gait generative model (VGGM), which represent the kinematics and appearances of a gait by a few latent variables, respectively. The concept of gait manifold is proposed to capture the gait variability among different individuals by which KGGM and VGGM can be integrated together, so that a new gait with unknown kinematics can be inferred from gait appearances via KGGM and VGGM. Moreover, a new particle-filtering algorithm is proposed for dynamic gait estimation, which is embedded with a segmental jump-diffusion Markov Chain Monte Carlo scheme to accommodate the gait variability in a long observed sequence. The proposed algorithm is trained from the Carnegie Mellon University (CMU) Mocap data and tested on the Brown University HumanEva data with promising results.
Seven-quasiparticle bands in ¹³⁹Ce
NASA Astrophysics Data System (ADS)
Chanda, Somen; Bhattacharjee, Tumpa; Bhattacharyya, Sarmishtha; Mukherjee, Anjali; Basu, Swapan Kumar; Ragnarsson, I.; Bhowmik, R. K.; Muralithar, S.; Singh, R. P.; Ghugre, S. S.; Pramanik, U. Datta
2009-05-01
The high-spin states in the ¹³⁹Ce nucleus have been studied by in-beam γ-spectroscopic techniques using the reaction ¹³⁰Te(¹²C,3n)¹³⁹Ce at a beam energy of 65 MeV. A gamma detector array consisting of five Compton-suppressed Clover detectors was used for coincidence measurements. Fifteen new levels have been proposed and 28 new γ transitions have been assigned to ¹³⁹Ce on the basis of γγ coincidence data. The level scheme of ¹³⁹Ce has been extended above the known 70 ns 19/2⁻ isomer up to ~6.1 MeV in excitation energy and 35/2 ℏ in spin. The spin-parity assignments for most of the newly proposed levels have been made using the deduced Directional Correlation from Oriented states of nuclei (DCO) ratio and the Polarization Directional Correlation from Oriented states (PDCO) ratio for the de-exciting transitions. The observed level structure has been compared with a large-basis shell model calculation and also with the predictions of cranked Nilsson-Strutinsky (CNS) calculations. A general consistency has been observed between these two different theoretical approaches.
Density of transneptunian object 229762 2007 UK126
NASA Astrophysics Data System (ADS)
Grundy, Will
2017-08-01
Densities provide unique information about bulk composition and interior structure and are key to going beyond the skin-deep view offered by remote-sensing techniques based on photometry, spectroscopy, and polarimetry. They are known for a handful of the relict planetesimals that populate our Solar System's Kuiper belt, revealing intriguing differences between small and large bodies. More and better quality data are needed to address fundamental questions about how planetesimals form from nebular solids, and how distinct materials are distributed through the nebula. Masses from binary orbits are generally quite precise, but a problem afflicting many of the known densities is that they depend on size estimates from thermal emission observations, with large model-dependent uncertainties that dominate the error bars on density estimates. Stellar occultations can provide much more accurate sizes and thus densities, but they depend on fortuitous geometry and thus can only be done for a few particularly valuable binaries. We propose observations of a system where an accurate density can be determined: 229762 2007 UK126. An accurate size is already available from multiple stellar occultation chords. This proposal will determine the mass, and thus the density.
Dependence of marine stratocumulus reflectivities on liquid water paths
NASA Technical Reports Server (NTRS)
Coakley, James A., Jr.; Snider, Jack B.
1990-01-01
Simple parameterizations that relate cloud liquid water content to cloud reflectivity are often used in general circulation climate models to calculate the effect of clouds on the earth's energy budget. Such parameterizations have been developed by Stephens (1978), Slingo and Schrecker (1982), and others. Here researchers seek to verify the parametric relationship through the use of simultaneous observations of cloud liquid water content and cloud reflectivity. The column amount of cloud liquid was measured using a microwave radiometer on San Nicolas Island, following techniques described by Hogg et al. (1983). Cloud reflectivity was obtained through spatial coherence analysis of Advanced Very High Resolution Radiometer (AVHRR) imagery data (Coakley and Beckner, 1988). They present the dependence of the observed reflectivity on the observed liquid water path and compare this empirical relationship with that proposed by Stephens (1978). Researchers found that by taking clouds to be isotropic reflectors, the observed reflectivities and observed column amounts of cloud liquid water are related in a manner consistent with the simple parameterizations often used in general circulation climate models to determine the effect of clouds on the earth's radiation budget. Attempts to use the results of radiative transfer calculations to correct for the anisotropy of the AVHRR-derived reflectivities resulted in greater scatter of the points about the expected relationship between liquid water path and reflectivity. The anisotropy of the observed reflectivities proved to be small, much smaller than indicated by theory. To critically assess parameterizations, more simultaneous observations of cloud liquid water and cloud reflectivities and better calibration of the AVHRR sensors are needed.
NASA Technical Reports Server (NTRS)
Wittenberger, J. D.; Behrendt, D. R.
1973-01-01
Diffusional creep in a polycrystalline alloy containing second-phase particles can disrupt the particle morphology. For alloys which depend on the particle distribution for strength, changes in the particle morphology can affect the mechanical properties. Recent observations of diffusional creep in alloys containing soluble particles (gamma-prime strengthened Ni base alloys) and inert particles have been reexamined in light of the basic mechanisms of diffusional creep, and a generalized model of this effect is proposed. The model indicates that diffusional creep will generally result in particle-free regions in the vicinity of grain boundaries serving as net vacancy sources. The factors which control the changes in second-phase morphology have been identified, and methods of reducing the effects of diffusional creep are suggested.
Determining the properties of accretion-gap neutron stars
NASA Technical Reports Server (NTRS)
Kluzniak, Wlodzimierz; Michelson, Peter; Wagoner, Robert V.
1990-01-01
If neutron stars have radii as small as has been argued by some, observations of accretion-powered X-rays could verify the existence of innermost stable circular orbits (predicted by general relativity) around weakly magnetized neutron stars. This may be done by detecting X-ray emission from clumps of matter before and after they cross the gap (where matter cannot be supported by rotation) between the inner accretion disk and the stellar surface. Assuming the validity of general relativity, it would then be possible to determine the masses of such neutron stars independently of any knowledge of binary orbital parameters. If an accurate mass determination were already available through any of the methods conventionally used, the new mass determination method proposed here could then be used to quantitatively test strong field effects of gravitational theory.
Data Assimilation as a Tool for Developing a Mars International Reference Atmosphere
NASA Technical Reports Server (NTRS)
Houben, Howard
2005-01-01
A new paradigm for a Mars International Reference Atmosphere is proposed. In general, as is certainly now the case for Mars, there are sufficient observational data to specify what the full atmospheric state was under a variety of circumstances (season, dustiness, etc.). There are also general circulation models capable of determining the evolution of these states. If these capabilities are combined, using data assimilation techniques, the resulting analyzed states can be probed to answer a wide variety of questions, whether posed by scientists, mission planners, or others. This system would fulfill all the purposes of an international reference atmosphere and would make the scientific results of exploration missions readily available to the community. Preliminary work on a website that would incorporate this functionality has begun.
Explaining brain size variation: from social to cultural brain.
van Schaik, Carel P; Isler, Karin; Burkart, Judith M
2012-05-01
Although the social brain hypothesis has found near-universal acceptance as the best explanation for the evolution of extensive variation in brain size among mammals, it faces two problems. First, it cannot account for grade shifts, where species or complete lineages have a very different brain size than expected based on their social organization. Second, it cannot account for the observation that species with high socio-cognitive abilities also excel in general cognition. These problems may be related. For birds and mammals, we propose to integrate the social brain hypothesis into a broader framework we call cultural intelligence, which stresses the importance of the high costs of brain tissue, general behavioral flexibility and the role of social learning in acquiring cognitive skills.
Implementation of nursing conceptual models: observations of a multi-site research team.
Shea, H; Rogers, M; Ross, E; Tucker, D; Fitch, M; Smith, I
1989-01-01
The general acceptance by nursing of the nursing process as the methodology of practice enabled nurses to have a common grounding for practice, research and theory development in the 1970s. It has become clear, however, that the nursing process is just that--a process. What is sorely needed is the nursing content for that process, and consequently in the past 10 years nursing theorists have further developed their particular conceptual models (CM). Three major teaching hospitals in Toronto have instituted a CM of nursing as a basis of nursing practice. Mount Sinai Hospital has adopted Roy's adaptation model; Sunnybrook Medical Centre, King's goal attainment model; and Toronto General Hospital, Orem's self-care deficit theory model. All of these hospitals are affiliated through a series of cross appointments with the Faculty of Nursing at the University of Toronto. Two community hospitals, Mississauga and Scarborough General, have also adopted Orem's model and are related to the University through educational, community and interest groups. A group of researchers from these hospitals and the University of Toronto have proposed a collaborative project to determine what impact using a conceptual model will make on nursing practice. Discussions among the participants of this research group indicate that there are observations associated with instituting conceptual models that can be identified early in the process of implementation. These observations may be of assistance to others contemplating the implementation of conceptually based practice in their institution.
General form of a cooperative gradual maximal covering location problem
NASA Astrophysics Data System (ADS)
Bagherinejad, Jafar; Bashiri, Mahdi; Nikzad, Hamideh
2018-07-01
Cooperative and gradual covering are two new methods for developing covering location models. In this paper, a cooperative maximal covering location-allocation problem (CMCLAP) is developed. In addition, both cooperative and gradual covering concepts are applied to the maximal covering location problem simultaneously (CGMCLP). We then develop an integrated form of the cooperative gradual maximal covering location problem, called the general CGMCLP. By setting the model parameters, the proposed general model can easily be transformed into other existing models, facilitating general comparisons. The proposed models are developed without allocation for physical signals and with allocation for non-physical signals in discrete location space. Comparison of the previously introduced gradual maximal covering location problem (GMCLP) and cooperative maximal covering location problem (CMCLP) models with the proposed CGMCLP model on similar data sets shows that the proposed model can cover more demand and acts more efficiently. Sensitivity analyses are performed to show the effect of the related parameters and the model's validity. Simulated annealing (SA) and tabu search (TS) are proposed as solution algorithms for the developed models for large-sized instances. The results show that the proposed algorithms are efficient solution approaches, considering solution quality and running time.
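As a hedged illustration of how simulated annealing can search a maximal covering instance, the sketch below solves the plain MCLP (choose p sites to maximize covered demand within a radius); the paper's models add cooperative and gradual coverage on top of such a skeleton, and the instance here is invented:

```python
import math
import random

def covered_demand(sites, demands, radius):
    """Total demand weight within `radius` of at least one open site."""
    total = 0.0
    for dx, dy, w in demands:
        if any(math.hypot(dx - sx, dy - sy) <= radius for sx, sy in sites):
            total += w
    return total

def anneal_mclp(candidates, demands, p, radius, steps=2000, seed=1):
    """Simulated annealing over p-subsets of candidate sites using swap moves."""
    rng = random.Random(seed)
    current = rng.sample(candidates, p)
    f_cur = covered_demand(current, demands, radius)
    best, f_best = list(current), f_cur
    for step in range(steps):
        temp = (1.0 - step / steps) + 1e-9   # simple linear cooling schedule
        # neighbour: swap one open site for a closed candidate
        out = rng.randrange(p)
        inn = rng.choice([c for c in candidates if c not in current])
        cand = list(current)
        cand[out] = inn
        f_new = covered_demand(cand, demands, radius)
        # accept improvements always, worsenings with Metropolis probability
        if f_new >= f_cur or rng.random() < math.exp((f_new - f_cur) / temp):
            current, f_cur = cand, f_new
            if f_cur > f_best:
                best, f_best = list(current), f_cur
    return best, f_best
```

A tabu search variant would replace the temperature-based acceptance with a short-term memory of recently swapped sites; both are standard metaheuristics for large instances of this problem family.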
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi-chune, Y.; Liu, L.
The evaporation, heating, and burning of single coal-water slurry (CWS) droplets are studied. The coal selected in this study is Pittsburgh Seam number 8 coal, a medium-volatile caking bituminous coal. The droplet is suspended on a microthermocouple and exposed to a hot gas stream. Temperature measurement and microscopic observation are performed in the parametric studies. The duration of water evaporation in CWS droplets decreases with reduction of the droplet size, increase of the coal weight fraction, and increase of the gas temperature and velocity. The duration of heat-up is always significant due to agglomeration. The CWS droplets are generally observed to swell like popcorn during heating. A model for the formation of the popped swelling is proposed and discussed.
NASA Astrophysics Data System (ADS)
Hassan Asemani, Mohammad; Johari Majd, Vahid
2015-12-01
This paper addresses a robust H∞ fuzzy observer-based tracking design problem for uncertain Takagi-Sugeno fuzzy systems with external disturbances. To have a practical observer-based controller, the premise variables of the system are assumed not to be measurable in general, which leads to a more complex design process. The tracker is synthesised based on a fuzzy Lyapunov function approach and a non-parallel distributed compensation (non-PDC) scheme. Using the descriptor redundancy approach, the robust stability conditions are derived in the form of strict linear matrix inequalities (LMIs) even in the presence of simultaneous uncertainties in the system, input, and output matrices. Numerical simulations are provided to show the effectiveness of the proposed method.
A Chandra Snapshot Survey of Extremely Red Quasars from SDSS BOSS and WISE
NASA Astrophysics Data System (ADS)
Garmire, Gordon
2017-09-01
We propose Chandra snapshot observations of a sample of 15 extremely red and highly luminous quasars at z > 2. These Type 1 objects have recently been discovered via the SDSS BOSS and WISE surveys, and they are among the most luminous quasars in the Universe. They appear to be part of the missing evolutionary link as merger-induced starburst galaxies transform into typical ultraviolet-luminous quasars. Our aim is to efficiently gather X-ray information about a sufficiently large sample of these objects so that general conclusions about their basic X-ray properties, especially obscuration level and luminosity, can be drawn reliably. The results will also allow effective targeting of promising objects in longer X-ray spectroscopic observations.
2013-01-01
We discuss the hypothesis proposed by Engstrom and coworkers that migraineurs have a relative sleep deprivation, which lowers the pain threshold and predisposes to attacks. Previous data indicate that migraineurs have a reduction of Cyclic Alternating Pattern (CAP), an essential mechanism of NREM sleep regulation that damps the effect of incoming disruptive stimuli and protects sleep. The modifications of CAP observed in migraineurs are similar to those observed in patients with impaired arousal (narcolepsy) and after sleep deprivation. The impairment of this mechanism makes migraineurs more vulnerable to stimuli triggering attacks during sleep, and represents part of a more general vulnerability to incoming stimuli. PMID:23758606
Changes in crime rates and family-related values in selected East European countries.
Krus, D J; Nelsen, E A; Webb, J M
1997-12-01
Observations and longitudinal comparisons of emerging trends within formerly socialist East European countries offer a unique opportunity to observe some of the social policies typical of the capitalist and socialist systems and their influence on society. Some of the emerging trends in the Czech Republic, former East Germany, and Russia pertaining to general areas of public health, family, and crime are described. Effects of these changes are discussed within the framework of a recently proposed multiple regression model of criminal behavior in which criminality is attributed to the confluence of gross inequalities in the distribution of wealth and to the disintegration of the traditional family. The changes should be considered in the conduct of research.
Utility-preserving anonymization for health data publishing.
Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn
2017-07-11
Publishing raw electronic health records (EHRs) may be considered a breach of the privacy of individuals because they usually contain sensitive information. A common practice for privacy-preserving data publishing is to anonymize the data before publishing so as to satisfy privacy models such as k-anonymity. Among the various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and various methods have therefore been proposed to reduce it. However, existing generalization-based data anonymization methods cannot avoid excessive information loss and thus fail to preserve data utility. We propose a utility-preserving anonymization for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method, which applies a full-domain generalization algorithm. We evaluate our method against an existing method on two aspects: information loss, measured through various quality metrics, and the error rate of analysis results. Across all quality metrics, our proposed method shows lower information loss than the existing method. In a real-world EHR analysis, the results show only a small error between the data anonymized by the proposed method and the original data. Through experiments on various datasets, we show that the utility of EHRs anonymized by the proposed method is significantly better than that of EHRs anonymized by previous approaches.
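The baseline the paper builds on, full-domain generalization until k-anonymity holds, can be sketched as follows. The hierarchies and records are invented for illustration, and the paper's actual contribution (counterfeit-record insertion with a catalog) is not shown:

```python
from collections import Counter

# Generalization hierarchies for two quasi-identifiers (hypothetical example).
AGE_LEVELS = [
    lambda a: str(a),                                    # level 0: exact age
    lambda a: f"{(a // 10) * 10}-{(a // 10) * 10 + 9}",  # level 1: decade
    lambda a: "*",                                       # level 2: suppressed
]
ZIP_LEVELS = [
    lambda z: z,             # level 0: full ZIP code
    lambda z: z[:3] + "**",  # level 1: ZIP prefix
    lambda z: "*****",       # level 2: suppressed
]

def generalize(records, age_lvl, zip_lvl):
    """Apply one (age, zip) level to every record: full-domain generalization."""
    return [(AGE_LEVELS[age_lvl](age), ZIP_LEVELS[zip_lvl](zipc), diag)
            for age, zipc, diag in records]

def is_k_anonymous(records, k):
    """Every quasi-identifier combination must occur at least k times."""
    groups = Counter((age, zipc) for age, zipc, _ in records)
    return all(count >= k for count in groups.values())

def anonymize(records, k):
    """Least total generalization (by summed level) achieving k-anonymity."""
    for a, z in sorted(((a, z) for a in range(3) for z in range(3)), key=sum):
        table = generalize(records, a, z)
        if is_k_anonymous(table, k):
            return table, (a, z)
    raise ValueError("no generalization achieves k-anonymity")
```

Every step up a hierarchy loses information, which is exactly the utility problem the abstract sets out to mitigate.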
Particle Filter with State Permutations for Solving Image Jigsaw Puzzles
Yang, Xingwei; Adluru, Nagesh; Latecki, Longin Jan
2016-01-01
We deal with an image jigsaw puzzle problem, which is defined as reconstructing an image from a set of square and non-overlapping image patches. It is known that a general instance of this problem is NP-complete, and it is also challenging for humans, since in the considered setting the original image is not given. Recently a graphical model has been proposed to solve this and related problems. The target label probability function is then maximized using loopy belief propagation. We also formulate the problem as maximizing a label probability function and use exactly the same pairwise potentials. Our main contribution is a novel inference approach in the sampling framework of Particle Filter (PF). Usually in the PF framework it is assumed that the observations arrive sequentially, e.g., the observations are naturally ordered by their time stamps in the tracking scenario. Based on this assumption, the posterior density over the corresponding hidden states is estimated. In the jigsaw puzzle problem all observations (puzzle pieces) are given at once without any particular order. Therefore, we relax the assumption of having ordered observations and extend the PF framework to estimate the posterior density by exploring different orders of observations and selecting the most informative permutations of observations. This significantly broadens the scope of applications of the PF inference. Our experimental results demonstrate that the proposed inference framework significantly outperforms the loopy belief propagation in solving the image jigsaw puzzle problem. In particular, the extended PF inference triples the accuracy of the label assignment compared to that using loopy belief propagation. PMID:27795660
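For readers unfamiliar with the baseline, a minimal bootstrap particle filter for an ordered observation sequence looks like the sketch below; the paper's contribution is to relax exactly this ordering assumption by searching over observation permutations, an extension not shown here. The state model (1-D random walk) is a toy stand-in:

```python
import math
import random

def particle_filter(observations, n_particles=500, proc_std=1.0, obs_std=1.0, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state observed with noise."""
    rng = random.Random(seed)
    particles = [0.0] * n_particles
    estimates = []
    for y in observations:
        # 1. propagate each particle through the motion model
        particles = [x + rng.gauss(0.0, proc_std) for x in particles]
        # 2. weight each particle by the observation likelihood
        weights = [math.exp(-0.5 * ((y - x) / obs_std) ** 2) for x in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # 3. posterior-mean estimate, then multinomial resampling
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

In the jigsaw setting, the "state" is a label assignment rather than a scalar, and the filter additionally chooses which puzzle piece to treat as the next "observation", which is what broadens the scope of PF inference.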
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1992-01-01
The present treatment of elliptic regions via hyperbolic flux-splitting and high-order methods proposes a flux splitting in which the corresponding Jacobians have real and positive/negative eigenvalues. While resembling the flux splitting used in hyperbolic systems, the present generalization of such splitting to elliptic regions allows mixed-type systems to be handled in a unified and heuristically stable fashion. The van der Waals equation of fluid dynamics is used. Convergence with good resolution to weak solutions of various Riemann problems is observed.
Inflationary cosmology: First 30+ years
NASA Astrophysics Data System (ADS)
Sato, Katsuhiko; Yokoyama, Jun'ichi
2015-08-01
Starting with an account of historical developments in Japan and Russia, we review inflationary cosmology and its basic predictions in a pedagogical manner. We also introduce the generalized G-inflation model, in terms of which all the known single-field inflation models may be described. This formalism allows us to analyze and compare the many inflationary models that have been proposed simultaneously and within a common framework. Finally, current observational constraints on inflation are reviewed, with particular emphasis on the sensitivity of the inferred constraints to the choice of datasets used.
Effects of motion on jet exhaust noise from aircraft
NASA Technical Reports Server (NTRS)
Chun, K. S.; Berman, C. H.; Cowan, S. J.
1976-01-01
The various problems involved in the evaluation of the jet noise field prevailing between an observer on the ground and an aircraft in flight in a typical takeoff or landing approach pattern were studied. Areas examined include: (1) literature survey and preliminary investigation, (2) propagation effects, (3) source alteration effects, and (4) investigation of verification techniques. Sixteen problem areas were identified and studied. Six follow-up programs were recommended for further work. The results and the proposed follow-on programs provide a practical general technique for predicting flyover jet noise for conventional jet nozzles.
Life prediction of thermal-mechanical fatigue using strainrange partitioning
NASA Technical Reports Server (NTRS)
Halford, G. R.; Manson, S. S.
1975-01-01
This paper describes the applicability of the method of Strainrange Partitioning to the life prediction of thermal-mechanical strain-cycling fatigue. An in-phase test on 316 stainless steel is analyzed as an illustrative example. The observed life is in excellent agreement with the life predicted by the method using the recently proposed Step-Stress Method of experimental partitioning, the Interaction Damage Rule, and the life relationships determined at an isothermal temperature of 705 C. Implications of the present study are discussed relative to the general thermal fatigue problem.
Life prediction of thermal-mechanical fatigue using strain-range partitioning
NASA Technical Reports Server (NTRS)
Halford, G. R.; Manson, S. S.
1975-01-01
The applicability of the method of Strainrange Partitioning to the life prediction of thermal-mechanical strain-cycling fatigue is described. An in-phase test on 316 stainless steel is analyzed as an illustrative example. The observed life is in excellent agreement with the life predicted by the method using the recently proposed Step-Stress Method of experimental partitioning, the Interaction Damage Rule, and the life relationships determined at an isothermal temperature of 705 C. Implications of the study are discussed relative to the general thermal fatigue problem.
Funding of Geosciences: Coordinating National and International Resources
NASA Astrophysics Data System (ADS)
Bye, B.; Fontaine, K. S.
2012-12-01
Funding is an important element of national and international policy for Earth observations. The Group on Earth Observations (GEO) is coordinating efforts to build a Global Earth Observation System of Systems, or GEOSS. The lack of dedicated funding to support specific S&T activities in support of GEOSS is one of the most important obstacles to engaging the S&T communities in its implementation. This problem can be addressed by establishing explicit linkages between GEOSS and the research and development programmes funded by GEO Members and Participating Organizations. In appropriate funding programs, these links may take the form of requiring explanations of how projects to be funded will interface with GEOSS and ensuring that demonstrating significant relevance for GEOSS is viewed as an asset of these proposals, requiring registration of Earth observing systems developed in these projects, or stipulating that data and products must adhere to the GEOSS Data Sharing Principles. Examples of Earth observations include: measurements from ground-based, in situ monitors; observations from Earth satellites; products and predictive capabilities from Earth system models, often using the capabilities of high-performance computers; scientific knowledge about the Earth system; and data visualization techniques. These Earth observation activities require different types of resources: top-down and bottom-up R&D funding and programs of various sizes. Where innovation and infrastructure are involved, different kinds of resources are better suited; for developing countries, entirely different sources of funding apply. The European Commission-funded Egida project is coordinating the development of a funding mechanism based on current national and international funding instruments such as the European ERANet, the new Joint Programming Initiatives, and ESFRI, as well as other European and non-European instruments.
A general introduction to various strategies and funding instruments at the international and regional levels will be presented, together with a proposed first step of a particular funding mechanism for both the implementation and sustained operation of GEOSS. Resources and capacity building are an integral part of national science policy making and an important element of its implementation in societal applications such as disaster management and natural resource management. In particular, funding instruments have to be in place to facilitate free, open, authoritative sources of quality data and general scientific results for the benefit of society.
Active Player Modeling in the Iterated Prisoner's Dilemma
Park, Hyunsoo; Kim, Kyung-Joong
2016-01-01
The iterated prisoner's dilemma (IPD) is well known within the domain of game theory. Although it is relatively simple, it can also elucidate important problems related to cooperation and trust. Generally, players can predict their opponents' actions when they are able to build a precise model of their behavior based on their game playing experience. However, it is difficult to make such predictions based on a limited number of games. The creation of a precise model requires the use of not only an appropriate learning algorithm and framework but also a good dataset. Active learning approaches have recently been introduced to machine learning communities. The approach can usually produce informative datasets with relatively little effort. Therefore, we have proposed an active modeling technique to predict the behavior of IPD players. The proposed method can model the opponent player's behavior while taking advantage of interactive game environments. This experiment used twelve representative types of players as opponents, and an observer used an active modeling algorithm to model these opponents. This observer actively collected data and modeled the opponent's behavior online. Most of our data showed that the observer was able to build, through direct actions, a more accurate model of an opponent's behavior than when the data were collected through random actions. PMID:26989405
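The active-probing idea can be made concrete with a toy observer that deliberately balances its own moves so that both conditioning contexts are sampled. The opponent here is tit-for-tat, and the conditional-frequency model is a deliberate simplification of the learning machinery used in the study:

```python
def tit_for_tat(history):
    """Hypothetical opponent: copies the observer's previous move (C first)."""
    return history[-1][0] if history else "C"

def model_opponent(opponent, rounds=200):
    """Estimate P(opponent cooperates | observer's last move) from interactive play.

    Instead of acting randomly, the observer probes actively: it alternates
    C and D so that both conditioning contexts occur equally often.
    """
    history = []                         # (observer_move, opponent_move) pairs
    counts = {"C": [0, 0], "D": [0, 0]}  # [cooperations, trials] per context
    for t in range(rounds):
        my_move = "C" if t % 2 == 0 else "D"  # balanced active probing
        opp_move = opponent(history)
        if history:                           # context = observer's previous move
            context = history[-1][0]
            counts[context][1] += 1
            if opp_move == "C":
                counts[context][0] += 1
        history.append((my_move, opp_move))
    return {ctx: c / n for ctx, (c, n) in counts.items() if n}

model = model_opponent(tit_for_tat)  # {'C': 1.0, 'D': 0.0} for tit-for-tat
```

A random observer would eventually sample both contexts too, but the active schedule gets balanced data in half the rounds, which is the intuition behind the paper's result that direct actions yield more accurate opponent models than random ones.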
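As a concrete illustration of the active-modeling idea, the sketch below has an observer play the IPD against a memory-one opponent and choose each of its own moves so as to probe the branch of contexts (previous joint moves) it has sampled least. The opponent, the probing rule, and all names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of active opponent modeling in the IPD. The observer
# estimates P(opponent cooperates | previous joint move) and picks each of
# its own moves to probe the context branch it has sampled least.
C, D = "C", "D"

def tit_for_tat(history):
    """Illustrative memory-one opponent: repeats the observer's last move."""
    return history[-1][0] if history else C

def active_observer(opponent, rounds=200):
    counts = {}   # context (my_last, opp_last) -> [times opponent cooperated, total]
    history = []
    for _ in range(rounds):
        context = history[-1] if history else None
        # Active step: play the move whose follow-on contexts are least sampled.
        my_move = min((C, D),
                      key=lambda m: sum(v[1] for k, v in counts.items() if k[0] == m))
        opp_move = opponent(history)
        if context is not None:
            c = counts.setdefault(context, [0, 0])
            c[0] += opp_move == C
            c[1] += 1
        history.append((my_move, opp_move))
    return {k: v[0] / v[1] for k, v in counts.items()}

model = active_observer(tit_for_tat)
```

Against tit-for-tat, the learned table converges to P(cooperate | my last move was C) = 1 and P(cooperate | my last move was D) = 0, because that opponent simply echoes the observer's previous move.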
Lu, Ji; Pan, Junhao; Zhang, Qiang; Dubé, Laurette; Ip, Edward H.
2015-01-01
With intensively collected longitudinal data, recent advances in the Experience Sampling Method (ESM) benefit empirical research in the social sciences, but also pose important methodological challenges. As traditional statistical models are not generally well equipped to analyze a system of variables that contains feedback loops, this paper proposes the utility of an extended hidden Markov model for the reciprocal relationship between momentary emotion and eating behavior. The paper revisits an ESM data set (Lu, Huet & Dube, 2011) that observed 160 participants’ food consumption and momentary emotions six times per day over 10 days. Focusing on the analysis of the feedback loop between mood and meal healthiness decisions, the proposed Reciprocal Markov Model (RMM) can accommodate both hidden states (“general” emotional states: positive vs. negative) and observed states (meal: healthier, the same, or less healthy than usual) without presuming independence between observations or smooth trajectories of mood or behavior changes. The results of the RMM analyses illustrate the reciprocal chains of meal consumption and mood, as well as the effect of contextual factors that moderate the interrelationship between eating and emotion. A simulation experiment that generated data consistent with the empirical study further demonstrated that the procedure is promising in terms of recovering the parameters. PMID:26717120
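A toy sketch of the hidden/observed-state structure behind such a model: a standard discrete-HMM forward pass with hidden mood states emitting observed meal categories. The reciprocal (two-chain) coupling of the actual RMM is simplified away here, and all probabilities are made-up illustrative numbers.

```python
# Minimal discrete hidden Markov model forward pass: hidden mood states
# (positive/negative) emit observed meal categories (healthier/same/less).
# All numbers below are invented for illustration only.

def forward(obs, init, trans, emit):
    """Return P(observation sequence) under a discrete HMM."""
    n = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * trans[r][s] for r in range(n)) * emit[s][o]
                 for s in range(n)]
    return sum(alpha)

init = [0.6, 0.4]                 # P(positive mood), P(negative mood)
trans = [[0.8, 0.2],              # mood persistence
         [0.3, 0.7]]
emit = [[0.5, 0.3, 0.2],          # P(meal | positive): healthier / same / less
        [0.2, 0.3, 0.5]]          # P(meal | negative)
p = forward([0, 1, 2], init, trans, emit)
```

Because rows of the transition and emission matrices sum to one, summing `forward` over every possible observation sequence of a fixed length recovers total probability 1, a quick sanity check on the implementation.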
Optimal joint detection and estimation that maximizes ROC-type curves
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.
2017-01-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hearin, Andrew P.; Zentner, Andrew R., E-mail: aph15@pitt.edu, E-mail: zentner@pitt.edu
Forthcoming projects such as the Dark Energy Survey, the Joint Dark Energy Mission, and the Large Synoptic Survey Telescope aim to measure weak lensing shear correlations with unprecedented accuracy. Weak lensing observables are sensitive to both the distance-redshift relation and the growth of structure in the Universe. If the cause of accelerated cosmic expansion is dark energy within general relativity, both cosmic distances and structure growth are governed by the properties of dark energy. Consequently, one may use lensing to check for this consistency and test general relativity. After reviewing the phenomenology of such tests, we address a major challenge to such a program. The evolution of the baryonic component of the Universe is highly uncertain and can influence lensing observables, manifesting as modified structure growth for a fixed cosmic distance scale. Using two proposed methods, we show that one could be led to reject the null hypothesis of general relativity when it is the true theory if this uncertainty in baryonic processes is neglected. Recent simulations suggest that we can correct for baryonic effects using a parameterized model in which the halo mass-concentration relation is modified. The correction suffices to render biases small compared to statistical uncertainties. We study the ability of future weak lensing surveys to constrain the internal structures of halos and test the null hypothesis of general relativity simultaneously. Compared to alternative methods which null information from small scales to mitigate sensitivity to baryonic physics, this internal calibration program should provide limits on deviations from general relativity that are several times more constraining. Specifically, we find that limits on general relativity in the case of internal calibration are degraded by only ~30% or less compared to the case of perfect knowledge of nonlinear structure.
Kruschwitz, J D; Waller, L; Daedelow, L S; Walter, H; Veer, I M
2018-05-01
One hallmark example of a link between global topological network properties of complex functional brain connectivity and cognitive performance is the finding that general intelligence may depend on the efficiency of the brain's intrinsic functional network architecture. However, although this association has featured prominently over the course of the last decade, the empirical basis for this broad association of general intelligence and global functional network efficiency is quite limited. In the current study, we set out to replicate the previously reported association between general intelligence and global functional network efficiency using the large sample size and high-quality data of the Human Connectome Project, and to extend the original study by testing for separate associations of crystallized and fluid intelligence with global efficiency, characteristic path length, and global clustering coefficient. We were unable to provide evidence for the proposed association between general intelligence and functional brain network efficiency, as was demonstrated by van den Heuvel et al. (2009), or for any other association with the global network measures employed. More specifically, across multiple network definition schemes, ranging from voxel-level networks to networks of only 100 nodes, no robust associations and only very weak non-significant effects with a maximal R² of 0.01 could be observed. Notably, the strongest (non-significant) effects were observed in voxel-level networks. We discuss the possibility that the low power of previous studies and publication bias may have led to false positive results fostering the widely accepted notion of general intelligence being associated with global functional network efficiency. Copyright © 2018 Elsevier Inc. All rights reserved.
Testing general relativity on accelerators
Kalaydzhyan, Tigran
2015-09-07
Within the general theory of relativity, the curvature of spacetime is related to the energy and momentum of the matter and radiation present. One of the more specific predictions of general relativity is the deflection of light and particle trajectories in the gravitational field of massive objects. Bending angles for electromagnetic waves, and light in particular, have been measured with high precision. However, the effect of gravity on relativistic massive particles has never been studied experimentally. Here we propose and analyze experiments devoted to that purpose. We demonstrate a high sensitivity of laser Compton scattering at high-energy accelerators to the effects of gravity. The main observable, the maximal energy of the scattered photons, would experience a significant shift in the ambient gravitational field even for an otherwise negligible violation of the equivalence principle. In conclusion, we confirm predictions of general relativity for ultrarelativistic electrons with energies of tens of GeV at the current level of resolution, and expect our work to be a starting point for further high-precision studies on current and future accelerators, such as PETRA, the European XFEL and the ILC.
Lectins of beneficial microbes: system organisation, functioning and functional superfamily.
Lakhtin, M; Lakhtin, V; Alyoshkin, V; Afanasyev, S
2011-06-01
In this review our latest results and proposals concerning general aspects of lectin studies are summarised and compared. The system presence, organisation and functioning of lectins are proposed, with emphasis on studies of the lectins of beneficial symbiotic microbes. The proposed general principles of lectin functioning allow for a comparison of lectins with other carbohydrate-recognition systems. A new structure-functional superfamily of symbiotic microbial lectins is proposed and its main properties are described. The proposed superfamily allows for extended searches of the biological activities of any microbial member. Prospects for the lectins of beneficial symbiotic microorganisms are discussed.
Argenti, Fabrizio; Bianchi, Tiziano; Alparone, Luciano
2006-11-01
In this paper, a new despeckling method based on undecimated wavelet decomposition and maximum a posteriori (MAP) estimation is proposed. The method relies on the assumption that the probability density function (pdf) of each wavelet coefficient is generalized Gaussian (GG). The major novelty of the proposed approach is that the parameters of the GG pdf are taken to be space-varying within each wavelet frame. Thus, they may be adjusted to the spatial image context, not only to scale and orientation. Since the MAP equation to be solved is a function of the parameters of the assumed pdf model, the variance and shape factor of the GG function are derived from the theoretical moments, which depend on the moments and joint moments of the observed noisy signal and on the statistics of the speckle. The solution of the MAP equation yields the MAP estimate of the wavelet coefficients of the noise-free image, and the restored SAR image is synthesized from these coefficients. Experimental results, carried out on both synthetic speckled images and true SAR images, demonstrate that MAP filtering can be successfully applied to SAR images represented in the shift-invariant wavelet domain, without resorting to a logarithmic transformation.
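A standard way to estimate the GG shape factor from data is moment matching on the ratio E[x²]/(E|x|)², which equals Γ(1/ν)Γ(3/ν)/Γ(2/ν)² and decreases monotonically in ν, so it can be inverted by bisection. The sketch below shows this classic estimator as an assumed stand-in for the paper's exact space-varying procedure.

```python
import math
import random

# Moment-matching estimator of the generalized Gaussian shape factor nu,
# using E[x^2] / (E|x|)^2 = Gamma(1/nu) * Gamma(3/nu) / Gamma(2/nu)^2.
# Illustrative only; not the authors' space-varying estimation scheme.

def gg_moment_ratio(nu):
    g = math.gamma
    return g(1.0 / nu) * g(3.0 / nu) / g(2.0 / nu) ** 2

def estimate_shape(samples, lo=0.2, hi=10.0, iters=60):
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    target = m2 / (m1 * m1)
    for _ in range(iters):                 # bisect the monotone ratio
        mid = 0.5 * (lo + hi)
        if gg_moment_ratio(mid) > target:  # ratio too high -> true nu is larger
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = random.Random(7)
nu_hat = estimate_shape([rng.gauss(0.0, 1.0) for _ in range(20000)])  # true nu = 2
```

For Gaussian samples the estimate lands near ν = 2 (the Gaussian case), while ν = 1 recovers the Laplacian, where the ratio is exactly 2.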
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
This report is concerned with the nature and scope of the technical services to be rendered and the general plan proposed for the operation of Building 3525, the High Radiation Level Examination Laboratory (HRLEL). The role of postirradiation examination in implementing the overall task of irradiation testing for various programs under way at the Oak Ridge National Laboratory (ORNL), and the importance of this effort to the United States reactor development program, are stressed. The shielded-cell complex, with provisions for remote decontamination, hot-equipment storage, and maintenance, is described, as well as other supporting activities incorporated into the facility. The proposed technical functions include general observation, mensuration, nondestructive testing, burnup and induced-activity measurements, fission-gas sampling and analysis, corrosion evaluation and related measurements, disassembly and cutup, metallographic examination, mechanical-property determinations, and X-ray diffraction analyses. Equipment design and operational features to improve detection and measurement of selected properties in radioactive materials are also described. The current status of design, procurement, construction, and preoperational testing of in-cell equipment in the mockup is presented, along with a forecast of future needs. The mode of operation, manpower requirements, and management of the facility are discussed.
Bayesian nonparametric regression with varying residual density
Pati, Debdeep; Dunson, David B.
2013-01-01
We consider the problem of robust Bayesian inference on the mean regression function allowing the residual density to change flexibly with predictors. The proposed class of models is based on a Gaussian process prior for the mean regression function and mixtures of Gaussians for the collection of residual densities indexed by predictors. Initially considering the homoscedastic case, we propose priors for the residual density based on probit stick-breaking (PSB) scale mixtures and symmetrized PSB (sPSB) location-scale mixtures. Both priors restrict the residual density to be symmetric about zero, with the sPSB prior more flexible in allowing multimodal densities. We provide sufficient conditions to ensure strong posterior consistency in estimating the regression function under the sPSB prior, generalizing existing theory focused on parametric residual distributions. The PSB and sPSB priors are generalized to allow residual densities to change nonparametrically with predictors through incorporating Gaussian processes in the stick-breaking components. This leads to a robust Bayesian regression procedure that automatically down-weights outliers and influential observations in a locally-adaptive manner. Posterior computation relies on an efficient data augmentation exact block Gibbs sampler. The methods are illustrated using simulated and real data applications. PMID:24465053
Ground cloud related weather modification effects. [heavy lift launch vehicles
NASA Technical Reports Server (NTRS)
Lee, J.
1980-01-01
The principal concerns about inadvertent weather modification by the solar power satellite system rocket effluents are discussed, namely the possibility that the ground cloud might temporarily modify local weather and the cumulative effects of nearly 500 launches per year. These issues are discussed through the consideration of (1) the possible alteration of the microphysical processes of clouds in the general area due to rocket effluents and debris and cooling water entrained during the launch and (2) the direct dynamical and thermodynamical responses to the inputs of thermal energy and moisture from the rocket exhaust for given ambient meteorological conditions. The huge amount of thermal energy contained in the exhaust of the proposed launch vehicle would in some situations induce a saturated, wet convective cloud or enhance an existing convective activity. Nevertheless, the effects would be limited to the general area of the launch site. The observed long lasting high concentrations of cloud condensation nuclei produced during and after a rocket launch may appreciably affect the frequency of occurrence and persistence of fogs and haze. In view of the high mission frequency proposed for the vehicle launches, a potential exists for a cumulative effect.
Characteristics and evolution of writing impairment in Alzheimer's disease.
Platel, H; Lambert, J; Eustache, F; Cadet, B; Dary, M; Viader, F; Lechevalier, B
1993-11-01
Rapcsak et al. (Archs Neurol. 46, 65-67, 1989) proposed a hypothesis describing the evolution of agraphic impairments in dementia of the Alzheimer type (DAT): lexico-semantic disturbances at the beginning of the disease, with impairments becoming more and more phonological as the dementia becomes more severe. Our study was conducted to test this hypothesis on the basis of an analysis of the changes observed in the agraphia of patients with DAT. A writing-to-dictation test was administered to 22 patients twice, with an interval of 9-12 months between the tests. The results show that within 1 year there was little change in the errors made by the patients in the writing test. The changes observed, however, were all found to develop within the same logical progression (as demonstrated by Correspondence Analysis). These findings made it possible to develop a general hypothesis indicating that the agraphic impairment evolves through three phases in patients with DAT. The first is a phase of mild impairment (with a few possible phonologically plausible errors). In the second phase non-phonological spelling errors predominate, phonologically plausible errors are fewer, and the errors mostly involve irregular words and non-words. The last phase involves more extreme disorders that affect all types of words, with many alterations due to impaired graphic motor capacity. This work tends to confirm the hypothesis proposed by Rapcsak et al. concerning the development of agraphia, and emphasizes the importance of peripheral impairments, especially grapho-motor impairments, which come in addition to the lexical and phonological impairments.
Estimating Precipitation Input to a Watershed by Combining Gauge and Radar Derived Observations
NASA Astrophysics Data System (ADS)
Ercan, M. B.; Goodall, J. L.
2011-12-01
One challenge in creating an accurate watershed model is obtaining estimates of precipitation intensity over the watershed area. While precipitation measurements are generally available from gauging stations and radar instruments, both of these approaches have strengths and weaknesses. A typical way of addressing this challenge is to use gauged precipitation estimates to calibrate radar-based estimates; however, this study proposes a slightly different approach in which the optimal daily precipitation value is selected from either the gauged or the radar estimates based on the observed streamflow for that day. Our proposed approach is perhaps most relevant for modeling watersheds that do not have a nearby precipitation gauge, or for regions that experience convective storms that are often highly spatially variable. Using the Eno River watershed located in Orange County, NC, three different precipitation datasets were created to predict streamflow at the watershed outlet for the period 2005-2010 using the Soil and Water Assessment Tool (SWAT): (1) estimates based only on precipitation gauging stations, (2) estimates based only on gauge-corrected radar observations, and (3) the combination of precipitation estimates from the gauge and radar data determined using our proposed approach. The results show that the combined precipitation approach significantly improves streamflow predictions (Nash-Sutcliffe coefficient, E = 0.66) when compared to the gauged estimates alone (E = 0.47) and the radar-based estimates alone (E = 0.45). Our study was limited to one watershed; additional studies are therefore needed to control for factors such as climate, ecology, and hydrogeology that will likely influence the results of the analysis.
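The selection idea and the skill score can be sketched in a few lines. The helper names below are assumptions for illustration, not the study's code; the Nash-Sutcliffe efficiency is E = 1 - Σ(Qobs - Qsim)² / Σ(Qobs - mean(Qobs))².

```python
# Nash-Sutcliffe efficiency and the per-day precipitation selection rule:
# for each day, keep whichever input (gauge or radar) yields a simulated
# flow closer to the observed flow. Helper names are hypothetical.

def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

def select_daily_input(gauge_p, radar_p, q_obs, q_sim_gauge, q_sim_radar):
    """Pick, per day, the precipitation whose simulated flow is closer to obs."""
    return [g if abs(qg - qo) <= abs(qr - qo) else r
            for g, r, qo, qg, qr in zip(gauge_p, radar_p, q_obs,
                                        q_sim_gauge, q_sim_radar)]
```

E = 1 indicates a perfect match, E = 0 means the model is no better than predicting the observed mean, and negative values mean it is worse.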
NASA Astrophysics Data System (ADS)
Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.
2018-01-01
Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration, so accurate processing of these data is required to obtain meaningful results from their analysis. Aims: In this paper we aim to develop an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet-Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even when rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at wavelengths other than Ca II K.
NASA Astrophysics Data System (ADS)
Faulk, S.; Moon, S.; Mitchell, J.; Lora, J. M.
2016-12-01
Titan's zonal-mean precipitation behavior has been widely investigated using general circulation models (GCMs), but the spatial and temporal variability of rainfall in Titan's active hydrologic cycle is less well understood. We conduct statistical analyses of rainfall, diagnosed from GCM simulations of Titan's atmosphere, to determine storm intensity and frequency. Intense storms of methane have been proposed to be critical for enabling mechanical erosion of Titan's surface, as indicated by extensive observations of dendritic valley networks. Using precipitation outputs from the Titan Atmospheric Model (TAM), a GCM shown to realistically simulate many features of Titan's atmosphere, we quantify the precipitation variability and resulting relative erosion rates within eight separate latitude bins for a variety of initial surface liquid distributions. We find that while the overall wettest regions are indeed the poles, the most intense rainfall generally occurs in the high mid-latitudes, between 45-67.5 degrees, consistent with recent geomorphological observations of alluvial fans concentrated at those latitudes. We also find that precipitation rates necessary for surface erosion, as estimated by Perron et al. (2006) J. Geophys. Res. 111, E11001, frequently occur at all latitudes, with recurrence intervals of less than one Titan year. Such analysis is crucial towards understanding the complex interaction between Titan's atmosphere and surface and defining the influence of precipitation on observed geomorphology.
NASA Astrophysics Data System (ADS)
Ji, Shengyue; Chen, Wu; Weng, Duojie; Wang, Zhenjie
2015-08-01
Hong Kong (22.3°N, 114.2°E, dip: 30.5°N; geomagnetic 15.7°N, 173.4°W, declination: 2.7°W) is a low-latitude area, and the Hong Kong Continuously Operating Reference Station (CORS) network has been developed and maintained by the Lands Department of the Hong Kong government since 2001. Based on GPS observations collected over a whole solar cycle, from 2001 to 2012, a method is proposed to estimate the zonal drift velocity as well as the tilt of the observed plasma bubbles, and the estimated results are statistically analyzed. It is found that although the plasma bubbles are basically vertical within the equatorial plane, the tilt can sometimes exceed 60° eastward or westward. Moreover, the tilt and the zonal drift velocity are correlated: when the velocity is large, the tilt is generally also large. Another finding is that large velocities and tilts generally occur in spring and autumn and in solar-active years.
Assessment of corneal properties based on statistical modeling of OCT speckle
Jesus, Danilo A.; Iskander, D. Robert
2016-01-01
A new approach to assessing the properties of the corneal micro-structure in vivo, based on statistical modeling of the speckle obtained from Optical Coherence Tomography (OCT), is presented. A number of statistical models were proposed to fit the corneal speckle data obtained from raw OCT images. Short-term changes in corneal properties were studied by inducing corneal swelling, whereas age-related changes were observed by analyzing data from sixty-five subjects aged between twenty-four and seventy-three years. The Generalized Gamma distribution was shown to be the best model, in terms of the Akaike Information Criterion, for fitting the OCT corneal speckle. Its parameters showed statistically significant differences (Kruskal-Wallis, p < 0.001) for short-term and age-related corneal changes. In addition, it was observed that age-related changes influence the corneal biomechanical behaviour when corneal swelling is induced. This study shows that the Generalized Gamma distribution can be utilized to model corneal speckle in OCT in vivo, providing complementary quantified information where the micro-structure of the corneal tissue is of the essence. PMID:28101409
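The AIC-based model selection used here can be illustrated with two simple candidates, the exponential and Rayleigh distributions, both special cases of the Generalized Gamma family. These closed-form stand-ins are an assumption for the sketch (the full Generalized Gamma has no closed-form MLE); the criterion is AIC = 2k - 2 ln L, lower is better.

```python
import math
import random

# AIC comparison of two candidate amplitude models; both are special cases
# of the Generalized Gamma family. Illustrative stand-in, not the paper's fit.

def aic(log_likelihood, n_params):
    return 2.0 * n_params - 2.0 * log_likelihood

def exp_loglik(data):
    lam = len(data) / sum(data)                        # MLE of the rate
    return sum(math.log(lam) - lam * x for x in data)

def rayleigh_loglik(data):
    s2 = sum(x * x for x in data) / (2.0 * len(data))  # MLE of sigma^2
    return sum(math.log(x / s2) - x * x / (2.0 * s2) for x in data)

rng = random.Random(3)
# Rayleigh(sigma=1) samples via inverse-transform sampling.
rayleigh_data = [math.sqrt(-2.0 * math.log(1.0 - rng.random()))
                 for _ in range(2000)]
```

On Rayleigh-distributed samples the Rayleigh model attains the lower AIC, as expected; with real speckle data one would fit the full three-parameter Generalized Gamma and compare AICs the same way.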
28 CFR 0.182 - Submission of proposed orders to the Office of Legal Counsel.
Code of Federal Regulations, 2010 CFR
2010-07-01
... OF THE DEPARTMENT OF JUSTICE Orders of the Attorney General § 0.182 Submission of proposed orders to the Office of Legal Counsel. All orders prepared for the approval or signature of the Attorney General...
Functional interaction-based nonlinear models with application to multiplatform genomics data.
Davenport, Clemontina A; Maity, Arnab; Baladandayuthapani, Veerabhadran
2018-05-07
Functional regression allows for a scalar response to be dependent on a functional predictor; however, not much work has been done when a scalar exposure that interacts with the functional covariate is introduced. In this paper, we present 2 functional regression models that account for this interaction and propose 2 novel estimation procedures for the parameters in these models. These estimation methods allow for a noisy and/or sparsely observed functional covariate and are easily extended to generalized exponential family responses. We compute standard errors of our estimators, which allows for further statistical inference and hypothesis testing. We compare the performance of the proposed estimators to each other and to one found in the literature via simulation and demonstrate our methods using a real data example. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Singh, Navneet K.; Singh, Asheesh K.; Tripathy, Manoj
2012-05-01
For power industries, electricity load forecasting plays an important role in real-time control, security, optimal unit commitment, economic scheduling, maintenance, energy management, and plant structure planning.
Understanding similarity of groundwater systems with empirical copulas
NASA Astrophysics Data System (ADS)
Haaf, Ezra; Kumar, Rohini; Samaniego, Luis; Barthel, Roland
2016-04-01
Within the classification framework for groundwater systems that aims for identifying similarity of hydrogeological systems and transferring information from a well-observed to an ungauged system (Haaf and Barthel, 2015; Haaf and Barthel, 2016), we propose a copula-based method for describing groundwater-systems similarity. Copulas are an emerging method in hydrological sciences that make it possible to model the dependence structure of two groundwater level time series, independently of the effects of their marginal distributions. This study is based on Samaniego et al. (2010), which described an approach calculating dissimilarity measures from bivariate empirical copula densities of streamflow time series. Subsequently, streamflow is predicted in ungauged basins by transferring properties from similar catchments. The proposed approach is innovative because copula-based similarity has not yet been applied to groundwater systems. Here we estimate the pairwise dependence structure of 600 wells in Southern Germany using 10 years of weekly groundwater level observations. Based on these empirical copulas, dissimilarity measures are estimated, such as the copula's lower- and upper corner cumulated probability, copula-based Spearman's rank correlation - as proposed by Samaniego et al. (2010). For the characterization of groundwater systems, copula-based metrics are compared with dissimilarities obtained from precipitation signals corresponding to the presumed area of influence of each groundwater well. This promising approach provides a new tool for advancing similarity-based classification of groundwater system dynamics. Haaf, E., Barthel, R., 2015. Methods for assessing hydrogeological similarity and for classification of groundwater systems on the regional scale, EGU General Assembly 2015, Vienna, Austria. Haaf, E., Barthel, R., 2016. 
An approach for classification of hydrogeological systems at the regional scale based on groundwater hydrographs EGU General Assembly 2016, Vienna, Austria. Samaniego, L., Bardossy, A., Kumar, R., 2010. Streamflow prediction in ungauged catchments using copula-based dissimilarity measures. Water Resources Research, 46. DOI:10.1029/2008wr007695
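A hedged sketch of the copula-based similarity idea (illustrative, not the study's implementation): transform two groundwater-level time series to ranks (pseudo-observations) and compute a Spearman-type dependence measure from them. The rank transform discards the marginal distributions, which is exactly what working with the empirical copula requires.

```python
# Pseudo-observations and a copula-based Spearman correlation.
# Assumes no ties in the series; names are illustrative.

def pseudo_obs(series):
    """Map values to ranks scaled into (0, 1): the empirical copula margins."""
    order = sorted(range(len(series)), key=lambda i: series[i])
    ranks = [0.0] * len(series)
    for r, i in enumerate(order, start=1):
        ranks[i] = r / (len(series) + 1)
    return ranks

def spearman_rho(x, y):
    """Pearson correlation of the pseudo-observations = Spearman's rho."""
    u, v = pseudo_obs(x), pseudo_obs(y)
    mu_u = sum(u) / len(u)
    mu_v = sum(v) / len(v)
    cov = sum((a - mu_u) * (b - mu_v) for a, b in zip(u, v))
    su = sum((a - mu_u) ** 2 for a in u) ** 0.5
    sv = sum((b - mu_v) ** 2 for b in v) ** 0.5
    return cov / (su * sv)
```

Because only ranks enter, the measure is invariant under any strictly increasing transform of either series, so two wells with the same dependence structure but very different level distributions still come out as similar.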
NASA Astrophysics Data System (ADS)
Yu, M.; Wu, B.
2017-12-01
As an important part of the coupled Eco-Hydrological processes, evaporation is the bond for exchange of energy and heat between the surface and the atmosphere. However, the estimation of evaporation remains a challenge compared with other main hydrological factors in water cycle. The complementary relationship which proposed by Bouchet (1963) has laid the foundation for various approaches to estimate evaporation from land surfaces, the essence of the principle is a relationship between three types of evaporation in the environment. It can simply implemented with routine meteorological data without the need for resistance parameters of the vegetation and bare land, which are difficult to observed and complicated to estimate in most surface flux models. On this basis the generalized nonlinear formulation was proposed by Brutsaert (2015). The daily evaporation can be estimated once the potential evaporation (Epo) and apparent potential evaporation (Epa) are known. The new formulation has a strong physical basis and can be expected to perform better under natural water stress conditions, nevertheless, the model has not been widely validated over different climate types and underlying surface patterns. In this study, we attempted to apply the generalized nonlinear complementary relationship in North China, three flux stations in North China are used for testing the universality and accuracy of this model against observed evaporation over different vegetation types, including Guantao Site, Miyun Site and Huailai Site. Guantao Site has double-cropping systems and crop rotations with summer maize and winter wheat; the other two sites are dominated by spring maize. Detailed measurements of meteorological factors at certain heights above ground surface from automatic weather stations offered necessary parameters for daily evaporation estimation. 
Using the Bowen ratio, the surface energy measured by the eddy covariance systems at the flux stations was adjusted on a daily scale to satisfy surface energy closure. After calibration, the estimated daily evaporation is in good agreement with the EC-measured flux data, with a mean correlation coefficient in excess of 0.85. The results indicate that the generalized nonlinear complementary relationship can be applied in both the plant growing and non-growing seasons in North China.
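The c = 0 polynomial form of Brutsaert's (2015) generalized complementary relationship can be sketched as follows; the function name and the example Epo/Epa values are illustrative, not taken from the study:

```python
import numpy as np

def evaporation_brutsaert(epo, epa):
    """Brutsaert (2015) generalized nonlinear complementary relationship,
    c = 0 case: E/Epa = 2*x**2 - x**3 with x = Epo/Epa.

    epo : potential evaporation (e.g. mm/day)
    epa : apparent potential evaporation (same units), epa >= epo
    """
    epo = np.asarray(epo, dtype=float)
    epa = np.asarray(epa, dtype=float)
    x = epo / epa                     # dimensionless ratio Epo/Epa
    y = 2.0 * x**2 - x**3             # dimensionless E/Epa
    return y * epa                    # actual evaporation E

print(evaporation_brutsaert(4.0, 4.0))   # -> 4.0
print(evaporation_brutsaert(1.0, 8.0))
```

Setting Epo = Epa (no water stress) recovers E = Epo, while Epo << Epa drives E toward zero, which is the complementary behavior the abstract describes.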
NASA Astrophysics Data System (ADS)
Wang, Hui-Lin; An, Ru; You, Jia-jun; Wang, Ying; Chen, Yuehong; Shen, Xiao-ji; Gao, Wei; Wang, Yi-nan; Zhang, Yu; Wang, Zhe; Quaye-Ballard, Jonathan Arthur
2017-10-01
Soil moisture plays an important role in the water cycle within the surface ecosystem, and it is the basic condition for the growth of plants. Currently, the spatial resolutions of most soil moisture data from remote sensing range from ten to several tens of km, while those observed in-situ and simulated for watershed hydrology, ecology, agriculture, weather, and drought research are generally <1 km. Therefore, the existing coarse-resolution remotely sensed soil moisture data need to be downscaled. This paper proposes a universal and multitemporal soil moisture downscaling method suitable for large areas. The datasets comprise land surface temperature, brightness temperature, precipitation, and soil and topographic parameters from high-resolution data, together with active/passive microwave remotely sensed essential climate variable soil moisture (ECV_SM) data with a spatial resolution of 25 km. Using this method, a total of 288 soil moisture maps at 1-km resolution, from the first 10-day period of January 2003 to the last 10-day period of December 2010, were derived. The in-situ observations were used to validate the downscaled ECV_SM. In general, the downscaled soil moisture values for different land cover and land use types are consistent with the in-situ observations. The root mean square error is reduced from 0.070 to 0.061, based on 1970 in-situ time-series observations from 28 sites distributed over different land use and land cover types. The performance was also assessed using the GDOWN metric, a measure of the overall performance of downscaling methods based on the same dataset; it was positive in 71.429% of cases, indicating that the suggested method generally improves the representation of soil moisture at 1-km resolution.
Dense Bicoid hubs accentuate binding along the morphogen gradient
Mir, Mustafa; Reimer, Armando; Haines, Jenna E.; Li, Xiao-Yong; Stadler, Michael; Garcia, Hernan
2017-01-01
Morphogen gradients direct the spatial patterning of developing embryos; however, the mechanisms by which these gradients are interpreted remain elusive. Here we used lattice light-sheet microscopy to perform in vivo single-molecule imaging in early Drosophila melanogaster embryos of the transcription factor Bicoid that forms a gradient and initiates patterning along the anteroposterior axis. In contrast to canonical models, we observed that Bicoid binds to DNA with a rapid off rate throughout the embryo such that its average occupancy at target loci is on-rate-dependent. We further observed Bicoid forming transient “hubs” of locally high density that facilitate binding as factor levels drop, including in the posterior, where we observed Bicoid binding despite vanishingly low protein levels. We propose that localized modulation of transcription factor on rates via clustering provides a general mechanism to facilitate binding to low-affinity targets and that this may be a prevalent feature of other developmental transcription factors. PMID:28982761
Coordinated ultraviolet and radio observations of selected nearby stars
NASA Technical Reports Server (NTRS)
Lang, Kenneth R.
1987-01-01
All of the US2 shifts assigned were successfully completed with simultaneous International Ultraviolet Explorer (IUE) and Very Large Array (VLA) observations of the proposed target stars. The target stars included dwarf M flare stars and RS CVn stars. The combined ultraviolet (IUE) and microwave (VLA) observations have provided important new insights into the radiation mechanisms at these two widely separated regions of the electromagnetic spectrum. The VLA results included the discovery of narrow-band microwave radiation and rapid time variations in the microwave radiation of dwarf M flare stars. The results indicate that conventional radiation mechanisms cannot explain the microwave emission from these stars. In general, ultraviolet variations and bursts occur when no similar variations are detected at microwave wavelengths, and vice versa. Although there is some overlap, the variations in these two spectral regions are usually uncorrelated, suggesting that there is little interaction between the activity centers at the two associated atmospheric levels.
Passport Officers’ Errors in Face Matching
White, David; Kemp, Richard I.; Jenkins, Rob; Matheson, Michael; Burton, A. Mike
2014-01-01
Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of ‘fraudulent’ photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately – though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection. PMID:25133682
Langevin dynamics for ramified structures
NASA Astrophysics Data System (ADS)
Méndez, Vicenç; Iomin, Alexander; Horsthemke, Werner; Campos, Daniel
2017-06-01
We propose a generalized Langevin formalism to describe transport in combs and similar ramified structures. Our approach consists of a Langevin equation without drift for the motion along the backbone. The motion along the secondary branches may be described either by a Langevin equation or by other types of random processes. The mean square displacement (MSD) along the backbone characterizes the transport through the ramified structure. We derive a general analytical expression for this observable in terms of the probability distribution function of the motion along the secondary branches. We apply our result to various types of motion along the secondary branches of finite or infinite length, such as subdiffusion, superdiffusion, and Langevin dynamics with colored Gaussian noise and with non-Gaussian white noise. Monte Carlo simulations show excellent agreement with the analytical results. The MSD for the case of Gaussian noise is shown to be independent of the noise color. We conclude by generalizing our analytical expression for the MSD to the case where each secondary branch is n dimensional.
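A minimal Monte Carlo sketch of comb transport (our own illustration, not the authors' simulation; the lattice rules and sample sizes are invented) shows the hallmark subdiffusive backbone MSD, which grows roughly as t^(1/2) for infinitely long side branches with ordinary diffusion:

```python
import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_steps = 4000, 1000
x = np.zeros(n_walkers, dtype=int)   # position along the backbone
y = np.zeros(n_walkers, dtype=int)   # position along a side branch

msd = np.zeros(n_steps + 1)
for t in range(1, n_steps + 1):
    on_backbone = (y == 0)
    # On the backbone a walker steps in x or y with equal probability;
    # inside a branch only y moves (branches carry no drift).
    move_x = on_backbone & (rng.random(n_walkers) < 0.5)
    x += np.where(move_x, rng.choice([-1, 1], n_walkers), 0)
    y += np.where(~move_x, rng.choice([-1, 1], n_walkers), 0)
    msd[t] = np.mean(x.astype(float) ** 2)

# Effective scaling exponent between t = 100 and t = 1000; ~0.5 is
# expected for infinite branches (normal diffusion would give 1).
alpha = np.log(msd[1000] / msd[100]) / np.log(10.0)
print(round(alpha, 2))
```

The same loop can be adapted to other branch dynamics (finite branches, colored noise) by changing only the y-update, mirroring how the analytical MSD in the paper depends only on the branch-motion statistics.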
Generalized nucleation and looping model for epigenetic memory of histone modifications
Erdel, Fabian; Greene, Eric C.
2016-01-01
Histone modifications can redistribute along the genome in a sequence-independent manner, giving rise to chromatin position effects and epigenetic memory. The underlying mechanisms shape the endogenous chromatin landscape and determine its response to ectopically targeted histone modifiers. Here, we simulate linear and looping-driven spreading of histone modifications and compare both models to recent experiments on histone methylation in fission yeast. We find that a generalized nucleation-and-looping mechanism describes key observations on engineered and endogenous methylation domains including intrinsic spatial confinement, independent regulation of domain size and memory, variegation in the absence of antagonists, and coexistence of short- and long-term memory at loci with weak and strong constitutive nucleation. These findings support a straightforward relationship between the biochemical properties of chromatin modifiers and the spatiotemporal modification pattern. The proposed mechanism gives rise to a phase diagram for cellular memory that may be generally applicable to explain epigenetic phenomena across different species. PMID:27382173
Crespo-Bojorque, Paola; Toro, Juan M
2015-02-01
Traditionally, physical features in musical chords have been proposed to be at the root of consonance perception. Alternatively, recent studies suggest that different types of experience modulate some perceptual foundations for musical sounds. The present study tested whether the mechanisms involved in the perception of consonance are present in an animal with no extensive experience with harmonic stimuli and a relatively limited vocal repertoire. In Experiment 1, rats were trained to discriminate consonant from dissonant chords and tested to explore whether they could generalize such discrimination to novel chords. In Experiment 2, we tested whether rats could discriminate between chords differing only in their interval ratios and generalize them to different octaves. To contrast the observed pattern of results, human adults were tested with the same stimuli in Experiment 3. Rats successfully discriminated across chords in both experiments, but they did not generalize to novel items in either Experiment 1 or Experiment 2. On the contrary, humans not only discriminated between the consonance and dissonance categories and among sets of interval ratios, but also generalized their responses to novel items. These results suggest that experience with harmonic sounds may be required for the construction of categories among stimuli varying in frequency ratios. However, the discriminative capacity observed in rats suggests that at least some components of auditory processing needed to distinguish chords based on their interval ratios are shared across species. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Gunn, Jane M; Palmer, Victoria J; Dowrick, Christopher F; Herrman, Helen E; Griffiths, Frances E; Kokanovic, Renata; Blashki, Grant A; Hegarty, Kelsey L; Johnson, Caroline L; Potiriadis, Maria; May, Carl R
2010-08-06
Depression and related disorders represent a significant part of general practitioners' (GPs') daily work. Implementing the evidence about what works for depression care into routine practice presents a challenge for researchers and service designers. The emerging consensus is that the transfer of efficacious interventions into routine practice is strongly linked to how well the interventions are based upon theory and take into account the contextual factors of the setting into which they are to be transferred. We set out to develop a conceptual framework to guide change and the implementation of best-practice depression care in the primary care setting. We used a mixed-method, observational approach to gather data about routine depression care in a range of primary care settings via: audit of electronic health records; observation of routine clinical care; and structured, facilitated whole-of-organisation meetings. Audit data were summarised using simple descriptive statistics. Observational data were collected using field notes. Organisational meetings were audiotaped and transcribed. All the data sets were grouped, by organisation, and considered as a whole case. Normalisation Process Theory (NPT) was identified as an analytical theory to guide the conceptual framework development. Five privately owned primary care organisations (general practices) and one community health centre took part over the course of 18 months.
We successfully developed a conceptual framework for implementing an effective model of depression care based on the four constructs of NPT: coherence, which proposes that depression work requires the conceptualisation of boundaries of who is depressed and who is not depressed and techniques for dealing with diffuseness; cognitive participation, which proposes that depression work requires engagement with a shared set of techniques that deal with depression as a health problem; collective action, which proposes that agreement is reached about how care is organised; and reflexive monitoring, which proposes that depression work requires agreement about how depression work will be monitored at the patient and practice level. We describe how these constructs can be used to guide the design and implementation of effective depression care in a way that can take account of contextual differences. Ideas about what is required for an effective model and system of depression care in primary care need to be accompanied by theoretically informed frameworks that consider how these can be implemented. The conceptual framework we have presented can be used to guide organisational and system change to develop common language around each construct between policy makers, service users, professionals, and researchers. This shared understanding across groups is fundamental to the effective implementation of change in primary care for depression.
Zanetti-Polzi, Laura; Corni, Stefano; Daidone, Isabella; Amadei, Andrea
2016-07-21
Here, a methodology is proposed to investigate the collective fluctuation modes of an arbitrary set of observables that maximally contribute to the fluctuation of another functionally relevant observable. The methodology, based on the analysis of fully classical molecular dynamics (MD) simulations, exploits the essential dynamics (ED) method, originally developed to analyse collective motions in proteins. We apply this methodology to identify the residues that are most relevant for determining the reduction potential (E(0)) of a redox-active protein. To this aim, the fluctuation modes of the single-residue electrostatic potentials contributing most to the fluctuations of the total electrostatic potential (the main determinant of E(0)) are investigated for wild-type azurin and two of its mutants with a higher E(0). By comparing the results obtained here with a previous study on the same systems [Zanetti-Polzi et al., Org. Biomol. Chem., 2015, 13, 11003] we show that the proposed methodology is able to identify the key sites that determine E(0). This information can be used for a deeper understanding of the molecular mechanisms underlying the redox properties of the proteins under investigation, as well as for the rational design of mutants with a higher or lower E(0). From the results of the present analysis we propose a new azurin mutant that, according to our calculations, shows a further increase of E(0).
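The core of such an analysis can be sketched in simplified form (a stand-in for the full essential-dynamics machinery; the synthetic "per-residue potential" trajectory below is invented for illustration): rank residues by the covariance of their electrostatic potential with the total potential.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_residues = 5000, 20

# Synthetic MD "trajectory" of per-residue electrostatic potentials:
# residue 3 is given a large fluctuation that drives the total.
v = rng.normal(0.0, 0.1, size=(n_frames, n_residues))
v[:, 3] += rng.normal(0.0, 1.0, size=n_frames)

total = v.sum(axis=1)            # total electrostatic potential per frame
dv = v - v.mean(axis=0)          # per-residue fluctuations about the mean
dtot = total - total.mean()

# Covariance of each residue's potential with the total: residues with the
# largest |cov| dominate the fluctuation of the total (and hence of E(0)).
cov = dv.T @ dtot / (n_frames - 1)
ranking = np.argsort(-np.abs(cov))
print(ranking[0])   # -> 3
```

In the actual methodology the ED analysis additionally extracts collective modes across residues, but the covariance ranking above already conveys why single sites can be singled out as key determinants.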
Lo, Kenneth
2011-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
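One ingredient of the proposal can be illustrated with SciPy's Box-Cox transform, which symmetrizes skewed data before mixture modeling (a simplified stand-in for the authors' joint EM estimation of transformation and mixture parameters; the lognormal sample is invented):

```python
import numpy as np
from scipy.stats import boxcox, skew

rng = np.random.default_rng(2)
# Heavily right-skewed "cluster" data (Box-Cox requires positive values).
x = rng.lognormal(mean=0.0, sigma=0.8, size=2000)

# Box-Cox transform with maximum-likelihood lambda; a normal (or t)
# mixture component fits the transformed data far better than the raw data.
xt, lam = boxcox(x)
print(round(abs(skew(x)), 2), round(abs(skew(xt)), 2))
```

In the full model the transformation parameter is estimated per component alongside the t-mixture parameters within the EM algorithm, so skewness and heavy tails are handled simultaneously rather than in a separate preprocessing step.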
Proposal for a new categorization of aseptic processing facilities based on risk assessment scores.
Katayama, Hirohito; Toda, Atsushi; Tokunaga, Yuji; Katoh, Shigeo
2008-01-01
Risk assessment of aseptic processing facilities was performed using two published risk assessment tools. Calculated risk scores were compared with experimental test results, including environmental monitoring and media fill run results, in three different types of facilities. The two risk assessment tools used gave a generally similar outcome. However, depending on the tool used, variations were observed in the relative scores between the facilities. For the facility yielding the lowest risk scores, the corresponding experimental test results showed no contamination, indicating that these routine testing methods are insufficient to evaluate this kind of facility. A conventional facility having acceptable aseptic processing lines gave relatively high risk scores. The facility showing a rather high risk score demonstrated the usefulness of conventional microbiological test methods. Considering the significant gaps observed between advanced and conventional facilities, both in calculated risk scores and in routine microbiological test results, we propose a facility categorization based on risk assessment. The most important risk factor in aseptic processing is human intervention. When human intervention is eliminated from the process by advanced hardware design, the aseptic processing facility can be classified into a new risk category that is better suited for assuring sterility based on a new set of criteria rather than on currently used microbiological analysis. To fully benefit from advanced technologies, we propose three risk categories for these aseptic facilities.
Reply [to the comment by Anderson et al. (1993)]
NASA Technical Reports Server (NTRS)
Hegg, Dean A.; Ferek, Ronald G.; Hobbs, Peter V.
1994-01-01
While Hegg et al. (1993) accept the criticism of Anderson et al. (1994) in principle, it involves the adoption of an aerosol composition model, and the model that Anderson et al. propose to reconcile these observations with the assertion of Charlson et al. (1992) does not agree with many observations, particularly those made over the North Atlantic Ocean. Although the use of a gain factor (i.e., the partial derivative of aerosol mass with respect to the sulfate ion), proposed by Anderson et al., may be valid for particular cases where a proposed composition model really reflects the actual aerosol composition, this procedure is considered questionable in general. The use of sulfate as a tracer for nonsulfate aerosol mass is questionable because, in the present authors' data set, sulfate averaged only about 26% of the dry aerosol mass. The relationship between ammonium mass and sulfate mass is not analogous to that between oxygen mass and sulfur mass in the sulfate ion. Strong chemical bonds are present between sulfur and oxygen in sulfate, whereas ammonium and sulfate in haze droplets are ions in solution that may or may not be associated with one another. Thus, there is no reason to assume that sulfate will act as a reliable tracer of ammonium mass. Hegg et al. express the view that their approach for estimating sulfate light scattering efficiency is appropriate for the current level of understanding of atmospheric aerosols.
Regression analysis of sparse asynchronous longitudinal data
Cao, Hongyuan; Zeng, Donglin; Fine, Jason P.
2015-01-01
We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time-invariant or time-dependent coefficients, under smoothness assumptions for the covariate processes similar to those for synchronous data. For models with either time-invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies show that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last-value-carried-forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus. PMID:26568699
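The kernel-weighting idea can be sketched in a toy form (not the authors' estimator; the smooth covariate process, bandwidth h, and sample sizes are invented): every response time is paired with every covariate time, and each pair is downweighted by how far apart the two times are.

```python
import numpy as np

rng = np.random.default_rng(3)
b0, b1 = 1.0, 2.0                        # true regression coefficients

# Asynchronous observation times on [0, 1] for response and covariate.
t_y = np.sort(rng.uniform(0, 1, 400))    # response times
t_x = np.sort(rng.uniform(0, 1, 400))    # covariate times (mismatched)
xfun = lambda t: np.sin(2 * np.pi * t)   # smooth latent covariate process
y = b0 + b1 * xfun(t_y) + rng.normal(0, 0.2, t_y.size)
x = xfun(t_x)

# Gaussian kernel weights over all (response, covariate) time pairs.
h = 0.02
w = np.exp(-0.5 * ((t_y[:, None] - t_x[None, :]) / h) ** 2)

# Kernel-weighted least squares for (b0, b1): solve the weighted normal
# equations summed over all mismatched pairs.
X = np.column_stack([np.ones(t_x.size), x])
A = (X.T * w.sum(axis=0)) @ X
b = X.T @ (w.T @ y)
est = np.linalg.solve(A, b)
print(np.round(est, 2))
```

Because the kernel only links nearby time points, the covariate value serves as a smooth proxy for its unobserved value at the response time, which is why the estimator converges, albeit at a slower rate than with synchronous data.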
NASA Astrophysics Data System (ADS)
Willamo, T.; Usoskin, I. G.; Kovaltsov, G. A.
2018-04-01
The method of active-day fraction (ADF) was proposed recently to calibrate different solar observers to standard observational conditions. The result of the calibration may depend on the overall level of solar activity during the observational period. This dependency is studied quantitatively using data of the Royal Greenwich Observatory by formally calibrating synthetic pseudo-observers to the full reference dataset. It is shown that the sunspot group number is precisely estimated by the ADF method for periods of moderate activity, may be slightly underestimated by 0.5 - 1.5 groups ({≤} 10%) for strong and very strong activity, and is strongly overestimated by up to 2.5 groups ({≤} 30%) for weak-to-moderate activity. The ADF method becomes inapplicable for the periods of grand minima of activity. In general, the ADF method tends to overestimate the overall level of activity and to reduce the long-term trends.
Initiation of Solar Eruptions: Recent Observations and Implications for Theories
NASA Technical Reports Server (NTRS)
Sterling, A. C.
2006-01-01
Solar eruptions involve the violent disruption of a system of magnetic field. Just how the field is destabilized and explodes to produce flares and coronal mass ejections (CMEs) is still being debated in the solar community. Here I discuss recent observational work into these questions by ourselves (me and my colleagues) and others. Our work has concentrated mainly on eruptions that include filaments. We use the filament motion early in the event as a tracer of the motion of the general erupting coronal field in and around the filament, since that field itself is hard to distinguish otherwise. Our main data sources are EUV images from SOHO/EIT and TRACE, soft X-ray images from Yohkoh, and magnetograms from SOHO/MDI, supplemented with coronagraph images from SOHO/LASCO, hard X-ray data, and ground-based observations. We consider the observational findings in terms of three proposed eruption-initiation mechanisms: (i) runaway internal tether-cutting reconnection, (ii) slow external tether-cutting reconnection ("breakout"), and (iii) ideal MHD instability.
[Thoracic aspergillosis: indications for surgery for a multifaceted disease!].
Massard, G
2004-04-01
We reviewed the different clinical forms of thoracic aspergillosis and detailed the surgical options. Classical aspergilloma, in which a tuft of Aspergillus grows in a parenchymal cavity, is the best-known entity. Simple forms (little clinical expression, thin-walled cavity without impact on neighboring tissue) can be distinguished from complex forms (poor general status, thickened cavity, sequelae). Surgery is the last resort for complex forms, but the procedure is benign for simple forms, allowing interruption of the spontaneous evolution. Pleural aspergillosis is a common complication of the excision procedure, whether performed early or at mid-term. Thoracoplasty is often required because of the volume of parenchyma removed. Surgery can be proposed for acute invasive aspergillosis in two situations: to prevent cataclysmic hemoptysis due to a paravascular lesion, or for resection of sequestered mycotic deposits which could lead to generalized reinfection. Semi-invasive aspergillosis is usually observed in areas of post-radiation fibrosis, where the typical aspergillar excavation appears after an initial phase of invasion leading to lobular pneumonia. Thoracoplasty is often the only surgical option. Ulcerated aspergillar tracheobronchitis is observed after (heart-)lung transplantation and raises the risk of characteristic invasive aspergillosis. Finally, rare observations of parietal aspergillosis have been treated by surgical resection in combination with systemic antifungal agents. Multidisciplinary consultation is required to establish the most appropriate approach.
Dionne, Raymond A
2016-09-01
Recently proposed revisions to the American Dental Association's Guidelines for the Use of Sedation and General Anesthesia by Dentists, aimed at improving safety in dental offices, differentiate between levels of sedation based on drug-induced changes in physiologic and behavioral states. However, the author of this op-ed is concerned the proposed revisions may have far-reaching and unintended consequences.
Generalized SMO algorithm for SVM-based multitask learning.
Cai, Feng; Cherkassky, Vladimir
2012-06-01
Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data", and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n^3) with sample size n. So there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up in comparison with general-purpose optimization routines.
Novel jet observables from machine learning
NASA Astrophysics Data System (ADS)
Datta, Kaustuv; Larkoski, Andrew J.
2018-03-01
Previous studies have demonstrated the utility and applicability of machine learning techniques to jet physics. In this paper, we construct new observables for the discrimination of jets from different originating particles exclusively from information identified by the machine. The approach we propose is to first organize information in the jet by resolved phase space and determine the effective N-body phase space at which discrimination power saturates. This then allows for the construction of a discrimination observable from the N-body phase space coordinates. A general form of this observable can be expressed with numerous parameters that are chosen so that the observable maximizes the signal vs. background likelihood. Here, we illustrate this technique applied to the discrimination of H → bb̄ decays from massive g → bb̄ splittings. We show that for a simple parametrization, we can construct an observable that has discrimination power comparable to, or better than, widely used observables motivated from theory considerations. For the case of jets on which modified mass-drop tagger grooming is applied, the observable that the machine learns is essentially the angle of the dominant gluon emission off of the bb̄ pair.
A counterfactual p-value approach for benefit-risk assessment in clinical trials.
Zeng, Donglin; Chen, Ming-Hui; Ibrahim, Joseph G; Wei, Rachel; Ding, Beiying; Ke, Chunlei; Jiang, Qi
2015-01-01
Clinical trials generally allow various efficacy and safety outcomes to be collected for health interventions. Benefit-risk assessment is an important issue when evaluating a new drug. Currently, there is a lack of standardized and validated benefit-risk assessment approaches in drug development, due to various challenges. To quantify benefits and risks, we propose a counterfactual p-value (CP) approach. Our approach considers a spectrum of weights for weighting benefit-risk values and computes the extreme probabilities of observing the weighted benefit-risk value in one treatment group as if patients were treated in the other treatment group. The proposed approach is applicable to a single benefit and single risk outcome as well as to multiple benefit and risk outcomes. In addition, prior information on the importance of outcomes can be incorporated into the approach through the weight schemes. The proposed CP plot is intuitive, with a visualized weight pattern. The average area under the CP curve and the preferred probability over time are used for overall treatment comparison, and a bootstrap approach is applied for statistical inference. We assess the proposed approach using simulated data with multiple efficacy and safety endpoints and compare its performance with a stochastic multi-criteria acceptability analysis approach.
A stepwise model to predict monthly streamflow
NASA Astrophysics Data System (ADS)
Mahmood Al-Juboori, Anas; Guven, Aytac
2016-12-01
In this study, a stepwise model empowered with genetic programming is developed to predict the monthly flows of the Hurman River in Turkey and the Diyalah and Lesser Zab Rivers in Iraq. The model divides the monthly flow data into twelve intervals representing the number of months in a year. The flow of a month t is considered as a function of the antecedent month's flow (t - 1), and it is predicted by multiplying the antecedent monthly flow by a constant value called K. The optimum value of K is obtained by a stepwise procedure which employs Gene Expression Programming (GEP) and Nonlinear Generalized Reduced Gradient Optimization (NGRGO) as alternatives to traditional nonlinear regression techniques. The coefficient of determination and the root mean squared error are used to evaluate the performance of the proposed models. The results of the proposed model are compared with the conventional Markovian and Auto Regressive Integrated Moving Average (ARIMA) models based on observed monthly flow data. The comparison results, based on five different statistical measures, show that the proposed stepwise model performed better than the Markovian and ARIMA models. The R2 values of the proposed model range between 0.81 and 0.92 for the three rivers in this study.
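The model Q_t = K_m * Q_{t-1} with one constant per calendar month admits a closed-form least-squares K, which makes a useful baseline before the GEP/NGRGO search described above. A minimal numpy sketch (the paper optimizes K differently; this is only the regression baseline, with hypothetical function names):

```python
import numpy as np

def fit_monthly_k(flow, months):
    """Least-squares K_m per calendar month for Q_t ~ K_m * Q_{t-1}.

    flow:   array of monthly flows
    months: calendar month (1-12) of each observation
    """
    k = np.zeros(13)  # index by month number; slot 0 unused
    for m in range(1, 13):
        # pairs (Q_{t-1}, Q_t) where the month of t is m
        idx = np.where(months[1:] == m)[0] + 1
        prev, cur = flow[idx - 1], flow[idx]
        k[m] = np.sum(prev * cur) / np.sum(prev * prev)
    return k

def predict(flow, months, k):
    """One-step-ahead prediction Q_t = K_m * Q_{t-1}."""
    return k[months[1:]] * flow[:-1]
```

The closed form follows from minimizing sum((Q_t - K * Q_{t-1})^2) over K for each month separately.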
A novel framework for feature extraction in multi-sensor action potential sorting.
Wu, Shun-Chi; Swindlehurst, A Lee; Nenadic, Zoran
2015-09-30
Extracellular recordings of multi-unit neural activity have become indispensable in neuroscience research. The analysis of the recordings begins with the detection of the action potentials (APs), followed by a classification step where each AP is associated with a given neural source. A feature extraction step is required prior to classification in order to reduce the dimensionality of the data and the impact of noise, allowing source clustering algorithms to work more efficiently. In this paper, we propose a novel framework for multi-sensor AP feature extraction based on the so-called Matched Subspace Detector (MSD), which is shown to be a natural generalization of standard single-sensor algorithms. Clustering using both simulated data and real AP recordings taken in the locust antennal lobe demonstrates that the proposed approach yields features that are discriminatory and lead to promising results. Unlike existing methods, the proposed algorithm finds joint spatio-temporal feature vectors that match the dominant subspace observed in the two-dimensional data without the need for a forward propagation model or AP templates. The proposed MSD approach provides more discriminatory features for unsupervised AP sorting applications. Copyright © 2015 Elsevier B.V. All rights reserved.
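As a rough illustration of subspace-based feature extraction, the following sketch projects flattened spatio-temporal AP snippets onto their dominant singular subspace. This is a plain SVD projection, not the Matched Subspace Detector itself; the function name and rank are assumptions:

```python
import numpy as np

def dominant_subspace_features(snippets, rank=3):
    """Project multichannel AP snippets onto the dominant subspace.

    snippets: array (n_spikes, n_channels * n_samples), one flattened
              spatio-temporal snippet per detected action potential.
    Returns low-dimensional feature vectors (n_spikes, rank).
    """
    x = snippets - snippets.mean(axis=0)        # center the data
    # right singular vectors span the dominant signal subspace
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    basis = vt[:rank]                           # (rank, dim)
    return x @ basis.T                          # (n_spikes, rank)
```

Feature vectors produced this way would then be handed to any unsupervised clustering algorithm for source assignment.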
Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie
2017-08-01
Semicontinuous data featured with an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be the substance abuse/dependence symptoms data for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by the correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions including skew- t and skew-normal distributions (Part II). The proposed method is illustrated with an alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
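The two-part structure can be made concrete with a small simulation: a Bernoulli draw decides occurrence (Part I) and a right-skewed distribution generates the intensity of positives (Part II). Here a log-normal stands in for the paper's skew-t and skew-normal errors, there are no random effects, and all parameter values are arbitrary:

```python
import numpy as np

def simulate_two_part(n, p_pos=0.4, mu=1.0, sigma=0.8, seed=0):
    """Simulate semicontinuous outcomes: structural zeros with
    probability 1 - p_pos (Part I), log-normal positive values
    otherwise (Part II; a skewed stand-in for skew-t errors)."""
    rng = np.random.default_rng(seed)
    positive = rng.random(n) < p_pos
    y = np.zeros(n)
    y[positive] = rng.lognormal(mu, sigma, positive.sum())
    return y

def fit_two_part(y):
    """Naive marginal estimates of the two parts (no random effects):
    probability of a positive value and mean of log-positives."""
    pos = y > 0
    return pos.mean(), np.log(y[pos]).mean()
```

The paper's contribution is to link the two parts through correlated random effects and skewed error terms within a Bayesian mixed-effects framework, which this marginal sketch omits.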
Conceptual design studies of the 5 m terahertz antenna for Dome A, Antarctica
NASA Astrophysics Data System (ADS)
Yang, Ji; Zuo, Ying-Xi; Lou, Zheng; Cheng, Jing-Quan; Zhang, Qi-Zhou; Shi, Sheng-Cai; Huang, Jia-Sheng; Yao, Qi-Jun; Wang, Zhong
2013-12-01
As the highest, coldest and driest place in Antarctica, Dome A provides exceptionally good observing conditions for ground-based observations over terahertz wavebands. The 5 m Dome A Terahertz Explorer (DATE5) has been proposed to explore new terahertz windows, primarily over wavelengths between 350 and 200 μm. DATE5 will be an open-air, fully steerable telescope designed for unmanned, remotely controlled operation. The telescope will be able to endure the harsh polar environment, including high altitude, very low temperature and very low air pressure. The unique specifications, including high accuracies for surface shape and pointing and fully automatic year-round remote operation, along with a stringent limit on the periods of on-site assembly, testing and maintenance, bring a number of challenges to the design, construction, assembly and operation of this telescope. This paper introduces general concepts related to the design of the DATE5 antenna. Beginning with an overview of the environmental and operational limitations, the design specifications and requirements of the DATE5 antenna are listed. From these, major aspects of the conceptual design studies, including the antenna optics, the backup structure, the panels, the subreflector, the mounting and the antenna base structure, are explained. Critical performance issues are examined through computational fluid dynamics, thermal and de-icing analyses, and approaches for test operation and on-site assembly are proposed. Based on these studies, we conclude that the specifications of the DATE5 antenna can generally be met by using enhanced technological approaches.
Tang, Yongqiang
2017-12-01
Control-based pattern mixture models (PMM) and delta-adjusted PMMs are commonly used as sensitivity analyses in clinical trials with non-ignorable dropout. These PMMs assume that the statistical behavior of outcomes varies by pattern in the experimental arm in the imputation procedure, but the imputed data are typically analyzed by a standard method such as the primary analysis model. In the multiple imputation (MI) inference, Rubin's variance estimator is generally biased when the imputation and analysis models are uncongenial. One objective of the article is to quantify the bias of Rubin's variance estimator in the control-based and delta-adjusted PMMs for longitudinal continuous outcomes. These PMMs assume the same observed data distribution as the mixed effects model for repeated measures (MMRM). We derive analytic expressions for the MI treatment effect estimator and the associated Rubin's variance in these PMMs and MMRM as functions of the maximum likelihood estimator from the MMRM analysis and the observed proportion of subjects in each dropout pattern when the number of imputations is infinite. The asymptotic bias is generally small or negligible in the delta-adjusted PMM, but can be sizable in the control-based PMM. This indicates that the inference based on Rubin's rule is approximately valid in the delta-adjusted PMM. A simple variance estimator is proposed to ensure asymptotically valid MI inferences in these PMMs, and compared with the bootstrap variance. The proposed method is illustrated by the analysis of an antidepressant trial, and its performance is further evaluated via a simulation study. © 2017, The International Biometric Society.
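For reference, Rubin's rules combine the m imputed-data analyses as follows; the article's point is that the resulting total variance can be biased when the imputation model (e.g. a control-based PMM) is uncongenial to the analysis model, which motivates the proposed alternative variance estimator. This sketch is the standard combining rule only:

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Rubin's rules for pooling m multiply-imputed analyses.

    estimates: point estimates from each imputed data set
    variances: corresponding within-imputation variances
    Returns (pooled estimate, Rubin's total variance).
    """
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    m = len(est)
    qbar = est.mean()            # pooled point estimate
    ubar = var.mean()            # average within-imputation variance
    b = est.var(ddof=1)          # between-imputation variance
    total = ubar + (1.0 + 1.0 / m) * b
    return qbar, total
```

Under uncongeniality the total variance above need not converge to the true sampling variance of the pooled estimate, which is the bias the article quantifies analytically.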
Ngai, K L; Wang, Li-Min
2011-11-21
Quasielastic neutron scattering and molecular dynamics simulation data from poly(ethylene oxide) (PEO)/poly(methyl methacrylate) (PMMA) blends found that for short times the self-dynamics of the PEO chain follows the Rouse model, but at longer times past t(c) = 1-2 ns it becomes slower and departs from the Rouse model in its dependence on time, momentum transfer, and temperature. To explain the anomalies, others had proposed the random Rouse model (RRM), in which each monomer has a different mobility taken from a broad log-normal distribution. Despite the success of the RRM, Diddens et al. [Eur. Phys. Lett. 95, 56003 (2011)] extracted the distribution of friction coefficients from the MD simulations of a PEO/PMMA blend and found that the distribution is much narrower than expected from the RRM. We propose a simpler alternative explanation of the data by utilizing only the observed crossover of PEO chain dynamics at t(c). The present problem is just a special case of a general property of relaxation in interacting systems, namely the crossover from independent relaxation to coupled many-body relaxation at some t(c) determined by the interaction potential and intermolecular coupling/constraints. The generality is brought out vividly by pointing out that the crossover also had been observed by neutron scattering from entangled chain relaxation in monodisperse homopolymers, and from the segmental α-relaxation of PEO in blends with PMMA. The properties of all the relaxation processes in connection with the crossover are similar, even though the length scales of the relaxation in these systems are widely different.
NASA Astrophysics Data System (ADS)
Ngai, K. L.; Wang, Li-Min
2011-11-01
Quasielastic neutron scattering and molecular dynamics simulation data from poly(ethylene oxide) (PEO)/poly(methyl methacrylate) (PMMA) blends found that for short times the self-dynamics of the PEO chain follows the Rouse model, but at longer times past tc = 1-2 ns it becomes slower and departs from the Rouse model in its dependence on time, momentum transfer, and temperature. To explain the anomalies, others had proposed the random Rouse model (RRM), in which each monomer has a different mobility taken from a broad log-normal distribution. Despite the success of the RRM, Diddens et al. [Eur. Phys. Lett. 95, 56003 (2011)] extracted the distribution of friction coefficients from the MD simulations of a PEO/PMMA blend and found that the distribution is much narrower than expected from the RRM. We propose a simpler alternative explanation of the data by utilizing only the observed crossover of PEO chain dynamics at tc. The present problem is just a special case of a general property of relaxation in interacting systems, namely the crossover from independent relaxation to coupled many-body relaxation at some tc determined by the interaction potential and intermolecular coupling/constraints. The generality is brought out vividly by pointing out that the crossover also had been observed by neutron scattering from entangled chain relaxation in monodisperse homopolymers, and from the segmental α-relaxation of PEO in blends with PMMA. The properties of all the relaxation processes in connection with the crossover are similar, even though the length scales of the relaxation in these systems are widely different.
Ecological baseline study of the Yakima Firing Center proposed land acquisition: A status report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, L.E.; Beedlow, P.A.; Eberhardt, L.E.
1989-01-01
This report provides baseline environmental information for the property identified for possible expansion of the Yakima Firing Center. Results from this work provide general descriptions of the animals and major plant communities present. A vegetation map derived from a combination of on-site surveillance and remotely sensed imagery is provided as part of this report. Twenty-seven wildlife species of special interest (protected, sensitive, furbearer, game animal, etc.), and waterfowl, were observed on the proposed expansion area. Bird censuses revealed 13 raptorial species (including four of special interest: bald eagle, golden eagle, osprey, and prairie falcon); five upland game bird species (sage grouse, California quail, chukar, gray partridge, and ring-necked pheasant); common loons (a species proposed for state listing as threatened); and five other species of special interest (sage thrasher, loggerhead shrike, mourning dove, sage sparrow, and long-billed curlew). Estimates of waterfowl abundance are included for the Priest Rapids Pool of the Columbia River. Six small mammal species were captured during this study; one, the sagebrush vole, is a species of special interest. Two large animal species, mule deer and elk, were noted on the site. Five species of furbearing animals were observed (coyote, beaver, raccoon, mink, and striped skunk). Four species of reptiles and one amphibian were noted. Fisheries surveys were conducted to document the presence of gamefish, and sensitive-classified fish and aquatic invertebrates. Rainbow trout were the only fish collected within the boundaries of the proposed northern expansion area. 22 refs., 10 figs., 4 tabs.
Multichannel lens-free CMOS sensors for real-time monitoring of cell growth.
Chang, Ko-Tung; Chang, Yu-Jen; Chen, Chia-Ling; Wang, Yao-Nan
2015-02-01
A low-cost platform is proposed for the growth and real-time monitoring of biological cells. The main components of the platform include a PMMA cell culture microchip and a multichannel lens-free CMOS (complementary metal-oxide-semiconductor) / LED imaging system. The PMMA microchip comprises a three-layer structure and is fabricated using a low-cost CO2 laser ablation technique. The CMOS / LED monitoring system is controlled using a self-written LabVIEW program. The platform has overall dimensions of just 130 × 104 × 115 mm(3) and can therefore be placed within a commercial incubator. The feasibility of the proposed system is demonstrated using HepG2 cancer cell samples with concentrations of 5000, 10 000, 20 000, and 40 000 cells/mL. In addition, cell cytotoxicity tests are performed using 8, 16, and 32 mM cyclophosphamide. For all of the experiments, the cell growth is observed over a period of 48 h. The cell growth rate is found to vary in the range of 44∼52% under normal conditions and from 17.4∼34.5% under cyclophosphamide-treated conditions. In general, the results confirm the long-term cell growth and real-time monitoring ability of the proposed system. Moreover, the magnification provided by the lens-free CMOS / LED observation system is around 40× that provided by a traditional microscope. Consequently, the proposed system has significant potential for long-term cell proliferation and cytotoxicity evaluation investigations. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Illumination invariant feature point matching for high-resolution planetary remote sensing images
NASA Astrophysics Data System (ADS)
Wu, Bo; Zeng, Hai; Hu, Han
2018-03-01
Despite its success with regular close-range and remote-sensing images, the scale-invariant feature transform (SIFT) algorithm is essentially not invariant to illumination differences due to the use of gradients for feature description. In planetary remote sensing imagery, which normally lacks sufficient textural information, salient regions are generally triggered by the shadow effects of keypoints, reducing the matching performance of classical SIFT. Based on the observation of dual peaks in a histogram of the dominant orientations of SIFT keypoints, this paper proposes an illumination-invariant SIFT matching method for high-resolution planetary remote sensing images. First, as the peaks in the orientation histogram are generally aligned closely with the sub-solar azimuth angle at the time of image collection, an adaptive suppression Gaussian function is tuned to level the histogram and thereby alleviate the differences in illumination caused by a changing solar angle. Next, the suppression function is incorporated into the original SIFT procedure for obtaining feature descriptors, which are used for initial image matching. Finally, as the distribution of feature descriptors changes after anisotropic suppression, and the ratio check used for matching and outlier removal in classical SIFT may produce inferior results, this paper proposes an improved matching procedure based on cross-checking and template image matching. The experimental results for several high-resolution remote sensing images from both the Moon and Mars, with illumination differences of 20°-180°, reveal that the proposed method retrieves about 40%-60% more matches than the classical SIFT method. The proposed method is of significance for matching or co-registration of planetary remote sensing images for their synergistic use in various applications. It also has the potential to be useful for flyby and rover images by integrating with the affine invariant feature detectors.
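The cross-checking step proposed in place of the classical ratio test keeps only mutual nearest neighbours between the two descriptor sets. A minimal sketch with brute-force Euclidean distances (the paper additionally uses template image matching, which is omitted here; the function name is an assumption):

```python
import numpy as np

def cross_check_matches(desc1, desc2):
    """Keep only mutual nearest-neighbour matches between two sets
    of feature descriptors (one descriptor per row)."""
    # pairwise Euclidean distances, shape (n1, n2)
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    fwd = d.argmin(axis=1)   # best match in image 2 for each image-1 feature
    bwd = d.argmin(axis=0)   # best match in image 1 for each image-2 feature
    # a pair survives only if each feature is the other's nearest neighbour
    return [(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i]
```

Cross-checking avoids the distance-ratio threshold entirely, which matters here because anisotropic suppression changes the descriptor distribution on which the ratio test's usual threshold was tuned.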
Gauvin, Hanna S; De Baene, Wouter; Brass, Marcel; Hartsuiker, Robert J
2016-02-01
To minimize the number of errors in speech, and thereby facilitate communication, speech is monitored before articulation. It is, however, unclear at which level during speech production monitoring takes place, and what mechanisms are used to detect and correct errors. The present study investigated whether internal verbal monitoring takes place through the speech perception system, as proposed by perception-based theories of speech monitoring, or whether mechanisms independent of perception are applied, as proposed by production-based theories of speech monitoring. With the use of fMRI during a tongue twister task we observed that error detection in internal speech during noise-masked overt speech production and error detection in speech perception both recruit the same neural network, which includes pre-supplementary motor area (pre-SMA), dorsal anterior cingulate cortex (dACC), anterior insula (AI), and inferior frontal gyrus (IFG). Although production and perception recruit similar areas, as proposed by perception-based accounts, we did not find activation in superior temporal areas (which are typically associated with speech perception) during internal speech monitoring in speech production as hypothesized by these accounts. On the contrary, results are highly compatible with a domain general approach to speech monitoring, by which internal speech monitoring takes place through detection of conflict between response options, which is subsequently resolved by a domain general executive center (e.g., the ACC). Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Jiménez, Noé; Camarena, Francisco; Redondo, Javier; Sánchez-Morcillo, Víctor; Konofagou, Elisa E.
2015-10-01
We report a numerical method for solving the constitutive relations of nonlinear acoustics, where multiple relaxation processes are included in a generalized formulation that allows time-domain numerical solution by an explicit finite-differences scheme. The proposed physical model thus overcomes the limitations of the one-way Khokhlov-Zabolotskaya-Kuznetsov (KZK) type models and, because the Lagrangian density is implicitly included in the calculation, the proposed method also overcomes the limitations of the Westervelt equation in complex configurations for medical ultrasound. In order to model frequency power law attenuation and dispersion, such as observed in biological media, the relaxation parameters are fitted both to exact frequency power law attenuation/dispersion media and to empirically measured attenuation of a variety of tissues that does not fit an exact power law. Finally, a computational technique based on artificial relaxation is included to correct the non-negligible numerical dispersion of the finite difference scheme and, on the other hand, to improve stability through artificial attenuation when shock waves are present. This technique avoids the use of high-order finite-differences schemes, leading to fast calculations. The present algorithm is especially suited for practical configurations where spatial discontinuities are present in the domain (e.g. axisymmetric domains or zero normal velocity boundary conditions in general). The accuracy of the method is discussed by comparing the proposed simulation solutions to one-dimensional analytical and k-space numerical solutions.
Comparing NEO Search Telescopes
NASA Astrophysics Data System (ADS)
Myhrvold, Nathan
2016-04-01
Multiple terrestrial and space-based telescopes have been proposed for detecting and tracking near-Earth objects (NEOs). Detailed simulations of the search performance of these systems have used complex computer codes that are not widely available, which hinders accurate cross-comparison of the proposals and obscures whether they have consistent assumptions. Moreover, some proposed instruments would survey infrared (IR) bands, whereas others would operate in the visible band, and differences among asteroid thermal and visible-light models used in the simulations further complicate like-to-like comparisons. I use simple physical principles to estimate basic performance metrics for the ground-based Large Synoptic Survey Telescope and three space-based instruments—Sentinel, NEOCam, and a Cubesat constellation. The performance is measured against two different NEO distributions, the Bottke et al. distribution of general NEOs and the Veres et al. distribution of Earth-impacting NEOs. The results of the comparison show simplified relative performance metrics, including the expected number of NEOs visible in the search volumes and the initial detection rates expected for each system. Although these simplified comparisons do not capture all of the details, they give considerable insight into the physical factors limiting performance. Multiple asteroid thermal models are considered, including FRM, NEATM, and a new generalized form of FRM. I describe issues with how IR albedo and emissivity have been estimated in previous studies, which may render them inaccurate. A thermal model for tumbling asteroids is also developed and suggests that tumbling asteroids may be surprisingly difficult for IR telescopes to observe.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-22
... Procedure Act (APA), or any other law, to publish general notice of proposed rulemaking.'' The RFA exempts... permits are permits, not rulemakings, under the APA and thus not subject to APA rulemaking requirements or...
Sample RFP for Architectural Services, 2000.
ERIC Educational Resources Information Center
Arizona State School Facilities Board, Phoenix.
This document presents a sample request for proposal that Arizona school districts can use when requesting architectural services, from the general request requirements to response information and signature sheet. General proposal requirements cover such areas as information on special terms and conditions, the scope of architectural services…
Code of Federal Regulations, 2014 CFR
2014-01-01
... FUNDING REGULATIONS International Cooperation Assistance § 917.30 General. (a) 33 U.S.C. 1124a sets up a... for and receive International Cooperation Assistance funding. (b) International Cooperation Assistance funding proposals will be expected to address: (1) The nature and focus of the proposed project, (2) the...
Code of Federal Regulations, 2010 CFR
2010-01-01
... FUNDING REGULATIONS International Cooperation Assistance § 917.30 General. (a) 33 U.S.C. 1124a sets up a... for and receive International Cooperation Assistance funding. (b) International Cooperation Assistance funding proposals will be expected to address: (1) The nature and focus of the proposed project, (2) the...
Code of Federal Regulations, 2011 CFR
2011-01-01
... FUNDING REGULATIONS International Cooperation Assistance § 917.30 General. (a) 33 U.S.C. 1124a sets up a... for and receive International Cooperation Assistance funding. (b) International Cooperation Assistance funding proposals will be expected to address: (1) The nature and focus of the proposed project, (2) the...
Code of Federal Regulations, 2012 CFR
2012-01-01
... FUNDING REGULATIONS International Cooperation Assistance § 917.30 General. (a) 33 U.S.C. 1124a sets up a... for and receive International Cooperation Assistance funding. (b) International Cooperation Assistance funding proposals will be expected to address: (1) The nature and focus of the proposed project, (2) the...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-23
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2012-N-0129] Agency Information Collection Activities; Proposed Collection; Comment Request; General Licensing Provisions; Section 351(k) Biosimilar Applications; Correction AGENCY: Food and Drug Administration, HHS...
13 CFR 307.4 - Award requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.4 Award requirements. (a) General. EDA will select... criteria provided in paragraphs (b) and (c) of this section, as applicable. (b) Strategy Grants. EDA will review Strategy Grant proposals to ensure that the proposed activities conform to the CEDS requirements...
NASA Astrophysics Data System (ADS)
Batalin, Igor; Marnelius, Robert
1998-02-01
A general field-antifield BV formalism for antisymplectic first class constraints is proposed. It is as general as the corresponding symplectic BFV-BRST formulation and it is demonstrated to be consistent with a previously proposed formalism for antisymplectic second class constraints through a generalized conversion to corresponding first class constraints. Thereby the basic concept of gauge symmetry is extended to apply to quite a new class of gauge theories potentially possible to exist.
NASA Astrophysics Data System (ADS)
Wang, L. M.
2017-09-01
A novel model-free adaptive sliding mode strategy is proposed for generalized projective synchronization (GPS) between two entirely unknown fractional-order chaotic systems subject to external disturbances. To cope with the limited knowledge of the master-slave system and to overcome the adverse effects of the external disturbances on the generalized projective synchronization, radial basis function neural networks are used to approximate the packaged unknown master system and the packaged unknown slave system (including the external disturbances). Consequently, based on sliding mode techniques and neural network theory, a model-free adaptive sliding mode controller is designed to guarantee asymptotic stability of the generalized projective synchronization error. The main contribution of this paper is that a control strategy is provided for generalized projective synchronization between two entirely unknown fractional-order chaotic systems subject to unknown external disturbances, and the proposed control strategy only requires that the master system has the same fractional orders as the slave system. Moreover, the proposed method allows us to achieve all kinds of generalized projective chaos synchronizations by tuning the user-defined parameters to the desired values. Simulation results show the effectiveness of the proposed method and the robustness of the controlled system.
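The approximator underlying the controller is a radial basis function network. The sketch below fits RBF weights to samples of an unknown scalar function by batch least squares; the actual scheme updates the weights online through an adaptive law, which is not reproduced here, and all names and parameter values are assumptions:

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial basis functions evaluated at scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def fit_rbf(xs, ys, centers, width):
    """Batch least-squares fit of RBF output weights to samples of an
    unknown function (the quantity the controller adapts online)."""
    phi = np.array([rbf_features(x, centers, width) for x in xs])
    w, *_ = np.linalg.lstsq(phi, ys, rcond=None)
    return w

def rbf_eval(x, w, centers, width):
    """Network output: weighted sum of basis functions."""
    return rbf_features(x, centers, width) @ w
```

The universal approximation property of such networks is what lets the controller treat the packaged master and slave dynamics as unknown functions to be learned rather than modeled.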
On a hierarchy of nonlinearly dispersive generalized Korteweg - de Vries evolution equations
Christov, Ivan C.
2015-08-20
We propose a hierarchy of nonlinearly dispersive generalized Korteweg–de Vries (KdV) evolution equations based on a modification of the Lagrangian density whose induced action functional the KdV equation extremizes. Two recent nonlinear evolution equations describing wave propagation in certain generalized continua with an inherent material length scale are members of the proposed hierarchy. Like KdV, the equations from the proposed hierarchy possess Hamiltonian structure. Unlike KdV, the solutions to these equations can be compact (i.e., they vanish outside of some open interval) and, in addition, peaked. Implicit solutions for these peaked, compact traveling waves (“peakompactons”) are presented.
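For orientation, the baseline of the proposed hierarchy can be written down explicitly. In one standard convention (the coefficient of the nonlinear term varies across the literature), the KdV equation extremizes the action induced by a Lagrangian density in the velocity potential φ with u = φ_x:

```latex
\mathcal{L}_{\mathrm{KdV}}
  = \tfrac{1}{2}\,\phi_t\,\phi_x + \phi_x^{3} - \tfrac{1}{2}\,\phi_{xx}^{2},
\qquad u \equiv \phi_x ,
```

whose Euler-Lagrange equation, after substituting u = φ_x, is u_t + 6 u u_x + u_{xxx} = 0. The hierarchy described in the abstract modifies this density so that the dispersive term becomes nonlinear, which is what permits compact and peaked solutions.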
Important Nearby Galaxies without Accurate Distances
NASA Astrophysics Data System (ADS)
McQuinn, Kristen
2014-10-01
The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis for which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous, conflicting distance estimates. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.
The dynamics of temperature and light on the growth of phytoplankton.
Chen, Ming; Fan, Meng; Liu, Rui; Wang, Xiaoyu; Yuan, Xing; Zhu, Huaiping
2015-11-21
Motivated by lab and field observations of the hump-shaped effects of water temperature and light on the growth of phytoplankton, a bottom-up nutrient-phytoplankton model, which incorporates the combined effects of temperature and light, is proposed and analyzed to explore the dynamics of phytoplankton blooms. The population growth model reasonably captures the observed dynamics qualitatively. An ecological reproductive index is defined to characterize the growth of the phytoplankton, which also allows a comprehensive analysis of the role of temperature and light in the growth and reproductive characteristics of phytoplankton in general. The model provides a framework to study the mechanisms of phytoplankton dynamics in shallow lakes and may even be employed to study controlled phytoplankton blooms. Copyright © 2015 Elsevier Ltd. All rights reserved.
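The hump-shaped responses can be illustrated with standard functional forms: a Gaussian temperature response multiplied by Steele's light response, whose product peaks at the joint optimum and declines on either side. The functional forms and all parameter values below are common modeling choices, not necessarily those of the paper:

```python
import numpy as np

def growth_rate(temp, light, t_opt=25.0, t_width=8.0,
                i_opt=150.0, mu_max=1.2):
    """Hump-shaped phytoplankton growth rate: a Gaussian temperature
    response times Steele's (1962) light response. Each factor equals 1
    at its optimum and declines on either side."""
    f_temp = np.exp(-((temp - t_opt) / t_width) ** 2)
    f_light = (light / i_opt) * np.exp(1.0 - light / i_opt)
    return mu_max * f_temp * f_light
```

Because both factors are unimodal, growth is maximal only near the joint optimum, which is the qualitative behavior the bloom model is built around.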
Predicting missing links and identifying spurious links via likelihood analysis
NASA Astrophysics Data System (ADS)
Pan, Liming; Zhou, Tao; Lü, Linyuan; Hu, Chin-Kun
2016-03-01
Real network data are often incomplete and noisy, motivating both link prediction and spurious link identification algorithms. Thus far, however, a general method for transforming network organizing mechanisms into link prediction algorithms has been lacking. Here we use an algorithmic framework where a network's probability is calculated according to a predefined structural Hamiltonian that takes into account the network organizing principles, and a non-observed link is scored by the conditional probability of adding the link to the observed network. Extensive numerical simulations show that the proposed algorithm has remarkably higher accuracy than the state-of-the-art methods in uncovering missing links and identifying spurious links in many complex biological and social networks. The method also finds applications in exploring the underlying network evolutionary mechanisms.
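The scoring idea, adding a candidate link and evaluating the change in a predefined structural Hamiltonian under P(G) ∝ exp(-H(G)), can be sketched with a toy Hamiltonian that rewards triangles. The paper's Hamiltonians encode more general organizing principles; the triangle count here is only a stand-in, and all names are assumptions:

```python
import numpy as np

def hamiltonian(adj, alpha=1.0):
    """Toy structural Hamiltonian: lower energy for more triangles
    (a stand-in for a Hamiltonian built from organizing principles)."""
    a = np.asarray(adj, dtype=float)
    triangles = np.trace(a @ a @ a) / 6.0   # each triangle counted once
    return -alpha * triangles

def link_score(adj, i, j, alpha=1.0):
    """Score a non-observed link (i, j) by the conditional probability
    of adding it under P(G) proportional to exp(-H(G))."""
    with_link = np.array(adj, dtype=float)
    with_link[i, j] = with_link[j, i] = 1.0
    dh = hamiltonian(with_link, alpha) - hamiltonian(adj, alpha)
    return 1.0 / (1.0 + np.exp(dh))   # sigmoid of -delta(H)
```

Links that lower the energy (here, close triangles) receive scores above 0.5, while structurally neutral links score exactly 0.5.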
Discrete Time-Crystalline Order in Cavity and Circuit QED Systems
NASA Astrophysics Data System (ADS)
Gong, Zongping; Hamazaki, Ryusuke; Ueda, Masahito
2018-01-01
Discrete time crystals are a recently proposed and experimentally observed out-of-equilibrium dynamical phase of Floquet systems, in which the stroboscopic dynamics of a local observable repeats itself at an integer multiple of the driving period. We study this phase in a driven-dissipative setup, focusing on the modulated open Dicke model, which can be implemented in cavity or circuit QED systems. In the thermodynamic limit, we employ semiclassical approaches and find rich dynamical phases on top of the discrete time-crystalline order. In a deep quantum regime with few qubits, we find clear signatures of transient discrete time-crystalline behavior, which is absent in the isolated counterpart. We establish a phenomenology of dissipative discrete time crystals by generalizing the Landau theory of phase transitions to Floquet open systems.
NASA Technical Reports Server (NTRS)
Zdziarski, Andrzej A.; Lightman, Alan P.; Maciolek-Niedzwiecki, Andrzej
1993-01-01
We show that the recent observations of the Seyfert galaxy NGC 4151 in hard X-rays and soft gamma rays by the OSSE and SIGMA detectors on board CGRO and GRANAT, respectively, are well explained by a nonthermal model with acceleration of relativistic electrons at an efficiency of less than 50 percent and with the remaining power dissipated thermally in the source (the standard nonthermal e± pair model assumed 100 percent efficiency). Such an acceleration efficiency is generally expected on physical grounds. The resulting model unifies previously proposed purely thermal and purely nonthermal models. The pure nonthermal model for NGC 4151 appears to be ruled out. The pure thermal model gives a worse fit to the data than our hybrid nonthermal/thermal model.
Time sequence analysis of flickering auroras. I - Application of Fourier analysis.
NASA Technical Reports Server (NTRS)
Berkey, F. T.; Silevitch, M. B.; Parsons, N. R.
1980-01-01
Using a technique that enables one to digitize the brightness of auroral displays from individual fields of a video signal, we have analyzed the frequency content of flickering aurora. Through the application of Fourier analysis to our data, we have found that flickering aurora contains a wide range of enhanced frequencies, although the dominant frequency enhancement generally occurs in the range 6-12 Hz. Each incidence of flickering that we observed was associated with increased radio wave absorption. Furthermore, we have found that flickering occurs in bright auroral surges, the occurrence of which is not limited to the 'breakup' phase of auroral substorms. Our results are interpreted in terms of a recently proposed theory of fluctuating double layers that accounts for a number of the observational features.
Detection of entanglement in asymmetric quantum networks and multipartite quantum steering.
Cavalcanti, D; Skrzypczyk, P; Aguilar, G H; Nery, R V; Ribeiro, P H Souto; Walborn, S P
2015-08-03
The future of quantum communication relies on quantum networks composed of observers sharing multipartite quantum states. The certification of multipartite entanglement will be crucial to the usefulness of these networks. In many real situations it is natural to assume that some observers are more trusted than others, in the sense that they have more knowledge of their measurement apparatuses. Here we propose a general method to certify all kinds of multipartite entanglement in this asymmetric scenario and experimentally demonstrate it in an optical experiment. Our results, which can be seen as a definition of genuine multipartite quantum steering, give a method to detect entanglement in a scenario intermediate between the standard entanglement and fully device-independent scenarios, and provide a basis for semi-device-independent cryptographic applications in quantum networks.
Collisional dynamics of perturbed particle disks in the solar system
NASA Technical Reports Server (NTRS)
Roberts, W. W.; Stewart, G. R.
1987-01-01
Investigations of the collisional evolution of particulate disks subject to the gravitational perturbation of a more massive particle orbiting within the disk are underway. Both numerical N-body simulations using a novel collision algorithm and analytical kinetic theory are being employed to extend our understanding of perturbed disks in planetary rings and during the formation of the solar system. Particular problems proposed for investigation are: (1) The development and testing of general criteria for a small moonlet to clear a gap and produce observable morphological features in planetary rings; (2) The development of detailed models of collisional damping of the wavy edges observed on the Encke division of Saturn's A ring; and (3) The determination of the extent of runaway growth of the few largest planetesimals during the early stages of planetary accretion.
33 CFR 276.6 - General policy.
Code of Federal Regulations, 2011 CFR
2011-07-01
... specifications for the work they propose to undertake. However, those non-Federal engineering costs and overhead... commenced after certification shall be eligible for certification except for local engineering work noted...; certification of the proposal will be in the general public interest. (d) Costs assigned to that part of the...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-16
... Board on behalf of General Dynamics Ordnance and Tactical Systems Munitions Services (GDOTS), located in...--Springfield, Missouri; Notification of Proposed Production Activity; General Dynamics Ordnance and Tactical Systems Munitions Services (Demilitarization of Munitions); Carthage, Missouri The City of Springfield...
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1981-11-01
In response to a 1980 Department of Energy solicitation, the General Refractories Company submitted a proposal for a feasibility study of a low-Btu gasification facility for its Florence, KY plant. The proposed facility would substitute low-Btu gas from a fixed-bed gasifier for the natural gas now used in the manufacture of insulation board. The proposal from General Refractories was prompted by concern over the rising costs of natural gas and the anticipation of a severe increase in fuel costs resulting from deregulation. The proposed feasibility study is defined. The intent is to provide General Refractories with the basis upon which to determine the feasibility of incorporating such a facility at Florence. To perform the work, for which a grant was awarded by the DOE, General Refractories selected Dravo Engineers and Contractors based upon their qualifications in the field of coal conversion and the fact that Dravo has acquired the rights to the Wellman-Galusha technology. The LBG prices for the five-gasifier case are encouraging. Given the various natural gas forecasts available, there seems to be a reasonable possibility that the five-gasifier LBG prices will break even with natural gas prices somewhere between 1984 and 1989. General Refractories recognizes that there are many uncertainties in developing these natural gas forecasts, and if the present natural gas decontrol plan is not fully implemented, some financial risks would attend the proposed gasification facility. Because of this, General Refractories has decided to wait for more substantiating evidence that natural gas prices will rise as is now being predicted.
Generalized Smooth Transition Map Between Tent and Logistic Maps
NASA Astrophysics Data System (ADS)
Sayed, Wafaa S.; Fahmy, Hossam A. H.; Rezk, Ahmed A.; Radwan, Ahmed G.
There is a continuous demand for novel chaotic generators to be employed in various modeling and pseudo-random number generation applications. This paper proposes a new chaotic map, a general form of one-dimensional discrete-time maps employing the power function, with the tent and logistic maps as special cases. The proposed map uses extra parameters to provide responses that fit multiple applications for which conventional maps were not sufficient. The proposed generalization also covers maps whose iterative relations are not based on polynomials, i.e. maps with fractional powers. We introduce a framework for analyzing the proposed map mathematically and predicting its behavior for various combinations of its parameters. In addition, we present and explain the transition map, which yields intermediate responses as the parameters vary from the values corresponding to the tent map to those corresponding to the logistic map. We study the properties of the proposed map, including the graph of the map equation, the general bifurcation diagram and its key points, output sequences, and the maximum Lyapunov exponent. We present further explorations, such as the effects of scaling, the system response with respect to the new parameters, and operating ranges other than the transition region. Finally, a stream cipher system based on the generalized transition map validates its utility for image encryption applications. The system allows the construction of more efficient encryption keys, which enhances its sensitivity and other cryptographic properties.
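The abstract does not give the paper's exact map equation. One well-known smooth family with the tent and logistic maps as special cases is f_a(x) = 1 - |2x - 1|^a (a = 1 gives the tent map, a = 2 gives the logistic map 4x(1-x), and fractional a gives intermediate non-polynomial maps); the sketch below uses that assumed family to estimate the maximum Lyapunov exponent from the average log-derivative along an orbit.

```python
import math

def f(x, alpha):
    """Illustrative smooth family between the tent and logistic maps:
    f_a(x) = 1 - |2x - 1|^a; a=1 is the tent map, a=2 is 4x(1-x)."""
    return 1.0 - abs(2.0 * x - 1.0) ** alpha

def lyapunov(alpha, x0=0.123, burn=500, n=20000):
    """Maximum Lyapunov exponent as the orbit average of ln|f'(x)|,
    with |f'(x)| = 2*alpha*|2x - 1|^(alpha - 1)."""
    x = x0
    for _ in range(burn):           # discard the transient
        x = f(x, alpha)
    acc = 0.0
    for _ in range(n):
        deriv = 2.0 * alpha * abs(2.0 * x - 1.0) ** (alpha - 1.0)
        acc += math.log(max(deriv, 1e-300))  # guard against log(0)
        x = f(x, alpha)
    return acc / n

# Both special cases are fully chaotic with exponent ln 2.
print(round(lyapunov(1.0), 3), round(lyapunov(2.0), 3))  # both ≈ 0.693
```

Sweeping alpha between the two endpoints and plotting the long-run orbit against alpha reproduces the kind of transition bifurcation diagram the paper describes.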
NASA Astrophysics Data System (ADS)
Hapugoda, J. C.; Sooriyarachchi, M. R.
2017-09-01
The survival time of patients with a disease and the incidence of that disease (a count) are frequently observed in medical studies with clustered data. In many cases, though, the survival times and the count can be correlated, such that rarely occurring diseases have shorter survival times, or vice versa. Because of this, jointly modelling these two variables provides more interesting, and often improved, results than modelling them separately. The authors previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining a discrete-time hazard model with a Poisson regression model to jointly model survival and count. As the Artificial Neural Network (ANN) has become a powerful computational tool for modelling complex non-linear systems, it was proposed to develop a new joint model of the survival and count of dengue patients in Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, a Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare model fit, measures such as root mean square error (RMSE), absolute mean error (AME) and the correlation coefficient (R) were used. The measures indicate that the GRNN model fits the data better than the GLMM model.
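The three comparison measures named in the abstract can be sketched as follows (illustrative code, not the authors' implementation; `fit_metrics` is a hypothetical helper name):

```python
import math

def fit_metrics(y_true, y_pred):
    """Return (RMSE, AME, R): root mean square error, absolute mean error,
    and the Pearson correlation coefficient between observed and predicted."""
    n = len(y_true)
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    ame = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    var_t = sum((t - mt) ** 2 for t in y_true)
    var_p = sum((p - mp) ** 2 for p in y_pred)
    r = cov / math.sqrt(var_t * var_p)
    return rmse, ame, r

# Perfect predictions give zero error and perfect correlation.
print(fit_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # (0.0, 0.0, 1.0)
```

Lower RMSE and AME and higher R for one model's predictions over the other is the comparison criterion the study applies to GRNN versus GLMM.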
Bardage, Carola; Westerlund, Tommy; Barzi, Sahra; Bernsten, Cecilia
2013-04-01
The purpose of this study is to map and analyze the content and quality of the encounter when customers buy non-prescription medicines for pain and fever. 297 pharmacies and 801 general sales stores (GSS) in Sweden were selected, and a "mystery shopper" exercise was conducted from 21st September to 20th November 2011. Three scenarios were used, and a total of 366 units were selected for each scenario. There were in total 625 observers: 208 in the child-with-fever scenario, 225 in the Reliv scenario, and 192 in the painkiller-during-pregnancy scenario. In two out of three visits to GSS, the staff proposed a medicine for a heavily pregnant woman; in 9% of the visits the staff suggested a medicine that is inappropriate in late pregnancy. The corresponding percentage in pharmacies was 1%. Both pharmacies and GSS proposed, in 6% of visits, a medicine inappropriate for babies for a feverish child. Only 16% of the pharmacists and 14% of the staff in GSS asked for the age of the child. When an acetaminophen preparation was requested, general sales staff recommended ibuprofen in 10% of visits and an acetylsalicylic acid product in 4%; the corresponding percentages in pharmacies were 4% ibuprofen, 2% diclofenac, and 1% an acetylsalicylic acid product. The staff in GSS and pharmacies do not pay sufficient attention to the heterogeneity of painkillers, which leads to inappropriate recommendations. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Collins, Nathan A.; Hughes, Scott A.
2004-06-01
Astronomical observations have established that extremely compact, massive objects are common in the Universe. It is generally accepted that these objects are, in all likelihood, black holes. As observational technology has improved, it has become possible to test this hypothesis in ever greater detail. In particular, it is or will be possible to measure the properties of orbits deep in the strong field of a black hole candidate (using x-ray timing or future gravitational-wave measurements) and to test whether they have the characteristics of black hole orbits in general relativity. Past work has shown that, in principle, such measurements can be used to map the spacetime of a massive compact object, testing in particular whether the object’s multipolar structure satisfies the rather strict constraints imposed by the black hole hypothesis. Performing such a test in practice requires that we be able to compare against objects with the “wrong” multipole structure. In this paper, we present tools for constructing the spacetimes of bumpy black holes: objects that are almost black holes, but that have some multipoles with the wrong value. In this first analysis, we focus on objects with no angular momentum. Generalization to bumpy Kerr black holes should be straightforward, albeit labor intensive. Our construction has two particularly desirable properties. First, the spacetimes which we present are good deep into the strong field of the object—we do not use a “large r” expansion (except to make contact with weak field intuition). Second, our spacetimes reduce to the exact black hole spacetimes of general relativity in a natural way, by dialing the “bumpiness” of the black hole to zero. We propose that bumpy black holes can be used as the foundation for a null experiment: if black hole candidates are indeed the black holes of general relativity, their bumpiness should be zero. 
By comparing the properties of orbits in a bumpy spacetime with those measured from an astrophysical source, observations should be able to test this hypothesis, stringently testing whether they are in fact the black holes of general relativity.
NASA Astrophysics Data System (ADS)
Jia, Xiaodong; Zhao, Ming; Di, Yuan; Li, Pin; Lee, Jay
2018-03-01
Sparsity has recently become an increasingly important topic in machine learning and signal processing. One large family of sparsity measures in the current literature is the generalized lp/lq norm, which is scale invariant and widely regarded as a normalized lp norm. However, the characteristics of the generalized lp/lq norm remain little discussed, and its application to the condition monitoring of rotating devices has been unexplored. In this study, we first discuss the characteristics of the generalized lp/lq norm for sparse optimization and then propose a method of sparse filtering with the generalized lp/lq norm for the purpose of impulsive signature enhancement. Further driven by the trend of industrial big data and the need to reduce maintenance costs for industrial equipment, the proposed sparse filter is customized for vibration signal processing and implemented on a bearing and a gearbox for condition monitoring. Based on the results of the industrial implementations in this paper, the proposed method is found to be a promising tool for impulsive feature enhancement, and its superiority over previous methods is also demonstrated.
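A minimal sketch of the generalized lp/lq norm and the two properties claimed above, scale invariance and sensitivity to sparsity (illustrative code only; the paper's filter optimization is not reproduced here):

```python
def lp_lq_norm(x, p=1.0, q=2.0):
    """Generalized lp/lq norm ||x||_p / ||x||_q. For p < q, a sparser
    (more impulsive) signal gives a smaller ratio, and scaling the whole
    signal leaves the ratio unchanged (scale invariance)."""
    lp = sum(abs(v) ** p for v in x) ** (1.0 / p)
    lq = sum(abs(v) ** q for v in x) ** (1.0 / q)
    return lp / lq

dense = [1.0, 1.0, 1.0, 1.0]   # energy spread evenly
sparse = [2.0, 0.0, 0.0, 0.0]  # same l2 energy in a single impulse
print(lp_lq_norm(sparse) < lp_lq_norm(dense))                     # True
print(lp_lq_norm(dense) == lp_lq_norm([5.0 * v for v in dense]))  # True
```

A sparse filter in this spirit would tune filter coefficients so that the filtered vibration signal minimizes the l1/l2 ratio of its output, thereby enhancing the impulsive fault signature.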
Li, Linlin; Ding, Steven X; Qiu, Jianbin; Yang, Ying
2017-02-01
This paper is concerned with a real-time observer-based fault detection (FD) approach for a general type of nonlinear system in the presence of external disturbances. To this end, in the first part of this paper, we deal with the definition of, and the design condition for, an L∞/L2 type of nonlinear observer-based FD system. This analytical framework is fundamental for the development of real-time nonlinear FD systems with the aid of some well-established techniques. In the second part, we address the integrated design of the L∞/L2 observer-based FD system by applying the Takagi-Sugeno (T-S) fuzzy dynamic modeling technique as the solution tool. This fuzzy observer-based FD approach is developed via piecewise Lyapunov functions and can be applied to the case in which the premise variables of the FD system are nonsynchronous with the premise variables of the fuzzy model of the plant. In the end, a case study on a laboratory three-tank system is given to show the efficiency of the proposed results.
NASA Astrophysics Data System (ADS)
Slaski, G.; Ohde, B.
2016-09-01
The article presents the results of a statistical dispersion analysis of the energy and power demand for the tractive purposes of a battery electric vehicle. The authors compare the data distribution for different values of average speed in two approaches, namely a short and a long period of observation. The short period of observation (generally around several hundred meters) follows from a previously proposed macroscopic energy consumption model based on the average speed per road section. This approach yielded high values of the standard deviation and of the coefficient of variation (the ratio between the standard deviation and the mean), around 0.7-1.2. The long period of observation (several kilometers) is similar in length to the standardized speed cycles used in testing vehicle energy consumption and available range. The data were analysed to determine the impact of observation length on the variation in energy and power demand. The analysis was based on a simulation of electric power and energy consumption performed with speed-profile data recorded in the Poznan agglomeration.
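The dispersion measure cited above can be computed as follows (illustrative code; the per-section energy values are hypothetical, not from the study):

```python
import math

def coefficient_of_variation(samples):
    """Coefficient of variation: standard deviation divided by the mean.
    Values of 0.7-1.2 indicate a spread comparable to the mean itself."""
    n = len(samples)
    mean = sum(samples) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in samples) / n)
    return std / mean

# Hypothetical energy consumption (Wh) over short road sections at one
# average speed; short sections typically show high dispersion.
sections = [120.0, 30.0, 260.0, 45.0, 180.0, 15.0]
print(round(coefficient_of_variation(sections), 2))
```

Because the coefficient of variation is dimensionless and scale invariant, it allows the short-section and long-cycle observation windows to be compared directly even though their absolute energy totals differ.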
NASA Earth Observations Informing Renewable Energy Management and Policy Decision Making
NASA Technical Reports Server (NTRS)
Eckman, Richard S.; Stackhouse, Paul W., Jr.
2008-01-01
The NASA Applied Sciences Program partners with domestic and international governmental organizations, universities, and private entities to improve their decisions and assessments. These improvements are enabled by using the knowledge generated from research based on spacecraft observations and model predictions conducted by NASA, provided as inputs to the decision support and scenario assessment tools used by partner organizations. The Program is divided into eight societal benefit areas, aligned in general with the Global Earth Observation System of Systems (GEOSS) themes. One focus of the Climate Application of the Applied Sciences Program is improving decisions and assessments in the areas of renewable energy technologies, energy efficiency, and climate change impacts. The goals of the Applied Sciences Program are aligned with national initiatives such as the U.S. Climate Change Science and Technology Programs and with those of international organizations including the Group on Earth Observations (GEO) and the Committee on Earth Observation Satellites (CEOS). Activities within the Program are funded principally through proposals submitted in response to annual solicitations and reviewed by peers.
Decentralized coordinated control of elastic web winding systems without tension sensor.
Hou, Hailiang; Nian, Xiaohong; Chen, Jie; Xiao, Dengfeng
2018-06-26
In elastic web winding systems, precise regulation of the web tension in each span is critical to ensure final product quality and to achieve low cost by reducing the occurrence of web breaks or folds. Generally, web winding systems use load cells or swing rolls as tension sensors, which add cost, reduce system reliability and increase the difficulty of control. In this paper, a decentralized coordinated control scheme with tension observers is designed for a three-motor web-winding system. First, two tension observers are proposed to estimate the unwinding and winding tension. The designed observers account for the essential dynamics and the variation of radius and inertia, and require only modest computational effort. Then, using the estimated tensions as feedback signals, a robust decentralized coordinated controller is adopted to reduce the interaction between subsystems. Asymptotic stability of the observer error dynamics and of the closed-loop winding system is demonstrated via Lyapunov stability theory. The observer gains and the controller gains can be obtained by solving matrix inequalities. Finally, simulations and experiments are performed on a paper winding setup to test the performance of the designed observers and the observer-based decentralized coordinated control method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
An analysis of USSPACECOM's space surveillance network sensor tasking methodology
NASA Astrophysics Data System (ADS)
Berger, Jeff M.; Moles, Joseph B.; Wilsey, David G.
1992-12-01
This study provides the basis for the development of a cost/benefit assessment model to determine the effects of alterations to the Space Surveillance Network (SSN) on orbital element (OE) set accuracy. It reviews the current methods used by NORAD and the SSN to gather and process observations, presents an alternative to the current Gabbard classification method, and develops a model to determine the effects of observation rate and correction interval on OE set accuracy. The proposed classification scheme is based on satellite J2 perturbations. Specifically, classes were established based on mean motion, eccentricity, and inclination, since J2 perturbation effects are functions of only these elements. Model development began by creating representative sensor observations using a highly accurate orbital propagation model. These observations were compared to predicted observations generated with the NORAD Simplified General Perturbations (SGP4) model and differentially corrected using a Bayes sequential-estimation algorithm. A 10-run Monte Carlo analysis was performed with this model on 12 satellites using 16 different combinations of observation rate and correction interval. An ANOVA and confidence-interval analysis of the results show that the model demonstrates the differences in steady-state position error arising from varying observation rate and correction interval.
Brankov, Jovan G
2013-10-21
The channelized Hotelling observer (CHO) has become a widely used approach for evaluating medical image quality, acting as a surrogate for human observers in early-stage research on assessment and optimization of imaging devices and algorithms. The CHO is typically used to measure lesion detectability. Its popularity stems from experiments showing that the CHO's detection performance can correlate well with that of human observers. In some cases, CHO performance overestimates human performance; to counteract this effect, an internal-noise model is introduced, which allows the CHO to be tuned to match human-observer performance. Typically, this tuning is achieved using example data obtained from human observers. We argue that this internal-noise tuning step is essentially a model training exercise; therefore, just as in supervised learning, it is essential to test the CHO with an internal-noise model on a set of data that is distinct from that used to tune (train) the model. Furthermore, we argue that, if the CHO is to provide useful insights about new imaging algorithms or devices, the test data should reflect such potential differences from the training data; it is not sufficient simply to use new noise realizations of the same imaging method. Motivated by these considerations, the novelty of this paper is the use of new model selection criteria to evaluate ten established internal-noise models, utilizing four different channel models, in a train-test approach. Though not the focus of the paper, a new internal-noise model is also proposed that outperformed the ten established models in the cases tested. The results, using cardiac perfusion SPECT data, show that the proposed train-test approach is necessary, as judged by the newly proposed model selection criteria, to avoid spurious conclusions. 
The results also demonstrate that, in some models, the optimal internal-noise parameter is very sensitive to the choice of training data; therefore, these models are prone to overfitting, and will not likely generalize well to new data. In addition, we present an alternative interpretation of the CHO as a penalized linear regression wherein the penalization term is defined by the internal-noise model.
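One common internal-noise model adds zero-mean noise to the CHO decision variable; in the simplified scalar-channel case the detectability index then has a closed form, sketched below as an illustration of the tuning idea (not one of the paper's ten models; `cho_dprime` is a hypothetical name):

```python
import math

def cho_dprime(delta_mean, channel_var, internal_var=0.0):
    """Detectability of a scalar-channel Hotelling observer whose decision
    variable carries additive zero-mean internal noise:
    d' = delta_mean / sqrt(channel_var + internal_var)."""
    return delta_mean / math.sqrt(channel_var + internal_var)

ideal = cho_dprime(2.0, 1.0)           # ideal CHO, no internal noise
humanlike = cho_dprime(2.0, 1.0, 1.0)  # internal noise lowers d' toward human level
print(ideal > humanlike)  # True
```

Tuning `internal_var` so that the model's d' matches measured human performance is exactly the fitting step the paper argues must be validated on held-out test data, since an overfitted internal-noise parameter will not generalize.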
Using Internet-Based Robotic Telescopes to Engage Non-Science Majors in Astronomical Observation
NASA Astrophysics Data System (ADS)
Berryhill, K. J.; Coble, K.; Slater, T. F.; McLin, K. M.; Cominsky, L. R.
2013-12-01
Responding to national science education reform documents calling for students to have more opportunities for authentic research experiences, several national projects have developed online telescope networks to provide students with Internet access to research-grade telescopes. The nature of astronomical observation (e.g., remote sites, expensive equipment, and odd hours) has been a barrier in the past. Internet-based robotic telescopes allow scientists to conduct observing sessions on research-grade telescopes half a world away. The same technology can now be harnessed by STEM educators to engage students and reinforce what is being taught in the classroom, as seen in some early research in elementary schools (McKinnon and Mainwaring 2000; McKinnon and Geissinger 2002), middle/high schools (Sadler et al. 2001, 2007; Gehret et al. 2005) and undergraduate programs (e.g., McLin et al. 2009). This project looks at the educational value of using Internet-based robotic telescopes in a general education introductory astronomy course at the undergraduate level. Students at a minority-serving institution in the midwestern United States conducted observational programs using the Global Telescope Network (GTN). The project consisted of the use of planetarium software to determine object visibility, observing proposals (with abstract, background, goals, and dissemination sections), peer review (including written reviews and panel discussion according to NSF intellectual merit and broader impacts criteria), and classroom presentations showing the results of the observation. The GTN is a network of small telescopes funded by the Fermi mission to support the science of high energy astrophysics. It is managed by the NASA E/PO Group at Sonoma State University and is controlled using SkyNet. Data include course artifacts (proposals, reviews, panel summaries, presentations, and student reflections) for six semesters plus student interviews.
Using a grounded theory approach, the data were coded to examine the value that the students did or did not gain from the project, including students' understanding of the process of science. Preliminary analysis of course artifacts and interviews suggests that students value using research-grade instrumentation after obtaining their own scientific data, and develop deeper understandings of the nature of scientific research when formulating proposals for telescope use.
ERIC Educational Resources Information Center
Langevin, Paul
This document is a Spanish translation of French educational reform proposals and general educational philosophy. Initial remarks in the document concern educational objectives and general aims of the particular educational levels. Different, possible, educational progressions are considered, and the university system is discussed. Teacher…
76 FR 6088 - Installed Systems and Equipment for Use by the Flightcrew
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-03
... requirements proposed here with the intent of achieving that balance. General Discussion of the Proposal..., ``General requirements.'' Under that section, the FAA is charged with prescribing regulations and minimum... appropriate balance is needed among them. There have been cases in the past where design characteristics known...
Regularized Generalized Structured Component Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun
2009-01-01
Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multicollinearity, i.e., high correlations among exogenous variables, for which GSCA as yet has no remedy. Thus, a regularized extension of GSCA is proposed that integrates a ridge…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-10
... effort to reduce paperwork and respondent burden, invites the general public and other Federal agencies.... Estimated response time per survey: 1 hour. Estimated number of respondents per survey: 850 hours. Total Annual Burden: 12,500 hours. General Description of Collection: The information collected in these...
75 FR 28252 - Notice of a Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-20
... GENERAL SERVICES ADMINISTRATION Notice of a Computer Matching Program AGENCY: General Services... providing notice of a proposed computer match. The purpose of this match is to identify individuals who are... providing notice of a proposed computer match. The purpose of this match is to identify individuals who are...
Electron cyclotron wave acceleration outside a flaring loop
NASA Technical Reports Server (NTRS)
Sprangle, P.; Vlahos, L.
1983-01-01
A model for the secondary acceleration of electrons outside a flaring loop is proposed. The results suggest that the narrow bandwidth radiation emitted by the unstable electron distribution inside a flaring loop can become the driver for secondary electron acceleration outside the loop. It is shown that a system of electrons gyrating about and streaming along an adiabatically spatially varying, static magnetic field can be efficiently accelerated to high energies by an electromagnetic wave propagating along and polarized transverse to the static magnetic field. The predictions from our model appear to be in general agreement with existing observations.
Solvent tuning configurational conversion of lycopene aggregates in organic-aqueous mixing solvent
NASA Astrophysics Data System (ADS)
Dong, Jia; Zhang, Di; Wang, Xin-Yue; Wang, Peng
2018-06-01
Carotenoid aggregates are generally prepared in organic-water mixed solvents, a consequence of their hydrophobic character. It is well known that one carotenoid, lycopene, tends to form typical H-aggregates. In this study, a new type of lycopene J-aggregate was prepared in a DMSO-water mixed solvent with a small amount of toluene, observed here for the first time. We propose a potential structural model, combined with the exciton model, to interpret the mechanism of the spectral changes. Our findings provide new methods and ideas for controlling carotenoid aggregate formation.
Quantum Information Processing with Large Nuclear Spins in GaAs Semiconductors
NASA Astrophysics Data System (ADS)
Leuenberger, Michael N.; Loss, Daniel; Poggio, M.; Awschalom, D. D.
2002-10-01
We propose an implementation for quantum information processing based on coherent manipulations of nuclear spins I=3/2 in GaAs semiconductors. We describe theoretically an NMR method which involves multiphoton transitions and which exploits the nonequidistance of nuclear spin levels due to quadrupolar splittings. Starting from known spin anisotropies we derive effective Hamiltonians in a generalized rotating frame, valid for arbitrary I, which allow us to describe the nonperturbative time evolution of spin states generated by magnetic rf fields. We identify an experimentally observable regime for multiphoton Rabi oscillations. In the nonlinear regime, we find Berry phase interference.
Montagnese, Matteo; Otter, Marian; Zotos, Xenophon; Fishman, Dmitry A; Hlubek, Nikolai; Mityashkin, Oleg; Hess, Christian; Saint-Martin, Romuald; Singh, Surjeet; Revcolevschi, Alexandre; van Loosdrecht, Paul H M
2013-04-05
Thirty-five years ago, Sanders and Walton [Phys. Rev. B 15, 1489 (1977)] proposed a method to measure the phonon-magnon interaction in antiferromagnets through thermal transport which so far has not been verified experimentally. We show that a dynamical variant of this approach allows direct extraction of the phonon-magnon equilibration time, yielding 400 μs for the cuprate spin-ladder system Ca(9)La(5)Cu(24)O(41). The present work provides a general method to directly address the spin-phonon interaction by means of dynamical transport experiments.
Prediction of nonlinear evolution character of energetic-particle-driven instabilities
Duarte, Vinicius N.; Berk, H. L.; Gorelenkov, N. N.; ...
2017-03-17
A general criterion is proposed and found to successfully predict the emergence of chirping oscillations of unstable Alfvénic eigenmodes in tokamak plasma experiments. The model includes realistic eigenfunction structure, detailed phase-space dependences of the instability drive, stochastic scattering and the Coulomb drag. The stochastic scattering combines the effects of collisional pitch angle scattering and micro-turbulence spatial diffusion. Furthermore, the latter mechanism is essential to accurately identify the transition between the fixed-frequency mode behavior and rapid chirping in tokamaks and to resolve the disparity with respect to chirping observation in spherical and conventional tokamaks.
GET electronics samples data analysis
NASA Astrophysics Data System (ADS)
Giovinazzo, J.; Goigoux, T.; Anvar, S.; Baron, P.; Blank, B.; Delagnes, E.; Grinyer, G. F.; Pancin, J.; Pedroza, J. L.; Pibernat, J.; Pollacco, E.; Rebii, A.; Roger, T.; Sizun, P.
2016-12-01
The General Electronics for TPCs (GET) has been developed to equip a generation of time projection chamber detectors for nuclear physics, and may also be used for a wider range of detector types. The goal of this paper is to propose a first set of analysis procedures to be applied to raw data samples from the GET system, in order to correct for systematic effects observed in test measurements. We also present a method to estimate the response function of the GET system channels. The response function is required in analyses where the input signal needs to be reconstructed, in terms of time distribution, from the registered output samples.
Trinidad, Bradley J.; Shi, Jiong
2015-01-01
Calcium is essential for both neurotransmitter release and muscle contraction. Given these important physiological processes, it seems reasonable to assume that hypocalcemia may lead to reduced neuromuscular excitability. Counterintuitively, however, clinical observation has frequently documented hypocalcemia’s role in induction of seizures and general excitability processes such as tetany, Chvostek’s sign, and bronchospasm. The mechanism of this calcium paradox remains elusive, and very few pathophysiological studies have addressed this conundrum. Nevertheless, several studies primarily addressing other biophysical issues have provided some clues. In this review, we analyze the data of these studies and propose an integrative model to explain this hypocalcemic paradox. PMID:25810356
Acoustic performance of a Herschel Quincke tube modified with an interconnecting pipe
NASA Astrophysics Data System (ADS)
Desantes, J. M.; Torregrosa, A. J.; Climent, H.; Moya, D.
2005-06-01
The classical two-duct Herschel-Quincke tube is modified by means of an additional pipe connecting both paths. A transfer matrix is obtained for a mesh system with five arbitrary branches and then particularized to the proposed scheme. Experimental attenuation measurements were performed on several prototypes, and the results compared favourably with predictions from the previous theoretical development. Finally, transmission loss contour plots were used to study the influence of the connecting pipe on the resonance frequencies. The results confirm the nontrivial character of the influence observed, and simple relationships are obtained for the general trends.
Fire suppression as a thermal implosion
NASA Astrophysics Data System (ADS)
Novozhilov, Vasily
2017-01-01
The present paper discusses the possibility of a thermal implosion scenario, a process that would be the reverse of the well-known thermal explosion (autoignition) phenomenon. A mechanism for thermal implosion is proposed that involves rapid suppression of a turbulent diffusion flame. The classical concept of thermal explosion is discussed first. A possible scenario for the reverse process (thermal implosion) is then discussed and illustrated by a relevant mathematical model. Based on the arguments presented in the paper, thermal implosion may be observed as an unstable equilibrium point on the generalized Semenov diagram for a turbulent flame; however, this hypothesis requires experimental confirmation.
Free-space laser communication system with rapid acquisition based on astronomical telescopes.
Wang, Jianmin; Lv, Junyi; Zhao, Guang; Wang, Gang
2015-08-10
The general structure of a free-space optical (FSO) communication system based on astronomical telescopes is proposed. The light path for astronomical observation and for communication can be easily switched. A separate camera is used as a star sensor to determine the pointing direction of the optical terminal's antenna. The new system exhibits rapid acquisition and is widely applicable to various astronomical telescope systems and wavelengths. We present a detailed analysis of the acquisition time, which can be decreased by one order of magnitude compared with traditional optical communication systems. Furthermore, we verify the software algorithms and tracking accuracy.
Nuclear Reactions in Micro/Nano-Scale Metal Particles
NASA Astrophysics Data System (ADS)
Kim, Y. E.
2013-03-01
Low-energy nuclear reactions in micro/nano-scale metal particles are described based on the theory of Bose-Einstein condensation nuclear fusion (BECNF). The BECNF theory is based on a single basic assumption capable of explaining the observed LENR phenomena: deuterons in metals undergo Bose-Einstein condensation. The BECNF theory is also a quantitative, predictive physical theory. Experimental tests of the basic assumption and of the theoretical predictions are proposed. Potential application to energy generation by ignition at low temperatures is described. A generalized BECNF theory is used to carry out theoretical analyses of recently reported experimental results for the hydrogen-nickel system.
Magnetic transport property of NiFe/WSe2/NiFe spin valve structure
NASA Astrophysics Data System (ADS)
Zhao, Kangkang; Xing, Yanhui; Han, Jun; Feng, Jiafeng; Shi, Wenhua; Zhang, Baoshun; Zeng, Zhongming
2017-06-01
Two-dimensional (2D) materials have been proposed as promising candidates for spintronic applications due to their atomic crystal structure and physical properties. Here, we introduce exfoliated few-layer tungsten diselenide (WSe2) as the spacer in a Py/WSe2/Py vertical spin valve. In this junction, the WSe2 spacer exhibits metallic behavior. We observed negative magnetoresistance (MR) with a ratio of -1.1% at 4 K and -0.21% at 300 K. A general phenomenological analysis of the negative MR is discussed. Our result is anticipated to be beneficial for future spintronic applications.
Robertson, Scott
2014-11-01
Analog gravity experiments make feasible the realization of black hole space-times in a laboratory setting and the observational verification of Hawking radiation. Since such analog systems are typically dominated by dispersion, efficient techniques for calculating the predicted Hawking spectrum in the presence of strong dispersion are required. In the preceding paper, an integral method in Fourier space is proposed for stationary 1+1-dimensional backgrounds which are asymptotically symmetric. Here, this method is generalized to backgrounds which are different in the asymptotic regions to the left and right of the scattering region.
A heating mechanism for the chromospheres of M dwarf stars
NASA Technical Reports Server (NTRS)
Giampapa, M. S.; Golub, L.; Rosner, R.; Vaiana, G.; Linsky, J. L.; Worden, S. P.
1981-01-01
The atmospheric structure of the dwarf M stars, which is especially important to the general field of stellar chromospheres and coronae, was investigated. The M-dwarf stars constitute the class of objects for which the discrepancy between the predictions of the acoustic-wave chromospheric/coronal heating hypothesis and the observations is most vivid. It is assumed that they represent a class of stars in which alternative atmospheric heating mechanisms, presumably magnetically related, are most clearly manifested. An assessment of the validity of a hypothesis accounting for the origin of the chromospheric and transition region line emission in M-dwarf stars is proposed.
Universal description of III-V/Si epitaxial growth processes
NASA Astrophysics Data System (ADS)
Lucci, I.; Charbonnier, S.; Pedesseau, L.; Vallet, M.; Cerutti, L.; Rodriguez, J.-B.; Tournié, E.; Bernard, R.; Létoublon, A.; Bertru, N.; Le Corre, A.; Rennesson, S.; Semond, F.; Patriarche, G.; Largeau, L.; Turban, P.; Ponchet, A.; Cornet, C.
2018-06-01
Here, we experimentally and theoretically clarify III-V/Si crystal growth processes. Atomically resolved microscopy shows that monodomain three-dimensional islands are observed at the early stages of AlSb, AlN, and GaP epitaxy on Si, independently of misfit. It is also shown that complete III-V/Si wetting cannot be achieved in most III-V/Si systems. Surface/interface contributions to the free-energy variations are found to be prominent over strain relief processes. We finally propose a general and unified description of III-V/Si growth processes, including a description of the formation of antiphase boundaries.
Kintsch, Walter
2012-01-01
In this essay, I explore how cognitive science could illuminate the concept of beauty. Two results from the extensive literature on aesthetics guide my discussion. As the term "beauty" is overextended in general usage, I choose as my starting point the notion of "perfect form." Aesthetic theorists are in reasonable agreement about the criteria for perfect form. What do these criteria imply for mental representations that are experienced as beautiful? Complexity theory can be used to specify constraints on mental representations abstractly formulated as vectors in a high-dimensional space. A central feature of the proposed model is that perfect form depends both on features of the objects or events perceived and on the nature of the encoding strategies or model of the observer. A simple example illustrates the proposed calculations. A number of interesting implications that arise as a consequence of reformulating beauty in this way are noted. Copyright © 2012 Cognitive Science Society, Inc.
Genetic mixed linear models for twin survival data.
Ha, Il Do; Lee, Youngjo; Pawitan, Yudi
2007-07-01
Twin studies are useful for separating the relative importance of the genetic or heritable component from that of the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to the analysis of twin survival data. Due to the limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated by survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data
Cai, T. Tony; Zhang, Anru
2016-01-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471
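A natural starting point for covariance estimation under the missing-completely-at-random model described above is the sample covariance computed from pairwise-complete observations, to which banding or thresholding can then be applied. A minimal sketch in Python (the function name and the synthetic MCAR data are illustrative, not from the paper):

```python
import numpy as np

def pairwise_covariance(X):
    """Sample covariance from incomplete data (np.nan marks missing entries).

    Entry (j, k) is computed over the rows where both variables j and k
    are observed, which is unbiased under the MCAR assumption.
    """
    n, p = X.shape
    S = np.empty((p, p))
    for j in range(p):
        for k in range(j, p):
            m = ~np.isnan(X[:, j]) & ~np.isnan(X[:, k])  # pairwise-complete rows
            xj, xk = X[m, j], X[m, k]
            S[j, k] = S[k, j] = np.mean((xj - xj.mean()) * (xk - xk.mean()))
    return S
```

With roughly 20% of entries deleted at random from Gaussian data, the estimate stays close to the true covariance; the rate penalty from missingness enters through the reduced pairwise sample sizes.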
Hybrid methods for witnessing entanglement in a microscopic-macroscopic system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spagnolo, Nicolo; Consorzio Nazionale Interuniversitario per le Scienze Fisiche della Materia, Piazzale Aldo Moro 5, I-00185 Roma; Vitelli, Chiara
2011-09-15
We propose a hybrid approach to the experimental assessment of the genuine quantum features of a general system consisting of microscopic and macroscopic parts. We infer entanglement by combining dichotomic measurements on a bidimensional system and phase-space inference through the Wigner distribution associated with the macroscopic component of the state. As a benchmark, we investigate the feasibility of our proposal in a bipartite-entangled state composed of a single-photon and a multiphoton field. Our analysis shows that, under ideal conditions, maximal violation of a Clauser-Horne-Shimony-Holt-based inequality is achievable regardless of the number of photons in the macroscopic part of the state. The difficulty in observing entanglement when losses and detection inefficiency are included can be overcome by using a hybrid entanglement witness that allows efficient correction for losses in the few-photon regime.
The S-Lagrangian and a theory of homeostasis in living systems
NASA Astrophysics Data System (ADS)
Sandler, U.; Tsitolovsky, L.
2017-04-01
A major paradox of living things is their ability to actively counteract degradation, in a continuously changing environment or when injured, through homeostatic protection. In this study, we propose a dynamic theory of homeostasis based on a generalized Lagrangian approach (S-Lagrangian), which can be equally applied to physical and nonphysical systems. Following the discoverer of homeostasis, Cannon (1935), we assume that homeostasis results from the tendency of organisms to decrease stress and avoid death. We show that the universality of homeostasis is a consequence of analytical properties of the S-Lagrangian, while the peculiarities of the biochemical and physiological mechanisms of homeostasis determine the phenomenological parameters of the S-Lagrangian. Additionally, we reveal that plausible assumptions about the features of the S-Lagrangian lead to good agreement between theoretical descriptions and observed homeostatic behavior. Here we have focused on the homeostasis of living systems; however, the proposed theory is also capable of being extended to social systems.
Dynamic Infinite Mixed-Membership Stochastic Blockmodel.
Fan, Xuhui; Cao, Longbing; Xu, Richard Yi Da
2015-09-01
Directional and pairwise measurements are often used to model interactions in a social network setting. The mixed-membership stochastic blockmodel (MMSB) was a seminal work in this area, and its capabilities have since been extended. However, models such as MMSB face particular challenges in modeling dynamic networks, for example, when the number of communities is unknown. Accordingly, this paper proposes a dynamic infinite mixed-membership stochastic blockmodel, a generalized framework that extends the existing work to potentially infinite communities inside a network in dynamic settings (i.e., networks observed over time). Additional model parameters are introduced to reflect the degree of persistence among one's memberships at consecutive time stamps. Under this framework, two specific models, namely mixture time variant and mixture time invariant models, are proposed to depict two different time correlation structures. Two effective posterior sampling strategies and their results are presented, respectively, using synthetic and real-world data.
New detectors to explore the lifetime frontier
NASA Astrophysics Data System (ADS)
Chou, John Paul; Curtin, David; Lubatti, H. J.
2017-04-01
Long-lived particles (LLPs) are a common feature in many beyond the Standard Model theories, including supersymmetry, and are generically produced in exotic Higgs decays. Unfortunately, no existing or proposed search strategy will be able to observe the decay of non-hadronic electrically neutral LLPs with masses above ∼ GeV and lifetimes near the limit set by Big Bang Nucleosynthesis (BBN), cτ ≲ 10⁷-10⁸ m. We propose the MATHUSLA surface detector concept (MAssive Timing Hodoscope for Ultra Stable neutraL pArticles), which can be implemented with existing technology and in time for the high luminosity LHC upgrade to find such ultra-long-lived particles (ULLPs), whether produced in exotic Higgs decays or more general production modes. We also advocate a dedicated LLP detector at a future 100 TeV collider, where a modestly sized underground design can discover ULLPs with lifetimes at the BBN limit produced in sub-percent level exotic Higgs decays.
Design of an optimal preview controller for linear discrete-time descriptor systems with state delay
NASA Astrophysics Data System (ADS)
Cao, Mengjuan; Liao, Fucheng
2015-04-01
In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.
MacNamara, Aine; Collins, Dave
2014-01-01
Gulbin and colleagues (Gulbin, J. P., Croser, M. J., Morley, E. J., & Weissensteiner, J. R. (2013). An integrated framework for the optimisation of sport and athlete development: A practitioner approach. Journal of Sports Sciences) present a new sport and athlete development framework that evolved from empirical observations from working with the Australian Institute of Sport. The FTEM (Foundations, Talent, Elite, Mastery) framework is proposed to integrate general and specialised phases of development for participants within the active lifestyle, sport participation and sport excellence pathways. A number of issues concerning the FTEM framework are presented. We also propose the need to move beyond prescriptive models of talent identification and development towards a consideration of features of best practice and process markers of development together with robust guidelines about the implementation of these in applied practice.
Two proposed convergence criteria for Monte Carlo solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Pederson, S.P.; Booth, T.E.
1992-01-01
The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) The random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf).
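The first of these two criteria can be sketched directly from the centered moments of the N history scores. A minimal illustration in Python (the formula is the standard sample VOV; the common acceptance rule of thumb VOV < 0.1 is a Monte Carlo practitioner convention, not a result of this paper):

```python
import numpy as np

def relative_vov(scores):
    """Estimate the relative variance of the variance (VOV) of a tally.

    VOV = sum(d^4) / (sum(d^2))^2 - 1/N, with d = x - mean(x):
    the estimated relative variance of the sample variance itself.
    """
    x = np.asarray(scores, dtype=float)
    n = x.size
    d = x - x.mean()
    return np.sum(d**4) / np.sum(d**2) ** 2 - 1.0 / n
```

For well-behaved score distributions the VOV shrinks like 1/N; a heavy-tailed history-score pdf keeps it large, signaling that N is not yet large enough for a trustworthy CLT confidence interval.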
Epitaxially influenced boundary layer model for size effect in thin metallic films
NASA Astrophysics Data System (ADS)
Bažant, Zdeněk P.; Guo, Zaoyang; Espinosa, Horacio D.; Zhu, Yong; Peng, Bei
2005-04-01
It is shown that the size effect recently observed by Espinosa et al., [J. Mech. Phys. Solids51, 47 (2003)] in pure tension tests on free thin metallic films can be explained by the existence of a boundary layer of fixed thickness, located at the surface of the film that was attached onto the substrate during deposition. The boundary layer is influenced by the epitaxial effects of crystal growth on the dislocation density and texture (manifested by prevalent crystal plane orientations). This influence is assumed to cause significantly elevated yield strength. Furthermore, the observed gradual postpeak softening, along with its size independence, which is observed in short film strips subjected to pure tension, is explained by slip localization, originating at notch-like defects, and by damage, which can propagate in a stable manner when the film strip under pure tension is sufficiently thin and short. For general applications, the present epitaxially influenced boundary layer model may be combined with the classical strain-gradient plasticity proposed by Gao et al., [J. Mech. Phys. Solids 47, 1239 (1999)], and it is shown that this combination is necessary to fit the test data on both pure tension and bending of thin films by one and the same theory. To deal with films having different crystal grain sizes, the Hall-Petch relation for the yield strength dependence on the grain size needs to be incorporated into the combined theory. For very thin films, in which a flattened grain fills the whole film thickness, the Hall-Petch relation needs a cutoff, and the asymptotic increase of yield strength with diminishing film thickness is then described by the extension of Nix's model of misfit dislocations by Zhang and Zhou [J. Adv. Mater. 38, 51 (2002)]. The final result is a proposal of a general theory for strength, size effect, hardening, and softening of thin metallic films.
Centrosomes are autocatalytic droplets of pericentriolar material organized by centrioles.
Zwicker, David; Decker, Markus; Jaensch, Steffen; Hyman, Anthony A; Jülicher, Frank
2014-07-01
Centrosomes are highly dynamic, spherical organelles without a membrane. Their physical nature and their assembly are not understood. Using the concept of phase separation, we propose a theoretical description of centrosomes as liquid droplets. In our model, centrosome material occurs in a form soluble in the cytosol and a form that tends to undergo phase separation from the cytosol. We show that an autocatalytic chemical transition between these forms accounts for the temporal evolution observed in experiments. Interestingly, the nucleation of centrosomes can be controlled by an enzymatic activity of the centrioles, which are present at the core of all centrosomes. This nonequilibrium feature also allows for multiple stable centrosomes, a situation that is unstable in equilibrium phase separation. Our theory explains the growth dynamics of centrosomes for all cell sizes down to the eight-cell stage of the Caenorhabditis elegans embryo, and it also accounts for data acquired in experiments with aberrant numbers of centrosomes and altered cell volumes. Furthermore, the model can describe unequal centrosome sizes observed in cells with perturbed centrioles. We also propose an interpretation of the molecular details of the involved proteins in the case of C. elegans. Our example suggests a general picture of the organization of membraneless organelles.
Against Genetic Tests for Athletic Talent: The Primacy of the Phenotype.
Loland, Sigmund
2015-09-01
New insights into the genetics of sport performance lead to new areas of application. One area is the use of genetic tests to identify athletic talent. Athletic performances involve a high number of complex phenotypical traits. Based on the ACCE model (review of Analytic and Clinical validity, Clinical utility, and Ethical, legal and social implications), a critique is offered of the lack of validity and predictive power of genetic tests for talent. Based on the ideal of children's right to an open future, a moral argument is given against such tests on children and young athletes. A possible role of genetic tests in sport is proposed in terms of identifying predisposition for injury. In meeting ACCE requirements, such tests could improve individualised injury prevention and increase athlete health. More generally, limitations of science are discussed in the identification of talent and in the understanding of complex human performance phenotypes. An alternative approach to talent identification is proposed in terms of ethically sensitive, systematic and evidence-based holistic observation over time of relevant phenotypical traits by experienced observers. Talent identification in sport should be based on the primacy of the phenotype.
A Global Regulation Inducing the Shape of Growing Folded Leaves
Couturier, Etienne; Courrech du Pont, Sylvain; Douady, Stéphane
2009-01-01
Shape is one of the important characteristics for the structures observed in living organisms. Whereas biologists have proposed models where the shape is controlled on a molecular level [1], physicists, following Turing [2] and d'Arcy Thompson [3], have developed theories where patterns arise spontaneously [4]. Here, we propose that volume constraints restrict the possible shapes of leaves. Focusing on palmate leaves (with lobes), the central observation is that developing leaves first grow folded inside a bud, limited by the previous and subsequent leaves. We show that the lobe perimeters end at the border of this small volume. This induces a direct relationship between the way it was folded and the final unfolded shape of the leaf. These dependencies can be approximated as simple geometrical relationships that we confirm on both folded embryonic and unfolded mature leaves. We find that independent of their position in the phylogenetic tree, these relationships work for folded species, but do not work for non-folded species. This global regulation for the leaf growth could come from a mechanical steric constraint. Such steric regulation should be more general and considered as a new simple means of global regulation. PMID:19956690
NASA Astrophysics Data System (ADS)
Moghim, S.; Hsu, K.; Bras, R. L.
2013-12-01
General Circulation Models (GCMs) are used to predict circulation and energy transfers between the atmosphere and the land. It is known that these models produce biased results that affect their use. This work proposes a new method for bias correction: the equidistant cumulative distribution function-artificial neural network (EDCDFANN) procedure. The method uses artificial neural networks (ANNs) as a surrogate model to estimate bias-corrected temperature, given an identification of the system derived from GCM output variables. A two-layer feed-forward neural network is trained with observations during a historical period, and the adjusted network can then be used to predict bias-corrected temperature for future periods. To capture extreme values, this method is combined with the equidistant CDF matching method (EDCDF; Li et al. 2010). The proposed method is tested with Community Climate System Model (CCSM3) outputs, using air and skin temperature, specific humidity, and shortwave and longwave radiation as inputs to the ANN. The method decreases the mean square error and increases the spatial correlation between the modeled temperature and the observed one. The results indicate that EDCDFANN has the potential to remove the biases of the model outputs.
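The CDF-matching half of the procedure can be sketched without the neural network: equidistant CDF matching adds to each future model value the quantile-wise observed-minus-model offset estimated from the historical period. A minimal sketch assuming empirical CDFs and synthetic, illustratively biased data (function names are not from the paper):

```python
import numpy as np

def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at the points x."""
    s = np.sort(sample)
    return np.searchsorted(s, x, side="right") / s.size

def edcdf_correct(mod_hist, obs_hist, mod_fut):
    """Equidistant CDF matching (in the spirit of Li et al. 2010):

    adjusted = x + F_obs^{-1}(p) - F_mod,hist^{-1}(p), with p = F_mod,fut(x),
    so the historical model bias at each quantile is subtracted from the future run.
    """
    p = np.clip(ecdf(mod_fut, mod_fut), 1e-6, 1 - 1e-6)
    return mod_fut + np.quantile(obs_hist, p) - np.quantile(mod_hist, p)
```

A model running uniformly 2 degrees warm over the historical period is corrected by about -2 at every quantile of the future run, while the model-projected climate shift itself is preserved.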
Monitoring Poisson observations using combined applications of Shewhart and EWMA charts
NASA Astrophysics Data System (ADS)
Abujiya, Mu'azu Ramat
2017-11-01
The Shewhart and exponentially weighted moving average (EWMA) charts for nonconformities are the most widely used procedures for monitoring Poisson observations in modern industries. Individually, the Shewhart and EWMA charts are sensitive only to large and small shifts, respectively. To enhance the detection abilities of the two schemes in monitoring all kinds of shifts in Poisson count data, this study examines the performance of combined applications of the Shewhart and EWMA Poisson control charts. Furthermore, the study proposes modifications based on a well-structured statistical data collection technique, ranked set sampling (RSS), to detect shifts in the mean of a Poisson process more quickly. The relative performance of the proposed Shewhart-EWMA Poisson location charts is evaluated in terms of the average run length (ARL), standard deviation of the run length (SDRL), median run length (MRL), average ratio ARL (ARARL), average extra quadratic loss (AEQL) and performance comparison index (PCI). The new Poisson control charts based on the RSS method are generally superior to most of the existing schemes for monitoring Poisson processes. The use of these combined Shewhart-EWMA Poisson charts is illustrated with an example to demonstrate the practical implementation of the design procedure.
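A minimal sketch of the combined scheme under simple random sampling (the RSS modification and the paper's tuned chart constants are not reproduced; the limit widths below are illustrative):

```python
import numpy as np

def combined_poisson_chart(counts, c0, lam=0.2, k_sh=3.0, k_ew=2.7):
    """Run a combined Shewhart-EWMA chart on Poisson counts.

    c0: in-control mean; lam: EWMA smoothing constant; k_sh, k_ew:
    illustrative control-limit widths. Returns the index of the first
    out-of-control sample, or -1 if no sample signals.
    """
    sigma = np.sqrt(c0)
    ucl_sh = c0 + k_sh * sigma                 # Shewhart limits (large shifts)
    lcl_sh = max(c0 - k_sh * sigma, 0.0)
    z = c0                                     # EWMA statistic (small shifts)
    for i, x in enumerate(counts):
        z = lam * x + (1.0 - lam) * z
        # exact time-varying EWMA variance
        var_z = sigma**2 * lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * (i + 1)))
        half_width = k_ew * np.sqrt(var_z)
        if x > ucl_sh or x < lcl_sh or abs(z - c0) > half_width:
            return i
    return -1
```

The Shewhart rule reacts to a single large count, while the EWMA statistic accumulates evidence of small sustained shifts; the combined chart signals when either rule fires.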
Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A
2015-02-01
Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
Reheating-volume measure in the string theory landscape
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winitzki, Sergei
2008-12-15
I recently proposed the "reheating-volume" (RV) prescription as a possible solution to the measure problem in "multiverse" cosmology. The goal of this work is to extend the RV measure to scenarios involving bubble nucleation, such as the string theory landscape. In the spirit of the RV prescription, I propose to calculate the distribution of observable quantities in a landscape that is conditioned in probability to nucleate a finite total number of bubbles to the future of an initial bubble. A general formula for the relative number of bubbles of different types can be derived. I show that the RV measure is well defined and independent of the choice of the initial bubble type, as long as that type supports further bubble nucleation. Applying the RV measure to a generic landscape, I find that the abundance of Boltzmann brains is always negligibly small compared with the abundance of ordinary observers in the bubbles of the same type. As an illustration, I present explicit results for a toy landscape containing four vacuum states, and for landscapes with a single high-energy vacuum and a large number of low-energy vacua.
Generalizing Evidence From Randomized Clinical Trials to Target Populations
Cole, Stephen R.; Stuart, Elizabeth A.
2010-01-01
Properly planned and conducted randomized clinical trials remain susceptible to a lack of external validity. The authors illustrate a model-based method to standardize observed trial results to a specified target population using a seminal human immunodeficiency virus (HIV) treatment trial, and they provide Monte Carlo simulation evidence supporting the method. The example trial enrolled 1,156 HIV-infected adult men and women in the United States in 1996, randomly assigned 577 to a highly active antiretroviral therapy and 579 to a largely ineffective combination therapy, and followed participants for 52 weeks. The target population was US people infected with HIV in 2006, as estimated by the Centers for Disease Control and Prevention. Results from the trial apply, albeit muted by 12%, to the target population, under the assumption that the authors have measured and correctly modeled the determinants of selection that reflect heterogeneity in the treatment effect. In simulations with a heterogeneous treatment effect, a conventional intent-to-treat estimate was biased with poor confidence limit coverage, but the proposed estimate was largely unbiased with appropriate confidence limit coverage. The proposed method standardizes observed trial results to a specified target population and thereby provides information regarding the generalizability of trial results. PMID:20547574
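The standardization idea can be sketched with inverse-probability-of-selection weighting. The function below is a generic illustration, under the assumption that trial-selection probabilities given the effect-modifying covariates have already been modeled elsewhere; it is not the authors' exact specification:

```python
import numpy as np

def transported_effect(y, treated, p_select):
    """Weighted difference in means with weights w = 1 / P(selected | covariates),
    standardizing the observed trial contrast to the target population."""
    y = np.asarray(y, float)
    treated = np.asarray(treated)
    w = 1.0 / np.asarray(p_select, float)
    mu1 = np.average(y[treated == 1], weights=w[treated == 1])
    mu0 = np.average(y[treated == 0], weights=w[treated == 0])
    return mu1 - mu0
```

When the treatment effect varies with a covariate that also drives selection into the trial, the unweighted intent-to-treat contrast is biased for the target population while the weighted contrast recovers it, mirroring the simulation findings described above.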
Seven-quasiparticle bands in {sup 139}Ce
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chanda, Somen; Bhattacharjee, Tumpa; Bhattacharyya, Sarmishtha
2009-05-15
The high spin states in the {sup 139}Ce nucleus have been studied by in-beam {gamma}-spectroscopic techniques using the reaction {sup 130}Te({sup 12}C,3n){sup 139}Ce at E{sub beam}=65 MeV. A gamma detector array consisting of five Compton-suppressed Clover detectors was used for coincidence measurements. 15 new levels have been proposed and 28 new {gamma} transitions have been assigned to {sup 139}Ce on the basis of {gamma}{gamma} coincidence data. The level scheme of {sup 139}Ce has been extended above the known 70 ns (19/2){sup -} isomer up to {approx}6.1 MeV in excitation energy and (35/2)ℏ in spin. The spin-parity assignments for most of the newly proposed levels have been made using the deduced Directional Correlation from Oriented states of nuclei (DCO ratio) and the Polarization Directional Correlation from Oriented states (PDCO ratio) for the de-exciting transitions. The observed level structure has been compared with a large-basis shell model calculation and also with the predictions of cranked Nilsson-Strutinsky (CNS) calculations. A general consistency has been observed between these two different theoretical approaches.
From self-observation to imitation: visuomotor association on a robotic hand.
Chaminade, Thierry; Oztop, Erhan; Cheng, Gordon; Kawato, Mitsuo
2008-04-15
Being at the crux of human cognition and behaviour, imitation has become the target of investigations ranging from experimental psychology and neurophysiology to computational sciences and robotics. It is often assumed that imitation is innate, but it has more recently been argued, both theoretically and experimentally, that basic forms of imitation could emerge as a result of self-observation. Here, we tested this proposal on a realistic experimental platform, comprising an associative network linking a 16-degrees-of-freedom robotic hand and a simple visual system. We report that this minimal visuomotor association is sufficient to bootstrap basic imitation. Our results indicate that crucial features of human imitation, such as generalization to new actions, may emerge from a connectionist associative network. Therefore, we suggest that a behaviour as complex as imitation could be founded, at the neuronal level, on basic mechanisms of associative learning, a notion supported by a recent proposal on the developmental origin of mirror neurons. Our approach can be applied to the development of realistic cognitive architectures for humanoid robots, as well as to shed new light on the cognitive processes at play in early human cognitive development.
Rescaling the complementary relationship for land surface evaporation
NASA Astrophysics Data System (ADS)
Crago, R.; Szilagyi, J.; Qualls, R.; Huntington, J.
2016-11-01
Recent research into the complementary relationship (CR) between actual and apparent potential evaporation has resulted in numerous alternative forms for the CR. Inspired by Brutsaert (2015), who derived a general CR in the form y = function (x), where x is the ratio of potential evaporation to apparent potential evaporation and y is the ratio of actual to apparent potential evaporation, an equation is proposed to calculate the value of x at which y goes to zero, denoted xmin. The value of xmin varies even at an individual observation site, but can be calculated using only the data required for the Penman (1948) equation as expressed here, so no calibration of xmin is required. It is shown that the scatter in x-y plots using experimental data is reduced when x is replaced by X = (x - xmin)/(1 - xmin). This rescaling results in data falling along the line y = X, which is proposed as a new version of the CR. While a reinterpretation of the fundamental boundary conditions proposed by Brutsaert (2015) is required, the physical constraints behind them are still met. An alternative formulation relating y to X is also discussed.
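The rescaling itself is a one-line transformation; the helper below (names illustrative) simply assembles the ratios defined in the text and applies X = (x - xmin)/(1 - xmin):

```python
import numpy as np

def cr_rescaled(e_act, e_pot, e_app, xmin):
    """Return (X, y) where x = E_pot/E_app, y = E_act/E_app and
    X = (x - xmin) / (1 - xmin); the proposed CR predicts y = X.

    e_act: actual evaporation, e_pot: potential evaporation,
    e_app: apparent potential evaporation, xmin: site value of x
    at which y reaches zero (computable from Penman-equation data).
    """
    x = np.asarray(e_pot, float) / np.asarray(e_app, float)
    y = np.asarray(e_act, float) / np.asarray(e_app, float)
    X = (x - xmin) / (1.0 - xmin)
    return X, y
```

Plotting y against X (rather than x) is what collapses the experimental scatter onto the line y = X in the proposed version of the CR.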
Finding user personal interests by tweet-mining using advanced machine learning algorithm in R
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Roy, P.; Asha Jerlin, M.
2017-11-01
Social media plays a key role in every individual's life as a channel for expressing personal views about their likes and dislikes. A novel mechanism is proposed to infer topics of interest of individual users in the Twitter social network. It has been observed that in Twitter a user generally follows experts on various topics of his/her interest in order to acquire information on those topics. A methodology based on social annotations is therefore used to first deduce the topical expertise of popular Twitter users, and then transitively infer the interests of the users who follow them. This methodology is a sharp departure from the traditional techniques of inferring the interests of a user from the tweets that he/she posts or receives. It is shown that the topics of interest inferred by the proposed methodology are far superior to the topics extracted by state-of-the-art techniques such as topic models (Labelled LDA) applied to tweets. Based upon the proposed methodology, a system has been built, "Who is interested in what", which can infer the interests of millions of Twitter users.
Structural zeros in high-dimensional data with applications to microbiome studies.
Kaul, Abhishek; Davidov, Ori; Peddada, Shyamal D
2017-07-01
This paper is motivated by the recent interest in the analysis of high-dimensional microbiome data. A key feature of these data is the presence of "structural zeros" which are microbes missing from an observation vector due to an underlying biological process and not due to error in measurement. Typical notions of missingness are unable to model these structural zeros. We define a general framework which allows for structural zeros in the model and propose methods of estimating sparse high-dimensional covariance and precision matrices under this setup. We establish error bounds in the spectral and Frobenius norms for the proposed estimators and empirically verify them with a simulation study. The proposed methodology is illustrated by applying it to the global gut microbiome data of Yatsunenko and others (2012. Human gut microbiome viewed across age and geography. Nature 486, 222-227). Using our methodology we classify subjects according to the geographical location on the basis of their gut microbiome.
3-D rigid body tracking using vision and depth sensors.
Gedik, O. Serdar; Alatan, A. Aydın
2013-10-01
In robotics and augmented reality (AR) applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, while trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using intensity and shape index map data of the 3-D point cloud, increases 2-D, as well as 3-D, tracking performance significantly. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and a superior performance is clearly observed, both objectively via error metrics and subjectively on the rendered scenes.
Airburst height computation method of Sea-Impact Test
NASA Astrophysics Data System (ADS)
Kim, Jinho; Kim, Hyungsup; Chae, Sungwoo; Park, Sungho
2017-05-01
This paper describes how to measure the airburst height of projectiles and rockets. In general, the airburst height can be determined by triangulation or from the images of a camera installed on a radar. These previous methods have limitations when the missiles impact the sea surface. To apply triangulation, the cameras should be installed so that the lines of sight intersect at angles from 60 to 120 degrees, and there may be no suitable observation towers on which to install the optical system. When the range of the missile exceeds 50 km, the images from the radar camera can be useless. This paper proposes a method to measure the airburst height of a sea-impact projectile using a single camera. The camera is installed on an island near the impact area, and the height is computed from the position and attitude of the camera and the sea level. To demonstrate the proposed method, its results are compared with those from the previous methods.
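One plausible single-camera geometry, stated here as an assumption since the abstract does not spell out its equations: with the camera height above sea level known, the depression angle to the sea-surface point beneath the burst fixes the horizontal range, and the elevation angle to the burst then gives its height.

```python
import math

def airburst_height(cam_height, elev_burst, depr_sea):
    """Hypothetical single-camera airburst height over a flat sea.

    cam_height: camera height above sea level [m]
    elev_burst: elevation angle (above horizontal) to the burst [rad]
    depr_sea:   depression angle (below horizontal) to the sea-surface
                point directly beneath the burst [rad]
    Both angles are assumed to come from the camera attitude and pixel
    coordinates; Earth curvature and refraction are neglected.
    """
    horizontal_range = cam_height / math.tan(depr_sea)
    return cam_height + horizontal_range * math.tan(elev_burst)
```

For example, a camera 100 m above the sea observing a burst 1 km away recovers the burst height from the two angles alone, with no second camera or radar range needed.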
Dark matter "transporting" mechanism explaining positron excesses
NASA Astrophysics Data System (ADS)
Kim, Doojin; Park, Jong-Chul; Shin, Seodong
2018-04-01
We propose a novel mechanism to explain the positron excesses observed by satellite-based telescopes, including PAMELA and AMS-02, in dark matter (DM) scenarios. The novelty behind the proposal is that it makes direct use of DM around the Galactic Center, where DM populates most densely, allowing us to avoid tensions from cosmological and astrophysical measurements. The key ingredients of this mechanism are DM annihilation into unstable states with a very long laboratory-frame lifetime and their "retarded" decay near the Earth into electron-positron pair(s), possibly with other (in)visible particles. We argue that this sort of explanation is not in conflict with relevant constraints from big bang nucleosynthesis and the cosmic microwave background. Regarding the resultant positron spectrum, we provide a generalized source term in the associated diffusion equation, which is readily applicable to any type of two-"stage" DM scenario wherein the production of Standard Model particles occurs at completely different places from those of DM annihilation. We then conduct a data analysis with the recent AMS-02 data to validate our proposal.
Addison, Paul S; Wang, Rui; Uribe, Alberto A; Bergese, Sergio D
2015-01-01
DPOP (ΔPOP or Delta-POP) is a noninvasive parameter which measures the strength of respiratory modulations present in the pulse oximeter waveform. It has been proposed as a noninvasive alternative to pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. We considered a number of simple techniques for better determining the underlying relationship between the two parameters. It was shown numerically that baseline-induced signal errors were asymmetric in nature, which corresponded to observation, and we proposed a method which combines a least-median-of-squares estimator with the requirement that the relationship passes through the origin (the LMSO method). We further developed a method of normalization of the parameters through rescaling DPOP using the inverse gradient of the linear fitted relationship. We propose that this normalization method (LMSO-N) is applicable to the matching of a wide range of clinical parameters. It is also generally applicable to the self-normalizing of parameters whose behaviour may change slightly due to algorithmic improvements.
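The LMSO idea can be sketched as follows; the candidate-slope search below is one simple way to realize a least-median-of-squares fit constrained through the origin, and is not necessarily the authors' exact estimator:

```python
import numpy as np

def lmso_gradient(dpop, ppv):
    """Least-median-of-squares slope through the origin: try the slope
    defined by each data point and keep the one minimizing the median
    squared residual, which is robust to outlying (DPOP, PPV) pairs."""
    dpop = np.asarray(dpop, float)
    ppv = np.asarray(ppv, float)
    candidates = ppv / dpop
    return min(candidates, key=lambda m: np.median((ppv - m * dpop) ** 2))

def normalize_dpop(dpop, gradient):
    """LMSO-N step: rescale DPOP onto the PPV scale with the fitted gradient."""
    return gradient * np.asarray(dpop, float)
```

Because the median squared residual ignores up to half the points, a few baseline-corrupted samples do not pull the fitted gradient the way an ordinary least-squares fit would.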
Exponential Family Functional data analysis via a low-rank model.
Li, Gen; Huang, Jianhua Z; Shen, Haipeng
2018-05-08
In many applications, non-Gaussian data such as binary or count data are observed over a continuous domain, and there exists a smooth underlying structure describing such data. We develop a new functional data method for this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution and that the matrix of canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only standard one-way functional data, but also two-way (or bivariate) functional data. In addition, we introduce a new cross validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to UK mortality data, where the observations are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern.
Label Information Guided Graph Construction for Semi-Supervised Learning.
Zhuang, Liansheng; Zhou, Zihan; Gao, Shenghua; Yin, Jingwen; Lin, Zhouchen; Ma, Yi
2017-09-01
In the literature, most existing graph-based semi-supervised learning methods only use the label information of observed samples in the label propagation stage, while ignoring such valuable information when learning the graph. In this paper, we argue that it is beneficial to consider the label information in the graph learning stage. Specifically, by enforcing the weight of edges between labeled samples of different classes to be zero, we explicitly incorporate the label information into state-of-the-art graph learning methods, such as the low-rank representation (LRR), and propose a novel semi-supervised graph learning method called semi-supervised low-rank representation. This results in a convex optimization problem with linear constraints, which can be solved by the linearized alternating direction method. Though we take LRR as an example, our proposed method is in fact very general and can be applied to any self-representation graph learning method. Experimental results on both synthetic and real data sets demonstrate that the proposed graph learning method can better capture the global geometric structure of the data, and is therefore more effective for semi-supervised learning tasks.
SymPS: BRDF Symmetry Guided Photometric Stereo for Shape and Light Source Estimation.
Lu, Feng; Chen, Xiaowu; Sato, Imari; Sato, Yoichi
2018-01-01
We propose uncalibrated photometric stereo methods that address the problem due to unknown isotropic reflectance. At the core of our methods is the notion of "constrained half-vector symmetry" for general isotropic BRDFs. We show that such symmetry can be observed in various real-world materials, and it leads to new techniques for shape and light source estimation. Based on the 1D and 2D representations of the symmetry, we propose two methods for surface normal estimation; one focuses on accurate elevation angle recovery for surface normals when the light sources only cover the visible hemisphere, and the other for comprehensive surface normal optimization in the case that the light sources are also non-uniformly distributed. The proposed robust light source estimation method also plays an essential role to let our methods work in an uncalibrated manner with good accuracy. Quantitative evaluations are conducted with both synthetic and real-world scenes, which produce the state-of-the-art accuracy for all of the non-Lambertian materials in MERL database and the real-world datasets.
Phase recovery in temporal speckle pattern interferometry using the generalized S-transform.
Federico, Alejandro; Kaufmann, Guillermo H
2008-04-15
We propose a novel approach based on the generalized S-transform to retrieve optical phase distributions in temporal speckle pattern interferometry. The performance of the proposed approach is compared with those given by well-known techniques based on the continuous wavelet transform, the Hilbert transform, and a smoothed time-frequency distribution, by analyzing interferometric data degraded by noise, nonmodulating pixels, and modulation loss. The advantages and limitations of the proposed phase retrieval approach are discussed.
Quantum gravity in the sky: interplay between fundamental theory and observations
NASA Astrophysics Data System (ADS)
Ashtekar, Abhay; Gupt, Brajesh
2017-01-01
Observational missions have provided us with a reliable model of the evolution of the universe starting from the last scattering surface all the way to future infinity. Furthermore, given a specific model of inflation, using quantum field theory on curved space-times this history can be pushed back in time to the epoch when space-time curvature was some 10^62 times that at the horizon of a solar mass black hole! However, to extend the history further back to the Planck regime requires input from quantum gravity. An important aspect of this input is the choice of the background quantum geometry and of the Heisenberg state of cosmological perturbations thereon, motivated by Planck scale physics. This paper introduces first steps in that direction. Specifically, we propose two principles that link quantum geometry and Heisenberg uncertainties in the Planck epoch with late time physics, and explore in detail the observational consequences of the initial conditions they select. We find that the predicted temperature-temperature (T-T) correlations for scalar modes are indistinguishable from standard inflation at small angular scales even though the initial conditions are now set in the deep Planck regime. However, there is a specific power suppression at large angular scales. As a result, the predicted spectrum provides a better fit to the PLANCK mission data than standard inflation, where the initial conditions are set in the general relativity regime. Thus, our proposal brings out a deep interplay between the ultraviolet and the infrared. Finally, the proposal also leads to specific predictions of power suppression at large angular scales for the (T-E and E-E) correlations involving electric polarization. The PLANCK team is expected to release these data in the coming year.
Modeling of the Reaction Mechanism of Enzymatic Radical C–C Coupling by Benzylsuccinate Synthase
Szaleniec, Maciej; Heider, Johann
2016-01-01
Molecular modeling techniques and density functional theory calculations were performed to study the mechanism of enzymatic radical C–C coupling catalyzed by benzylsuccinate synthase (BSS). BSS has been identified as a glycyl radical enzyme that catalyzes the enantiospecific addition of fumarate to toluene, initiating its anaerobic metabolism in the denitrifying bacterium Thauera aromatica; this reaction represents the general mechanism of toluene degradation in all known anaerobic degraders. In this work, docking calculations, classical molecular dynamics (MD) simulations, and DFT+D2 cluster modeling were employed to address the following questions: (i) What mechanistic details of the BSS reaction yield the most probable molecular model? (ii) What is the molecular basis of the enantiospecificity of BSS? (iii) Is the proposed mechanism consistent with experimental observations, such as an inversion of the stereochemistry of the benzylic protons, syn addition of toluene to fumarate, exclusive production of (R)-benzylsuccinate as a product, and a kinetic isotope effect (KIE) ranging between 2 and 4? The quantum mechanics (QM) modeling confirms that the previously proposed hypothetical mechanism is the most probable among several variants considered, although C–H activation and not C–C coupling turns out to be the rate limiting step. The enantiospecificity of the enzyme seems to be enforced by a thermodynamic preference for binding of fumarate in the pro(R) orientation and a reverse preference for benzyl radical attack on fumarate in the pro(S) pathway, which results in a prohibitively high energy barrier for radical quenching. Finally, the proposed mechanism agrees with most of the experimental observations, although the calculated intrinsic KIE from the model (6.5) is still higher than the experimentally observed values (4.0), which suggests that both C–H activation and radical quenching may jointly be involved in the kinetic control of the reaction. PMID:27070573
Nonparametric identification of nonlinear dynamic systems using a synchronisation-based method
NASA Astrophysics Data System (ADS)
Kenderi, Gábor; Fidlin, Alexander
2014-12-01
The present study proposes an identification method for highly nonlinear mechanical systems that does not require a priori knowledge of the underlying nonlinearities to reconstruct arbitrary restoring force surfaces between degrees of freedom. This approach is based on the master-slave synchronisation between a dynamic model of the system as the slave and the real system as the master using measurements of the latter. As the model synchronises to the measurements, it becomes an observer of the real system. The optimal observer algorithm in a least-squares sense is given by the Kalman filter. Using the well-known state augmentation technique, the Kalman filter can be turned into a dual state and parameter estimator to identify parameters of a priori characterised nonlinearities. The paper proposes an extension of this technique towards nonparametric identification. A general system model is introduced by describing the restoring forces as bilateral spring-dampers with time-variant coefficients, which are estimated as augmented states. The estimation procedure is followed by an a posteriori statistical analysis to reconstruct noise-free restoring force characteristics using the estimated states and their estimated variances. Observability is provided using only one measured mechanical quantity per degree of freedom, which makes this approach less demanding in the number of necessary measurement signals compared with truly nonparametric solutions, which typically require displacement, velocity and acceleration signals. Additionally, due to the statistical rigour of the procedure, it successfully addresses signals corrupted by significant measurement noise. In the present paper, the method is described in detail, which is followed by numerical examples of one degree of freedom (1DoF) and 2DoF mechanical systems with strong nonlinearities of vibro-impact type to demonstrate the effectiveness of the proposed technique.
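As a toy illustration of the augmented-state idea (not the paper's full nonparametric scheme), the sketch below estimates an unknown stiffness of a 1DoF oscillator by appending it to the state vector of an extended Kalman filter driven by position measurements alone; all constants are illustrative.

```python
import numpy as np

def ekf_identify_stiffness(y_meas, dt, m=1.0, c=0.1, k_guess=0.5,
                           q=1e-8, qk=1e-4, r=1e-6):
    """Joint state/parameter EKF for x'' = -(k/m) x - (c/m) x'.

    The stiffness k is appended to the state [x, v, k] and modeled as a
    random walk; only the position is measured (one mechanical quantity
    per DoF, as in the text). Returns the final estimate of k.
    """
    s = np.array([y_meas[0], 0.0, k_guess])        # augmented state
    P = np.diag([r, 1.0, 4.0])                     # initial covariance
    Q = np.diag([q, q, qk])                        # process noise
    H = np.array([[1.0, 0.0, 0.0]])                # measure position only
    for y in y_meas[1:]:
        x, v, k = s
        # Euler-discretized prediction and its Jacobian
        s = np.array([x + dt * v,
                      v + dt * (-(k / m) * x - (c / m) * v),
                      k])
        F = np.array([[1.0, dt, 0.0],
                      [-dt * k / m, 1.0 - dt * c / m, -dt * x / m],
                      [0.0, 0.0, 1.0]])
        P = F @ P @ F.T + Q
        # scalar measurement update
        S = (H @ P @ H.T).item() + r
        K = (P @ H.T).ravel() / S
        s = s + K * (y - s[0])
        P = (np.eye(3) - np.outer(K, H.ravel())) @ P
    return s[2]
```

In the paper's nonparametric extension, the single constant parameter here is replaced by time-variant restoring-force coefficients estimated as augmented states, followed by the a posteriori statistical reconstruction step.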
Interactions in the Dark Sector of Cosmology
NASA Astrophysics Data System (ADS)
Bean, Rachel
The success of modern cosmology hinges on two dramatic augmentations beyond the minimalist assumption of baryonic matter interacting gravitationally through general relativity. The first assumption is that there must exist either new gravitational dynamics or a new component of the cosmic energy budget - dark matter - that allows structure to form and accounts for weak lensing and galactic rotation curves. The second assumption is that a further dynamical modification or energy component - dark energy - exists, driving late-time cosmic acceleration. The need for these is now firmly established through a host of observations, which have raised crucial questions, and present a deep challenge to fundamental physics. The central theme of this proposal is the detailed understanding of the nature of the dark sector through the inevitable interactions between its individual components and with the visible universe. Such interactions can be crucial to a given model's viability, affecting its capability to reproduce the cosmic expansion history; the detailed predictions for structure formation; the gravitational dynamics on astrophysical and solar system scales; the stability of the microphysical model; and its ultimate consistency. While many models are consistent with cosmology on the coarsest scales, as is often the case, the devil may lie in the details. In this proposal we plan a comprehensive analysis of these details, focusing on the interactions within the dark sector and between it and visible matter, and on how these interactions affect the observational and theoretical consistency of models. Since it is unlikely that there will be a silver bullet allowing us to isolate the cause of cosmic acceleration, it is critical to develop a coherent view of the landscape of proposed models, extract clear predictions, and determine what combination of experiments and observations might allow us to test these predictions.
Agent-Centric Approach for Cybersecurity Decision-Support with Partial Observability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, Ramakrishna; Chatterjee, Samrat; Paulson, Patrick R.
Generating automated cyber resilience policies for real-world settings is a challenging research problem that must account for uncertainties in system state over time and dynamics between attackers and defenders. In addition to understanding attacker and defender motives and tools, and identifying "relevant" system and attack data, it is also critical to develop rigorous mathematical formulations representing the defender's decision-support problem under uncertainty. Game-theoretic approaches involving cyber resource allocation optimization with Markov decision processes (MDP) have been previously proposed in the literature. Moreover, advancements in reinforcement learning approaches have motivated the development of partially observable stochastic games (POSGs) in various multi-agent problem domains with partial information. Recent advances in cyber-system state space modeling have also generated interest in potential applicability of POSGs for cybersecurity. However, as is the case in strategic card games such as poker, research challenges using game-theoretic approaches for practical cyber defense applications include: 1) solving for equilibrium and designing efficient algorithms for large-scale, general problems; 2) establishing mathematical guarantees that equilibrium exists; 3) handling possible existence of multiple equilibria; and 4) exploitation of opponent weaknesses. Inspired by advances in solving strategic card games while acknowledging practical challenges associated with the use of game-theoretic approaches in cyber settings, this paper proposes an agent-centric approach for cybersecurity decision-support with partial system state observability.
Constraining the Type Ia Supernova Progenitor: The Search for Hydrogen in Nebular Spectra
NASA Astrophysics Data System (ADS)
Leonard, Douglas
2006-02-01
The progenitor systems of Type Ia supernovae (SNe Ia) are observationally unconstrained. Prevailing theory invokes a carbon-oxygen white dwarf accreting matter from a companion until a thermonuclear runaway ensues that incinerates the white dwarf. While models of exploding carbon-oxygen white dwarfs faithfully reproduce the main characteristics of SNe Ia, we are ignorant about the nature of the proposed companion star. Simulations resulting from this single-degenerate binary channel, however, demand the presence of low-velocity H(alpha) emission in spectra taken in the nebular phase (250 - 400 days after maximum light), since a portion of the companion's envelope becomes entrained in the ejecta. This hydrogen has never been detected, and only generally weak limits have heretofore been set from ~ 6 SNe Ia observed during the nebular phase at low resolution and often with a low signal-to-noise ratio (S/N). We propose to remedy this situation through high-S/N observations of two nearby, nebular-phase SNe Ia, with sufficient sensitivity and resolution to detect ~ 0.01 Msun of solar abundance material in the ejecta. The detection of late-time H(alpha) emission would be considered a "smoking gun" for the binary scenario. If H(alpha) is not detected, the limits will effectively rule out sub-giant, red giant, and all but the most widely separated main-sequence companions.
NASA Technical Reports Server (NTRS)
Murchie, Scott; Erard, Stephane
1993-01-01
The surface of Phobos has been proposed to consist of carbonaceous chondrite or optically darkened ordinary chondrite ('black chondrite'). Measurements of Phobos's spectrum are key evidence for testing these hypotheses. Disk-integrated measurements were obtained by the Mariner 9 UV spectrometer, Viking Lander cameras, and ground-based observations. In 1989 disk-resolved measurements of Phobos and Mars were obtained by three instruments on Phobos 2: the KRFM spectrometer, which covered the wavelength range 0.32 - 0.6 microns; the ISM imaging spectrometer, which covered the wavelength range 0.76 - 3.16 microns; and the VSK TV cameras, whose wavelength ranges overlap those of KRFM and ISM. Here we report analysis of the Phobos 2 measurements completed since earlier results were reported. We validated the calibration of the Phobos measurements using observations of Mars for reference, and compared them with pre-1989 measurements. We also combined spectra from the three detectors to produce an integrated spectrum of Phobos from 0.3 - 2.6 microns. The Phobos 2 results agree well with previous measurements, contrary to some reports. The general shape of the spectrum is consistent with both proposed analogues. However, the position and depth of the previously unobserved 1-micron absorption are more diagnostic, and indicate that the composition of typical surfaces is more consistent with black chondrite.