Estimation of Initial and Response Times of Laser Dew-Point Hygrometer by Measurement Simulation
NASA Astrophysics Data System (ADS)
Matsumoto, Sigeaki; Toyooka, Satoru
1995-10-01
The initial and the response times of the laser dew-point hygrometer were evaluated by measurement simulation. The simulation was based on loop computations of the surface temperature of a plate with dew deposition, the quantity of dew deposited and the intensity of scattered light from the surface at each short interval of measurement. The initial time was defined as the time necessary for the hygrometer to reach a temperature within ±0.5°C of the measured dew point from the start time of measurement, and the response time was defined analogously for stepwise dew-point changes of +5°C and -5°C. The simulation results are in approximate agreement with the recorded temperature and intensity of scattered light of the hygrometer. The evaluated initial time ranged from 0.3 min to 5 min in the temperature range from 0°C to 60°C, and the response time was evaluated to be from 0.2 min to 3 min.
One-loop gravitational wave spectrum in de Sitter spacetime
NASA Astrophysics Data System (ADS)
Fröb, Markus B.; Roura, Albert; Verdaguer, Enric
2012-08-01
The two-point function for tensor metric perturbations around de Sitter spacetime, including one-loop corrections from massless conformally coupled scalar fields, is calculated exactly. We work in the Poincaré patch (with spatially flat sections) and employ dimensional regularization for the renormalization process. Unlike previous studies, we obtain the result for arbitrary time separations rather than just equal times. Moreover, in contrast to existing results for tensor perturbations, ours is manifestly invariant under the subgroup of de Sitter isometries corresponding to a simultaneous time translation and rescaling of the spatial coordinates. Selecting the right initial state for the interacting theory via an appropriate iε prescription is crucial for this result. Finally, we show that although the two-point function is a well-defined spacetime distribution, the equal-time limit of its spatial Fourier transform is divergent. Therefore, contrary to the well-defined distribution for arbitrary time separations, the power spectrum is, strictly speaking, ill-defined when loop corrections are included.
A time-efficient algorithm for implementing the Catmull-Clark subdivision method
NASA Astrophysics Data System (ADS)
Ioannou, G.; Savva, A.; Stylianou, V.
2015-10-01
Splines are the most popular methods in figure modeling and CAGD (Computer-Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and splines calculate the required number of points which, when displayed on a computer screen, produce a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures such as human and animal bodies, whose complex structure does not allow them to be defined by a regular rectangular grid. On the other hand, surface subdivision methods, which are derived from splines, generate surfaces defined by an arbitrary topology of control points. This is the reason that, during the last fifteen years, subdivision methods have taken the lead over regular spline methods in all areas of modeling in both industry and research. The cost of executing computer software developed to read control points and calculate the surface is run time, because the surface structure required for handling arbitrary topological grids is very complicated. Many software programs have been developed to implement subdivision surfaces; however, few algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. The Catmull-Clark method, the most popular of the subdivision methods, is employed to illustrate the algorithm.
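As a concrete illustration of the data structures such an implementation must manage, the sketch below performs one Catmull-Clark refinement step on a closed, all-quad mesh; the function names and mesh layout are illustrative assumptions, not the paper's implementation.

```python
# One Catmull-Clark refinement step for a closed, all-quad mesh.
# `verts`: list of (x, y, z) tuples; `faces`: list of 4-tuples of vertex indices.
from collections import defaultdict

def catmull_clark_step(verts, faces):
    def avg(pts):
        n = len(pts)
        return tuple(sum(c) / n for c in zip(*pts))

    # 1. Face points: centroid of each face.
    face_pts = [avg([verts[i] for i in f]) for f in faces]

    # 2. Faces incident to each edge and each vertex.
    edge_faces = defaultdict(list)
    vert_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for k in range(4):
            e = tuple(sorted((f[k], f[(k + 1) % 4])))
            edge_faces[e].append(fi)
        for v in f:
            vert_faces[v].append(fi)

    # 3. Edge points: average of the two edge endpoints and the two
    #    adjacent face points (interior edges of a closed mesh).
    edge_pts = {
        e: avg([verts[e[0]], verts[e[1]]] + [face_pts[fi] for fi in fs])
        for e, fs in edge_faces.items()
    }

    # 4. Updated original vertices: (F + 2R + (n - 3)P) / n, with F the
    #    average of adjacent face points, R the average of incident edge
    #    midpoints, and n the vertex valence.
    new_verts = []
    for v, p in enumerate(verts):
        fs = vert_faces[v]
        n = len(fs)
        F = avg([face_pts[fi] for fi in fs])
        mids = [avg([verts[e[0]], verts[e[1]]]) for e in edge_faces if v in e]
        R = avg(mids)
        new_verts.append(tuple((F[c] + 2 * R[c] + (n - 3) * p[c]) / n
                               for c in range(3)))

    # 5. Re-index: updated verts + face points + edge points; each quad
    #    splits into four quads around its face point.
    fp_off = len(new_verts)
    new_verts += face_pts
    ep_idx = {}
    for e, pt in edge_pts.items():
        ep_idx[e] = len(new_verts)
        new_verts.append(pt)

    new_faces = []
    for fi, f in enumerate(faces):
        for k in range(4):
            e_prev = tuple(sorted((f[k - 1], f[k])))
            e_next = tuple(sorted((f[k], f[(k + 1) % 4])))
            new_faces.append((f[k], ep_idx[e_next], fp_off + fi, ep_idx[e_prev]))
    return new_verts, new_faces
```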
76 FR 41454 - Caribbean Fishery Management Council; Scoping Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-14
... based on alternative selected in Action 3(a) and time series of landings data as defined in Action 1(a...., Puerto Rico, St. Thomas/St. John, St. Croix) based on the preferred management reference point time series selected by the Council in Actions 1(a) and 2(a). Alternative 2A. Use a mid-point or equidistant...
A General Approach to Defining Latent Growth Components
ERIC Educational Resources Information Center
Mayer, Axel; Steyer, Rolf; Mueller, Horst
2012-01-01
We present a 3-step approach to defining latent growth components. In the first step, a measurement model with at least 2 indicators for each time point is formulated to identify measurement error variances and obtain latent variables that are purged from measurement error. In the second step, we use contrast matrices to define the latent growth…
Compensatable muon collider calorimeter with manageable backgrounds
Raja, Rajendran
2015-02-17
A method and system for reducing background noise in a particle collider comprises identifying an interaction point among a plurality of particles within a particle collider associated with a detector element, defining a trigger start time for each of the pixels as the time taken for light to travel from the interaction point to the pixel and a trigger stop time as a selected time after the trigger start time, and collecting only detections that occur between the trigger start time and the trigger stop time, in order to thereafter compensate the result from the particle collider and reduce unwanted background detection.
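A minimal sketch of the per-pixel trigger gate just described, assuming straight-line light travel from the interaction point; the names and the gate-length parameter are illustrative, not from the patent:

```python
import math

C = 299_792_458.0  # speed of light in vacuum [m/s]

def trigger_window(pixel_xyz, interaction_xyz, gate_length_s):
    """Trigger start = light travel time from the interaction point
    to the pixel; stop = start plus a selected gate length."""
    t_start = math.dist(pixel_xyz, interaction_xyz) / C
    return t_start, t_start + gate_length_s

def filter_hits(hit_times_s, pixel_xyz, interaction_xyz, gate_length_s):
    """Keep only detections inside [t_start, t_stop]; everything else
    is treated as out-of-time background."""
    t0, t1 = trigger_window(pixel_xyz, interaction_xyz, gate_length_s)
    return [t for t in hit_times_s if t0 <= t <= t1]
```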
A model of cloud application assignments in software-defined storages
NASA Astrophysics Data System (ADS)
Bolodurina, Irina P.; Parfenov, Denis I.; Polezhaev, Petr N.; Shukhman, Alexander E.
2017-01-01
The aim of this study is to analyze the structure and mechanisms of interaction of typical cloud applications and to suggest approaches to optimize their placement in storage systems. In this paper, we describe a generalized model of cloud applications including three basic layers: a model of the application, a model of the service, and a model of the resource. The distinctive feature of the suggested model is that it analyzes cloud resources both from the user's point of view and from the point of view of a software-defined infrastructure of the virtual data center (DC). The innovative character of this model lies in describing, at the same time, the placement of application data and the state of the virtual environment, taking into account the network topology. The model of software-defined storage has been developed as a submodel within the resource model. This model allows implementing the algorithm for control of cloud application assignments in software-defined storages. Experiments showed that this algorithm decreases cloud application response time and increases performance in processing user requests. The use of software-defined data storages allows a decrease in the number of physical storage devices, which demonstrates the efficiency of our algorithm.
Detecting multiple moving objects in crowded environments with coherent motion regions
Cheriyadat, Anil M.; Radke, Richard J.
2013-06-11
Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. This results from enforcing the constraint that selected coherent motion regions contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. The algorithm operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, a measure of the maximum distance between a pair of feature point tracks.
Knee point search using cascading top-k sorting with minimized time complexity.
Wang, Zheng; Tseng, Shian-Shyong
2013-01-01
Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
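For illustration, a quickselect-style partial sort of the kind the top-k sorting step builds on might look as follows; this is a sketch of the building block, not the authors' cascading algorithm:

```python
import random

def top_k(values, k):
    """Return the k largest values, sorted in descending order.
    Average cost O(n + k log k): only the partition that can still
    contain the k-th largest element is recursed into."""
    def select(arr, k):
        if k <= 0:
            return []
        if len(arr) <= k:
            return arr
        pivot = random.choice(arr)
        above = [x for x in arr if x > pivot]
        if len(above) >= k:
            return select(above, k)
        equal = [x for x in arr if x == pivot]
        if len(above) + len(equal) >= k:
            return above + equal[:k - len(above)]
        below = [x for x in arr if x < pivot]
        return above + equal + select(below, k - len(above) - len(equal))
    return sorted(select(list(values), k), reverse=True)
```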
Complementarity in the multiverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bousso, Raphael
2009-06-15
In the multiverse, as in AdS space, light cones relate bulk points to boundary scales. This holographic UV-IR connection defines a preferred global time cutoff that regulates the divergences of eternal inflation. An entirely different cutoff, the causal patch, arises in the holographic description of black holes. Remarkably, I find evidence that these two regulators define the same probability measure in the multiverse. Initial conditions for the causal patch are controlled by the late-time attractor regime of the global description.
Lighting Simulation and Design Program (LSDP)
NASA Astrophysics Data System (ADS)
Smith, D. A.
This computer program simulates a user-defined lighting configuration. It has been developed as a tool to aid in the design of exterior lighting systems. Although the program is used primarily for perimeter security lighting design, it has potential use for any application where the light can be approximated by a point source. A data base of luminaire photometric information is maintained for use with the program. The user defines the surface area to be illuminated with a rectangular grid and specifies luminaire positions. Illumination values are calculated for regularly spaced points in that area, and isolux contour plots are generated. The numerical and graphical output for a particular site model are then available for analysis. The amount of time spent on point-to-point illumination computation with this program is much less than that required for tedious hand calculations. The ease with which various parameters can be interactively modified with the program also reduces the time and labor expended. Consequently, the feasibility of design ideas can be examined, modified, and retested more thoroughly, and overall design costs can be substantially lessened by using this program as an adjunct to the design process.
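A minimal sketch of the point-source computation the program automates, using the inverse-square cosine law for horizontal illuminance; the data layout and names are assumptions, not LSDP's actual data base format:

```python
import math

def illuminance_grid(nx, ny, spacing_m, luminaires):
    """Each luminaire is (x, y, mounting_height_m, intensity_cd).
    Horizontal illuminance at a ground point from a point source:
    E = I * cos(theta) / d**2, with cos(theta) = h / d."""
    grid = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            px, py = i * spacing_m, j * spacing_m
            for (lx, ly, h, intensity) in luminaires:
                d2 = (px - lx) ** 2 + (py - ly) ** 2 + h ** 2
                d = math.sqrt(d2)
                grid[j][i] += intensity * (h / d) / d2  # lux
    return grid

# Example: two 10,000 cd sources on 8 m poles over a 20 m x 20 m area;
# the resulting grid could feed an isolux contour plotter.
lux = illuminance_grid(21, 21, 1.0,
                       [(5.0, 10.0, 8.0, 10_000), (15.0, 10.0, 8.0, 10_000)])
```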
Registration of 4D time-series of cardiac images with multichannel Diffeomorphic Demons.
Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Pennec, Xavier; Xu, Chenyang; Ayache, Nicholas
2008-01-01
In this paper, we propose a generic framework for intersubject non-linear registration of 4D time-series images. In this framework, spatio-temporal registration is defined by mapping trajectories of physical points, as opposed to spatial registration, which solely aims at mapping homologous points. First, we determine the trajectories we want to register in each sequence using a motion tracking algorithm based on the Diffeomorphic Demons algorithm. Then, we simultaneously perform pairwise registrations of corresponding time-points with the constraint to map the same physical points over time. We show that this trajectory registration can be formulated as a multichannel registration of 3D images. We solve it using the Diffeomorphic Demons algorithm extended to vector-valued 3D images. This framework is applied to the intersubject non-linear registration of 4D cardiac CT sequences.
Quantum Field Theory on Spacetimes with a Compactly Generated Cauchy Horizon
NASA Astrophysics Data System (ADS)
Kay, Bernard S.; Radzikowski, Marek J.; Wald, Robert M.
1997-02-01
We prove two theorems which concern difficulties in the formulation of the quantum theory of a linear scalar field on a spacetime, (M, g_{ab}), with a compactly generated Cauchy horizon. These theorems demonstrate the breakdown of the theory at certain base points of the Cauchy horizon, which are defined as 'past terminal accumulation points' of the horizon generators. Thus, the theorems may be interpreted as giving support to Hawking's 'Chronology Protection Conjecture', according to which the laws of physics prevent one from manufacturing a 'time machine'. Specifically, we prove: Theorem 1. There is no extension to (M, g_{ab}) of the usual field algebra on the initial globally hyperbolic region which satisfies the condition of F-locality at any base point. In other words, any extension of the field algebra must, in any globally hyperbolic neighbourhood of any base point, differ from the algebra one would define on that neighbourhood according to the rules for globally hyperbolic spacetimes. Theorem 2. The two-point distribution for any Hadamard state defined on the initial globally hyperbolic region must (when extended to a distributional bisolution of the covariant Klein-Gordon equation on the full spacetime) be singular at every base point x, in the sense that the difference between this two-point distribution and a local Hadamard distribution cannot be given by a bounded function in any neighbourhood (in M × M) of (x, x). In consequence of Theorem 2, quantities such as the renormalized expectation value of φ² or of the stress-energy tensor are necessarily ill-defined or singular at any base point. The proof of these theorems relies on the 'Propagation of Singularities' theorems of Duistermaat and Hörmander.
NASA Astrophysics Data System (ADS)
Boggild, Peter; Hjorth Petersen, Dirch; Sardan Sukas, Ozlem; Dam, Henrik Friis; Lei, Anders; Booth, Timothy; Molhave, Kristian; Eichhorn, Volkmar
2010-03-01
We present a range of highly adaptable microtools for direct interaction with nanoscale structures: (i) semiautomatic pick-and-place assembly of multiwalled carbon nanotubes onto cantilevers for high-aspect-ratio scanning probe microscopy, using electrothermal microgrippers inside a SEM. Topology optimisation was used to calculate the optimal gripper shape defined by the boundary conditions, resulting in 10-100 times better performance. By instead pre-defining detachable tips using electron beam lithography, free-form scanning probe tips (Nanobits) can be mounted in virtually any position on a cantilever; (ii) scanning micro four-point probes allow fast, non-destructive mapping of local electrical properties (sheet resistance and Hall mobility) and hysteresis effects of graphene sheets; (iii) sub-100 nm freestanding devices with wires, heaters, actuators, sensors, resonators and probes were defined in a 100 nm thin membrane with focused ion beam milling. By patterning generic membrane templates (Nembranes), the fabrication time of a TEM-compatible NEMS device is effectively reduced to around 20 minutes.
Speed Approach for UAV Collision Avoidance
NASA Astrophysics Data System (ADS)
Berdonosov, V. D.; Zivotova, A. A.; Htet Naing, Zaw; Zhuravlev, D. O.
2018-05-01
The article presents a new approach for detecting potential collisions of two or more UAVs in a common aviation area. UAV trajectories are approximated by two or three trajectory points obtained from the ADS-B system. In the process of finding the meeting points of the trajectories, two cutoff values of the critical speed range, at which a UAV collision is possible, are calculated. As the expressions for the meeting points and for the cutoff values of the critical speed are given in analytical form, even an on-board computer system with limited computational capacity will need far less time for the calculation than the time between receptions of ADS-B data. For this reason, the calculations can be updated at each cycle of new data reception, and the trajectory approximation can be bounded by straight lines. This approach allows developing a compact collision avoidance algorithm, even for a significant number of UAVs (more than several dozen). To prove the adequacy of the approach, modeling was performed using a software system developed specifically for this purpose.
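A hedged sketch of the geometric core, assuming straight-line trajectories in a plane: intersect the two tracks, then derive the speed band for the second UAV that would bring both aircraft to the meeting point within a chosen safety window (the window parameter and names are assumptions, not the paper's formulation):

```python
import math

def meeting_point(p1, p2, q1, q2):
    """Intersection of the lines p1->p2 and q1->q2 (2D),
    or None if the tracks are parallel."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, q1, q2
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(den) < 1e-12:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def critical_speed_band(pos_a, pos_b, meet, speed_a, safety_window_s):
    """Speeds of UAV B that put it at the meeting point within
    +/- safety_window_s of UAV A's arrival time."""
    t_a = math.dist(pos_a, meet) / speed_a      # A's arrival time
    d_b = math.dist(pos_b, meet)
    v_hi = d_b / max(t_a - safety_window_s, 1e-9)  # arriving no earlier
    v_lo = d_b / (t_a + safety_window_s)           # arriving no later
    return v_lo, v_hi
```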
The Legal Rights of Tenured and Part-Time Faculty Members in Higher Education.
ERIC Educational Resources Information Center
Corley, Sherie P.
A review of faculty-related court decisions in the areas of status, compensation, and unit determination points out legal rights of part-time and full-time faculty in higher education. These rights have been tested and defined by many court cases. Litigation has occurred about the difference between part-time and full-time faculty. In regard to…
76 FR 2665 - Caribbean Fishery Management Council; Scoping Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-14
... time series of catch data that is considered to be consistently reliable across all islands as defined... based on what the Council considers to be the longest time series of catch data that is consistently... preferred management reference point time series. Action 3b. Recreational Bag Limits Option 1: No action. Do...
Vortex motion in doubly connected domains
NASA Astrophysics Data System (ADS)
Zannetti, L.; Gallizio, F.; Ottino, G. M.
The unsteady two-dimensional rotational flow past doubly connected domains is analytically addressed. By concentrating the vorticity in point vortices, the flow is modelled as a potential flow with point singularities. The dependence of the complex potential on time is defined according to the Kelvin theorem. The general case of non-null circulations around the solid bodies is discussed. Vortex shedding and time evolution of the circulation past a two-element airfoil and past a two-bladed Darrieus turbine are presented as physically coherent examples.
Determination of minimal clinically important change in early and advanced Parkinson's disease.
Hauser, Robert A; Auinger, Peggy
2011-04-01
Two common primary efficacy outcome measures in Parkinson's disease (PD) are change in Unified Parkinson's Disease Rating Scale (UPDRS) scores in early PD and change in "off" time in patients with motor fluctuations. Defining the minimal clinically important change (MCIC) in these outcome measures is important to interpret the clinical relevance of changes observed in clinical trials and other situations. We analyzed data from 2 multicenter, placebo-controlled, randomized clinical trials of rasagiline; TEMPO studied 404 early PD subjects, and PRESTO studied 472 levodopa-treated subjects with motor fluctuations. An anchor-based approach using clinical global impression of improvement (CGI-I) was used to determine MCIC for UPDRS scores and daily "off" time. MCIC was defined as mean change in actively treated subjects rated minimally improved on CGI-I. Receiver operating characteristic (ROC) curves defined optimal cutoffs discriminating between changed and unchanged subjects. MCIC for improvement in total UPDRS score (parts I-III) in early PD was determined to be -3.5 points based on mean scores and -3.0 points based on ROC curves. In addition, we found an MCIC for reduction in "off" time of 1.0 hours as defined by mean reduction in "off" time in active treated subjects self-rated as minimally improved on CGI-I minus mean reduction in "off" time in placebo-treated subjects self-rated as unchanged (1.9-0.9 hours). We hypothesize that many methodological factors can influence determination of the MCIC, and a range of values is likely to emerge from multiple studies. Copyright © 2011 Movement Disorder Society.
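A minimal sketch of the ROC step, assuming the optimal cutoff is chosen by maximizing the Youden index (sensitivity + specificity - 1); the abstract does not state the exact criterion, so this is illustrative:

```python
import numpy as np

def roc_optimal_cutoff(change_score, changed):
    """change_score: change from baseline (more negative = more
    improvement on UPDRS); changed: boolean array, True for subjects
    rated minimally improved on CGI-I. Returns the cutoff maximizing
    sensitivity + specificity - 1 over all observed thresholds."""
    change_score = np.asarray(change_score, float)
    changed = np.asarray(changed, bool)
    best_cut, best_j = None, -1.0
    for cut in np.unique(change_score):
        pred = change_score <= cut          # "improved by at least cut"
        sens = np.mean(pred[changed])       # true positive rate
        spec = np.mean(~pred[~changed])     # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j
```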
ERIC Educational Resources Information Center
Ibrahim, Norhayati; Freeman, Steven A.; Shelley, Mack C.
2012-01-01
The study explored the influence of work experience on adult part-time students' academic success as defined by their cumulative grade point average. The sample consisted of 614 part-time students from four polytechnic institutions in Malaysia. The study identified six factors to measure the perceived influence of work experiences--positive…
Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang
2018-04-05
We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.
Self-calibrating multiplexer circuit
Wahl, Chris P.
1997-01-01
A time domain multiplexer system with automatic determination of acceptable multiplexer output limits, error determination, or correction comprises a time domain multiplexer, a computer, a constant current source capable of at least three distinct current levels, and two series resistances employed for calibration and testing. A two-point linear calibration curve defining acceptable multiplexer voltage limits may be defined by the computer by determining the voltage output of the multiplexer for very accurately known input signals developed from predetermined current levels across the series resistances. Drift in the multiplexer may be detected by the computer when the output voltage limits expected during normal operation are exceeded, or the relationship defined by the calibration curve is invalidated.
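A minimal sketch of the two-point calibration and drift check described above; the known inputs come from driving preset currents through the series resistances, and the function names and tolerance parameter are illustrative:

```python
def two_point_calibration(v_in, v_out):
    """v_in: two accurately known input voltages (I * R_series);
    v_out: the multiplexer's measured outputs for those inputs.
    Returns (gain, offset) of the line v_out = gain * v_in + offset."""
    (x1, x2), (y1, y2) = v_in, v_out
    gain = (y2 - y1) / (x2 - x1)
    offset = y1 - gain * x1
    return gain, offset

def within_limits(v_measured, v_known_input, gain, offset, tol):
    """Flag drift when an output strays beyond the band the
    calibration curve predicts for a known input."""
    return abs(v_measured - (gain * v_known_input + offset)) <= tol
```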
Cloud and boundary layer structure over San Nicolas Island during FIRE
NASA Technical Reports Server (NTRS)
Albrecht, Bruce A.; Fairall, Christopher W.; Syrett, William J.; Schubert, Wayne H.; Snider, Jack B.
1990-01-01
The temporal evolution of the structure of the marine boundary layer and of the associated low-level clouds observed in the vicinity of San Nicolas Island (SNI) is defined from data collected during the First ISCCP Regional Experiment (FIRE) Marine Stratocumulus Intense Field Observations (IFO) (July 1 to 19). Surface, radiosonde, and remote-sensing measurements are used for this analysis. Soundings from the island and from the ship Point Sur, which was located approximately 100 km northwest of SNI, are used to define variations in the thermodynamic structure of the lower troposphere on time scales of 12 hours and longer. Time-height sections of potential temperature and equivalent potential temperature clearly define large-scale variations in the height and the strength of the inversion and periods where the conditions for cloud-top entrainment instability (CTEI) are met. Well-defined variations in the height and the strength of the inversion were associated with a Catalina eddy that was present at various times during the experiment and with the passage of the remnants of a tropical cyclone on July 18. The large-scale variations in the mean thermodynamic structure at SNI correlate well with those observed from the Point Sur. Cloud characteristics are defined for 19 days of the experiment using data from a microwave radiometer, a cloud ceilometer, a sodar, and longwave and shortwave radiometers. The depth of the cloud layer is estimated by defining inversion heights from the sodar reflectivity and cloud-base heights from a laser ceilometer. The integrated liquid water obtained from NOAA's microwave radiometer is compared with the adiabatic liquid water content that is calculated by lifting a parcel adiabatically from cloud base. In addition, the cloud structure is characterized by the variability in cloud-base height and in the integrated liquid water.
2011-01-01
Background The Prospective Space-Time scan statistic (PST) is widely used for the evaluation of space-time clusters of point event data. Usually a window of cylindrical shape is employed, with a circular or elliptical base in the space domain. Recently, the concept of Minimum Spanning Tree (MST) was applied to specify the set of potential clusters, through the Density-Equalizing Euclidean MST (DEEMST) method, for the detection of arbitrarily shaped clusters. The original map is cartogram transformed, such that the control points are spread uniformly. That method is quite effective, but the cartogram construction is computationally expensive and complicated. Results A fast method for the detection and inference of point data set space-time disease clusters is presented, the Voronoi Based Scan (VBScan). A Voronoi diagram is built for points representing population individuals (cases and controls). The number of Voronoi cell boundaries intercepted by the line segment joining two case points defines the Voronoi distance between those points. That distance is used to approximate the density of the heterogeneous population and build the Voronoi distance MST linking the cases. The successive removal of edges from the Voronoi distance MST generates sub-trees which are the potential space-time clusters. Finally, those clusters are evaluated through the scan statistic. Monte Carlo replications of the original data are used to evaluate the significance of the clusters. An application for dengue fever in a small Brazilian city is presented. Conclusions The prompt detection of space-time clusters of disease outbreaks, when the number of individuals is large, was shown to be feasible due to the reduced computational load of VBScan. Instead of changing the map, VBScan modifies the metric used to define the distance between cases, without requiring the cartogram construction. Numerical simulations showed that VBScan has higher power of detection, sensitivity and positive predictive value than the elliptic PST. Furthermore, as VBScan also incorporates topological information from the point neighborhood structure, in addition to the usual geometric information, it is more robust than purely geometric methods such as the elliptic scan. Those advantages were illustrated in a real setting for dengue fever space-time clusters. PMID:21513556
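A hedged sketch of the Voronoi distance between two case points: the paper counts the Voronoi cell boundaries intercepted by the joining segment, which is approximated below by counting nearest-generator changes along a dense sampling of that segment (an approximation for illustration, not the authors' exact construction):

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_distance(case_a, case_b, population_xy, n_samples=500):
    """population_xy: (N, 2) coordinates of all individuals (the
    Voronoi generators); case_a, case_b: the two case points.
    Counts how often the nearest generator changes while walking
    the segment, i.e. roughly the number of cells crossed."""
    tree = cKDTree(population_xy)
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    seg = (1.0 - t) * np.asarray(case_a, float) + t * np.asarray(case_b, float)
    _, nearest = tree.query(seg)               # generator index per sample
    return int(np.sum(nearest[1:] != nearest[:-1]))
```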
A Critique of the DoD Materiel Distribution Study,
1979-03-01
are generated on order cycle times by their components: communication times, depot order processing times, depot capacity delay times, and transit...exceeded, the order was placed in one of three priority queues. The order processing time was determined by priority group by depot. A 20-point probability...time was defined to be the sum of communication, depot order processing, depot capacity delay, and transit times. As has been argued, the first three of
Statistical aspects of point count sampling
Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
14 CFR Appendix B to Part 420 - Method for Defining a Flight Corridor
Code of Federal Regulations, 2010 CFR
2010-01-01
... trajectory simulation software. Trajectory time intervals shall be no greater than one second. If an... applicant shall construct a launch area of a flight corridor using the processes and equations of this paragraph for each trajectory position. An applicant shall repeat these processes at time points on the...
ERIC Educational Resources Information Center
Miner, Norris
The operations of an institution can be viewed from three perspectives: (1) the "actual operating measurement" such as income and expenditures of a cost center at a point in time; (2) the "criterion" which reflects the established policy for a time period; and (3) the "efficiency level" wherein a balance between input and output is defined.…
Limiting Magnitude, τ, t_eff, and Image Quality in DES Year 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
H. Neilsen, Jr.; Bernstein, Gary; Gruendl, Robert
The Dark Energy Survey (DES) is an astronomical imaging survey being completed with the DECam imager on the Blanco telescope at CTIO. After each night of observing, the DES data management (DM) group performs an initial processing of that night's data, and uses the results to determine which exposures are of acceptable quality, and which need to be repeated. The primary measure by which we declare an image of acceptable quality is τ, a scaling of the exposure time. This is the scale factor that needs to be applied to the open-shutter time to reach the same photometric signal-to-noise ratio for faint point sources under a set of canonical good conditions. These conditions are defined to be seeing resulting in a PSF full width at half maximum (FWHM) of 0.9" and a pre-defined sky brightness which approximates the zenith sky brightness under fully dark conditions. Point-source limiting magnitude and signal-to-noise ratio should therefore vary with τ in the same way they vary with exposure time. Measurements of point sources and τ in the first year of DES data confirm that they do. In the context of DES, the symbol t_eff and the expression "effective exposure time" usually refer to the scaling factor, τ, rather than the actual effective exposure time; the "effective exposure time" in this case refers to the effective duration of one second, rather than the effective duration of an exposure.
An adaptive segment method for smoothing lidar signal based on noise estimation
NASA Astrophysics Data System (ADS)
Wang, Yuzhao; Luo, Pingping
2014-10-01
An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise is defined as 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows in the signals are fixed. The ASSM creates changing end points in different signals, so the smoothing windows can be set adaptively. The window is always set to half of the segment length, and the average smoothing method is then applied within each segment. An iterative process is required to reduce the end-point aberration effect in the average smoothing method, and two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, which means that frequency-domain disturbance is avoided. A lidar echo was simulated in the experimental work. The echo was supposed to be created by a space-borne lidar (e.g., CALIOP), and white Gaussian noise was added to the echo to act as the random noise resulting from the environment and the detector. The novel method, ASSM, was applied to the noisy echo to filter the noise. In the test, N was set to 3 and the number of iterations to two. The results show that the signal can be smoothed adaptively by the ASSM, but N and the number of iterations might need to be optimized when the ASSM is applied to a different lidar.
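A minimal sketch of the ASSM as described, assuming NumPy arrays: segment end points are detected with the 3Nσ threshold, and each segment is mean-smoothed with a window of half its length; variable names are illustrative.

```python
import numpy as np

def assm(signal, background, N=3, iterations=2):
    sigma = np.std(background)                 # noise level: sigma of background
    threshold = 3 * N * sigma                  # 3*N*sigma jump criterion
    # end points where the jump between adjacent samples exceeds threshold
    jumps = np.flatnonzero(np.abs(np.diff(signal)) > threshold) + 1
    ends = np.concatenate(([0], jumps, [len(signal)]))

    out = np.asarray(signal, float).copy()
    for s, e in zip(ends[:-1], ends[1:]):
        seg = out[s:e]
        w = max(1, len(seg) // 2)              # window = half the segment
        kernel = np.ones(w) / w
        for _ in range(iterations):            # 2-3 passes, per the paper
            seg = np.convolve(seg, kernel, mode="same")
        out[s:e] = seg
    return out
```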
Toroidally symmetric plasma vortex at tokamak divertor null point
Umansky, M. V.; Ryutov, D. D.
2016-03-09
Reduced MHD equations are used for studying toroidally symmetric plasma dynamics near the divertor null point. Numerical solution of these equations exhibits a plasma vortex localized at the null point with the time-evolution defined by interplay of the curvature drive, magnetic restoring force, and dissipation. Convective motion is easier to achieve for a second-order null (snowflake) divertor than for a regular x-point configuration, and the size of the convection zone in a snowflake configuration grows with plasma pressure at the null point. In conclusion, the trends in simulations are consistent with tokamak experiments which indicate the presence of enhanced transport at the null point.
Stability of Dynamical Systems with Discontinuous Motions:
NASA Astrophysics Data System (ADS)
Michel, Anthony N.; Hou, Ling
In this paper we present a stability theory for discontinuous dynamical systems (DDS): continuous-time systems whose motions are not necessarily continuous with respect to time. We show that this theory is not only applicable in the analysis of DDS, but also in the analysis of continuous dynamical systems (continuous-time systems whose motions are continuous with respect to time), discrete-time dynamical systems (systems whose motions are defined at discrete points in time) and hybrid dynamical systems (HDS) (systems whose descriptions involve simultaneously continuous-time and discrete-time). We show that the stability results for DDS are in general less conservative than the corresponding well-known classical Lyapunov results for continuous dynamical systems and discrete-time dynamical systems. Although the DDS stability results are applicable to general dynamical systems defined on metric spaces (divorced from any kind of description by differential equations, or any other kinds of equations), we confine ourselves to finite-dimensional dynamical systems defined by ordinary differential equations and difference equations, to make this paper as widely accessible as possible. We present only sample results, namely, results for uniform asymptotic stability in the large.
Wardlaw, Bruce R.; Ellwood, Brooks B.; Lambert, Lance L.; Tomkin, Jonathan H.; Bell, Gordon L.; Nestell, Galina P.
2012-01-01
Here we establish a magnetostratigraphy susceptibility zonation for the three Middle Permian Global boundary Stratotype Sections and Points (GSSPs) that have recently been defined, located in Guadalupe Mountains National Park, West Texas, USA. These GSSPs, all within the Middle Permian Guadalupian Series, define (1) the base of the Roadian Stage (base of the Guadalupian Series), (2) the base of the Wordian Stage and (3) the base of the Capitanian Stage. Data from two additional stratigraphic successions in the region, equivalent in age to the Kungurian–Roadian and Wordian–Capitanian boundary intervals, are also reported. Based on low-field, mass specific magnetic susceptibility (χ) measurements of 706 closely spaced samples from these stratigraphic sections and time-series analysis of one of these sections, we (1) define the magnetostratigraphy susceptibility zonation for the three Guadalupian Series Global boundary Stratotype Sections and Points; (2) demonstrate that χ datasets provide a proxy for climate cyclicity; (3) give quantitative estimates of the time it took for some of these sediments to accumulate; (4) give the rates at which sediments were accumulated; (5) allow more precise correlation to equivalent sections in the region; (6) identify anomalous stratigraphic horizons; and (7) give estimates for timing and duration of geological events within sections.
Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Xu, Chenyang; Ayache, Nicholas
2010-07-01
We propose a framework for the nonlinear spatiotemporal registration of 4D time-series of images based on the Diffeomorphic Demons (DD) algorithm. In this framework, the 4D spatiotemporal registration is decoupled into a 4D temporal registration, defined as mapping physiological states, and a 4D spatial registration, defined as mapping trajectories of physical points. Our contribution focuses more specifically on the 4D spatial registration, which should be consistent over time, as opposed to 3D registration, which solely aims at mapping homologous points at a given time-point. First, we estimate in each sequence the motion displacement field, which is a dense representation of the point trajectories we want to register. Then, we perform simultaneous 3D registrations of corresponding time-points with the constraints to map the same physical points over time, called the trajectory constraints. Under these constraints, we show that the 4D spatial registration can be formulated as a multichannel registration of 3D images. To solve it, we propose a novel version of the Diffeomorphic Demons (DD) algorithm extended to vector-valued 3D images, the Multichannel Diffeomorphic Demons (MDD). For evaluation, this framework is applied to the registration of 4D cardiac computed tomography (CT) sequences and compared to other standard methods with real patient data and synthetic data simulated from a physiologically realistic electromechanical cardiac model. Results show that the trajectory constraints act as a temporal regularization consistent with motion, whereas the multichannel registration acts as a spatial regularization. Finally, using these trajectory constraints with multichannel registration yields the best compromise between registration accuracy, temporal and spatial smoothness, and computation times. A prospective example of application is also presented with the spatiotemporal registration of 4D cardiac CT sequences of the same patient before and after radiofrequency ablation (RFA) in a case of atrial fibrillation (AF). The intersequence spatial transformations over a cardiac cycle make it possible to analyze and quantify the regression of left ventricular hypertrophy and its impact on cardiac function.
A Novel Health Evaluation Strategy for Multifunctional Self-Validating Sensors
Shen, Zhengguang; Wang, Qi
2013-01-01
The performance evaluation of sensors is very important in practical applications. In this paper, a theory based on multi-variable information fusion is studied to evaluate the health level of multifunctional sensors. A novel concept of health reliability degree (HRD) is defined to indicate a quantitative health level, in contrast to traditional, qualitative fault diagnosis. To evaluate the health condition from both local and global perspectives, the HRD of a single sensitive component at multiple time points and of the overall multifunctional sensor at a single time point are defined, respectively. The HRD methodology relies on multi-variable data fusion technology coupled with a grey comprehensive evaluation method. In this method, the information entropy and the analytic hierarchy process are used, respectively, to acquire the distinct importance of each sensitive unit and the sensitivity of different time points. In order to verify the feasibility of the proposed strategy, an experimental health evaluation system for multifunctional self-validating sensors was designed, and five different health-level situations were discussed. The results show that the proposed method is feasible, that the HRD can be used to quantitatively indicate the health level, and that it responds quickly to performance changes of multifunctional sensors. PMID:23291576
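A small sketch of the entropy-weighting step the abstract names, in which indicators that vary more across observations carry more information and receive larger weights; the matrix layout and the final fusion helper are assumptions:

```python
import numpy as np

def entropy_weights(X):
    """X: (m, n) matrix of m observations (e.g., time points) of n
    positive indicators (e.g., sensitive units). Returns one weight
    per indicator via the standard entropy-weight method."""
    P = X / X.sum(axis=0)                                    # column-normalize
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - E                                              # divergence degree
    return d / d.sum()

def fused_hrd(unit_scores, weights):
    """Weighted fusion of per-unit health scores into one HRD value."""
    return float(np.dot(unit_scores, weights))
```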
NASA Astrophysics Data System (ADS)
Kim, Byung Chan; Park, Seong-Ook
In order to determine exposure compliance with the electromagnetic fields from a base station's antenna in the far-field region, we should calculate the spatially averaged field value in a defined space. This value is calculated from measurements obtained at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels are averaged over any 6 min (from 100 kHz to 10 GHz) for the general public. Therefore, the more points we use, the longer the measurement time becomes. For practical application, it is very advantageous to spend less time on measurement. In this paper, we analyzed the difference between average values over 6 min and over shorter periods and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6 min averaging value, the proposed minimum averaging time is 1 min.
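A small sketch of the comparison described, assuming uniformly sampled field values; the sampling-rate handling and names are illustrative:

```python
import numpy as np

def window_average(samples, fs_hz, minutes):
    """Mean of the first `minutes` of data sampled at fs_hz."""
    n = int(round(minutes * 60 * fs_hz))
    return float(np.mean(samples[:n]))

def averaging_difference(samples, fs_hz, short_minutes):
    """Difference between the short-window and 6-min averages, to be
    compared against the standard uncertainty for measurement drift."""
    return abs(window_average(samples, fs_hz, short_minutes)
               - window_average(samples, fs_hz, 6.0))
```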
De Los Ríos, F. A.; Paluszny, M.
2015-01-01
We consider some methods to extract information about the rotator cuff based on magnetic resonance images; the study aims to define an alternative method of display that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we are going to use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are going to be textured and displayed with the information of the magnetic resonance images using the trilinear interpolation technique. For the generation of points to texture each patch, we propose a new method that guarantees the uniform distribution of its points using a random statistical method. Its computational cost, defined as the average computing time to generate a fixed number of points, is significantly lower as compared with deterministic and other standard statistical techniques. PMID:25650281
High School Grade Inflation from 2004 to 2011. ACT Research Report Series, 2013 (3)
ERIC Educational Resources Information Center
Zhang, Qian; Sanchez, Edgar I.
2013-01-01
This study explores inflation in high school grade point average (HSGPA), defined as trend over time in the conditional average of HSGPA, given ACT® Composite score. The time period considered is 2004 to 2011. Using hierarchical linear modeling, the study updates a previous analysis of Woodruff and Ziomek (2004). The study also investigates…
ERIC Educational Resources Information Center
Jauhiainen, Arto; Jauhiainen, Annukka; Laiho, Anne; Lehto, Reeta
2015-01-01
This article explores how the university workers of two Finnish universities experienced the range of neoliberal policymaking and governance reforms implemented in the 2000s. These reforms include quality assurance, system of defined annual working hours, outcome-based salary system and work time allocation system. Our point of view regarding…
NASA Technical Reports Server (NTRS)
West, M. E.
1992-01-01
A real-time estimation filter which reduces sensitivity to system variations and reduces the amount of preflight computation is developed for the instrument pointing subsystem (IPS). The IPS is a three-axis stabilized platform developed to point various astronomical observation instruments aboard the shuttle. Currently, the IPS utilizes a linearized Kalman filter (LKF), with premission-defined gains, to compensate for system drifts and accumulated attitude errors. Since the a priori gains are generated for an expected system, variations result in a suboptimal estimation process. This report compares the performance of three real-time estimation filters with the current LKF implementation. An extended Kalman filter and a second-order Kalman filter are developed to account for the system nonlinearities, while a linear Kalman filter implementation assumes that the nonlinearities are negligible. The performance of each of the four estimation filters is compared with respect to accuracy, stability, settling time, robustness, and computational requirements. It is shown that, for the current IPS pointing requirements, the linear Kalman filter provides improved robustness over the LKF with lower computational requirements than the two real-time nonlinear estimation filters.
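For orientation, a scalar linear Kalman filter update of the kind compared in the study might look as follows; the identity dynamics and scalar noise variances are illustrative assumptions, not IPS parameters:

```python
def kalman_step(x, P, z, q, r):
    """One predict/update cycle of a scalar linear Kalman filter.
    x, P: prior state estimate and variance (e.g., a drift state);
    z: measurement residual; q, r: process and measurement noise
    variances."""
    # predict (identity dynamics for a slowly varying drift state)
    x_pred, P_pred = x, P + q
    # update
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```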
Computer graphic visualization of orbiter lower surface boundary-layer transition
NASA Technical Reports Server (NTRS)
Throckmorton, D. A.; Hartung, L. C.
1984-01-01
Computer graphic techniques are applied to the processing of Shuttle Orbiter flight data in order to create a visual presentation of the extent and movement of the boundary-layer transition front over the orbiter lower surface during entry. Flight-measured surface temperature-time histories define the onset and completion of the boundary-layer transition process at any measurement location. The locus of points which define the spatial position of the boundary-layer transition front on the orbiter planform is plotted at each discrete time for which flight data are available. Displaying these images sequentially in real-time results in an animated simulation of the in-flight boundary-layer transition process.
NASA Astrophysics Data System (ADS)
Kärcher, Hans J.; Kunz, Nans; Temi, Pasquale; Krabbe, Alfred; Wagner, Jörg; Süß, Martin
2014-07-01
The original pointing accuracy requirement of the Stratospheric Observatory for Infrared Astronomy (SOFIA) was defined at the beginning of the program, in the late 1980s, as a very challenging 0.2 arcsec rms. The early science flights of the observatory started in December 2010, and the observatory has in the meantime reached nearly 0.7 arcsec rms, which is sufficient for most of the SOFIA science instruments. NASA and DLR, the owners of SOFIA, are now planning a future four-year program to bring the pointing down to the ultimate 0.2 arcsec rms. This may be the right time to recall the history of the pointing requirement and its verification: the possibility of its achievement was assessed via early computer models and wind tunnel tests, later by computer-aided end-to-end simulations, and finally in the first commissioning flights some years ago. The paper recollects the tools used in the different project phases for the verification of the pointing performance, explains the achievements, and may give hints for the planning of the upcoming final pointing improvement phase.
Characteristic Boundary Conditions for ARO-1
1983-05-01
As shown in Fig. 3, the point designated II is the interior point that was used to define the barred coordinate system, evaluated at time t=... L. Jacocks, Calspan Field Services, Inc., May 1983. Final Report for Period October 1981 - September 1982. Approved for public release; distribution unlimited. Arnold Engineering Development Center, Arnold Air Force Station, Tennessee.
ERIC Educational Resources Information Center
Geber, Beverly
1991-01-01
Discusses changing nature of volunteers in Peter Drucker's book "Managing the Nonprofit Corporation." Points out that most volunteers have full-time jobs, families, very little leisure; they are not willing to do such routine work as stuffing envelopes; they want carefully defined projects with beginning and end. Discusses real…
47 CFR 51.331 - Notice of network changes: Timing of notice.
Code of Federal Regulations, 2012 CFR
2012-10-01
... changes at the make/buy point, as defined in paragraph (b) of this section, but at least 12 months before... change. (1) For purposes of this section, a product is any hardware or software for use in an incumbent...
Hazewinkel, Herman A W; van den Brom, Walter E; Theyse, Lars F H; Pollmeier, Matthias; Hanson, Peter D
2008-02-01
A randomized, placebo-controlled, four-period cross-over laboratory study involving eight dogs was conducted to confirm the effective analgesic dose of firocoxib, a selective COX-2 inhibitor, in a synovitis model of arthritis. Firocoxib was compared to vedaprofen and carprofen, and the effect, defined as a change in weight bearing measured via peak ground reaction, was evaluated at treatment dose levels. A lameness score on a five point scale was also assigned to the affected limb. Peak vertical ground reaction force was considered to be the most relevant measurement in this study. The firocoxib treatment group performed significantly better than placebo at the 3 h post-treatment time point and significantly better than placebo and carprofen at the 7 h post-treatment time point. Improvement in lameness score was also significantly better in the dogs treated with firocoxib than placebo and carprofen at both the 3 and 7 h post-treatment time points.
Tong, Allison; Sautenet, Benedicte; Poggio, Emilio D; Lentine, Krista L; Oberbauer, Rainer; Mannon, Roslyn; Murphy, Barbara; Padilla, Benita; Chow, Kai Ming; Marson, Lorna; Chadban, Steve; Craig, Jonathan C; Ju, Angela; Manera, Karine E; Hanson, Camilla S; Josephson, Michelle A; Knoll, Greg
2018-02-22
Graft loss, a critically important outcome for transplant recipients, is variably defined and measured, and incompletely reported in trials. We convened a consensus workshop on establishing a core outcome measure for graft loss for all trials in kidney transplantation. Twenty-five kidney transplant recipients/caregivers and 33 health professionals from eight countries participated. Transcripts were analyzed thematically. Five themes were identified. "Graft loss as a continuum" conceptualizes graft loss as a process, but requiring an endpoint defined as a discrete event. In "defining an event with precision and accuracy," loss of graft function requiring chronic dialysis (minimum 90 days) provided an objective and practical definition; re-transplant would capture preemptive transplantation; relisting was readily measured but would overestimate graft loss; and allograft nephrectomy was redundant in being preceded by dialysis. However, the thresholds for renal replacement therapy varied. Conservative management was regarded as too ambiguous and complex to use routinely. "Distinguishing death-censored graft loss" would ensure clarity and meaningfulness in interpreting results. "Consistent reporting for decision-making" by specifying time points and metrics (ie time to event) was suggested. "Ease of ascertainment and data collection" of the outcome from registries could support use of registry data to efficiently extend follow-up of trial participants. A practical and meaningful core outcome measure for graft loss may be defined as chronic dialysis or re-transplant, and distinguished from loss due to death. Consistent reporting of graft loss using standardized metrics and time points may improve the contribution of trials to decision-making in kidney transplantation.
Time of travel of solutes in selected reaches of the Sandusky River Basin, Ohio, 1972 and 1973
Westfall, Arthur O.
1976-01-01
A time of travel study of a 106-mile (171-kilometer) reach of the Sandusky River and a 39-mile (63-kilometer) reach of Tymochtee Creek was made to determine the time required for water released from Killdeer Reservoir on Tymochtee Creek to reach selected downstream points. In general, two dye sample runs were made through each subreach to define the time-discharge relation for approximating travel times at selected discharges within the measured range, and time-discharge graphs are presented for 38 subreaches. Graphs of dye dispersion and variation in relation to time are given for three selected sampling sites. For estimating travel time and velocities between points in the study reach, tables for selected flow durations are given. Duration curves of daily discharge for four index stations are presented to indicate the low-flow characteristics and for use in shaping downward extensions of the time-discharge curves.
Fermion Systems in Discrete Space-Time Exemplifying the Spontaneous Generation of a Causal Structure
NASA Astrophysics Data System (ADS)
Diethert, A.; Finster, F.; Schiefeneder, D.
As toy models for space-time at the Planck scale, we consider examples of fermion systems in discrete space-time which are composed of one or two particles defined on two up to nine space-time points. We study the self-organization of the particles as described by a variational principle both analytically and numerically. We find an effect of spontaneous symmetry breaking which leads to the emergence of a discrete causal structure.
Okun, Michele L; Kline, Christopher E; Roberts, James M; Wettlaufer, Barbara; Glover, Khaleelah; Hall, Martica
2013-12-01
Sleep deficiency is an emerging concept denoting a deficit in the quantity or quality of sleep. This may be particularly salient for pregnant women since they report considerable sleep complaints. Sleep deficiency is linked with morbidity, including degradations in psychosocial functioning, (e.g., depression and stress), which are recognized risk factors for adverse pregnancy outcomes. We sought to describe the frequency of sleep deficiency across early gestation (10-20 weeks) and whether sleep deficiency is associated with reports of more depressive symptoms and stress. Pregnant women (N=160) with no self-reported sleep or psychological disorder provided sleep data collected via diary and actigraphy during early pregnancy: 10-12, 14-16, and 18-20 weeks' gestation. Sleep deficiency was defined as short sleep duration, insufficient sleep, or insomnia. Symptoms of depression and stress were collected at the same three time points. Linear mixed effects models were used to analyze the data. Approximately 28%-38% met criteria for sleep deficiency for at least one time point in early gestation. Women who were sleep deficient across all time points reported more perceived stress than those who were not sleep deficient (p<0.01). Depressive symptoms were higher among women with diary-defined sleep deficiency across all time points (p=0.02). Sleep deficiency is a useful concept to describe sleep recognized to be disturbed in pregnancy. Women with persistent sleep deficiency appear to be at greater risk for impairments in psychosocial functioning during early gestation. These associations are important since psychosocial functioning is a recognized correlate of adverse pregnancy outcomes. Sleep deficiency may be another important risk factor for adverse pregnancy outcomes.
Ferreira, Rodrigo Wiltgen; Varela, Andrea Ramirez; Monteiro, Luciana Zaranza; Häfele, César Augusto; Santos, Simone José Dos; Wendt, Andrea; Silva, Inácio Crochemore Mohnsam
2018-01-01
The objective of this study was to identify inequalities in leisure-time physical activity and active commuting to school in Brazilian adolescents, as well as trends according to gender, type of school, maternal schooling, and geographic region, from 2009 to 2015. This was a descriptive study based on data from the Brazilian National School Health Survey (PeNSE) in 2009, 2012, and 2015. Students were defined as active in their leisure time when they practiced at least 60 minutes of physical activity a day on five or more of the seven days prior to the interview. Active commuting to school was defined as walking or biking to school on the week prior to the interview. The outcomes were stratified by gender, type of school, maternal schooling, and geographic region. Inequalities were assessed by differences and ratios between the estimates, as well as summary inequality indices. The 2009, 2012, and 2015 surveys included 61,301, 61,145, and 51,192 schoolchildren, respectively. Prevalence of leisure-time physical activity was 13.8% in 2009, 15.9% in 2012, and 14.7% in 2015; the rates for active commuting to school were 70.6%, 61.7%, and 66.7%, respectively. Boys showed 10 percentage points higher prevalence of leisure-time physical activity and 5 points higher active commuting to school than girls. Children of mothers with more schooling showed a mean of 10 percentage points higher prevalence of leisure-time physical activity than children of mothers with the lowest schooling and some 30 percentage points lower in relation to active commuting to school. The observed inequalities remained constant over the course of the period. The study identified socioeconomic and gender inequalities that remained constant throughout the period and which were specific to each domain of physical activity.
Radikova, Z; Koska, J; Huckova, M; Ksinantova, L; Imrich, R; Vigas, M; Trnovec, T; Langer, P; Sebokova, E; Klimes, I
2006-05-01
The demanding measurement of insulin sensitivity with clamp methods is impractical for identifying insulin-resistant subjects in the general population. Other approaches, such as fasting- or oral glucose tolerance test-derived insulin sensitivity indices, were proposed and validated against the euglycemic clamp. Nevertheless, a lack of reference values for these indices prevents their wider use in epidemiological studies and clinical practice. The aim of our study was therefore to define the cut-off points of insulin resistance indices as well as the ranges of the most frequently obtained values for selected indices. A standard 75 g oral glucose tolerance test was carried out in 1156 subjects from a Caucasian rural population with no previous evidence of diabetes or other dysglycemias. Insulin resistance/sensitivity indices (HOMA-IR, HOMA-IR2, ISI Cederholm, and ISI Matsuda) were calculated. The 75th percentile cut-off point defining insulin resistance corresponded to a HOMA-IR of 2.29 and a HOMA-IR2 of 1.21; the 25th percentile cut-offs for ISI Cederholm and ISI Matsuda were 57 and 5.0, respectively. For the first time, the cut-off points for selected indices and their most frequently obtained values were established for groups of subjects as defined by glucose homeostasis and BMI. Thus, insulin-resistant subjects can be identified using this simple approach.
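Of the indices named above, HOMA-IR and ISI Matsuda have simple published closed forms (HOMA-IR2, by contrast, comes from the HOMA2 computer model and has no closed-form expression). As a hedged illustration of how the reported cut-offs would be applied, a minimal sketch assuming the standard formulas and units:

```python
import math

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """Classic HOMA1-IR: fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def matsuda_isi(g0_mg_dl, i0_uU_ml, g_mean_mg_dl, i_mean_uU_ml):
    """ISI Matsuda: 10000 / sqrt(G0 * I0 * Gmean * Imean), glucose mg/dL, insulin uU/mL."""
    return 10000.0 / math.sqrt(g0_mg_dl * i0_uU_ml * g_mean_mg_dl * i_mean_uU_ml)

# Applying the study's cut-offs: flag insulin resistance when HOMA-IR exceeds
# the 75th percentile (2.29) or ISI Matsuda falls below the 25th (5.0).
print(homa_ir(4.8, 8.0) > 2.29)            # 1.71 -> not flagged
print(matsuda_isi(99, 10, 140, 60) < 5.0)  # ~3.5 -> flagged
```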
NASA Astrophysics Data System (ADS)
Moskal, P.; Zoń, N.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kamińska, D.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kowalski, P.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Raczyński, L.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Wiślicki, W.; Zieliński, M.
2015-03-01
A novel method of hit time and hit position reconstruction in scintillator detectors is described. The method is based on comparison of detector signals with results stored in a library of synchronized model signals registered for a set of well-defined positions of scintillation points. The hit position is reconstructed as the one corresponding to the signal from the library which is most similar to the measured signal. The time of the interaction is determined as a relative time between the measured signal and the most similar one in the library. The degree of similarity of measured and model signals is defined as the distance between points representing the measurement and model signals in the multi-dimensional measurement space. The novelty of the method also lies in the proposed way of synchronizing the model signals, enabling direct determination of the difference between the times of flight (TOF) of annihilation quanta from the annihilation point to the detectors. The introduced method was validated using experimental data obtained by means of the double strip prototype of the J-PET detector and a 22Na sodium isotope as a source of annihilation gamma quanta. The detector was built from plastic scintillator strips with dimensions of 5 mm×19 mm×300 mm, optically connected at both sides to photomultipliers, from which signals were sampled by means of a Serial Data Analyzer. Using the introduced method, a spatial resolution of about 1.3 cm (σ) and a TOF resolution of about 125 ps (σ) were established.
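The reconstruction step described (pick the library signal closest in the measurement space, read off its position, and take the residual offset as the relative time) can be sketched as follows. This is a schematic stand-in, not the J-PET code; the signal representation and the alignment-by-mean-offset are assumptions:

```python
import numpy as np

def reconstruct_hit(measured, library):
    """
    measured: sampled signal, e.g. threshold-crossing times (np.ndarray).
    library: list of {'signal': np.ndarray, 'position': float} entries
             registered at well-defined scintillation points and mutually
             synchronized (the synchronization itself is assumed done).
    Returns (hit_position, hit_time) taken from the most similar model signal.
    """
    best_pos, best_dt, best_d = None, None, np.inf
    for entry in library:
        dt = float(np.mean(measured - entry["signal"]))  # relative time shift
        # Distance in the multi-dimensional measurement space after the shift:
        d = float(np.linalg.norm(measured - entry["signal"] - dt))
        if d < best_d:
            best_pos, best_dt, best_d = entry["position"], dt, d
    return best_pos, best_dt

# Toy usage: three shape-varying model signals along the strip, one measurement.
lib = [{"signal": np.array([0.0, 1.0, 2.0]) * (1 + 0.002 * z), "position": z}
       for z in (-100.0, 0.0, 100.0)]
meas = lib[1]["signal"] + 0.5 + np.random.default_rng(0).normal(0, 1e-3, 3)
print(reconstruct_hit(meas, lib))   # position ~0.0, time offset ~0.5
```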
Cosmic infinity: a dynamical system approach
NASA Astrophysics Data System (ADS)
Bouhmadi-López, Mariam; Marto, João; Morais, João; Silva, César M.
2017-03-01
Dynamical system techniques are extremely useful for studying cosmology. It turns out that in most cases we deal with finite isolated fixed points corresponding to a given cosmological epoch. However, it is equally important to analyse the asymptotic behaviour of the universe. In this paper, we show how this can be carried out for 3-form models. In fact, we show that there are fixed points at infinity, mainly by introducing appropriate compactifications and defining a new time variable that washes away any potential divergence of the system. The richness of 3-form models also allows us to identify normally hyperbolic non-isolated fixed points. We apply this analysis to three physically interesting situations: (i) a pre-inflationary era; (ii) an inflationary era; (iii) the late-time dark matter/dark energy epoch.
ERIC Educational Resources Information Center
Rizkianto, Ilham; Zulkardi; Darmawijaya
2013-01-01
Previous studies have shown that when learning shapes for the first time, young children tend to use the prototype as the reference point for comparisons, but often fail when doing so since they do not yet think about the defining attributes or the geometric properties of the shapes. Most of the time, elementary students learn geometric…
Scaled Runge-Kutta algorithms for handling dense output
NASA Technical Reports Server (NTRS)
Horn, M. K.
1981-01-01
Low order Runge-Kutta algorithms are developed which determine the solution of a system of ordinary differential equations at any point within a given integration step, as well as at the end of each step. The scaled Runge-Kutta methods are designed to be used with existing Runge-Kutta formulas, using the derivative evaluations of these defining algorithms as the core of the system. For a slight increase in computing time, the solution may be generated within the integration step, improving the efficiency of the Runge-Kutta algorithms, since the step length need no longer be severely reduced to coincide with the desired output point. Scaled Runge-Kutta algorithms are presented for orders 3 through 5, along with accuracy comparisons between the defining algorithms and their scaled versions for a test problem.
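The scaled coefficients themselves are not reproduced in the abstract. As a generic illustration of dense output that reuses a step's derivative evaluations, here is a cubic Hermite interpolant wrapped around a classical RK4 step; this is a hypothetical lower-order stand-in for the scaled formulas, not the paper's method:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical RK4 step; returns y(t+h) plus the endpoint derivatives."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    y1 = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y1, k1, f(t + h, y1)

def dense_output(y0, y1, f0, f1, h, theta):
    """Cubic Hermite interpolant: approximate solution at t + theta*h, 0 <= theta <= 1."""
    return ((1 + 2 * theta) * (1 - theta) ** 2 * y0
            + theta * (1 - theta) ** 2 * h * f0
            + theta ** 2 * (3 - 2 * theta) * y1
            + theta ** 2 * (theta - 1) * h * f1)

# y' = y, y(0) = 1: interpolate at mid-step without shortening the step.
f = lambda t, y: y
y1, f0, f1 = rk4_step(f, 0.0, np.array([1.0]), 0.1)
print(dense_output(np.array([1.0]), y1, f0, f1, 0.1, 0.5), np.exp(0.05))
```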
Precise determination of time to reach viral load set point after acute HIV-1 infection.
Huang, Xiaojie; Chen, Hui; Li, Wei; Li, Haiying; Jin, Xia; Perelson, Alan S; Fox, Zoe; Zhang, Tong; Xu, Xiaoning; Wu, Hao
2012-12-01
The HIV viral load set point has long been used as a prognostic marker of disease progression and, more recently, as an end-point parameter in HIV vaccine clinical trials. The definition of set point, however, is variable. Moreover, the earliest time at which the set point is reached after the onset of infection has never been clearly defined. In this study, we obtained sequential plasma viral load data from 60 acutely HIV-infected Chinese patients among a cohort of men who have sex with men, mathematically determined viral load set point levels, and estimated the time to attain the set point after infection. We also compared the results derived from our models with those obtained from an empirical method. With a novel, uncomplicated mathematical model, we discovered that the time to attain the set point may vary from 21 to 119 days, depending on the patient's initial viral load trajectory. The viral load set points were 4.28 ± 0.86 and 4.25 ± 0.87 log10 copies per milliliter (P = 0.08), respectively, as determined by our model and the empirical method, suggesting excellent agreement between the old and new methods. We provide a novel method to estimate the viral load set point at the very early stage of HIV infection. Application of this model can accurately and reliably determine the set point, thus providing a new tool for physicians to better monitor early intervention strategies in acutely infected patients and for scientists to rationally design preventative vaccine studies.
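The paper's mathematical model is not given in the abstract. A minimal sketch of an empirical determination, assuming the set point is the plateau of the log10 viral load and the time to set point is the first visit from which measurements stay near that plateau (the window and tolerance are assumptions):

```python
import numpy as np

def time_to_set_point(days, log10_vl, window=3, tol=0.5):
    """
    Empirical stand-in: take the set point as the median log10 viral load of
    the last `window` visits, then report the first day from which all later
    measurements stay within `tol` log10 copies/mL of that level.
    """
    days, log10_vl = np.asarray(days), np.asarray(log10_vl)
    set_point = np.median(log10_vl[-window:])
    within = np.abs(log10_vl - set_point) <= tol
    for i in range(len(days)):
        if within[i:].all():
            return set_point, days[i]
    return set_point, None

# Peak viremia decaying to a plateau near 4.3 log10 copies/mL:
d = [10, 20, 30, 45, 60, 90, 120]
v = [6.8, 5.9, 5.0, 4.5, 4.3, 4.2, 4.35]
print(time_to_set_point(d, v))  # plateau level and the day it is reached
```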
How should Fitts' Law be applied to human-computer interaction?
NASA Technical Reports Server (NTRS)
Gillan, D. J.; Holden, K.; Adam, S.; Rudisill, M.; Magee, L.
1992-01-01
The paper challenges the notion that any Fitts' Law model can be applied generally to human-computer interaction, and proposes instead that applying Fitts' Law requires knowledge of the users' sequence of movements, direction of movement, and typical movement amplitudes as well as target sizes. Two experiments examined a text selection task with sequences of controlled movements (point-click and point-drag). For the point-click sequence, a Fitts' Law model that used the diagonal across the text object in the direction of pointing (rather than the horizontal extent of the text object) as the target size provided the best fit for the pointing time data, whereas for the point-drag sequence, a Fitts' Law model that used the vertical size of the text object as the target size gave the best fit. Dragging times were fitted well by Fitts' Law models that used either the vertical or horizontal size of the terminal character in the text object. Additional results of note were that pointing in the point-click sequence was consistently faster than in the point-drag sequence, and that pointing in either sequence was consistently faster than dragging. The discussion centres around the need to define task characteristics before applying Fitts' Law to an interface design or analysis, analyses of pointing and of dragging, and implications for interface design.
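For reference, Fitts' original formulation is MT = a + b log2(2A/W); the paper's argument is that the width W entering the index of difficulty should be chosen per movement sequence (e.g., the diagonal of the text object for point-click, its vertical extent for point-drag). A minimal fitting sketch with illustrative numbers, not the paper's data:

```python
import numpy as np

def index_of_difficulty(amplitude, width):
    """Fitts' original formulation: ID = log2(2A / W) bits."""
    return np.log2(2.0 * np.asarray(amplitude) / np.asarray(width))

def fit_fitts(amplitude, width, movement_time):
    """Least-squares fit of MT = a + b * ID; returns (a, b)."""
    id_bits = index_of_difficulty(amplitude, width)
    b, a = np.polyfit(id_bits, movement_time, 1)
    return a, b

# The same pointing data scored with two target-size definitions, e.g. the
# object's horizontal extent vs. its diagonal in the movement direction:
A = [80, 160, 320, 640]          # movement amplitudes (px)
MT = [0.42, 0.51, 0.60, 0.71]    # observed movement times (s)
print(fit_fitts(A, [20, 20, 20, 20], MT))   # W = horizontal extent
print(fit_fitts(A, [36, 36, 36, 36], MT))   # W = diagonal of the text object
```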
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
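The dynamic programming scheme outlined above can be sketched generically. The measure function and the segment-difference computation are abstracted into a precomputed matrix; this illustrates the standard optimal-segmentation recursion, not the paper's specialized algorithms:

```python
import numpy as np

def optimal_segmentation(diff, k):
    """
    diff[i][j]: segment difference of the segment spanning time points i..j
    (precomputed with one of the paper's measure functions; abstracted here).
    Returns the minimal total difference of a k-segment partition of n points
    and the segment boundaries, via dynamic programming.
    """
    n = len(diff)
    cost = np.full((k + 1, n + 1), np.inf)
    back = np.zeros((k + 1, n + 1), dtype=int)
    cost[0][0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = cost[seg - 1][i] + diff[i][j - 1]
                if c < cost[seg][j]:
                    cost[seg][j], back[seg][j] = c, i
    bounds, j = [], n                    # recover segment boundaries
    for seg in range(k, 0, -1):
        bounds.append((back[seg][j], j - 1))
        j = back[seg][j]
    return cost[k][n], bounds[::-1]

# Example: 5 time points, precomputed pairwise segment differences.
rng = np.random.default_rng(1)
d = np.triu(rng.random((5, 5)))          # only i <= j entries are used
print(optimal_segmentation(d, k=2))
```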
Exploring the bulk in AdS /CFT : A covariant approach
NASA Astrophysics Data System (ADS)
Engelhardt, Netta
2017-03-01
I propose a general, covariant way of defining when one region is "deeper in the bulk" than another. This definition is formulated outside of an event horizon (or in the absence thereof) in generic geometries; it may be applied to both points and surfaces, and it may be used to compare the depth of bulk points or surfaces relative to a particular boundary subregion or relative to the entire boundary. Using the recently proposed "light-cone cut" formalism, the comparative depth between two bulk points can be determined from the singularity structure of Lorentzian correlators in the dual field theory. I prove that, by this definition, causal wedges of progressively larger regions probe monotonically deeper in the bulk. The definition furthermore matches expectations in pure AdS and in static AdS black holes with isotropic spatial slices, where a well-defined holographic coordinate exists. In terms of holographic renormalization group flow, this new definition of bulk depth makes contact with coarse graining over both large distances and long time scales.
Time delay of critical images in the vicinity of cusp point of gravitational-lens systems
NASA Astrophysics Data System (ADS)
Alexandrov, A.; Zhdanov, V.
2016-12-01
We consider approximate analytical formulas for the time delays of critical images of a point source in the neighborhood of a cusp caustic. We discuss the zero, first and second approximations in powers of a parameter that defines the proximity of the source to the cusp. These formulas link the time delay with characteristics of the lens potential. The formula of the zero approximation was obtained by Congdon, Keeton & Nordgren (MNRAS, 2008). In the case of a general lens potential we derive the first-order correction thereto. If the potential is symmetric with respect to the cusp axis, then this correction is identically equal to zero. For this case, we obtain the second-order correction. The relations found are illustrated by a simple model example.
Space and time renormalization in phase transition dynamics
Francuz, Anna; Dziarmaga, Jacek; Gardas, Bartłomiej; ...
2016-02-18
Here, when a system is driven across a quantum critical point at a constant rate, its evolution must become nonadiabatic as the relaxation time τ diverges at the critical point. According to the Kibble-Zurek mechanism (KZM), the emerging post-transition excited state is characterized by a finite correlation length ξ̂ set at the time t̂=τ̂ when the critical slowing down makes it impossible for the system to relax to the equilibrium defined by the changing parameters. This observation naturally suggests a dynamical scaling similar to the renormalization familiar from equilibrium critical phenomena. We provide evidence for such KZM-inspired spatiotemporal scaling by investigating an exact solution of the transverse field quantum Ising chain in the thermodynamic limit.
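The KZM scales referred to take, in their standard form (for a quench linear in time with quench time $\tau_Q$, correlation-length exponent $\nu$ and dynamical exponent $z$):

```latex
% Freeze-out time and correlation length for a linear quench:
\hat{t} \;\sim\; \tau_Q^{\,\nu z/(1+\nu z)}, \qquad
\hat{\xi} \;\sim\; \tau_Q^{\,\nu/(1+\nu z)} .
% Transverse-field quantum Ising chain: \nu = z = 1, hence
% \hat{t} \sim \sqrt{\tau_Q} and \hat{\xi} \sim \sqrt{\tau_Q}.
```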
Lost in Virtual Reality: Pathfinding Algorithms Detect Rock Fractures and Contacts in Point Clouds
NASA Astrophysics Data System (ADS)
Thiele, S.; Grose, L.; Micklethwaite, S.
2016-12-01
UAV-based photogrammetric and LiDAR techniques provide high resolution 3D point clouds and ortho-rectified photomontages that can capture surface geology in outstanding detail over wide areas. Automated and semi-automated methods are vital to extract full value from these data in practical time periods, though the nuances of geological structures and materials (natural variability in colour and geometry, soft and hard linkage, shadows and multiscale properties) make this a challenging task. We present a novel method for computer assisted trace detection in dense point clouds, using a lowest cost path solver to "follow" fracture traces and lithological contacts between user defined end points. This is achieved by defining a local neighbourhood network where each point in the cloud is linked to its neighbours, and then using a least-cost path algorithm to search this network and estimate the trace of the fracture or contact. A variety of different algorithms can then be applied to calculate the best fit plane, produce a fracture network, or map properties such as roughness, curvature and fracture intensity. Our prototype of this method (Fig. 1) suggests the technique is feasible and remarkably good at following traces under non-optimal conditions such as variable-shadow, partial occlusion and complex fracturing. Furthermore, if a fracture is initially mapped incorrectly, the user can easily provide further guidance by defining intermediate waypoints. Future development will include optimization of the algorithm to perform well on large point clouds and modifications that permit the detection of features such as step-overs. We also plan on implementing this approach in an interactive graphical user environment.
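At its core, the "follow a trace between user-defined end points" step is a least-cost path query on a neighbourhood graph. A minimal SciPy sketch, in which the edge-cost weighting (distance inflated by a per-point penalty such as brightness) is an assumption, not the authors' published cost function:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def trace_between(points, cost, start, end, k=10):
    """
    points: (n,3) cloud; cost: per-point penalty (e.g. brightness, so dark
    fracture traces are cheap to follow). start/end: indices of end points.
    """
    tree = cKDTree(points)
    dist, nbr = tree.query(points, k=k + 1)         # self + k neighbours
    rows = np.repeat(np.arange(len(points)), k)
    cols = nbr[:, 1:].ravel()
    w = dist[:, 1:].ravel() * (1.0 + 0.5 * (cost[rows] + cost[cols]))
    graph = csr_matrix((w, (rows, cols)), shape=(len(points),) * 2)
    _, pred = dijkstra(graph, directed=False, indices=start,
                       return_predecessors=True)
    path, i = [], end
    while i != start and i >= 0:                    # walk predecessors back
        path.append(i)
        i = pred[i]
    return [start] + path[::-1]

pts = np.random.default_rng(0).random((200, 3))
print(trace_between(pts, cost=pts[:, 2], start=0, end=199)[:5])
```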
Albergotti, William G.; Gooding, William E.; Kubik, Mark W.; Geltzeiler, Mathew; Kim, Seungwon; Duvvuri, Umamaheswar; Ferris, Robert L.
2017-01-01
IMPORTANCE Transoral robotic surgery (TORS) is increasingly employed as a treatment option for squamous cell carcinoma of the oropharynx (OPSCC). Measures of surgical learning curves are needed particularly as clinical trials using this technology continue to evolve. OBJECTIVE To assess learning curves for the oncologic TORS surgeon and to identify the number of cases needed to identify the learning phase. DESIGN, SETTING, AND PARTICIPANTS A retrospective review of all patients who underwent TORS for OPSCC at the University of Pittsburgh Medical Center between March 2010 and March 2016. Cases were excluded for involvement of a subsite outside of the oropharynx, for nonmalignant abnormality or nonsquamous histology, unknown primary, no tumor in the main specimen, free flap reconstruction, and for an inability to define margin status. EXPOSURES Transoral robotic surgery for OPSCC. MAIN OUTCOMES AND MEASURES Primary learning measures defined by the authors include the initial and final margin status and time to resection of main surgical specimen. A cumulative sum learning curve was developed for each surgeon for each of the study variables. The inflection point of each surgeon’s curve was considered to be the point signaling the completion of the learning phase. RESULTS There were 382 transoral robotic procedures identified. Of 382 cases, 160 met our inclusion criteria: 68 for surgeon A, 37 for surgeon B, and 55 for surgeon C. Of the 160 included patients, 125 were men and 35 were women. The mean (SD) age of participants was 59.4 (9.5) years. Mean (SD) time to resection including robot set-up was 79 (36) minutes. The inflection points for the final margin status learning curves were 27 cases (surgeon A) and 25 cases (surgeon C). There was no inflection point for surgeon B for final margin status. Inflection points for mean time to resection were: 39 cases (surgeon A), 30 cases (surgeon B), and 27 cases (surgeon C). CONCLUSIONS AND RELEVANCE Using metrics of positive margin rate and time to resection of the main surgical specimen, the learning curve for TORS for OPSCC is surgeon-specific. Inflection points for most learning curves peak between 20 and 30 cases. PMID:28196200
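A cumulative sum (CUSUM) learning curve of the kind described can be sketched as follows; the acceptable failure rate and the peak-as-inflection rule are simplifying assumptions, not the study's exact charting method:

```python
import numpy as np

def cusum_curve(failures, acceptable_rate=0.15):
    """
    failures: 1 if a case had a positive final margin, else 0, in case order.
    Returns the running sum of (observed - acceptable); a rising curve means
    underperformance, and its peak is a simple proxy for the end of the
    learning phase.
    """
    s = np.cumsum(np.asarray(failures) - acceptable_rate)
    return s, int(np.argmax(s)) + 1   # curve and candidate inflection case

margins = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
curve, inflection = cusum_curve(margins)
print(inflection)   # case number at which the curve turns downward
```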
Mandel, Micha; Gauthier, Susan A; Guttmann, Charles R G; Weiner, Howard L; Betensky, Rebecca A
2007-12-01
The expanded disability status scale (EDSS) is an ordinal score that measures progression in multiple sclerosis (MS). Progression is defined as reaching EDSS of a certain level (absolute progression) or increasing of one point of EDSS (relative progression). Survival methods for time to progression are not adequate for such data since they do not exploit the EDSS level at the end of follow-up. Instead, we suggest a Markov transitional model applicable for repeated categorical or ordinal data. This approach enables derivation of covariate-specific survival curves, obtained after estimation of the regression coefficients and manipulations of the resulting transition matrix. Large sample theory and resampling methods are employed to derive pointwise confidence intervals, which perform well in simulation. Methods for generating survival curves for time to EDSS of a certain level, time to increase of EDSS of at least one point, and time to two consecutive visits with EDSS greater than three are described explicitly. The regression models described are easily implemented using standard software packages. Survival curves are obtained from the regression results using packages that support simple matrix calculation. We present and demonstrate our method on data collected at the Partners MS center in Boston, MA. We apply our approach to progression defined by time to two consecutive visits with EDSS greater than three, and calculate crude (without covariates) and covariate-specific curves.
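The core manipulation (turn an estimated transition matrix into a survival curve by making the progression states absorbing and iterating) can be sketched as follows; this toy version omits the regression on covariates that the paper's model includes:

```python
import numpy as np

def survival_from_transitions(P, start_state, absorbing, horizon):
    """
    P: (s,s) one-visit transition matrix for ordinal EDSS states.
    Returns P(progression not yet reached) at visits 1..horizon, where
    'reached' means entering any state in `absorbing` (e.g. EDSS >= level).
    """
    P = P.copy()
    for a in absorbing:            # make progression states absorbing
        P[a, :] = 0.0
        P[a, a] = 1.0
    p = np.zeros(P.shape[0])
    p[start_state] = 1.0
    surv = []
    for _ in range(horizon):
        p = p @ P
        surv.append(1.0 - p[list(absorbing)].sum())
    return np.array(surv)

# Three coarse states (EDSS <3, 3-5.5, >=6), progression = state 2:
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.85, 0.10],
              [0.00, 0.00, 1.00]])
print(survival_from_transitions(P, 0, {2}, horizon=5))
```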
South Dakota Student Learning Objectives Handbook
ERIC Educational Resources Information Center
Gill, Matt; Outka, Janeen; McCorkle, Mary
2015-01-01
Student growth is one of two essential components of South Dakota's Teacher and Principal Effectiveness Systems. In the state systems, student growth is defined as a positive change in student achievement between two or more points in time. "The South Dakota SLO Handbook" provides support and guidance to public schools and school…
Least-mean-square spatial filter for IR sensors.
Takken, E H; Friedman, D; Milton, A F; Nitzberg, R
1979-12-15
A new least-mean-square filter is defined for signal-detection problems. The technique is proposed for scanning IR surveillance systems operating in poorly characterized but primarily low-frequency clutter interference. Near-optimal detection of point-source targets is predicted both for continuous-time and sampled-data systems.
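The underlying least-mean-square recursion is standard. A sketch of a one-step LMS predictor used as a clutter whitener, with a normalized step added for stability; this illustrates the general principle, not the specific filter derived in the paper:

```python
import numpy as np

def lms_detect(x, n_taps=8, mu=0.5):
    """
    One-step linear predictor adapted with the normalized least-mean-square
    rule w <- w + mu * e * x / ||x||^2. The filter learns the low-frequency
    clutter, so a point-source target shows up as a spike in the error e.
    """
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        window = x[n - n_taps:n][::-1]            # most recent sample first
        e[n] = x[n] - w @ window                  # prediction residual
        w += mu * e[n] * window / (window @ window + 1e-12)
    return e

rng = np.random.default_rng(0)
clutter = np.cumsum(rng.normal(0.0, 0.1, 2000))   # slowly varying clutter
x = clutter.copy()
x[1200] += 5.0                                    # point-source target
print(int(np.argmax(np.abs(lms_detect(x)))))      # ~1200
```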
Winning in Time: Enabling Naturalistic Decision Making in Command and Control
2000-11-01
George, Darren; Dixon, Sinikka; Stansal, Emory; Gelb, Shannon Lund; Pheri, Tabitha
2008-01-01
A sample of 231 students attending a private liberal arts university in central Alberta, Canada, completed a 5-day time diary and a 71-item questionnaire assessing the influence of personal, cognitive, and attitudinal factors on success. The authors used 3 success measures: cumulative grade point average (GPA), Personal Success--each participant's rating of congruence between stated goals and progress toward those goals--and Total Success--a measure that weighted GPA and Personal Success equally. The greatest predictors of GPA were time-management skills, intelligence, time spent studying, computer ownership, less time spent in passive leisure, and a healthy diet. Predictors of Personal Success scores were clearly defined goals, overall health, personal spirituality, and time-management skills. Predictors of Total Success scores were clearly defined goals, time-management skills, less time spent in passive leisure, healthy diet, waking up early, computer ownership, and less time spent sleeping. Results suggest alternatives to traditional predictors of academic success.
The frontotemporal syndrome of ALS is associated with poor survival.
Govaarts, Rosanne; Beeldman, Emma; Kampelmacher, Mike J; van Tol, Marie-Jose; van den Berg, Leonard H; van der Kooi, Anneke J; Wijkstra, Peter J; Zijnen-Suyker, Marianne; Cobben, Nicolle A M; Schmand, Ben A; de Haan, Rob J; de Visser, Marianne; Raaphorst, Joost
2016-12-01
Thirty percent of ALS patients have a frontotemporal syndrome (FS), defined as behavioral changes or cognitive impairment. Despite previous studies, there are no firm conclusions on the effect of the FS on survival and the use of non-invasive ventilation (NIV) in ALS. We examined the effect of the FS on survival and the start and duration of NIV in ALS. Behavioral changes were defined as >22 points on the ALS-Frontotemporal-Dementia-Questionnaire or ≥3 points on ≥2 items of the Neuropsychiatric Inventory. Cognitive impairment was defined as below the fifth percentile on ≥2 tests of executive function, memory or language. Classic ALS was defined as ALS without the frontotemporal syndrome. We performed survival analyses from symptom onset and time from NIV initiation, respectively, to death. The impact of the explanatory variables on survival and NIV initiation were examined using Cox proportional hazards models. We included 110 ALS patients (76 men) with a mean age of 62 years. Median survival time was 4.3 years (95 % CI 3.53-5.13). Forty-seven patients (43 %) had an FS. Factors associated with shorter survival were FS, bulbar onset, older age at onset, short time to diagnosis and a C9orf72 repeat expansion. The adjusted hazard ratio (HR) for the FS was 2.29 (95 % CI 1.44-3.65, p < 0.001) in a multivariate model. Patients with an FS had a shorter survival after NIV initiation (adjusted HR 2.70, 95 % CI 1.04-4.67, p = 0.04). In conclusion, there is an association between the frontotemporal syndrome and poor survival in ALS, which remains present after initiation of NIV.
The Role of Astro-Geodetic in Precise Guidance of Long Tunnels
NASA Astrophysics Data System (ADS)
Mirghasempour, M.; Jafari, A. Y.
2015-12-01
One of the prime aspects of surveying projects is the guidance of the headings of a long tunnel driven from different directions so that all of them finally meet at a specific place. Because of its particular conditions, this kind of underground surveying differs from surface surveying in several respects, including the improper geometry of underground traverses, low-precision measurements of direction and length due to conditions such as refraction, and differences in gravity between an underground point and the corresponding point on the ground (in both magnitude and direction). To solve these problems, astro-geodesy, a branch of geodesy, can help surveying engineers. In this article, the role of astronomy is defined for two subjects: 1- Azimuth determination of directions from the entrance and exit networks of the tunnel, and calibration of gyro-theodolites for use in underground traverses: by astronomical methods, the azimuth of a direction can be determined with an accuracy of 0.5 arcseconds, whereas no gyroscope today can measure azimuth with this accuracy; for instance, the accuracy of the most precise gyroscope (Gyromat 5000) is 1.2 cm over a distance of one kilometre (2.4 arcseconds). Furthermore, the calibration methods mentioned in this article have significant effects on underground traverses. 2- Height transfer between the entrance point and the exit point is problematic and time consuming; for example, in a 3 km long tunnel (on the Arak-Khoram Abad freeway), relating the entrance point to the exit point requires about 90 km of levelling. Another example of this tedious and time-consuming levelling is the Kerman tunnel: this tunnel is 36 km long, but transferring the entrance-point height to the exit point requires 150 km of levelling. According to this paper, the solution to this difficulty is the application of astro-geodesy and the determination of the deflection of the vertical with the TZK2-D digital zenith camera system. These two elements make it possible to define the geoid profile along the tunnel azimuth between the entrance and exit of the tunnel; by doing this, surveying engineers are able to transfer the entrance-point height to the exit point of the tunnel in the easiest way.
van de Glind, Esther M M; Willems, Hanna C; Eslami, Saeid; Abu-Hanna, Ameen; Lems, Willem F; Hooft, Lotty; de Rooij, Sophia E; Black, Dennis M; van Munster, Barbara C
2016-05-01
For physicians dealing with patients with a limited life expectancy, knowing the time to benefit (TTB) of preventive medication is essential to support treatment decisions. The aim of this study was to investigate the usefulness of statistical process control (SPC) for determining the TTB in relation to fracture risk with alendronate versus placebo in postmenopausal women. We performed a post hoc analysis of the Fracture Intervention Trial (FIT), a randomized, controlled trial that investigated the effect of alendronate versus placebo on fracture risk in postmenopausal women. We used SPC, a statistical method used for monitoring processes for quality control, to determine if and when the intervention group benefited significantly more than the control group. SPC discriminated between the normal variations over time in the numbers of fractures in both groups and the variations that were attributable to alendronate. The TTB was defined as the time point from which the cumulative difference in the number of clinical fractures remained greater than the upper control limit on the SPC chart. For the total group, the TTB was 11 months. For patients aged ≥70 years, the TTB was 8 months [absolute risk reduction (ARR) = 1.4%]; for patients aged <70 years, it was 19 months (ARR = 0.7%). SPC is a clear and understandable graphical method to determine the TTB. Its main advantage is that there is no need to define a prespecified time point, as is the case in traditional survival analyses. Prescribing alendronate to patients who are aged ≥70 years is useful because the TTB shows that they will benefit after 8 months. Investigators should report the TTB to simplify clinical decision making.
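A sketch of the SPC idea, assuming monthly fracture counts per arm and a conventional 3-sigma control limit for a cumulative sum of independent monthly differences (the trial's exact limit construction may differ):

```python
import numpy as np

def time_to_benefit(placebo_events, treated_events, sigma_mult=3.0):
    """
    Track the cumulative difference (placebo - treated) in monthly fracture
    counts; the control limit grows as sqrt(t) for a cumulative sum.
    TTB = first month after which the cumulative difference stays above it.
    """
    diff = np.asarray(placebo_events) - np.asarray(treated_events)
    cum = np.cumsum(diff)
    ucl = sigma_mult * diff.std(ddof=1) * np.sqrt(np.arange(1, len(diff) + 1))
    above = cum > ucl
    for t in range(len(cum)):
        if above[t:].all():
            return t + 1          # months, 1-indexed
    return None

placebo = [3, 4, 5, 4, 6, 5, 7, 6, 6, 7, 8, 7]
treated = [3, 4, 4, 3, 4, 3, 4, 3, 3, 4, 4, 3]
print(time_to_benefit(placebo, treated))   # month at which benefit persists
```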
Fundamentals of continuum mechanics – classical approaches and new trends
NASA Astrophysics Data System (ADS)
Altenbach, H.
2018-04-01
Continuum mechanics is a branch of mechanics that deals with the analysis of the mechanical behavior of materials modeled as a continuous manifold. Continuum mechanics models mostly begin by introducing a three-dimensional Euclidean space. The points within this region are defined as material points with prescribed properties. Each material point is characterized by a position vector which is continuous in time. Thus, the body changes in a way which is realistic: the motion is globally invertible at all times and orientation-preserving, so that the body cannot intersect itself, and transformations which produce mirror reflections are excluded, as they are not possible in nature. For the mathematical formulation of the model, the motion is also assumed to be twice continuously differentiable, so that differential equations describing the motion may be formulated. Finally, the kinematical relations, the balance equations, the constitutive and evolution equations, and the boundary and/or initial conditions should be defined. If the physical fields are non-smooth, jump conditions must be taken into account. The basic equations of continuum mechanics are presented following a short introduction. Additionally, some examples of solid deformable continua are discussed within the presentation. Finally, advanced models of continuum mechanics are introduced. The paper is dedicated to Alexander Manzhirov's 60th birthday.
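In the standard notation, the kinematic requirements stated in words above read:

```latex
% Motion of a material point X (twice continuously differentiable in t):
\mathbf{x} \;=\; \boldsymbol{\chi}(\mathbf{X}, t), \qquad
\mathbf{F} \;=\; \frac{\partial \boldsymbol{\chi}}{\partial \mathbf{X}}, \qquad
J \;=\; \det \mathbf{F} \;>\; 0 \quad \text{for all } t.
% J > 0 encodes global invertibility and orientation preservation:
% the body cannot intersect itself and mirror reflections are excluded.
```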
Pinkawa, Michael; Piroth, Marc D; Holy, Richard; Klotz, Jens; Djukic, Victoria; Corral, Nuria Escobar; Caffaro, Mariana; Winz, Oliver H; Krohn, Thomas; Mottaghy, Felix M; Eble, Michael J
2012-01-30
In comparison to conventional whole-prostate dose escalation, an integrated boost to the macroscopic malignant lesion might potentially improve tumor control rates without increasing toxicity. Quality of life after radiotherapy (RT) with vs. without an (18)F-choline PET-CT-detected simultaneous integrated boost (SIB) was prospectively evaluated in this study. Whole-body images were acquired in the supine position 1 h after injection of 178-355 MBq of (18)F-choline. The SIB was defined by a tumor-to-background uptake value ratio > 2 (GTV(PET)). A dose of 76 Gy was prescribed to the prostate (PTV(prostate)) in 2 Gy fractions, with or without a SIB up to 80 Gy. Patients treated with (n = 46) vs. without (n = 21) SIB were surveyed prospectively before (A), at the last day of RT (B), and a median of two (C) and 19 months (D) after RT to compare QoL changes, applying a validated questionnaire (EPIC - expanded prostate cancer index composite). With a median cut-off standard uptake value (SUV) of 3, a median GTV(PET) of 4.0 cm(3) and PTV(boost) (GTV(PET) with margins) of 17.3 cm(3) were defined. No significant differences were found for patients treated with vs. without SIB regarding urinary and bowel QoL changes at times B, C and D (mean differences ≤3 points for all comparisons). Significantly decreasing acute urinary and bowel score changes (mean changes > 5 points in comparison to the baseline level at time A) were found for patients with and without SIB. However, long-term urinary and bowel QoL (time D) did not differ relative to baseline levels, with mean urinary and bowel function score changes < 3 points in both groups (median changes = 0 points). Only sexual function scores decreased significantly (> 5 points) at time D. Treatment planning with (18)F-choline PET-CT allows a dose escalation to a macroscopic intraprostatic lesion without significantly increasing toxicity.
Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian
2018-03-01
Current point cloud registration software has high hardware requirements and a heavy, highly interactive workload, and the source code of the software with better processing results is not open. In view of this, a two-step registration method combining a normal-vector distribution feature with a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. The method uses the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and a calculation model for the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix that completes the rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
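A rough-then-fine pipeline of this general kind (FPFH-based coarse alignment refined by ICP) can be sketched with the open-source Open3D library. This is not the authors' implementation, and the pipeline API shown is version-dependent (names as in Open3D 0.13 or later are assumed):

```python
import open3d as o3d

def register_two_stations(source, target, voxel=0.05):
    """Coarse FPFH-feature alignment (RANSAC) refined by ICP."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down,
            o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, fpfh

    src, src_f = preprocess(source)
    tgt, tgt_f = preprocess(target)
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_f, tgt_f, True, 1.5 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    fine = o3d.pipelines.registration.registration_icp(
        source, target, 0.4 * voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return fine.transformation
```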
Riemannian multi-manifold modeling and clustering in brain networks
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Salsabilian, Shiva; Wack, David S.; Muldoon, Sarah F.; Baidoo-Williams, Henry E.; Vettel, Jean M.; Cieslak, Matthew; Grafton, Scott T.
2017-08-01
This paper introduces Riemannian multi-manifold modeling in the context of brain-network analytics: brain-network time series yield features which are modeled as points lying in or close to a union of a finite number of submanifolds within a known Riemannian manifold. Distinguishing disparate time series thus amounts to clustering multiple Riemannian submanifolds. To this end, two feature-generation schemes for brain-network time series are put forth. The first one is motivated by Granger-causality arguments and uses an auto-regressive moving average model to map low-rank linear vector subspaces, spanned by column vectors of appropriately defined observability matrices, to points in the Grassmann manifold. The second one utilizes (non-linear) dependencies among network nodes by introducing kernel-based partial correlations to generate points in the manifold of positive-definite matrices. Based on recently developed research on clustering Riemannian submanifolds, an algorithm is provided for distinguishing time series based on their Riemannian-geometry properties. Numerical tests on time series, synthetically generated from real brain-network structural connectivity matrices, reveal that the proposed scheme outperforms classical and state-of-the-art techniques in clustering brain-network states/structures.
Defining the IEEE-854 floating-point standard in PVS
NASA Technical Reports Server (NTRS)
Miner, Paul S.
1995-01-01
A significant portion of the ANSI/IEEE-854 Standard for Radix-Independent Floating-Point Arithmetic is defined in PVS (Prototype Verification System). Since IEEE-854 is a generalization of the ANSI/IEEE-754 Standard for Binary Floating-Point Arithmetic, the definition of IEEE-854 in PVS also formally defines much of IEEE-754. This collection of PVS theories provides a basis for machine-checked verification of floating-point systems. This formal definition illustrates that formal specification techniques are sufficiently advanced that it is reasonable to consider their use in the development of future standards.
[Proposal of a costing method for the provision of sterilization in a public hospital].
Bauler, S; Combe, C; Piallat, M; Laurencin, C; Hida, H
2011-07-01
To refine the billing of institutions whose sterilization operations are outsourced, a sterilization cost approach was developed. The aim of the study is to determine the value of a sterilization unit (one point "S"), which evolves according to investments, quantities processed, and types of instrumentation or packaging. The preparation time was selected from all sub-processes of sterilization to determine the value of one point S. The preparation times of sterilized large and small containers and of pouches were recorded. The reference time corresponds to one pouch (equal to one point S). Simultaneously, the annual operating cost of sterilization was defined and divided into several areas of expenditure: employees, equipment and building depreciation, supplies, and maintenance. A total of 136 container crossing times were measured. The time to prepare a pouch was estimated at one minute (one S). A small container represents four S and a large container represents 10 S. By dividing the operating cost of sterilization by the total number of sterilization points over a given period, the cost of one S can be determined. This method differs from the traditional costing method in sterilization services by considering each item of expenditure. This point S will be the base for billing subcontracted work to other institutions.
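A worked numeric sketch of the costing rule, with illustrative figures only (the paper's actual totals are not given here):

```python
# Point weights from the study: pouch = 1 S, small container = 4 S,
# large container = 10 S.
POINTS = {"pouch": 1, "small_container": 4, "large_container": 10}

annual_operating_cost = 450_000.0   # EUR: staff, depreciation, supplies...
annual_points = 900_000             # total S produced over the same period
cost_per_S = annual_operating_cost / annual_points   # EUR per point S

# Billing a subcontracted institution for one month of activity:
activity = {"pouch": 5_000, "small_container": 800, "large_container": 120}
bill = sum(n * POINTS[k] for k, n in activity.items()) * cost_per_S
print(round(cost_per_S, 3), round(bill, 2))   # 0.5 EUR/S, 4700.0 EUR
```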
A Point Rainfall Generator With Internal Storm Structure
NASA Astrophysics Data System (ADS)
Marien, J. L.; Vandewiele, G. L.
1986-04-01
A point rainfall generator is a probabilistic model for the time series of rainfall as observed at one geographical point. The main purpose of such a model is to generate long synthetic sequences of rainfall for simulation studies. The present generator is a continuous-time model based on 13.5 years of 10-min point rainfalls observed in Belgium and digitized with a resolution of 0.1 mm. The present generator attempts to model all features of the rainfall time series which are important for flood studies as accurately as possible. The original aspects of the model are, on the one hand, the way in which storms are defined and, on the other hand, the theoretical model for the internal storm characteristics. The storm definition has the advantage that the important characteristics of successive storms are fully independent and very precisely modelled, even on time bases as small as 10 min. The model of the internal storm characteristics has a strong theoretical structure. This fact better justifies the extrapolation of the model to severe storms, for which the data are very sparse. This can be important when using the model to simulate severe flood events.
Kyle, Simon D; Miller, Christopher B; Rogers, Zoe; Siriwardena, A Niroshan; Macmahon, Kenneth M; Espie, Colin A
2014-02-01
To investigate whether sleep restriction therapy (SRT) is associated with reduced objective total sleep time (TST), increased daytime somnolence, and impaired vigilance. Within-subject, noncontrolled treatment investigation. Sleep research laboratory. Sixteen patients [10 female, mean age = 47.1 (10.8) y] with well-defined psychophysiological insomnia (PI), reporting TST ≤ 6 h. Patients were treated with single-component SRT over a 4-w protocol, sleeping in the laboratory for 2 nights prior to treatment initiation and for 3 nights (SRT nights 1, 8, and 22) during the acute interventional phase. The psychomotor vigilance task (PVT) was completed at seven defined time points [day 0 (baseline), days 1, 7, 8, 21, 22 (acute treatment), and day 84 (3 mo)]. The Epworth Sleepiness Scale (ESS) was completed at baseline, w 1-4, and 3 mo. Subjective sleep outcomes and global insomnia severity improved significantly from before to after SRT. There was, however, a robust decrease in PSG-defined TST during acute implementation of SRT, by an average of 91 min on night 1, 78 min on night 8, and 69 min on night 22, relative to baseline (P < 0.001; effect size range = 1.60-1.80). During SRT, PVT lapses were significantly increased from baseline (at three of five assessment points, all P < 0.05; effect size range = 0.69-0.78), returning to baseline levels by 3 mo (P = 0.43). A similar pattern was observed for reaction time (RT), with RTs slowing during acute treatment (at four of five assessment points, all P < 0.05; effect size range = 0.57-0.89) and returning to pretreatment levels at 3 mo (P = 0.78). ESS scores were increased at w 1, 2, and 3 (relative to baseline; all P < 0.05); by 3 mo, sleepiness had returned to baseline (normative) levels (P = 0.65). For the first time, we show that acute sleep restriction therapy is associated with reduced objective total sleep time, increased daytime sleepiness, and objective performance impairment. Our data have important implications for implementation guidelines around the safe and effective delivery of cognitive behavioral therapy for insomnia.
Split delivery vehicle routing problem with time windows: a case study
NASA Astrophysics Data System (ADS)
Latiffianti, E.; Siswanto, N.; Firmandani, R. A.
2018-04-01
This paper aims to implement an extension of the VRP, the so-called split delivery vehicle routing problem (SDVRP) with time windows, in a case study involving pickups and deliveries of workers from several points of origin to several destinations. Each origin represents a bus stop and each destination represents either a site or an office location. An integer linear programming formulation of the SDVRP is presented. The solution was generated in three stages: defining the starting points, assigning buses, and solving the SDVRP with time windows using an exact method. Although the overall computational time was relatively lengthy, the results indicated that the produced solution was better than the existing routing and scheduling that the firm used. The produced solution was also capable of reducing fuel cost by 9%, obtained from a shorter total distance travelled by the shuttle buses.
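The core of such a formulation, in a compact (and assumed, not the paper's exact) notation: x_{ijk} routes bus k over arc (i,j), y_{ik} is the fraction of stop i's demand d_i served by bus k (splitting allowed), and s_{ik} is the service start time:

```latex
\begin{aligned}
\min\ & \sum_{k}\sum_{(i,j)} c_{ij}\, x_{ijk} \\
\text{s.t.}\ & \textstyle\sum_{k} y_{ik} = 1 && \text{(each stop's demand fully served)}\\
& \textstyle\sum_{i} d_i\, y_{ik} \le Q_k && \text{(bus capacity)}\\
& y_{ik} \le \textstyle\sum_{j} x_{jik} && \text{(serve $i$ only if bus $k$ visits it)}\\
& s_{jk} \ge s_{ik} + t_{ij} - M\,(1 - x_{ijk}) && \text{(travel-time linking)}\\
& a_i \le s_{ik} \le b_i && \text{(time windows)}
\end{aligned}
```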
Raised BMI cut-off for overweight in Greenland Inuit--a review.
Andersen, Stig; Fleischer Rex, Karsten; Noahsen, Paneeraq; Sørensen, Hans Christian Florian; Mulvad, Gert; Laurberg, Peter
2013-01-01
Obesity is associated with increased morbidity and premature death. Obesity rates have increased worldwide and the WHO recommends monitoring. A steep rise in body mass index (BMI), a measure of adiposity, was detected in Greenland from 1963 to 1998. Interestingly, the BMI starting point was in the overweight range. This is not conceivable in a disease-free, physically active, pre-western hunter population. This led us to reconsider the cut-off point for overweight among Inuit in Greenland. We found 3 different approaches to defining the cut-off point of high BMI in Inuit. First, the contribution to height by the torso compared to the legs is relatively high. This causes relatively more kilograms per centimetre of height, which increases the BMI by approximately 10% compared to Caucasian whites. Second, defining the cut-off by the upper 90th percentile of BMI from height and weight in healthy young Inuit surveyed in 1963 placed the cut-off point around 10% higher compared to Caucasians. Third, if similar LDL-cholesterol and triglycerides are assumed for a certain BMI in Caucasians, the corresponding BMI in Inuit in both Greenland and Canada is around 10% higher. However, genetic admixture of Greenland Inuit and Caucasian Danes will influence this difference and hamper a clear distinction with time. Defining overweight according to the WHO cut-off of a BMI above 25 kg/m(2) in Greenland Inuit may overestimate the number of individuals with elevated BMI.
Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora
2017-12-01
Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to (a) catalog feasibility measures/metrics and (b) propose a framework. For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, in contrast, acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization.
Sosic-Vasic, Zrinka; Hille, Katrin; Kröner, Julia; Spitzer, Manfred; Kornmeier, Jürgen
2018-01-01
Introduction: Consolidation is defined as the time necessary for memory stabilization after learning. In the present study we focused on effects of interference during the first 12 consolidation minutes after learning. Participants had to learn a set of German – Japanese word pairs in an initial learning task and a different set of German – Japanese word pairs in a subsequent interference task. The interference task started in different experimental conditions at different time points (0, 3, 6, and 9 min) after the learning task and was followed by subsequent cued recall tests. In a control experiment the interference periods were replaced by rest periods without any interference. Results: The interference task decreased memory performance by up to 20%, with negative effects at all interference time points and large variability between participants concerning both the time point and the size of maximal interference. Further, fast learners seem to be more affected by interference than slow learners. Discussion: Our results indicate that the first 12 min after learning are highly important for memory consolidation, without a general pattern concerning the precise time point of maximal interference across individuals. This finding raises doubts about the generalized learning recipes and calls for individuality of learning schedules. PMID:29503621
Academic Success of Suspended Students. AIR 2001 Annual Forum Paper.
ERIC Educational Resources Information Center
Howard, Richard D.; Borland, Ken; Johnson, Cel; Baker, Larry J.
Cohorts of entering freshmen were tracked over time to determine whether the suspension policy at Montana State University (MSU), Bozeman was having the intended effect on academic success, defined as degree completion. The university's current policy requires students to be suspended after receiving grade point averages (GPAs) of lower than 2.00…
Methods for measuring populations of small, diurnal forest birds.
D.A. Manuwal; A.B. Carey
1991-01-01
Before a bird population is measured, the objectives of the study should be clearly defined. Important factors to be considered in designing a study are study site selection, plot size or transect length, distance between sampling points, duration of counts, and frequency and timing of sampling. Qualified field personnel are especially important. Assumptions applying...
NASA Technical Reports Server (NTRS)
Varaiya, P. P.
1972-01-01
General discussion of the theory of differential games with two players and zero sum. Games starting at a fixed initial state and ending at a fixed final time are analyzed. Strategies for the games are defined. The existence of saddle values and saddle points is considered. A stochastic version of a differential game is used to examine the synthesis problem.
1984-03-20
E. Anderson reviewed what was known about the dehydrations of gypsum, smectite, halloysite, vermiculite, and the zeolite minerals. Simple...dehydrations such as those of gypsum and halloysite occur at sharply-defined temperatures and thus contribute a time-limited fluid pulse at a given point. The
NASA Astrophysics Data System (ADS)
Benedek, Judit; Papp, Gábor; Kalmár, János
2018-04-01
Beyond the rectangular prism, the polyhedron, as a discrete volume element, can also be used to model the density distribution inside 3D geological structures. The evaluation of the closed formulae given for the gravitational potential and its higher-order derivatives, however, needs about twice the runtime of the rectangular prism computations. Although the "more detailed the better" principle is generally accepted, it is strictly true only for errorless data. As soon as errors are present, any forward gravitational calculation from the model is only a possible realization of the true force field at the significance level determined by the errors. So if one really considers the reliability of the input data used in the calculations, then sometimes the "less" can be equivalent to the "more" in a statistical sense. As a consequence, the processing time of the related complex formulae can be significantly reduced by optimizing the number of volume elements based on the accuracy estimates of the input data. New algorithms are proposed to minimize the number of model elements defined both in local and in global coordinate systems. Common gravity field modelling programs generate optimized models for every computation point (dynamic approach), whereas the static approach provides only one optimized model for all. Based on the static approach, two different algorithms were developed. The grid-based algorithm starts with the maximum-resolution polyhedral model defined by three points in each grid cell and generates a new polyhedral surface defined by points selected from the grid. The other algorithm is more general; it also works for irregularly distributed data (scattered points) connected by triangulation. Beyond the description of the optimization schemes, some applications of these algorithms in regional and local gravity field modelling are presented too. The efficiency of the static approaches may provide even more than a 90% reduction in computation time in favourable situations, without loss of reliability of the calculated gravity field parameters.
A Flexible Toolkit Supporting Knowledge-based Tactical Planning for Ground Forces
2011-06-01
Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras
Wu, Dewen; Chen, Ruizhi; Chen, Liang
2017-01-01
Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications because people spend 80% of their time indoors. Humans can locate themselves relative to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it points at an object? In this paper, a visual positioning solution was developed based on a single image captured by a smartphone camera pointing at a well-defined object. The smartphone camera simulates the process of human eyes relatively locating themselves against a well-defined object. Extensive experiments were conducted with five types of smartphones in three different indoor settings, including a meeting room, a library, and a reading room. Experimental results showed that the average positioning accuracy of the solution based on five smartphone cameras is 30.6 cm, while that of the human-observed solution, with 300 samples from 10 different people, is 73.1 cm. PMID:29144420
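Locating a camera relative to a well-defined object from a single image is, in essence, a perspective-n-point (PnP) problem. A hedged sketch with OpenCV, in which the door dimensions, detected pixel corners, and camera intrinsics are all illustrative assumptions, not the paper's data:

```python
import numpy as np
import cv2

# 3D corners of a well-defined object, e.g. a 0.9 m x 2.0 m door, in the
# object's own frame (metres); values are illustrative assumptions.
object_pts = np.array([[0, 0, 0], [0.9, 0, 0], [0.9, 2.0, 0], [0, 2.0, 0]],
                      dtype=np.float64)
# The same corners as detected in the smartphone image (pixels).
image_pts = np.array([[410, 820], [640, 815], [655, 250], [420, 260]],
                     dtype=np.float64)
# Assumed pinhole intrinsics (focal length and principal point, pixels).
K = np.array([[1500, 0, 540], [0, 1500, 960], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
camera_position = (-R.T @ tvec).ravel()   # camera centre in the door's frame
print(ok, camera_position)
```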
On global solutions of the random Hamilton-Jacobi equations and the KPZ problem
NASA Astrophysics Data System (ADS)
Bakhtin, Yuri; Khanin, Konstantin
2018-04-01
In this paper, we discuss possible qualitative approaches to the problem of KPZ universality. Throughout the paper, our point of view is based on the geometrical and dynamical properties of minimisers and shocks forming interlacing tree-like structures. We believe that the KPZ universality can be explained in terms of statistics of these structures evolving in time. The paper is focussed on the setting of the random Hamilton-Jacobi equations. We formulate several conjectures concerning global solutions and discuss how their properties are connected to the KPZ scalings in dimension 1 + 1. In the case of general viscous Hamilton-Jacobi equations with non-quadratic Hamiltonians, we define generalised directed polymers. We expect that their behaviour is similar to the behaviour of classical directed polymers, and we present arguments in favour of this conjecture. We also introduce a new renormalisation transformation defined in purely geometrical terms and discuss conjectural properties of the corresponding fixed points. Most of our conjectures are wide open, supported only by partial rigorous results for particular models.
NASA Astrophysics Data System (ADS)
Phillips, Nicholas G.; Hu, B. L.
2000-10-01
We present calculations of the variance of fluctuations and of the mean of the energy momentum tensor of a massless scalar field for the Minkowski and Casimir vacua as a function of an intrinsic scale defined by a smeared field or by point separation. We point out that, contrary to prior claims, the ratio of variance to mean-squared being of the order unity is not necessarily a good criterion for measuring the invalidity of semiclassical gravity. For the Casimir topology we obtain expressions for the variance to mean-squared ratio as a function of the intrinsic scale (defined by a smeared field) compared to the extrinsic scale (defined by the separation of the plates, or the periodicity of space). Our results make it possible to identify the spatial extent where negative energy density prevails which could be useful for studying quantum field effects in worm holes and baby universes, and for examining the design feasibility of real-life "time machines." For the Minkowski vacuum we find that the ratio of the variance to the mean-squared, calculated from the coincidence limit, is identical to the value of the Casimir case at the same limit for spatial point separation while identical to the value of a hot flat space result with a temporal point separation. We analyze the origin of divergences in the fluctuations of the energy density and discuss choices in formulating a procedure for their removal, thus raising new questions about the uniqueness and even the very meaning of regularization of the energy momentum tensor for quantum fields in curved or even flat spacetimes when spacetime is viewed as having an extended structure.
1983-01-01
The resolution of the computational grid is thereby defined according to the actual requirements of... computational economy is achieved simultaneously by redistributing the computational grid points according to the physical requirements of the problem... computational Eulerian grid points according to the physical requirements of the nonlinear... and also leads to an expression for "dz"... implemented using a two-dimensional time-dependent finite
Lambert, Thomas; Nahler, Alexander; Rohla, Miklos; Reiter, Christian; Grund, Michael; Kammler, Jürgen; Blessberger, Hermann; Kypta, Alexander; Kellermair, Jörg; Schwarz, Stefan; Starnawski, Jennifer A; Lichtenauer, Michael; Weiss, Thomas W; Huber, Kurt; Steinwender, Clemens
2016-10-01
Defining an adequate endpoint for renal denervation trials represents a major challenge. High inter-individual and intra-individual variability of blood pressure levels, as well as partial or total non-adherence to antihypertensive drugs, hamper treatment evaluations after renal denervation. Blood pressure measurements at a single point in time, used as the primary endpoint in most clinical trials on renal denervation, might not be sufficient to discriminate between patients who do or do not respond to renal denervation. We compared the traditional responder classification (defined as a systolic 24-hour blood pressure reduction of -5 mmHg six months after renal denervation) with a novel definition of ideal respondership (based on a 24-hour blood pressure reduction at none, one, or all follow-up time points). We were able to re-classify almost a quarter of patients. Blood pressure variability was substantial in patients traditionally defined as responders. On the other hand, our novel classification of ideal respondership seems clinically superior in discriminating sustained from pseudo-response to renal denervation. Based on our observations, we recommend that the traditional response classification be reconsidered and possibly strengthened by using a composite endpoint of 24-hour blood pressure reductions at different follow-up visits. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Vasić, M.; Radojević, Z.
2017-08-01
One of the main disadvantages of the recently reported method for setting up the drying regime, based on the theory of moisture migration during drying, lies in the fact that it requires a large number of isothermal experiments, each of which uses different drying air parameters. The main goal of this paper was to find a way to reduce the number of isothermal experiments without affecting the quality of the previously proposed calculation method. The first task was to define the lower and upper inputs as well as the output of the "black box" used in Box-Wilkinson's orthogonal multi-factorial experimental design. Three inputs (drying air temperature, humidity and velocity) were used within the experimental design. The output parameter of the model represents the time interval between any two chosen characteristic points on the Deff-t curve. The second task was to calculate the output parameter for each planned experiment. The final output of the model is an equation which can predict the time interval between any two chosen characteristic points as a function of the drying air parameters. This equation is valid for any drying air parameters within the area bounded by the lower and upper limiting values.
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.; Scheeres, Daniel J.
2018-06-01
The observation to observation measurement association problem for dynamical systems can be addressed by determining if the uncertain admissible regions produced from each observation have one or more points of intersection in state space. An observation association method is developed which uses an optimization based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions. A binary hypothesis test with a selected false alarm rate is used to assess the probability that an intersection exists at the point(s) of minimum distance. The systemic uncertainties, such as measurement uncertainties, timing errors, and other parameter errors, define a distribution about a state estimate located at the local Mahalanobis distance minima. If local minima do not exist, then the observations are not associated. The proposed method utilizes an optimization approach defined on a reduced dimension state space to reduce the computational load of the algorithm. The efficacy and efficiency of the proposed method are demonstrated on observation data collected from the Georgia Tech Space Object Research Telescope.
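A generic sketch of the kind of Mahalanobis gating test described above, for two uncertain state estimates; the chi-square gate is the standard binary hypothesis test for association and the numbers are illustrative, standing in for the paper's optimization over admissible regions:

    import numpy as np
    from scipy.stats import chi2

    def associated(x1, P1, x2, P2, false_alarm=0.01):
        """Gate two uncertain state estimates: accept association when the
        squared Mahalanobis distance falls below a chi-square threshold."""
        d = x1 - x2
        S = P1 + P2                           # combined uncertainty
        m2 = float(d @ np.linalg.solve(S, d)) # squared Mahalanobis distance
        gate = chi2.ppf(1.0 - false_alarm, df=d.size)
        return m2 <= gate, m2

    ok, m2 = associated(np.array([7000.0, 1.00]), np.diag([25.0, 1e-4]),
                        np.array([7004.0, 1.02]), np.diag([16.0, 1e-4]))
    print(ok, m2)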
Slew maneuvers on the SCOLE Laboratory Facility
NASA Technical Reports Server (NTRS)
Williams, Jeffrey P.
1987-01-01
The Spacecraft Control Laboratory Experiment (SCOLE) was conceived to provide a physical test bed for the investigation of control techniques for large flexible spacecraft. The control problems studied are slewing maneuvers and pointing operations. The slew is defined as a minimum-time maneuver that brings the antenna line-of-sight (LOS) pointing to within an error limit of the pointing target. The second objective is to rotate about the LOS while remaining within the 0.02 degree error limit. The SCOLE problem is defined as two design challenges: first, control laws for a mathematical model of a large antenna attached to the Space Shuttle by a long flexible mast; and second, implementation of a control scheme based on those laws on a laboratory representation of the structure. Control sensors and actuators are typical of those which the control designer would have to deal with on an actual spacecraft. Computational facilities consist of microcomputer-based central processing units with appropriate analog interfaces for implementation of the primary control system and the attitude estimation algorithm. Preliminary results of some slewing control experiments are given.
The contribution of dynamic visual cues to audiovisual speech perception.
Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador
2015-08-01
Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to the auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hadžibajramović, Emina; Ahlborg, Gunnar; Håkansson, Carita; Lundgren-Nilsson, Åsa; Grimby-Ekman, Anna
2015-12-01
Psychosocial stress at work is one of the most important factors behind increasing sick-leave rates. In addition to work stressors, it is important to account for non-work-related stressors when assessing stress responses. In this study, a modified version of the Stress-Energy Questionnaire (SEQ), the SEQ during leisure time (SEQ-LT), was introduced for assessing the affective stress response during leisure time. The aim of this study was to investigate the internal construct validity of the SEQ-LT. A second aim was to define cut-off points for the scales that could indicate high and low levels of leisure-time stress and energy, respectively. Internal construct validity of the SEQ-LT was evaluated using a Rasch analysis. We examined the unidimensionality and other psychometric properties of the scale by the fit to the Rasch model. A criterion-based approach was used for classification into high and low stress/energy levels. The psychometric properties of the stress and energy scales of the SEQ-LT were satisfactory after accommodating for local dependency. The cut-off point for low stress was proposed to be in the interval between 2.45 and 3.02 on the Rasch metric score, while for high stress it was between 3.65 and 3.90. The suggested cut-off points for the low and high energy levels were values between 1.73-1.97 and 2.66-3.08, respectively. The stress and energy scales of the SEQ-LT satisfied the measurement criteria defined by the Rasch analysis and provide a useful tool for non-work-related assessment of stress responses. We provide guidelines on how to interpret the scale values. © 2015 the Nordic Societies of Public Health.
The Ever-Changing Meanings of Retirement
ERIC Educational Resources Information Center
McVittie, Chris; Goodall, Karen
2012-01-01
Shultz and Wang (April 2011) drew attention to the ways in which understandings of retirement have changed over time, both in terms of the place of retirement in the lives of individuals and in terms of how retirement can no longer usefully be taken to comprise a single defining event. As the authors pointed out, psychological research has…
Stability of Language Performance at 4 and 5 Years: Measurement and Participant Variability
ERIC Educational Resources Information Center
Eadie, Patricia; Nguyen, Cattram; Carlin, John; Bavin, Edith; Bretherton, Lesley; Reilly, Sheena
2014-01-01
Background: Language impairment (LI) in the preschool years is known to vary over time. Stability in the diagnosis of LI may be influenced by children's individual variability, the measurement error of commonly used assessment instruments and the cut-points used to define impairment. Aims: To investigate the agreement between two different…
Decay of Correlations, Quantitative Recurrence and Logarithm Law for Contracting Lorenz Attractors
NASA Astrophysics Data System (ADS)
Galatolo, Stefano; Nisoli, Isaia; Pacifico, Maria Jose
2018-03-01
In this paper we prove that a class of skew-product maps with non-uniformly hyperbolic base has exponential decay of correlations. We apply this to obtain a logarithm law for the hitting time associated to a contracting Lorenz attractor at all points having a well-defined local dimension, and a quantitative recurrence estimate.
Tracking Subpixel Targets with Critically Sampled Optical Sensors
2012-09-01
The Viterbi algorithm is a dynamic programming method for calculating the MAP in O(tn^2) time. The most common use of this algorithm is in the... method to detect subpixel point targets using the sensor's PSF as an identifying characteristic. Using matched filtering theory, a measure is defined to... ocean surface beneath the cloud will have a different distribution. While the basic methods will adapt to changes in cloud cover over time, it is also
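Since the excerpt above names the Viterbi algorithm as a dynamic programming method for the MAP sequence in O(tn^2) time, a minimal generic sketch may help; this is the textbook HMM version, not the report's subpixel-tracking variant:

    import numpy as np

    def viterbi(log_A, log_B, log_pi, obs):
        """MAP state sequence for an HMM in O(T * n^2) time."""
        n, T = log_A.shape[0], len(obs)
        delta = log_pi + log_B[:, obs[0]]
        back = np.zeros((T, n), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_A      # (from_state, to_state)
            back[t] = scores.argmax(axis=0)      # best predecessor per state
            delta = scores.max(axis=0) + log_B[:, obs[t]]
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    log_A = np.log([[0.7, 0.3], [0.4, 0.6]])     # toy two-state model
    log_B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
    log_pi = np.log([0.6, 0.4])
    print(viterbi(log_A, log_B, log_pi, [0, 1, 2, 2]))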
Instrument Pointing Capabilities: Past, Present, and Future
NASA Technical Reports Server (NTRS)
Blackmore, Lars; Murray, Emmanuell; Scharf, Daniel P.; Aung, Mimi; Bayard, David; Brugarolas, Paul; Hadaegh, Fred; Lee, Allan; Milman, Mark; Sirlin, Sam;
2011-01-01
This paper surveys the instrument pointing capabilities of past, present and future space telescopes and interferometers. As an important aspect of this survey, we present a taxonomy for "apples-to-apples" comparisons of pointing performances. First, pointing errors are defined relative to either an inertial frame or a celestial target. Pointing error can then be further sub-divided into DC, that is, steady state, and AC components. We refer to the magnitude of the DC error relative to the inertial frame as absolute pointing accuracy, and we refer to the magnitude of the DC error relative to a celestial target as relative pointing accuracy. The magnitude of the AC error is referred to as pointing stability. While an AC/DC partition is not new, we leverage previous work by some of the authors to quantitatively clarify and compare varying definitions of jitter and time window averages. With this taxonomy and for sixteen past, present, and future missions, pointing accuracies and stabilities, both required and achieved, are presented. In addition, we describe the attitude control technologies used to and, for future missions, planned to achieve these pointing performances.
Liu, Zhenbang; Ng, Junxiang; Yuwono, Arianto; Lu, Yadong; Tan, Yung Khan
2017-01-01
To compare the staining intensity of the upper urinary tract (UUT) urothelium among three UUT delivery methods in an in vivo porcine model. A fluorescent dye solution (indigo carmine) was delivered to the UUT via three different methods: antegrade perfusion, vesico-ureteral reflux via indwelling ureteric stent, and retrograde perfusion via a 5F open-ended ureteral catheter. Twelve renal units were tested with 4 in each method. After a 2-hour delivery time, the renal-ureter units were harvested en bloc. Time from harvesting to analysis was also standardised to 2 hours in each arm. Three urothelium samples of the same weight and size were taken from each of the 6 pre-defined points (upper pole, mid pole, lower pole, renal pelvis, mid ureter and distal ureter) and the amount of fluorescence was measured with a spectrometer. The mean fluorescence detected at all 6 predefined points of the UUT urothelium was highest for the retrograde method. This was statistically significant, with p-value <0.05 at all 6 points. Retrograde infusion of the UUT via an open-ended ureteral catheter resulted in the highest mean fluorescence at all 6 pre-defined points of the UUT urothelium compared to antegrade infusion and vesico-ureteral reflux via indwelling ureteric stents, indicating that the retrograde method is ideal for topical therapy throughout the UUT urothelium. More clinical studies are needed to demonstrate whether the retrograde method could lead to better clinical outcomes compared to the other two methods. Copyright® by the International Brazilian Journal of Urology.
Joint classification and contour extraction of large 3D point clouds
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2017-08-01
We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generating a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with more than 10^9 points.
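A common way to realize the multi-scale per-point neighborhoods mentioned above is with covariance eigenvalue descriptors; the sketch below is a generic version of such features (linearity, planarity, scattering), with the scale choices and names being assumptions rather than the paper's exact feature set:

    import numpy as np
    from scipy.spatial import cKDTree

    def eigen_features(points, k_values=(10, 30, 90)):
        """Covariance eigenvalue features per point at several
        neighbourhood scales: linearity, planarity, scattering."""
        tree = cKDTree(points)
        per_scale = []
        for k in k_values:
            _, idx = tree.query(points, k=k)
            feats = []
            for nbrs in idx:
                w = np.linalg.eigvalsh(np.cov(points[nbrs].T))
                l3, l2, l1 = np.maximum(w, 1e-12)   # ascending order
                feats.append([(l1 - l2) / l1,       # linearity
                              (l2 - l3) / l1,       # planarity
                              l3 / l1])             # scattering
            per_scale.append(np.asarray(feats))
        return np.hstack(per_scale)

    pts = np.random.default_rng(0).normal(size=(1000, 3))
    X = eigen_features(pts)             # (1000, 9) feature matrix
    print(X.shape)

The stacked features can then be handed to any point-wise classifier.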
Research in Knowledge Representation for Natural Language Communication and Planning Assistance
1987-10-01
elements of PFR. Instants of time are represented as individuals where they form a continuum. Let "seconds" map real numbers to instants, where "seconds(n)"... denotes n seconds. Points in space form a 3-dimensional continuum. Changing relations are represented as functions on instants of time. Formulas and... occupies at time t. "occ.space(x)(t)" is defined iff x is a physical object, t is an instant of time, and x exists at t. Further, x must occupy a non
Economic and microbiologic evaluation of single-dose vial extension for hazardous drugs.
Rowe, Erinn C; Savage, Scott W; Rutala, William A; Weber, David J; Gergen-Teague, Maria; Eckel, Stephen F
2012-07-01
The update of US Pharmacopeia Chapter <797> in 2008 included guidelines stating that single-dose vials (SDVs) opened and maintained in an International Organization for Standardization Class 5 environment can be used for up to 6 hours after initial puncture. A study was conducted to evaluate the cost of discarding vials after 6 hours and to further test sterility of vials beyond this time point, subsequently defined as the beyond-use date (BUD). Financial determination of SDV waste included 2 months of retrospective review of all doses prescribed. Additionally, actual waste log data were collected. Active and control vials (prepared using sterilized trypticase soy broth) were recovered, instead of discarded, at the defined 6-hour BUD. The institution-specific waste of 19 selected SDV medications discarded at 6 hours was calculated at $766,000 annually, and tracking waste logs for these same medications was recorded at $770,000 annually. Microbiologic testing of vial extension beyond 6 hours showed that 11 (1.86%) of 592 samples had one colony-forming unit on one of two plates. Positive plates were negative at subsequent time points, and all positives were single isolates most likely introduced during the plating process. The cost of discarding vials at 6 hours was significant for hazardous medications in a large academic medical center. On the basis of microbiologic data, vial BUD extension demonstrated a contamination frequency of 1.86%, which likely represented exogenous contamination; vial BUD extension for the tested drugs showed no growth at subsequent time points and could provide an annual cost savings of more than $600,000.
Point-of-care ultrasonography by pediatric emergency physicians. Policy statement.
Marin, Jennifer R; Lewiss, Resa E
2015-04-01
Point-of-care ultrasonography is increasingly being used to facilitate accurate and timely diagnoses and to guide procedures. It is important for pediatric emergency physicians caring for patients in the emergency department to receive adequate and continued point-of-care ultrasonography training for those indications used in their practice setting. Emergency departments should have credentialing and quality assurance programs. Pediatric emergency medicine fellowships should provide appropriate training to physician trainees. Hospitals should provide privileges to physicians who demonstrate competency in point-of-care ultrasonography. Ongoing research will provide the necessary measures to define the optimal training and competency assessment standards. Requirements for credentialing and hospital privileges will vary and will be specific to individual departments and hospitals. As more physicians are trained and more research is completed, there should be one national standard for credentialing and privileging in point-of-care ultrasonography for pediatric emergency physicians.
Real-time optical multiple object recognition and tracking system and method
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Liu, Hua Kuang (Inventor)
1987-01-01
The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time, space and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.
Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem
NASA Astrophysics Data System (ADS)
Omagari, Hiroki; Higashino, Shin-Ichiro
2018-04-01
In this paper, we propose a new evolutionary multi-objective optimization method for solving drone delivery problems (DDP), which can be formulated as constrained multi-objective optimization problems. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method needs the optimal value of each objective function to be calculated in advance, and it does not consider constraint conditions other than the objective functions. Therefore, it cannot be applied to DDP, which has many constraint conditions. To resolve these issues, we propose the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions, and a new reference solution named the "provisional ideal point" to search for the solution preferred by a decision maker. In this way, we eliminate the preliminary calculations and the limited scope of application. Results on benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to DDP. As a result, the delivery path when combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with the case of using only one truck.
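A generic sketch of how a penalty value and a provisional ideal point could interact when picking a candidate from a population; the penalty form, distance metric and weighting below are assumptions for illustration, not the authors' formulation:

    import numpy as np

    def select_preferred(F, G):
        """F: (N, k) objective values (minimised); G: (N, m) constraint
        values with g <= 0 feasible. Returns the index of the candidate
        closest to a provisional ideal point, penalising infeasibility."""
        penalty = np.clip(G, 0.0, None).sum(axis=1)   # infeasibility
        feasible = penalty == 0.0
        if not feasible.any():
            return int(penalty.argmin())    # least-infeasible candidate
        # Provisional ideal point: best value per objective observed in
        # the current population, instead of precomputed optima.
        ideal = F[feasible].min(axis=0)
        dist = np.linalg.norm(F - ideal, axis=1) + 1e6 * penalty
        return int(dist.argmin())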
Mihaylova, Milena; Manahilov, Velitchko
2010-11-24
Research has shown that the processing time for discriminating illusory contours is longer than for real contours. Little is known, however, about whether the visual processes associated with detecting regions of illusory surfaces are also slower than those responsible for detecting luminance-defined images. Using a speed-accuracy trade-off (SAT) procedure, we measured accuracy as a function of processing time for detecting illusory Kanizsa-type and luminance-defined squares embedded in 2D static luminance noise. The data revealed that the illusory images were detected at slower processing speed than the real images, while the points in time when accuracy departed from chance were not significantly different for the two stimuli. The classification images for detecting illusory and real squares showed that observers employed similar detection strategies, using surface regions of the real and illusory squares. The lack of significant differences between the x-intercepts of the SAT functions for illusory and luminance-modulated stimuli suggests that the detection of surface regions of both images could be based on activation of a single mechanism (the dorsal magnocellular visual pathway). The slower speed for detecting illusory images as compared to luminance-defined images could be attributed to slower processes of filling-in of regions of illusory images within the dorsal pathway.
Signal processing of anthropometric data
NASA Astrophysics Data System (ADS)
Zimmermann, W. J.
1983-09-01
The Anthropometric Measurements Laboratory has accumulated a large body of data from a number of previous experiments. The data are very noisy and therefore require the application of signal processing schemes. Moreover, the data were regarded not as time-series measurements but as positional information; hence, they are stored as coordinate points defined by the motion of the human body. The accumulated data define two groups or classes. Some of the data were collected from an experiment designed to measure the flexibility of the limbs, referred to as radial movement. The remaining data were collected from experiments designed to determine the surface of the reach envelope. An interactive signal processing package was designed and implemented. Since the data do not include time, this package does not include a time-series element. Presently the processing is restricted to data obtained from the experiments designed to measure flexibility.
Farsalinos, Konstantinos E; Voudris, Vassilis; Poulas, Konstantinos
2015-05-15
Studies have found that metals are emitted to the electronic cigarette (EC) aerosol. However, the potential health impact of exposure to such metals has not been adequately defined. The purpose of this study was to perform a risk assessment analysis, evaluating the exposure of electronic cigarette (EC) users to metal emissions based on findings from the published literature. Two studies were found in the literature, measuring metals emitted to the aerosol from 13 EC products. We estimated that users take on average 600 EC puffs per day, but we evaluated the daily exposure from 1200 puffs. Estimates of exposure were compared with the chronic Permissible Daily Exposure (PDE) from inhalational medications defined by the U.S. Pharmacopeia (cadmium, chromium, copper, lead and nickel), the Minimal Risk Level (MRL) defined by the Agency for Toxic Substances and Disease Registry (manganese) and the Recommended Exposure Limit (REL) defined by the National Institute of Occupational Safety and Health (aluminum, barium, iron, tin, titanium, zinc and zirconium). The average daily exposure from 13 EC products was 2.6 to 387 times lower than the safety cut-off point of PDEs, 325 times lower than the safety limit of MRL and 665 to 77,514 times lower than the safety cut-off point of RELs. Only one of the 13 products was found to result in exposure 10% higher than PDE for one metal (cadmium) at the extreme daily use of 1200 puffs. Significant differences in emissions between products were observed. Based on currently available data, overall exposure to metals from EC use is not expected to be of significant health concern for smokers switching to EC use, but is an unnecessary source of exposure for never-smokers. Metal analysis should be expanded to more products and exposure can be further reduced through improvements in product quality and appropriate choice of materials.
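The exposure arithmetic in the abstract above reduces to a simple per-metal comparison; in the sketch below every number is a placeholder, not a value from the study or from the cited safety limits:

    # Every number below is a placeholder, not a value from the study.
    puffs_per_day = 1200                       # conservative daily use
    emission_ug_per_puff = {"cadmium": 1e-5, "nickel": 5e-5}  # assumed
    pde_ug_per_day = {"cadmium": 2.0, "nickel": 5.0}          # assumed

    for metal, e in emission_ug_per_puff.items():
        daily = e * puffs_per_day              # ug inhaled per day
        print(f"{metal}: {daily:.3f} ug/day, "
              f"{pde_ug_per_day[metal] / daily:.0f}x below the limit")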
Numerical analysis of transient fields near thin-wire antennas and scatterers
NASA Astrophysics Data System (ADS)
Landt, J. A.
1981-11-01
Under the premise that `accelerated charge radiates,' one would expect radiation on wire structures to occur from driving points, ends of wires, bends in wires, or locations of lumped loading. Here, this premise is investigated in a series of numerical experiments. The numerical procedure is based on a moment-method solution of a thin-wire time-domain electric-field integral equation. The fields in the vicinity of wire structures are calculated for short impulsive-type excitations, and are viewed in a series of time sequences or snapshots. For these excitations, the fields are spatially limited in the radial dimension, and expand in spheres centered about points of radiation. These centers of radiation coincide with the above list of possible source regions. Time retardation permits these observations to be made clearly in the time domain, similar to time-range gating. In addition to providing insight into transient radiation processes, these studies show that the direction of energy flow is not always defined by Poynting's vector near wire structures.
Applied Time Domain Stability Margin Assessment for Nonlinear Time-Varying Systems
NASA Technical Reports Server (NTRS)
Kiefer, J. M.; Johnson, M. D.; Wall, J. H.; Dominguez, A.
2016-01-01
The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time-varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation. This technique was implemented by using the Stability Aerospace Vehicle Analysis Tool (SAVANT) computer simulation to evaluate the stability of the SLS system with the Adaptive Augmenting Control (AAC) active and inactive along its ascent trajectory. The gains for which the vehicle maintains apparent time-domain stability define the gain margins, and the time delay similarly defines the phase margin. This method of extracting the control stability margins from the time-domain simulation is relatively straightforward and the resultant margins can be compared to the linearized system results. The sections herein describe the techniques employed to extract the time-domain margins, compare the results between these nonlinear and the linear methods, and provide explanations for observed discrepancies. The SLS ascent trajectory was simulated with SAVANT and the classical linear stability margins were evaluated at one-second intervals. The linear analysis was performed with the AAC algorithm disabled to attain baseline stability margins. At each time point, the system was linearized about the current operating point using Simulink's built-in solver. Each linearized system in time was evaluated for its rigid-body gain margin (high frequency gain margin), rigid-body phase margin, and aero gain margin (low frequency gain margin) for each control axis. Using the stability margins derived from the baseline linearization approach, the time domain derived stability margins were determined by executing time domain simulations in which axis-specific incremental gain and phase adjustments were made to the nominal system about the expected neutral stability point at specific flight times. The baseline stability margin time histories were used to shift the system gain to various values around the zero margin point such that a precise amount of expected gain margin was maintained throughout flight. When assessing the gain margins, the gain was applied starting at the time point under consideration, thereafter following the variation in the margin found in the linear analysis. When assessing the rigid-body phase margin, a constant time delay was applied to the system starting at the time point under consideration. If the baseline stability margins were correctly determined via the linear analysis, the time domain simulation results should contain unstable behavior at certain gain and phase values. Examples will be shown from repeated simulations with variable added gain and phase lag.
Faithfulness of margins calculated from the linear analysis to the nonlinear system will be demonstrated.
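A toy illustration of the core idea, dialing in added gain until the time-domain response diverges; the plant, controller, delay and thresholds below are invented stand-ins for the SAVANT 6-DOF simulation, so only the procedure, not the numbers, carries over:

    import numpy as np

    def peak_response(gain_mult, delay=0.05, t_end=60.0, dt=0.01):
        """Toy single-axis attitude loop with a transport delay; a stand-in
        for the full nonlinear 6-DOF simulation."""
        buf = [0.0] * int(delay / dt)        # delayed actuator commands
        x = np.array([0.1, 0.0])             # [attitude error, rate]
        peak = 0.0
        for _ in range(int(t_end / dt)):
            buf.append(-gain_mult * (2.0 * x[0] + 1.5 * x[1]))
            u = buf.pop(0)                   # command applied after delay
            x = x + dt * np.array([x[1], u + 0.3 * x[0]])
            peak = max(peak, abs(x[0]))
            if peak > 1e3:                   # divergence: stop early
                return peak
        return peak

    def gain_margin_db(lo=1.0, hi=200.0):
        """Bisect on added gain until the response diverges; the critical
        multiplier, in dB, is the time-domain gain margin."""
        while hi / lo > 1.001:
            mid = (lo * hi) ** 0.5
            lo, hi = (lo, mid) if peak_response(mid) > 1e3 else (mid, hi)
        return 20.0 * np.log10(lo)

    print(f"time-domain gain margin ~ {gain_margin_db():.1f} dB")

The phase margin would be extracted the same way, bisecting on the injected time delay instead of the gain multiplier.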
ERIC Educational Resources Information Center
Mak, Wingyun; Sorensen, Silvia
2012-01-01
Purpose: This study examines the longitudinal patterns of Preparation for Future Care (PFC), defined as Awareness, Avoidance, Gathering Information, Decision Making, and Concrete Plans, in first-degree relatives of people with Alzheimer's disease (AD). Design and Methods: Eight time points across 6.5 years from a subsample of adults aged 70 years…
Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.
Zhao, Dongfang; Yang, Li
2009-01-01
Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high dimensional space. These methods often share the first step which defines neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded to a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because the input data stream may be under-sampled or skewed from time to time, building a connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating k-edge-connected and k-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low dimensional configurations of high dimensional data under various data distributions.
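The insertion half of the shortest-distance update admits a compact generic form: a new vertex first reaches old vertices through its direct neighbours, and old pairs may then shorten by routing through it. This is a standard O(n^2) update, sketched below as an illustration rather than the paper's exact algorithm:

    import numpy as np

    def add_point_apsp(D, new_edges):
        """Update an all-pairs shortest-distance matrix D after inserting
        one vertex with weighted edges new_edges = {neighbour: weight}.
        Costs O(n^2) instead of recomputing all paths from scratch."""
        n = D.shape[0]
        # Distance from the new vertex to every old one: one hop to a
        # direct neighbour, then a known shortest path.
        d_new = np.full(n, np.inf)
        for j, w in new_edges.items():
            d_new = np.minimum(d_new, w + D[j])
        # Old pairs may shorten by routing through the new vertex.
        D = np.minimum(D, d_new[:, None] + d_new[None, :])
        out = np.full((n + 1, n + 1), np.inf)
        out[:n, :n] = D
        out[:n, n] = out[n, :n] = d_new
        out[n, n] = 0.0
        return out

    D = np.array([[0.0, 1.0, 3.0],
                  [1.0, 0.0, 2.0],
                  [3.0, 2.0, 0.0]])
    print(add_point_apsp(D, {0: 0.5, 2: 0.5}))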
A novel method for vaginal cylinder treatment planning: a seamless transition to 3D brachytherapy
Wu, Vincent; Wang, Zhou; Patil, Sachin
2012-01-01
Purpose Standard treatment plan libraries are often used to ensure a quick turn-around time for vaginal cylinder treatments. Recently there is increasing interest in transitioning from conventional 2D radiograph based brachytherapy to 3D image based brachytherapy, which has resulted in a substantial increase in treatment planning time and decrease in patient through-put. We describe a novel technique that significantly reduces the treatment planning time for CT-based vaginal cylinder brachytherapy. Material and methods Oncentra MasterPlan TPS allows multiple sets of data points to be classified as applicator points which has been harnessed in this method. The method relies on two hard anchor points: the first dwell position in a catheter and an applicator configuration specific dwell position as the plan origin and a soft anchor point beyond the last active dwell position to define the axis of the catheter. The spatial location of various data points on the applicator's surface and at 5 mm depth are stored in an Excel file that can easily be transferred into a patient CT data set using window operations and then used for treatment planning. The remainder of the treatment planning process remains unaffected. Results The treatment plans generated on the Oncentra MasterPlan TPS using this novel method yielded results comparable to those generated on the Plato TPS using a standard treatment plan library in terms of treatment times, dwell weights and dwell times for a given optimization method and normalization points. Less than 2% difference was noticed between the treatment times generated between both systems. Using the above method, the entire planning process, including CT importing, catheter reconstruction, multiple data point definition, optimization and dose prescription, can be completed in ~5–10 minutes. Conclusion The proposed method allows a smooth and efficient transition to 3D CT based vaginal cylinder brachytherapy planning. PMID:23349650
A rolling phenotype in Crohn's disease.
Irwin, James; Ferguson, Emma; Simms, Lisa A; Hanigan, Katherine; Carbonnel, Franck; Radford-Smith, Graham
2017-01-01
The Montreal classification of disease behaviour in Crohn's disease describes progression of disease towards a stricturing and penetrating phenotype. In the present paper, we propose an alternative representation of the long-term course of Crohn's disease complications, the rolling phenotype. As is commonly observed in clinical practice, this definition allows progression to a more severe phenotype (stricturing, penetrating) but also regression to a less severe behaviour (inflammatory, or remission) over time. All patients diagnosed with Crohn's disease between 01/01/1994 and 01/03/2008, managed at a single centre and observed for a minimum of 5 years, had development and resolution of all complications recorded. A rolling phenotype was defined at each time point based on all observed complications in the three years prior to that time point. Phenotype was defined as B1, B2, B3, or B23 (penetrating and stenotic). The progression over time of the rolling phenotype was compared to that of the cumulative Montreal phenotype. 305 patients were observed for a median of 10.0 (interquartile range 7.3-13.7) years. Longitudinal progression of the rolling phenotype demonstrated a consistent proportion of patients with B1 (70%), B2 (20%), B3 (5%) and B23 (5%) phenotypes. These proportions were observed regardless of initial phenotype. In contrast, the cumulative Montreal phenotype progressed towards a more severe phenotype with time (B1 (39%), B2 (26%), B3 (35%) at 10 years). A rolling phenotype provides an alternative view of the longitudinal burden of intra-abdominal complications in Crohn's disease. From this viewpoint, 70% of patients have durable freedom from complication over time (>3 years).
NASA Astrophysics Data System (ADS)
Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael
2017-09-01
Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive for a large spectrum of gravity waves.
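For readers unfamiliar with the baseline technique being examined, the sketch below fits a standard cubic spline with equidistant interior knots and separates background from residuals; it illustrates the conventional approach the abstract critiques, not the authors' repeating-spline variant, and the synthetic profile is an assumption:

    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    t = np.linspace(0.0, 100.0, 1000)
    series = 210.0 + 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 7.0)

    # Cubic spline with equidistant interior sampling (knot) points; the
    # knot spacing decides which scales end up in the background rather
    # than in the residuals.
    knots = np.arange(10.0, 100.0, 10.0)
    background = LSQUnivariateSpline(t, series, knots, k=3)(t)
    residuals = series - background      # the superimposed fluctuations
    print(f"residual rms: {residuals.std():.2f}")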
Human body motion capture from multi-image video sequences
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola
2003-01-01
In this paper a method is presented to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of triplet images. Self-calibration methods are applied to gain the exterior orientation of the cameras, the parameters of internal orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence, thus the 3-D trajectory is determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points, without using markers; and it can track local surfaces on the human body. In the latter case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean value of the displacement of all the trajectories inside its region. The tracked key points lead to a final result comparable to conventional motion capture systems: 3-D trajectories of key points which can afterwards be analyzed and used for animation or medical purposes.
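The key-point update described at the end of the abstract above is simple enough to state in a few lines; the sketch below mirrors it directly (position at the next step = mean displacement of trajectories inside the region), with the data and region size invented for illustration:

    import numpy as np

    def track_key_point(center, radius, pts_t, disp_t):
        """Advance a key point by the mean displacement of all tracked
        trajectories currently inside its spherical region."""
        inside = np.linalg.norm(pts_t - center, axis=1) <= radius
        if not inside.any():
            return center             # no support: keep the old position
        return center + disp_t[inside].mean(axis=0)

    rng = np.random.default_rng(1)
    pts = rng.normal(0.0, 0.1, (500, 3))          # surface points at t
    disp = np.array([0.02, 0.0, 0.01]) + rng.normal(0, 0.005, (500, 3))
    print(track_key_point(np.zeros(3), 0.15, pts, disp))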
A shape-based segmentation method for mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen
2013-07-01
Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to geometric features using support vector machines (SVMs). Second, a set of rules is defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and at computationally effective time cost, and that it segments pole-like objects particularly well.
Wi-Fi real time location systems
NASA Astrophysics Data System (ADS)
Doll, Benjamin A.
This thesis objective was to determine the viability of utilizing an untrained Wi-Fi real-time location system as a GPS alternative for indoor environments. Background research showed that GPS is rarely able to penetrate buildings to provide reliable location data. The benefit of having location information in a facility, and how it might be used by disaster or emergency relief personnel and their resources, motivated this research. A building was selected with a well-deployed Wi-Fi infrastructure, and its untrained location feature was used to determine the distance between the specified test points and the system-identified location. It was found that the average distance from the test point throughout the facility was 14.3 feet 80% of the time. This fell within the defined viable range and supported that an untrained Wi-Fi RTLS system could be a viable solution for GPS's lack of availability indoors.
NASA Astrophysics Data System (ADS)
Benítez, Hernán D.; Ibarra-Castanedo, Clemente; Bendada, AbdelHakim; Maldague, Xavier; Loaiza, Humberto; Caicedo, Eduardo
2008-01-01
It is well known that methods of thermographic non-destructive testing based on thermal contrast are strongly affected by non-uniform heating at the surface. Hence, the results obtained from these methods depend considerably on the chosen reference point. The differential absolute contrast (DAC) method was developed to eliminate the need to determine a reference point that defines the thermal contrast with respect to an ideal sound area. Although very useful at early times, the DAC accuracy decreases when the heat front approaches the sample rear face. We propose a new DAC version that explicitly introduces the sample thickness using the thermal quadrupoles theory, and we show that the new DAC range of validity extends to long times while preserving the validity at short times. This new contrast is used for defect quantification in composite, Plexiglas™ and aluminum samples.
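For context, the classical DAC from the pulsed-thermography literature extrapolates the sound-area response from an early reference frame using the 1-D semi-infinite cooling law T(t) ~ 1/sqrt(t); the sketch below implements that classical version, not the paper's thickness-aware quadrupole modification:

    import numpy as np

    def dac(thermogram, times, i_ref):
        """Classical DAC: extrapolate the sound-area cooling from an early
        reference frame using T(t) ~ 1/sqrt(t), then subtract it.
        times must start after the flash (t > 0)."""
        t = times[:, None, None]
        sound = thermogram[i_ref] * np.sqrt(times[i_ref] / t)
        return thermogram - sound        # DAC image sequence

    # Sanity check on a synthetic defect-free sequence: DAC stays near 0.
    T, H, W = 100, 8, 8
    times = np.linspace(0.05, 5.0, T)
    seq = (1.0 / np.sqrt(times))[:, None, None] * np.ones((T, H, W))
    print(np.abs(dac(seq, times, i_ref=3)).max())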
User and technical documentation
NASA Astrophysics Data System (ADS)
1988-09-01
The program LIBRATE calculates velocities for trajectories from low earth orbit (LEO) to four of the five libration points (L2, L3, L4, and L5), and from low lunar orbit (LLO) to libration points L1 and L2. The flight to be analyzed departs from a circular orbit of any altitude and inclination about the Earth or Moon and finishes in a circular orbit about the Earth at the desired libration point within a specified flight time. The program produces a matrix of the delta-Vs needed to complete the desired flight. The user specifies the departure orbit and the maximum flight time. A matrix is then developed with 10 inclinations, ranging from 0 to 90 degrees, forming the columns, and 19 possible flight times, ranging from the input flight time down to 36 hours less than the input value in decrements of 2 hours, forming the rows. This matrix is presented in three different reports, including the total delta-Vs and both of the delta-V components discussed. The input required from the user to define the flight is discussed, and the contents of the three output reports are described. Instructions needed to execute the program are also included.
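The report layout described above (10 inclination columns by 19 flight-time rows) can be made concrete with a short sketch; the delta-V function here is a fabricated placeholder, since the real program integrates the trajectory, and only the matrix structure carries over:

    import numpy as np

    # Placeholder: the real program integrates trajectories; this fake
    # delta-V function only lets the report layout be shown.
    def total_delta_v(incl_deg, flight_hr):
        return 3.1 + 0.002 * incl_deg + 10.0 / flight_hr   # km/s, fake

    max_time = 120.0                               # user input, hours
    inclinations = np.linspace(0.0, 90.0, 10)      # 10 columns, 0-90 deg
    flight_times = max_time - 2.0 * np.arange(19)  # 19 rows, 2 h steps

    report = np.array([[total_delta_v(i, t) for i in inclinations]
                       for t in flight_times])
    print(report.shape)                            # (19, 10)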
Proposal for a definition of lifelong premature ejaculation based on epidemiological stopwatch data.
Waldinger, Marcel D; Zwinderman, Aeilko H; Olivier, Berend; Schweitzer, Dave H
2005-07-01
Consensus on a definition of premature ejaculation has not yet been reached because of debates based on subjective authority opinions and nonstandardized assessment methods for measuring ejaculation time and ejaculation control. To provide a definition for lifelong premature ejaculation that is based on epidemiological evidence, including the neurobiological and psychological approach. We used the 0.5 and 2.5 percentiles as accepted standards of disease definition in a skewed distribution. We applied these percentiles to a stopwatch-determined intravaginal ejaculation latency time (IELT) distribution of 491 nonselected men from five different countries. The practical consequences of the 0.5% and 2.5% cutoff points for disease definition were taken into consideration by reviewing current knowledge of feelings of control and satisfaction in relation to ejaculatory performance in the general male population. Literature arguments to be used in a proposed consensus on a definition of premature ejaculation. The stopwatch-determined IELT distribution is positively skewed. The 0.5 percentile equates to an IELT of 0.9 minute and the 2.5 percentile to an IELT of 1.3 minutes. However, there are no available data in the literature on feelings of control or satisfaction in relation to ejaculatory latency time in the general male population. Random male cohort studies are needed to end speculation on this subject. Exact stopwatch assessment of IELT in a multinational study led us to propose that all men with an IELT of less than 1 minute (belonging to the 0.5 percentile) have "definite" premature ejaculation, while men with IELTs between 1 and 1.5 minutes (between the 0.5 and 2.5 percentiles) have "probable" premature ejaculation. Severity of premature ejaculation (nonsymptomatic, mild, moderate, severe) should be defined in terms of associated psychological problems. We define lifelong premature ejaculation as a neurobiological dysfunction with an unacceptably increased risk of developing sexual and psychological problems at any time in life. By redefining premature ejaculation from an authority-defined disorder to a dysfunction based on epidemiological evidence, it is possible to establish consensus. Additional epidemiological stopwatch studies are needed for a final decision on IELT values at both percentile cutoff points.
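The percentile-based cutoff idea is easy to demonstrate on a positively skewed sample; the data below are synthetic (a lognormal draw), not the five-country study data, and serve only to show the computation:

    import numpy as np

    rng = np.random.default_rng(42)
    # Illustrative positively skewed IELT sample in minutes; synthetic,
    # not the five-country study data.
    ielt = rng.lognormal(mean=1.6, sigma=0.8, size=491)

    p_definite, p_probable = np.percentile(ielt, [0.5, 2.5])
    print(f"0.5th percentile: {p_definite:.1f} min ('definite' cutoff)")
    print(f"2.5th percentile: {p_probable:.1f} min ('probable' cutoff)")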
a Modeling Method of Fluttering Leaves Based on Point Cloud
NASA Astrophysics Data System (ADS)
Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.
2017-09-01
Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: the rotation fall, the roll fall and the screw-roll fall. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time needs in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
Stone, William J.
1986-01-01
A zero-home locator includes a fixed phototransistor switch and a moveable actuator including two symmetrical, opposed wedges, each wedge defining a point at which switching occurs. The zero-home location is the average of the positions of the points defined by the wedges.
Scher, Howard I.; Halabi, Susan; Tannock, Ian; Morris, Michael; Sternberg, Cora N.; Carducci, Michael A.; Eisenberger, Mario A.; Higano, Celestia; Bubley, Glenn J.; Dreicer, Robert; Petrylak, Daniel; Kantoff, Philip; Basch, Ethan; Kelly, William Kevin; Figg, William D.; Small, Eric J.; Beer, Tomasz M.; Wilding, George; Martin, Alison; Hussain, Maha
2014-01-01
Purpose To update eligibility and outcome measures in trials that evaluate systemic treatment for patients with progressive prostate cancer and castrate levels of testosterone. Methods A committee of investigators experienced in conducting trials for prostate cancer defined new consensus criteria by reviewing previous criteria, Response Evaluation Criteria in Solid Tumors (RECIST), and emerging trial data. Results The Prostate Cancer Clinical Trials Working Group (PCWG2) recommends a two-objective paradigm: (1) controlling, relieving, or eliminating disease manifestations that are present when treatment is initiated and (2) preventing or delaying disease manifestations expected to occur. Prostate cancers progressing despite castrate levels of testosterone are considered castration resistant and not hormone refractory. Eligibility is defined using standard disease assessments to authenticate disease progression, prior treatment, distinct clinical subtypes, and predictive models. Outcomes are reported independently for prostate-specific antigen (PSA), imaging, and clinical measures, avoiding grouped categorizations such as complete or partial response. In most trials, early changes in PSA and/or pain are not acted on without other evidence of disease progression, and treatment should be continued for at least 12 weeks to ensure adequate drug exposure. Bone scans are reported as “new lesions” or “no new lesions,” changes in soft-tissue disease assessed by RECIST, and pain using validated scales. Defining eligibility for prevent/delay end points requires attention to estimated event frequency and/or random assignment to a control group. Conclusion PCWG2 recommends increasing emphasis on time-to-event end points (ie, failure to progress) as decision aids in proceeding from phase II to phase III trials. Recommendations will evolve as data are generated on the utility of intermediate end points to predict clinical benefit. PMID:18309951
Tracing Personalized Health Curves during Infections
Schneider, David S.
2011-01-01
It is difficult to describe host–microbe interactions in a manner that deals well with both pathogens and mutualists. Perhaps a way can be found using an ecological definition of tolerance, where tolerance is defined as the dose response curve of health versus parasite load. To plot tolerance, individual infections are summarized by reporting the maximum parasite load and the minimum health for a population of infected individuals and the slope of the resulting curve defines the tolerance of the population. We can borrow this method of plotting health versus microbe load in a population and make it apply to individuals; instead of plotting just one point that summarizes an infection in an individual, we can plot the values at many time points over the course of an infection for one individual. This produces curves that trace the course of an infection through phase space rather than over a more typical timeline. These curves highlight relationships like recovery and point out bifurcations that are difficult to visualize with standard plotting techniques. Only nine archetypical curves are needed to describe most pathogenic and mutualistic host–microbe interactions. The technique holds promise as both a qualitative and quantitative approach to dissect host–microbe interactions of all kinds. PMID:21957398
Elevation Difference and Bouguer Anomaly Analysis Tool (EDBAAT) User's Guide
Smittle, Aaron M.; Shoberg, Thomas G.
2017-06-16
This report describes a software tool that imports gravity anomaly point data from the Gravity Database of the United States (GDUS) of the National Geospatial-Intelligence Agency and University of Texas at El Paso along with elevation data from The National Map (TNM) of the U.S. Geological Survey that lie within a user-specified geographic area of interest. Further, the tool integrates these two sets of data spatially and analyzes the consistency of the elevation of each gravity station from the GDUS with TNM elevation data; it also evaluates the consistency of gravity anomaly data within the GDUS data repository. The tool bins the GDUS data based on user-defined criteria of elevation misfit between the GDUS and TNM elevation data. It also provides users with a list of points from the GDUS data, which have Bouguer anomaly values that are considered outliers (two standard deviations or greater) with respect to other nearby GDUS anomaly data. “Nearby” can be defined by the user at time of execution. These outputs should allow users to quickly and efficiently choose which points from the GDUS would be most useful in reconnaissance studies or in augmenting and extending the range of individual gravity studies.
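A minimal Python sketch of the outlier test described above, flagging stations whose Bouguer anomaly deviates by two or more standard deviations from nearby stations; the neighborhood radius and the data layout are illustrative assumptions, not EDBAAT's actual interface:

```python
import numpy as np

def flag_bouguer_outliers(coords, anomalies, radius_m=5000.0, n_sigma=2.0):
    """Return indices of stations whose anomaly differs from the mean of
    its neighbors (within radius_m) by at least n_sigma standard deviations."""
    coords = np.asarray(coords, float)        # (N, 2) projected x, y in meters
    anomalies = np.asarray(anomalies, float)  # (N,) Bouguer anomalies (mGal)
    outliers = []
    for i, (p, a) in enumerate(zip(coords, anomalies)):
        d = np.hypot(*(coords - p).T)
        near = (d <= radius_m) & (d > 0)
        if near.sum() < 3:                    # too few neighbors to judge
            continue
        mu, sd = anomalies[near].mean(), anomalies[near].std(ddof=1)
        if sd > 0 and abs(a - mu) >= n_sigma * sd:
            outliers.append(i)
    return outliers
```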
Unsteady three-dimensional flow separation
NASA Technical Reports Server (NTRS)
Hui, W. H.
1988-01-01
A concise mathematical framework is constructed to study the topology of steady 3-D separated flows of an incompressible, or a compressible viscous fluid. Flow separation is defined by the existence of a stream surface which intersects with the body surface. The line of separation is itself a skin-friction line. Flow separation is classified as being either regular or singular, depending respectively on whether the line of separation contains only a finite number of singular points or is a singular line of the skin-friction field. The special cases of 2-D and axisymmetric flow separation are shown to be of singular type. In regular separation it is shown that a line of separation originates from a saddle point of separation of the skin-friction field and ends at nodal points of separation. Unsteady flow separation is defined relative to a coordinate system fixed to the body surface. It is shown that separation of an unsteady 3-D incompressible viscous flow at time t, when viewed from such a frame of reference, is topologically the same as that of the fictitious steady flow obtained by freezing the unsteady flow at the instant t. Examples are given showing effects of various forms of flow unsteadiness on flow separation.
Fast and robust shape diameter function.
Chen, Shuangmin; Liu, Taijun; Shu, Zhenyu; Xin, Shiqing; He, Ying; Tu, Changhe
2018-01-01
The shape diameter function (SDF) is a scalar function defined on a closed manifold surface, measuring the neighborhood diameter of the object at each point. Owing to its pose-oblivious property, the SDF is widely used in shape analysis, segmentation, and retrieval. However, computing the SDF is expensive, since one has to place an inverted cone at each point and then average the penetration distances of a number of rays inside the cone. Furthermore, shape diameters are highly sensitive to local geometric features and to the normal vectors, which diminishes their applicability to real-world meshes, which often contain rich geometric detail and/or various types of defects, such as noise and gaps. To increase the robustness of the SDF and extend it to a wide range of 3D models, we define the SDF by offsetting the input object slightly. This seemingly minor change brings three significant benefits. First, it allows us to compute the SDF robustly, since the offset surface gives reliable normal vectors. Second, it runs many times faster, since at each point we only need to compute the penetration distance along a single direction rather than tens of directions. Third, our method does not require watertight surfaces as input; it supports both point clouds and meshes with noise and gaps. Extensive experimental results show that the offset-surface-based SDF is robust to noise, insensitive to geometric detail, and runs about 10 times faster than the existing method. We also demonstrate its usefulness in two typical applications, shape retrieval and shape segmentation, and observe a significant improvement over the existing SDF.
Organic-matter loading determines regime shifts and alternative states in an aquatic ecosystem
Sirota, Jennie; Baiser, Benjamin; Gotelli, Nicholas J.; Ellison, Aaron M.
2013-01-01
Slow changes in underlying state variables can lead to “tipping points,” rapid transitions between alternative states (“regime shifts”) in a wide range of complex systems. Tipping points and regime shifts routinely are documented retrospectively in long time series of observational data. Experimental induction of tipping points and regime shifts is rare, but could lead to new methods for detecting impending tipping points and forestalling regime shifts. By using controlled additions of detrital organic matter (dried, ground arthropod prey), we experimentally induced a shift from aerobic to anaerobic states in a miniature aquatic ecosystem: the self-contained pools that form in leaves of the carnivorous northern pitcher plant, Sarracenia purpurea. In unfed controls, the concentration of dissolved oxygen ([O2]) in all replicates exhibited regular diurnal cycles associated with daytime photosynthesis and nocturnal plant respiration. In low prey-addition treatments, the regular diurnal cycles of [O2] were disrupted, but a regime shift was not detected. In high prey-addition treatments, the variance of the [O2] time series increased until the system tipped from an aerobic to an anaerobic state. In these treatments, replicate [O2] time series predictably crossed a tipping point at ∼45 h as [O2] was decoupled from diurnal cycles of photosynthesis and respiration. Increasing organic-matter loading led to predictable changes in [O2] dynamics, with high loading consistently driving the system past a well-defined tipping point. The Sarracenia microecosystem functions as a tractable experimental system in which to explore the forecasting and management of tipping points and alternative regimes. PMID:23613583
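The rising variance that precedes the tipping point can be tracked with a simple rolling-variance indicator; the sketch below uses a fabricated [O2] trace for illustration only:

```python
import numpy as np

def rolling_variance(series, window):
    """Variance of a signal in a sliding window; a sustained rise is the
    classic early-warning signal preceding a tipping point."""
    s = np.asarray(series, float)
    return np.array([s[i:i + window].var(ddof=1)
                     for i in range(len(s) - window + 1)])

# Hypothetical [O2] trace: a diurnal cycle that destabilizes and tips
t = np.arange(0, 96, 0.25)                               # hours
o2 = 6 + 2 * np.sin(2 * np.pi * t / 24) * np.exp(t / 96) - 0.04 * t
o2 += np.random.default_rng(1).normal(0, 0.2, t.size)
var = rolling_variance(o2, window=48)                    # 12 h window
print("variance ratio, last vs first window:", var[-1] / var[0])
```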
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, R; Zhu, X; Li, S
Purpose: High dose rate (HDR) brachytherapy forward planning is principally an iterative process; hence, plan quality is affected by planners' experience and limited planning time, which may lead to sporadic errors and inconsistencies in planning. A statistical tool based on previously approved clinical treatment plans would help maintain consistent planning quality and improve the efficiency of second checking. Methods: An independent dose calculation tool was developed from commercial software. Thirty-three previously approved cervical HDR plans with the same prescription dose (550 cGy), applicator type, and treatment protocol were examined, and ICRU-defined reference point doses (bladder, vaginal mucosa, rectum, and points A/B) along with dwell times were collected. The tool then calculated an appropriate range, with a 95% confidence interval, for each parameter obtained; these ranges serve as benchmarks for evaluating the same parameters in future HDR treatment plans. Model quality was verified using five randomly selected approved plans from the same dataset. Results: Dose variations appear to be larger at the bladder and mucosa reference points than at the rectum. Most reference point doses from the verification plans fell within the predicted ranges, except the doses at two rectal points and two reference-position-A points (owing to rectal anatomical variations and clinical adjustment of prescription points, respectively). Similar results were obtained for tandem and ring dwell times, despite relatively larger uncertainties. Conclusion: This statistical tool provides insight into the clinically acceptable range of cervical HDR plans, which could be useful in plan checking and in identifying potential planning errors, thus improving the consistency of plan quality.
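A sketch of the benchmarking idea in Python: acceptance ranges (mean ± 1.96 SD, approximating 95% bounds under a normality assumption) are built from approved plans and used to flag out-of-range parameters in a new plan. Parameter names and numbers are fabricated for illustration:

```python
import numpy as np

def benchmark_ranges(approved_plans, keys, z=1.96):
    """Per-parameter acceptance range (mean ± z·SD) from approved plans."""
    ranges = {}
    for k in keys:
        vals = np.array([p[k] for p in approved_plans], float)
        m, s = vals.mean(), vals.std(ddof=1)
        ranges[k] = (m - z * s, m + z * s)
    return ranges

def check_plan(plan, ranges):
    """Return the parameters falling outside their benchmark range."""
    return {k: v for k, v in plan.items()
            if k in ranges and not ranges[k][0] <= v <= ranges[k][1]}

# Hypothetical cervical HDR reference-point doses (cGy), 33 prior plans
history = [{"bladder": 310 + d, "rectum": 250 + d / 2, "point_A": 550 + d / 5}
           for d in np.random.default_rng(2).normal(0, 20, 33)]
ranges = benchmark_ranges(history, ["bladder", "rectum", "point_A"])
print(check_plan({"bladder": 420.0, "rectum": 255.0, "point_A": 551.0}, ranges))
```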
Recombination of polynucleotide sequences using random or defined primers
Arnold, Frances H.; Shao, Zhixin; Affholter, Joseph A.; Zhao, Huimin H; Giver, Lorraine J.
2000-01-01
A method for in vitro mutagenesis and recombination of polynucleotide sequences based on polymerase-catalyzed extension of primer oligonucleotides is disclosed. The method involves priming template polynucleotide(s) with random-sequence or defined-sequence primers to generate a pool of short DNA fragments with a low level of point mutations. The DNA fragments are subjected to denaturation followed by annealing and further enzyme-catalyzed DNA polymerization. This procedure is repeated a sufficient number of times to produce full-length genes comprising mutants of the original template polynucleotides. These genes can be further amplified by the polymerase chain reaction and cloned into a vector for expression of the encoded proteins.
Terahertz plasmonic Bessel beamformer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monnai, Yasuaki; Shinoda, Hiroyuki; Jahn, David
We experimentally demonstrate terahertz Bessel beamforming based on the concept of plasmonics. The proposed planar structure is made of concentric metallic grooves with a subwavelength spacing that couple to a point source to create tightly confined surface waves, or spoof surface plasmon polaritons. Concentric scatterers periodically incorporated at a wavelength scale allow the surface waves to be launched into free space to define a Bessel beam. The Bessel beam defined at 0.29 THz has been characterized through terahertz time-domain spectroscopy. This approach is capable of generating Bessel beams with planar structures, as opposed to bulky axicon lenses, and can be readily integrated with solid-state terahertz sources.
Economic and Microbiologic Evaluation of Single-Dose Vial Extension for Hazardous Drugs
Rowe, Erinn C.; Savage, Scott W.; Rutala, William A.; Weber, David J.; Gergen-Teague, Maria; Eckel, Stephen F.
2012-01-01
Purpose: The update of US Pharmacopeia Chapter <797> in 2008 included guidelines stating that single-dose vials (SDVs) opened and maintained in an International Organization for Standardization Class 5 environment can be used for up to 6 hours after initial puncture. A study was conducted to evaluate the cost of discarding vials after 6 hours and to further test sterility of vials beyond this time point, subsequently defined as the beyond-use date (BUD). Methods: Financial determination of SDV waste included 2 months of retrospective review of all doses prescribed. Additionally, actual waste log data were collected. Active and control vials (prepared using sterilized trypticase soy broth) were recovered, instead of discarded, at the defined 6-hour BUD. Results: The institution-specific waste of 19 selected SDV medications discarded at 6 hours was calculated at $766,000 annually, and tracking waste logs for these same medications was recorded at $770,000 annually. Microbiologic testing of vial extension beyond 6 hours showed that 11 (1.86%) of 592 samples had one colony-forming unit on one of two plates. Positive plates were negative at subsequent time points, and all positives were single isolates most likely introduced during the plating process. Conclusion: The cost of discarding vials at 6 hours was significant for hazardous medications in a large academic medical center. On the basis of microbiologic data, vial BUD extension demonstrated a contamination frequency of 1.86%, which likely represented exogenous contamination; vial BUD extension for the tested drugs showed no growth at subsequent time points and could provide an annual cost savings of more than $600,000. PMID:23180998
Entropic measures of individual mobility patterns
NASA Astrophysics Data System (ADS)
Gallotti, Riccardo; Bazzani, Armando; Degli Esposti, Mirko; Rambaldi, Sandro
2013-10-01
Understanding human mobility from a microscopic point of view may represent a fundamental breakthrough for the development of a statistical physics for cognitive systems and it can shed light on the applicability of macroscopic statistical laws for social systems. Even if the complexity of individual behaviors prevents a true microscopic approach, the introduction of mesoscopic models allows the study of the dynamical properties for the non-stationary states of the considered system. We propose to compute various entropy measures of the individual mobility patterns obtained from GPS data that record the movements of private vehicles in the Florence district, in order to point out new features of human mobility related to the use of time and space and to define the dynamical properties of a stochastic model that could generate similar patterns. Moreover, we can relate the predictability properties of human mobility to the distribution of time passed between two successive trips. Our analysis suggests the existence of a hierarchical structure in the mobility patterns which divides the performed activities into three different categories, according to the time cost, with different information contents. We show that a Markov process defined by using the individual mobility network is not able to reproduce this hierarchy, which seems the consequence of different strategies in the activity choice. Our results could contribute to the development of governance policies for a sustainable mobility in modern cities.
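One of the simpler entropy measures used in such analyses is the Shannon entropy of the sequence of visited locations; a minimal sketch with synthetic trip data (the representation of trips is an assumption):

```python
import numpy as np
from collections import Counter

def location_entropy(visits):
    """Shannon entropy (bits) of a sequence of visited locations; one of
    the simpler 'uncorrelated' measures of mobility predictability."""
    counts = np.array(list(Counter(visits).values()), float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical trip destinations for one driver
trips = ["home", "work", "home", "shop", "home", "work", "home", "gym"]
print(f"entropy = {location_entropy(trips):.2f} bits")
```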
NASA Astrophysics Data System (ADS)
Park, Sun-Youp; Choi, Jin; Roh, Dong-Goo; Park, Maru; Jo, Jung Hyun; Yim, Hong-Suh; Park, Young-Sik; Bae, Young-Ho; Park, Jang-Hyun; Moon, Hong-Kyu; Choi, Young-Jun; Cho, Sungki; Choi, Eun-Jung
2016-09-01
As described in a previous paper (Park et al. 2013), the detector subsystem of the optical wide-field patrol (OWL) provides many observational data points for a single artificial satellite or piece of space debris in the form of small streaks, using a chopper system and a time tagger. The position and the corresponding time data are matched by assuming that the length of a streak on the CCD frame is proportional to the exposure duration during which the chopper blades do not obscure the CCD window. In the previous study, however, the length was measured using the diagonal of the rectangular image area containing the streak; the results were ambiguous and inaccurate, permitting mismatches between position and time data. Furthermore, because only one (position, time) data point is created from each streak, the efficiency of the observation decreases. To define the length of a streak correctly, it is important to locate its endpoints. In this paper, a method using a differential convolution mask pattern is tested. This method can be used to obtain the positions where the pixel values change sharply. These endpoints can be regarded as directly detected positional data, and the number of data points is doubled as a result.
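The endpoint-detection idea can be sketched with a one-dimensional derivative-of-Gaussian mask: the convolution responds most strongly where pixel values change sharply, marking the two ends of the streak. This is a schematic reading of the method, not the paper's exact mask:

```python
import numpy as np

def streak_endpoints(profile, sigma=2.0):
    """Locate streak endpoints along a 1D intensity profile by convolving
    with a derivative-of-Gaussian mask and taking the strongest positive
    and negative responses (mask width is an assumption)."""
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1, dtype=float)
    mask = -x * np.exp(-x**2 / (2 * sigma**2))   # derivative of a Gaussian
    resp = np.convolve(profile, mask, mode="same")
    start = int(np.argmax(resp))                 # dark -> bright edge
    end = int(np.argmin(resp))                   # bright -> dark edge
    return min(start, end), max(start, end)

# Hypothetical profile: background with a bright streak from 40 to 110
p = np.zeros(160); p[40:110] = 100.0
p += np.random.default_rng(3).normal(0, 2, p.size)
print(streak_endpoints(p))
```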
Acoustic field in unsteady moving media
NASA Technical Reports Server (NTRS)
Bauer, F.; Maestrello, L.; Ting, L.
1995-01-01
In the interaction of an acoustic field with a moving airframe the authors encounter a canonical initial value problem for an acoustic field induced by an unsteady source distribution q(t,x), with q ≡ 0 for t ≤ 0, in a medium moving with a uniform unsteady velocity U(t)i in the coordinate system x fixed on the airframe. Signals issued from a source point S in the domain of dependence D of an observation point P at time t may arrive at P more than once, corresponding to different retarded times τ in the interval (0, t). The number of arrivals is called the multiplicity of the point S. The multiplicity equals 1 if the velocity U remains subsonic and can be greater when U becomes supersonic. For an unsteady uniform flow U(t)i, rules are formulated for defining the smallest number I of subdomains V_i of D whose union equals D. Each subdomain has multiplicity 1 and a formula for the corresponding retarded time. The number of subdomains V_i with nonempty intersection is the multiplicity m of the intersection, which is at most I. Examples demonstrating these rules are presented for media at accelerating and/or decelerating supersonic speed.
Portal Surveys of Time-Out Drinking Locations: A Tool for Studying Binge Drinking and AOD Use
ERIC Educational Resources Information Center
Voas, Robert B.; Furr-Holden, Debra; Lauer, Elizabeth; Bright, Kristin; Johnson, Mark B.; Miller, Brenda
2006-01-01
Portal surveys, defined as assessments occurring proximal to the entry point to a high-risk locale and immediately on exit, can be used in different settings to measure characteristics and behavior of attendees at an event of interest. This methodology has been developed to assess alcohol and other drug (AOD) use at specific events and has…
26 CFR 1.6050H-2 - Time, form, and manner of reporting interest received on qualified mortgage.
Code of Federal Regulations, 2014 CFR
2014-04-01
... taxpayer identification number (TIN) (as defined in section 7701(a)(41)) of the payor of record; (ii) The name, address, and TIN of the interest recipient; (iii) The amount of interest (other than points... paragraph (a)(2)(ii) of this section by including on Form 1098 (and Form 1096) the name, address, and TIN of...
26 CFR 1.6050H-2 - Time, form, and manner of reporting interest received on qualified mortgage.
Code of Federal Regulations, 2010 CFR
2010-04-01
... taxpayer identification number (TIN) (as defined in section 7701(a)(41)) of the payor of record; (ii) The name, address, and TIN of the interest recipient; (iii) The amount of interest (other than points... paragraph (a)(2)(ii) of this section by including on Form 1098 (and Form 1096) the name, address, and TIN of...
26 CFR 1.6050H-2 - Time, form, and manner of reporting interest received on qualified mortgage.
Code of Federal Regulations, 2011 CFR
2011-04-01
... taxpayer identification number (TIN) (as defined in section 7701(a)(41)) of the payor of record; (ii) The name, address, and TIN of the interest recipient; (iii) The amount of interest (other than points... paragraph (a)(2)(ii) of this section by including on Form 1098 (and Form 1096) the name, address, and TIN of...
26 CFR 1.6050H-2 - Time, form, and manner of reporting interest received on qualified mortgage.
Code of Federal Regulations, 2013 CFR
2013-04-01
... taxpayer identification number (TIN) (as defined in section 7701(a)(41)) of the payor of record; (ii) The name, address, and TIN of the interest recipient; (iii) The amount of interest (other than points... paragraph (a)(2)(ii) of this section by including on Form 1098 (and Form 1096) the name, address, and TIN of...
26 CFR 1.6050H-2 - Time, form, and manner of reporting interest received on qualified mortgage.
Code of Federal Regulations, 2012 CFR
2012-04-01
... taxpayer identification number (TIN) (as defined in section 7701(a)(41)) of the payor of record; (ii) The name, address, and TIN of the interest recipient; (iii) The amount of interest (other than points... paragraph (a)(2)(ii) of this section by including on Form 1098 (and Form 1096) the name, address, and TIN of...
A Commentary on "The Role of the Unit in Physics and Psychometrics" by Stephen Humphry
ERIC Educational Resources Information Center
Andrich, David
2011-01-01
This commentary examines the role of the unit from the perspective of the definition of measurement in physics as the ratio of two magnitudes, one of which is defined as the unit; it is an important and timely contribution to measurement in the social sciences. There are many different points that could be commented upon, but the author will…
The Analysis of Design of Robust Nonlinear Estimators and Robust Signal Coding Schemes.
1982-09-16
[Excerpt garbled in extraction; the legible fragments discuss the mean-square error of uniform versus nonuniform quantizers and define the n-times median-filtered signal, for which point p + 1 is the median of its window.]
Process for structural geologic analysis of topography and point data
Eliason, Jay R.; Eliason, Valerie L. C.
1987-01-01
A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered, and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes representing underlying geologic structure. Point data, such as fracture phenomena that can be related to fracture planes in 3-dimensional space, can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.
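The coplanar-analysis step reduces to a standard geometric test: two valley-segment vectors define a common plane when the offset between them has no component along their mutual normal. A sketch under assumed input conventions (the tolerance is not a value from the patent):

```python
import numpy as np

def coplanar_plane(a0, a1, b0, b1, tol=1e-3):
    """Test whether two 3D valley segments (a0->a1, b0->b1) lie in a
    common plane; if so, return that plane's unit normal, else None."""
    a0, a1, b0, b1 = (np.asarray(p, float) for p in (a0, a1, b0, b1))
    d1, d2 = a1 - a0, b1 - b0
    n = np.cross(d1, d2)
    if np.linalg.norm(n) < tol:        # parallel segments: no unique plane
        return None
    n /= np.linalg.norm(n)
    # Coplanar iff the offset between segments has no normal component
    if abs(np.dot(b0 - a0, n)) > tol:
        return None
    return n

# Two segments sharing the z = 0 plane -> normal (0, 0, 1)
print(coplanar_plane((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 2, 0)))
```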
Kyle, Simon D.; Miller, Christopher B.; Rogers, Zoe; Siriwardena, A. Niroshan; MacMahon, Kenneth M.; Espie, Colin A.
2014-01-01
Study Objectives: To investigate whether sleep restriction therapy (SRT) is associated with reduced objective total sleep time (TST), increased daytime somnolence, and impaired vigilance. Design: Within-subject, noncontrolled treatment investigation. Setting: Sleep research laboratory. Participants: Sixteen patients [10 female, mean age = 47.1 (10.8) y] with well-defined psychophysiological insomnia (PI), reporting TST ≤ 6 h. Interventions: Patients were treated with single-component SRT over a 4-w protocol, sleeping in the laboratory for 2 nights prior to treatment initiation and for 3 nights (SRT night 1, 8, 22) during the acute interventional phase. The psychomotor vigilance task (PVT) was completed at seven defined time points [day 0 (baseline), day 1,7,8,21,22 (acute treatment) and day 84 (3 mo)]. The Epworth Sleepiness Scale (ESS) was completed at baseline, w 1-4, and 3 mo. Measurement and results: Subjective sleep outcomes and global insomnia severity significantly improved before and after SRT. There was, however, a robust decrease in PSG-defined TST during acute implementation of SRT, by an average of 91 min on night 1, 78 min on night 8, and 69 min on night 22, relative to baseline (P < 0.001; effect size range = 1.60-1.80). During SRT, PVT lapses were significantly increased from baseline (at three of five assessment points, all P < 0.05; effect size range = 0.69-0.78), returning to baseline levels by 3 mo (P = 0.43). A similar pattern was observed for RT, with RTs slowing during acute treatment (at four of five assessment points, all P < 0.05; effect size range = 0.57-0.89) and returning to pretreatment levels at 3 mo (P = 0.78). ESS scores were increased at w 1, 2, and 3 (relative to baseline; all P < 0.05); by 3 mo, sleepiness had returned to baseline (normative) levels (P = 0.65). Conclusion: For the first time we show that acute sleep restriction therapy is associated with reduced objective total sleep time, increased daytime sleepiness, and objective performance impairment. Our data have important implications for implementation guidelines around the safe and effective delivery of cognitive behavioral therapy for insomnia. Citation: Kyle SD; Miller CB; Rogers Z; Siriwardena AN; MacMahon KM; Espie CA. Sleep restriction therapy for insomnia is associated with reduced objective total sleep time, increased daytime somnolence, and objectively impaired vigilance: implications for the clinical management of insomnia disorder. SLEEP 2014;37(2):229-237. PMID:24497651
A medulloblastoma showing an unusually long doubling time: reflection of its singular nature.
Doron, Omer; Zauberman, Jacob; Feldman, Ze'ev
2016-06-01
In this paper, we present the case of a 4-year-old male diagnosed with a desmoplastic, SHH-type medulloblastoma. Retrospectively, we discovered that the patient had undergone an MRI scan at 21 months for unrelated reasons, revealing a T1-enhanced lesion at the vermis, later recognized as the source of the tumor. This unique case provides a glimpse into the natural history of this tumor. Our ability to measure tumor volume at two defined time points, 31 months apart, enabled us to deduce the tumor's doubling time, defined as the duration of one cell cycle divided by the fraction of cycling cells, multiplied by the cell loss factor. Potential doubling time (Tpot) and actual doubling time (Td), calculated using the Gompertzian model, are the most clinically relevant quantities with regard to a tumor's response to radiotherapy. Here, we show an actual doubling time (Td) of 78 days and an extrapolated tumor diameter at the time of birth of 0.25 mm. These results support the medulloblastoma's embryonic origin and indicate a threefold longer actual doubling time compared with previous studies. Taking into account the reported range of medulloblastoma potential doubling times, we deduced a cell loss factor of between 48.9% and 95.5%, percentages in line with other malignant tumors. Although limited by its reliance on only two points in time, with the Gompertzian model completing the remainder of the curve, to the best of our knowledge this is the longest follow-up period reported for a medulloblastoma. We have described how a unique turn of events enabled us to glimpse the in situ development of a medulloblastoma over a 31-month period. Sometimes regarded as an idiosyncratic tumor comprising an array of molecular changes, the complexity of medulloblastoma is displayed here by revealing, for the first time, an actual doubling time three- to fourfold the previously known length.
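Under a simple exponential-growth reading of two volume measurements (the paper itself completes the curve with a Gompertzian model, which this sketch does not reproduce), the actual doubling time follows from Td = Δt · ln 2 / ln(V2/V1); a sketch with illustrative numbers only:

```python
import numpy as np

def doubling_time(v1, v2, dt_days):
    """Volume doubling time from two measurements dt_days apart,
    assuming exponential growth between the scans."""
    return dt_days * np.log(2) / np.log(v2 / v1)

# Fabricated volumes: a tumor growing 2000-fold over ~31 months
print(f"Td = {doubling_time(0.01, 20.0, 31 * 30.4):.0f} days")
```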
Programmable growth of branched silicon nanowires using a focused ion beam.
Jun, Kimin; Jacobson, Joseph M
2010-08-11
Although significant progress has been made in spatially defining the position of material layers in vapor-liquid-solid (VLS) grown nanowires, less work has been carried out on deterministically defining the positions of nanowire branching points to facilitate more complicated structures beyond simple 1D wires. Work to date has focused on the growth of randomly branched nanowire structures. Here we develop a means of programmably designating nanowire branching points using focused ion beam-defined VLS catalytic points. The technique is repeatable without loss of fidelity, allowing multiple rounds of branching-point definition followed by branch growth, resulting in complex structures. The single-crystal nature of this approach allows us to describe the resulting structures with linear combinations of basis vectors in three-dimensional (3D) space. Finally, by etching the resulting 3D-defined wire structures, branched nanotubes were fabricated with interconnected nanochannels inside. We believe the techniques developed here comprise a useful tool for extending linear VLS nanowire growth to generalized 3D wire structures.
NASA Astrophysics Data System (ADS)
Rolfe, S. M.; Patel, M. R.; Gilmour, I.; Olsson-Francis, K.; Ringrose, T. J.
2016-06-01
Biomarker molecules, such as amino acids, are key to discovering whether life exists elsewhere in the Solar System. Raman spectroscopy, a technique capable of detecting biomarkers, will be on board future planetary missions including the ExoMars rover. Generally, the position of the strongest band in the spectra of amino acids is reported as the identifying band. However, for an unknown sample, it is desirable to define multiple characteristic bands for molecules to avoid any ambiguous identification. To date, there has been no definition of multiple characteristic bands for amino acids of interest to astrobiology. This study examined l-alanine, l-aspartic acid, l-cysteine, l-glutamine and glycine and defined several Raman bands per molecule for reference as characteristic identifiers. Per amino acid, 240 spectra were recorded and compared using established statistical tests including ANOVA. The number of characteristic bands defined were 10, 12, 12, 14 and 19 for l-alanine (strongest intensity band: 832 cm-1), l-aspartic acid (938 cm-1), l-cysteine (679 cm-1), l-glutamine (1090 cm-1) and glycine (875 cm-1), respectively. The intensity of bands differed by up to six times when several points on the crystal sample were rotated through 360°; to reduce this effect when defining characteristic bands for other molecules, we find that spectra should be recorded at a statistically significant number of points per sample to remove the effect of sample rotation. It is crucial that sets of characteristic Raman bands are defined for biomarkers that are targets for future planetary missions to ensure a positive identification can be made.
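The statistical screening step can be sketched as a one-way ANOVA across sample orientations, keeping only bands whose intensity shows no significant orientation effect; the data layout and significance level below are assumptions:

```python
import numpy as np
from scipy.stats import f_oneway

def consistent_bands(spectra_by_orientation, candidate_idx, alpha=0.05):
    """Keep candidate Raman bands whose intensities do NOT differ
    significantly across orientations, i.e. bands stable enough to
    serve as characteristic identifiers."""
    kept = []
    for i in candidate_idx:
        groups = [np.asarray(s)[:, i] for s in spectra_by_orientation]
        _, p = f_oneway(*groups)
        if p >= alpha:                  # no orientation effect detected
            kept.append(i)
    return kept

rng = np.random.default_rng(4)
# 4 orientations x 60 spectra x 3 bands; band 2 is rotation-sensitive
data = [rng.normal([10, 5, 2 + k], 0.5, size=(60, 3)) for k in range(4)]
print(consistent_bands(data, [0, 1, 2]))    # likely [0, 1]
```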
Trajectory Specification for Terminal Air Traffic: Pairwise Conflict Detection and Resolution
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Erzberger, Heinz
2017-01-01
Trajectory Specification is the explicit bounding and control of aircraft trajectories such that the position at any point in time is constrained to a precisely defined volume of space. The bounding space is defined by cross-track, along-track, and vertical tolerances relative to a reference trajectory that specifies position as a function of time. The tolerances are dynamic and will be based on the aircraft navigation capabilities and the current traffic situation. Assuming conformance, Trajectory Specification can guarantee safe separation for an arbitrary period of time even in the event of an air traffic control (ATC) system or datalink failure; hence it can help to achieve the high level of safety and reliability needed for ATC automation. It can also reduce the reliance on tactical backup systems during normal operation. This paper applies it to the terminal area around a major airport and presents algorithms and software for detecting and resolving conflicts. A representative set of pairwise conflicts was generated, and a fast-time simulation was run on them. All conflicts were successfully resolved in real time, demonstrating the computational feasibility of the concept.
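Conformance monitoring against such a specification reduces to decomposing the deviation from the reference position into along-track, cross-track, and vertical components and comparing each with its tolerance. A sketch; the tolerance values (meters) are illustrative assumptions:

```python
import numpy as np

def in_bounding_space(pos, ref_pos, along_unit, up=np.array([0.0, 0.0, 1.0]),
                      tol_along=1500.0, tol_cross=500.0, tol_vert=50.0):
    """Check whether an aircraft position conforms to its trajectory
    specification at the same instant in time."""
    d = np.asarray(pos, float) - np.asarray(ref_pos, float)
    vert = np.dot(d, up)
    horiz = d - vert * up
    along = np.dot(horiz, along_unit)
    cross = np.linalg.norm(horiz - along * along_unit)
    return (abs(along) <= tol_along and cross <= tol_cross
            and abs(vert) <= tol_vert)

# 200 m ahead, 100 m right, 10 m above the reference point: conforms
print(in_bounding_space([200.0, 100.0, 10.0], [0.0, 0.0, 0.0],
                        np.array([1.0, 0.0, 0.0])))
```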
Stiletto, R; Röthke, M; Schäfer, E; Lefering, R; Waydhas, Ch
2006-10-01
Patient safety has become one of the major aspects of clinical management in recent years, with research focused crucially on malpractice. In contrast to process analysis in nonmedical fields, however, the analysis of errors occurring during inpatient treatment has been neglected. Patient risk management can be defined as a structured procedure in a clinical unit with the aim of reducing harmful events. A risk point model was created based on a Delphi process and founded on the DIVI data register, and was evaluated in clinically working ICU departments participating in the register database. The results of the risk point evaluation will be integrated in the next database update. This may be a step toward improving the reliability of the register for measuring quality assessment in the ICU.
Investigation of the Parameters of Sealed Triple-Point Cells for Cryogenic Gases
NASA Astrophysics Data System (ADS)
Fellmuth, B.; Wolber, L.
2011-01-01
An overview is given of the parameters of a large number of sealed triple-point cells for the cryogenic gases hydrogen, oxygen, neon, and argon, determined within the framework of an international star intercomparison in order to optimize the measurement of melting curves and to establish complete and reliable uncertainty budgets for the realization of temperature fixed points. Special emphasis is given to the question of whether the parameters are primarily influenced by the cell design or by the properties of the fixed-point samples. To explain the surprisingly long thermal-recovery periods following the heat pulses of the intermittent heating through the melting range, a simple model is developed based on a newly defined heat-capacity equivalent, which accounts for the heat of fusion and a melting-temperature inhomogeneity. The analysis of the recovery using a graded set of exponential functions with different time constants is also explained in detail.
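Fitting the recovery with a graded set of exponentials is straightforward with nonlinear least squares; the two-term model, starting values, and synthetic data below are assumptions for illustration, not the paper's analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, a1, tau1, a2, tau2, c):
    """Two-term exponential model for post-pulse thermal recovery."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + c

t = np.linspace(0, 600, 301)                     # seconds
rng = np.random.default_rng(5)
y = recovery(t, 0.8, 30.0, 0.3, 200.0, 0.0) + rng.normal(0, 0.005, t.size)
popt, _ = curve_fit(recovery, t, y, p0=(1.0, 20.0, 0.5, 150.0, 0.0))
print("fitted time constants (s):", popt[1], popt[3])
```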
Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?
Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D
2018-02-01
Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico, activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm2, and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and the complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density beyond which further error reduction in the re-interpolated maps was minimal. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm2 (focal activation), 1.05 ± 0.32 points/cm2 (macro-re-entry), and 1.23 ± 0.26 points/cm2 (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm2; re-entry 1.44 ± 0.49 points/cm2; spiral wave 1.50 ± 0.34 points/cm2, P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm2, P = 0.0008). More complex chamber geometry was likewise associated with increased optimal sampling density (0.61 ± 0.22 points/cm2 vs. 1.0 ± 0.34 points/cm2, P = 0.0015). Optimal sampling densities can be identified to maximize the diagnostic yield of LAT maps: greater sampling density is required to correctly reveal complex activation and to represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm2. Published on behalf of the European Society of Cardiology
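The in silico part of such a study can be approximated in a few lines: subsample a known LAT field at a given density, re-interpolate, and measure the error. The planar wavefront below is an assumption for demonstration, not the paper's simulation:

```python
import numpy as np
from scipy.interpolate import griddata

def interpolation_error(lat_true, xy, density, extent_cm=4.0, seed=0):
    """Mean absolute LAT error after re-interpolating from a random
    subsample at `density` points/cm^2."""
    rng = np.random.default_rng(seed)
    n = max(4, int(density * extent_cm ** 2))
    idx = rng.choice(len(xy), size=n, replace=False)
    est = griddata(xy[idx], lat_true[idx], xy, method="linear")
    ok = ~np.isnan(est)                   # points outside the convex hull -> NaN
    return float(np.mean(np.abs(est[ok] - lat_true[ok])))

# 4 x 4 cm sheet with a planar wave at 0.5 m/s: LAT = 20 * x (ms)
g = np.linspace(0, 4, 41)
X, Y = np.meshgrid(g, g)
xy = np.column_stack([X.ravel(), Y.ravel()])
lat = 20.0 * xy[:, 0]
for d in (0.25, 1.0, 4.0):                # points/cm^2
    print(d, "->", round(interpolation_error(lat, xy, d), 2), "ms")
```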
On the spectrum of inhomogeneous turbulence
NASA Technical Reports Server (NTRS)
Trevino, G.
1979-01-01
Inhomogeneous turbulence is defined as turbulence whose statistics are functions of spatial position. The turbulence spectrum is investigated, in particular how its shape varies from point to point in space as a consequence of well-defined spatial variations in the turbulence intensity and/or integral scale.
Implementing system simulation of C3 systems using autonomous objects
NASA Technical Reports Server (NTRS)
Rogers, Ralph V.
1987-01-01
The basis of all conflict recognition in simulation is a common frame of reference. Synchronous discrete-event simulation relies on fixed points in time as the basic frame of reference; asynchronous discrete-event simulation relies on fixed points in the model space. Neither approach provides sufficient support for autonomous objects. The use of a spatial template as a frame of reference is proposed to address these insufficiencies. The concept of a spatial template is defined and an implementation approach offered. The use of this approach to analyze the integration of sensor data associated with Command, Control, and Communication (C3) systems is discussed.
Extending the time window for endovascular procedures according to collateral pial circulation.
Ribo, Marc; Flores, Alan; Rubiera, Marta; Pagola, Jorge; Sargento-Freitas, Joao; Rodriguez-Luna, David; Coscojuela, Pilar; Maisterra, Olga; Piñeiro, Socorro; Romero, Francisco J; Alvarez-Sabin, Jose; Molina, Carlos A
2011-12-01
Good collateral pial circulation (CPC) predicts a favorable outcome in patients undergoing intra-arterial procedures. We aimed to determine whether CPC status may be used to decide about pursuing recanalization efforts. The pial collateral score (0-5) was determined on the initial angiogram. We considered CPC good when the pial collateral score was <3, defined total time of ischemia (TTI) as the onset-to-recanalization time, and defined clinical improvement as a >4-point decline between the admission and discharge National Institutes of Health Stroke Scale (NIHSS) scores. We studied CPC in 61 patients (31 middle cerebral artery, 30 internal carotid artery). Good-CPC patients (n=21 [34%]) had lower discharge NIHSS scores (7 versus 21; P=0.02) and smaller infarcts (56 mL versus 238 mL; P<0.001). In poor-CPC patients, a receiver operating characteristic curve defined a TTI cutoff point of <300 minutes (sensitivity 67%, specificity 75%) that best predicted clinical improvement (TTI<300: 66.7% versus TTI>300: 25%; P=0.05). For good-CPC patients, no temporal cutoff point could be defined. Although clinical improvement was similar for patients recanalizing within 300 minutes (poor CPC: 60% versus good CPC: 85.7%; P=0.35), the likelihood of clinical improvement was 3-fold higher after 300 minutes only in good-CPC patients (23.1% versus 90.1%; P=0.01). Similarly, infarct volume was reduced 7-fold in good as compared with poor CPC patients only when TTI>300 minutes (TTI<300: poor CPC: 145 mL versus good CPC: 93 mL; P=0.56; TTI>300: poor CPC: 217 mL versus good CPC: 33 mL; P<0.01). After adjusting for age and baseline NIHSS score, TTI<300 emerged as an independent predictor of clinical improvement in poor-CPC patients (OR, 6.6; 95% CI, 1.01-44.3; P=0.05) but not in good-CPC patients. In logistic regression, good CPC independently predicted clinical improvement after adjusting for TTI, admission NIHSS score, and age (OR, 12.5; 95% CI, 1.6-74.8; P=0.016). Good CPC predicts a better clinical response to intra-arterial treatment beyond 5 hours from onset. In patients with stroke receiving endovascular treatment, identification of good CPC may help physicians when considering pursuing recanalization efforts in late time windows.
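The receiver-operating-characteristic cutoff can be reproduced schematically by maximizing Youden's J over candidate ischemia-time thresholds; the cohort below is fabricated for illustration only:

```python
import numpy as np

def best_cutoff(times, improved):
    """Choose the TTI cutoff maximizing Youden's J
    (sensitivity + specificity - 1) for predicting improvement."""
    times, improved = np.asarray(times, float), np.asarray(improved, bool)
    best_t, best_j = None, -1.0
    for t in np.unique(times):
        pred = times < t                  # short TTI predicts improvement
        sens = (pred & improved).sum() / improved.sum()
        spec = (~pred & ~improved).sum() / (~improved).sum()
        if sens + spec - 1 > best_j:
            best_j, best_t = sens + spec - 1, t
    return best_t, best_j

rng = np.random.default_rng(6)
tti = rng.uniform(120, 480, 60)                       # minutes
outcome = rng.random(60) < np.where(tti < 300, 0.67, 0.25)
print(best_cutoff(tti, outcome))
```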
Alkhawaldeh, Khaled; Biersack, Hans-J; Henke, Anna; Ezziddin, Samer
2011-06-01
The aim of this study was to assess the utility of dual-time-point F-18 fluorodeoxyglucose positron emission tomography (F-18 FDG PET) in differentiating benign from malignant pleural disease in patients with non-small-cell lung cancer. A total of 61 patients with non-small-cell lung cancer and pleural effusion were included in this retrospective study. All patients had whole-body FDG PET/CT imaging at 60 ± 10 minutes after FDG injection, and 31 patients had delayed second-time-point imaging of the chest repeated at 90 ± 10 minutes. Maximum standardized uptake values (SUV(max)) and the average percent change in SUV(max) (%SUV) between time point 1 and time point 2 were calculated. Malignancy was defined using the following criteria: (1) visual assessment using a 3-point grading scale; (2) SUV(max) ≥2.4; (3) %SUV ≥ +9; and (4) SUV(max) ≥2.4 and/or %SUV ≥ +9. An analysis of variance test and receiver operating characteristic analysis were used for statistical analysis, with P < 0.05 considered significant. Follow-up revealed 29 patients with malignant pleural disease and 31 patients with benign pleural effusion. The average SUV(max) in malignant effusions was 6.5 ± 4 versus 2.2 ± 0.9 in benign effusions (P < 0.0001). The average %SUV in malignant effusions was +13 ± 10 versus -8 ± 11 in benign effusions (P < 0.0004). Sensitivity, specificity, and accuracy for the four criteria were as follows: (1) 86%, 72%, and 79%; (2) 93%, 72%, and 82%; (3) 67%, 94%, and 81%; and (4) 100%, 94%, and 97%. Dual-time-point F-18 FDG PET can improve diagnostic accuracy in differentiating benign from malignant pleural disease, with high sensitivity and good specificity.
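The best-performing combined rule is easy to state in code; a minimal sketch of criterion (4):

```python
def classify_pleura(suv1, suv2, suv_cut=2.4, dsuv_cut=9.0):
    """Combined dual-time-point criterion from the study: malignant if
    SUVmax at time point 1 >= 2.4 and/or the retention index
    %SUV >= +9 between the two acquisitions."""
    pct_change = 100.0 * (suv2 - suv1) / suv1
    malignant = suv1 >= suv_cut or pct_change >= dsuv_cut
    return malignant, pct_change

print(classify_pleura(2.0, 2.3))   # low uptake but rising -> (True, 15.0)
```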
NASA Astrophysics Data System (ADS)
Skala, Vaclav
2016-06-01
There are many space subdivision and space partitioning techniques used to speed up computations, mostly relying on orthogonal space subdivision or hierarchical data structures, e.g., BSP trees, quadtrees, octrees, kd-trees, and bounding volume hierarchies. In some applications, however, a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E2, a simple point-in-polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New point-in-convex-polygon and point-in-convex-polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Owing to the principle of duality, dual problems, e.g., line-convex polygon intersection and line clipping, can be solved in a similar way.
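The flavor of the approach can be sketched for the E2 case: cut the convex polygon into angular wedges around an interior point, and map a query angle to its wedge through a uniform table, giving O(1) queries after linear preprocessing. This is a schematic reconstruction, not the paper's exact construction; the bucket count is an assumption:

```python
import numpy as np

class ConvexLocator:
    """Constant-time point-in-convex-polygon test after preprocessing:
    angular wedges around an interior point plus a uniform angle table."""

    def __init__(self, verts, buckets=None):
        self.v = np.asarray(verts, float)
        n = len(self.v)
        self.c = self.v.mean(axis=0)                 # interior reference point
        a = np.arctan2(self.v[:, 1] - self.c[1], self.v[:, 0] - self.c[0])
        order = np.argsort(a)                        # sort vertices by angle
        self.v, self.a = self.v[order], a[order]
        self.B = buckets or 4 * n                    # B >= n keeps lookups O(1)
        starts = -np.pi + 2 * np.pi * np.arange(self.B) / self.B
        # for each bucket, the wedge containing the bucket's start angle
        self.table = np.searchsorted(self.a, starts, side="right") - 1

    def contains(self, p):
        p = np.asarray(p, float)
        th = np.arctan2(p[1] - self.c[1], p[0] - self.c[0])
        j = self.table[int((th + np.pi) / (2 * np.pi) * self.B) % self.B]
        while j + 1 < len(self.a) and self.a[j + 1] <= th:
            j += 1                                   # at most a few steps
        v1, v2 = self.v[j], self.v[(j + 1) % len(self.v)]
        d1, d2 = v2 - v1, p - v1
        # inside iff p lies left of (or on) the directed edge v1 -> v2
        return d1[0] * d2[1] - d1[1] * d2[0] >= 0

square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
loc = ConvexLocator(square)
print(loc.contains((0.2, 0.2)), loc.contains((1.0, 1.0)))   # True False
```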
An unjustified benefit: immortal time bias in the analysis of time-dependent events.
Gleiss, Andreas; Oberbauer, Rainer; Heinze, Georg
2018-02-01
Immortal time bias is a problem arising from methodologically wrong analyses of time-dependent events in survival analyses. We illustrate the problem by analysis of a kidney transplantation study. Following patients from transplantation to death, groups defined by the occurrence or nonoccurrence of graft failure during follow-up seemingly had equal overall mortality. Such naive analysis assumes that patients were assigned to the two groups at time of transplantation, which actually are a consequence of occurrence of a time-dependent event later during follow-up. We introduce landmark analysis as the method of choice to avoid immortal time bias. Landmark analysis splits the follow-up time at a common, prespecified time point, the so-called landmark. Groups are then defined by time-dependent events having occurred before the landmark, and outcome events are only considered if occurring after the landmark. Landmark analysis can be easily implemented with common statistical software. In our kidney transplantation example, landmark analyses with landmarks set at 30 and 60 months clearly identified graft failure as a risk factor for overall mortality. We give further typical examples from transplantation research and discuss strengths and limitations of landmark analysis and other methods to address immortal time bias such as Cox regression with time-dependent covariables. © 2017 Steunstichting ESOT.
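A landmark analysis is easy to implement directly; the sketch below (illustrative column names, times in months) builds the landmark dataset from which group survival can then be compared:

```python
import pandas as pd

def landmark_dataset(df, landmark):
    """Keep only patients still at risk at the landmark, define groups by
    whether the time-dependent event (here graft failure) occurred BEFORE
    the landmark, and measure survival from the landmark onward."""
    at_risk = df[df["followup"] > landmark].copy()
    at_risk["group"] = (at_risk["graft_failure_time"].notna()
                        & (at_risk["graft_failure_time"] <= landmark))
    at_risk["time_from_landmark"] = at_risk["followup"] - landmark
    return at_risk

df = pd.DataFrame({
    "followup":           [120, 45, 80, 20, 95],
    "died":               [0,   1,  1,  1,  0],
    "graft_failure_time": [None, 30, None, 10, 70],
})
print(landmark_dataset(df, landmark=30)[["group", "time_from_landmark", "died"]])
```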
Arnold, Lesley M; Emir, Birol; Pauer, Lynne; Resnick, Malca; Clair, Andrew
2015-01-01
To determine the time to immediate and sustained clinical improvement in pain and sleep quality with pregabalin in patients with fibromyalgia. A post hoc analysis of four 8- to 14-week phase 2-3, placebo-controlled trials of fixed-dose pregabalin (150-600 mg/day) for fibromyalgia, comprising 12 pregabalin and four placebo treatment arms. A total of 2,747 patients with fibromyalgia, aged 18-82 years. Pain and sleep quality scores, recorded daily on 11-point numeric rating scales (NRSs), were analyzed to determine time to immediate improvement with pregabalin, defined as the first of ≥2 consecutive days when the mean NRS score was significantly lower for pregabalin vs placebo in those treatment arms with a significant improvement at endpoint, and time to sustained clinical improvement with pregabalin, defined as a ≥1-point reduction of the baseline NRS score of patient responders who had a ≥30% improvement on the pain NRS, sleep NRS, or Fibromyalgia Impact Questionnaire (FIQ) from baseline to endpoint, or who reported "much improved" or "very much improved" on the Patient Global Impression of Change (PGIC) at endpoint. Significant improvements in pain and sleep quality scores at endpoint vs placebo were seen in 8/12 and 11/12 pregabalin treatment arms, respectively (P < 0.05). In these arms, time to immediate improvements in pain or sleep occurred by day 1 or 2. Time to sustained clinical improvement occurred significantly earlier in pain, sleep, PGIC, and FIQ responders (P < 0.02) with pregabalin vs placebo. Both immediate and sustained clinical improvements in pain and sleep quality occurred faster with pregabalin vs placebo. Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biasotto, G.; Simoes, A.Z., E-mail: alezipo@yahoo.com; Foschini, C.R.
Highlights:
• BiFeO3 (BFO) nanoparticles were grown by the hydrothermal microwave method (HTMW).
• The soaking time is effective in improving phase formation.
• Rietveld refinement reveals an orthorhombic structure.
• The observed magnetism of the BFO crystallites is a consequence of particle size.
• The HTMW is a genuine technique for low temperatures and short times of synthesis.
Abstract: The hydrothermal microwave method (HTMW) was used to synthesize crystalline bismuth ferrite (BiFeO3) nanoparticles (BFO) at a temperature of 180 °C with times ranging from 5 min to 1 h. BFO nanoparticles were characterized by means of X-ray analyses, FT-IR, Raman spectroscopy, TG-DTA, and FE-SEM. X-ray diffraction results indicated that a longer soaking time helped suppress the formation of impurity phases and grow BFO crystallites into almost single-phase perovskites. Typical FT-IR spectra for BFO nanoparticles presented well-defined bands, indicating substantial short-range order in the system. TG-DTA analyses confirmed the presence of lattice OH- groups, commonly found in materials obtained by the HTMW process. Compared with the conventional solid-state reaction process, submicron BFO crystallites with better homogeneity could be produced at a temperature as low as 180 °C. These results show that the HTMW synthesis route is rapid and cost effective, and could be used as an alternative for obtaining BFO nanoparticles at 180 °C for 1 h.
Adolescent health-risk behavior and community disorder.
Wiehe, Sarah E; Kwan, Mei-Po; Wilson, Jeff; Fortenberry, J Dennis
2013-01-01
Various forms of community disorder are associated with health outcomes, but little is known about how the dynamic context in which an adolescent spends time relates to her health-related behaviors. We assessed whether exposure to contexts associated with crime (as a marker of community disorder) correlates with self-reported health-related behaviors among adolescent girls. Girls (N = 52), aged 14-17, were recruited from a single geographic urban area and monitored for 1 week using a GPS-enabled cell phone. Adolescents completed an audio computer-assisted self-administered interview survey on substance use (cigarette, alcohol, or marijuana use) and sexual intercourse in the last 30 days. In addition to recorded home and school addresses, phones transmitted location data every 5 minutes (path points). Using ArcGIS, we defined community disorder as aggregated point-level Uniform Crime Report data within a 200-meter Euclidean buffer around home, school, and each path point. Using Stata, we analyzed how exposure to areas of higher crime prevalence differed between girls who did and did not report each behavior. Participants lived and spent time in areas with variable crime prevalence within 200 meters of their home, school, and path points. Significant differences in exposure by home location occurred between girls who did and did not report any substance use (P = 0.04) or sexual intercourse (P = 0.01). Differences in exposure by school and path points were significant only for substance use (P = 0.03 and P = 0.02, respectively). Exposure also varied by school/non-school day as well as time of day. Adolescent travel patterns are not random, and the crime context where an adolescent spends time relates to her health-related behavior. These data may guide policy relating to crime control and inform time- and space-specific interventions to improve adolescent health.
A Fast Method for Measuring the Similarity between 3D Model and 3D Point Cloud
NASA Astrophysics Data System (ADS)
Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng
2016-06-01
This paper proposes a fast method for measuring the partial similarity between a 3D model and a 3D point cloud (SimMC). Measuring SimMC is crucial for many point-cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the distance from model to point cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the distance from point cloud to model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
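The core quantities above lend themselves to a compact sketch. The following Python fragment is a minimal, hypothetical illustration of the idea rather than the authors' code: it computes a nearest-neighbor DistMC with a k-d tree and takes SimMC as the ratio of weighted surface area to DistMC; the function names, the uniform area weights and the exact weighting scheme are all assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def dist_mc(model_samples, cloud, weights):
    """Weighted mean of nearest-neighbor distances from model samples to the cloud."""
    tree = cKDTree(cloud)
    d, _ = tree.query(model_samples)          # nearest cloud point per model sample
    return np.average(d, weights=weights)

def sim_mc(model_samples, areas, cloud):
    """Similarity: weighted surface area divided by DistMC (larger = more similar)."""
    w = areas / areas.sum()                   # per-sample area weights
    return areas.sum() / max(dist_mc(model_samples, cloud, w), 1e-12)

pts = np.random.rand(500, 3)                  # toy "model" surface samples
cloud = pts + 0.01 * np.random.randn(500, 3)  # toy point cloud near the model
areas = np.full(500, 1.0 / 500)               # uniform area per sample (assumption)
print(sim_mc(pts, areas, cloud))
```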
DOT National Transportation Integrated Search
1978-04-01
Volume 2 defines a new algorithm for the network equilibrium model that works in the space of path flows and is based on fixed-point theory. The goals of the study were broadly defined as the identification of aggregation practices and ...
Rajtmajer, Sarah M; Roy, Arnab; Albert, Reka; Molenaar, Peter C M; Hillary, Frank G
2015-01-01
Despite exciting advances in the functional imaging of the brain, it remains a challenge to define regions of interest (ROIs) that do not require investigator supervision and permit examination of change in networks over time (or plasticity). Plasticity is most readily examined by maintaining ROIs constant via seed-based and anatomical-atlas based techniques, but these approaches are not data-driven, requiring definition based on prior experience (e.g., choice of seed-region, anatomical landmarks). These approaches are limiting especially when functional connectivity may evolve over time in areas that are finer than known anatomical landmarks or in areas outside predetermined seeded regions. An ideal method would permit investigators to study network plasticity due to learning, maturation effects, or clinical recovery via multiple time point data that can be compared to one another in the same ROI while also preserving the voxel-level data in those ROIs at each time point. Data-driven approaches (e.g., whole-brain voxelwise approaches) ameliorate concerns regarding investigator bias, but the fundamental problem of comparing the results between distinct data sets remains. In this paper we propose an approach, aggregate-initialized label propagation (AILP), which allows for data at separate time points to be compared for examining developmental processes resulting in network change (plasticity). To do so, we use a whole-brain modularity approach to parcellate the brain into anatomically constrained functional modules at separate time points and then apply the AILP algorithm to form a consensus set of ROIs for examining change over time. To demonstrate its utility, we make use of a known dataset of individuals with traumatic brain injury sampled at two time points during the first year of recovery and show how the AILP procedure can be applied to select regions of interest to be used in a graph theoretical analysis of plasticity.
NASA Technical Reports Server (NTRS)
Schlegel, E.; Norris, Jay P. (Technical Monitor)
2002-01-01
This project was awarded funding from the CGRO program to support ROSAT and ground-based observations of unidentified sources from data obtained by the EGRET instrument on the Compton Gamma-Ray Observatory. The critical items in the project are the individual ROSAT observations that are used to cover the 99% error circle of each unidentified EGRET source. Each error circle is a degree or larger in diameter. Each ROSAT field is about 30 arcmin in diameter. Hence, a number (>4) of ROSAT pointings must be obtained for each EGRET source to cover the field. The scheduling of ROSAT observations is carried out to maximize the efficiency of the total schedule. As a result, each pointing is broken into one or more sub-pointings of various exposure times. This project was awarded ROSAT observing time for four unidentified EGRET sources, summarized in the table. The column headings are defined as follows: 'Coverings' = number of observations to cover the error circle; 'SubPtg' = total number of sub-pointings to observe all of the coverings; 'Rec'd' = number of individual sub-pointings received to date; 'CompFlds' = number of individual coverings for which the requested complete exposure has been received. Processing of the data cannot occur until a complete exposure has been accumulated for each covering.
Cundy, Thomas P; Rowland, Simon P; Gattas, Nicholas E; White, Alan D; Najmaldin, Azad S
2015-06-01
Fundoplication is a leading application of robotic surgery in children, yet the learning curve for this procedure (robotic fundoplication, RF) remains ill-defined. This study aims to identify various learning curve transition points using cumulative summation (CUSUM) analysis. A prospective database was examined to identify RF cases undertaken during 2006-2014. Time-based surgical process outcomes were evaluated, as well as clinical outcomes. A total of 57 RF cases were included. Statistically significant transitions beyond the learning phase were observed at cases 42, 34 and 37 for docking, console and total operating room times, respectively. A steep early learning phase for docking time was overcome after 12 cases. There were three Clavien-Dindo grade ≥ 3 complications, with two patients requiring redo fundoplication. We identified numerous well-defined learning curve trends affirming that experience confers significant temporal improvements. Our findings highlight the value of the CUSUM method for learning curve evaluation. Copyright © 2014 John Wiley & Sons, Ltd.
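For readers unfamiliar with CUSUM learning-curve analysis, a minimal sketch follows; the case times, target level and peak-based transition estimate are illustrative placeholders, not the study's data or exact procedure.

```python
import numpy as np

times = np.array([95, 90, 88, 92, 80, 78, 75, 70, 72, 68])  # example case times (min)
target = times.mean()                 # reference level (often a benchmark time)
cusum = np.cumsum(times - target)     # running sum of deviations from the target
# A sustained downward slope after some case index suggests the learning phase
# has been overcome; the peak of the CUSUM curve is a crude transition estimate.
transition = int(np.argmax(cusum))
print(cusum, transition)
```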
ERIC Educational Resources Information Center
Hannan, Michael T.
This technical document, part of a series of chapters described in SO 011 759, describes a basic model of panel analysis used in a study of the causes of institutional and structural change in nations. Panel analysis is defined as a record of state occupancy of a sample of units at two or more points in time; for example, voters disclose voting…
On Fixed Points of Strictly Causal Functions
2013-04-08
Strictly causal functions were defined to be the functions that are strictly contracting with respect to the Cantor metric (also called the Baire distance) on signals…
No chiral truncation of quantum log gravity?
NASA Astrophysics Data System (ADS)
Andrade, Tomás; Marolf, Donald
2010-03-01
At the classical level, chiral gravity may be constructed as a consistent truncation of a larger theory called log gravity by requiring that left-moving charges vanish. In turn, log gravity is the limit of topologically massive gravity (TMG) at a special value of the coupling (the chiral point). We study the situation at the level of linearized quantum fields, focussing on a unitary quantization. While the TMG Hilbert space is continuous at the chiral point, the left-moving Virasoro generators become ill-defined and cannot be used to define a chiral truncation. In a sense, the left-moving asymptotic symmetries are spontaneously broken at the chiral point. In contrast, in a non-unitary quantization of TMG, both the Hilbert space and charges are continuous at the chiral point and define a unitary theory of chiral gravity at the linearized level.
Stable, semi-stable populations and growth potential.
Bourgeois-Pichat, J
1971-07-01
Starting from the definition of a Malthusian population given by Alfred J. Lotka, the author recalls how the concept of stable population is introduced in demography, first as a particular case of Malthusian populations, and secondly as the limit of a demographic evolutionary process in which female age-specific fertility rates and age-specific mortality rates remain constant. He then defines a new concept: the semi-stable population, a population with a constant age distribution. He shows that such a population coincides at any point of time with the stable population corresponding to the mortality and fertility at that point of time. The remaining part of the paper shows how the concept of a stable population can be used to define a coefficient of inertia that measures the resistance of a population to modification of its course as a consequence of changing fertility and mortality. Formulae are established to calculate this coefficient, first for an arbitrary population and secondly for a semi-stable population. In the second case the formula is particularly simple: it appears as the product of three terms, the expectation of life at birth in years, the crude birth rate, and a coefficient depending on the rate of growth for which a numerical table is easy to establish.
Interactive-cut: Real-time feedback segmentation for translational research.
Egger, Jan; Lüddemann, Tobias; Schwarzenberg, Robert; Freisleben, Bernd; Nimsky, Christopher
2014-06-01
In this contribution, a scale-invariant image segmentation algorithm is introduced that "wraps" the algorithm's parameters for the user through its interactive behavior, avoiding the definition of "arbitrary" numbers that the user cannot really understand. To this end, we designed a specific graph-based segmentation method that requires only a single seed point inside the target structure from the user and is thus particularly suitable for immediate processing and interactive, real-time adjustments by the user. In addition, the color or gray-value information needed for the approach can be extracted automatically around the user-defined seed point. Furthermore, the graph is constructed in such a way that a polynomial-time mincut computation can provide the segmentation result within a second on an up-to-date computer. The algorithm presented here has been evaluated with fixed seed points on 2D and 3D medical image data, such as brain tumors, cerebral aneurysms and vertebral bodies. Direct comparison of the obtained automatic segmentation results with costlier, manual slice-by-slice segmentations performed by trained physicians suggests a strong medical relevance of this interactive approach. Copyright © 2014 Elsevier Ltd. All rights reserved.
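As a toy illustration of the seed-based min-cut idea (on a 1D intensity profile rather than 2D/3D medical data), the following sketch uses networkx; the neighborhood construction, Gaussian capacities and border-as-sink convention are assumptions for illustration, not the authors' graph template.

```python
import numpy as np
import networkx as nx

img = np.array([10, 11, 9, 60, 62, 61, 12, 10], dtype=float)  # toy intensity profile
seed = 4                                   # user-defined seed inside the target

G = nx.DiGraph()
sigma = 10.0
for i in range(len(img) - 1):
    # Similar neighbors get high capacity, strong edges survive the cut
    w = float(np.exp(-((img[i] - img[i + 1]) ** 2) / (2 * sigma ** 2)))
    G.add_edge(i, i + 1, capacity=w)
    G.add_edge(i + 1, i, capacity=w)
G.add_edge('s', seed, capacity=float('inf'))   # seed tied to the source
for border in (0, len(img) - 1):               # image border tied to the sink
    G.add_edge(border, 'T', capacity=float('inf'))

cut_value, (inside, outside) = nx.minimum_cut(G, 's', 'T')
print(sorted(n for n in inside if n != 's'))   # indices labeled as the target
```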
Code of Federal Regulations, 2010 CFR
2010-07-01
…the line defined by a series of points of contact with the boat structure, by straight lines at 45-degree angles to the horizontal and contained in a vertical plane normal to the outside edge of the boat…
Real-time services in IP network architectures
NASA Astrophysics Data System (ADS)
Gilardi, Antonella
1996-12-01
The worldwide internet system seems to be the key to the provision of real-time multimedia services to both residential and business users, and some say that in this way broadband networks will have a reason to exist. This new class of applications that use multiple media (voice, video and data) imposes constraints on the global network, which nowadays consists of subnets with various data links. The attention will be focused on the interconnection of non-ATM IP networks and ATM networks. The IETF and the ATM Forum are currently involved in developing specifications suited to adapting the connectionless IP protocol to the connection-oriented ATM protocol. First of all, the link between the ATM and IP service models has to be established in order to match the QoS and traffic requirements defined in each environment. A further significant topic is the mapping of the IP resource reservation model onto ATM signalling; finally, it is necessary to define how routing works when QoS parameters are associated with it. This paper, considering only unicast applications, examines the above issues taking as a starting point the situation where a host launches a call set-up request with the relevant QoS and traffic descriptors, and at some point a router at the edge of the ATM network has to decide how to forward the request in order to establish an end-to-end link with the right capabilities. The aim is to compare the proposals emerging from different standard bodies to point out convergence or incompatibility.
Muntner, Paul; Joyce, Cara; Holt, Elizabeth; He, Jiang; Morisky, Donald; Webber, Larry S; Krousel-Wood, Marie
2011-05-01
Self-report scales are used to assess medication adherence. Data on how to discriminate change in self-reported adherence over time from random variability are limited. To determine the minimal detectable change for scores on the 8-item Morisky Medication Adherence Scale (MMAS-8). The MMAS-8 was administered twice, using a standard telephone script, with administration separated by 14-22 days, to 210 participants taking antihypertensive medication in the CoSMO (Cohort Study of Medication Adherence among Older Adults). MMAS-8 scores were calculated and participants were grouped into previously defined categories (<6, 6 to <8, and 8 for low, medium, and high adherence). The mean (SD) age of participants was 78.1 (5.8) years, 43.8% were black, and 68.1% were women. Overall, 8.1% (17/210), 16.2% (34/210), and 51.0% (107/210) of participants had low, medium, and high MMAS-8 scores, respectively, at both survey administrations (overall agreement 75.2%; 158/210). The weighted κ statistic was 0.63 (95% CI 0.53 to 0.72). The intraclass correlation coefficient was 0.78. The within-person standard error of the mean for change in MMAS-8 scores was 0.81, which equated to a minimal detectable change of 1.98 points. Only 4.3% (9/210) of the participants had a change in MMAS-8 of 2 or more points between survey administrations. Within-person changes in MMAS-8 scores of 2 or more points over time may represent a real change in antihypertensive medication adherence.
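One common convention for deriving a minimal detectable change (MDC) from test-retest reliability is shown below; whether the study used exactly this formula is an assumption, and the baseline standard deviation here is a placeholder.

```python
import math

sd_baseline = 1.6   # placeholder standard deviation of MMAS-8 scores
icc = 0.78          # intraclass correlation coefficient reported above
sem = sd_baseline * math.sqrt(1 - icc)   # standard error of measurement
mdc95 = 1.96 * sem * math.sqrt(2)        # 95% MDC for a change score
print(round(sem, 2), round(mdc95, 2))
```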
NASA Astrophysics Data System (ADS)
Fischer, P.; Jardani, A.; Cardiff, M.; Lecoq, N.; Jourde, H.
2018-04-01
In a karstic field, the flow paths are very complex as they globally follow the conduit network. The responses generated from an investigation in this type of aquifer can be spatially highly variable. Therefore, the aim of the investigation in this case is to define a degree of connectivity between points of the field in order to understand these flow paths. Harmonic pumping tests represent a possible investigation method for characterizing the subsurface flow of groundwater. They have several advantages compared to constant-rate pumping (more signal possibilities, ease of extracting the signal from the responses, and the possibility of closed-loop investigation). We show in this work that interpreting the responses from a harmonic pumping test is very useful for delineating a degree of connectivity between measurement points. We first studied the amplitude and phase offset of responses from a harmonic pumping test in a theoretical synthetic modelling case in order to define a qualitative interpretation method in the time and frequency domains. Three different types of responses were distinguished: a conduit-connectivity response, a matrix-connectivity response, and a dual-connectivity response (the response of a point in the matrix, but close to a conduit). We then applied this method to measured responses at a field research site. Our interpretation method permits a quick and easy reconstruction of the main flow paths, and the whole set of field responses gives a similar range of responses to those seen in the theoretical synthetic case.
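A minimal sketch of the signal-processing step implied above (extracting the amplitude and phase offset of a head response at the pumping frequency) might look as follows; the synthetic head series, sampling rate and pumping frequency are placeholders.

```python
import numpy as np

fs, f0 = 10.0, 0.05                     # sampling rate (Hz), pumping frequency (Hz)
t = np.arange(0, 600, 1 / fs)
head = 0.3 * np.sin(2 * np.pi * f0 * t - 0.8) + 0.01 * np.random.randn(t.size)

window = np.hanning(t.size)
spec = np.fft.rfft(head * window)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - f0))       # bin closest to the pumping frequency
amplitude = 2 * np.abs(spec[k]) / window.sum()
phase = np.angle(spec[k])               # phase offset relative to the pumping signal
print(amplitude, phase)
```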
NASA Astrophysics Data System (ADS)
Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin
2017-06-01
Evaluation of uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, assuming a multivariate Gaussian distribution for the input quantities. This makes it possible to take into account the correlations among resistances at the defining fixed points. The Gaussian assumption is acceptable with respect to the several sources of uncertainty of the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
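A bare-bones version of the described Monte Carlo propagation, assuming a placeholder measurement model and covariance rather than the actual ITS-90 deviation functions, could look like this:

```python
import numpy as np

mean = np.array([25.000, 27.500])            # example fixed-point resistances (ohm)
cov = np.array([[1e-8, 4e-9],
                [4e-9, 1e-8]])               # example covariance with correlation

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean, cov, size=100_000)

def model(r):
    # Placeholder measurement model, not an ITS-90 deviation function
    return 660.0 * (r[:, 1] - r[:, 0]) / r[:, 0]

temps = model(samples)
print(temps.mean(), temps.std(ddof=1))       # estimate and standard uncertainty
```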
Kim, Boong-Nyun; Kim, Jae-Won; Kim, Hyo-Won; Shin, Min-Sup; Cho, Soo-Churl; Choi, Nam Hee; Ahn, Hyunnie; Lee, Seung-Yeon; Ryu, Jeong; Yun, Myoung-Joo
2009-08-01
The aims of this study were to examine the symptoms of posttraumatic stress and anxiety/depression in Korean children after direct or indirect exposure to a single incident of trauma during a fire-escape drill and to assess the incidence of psychiatric disorders in this population. A total of 1,394 students who attended the elementary school at which the traumatic event took place were evaluated using self-administered questionnaires (the Child Posttraumatic Stress Disorder-Reaction Index [CPTSD-RI], State Anxiety Scale of the State-Trait Anxiety Inventory for Children [STAIC], and Children's Depression Inventory [CDI]), as well as structured diagnostic interviews (Diagnostic Interview Schedule for Children, Version-IV [DISC-IV]) at 2 days (time point 1), 2 months (time point 2), and 6 months (time point 3) after the incident. The 335 students who witnessed the accident were defined as the direct-exposure group, and the remaining students (n = 1,059) were defined as the indirect-exposure group. The study was conducted from May to November 2007. At time point 1, the prevalence of severe posttraumatic stress disorder (PTSD), anxiety, and depressive symptoms was 18.2%, 5.5%, and 3.4%, respectively. The prevalence of severe PTSD symptoms, as measured by the CPTSD-RI, was significantly higher in the direct-exposure group than in the indirect-exposure group (36.6% vs 12.7%, respectively; P < .001). At time point 2, the prevalence of severe PTSD symptoms was 7.4% (14.0% in the direct-exposure group and 4.9% in the indirect-exposure group, P < .001). The mean total CPTSD-RI score was significantly higher (P < .001) in the direct-exposure group than in the indirect-exposure group. At time point 3, thirty-eight of the 58 subjects (65.5%) evaluated with the DISC-IV in the direct-exposure group had 1 or more of the 7 anxiety/depressive disorders assessed, including subthreshold diagnoses. Among the diagnoses meeting full DSM-IV criteria for each disorder, agoraphobia was the most prevalent (22.4%), followed by generalized anxiety disorder (13.8%), separation anxiety disorder (6.9%), PTSD (5.2%), and social phobia (5.2%). When the subthreshold diagnoses were considered along with the full syndrome diagnoses, separation anxiety disorder was the most common diagnosis (41.4%), followed by agoraphobia (34.5%), obsessive-compulsive disorder (22.4%), PTSD (20.7%), and social phobia (20.7%). The results of this study provide important evidence that various anxiety/depressive disorders, in addition to PTSD, might follow after direct or indirect exposure to trauma. Our findings highlight the importance of comprehensive screening for psychiatric problems in children exposed to trauma of any scale. ©Copyright 2009 Physicians Postgraduate Press, Inc.
Substance P signalling in primary motor cortex facilitates motor learning in rats.
Hertler, Benjamin; Hosp, Jonas Aurel; Blanco, Manuel Buitrago; Luft, Andreas Rüdiger
2017-01-01
Among the genes that are up-regulated in response to reaching training in rats, Tachykinin 1 (Tac1)-a gene that encodes the neuropeptide Substance P (Sub P)-shows an especially strong expression. Using real-time RT-PCR, a detailed time-course of Tac1 expression could be defined: a significant peak occurs 7 hours after training ended at the first and second training sessions, whereas no up-regulation could be detected at a later time-point (sixth training session). To assess the physiological role of Sub P during movement acquisition, microinjections into the primary motor cortex (M1) contralateral to the trained paw were performed. When Sub P was injected before the first three sessions of reaching training, the effectiveness of motor learning increased significantly. Injections at a time-point when rats already knew the task (i.e., training sessions ten and eleven) had no effect on reaching performance. Sub P injections did not influence the improvement of performance within a single training session, but retention of performance between sessions was strengthened at a very early stage (i.e., between baseline training and the first training session). Thus, Sub P facilitates motor learning in the very early phase of skill acquisition by supporting memory consolidation. In line with these findings, learning-related expression of the precursor Tac1 occurs at early but not at later time-points during reaching training.
Potanos, Kristina; Fullington, Nora; Cauley, Ryan; Purcell, Patricia; Zurakowski, David; Fishman, Steven; Vakili, Khashayar; Kim, Heung Bae
2016-04-01
We examine the mechanism of aortic lengthening in a novel rodent model of tissue expander stimulated lengthening of arteries (TESLA). A rat model of TESLA was examined with a single stretch stimulus applied at the time of tissue expander insertion, with evaluation of the aorta at 2-, 4- and 7-day time points. Measurements as well as histology and proliferation assays were performed and compared to sham controls. The aortic length was increased at all time points without histologic signs of tissue injury. Nuclear density remained unchanged despite the increase in length, suggesting cellular hyperplasia. Cellular proliferation was confirmed in the endothelial cell layer by Ki-67 staining. Aortic lengthening may be achieved using TESLA. The increase in aortic length can be achieved without tissue injury and results at least partially from cellular hyperplasia. Further studies are required to define the mechanisms involved in the growth of arteries under increased longitudinal stress. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Monnier, F.; Vallet, B.; Paparoditis, N.; Papelard, J.-P.; David, N.
2013-10-01
This article presents a generic and efficient method to register terrestrial mobile data having imperfect localization against a geographic database with better overall accuracy but fewer details. The registration method proposed in this paper is based on a semi-rigid point-to-plane ICP ("Iterative Closest Point"). The main applications of such registration are to improve existing geographic databases, particularly in terms of accuracy, level of detail and diversity of represented objects. Other applications include fine geometric modelling and fine façade texturing, and object extraction such as trees, poles, road signs and markings, facilities, vehicles, etc. The geopositioning system of mobile mapping systems is affected by GPS masks that are only partially corrected by an Inertial Navigation System (INS), which can cause an important drift. As this drift varies non-linearly, but slowly in time, it is modelled by a translation defined as a piecewise linear function of time whose variation over time is minimized (rigidity term). For each iteration of the ICP, the drift is estimated in order to minimize the distance between laser points and planar model primitives (data attachment term). The method has been tested on real data (a scan of the city of Paris of 3.6 million laser points registered against a 3D model of approximately 71,400 triangles).
Time perception in patients with major depressive disorder during vagus nerve stimulation.
Biermann, T; Kreil, S; Groemer, T W; Maihöfner, C; Richter-Schmiedinger, T; Kornhuber, J; Sperling, W
2011-07-01
Affective disorders may affect patients' time perception. Several studies have described time perception as a function of the frontal lobe. The activating effects of vagus nerve stimulation (VNS) on the frontal lobe might also modulate time perception in patients with major depressive disorder (MDD). Time perception was investigated in 30 patients with MDD and in 7 patients with therapy-resistant MDD. In these 7 patients, a VNS system was implanted and time perception was assessed before and during stimulation. A time estimation task in which patients were asked "How many seconds have passed?" tested time perception at 4 defined time points (34 s, 77 s, 192 s and 230 s). The differences between the estimated and actual durations were calculated and used for subsequent analysis. Patients with MDD and healthy controls estimated the set time points relatively accurately. A general linear model revealed a significant main effect of group but not of age or sex. The passing of time was perceived as significantly slower in patients undergoing VNS compared to patients with MDD at all time points (T34: t = -4.2, df = 35, p < 0.001; T77: t = -4.8, df = 35, p < 0.001; T192: t = -2.0, df = 35, p = 0.059; T230: t = -2.2, df = 35, p = 0.039) as well as compared to healthy controls (T77 only: t = 4.1, df = 35, p < 0.001). There were no differences in time perception with regard to age, sex or polarity of depression (uni- or bipolar). VNS is capable of changing the perception of time. This discovery furthers basic research on circadian rhythms in patients with psychiatric disorders.
Chen, Su-liang; Bai, Guang-yi; Li, Qiao-min; Li, Bao-jun; Hui, Yan-liang; Liang, Liang; Wang, Wei; Chen, Zhi-qiang; Lu, Xin-li; Wang, Xiao-feng; Zhang, Yu-qi; Zhao, Hong-ru
2012-04-01
To examine the incubation period and survival time of former commercial plasma donors (FCPDs) infected with HIV. All HIV-infected subjects were from Hebei province and were identified in a general survey of FCPDs in 1995. A cohort of 142 infected subjects was used to estimate the incubation period. In this cohort, the time at which subjects entered was their estimated infection time, taken as the origin date of January 1, 1995. The onset of AIDS was defined as the outcome event. The end point of observation was December 31, 2010, giving 192 months of observation in all. A cohort of 57 AIDS patients was used to estimate survival. In this cohort, the time of AIDS onset was defined as the time of entry, and death from AIDS was defined as the outcome event. The cumulative incidence, cumulative mortality, incidence intensity and mortality intensity were estimated by the Kaplan-Meier method. During the observation period, 123 of the 142 infected subjects developed AIDS; the cumulative incidence was 86.42% (123/142), the intensity was 8.53/100 person-years, and the median incubation period was 112.0 months (95% CI: 108.8-115.2). The deaths of the 57 patients occurred from 1 to 24 months after onset. The cumulative mortality was 100%, the intensity was 250.66/100 person-years, and the median survival time was 3.0 months (95% CI: 1.8-4.2). The estimated median time from infection to death was 115.0 months (9.6 years). The median incubation period and median survival time were thus 112.0 and 3.0 months, respectively.
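The Kaplan-Meier step described above can be reproduced in outline with the lifelines package; the toy durations and censoring flags below are placeholders, not the cohort data.

```python
from lifelines import KaplanMeierFitter

durations = [100, 112, 120, 95, 130, 110]   # months from infection to AIDS (toy data)
observed = [1, 1, 1, 1, 0, 1]               # 0 = censored at the end of follow-up
kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed)
print(kmf.median_survival_time_)            # compare: reported median 112.0 months
```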
Brown, Evans K H; Harder, Kathleen A; Apostolidou, Ioanna; Wahr, Joyce A; Shook, Douglas C; Farivar, R Saeid; Perry, Tjorvi E; Konia, Mojca R
2017-07-01
The cardiac operating room is a complex environment requiring efficient and effective communication between multiple disciplines. The objectives of this study were to identify and rank critical time points during the perioperative care of cardiac surgical patients, and to assess variability in responses, as a correlate of a shared mental model, regarding the importance of these time points between and within disciplines. Using Delphi technique methodology, panelists from 3 institutions were tasked with developing a list of critical time points, which were subsequently assigned to pause point (PP) categories. Panelists then rated these PPs on a 100-point visual analog scale. Descriptive statistics were expressed as percentages, medians, and interquartile ranges (IQRs). We defined low response variability between panelists as an IQR ≤ 20, moderate response variability as an IQR > 20 and ≤ 40, and high response variability as an IQR > 40. Panelists identified a total of 12 PPs. The PPs identified by the highest number of panelists were (1) before surgical incision, (2) before aortic cannulation, (3) before cardiopulmonary bypass (CPB) initiation, (4) before CPB separation, and (5) at time of transfer of care from operating room (OR) to intensive care unit (ICU) staff. There was low variability among panelists' ratings of the PP "before surgical incision," moderate response variability for the PPs "before separation from CPB," "before transfer from OR table to bed," and "at time of transfer of care from OR to ICU staff," and high response variability for the remaining 8 PPs. In addition, the perceived importance of each of these PPs varies between disciplines and between institutions. Cardiac surgical providers recognize distinct critical time points during cardiac surgery. However, there is a high degree of variability within and between disciplines as to the importance of these times, suggesting an absence of a shared mental model among disciplines caring for cardiac surgical patients during the perioperative period. A lack of a shared mental model could be one of the factors contributing to preventable errors in cardiac operating rooms.
Long term economic relationships from cointegration maps
NASA Astrophysics Data System (ADS)
Vicente, Renato; Pereira, Carlos de B.; Leite, Vitor B. P.; Caticha, Nestor
2007-07-01
We employ the Bayesian framework to define a cointegration measure aimed to represent long term relationships between time series. For visualization of these relationships we introduce a dissimilarity matrix and a map based on the sorting points into neighborhoods (SPIN) technique, which has been previously used to analyze large data sets from DNA arrays. We exemplify the technique in three data sets: US interest rates (USIR), monthly inflation rates and gross domestic product (GDP) growth rates.
Simulation of Extreme Arctic Cyclones in IPCC AR5 Experiments
2014-05-15
atmospheric fields, including sea level pressure (SLP), on daily and sub-daily time scales at 2° horizontal resolution. A higher-resolution and more…its 21st-century simulation. Extreme cyclones were defined as occurrences of daily mean SLP at least 40 hPa below the climatological annual-average SLP at a grid point. As such, no cyclone-tracking algorithm was employed, because the purpose here is to identify instances of extremely strong
The Fixed-Point Theory of Strictly Causal Functions
2013-06-09
Strictly causal functions were defined to be the functions that are strictly contracting with respect to the Cantor metric (also called the Baire distance) on signals…
Note on the displacement of a trajectory of hyperbolic motion in curved space-time
NASA Astrophysics Data System (ADS)
Krikorian, R. A.
2012-04-01
The object of this note is to present a physical application of the theory of the infinitesimal deformations or displacements of curves developed by Yano using the concept of Lie derivative. It is shown that an infinitesimal point transformation which carries a given trajectory of hyperbolic motion into a trajectory of the same type, and preserves the affine parametrization of the trajectory, defines a homothetic motion.
NASA Astrophysics Data System (ADS)
Curtright, Thomas
2011-04-01
Continuous interpolates are described for classical dynamical systems defined by discrete time-steps. Functional conjugation methods play a central role in obtaining the interpolations. The interpolates correspond to particle motion in an underlying potential, V. Typically, V has no lower bound and can exhibit switchbacks wherein V changes form when turning points are encountered by the particle. The Beverton-Holt and Skellam models of population dynamics, and particular cases of the logistic map are used to illustrate these features.
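For concreteness, the two discrete-time maps named above can be iterated in a few lines; the parameter values are arbitrary examples, not tied to the paper's analysis.

```python
def beverton_holt(x, r=2.0, k=1.0):
    # Beverton-Holt population map: monotone approach to a fixed point
    return r * x / (1.0 + x / k)

def logistic(x, s=3.2):
    # Logistic map: periodic or chaotic depending on s
    return s * x * (1.0 - x)

x_bh, x_lg = 0.1, 0.1
for _ in range(20):
    x_bh, x_lg = beverton_holt(x_bh), logistic(x_lg)
print(x_bh, x_lg)   # Beverton-Holt settles toward x* = k(r-1); logistic cycles at s=3.2
```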
ERIC Educational Resources Information Center
Limond, David
2008-01-01
This piece concerns the controversy engendered by Martin Cole's 1971 film Growing Up, an attempt to start a revolution in school sex education practices by showing explicit scenes of unsimulated sexual acts. Although Cole's film was never widely shown it marked a turning point in English school sex education by defining the limits of the…
ERIC Educational Resources Information Center
Kadhi, T.; Rudley, D.; Holley, D.; Krishna, K.; Ogolla, C.; Rene, E.; Green, T.
2010-01-01
The following report of descriptive statistics addresses the attendance of the 2012 class and the average Actual and Predicted 1L Grade Point Averages (GPAs). Correlational and Inferential statistics are also run on the variables of Attendance (Y/N), Attendance Number of Times, Actual GPA, and Predictive GPA (Predictive GPA is defined as the Index…
NASA Technical Reports Server (NTRS)
Pomey, Jacques
1952-01-01
From the practical point of view, this analysis shows that each problem of friction or wear requires its particular solution. There is no universal solution; one or other of the factors predominates and defines the choice of the solution. In certain cases, copper alloys of great thermal conductivity are preferred; in others, plastics abundantly supplied with water. Sometimes, soft antifriction metals are desirable to distribute the load; at other times, hard metals with high resistance to abrasion or heat.
Nondestructive Vibratory Testing and Evaluation Procedure for Military Roads and Streets.
1984-07-01
the addition of an automatic data acquisition system to the instrumentation control panel. This system, presently available, would automatically…the data used to further develop and define the basic correlations. c. Consideration be given to installing an automatic data acquisition system to…glows red any time the force generator is not fully elevated. Depressing this switch will stop the automatic cycle at any point and clear all system
Development of a piecewise linear omnidirectional 3D image registration method
NASA Astrophysics Data System (ADS)
Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo
2016-12-01
This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in 3D space. Depending on the intended use of image registration, the proposed method can be used to improve image registration accuracy or to reduce the computation time, because the trade-off between computation time and registration accuracy can be controlled. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration to reduce the image distortion caused by camera lenses. The proposed method relies on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on the shapes and colors of captured objects.
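The stitching primitive described above is essentially a piecewise-affine warp between matched feature points. A minimal sketch with scikit-image follows; the matched points are synthetic and the library choice is an assumption, not the authors' implementation.

```python
import numpy as np
from skimage import transform

src = np.array([[0, 0], [0, 100], [100, 0], [100, 100], [50, 50]], float)
dst = src + np.array([[3, 2]])           # placeholder matched points in the second image

tform = transform.PiecewiseAffineTransform()
tform.estimate(src, dst)                 # triangulates src, fits one affine per triangle
print(tform(np.array([[50.0, 50.0]])))   # maps a query point through its triangle
```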
A generalized computer code for developing dynamic gas turbine engine models (DIGTEM)
NASA Technical Reports Server (NTRS)
Daniele, C. J.
1984-01-01
This paper describes DIGTEM (digital turbofan engine model), a computer program that simulates two-spool, two-stream (turbofan) engines. DIGTEM was developed to support the development of a real-time, multiprocessor-based engine simulator being designed at the Lewis Research Center. The turbofan engine model in DIGTEM contains steady-state performance maps for all the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. DIGTEM features an implicit integration scheme for integrating stiff systems and trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off-design points and iterates to a balanced engine condition. Transients are generated by defining the engine inputs as functions of time in a user-written subroutine (TMRSP). Closed-loop controls can also be simulated. DIGTEM is generalized in the aerothermodynamic treatment of components. This feature, along with DIGTEM's trimming at a design point, makes it a very useful tool for developing a model of a specific turbofan engine.
Anatomy-driven multiple trajectory planning (ADMTP) of intracranial electrodes for epilepsy surgery.
Sparks, Rachel; Vakharia, Vejay; Rodionov, Roman; Vos, Sjoerd B; Diehl, Beate; Wehner, Tim; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sebastien
2017-08-01
Epilepsy is potentially curable with resective surgery if the epileptogenic zone (EZ) can be identified. If non-invasive imaging is unable to elucidate the EZ, intracranial electrodes may be implanted to identify the EZ as well as to map cortical function. In current clinical practice, each electrode trajectory is determined by time-consuming manual inspection of preoperative imaging to find a path that avoids blood vessels while traversing appropriate deep and superficial regions of interest (ROIs). We present anatomy-driven multiple trajectory planning (ADMTP) to find safe trajectories from a list of user-defined ROIs within minutes rather than the hours required for manual planning. Electrode trajectories are automatically computed in three steps: (1) target point selection, to identify appropriate target points within each ROI; (2) trajectory risk scoring, to quantify the cumulative distance to critical structures (blood vessels) along each trajectory, defined from the skull entry point to the target point; (3) implantation plan computation, to determine a feasible combination of low-risk trajectories for all electrodes. ADMTP was evaluated on 20 patients (190 electrodes). ADMTP lowered the quantitative risk score for 83% of electrodes. Qualitative results show ADMTP found suitable trajectories for 70% of electrodes; a similar proportion of manual trajectories were considered suitable. Trajectory suitability for ADMTP was 95% if traversing sulci was not included in the safety criteria. ADMTP is computationally efficient, computing between 7 and 12 trajectories in 54.5 (17.3-191.9) s. ADMTP efficiently computes safe and surgically feasible electrode trajectories.
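A toy version of the trajectory risk-scoring step could accumulate proximity to vessels along the entry-to-target segment using a precomputed distance map, as sketched below; the risk weighting, voxel spacing and distance map are placeholder assumptions, not the ADMTP implementation.

```python
import numpy as np

def trajectory_risk(entry, target, dist_map, spacing=1.0, n=100):
    pts = np.linspace(entry, target, n)                  # points along the segment
    idx = np.clip(np.round(pts / spacing).astype(int), 0,
                  np.array(dist_map.shape) - 1)          # clamp to the volume
    d = dist_map[idx[:, 0], idx[:, 1], idx[:, 2]]        # distance to nearest vessel
    return float(np.sum(1.0 / np.maximum(d, 0.5)))       # closer vessels cost more

dist_map = np.full((64, 64, 64), 10.0)                   # toy distance map (mm)
dist_map[30:34, 30:34, :] = 1.0                          # a "vessel" corridor
print(trajectory_risk(np.array([0, 0, 0]), np.array([63, 63, 63]), dist_map))
```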
Medical Lasers At The Crossroads: Directions For The Next Five Years
NASA Astrophysics Data System (ADS)
Brauer, Fritz A.
1988-09-01
Of course, much can be attributed to our relative youth - the sheer number and scope of the opportunities have distorted focus and strained resources. However, I believe that we have reached a point - a crossroads - where the topography is more clearly defined and where some discernible trends point to the direction this industry will take over the next five years. These will be important years - investors, especially, expect signs of maturity to replace unbounded youthful optimism. How many of us can look back on the business plans we wrote five years ago and not feel chastened (or depressed)? Our excuse is that we got everything right except the timing. Well, the "timing" is the next five years! So my talk today will center upon my personal view of these next five years. I wish to emphasize the personal aspect of my discussion: this is my prescription for future happiness.
Seismic precursory patterns before a cliff collapse and critical point phenomena
Amitrano, D.; Grasso, J.-R.; Senfaute, G.
2005-01-01
We analyse the statistical pattern of seismicity before a 1-2 × 10³ m³ chalk cliff collapse on the Normandy coast, western France. We show that a power-law acceleration of seismicity rate and energy, in both the 40 Hz-1.5 kHz and 2 Hz-10 kHz frequency ranges, is defined over 3 orders of magnitude within 2 hours of the collapse time. Simultaneously, the average size of the seismic events increases toward the time to failure. These in situ results are derived from the only station located within one rupture length of the rock-fall rupture plane. They mimic the "critical point"-like behavior recovered from physical and numerical experiments before brittle failures and tertiary creep failures. Our analysis of this first seismic monitoring dataset of a cliff collapse suggests that thermodynamic phase transition models for failure may apply to cliff collapse. Copyright 2005 by the American Geophysical Union.
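The power-law acceleration referred to above is commonly fitted as rate ∝ (t_f − t)^(−α). A synthetic sketch of such a fit, with placeholder data and a known failure time, is:

```python
import numpy as np

t_f, alpha = 100.0, 0.8
t = np.linspace(0, 99, 200)
rate = (t_f - t) ** (-alpha) * (1 + 0.05 * np.random.randn(t.size))

# Linear fit in log-log coordinates of rate vs time-to-failure
coeffs = np.polyfit(np.log(t_f - t), np.log(np.abs(rate)), 1)
print("estimated exponent:", -coeffs[0])   # should be close to alpha
```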
Jones, Matthew; Lewis, Sarah; Parrott, Steve; Wormall, Stephen; Coleman, Tim
2016-06-01
In pregnant smoking cessation trial participants, to estimate (1) among women abstinent at the end of pregnancy, the proportion who re-start smoking at time-points afterwards (primary analysis) and (2) among all trial participants, the proportion smoking at the end of pregnancy and at selected time-points during the postpartum period (secondary analysis). Trials were identified from two Cochrane reviews plus searches of Medline and EMBASE; 27 trials were included. The included trials were randomized or quasi-randomized trials of within-pregnancy cessation interventions given to smokers who reported abstinence both at the end of pregnancy and at one or more defined time-points after birth. Outcomes were biochemically validated and self-reported continuous abstinence from smoking and 7-day point prevalence abstinence. The primary random-effects meta-analysis used longitudinal data to estimate mean pooled proportions of re-starting smoking; a secondary analysis used cross-sectional data to estimate the mean proportions smoking at different postpartum time-points. Subgroup analyses were performed on biochemically validated abstinence. The pooled mean proportion re-starting at 6 months postpartum was 43% [95% confidence interval (CI) = 16-72%, I² = 96.7%] (11 trials, 571 abstinent women). The pooled mean proportion smoking at the end of pregnancy was 87% (95% CI = 84-90%, I² = 93.2%) and at 6 months postpartum 94% (95% CI = 92-96%, I² = 88%) (23 trials, 9262 trial participants). Findings were similar when using biochemically validated abstinence. In clinical trials of smoking cessation interventions during pregnancy, only 13% of participants are abstinent at term. Of these, 43% re-start by 6 months postpartum. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minniti, Giuseppe, E-mail: gminniti@ospedalesantandrea.it; Department of Neurological Sciences, Neuromed Institute, Pozzilli; Scaringi, Claudia
2013-06-01
Purpose: To describe the quality of life (QOL) in elderly patients with glioblastoma (GBM) treated with an abbreviated course of radiation therapy (RT; 40 Gy in 15 fractions) plus concomitant and adjuvant temozolomide (TMZ). Methods and Materials: Health-related QOL (HRQOL) was assessed by European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire Core-30 (QLQ-C30, version 3) and EORTC Quality of Life Questionnaire Brain Cancer Module (QLQ-BN20). Changes from baseline in the score of 9 preselected domains (global QLQ, social functioning, cognitive functioning, emotional functioning, physical functioning, motor dysfunction, communication deficit, fatigue, insomnia) were determined 4 weeks after RT and thereafter every 8 weeks during the treatment until disease progression. The proportion of patients with improved HRQOL scores, defined as a change of 10 points or more, and duration of changes were recorded. Results: Sixty-five patients completed the questionnaires at baseline. The treatment was consistently associated with improvement or stability in most of the preselected HRQOL domains. Global health improved over time; mean score differed by 9.6 points between baseline and 6-month follow-up (P=.03). For social functioning and cognitive functioning, mean scores improved over time, with a maximum difference of 10.4 points and 9.5 points between baseline and 6-month follow-up (P=.01 and P=.02), respectively. By contrast, fatigue worsened over time, with a difference in mean score of 5.6 points between baseline and 4-month follow-up (P=.02). Conclusions: A short course of RT in combination with TMZ in elderly patients with GBM was associated with survival benefit without a negative effect on HRQOL until the time of disease progression.
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
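One generic way to realize "interpolation of reduced operators on matrix manifolds" for symmetric positive-definite operators is a log-Euclidean scheme, sketched below; this is a standard recipe used for illustration and is not claimed to be the authors' exact interpolation.

```python
import numpy as np
from scipy.linalg import logm, expm

A0 = np.array([[2.0, 0.3], [0.3, 1.5]])   # reduced operator at sampled parameter p0
A1 = np.array([[3.0, 0.1], [0.1, 2.5]])   # reduced operator at sampled parameter p1

def interp_spd(A0, A1, w):
    """Interpolate SPD operators via the tangent space: log, blend, exp."""
    return expm((1 - w) * logm(A0) + w * logm(A1))

print(interp_spd(A0, A1, 0.25))           # operator at an unsampled parameter point
```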
Quantification and Compensation of Eddy-Current-Induced Magnetic Field Gradients
Spees, William M.; Buhl, Niels; Sun, Peng; Ackerman, Joseph J.H.; Neil, Jeffrey J.; Garbow, Joel R.
2011-01-01
Two robust techniques for quantification and compensation of eddy-current-induced magnetic-field gradients and static magnetic-field shifts (ΔB0) in MRI systems are described. Purpose-built 1-D or 6-point phantoms are employed. Both procedures involve measuring the effects of a prior magnetic-field-gradient test pulse on the phantom’s free induction decay (FID). Phantom-specific analysis of the resulting FID data produces estimates of the time-dependent, eddy-current-induced magnetic field gradient(s) and ΔB0 shift. Using Bayesian methods, the time dependencies of the eddy-current-induced decays are modeled as sums of exponentially decaying components, each defined by an amplitude and time constant. These amplitudes and time constants are employed to adjust the scanner’s gradient pre-emphasis unit and eliminate undesirable eddy-current effects. Measurement with the six-point sample phantom allows for simultaneous, direct estimation of both on-axis and cross-term eddy-current-induced gradients. The two methods are demonstrated and validated on several MRI systems with actively-shielded gradient coil sets. PMID:21764614
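The modeling step described above (decays represented as sums of exponentials, each with an amplitude and time constant) can be illustrated with a plain least-squares biexponential fit; the Bayesian estimation used in the paper is replaced here by scipy's curve_fit for brevity, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0.001, 1.0, 200)                     # seconds after the test pulse
g = biexp(t, 5.0, 0.02, 1.0, 0.3) + 0.02 * np.random.randn(t.size)

popt, _ = curve_fit(biexp, t, g, p0=(4.0, 0.05, 0.5, 0.5))
print(popt)   # amplitudes and time constants, the inputs to gradient pre-emphasis
```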
NASA Astrophysics Data System (ADS)
Lopes, Artur O.; Neumann, Adriana
2015-05-01
In the present paper, we consider a family of continuous time symmetric random walks indexed by n. For each n the matching random walk takes values in a finite set of n states; this set of states is a subset of S¹, the unit circle. The infinitesimal generator of such a chain is denoted by Lₙ. The stationary probability of this process converges to the uniform distribution on the circle when n → ∞. Here we want to study other natural measures, obtained via a limit in n, that are concentrated on some points of S¹. We disturb this process by a fixed potential V and study, for each n, the perturbed stationary measures of the new process when n → ∞. We denote by Vₙ the restriction of V to the n states. Then, we define a non-stochastic semigroup generated by the matrix Lₙ + Vₙ, where Lₙ is the infinitesimal generator of the unperturbed chain. By the continuous time Perron's Theorem one can normalize this semigroup to obtain another, stochastic, semigroup, which generates a continuous time Markov chain taking values on the n states. This new chain is called the continuous time Gibbs state associated with the potential Vₙ (see Lopes et al. in J Stat Phys 152:894-933, 2013). The stationary probability vector of this Markov chain is denoted by πₙ. We assume that the maximum of V is attained at a unique point of S¹, from which it follows that πₙ converges to the Dirac measure at that point. Thus, our main goal here is to analyze the large deviation principle for the family (πₙ) when n → ∞. The deviation function, which is defined on S¹, is obtained from a procedure based on fixed points of the Lax-Oleinik operator and Aubry-Mather theory. In order to obtain the associated Lax-Oleinik operator we use Varadhan's Lemma for the process. For a careful analysis of the problem we present full details of the proof of the large deviation principle, in the Skorohod space, for this family of Markov chains when n → ∞. Finally, we compute the entropy of the invariant probabilities on the Skorohod space associated with the Markov chains we analyze.
Micromixer-based time-resolved NMR: applications to ubiquitin protein conformation.
Kakuta, Masaya; Jayawickrama, Dimuthu A; Wolters, Andrew M; Manz, Andreas; Sweedler, Jonathan V
2003-02-15
Time-resolved NMR spectroscopy is used to study changes in protein conformation based on the elapsed time after a change in the solvent composition of a protein solution. The use of a micromixer and a continuous-flow method is described in which the contents of two capillary flows are mixed rapidly, and the NMR spectra of the combined flow are then recorded at precise time points. The distance after mixing the two fluids and the flow rates define the solvent-protein interaction time; this method allows the measurement of NMR spectra at precise mixing time points independent of spectral acquisition time. Integration of a micromixer and a microcoil NMR probe enables low-microliter volumes to be used without losing significant sensitivity in the NMR measurement. Ubiquitin, the model compound, changes its conformation from the native state to the A-state at low pH and in 40% or higher methanol/water solvents. Proton NMR resonances of His-68 and Tyr-59 of ubiquitin are used to probe the conformational changes. Mixing ubiquitin and methanol solutions at low pH at microliter-per-minute flow rates yields both native and A-states. As the flow rate decreases, yielding longer reaction times, the population of the A-state increases. The micromixer-NMR system can probe reaction kinetics on a time scale of seconds.
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the NASA 1A control law. Each maneuver is to be realized by the pilot applying square wave inputs to specific pilot station controls. Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
Detecting recurrence domains of dynamical systems by symbolic dynamics.
beim Graben, Peter; Hutt, Axel
2013-04-12
We propose an algorithm for the detection of recurrence domains of complex dynamical systems from time series. Our approach exploits the characteristic checkerboard texture of recurrence domains exhibited in recurrence plots. In phase space, recurrence plots yield intersecting balls around sampling points that could be merged into cells of a phase space partition. We construct this partition by a rewriting grammar applied to the symbolic dynamics of time indices. A maximum entropy principle defines the optimal size of intersecting balls. The final application to high-dimensional brain signals yields an optimal symbolic recurrence plot revealing functional components of the signal.
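The basic object the algorithm starts from, a recurrence plot, takes only a few lines to build; the toy signal and fixed threshold below stand in for the paper's entropy-optimized ball size.

```python
import numpy as np

x = np.sin(np.linspace(0, 8 * np.pi, 300))               # toy scalar time series
eps = 0.1                                                # fixed recurrence threshold
R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)  # recurrence matrix
print(R.shape, R.mean())   # fraction of recurrent pairs (recurrence rate)
```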
Unified dead-time compensation structure for SISO processes with multiple dead times.
Normey-Rico, Julio E; Flesch, Rodolfo C C; Santos, Tito L M
2014-11-01
This paper proposes a dead-time compensation structure for processes with multiple dead times. The controller is based on the filtered Smith predictor (FSP) dead-time compensator structure and is able to control stable, integrating, and unstable processes with multiple input/output dead times. An equivalent model of the process is first computed in order to define the predictor structure. Using this equivalent model, the primary controller and the predictor filter are tuned to obtain an internally stable closed-loop system which also meets closed-loop specifications in terms of set-point tracking, disturbance rejection, and robustness. Some simulation case studies are used to illustrate the good properties of the proposed approach. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
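The prediction idea underlying the FSP can be illustrated with the classical single-delay Smith predictor on a stable first-order plant. This is a simplified stand-in: the paper's filtered, multiple-dead-time structure and its tuning rules are not reproduced, and all numerical values below are illustrative.

```python
# Minimal discrete-time Smith predictor for a stable first-order plant with
# input dead time, to illustrate the prediction idea behind dead-time
# compensation. Plant and controller parameters are made up.
import numpy as np

a, b, d = 0.9, 0.1, 10          # plant: y[k+1] = a*y[k] + b*u[k-d]
kp, ki = 2.0, 0.15              # PI primary controller (illustrative tuning)
N = 200
r = np.ones(N)                  # unit set-point
y = np.zeros(N); ym = np.zeros(N)   # plant output and delay-free model
u = np.zeros(N + d); integ = 0.0

for k in range(N - 1):
    y_delayed_model = ym[k - d] if k >= d else 0.0
    # Predictor: feed back y plus (delay-free model minus delayed model),
    # so the controller effectively sees the plant without its dead time.
    e = r[k] - (y[k] + ym[k] - y_delayed_model)
    integ += ki * e
    u[k] = kp * e + integ
    ym[k + 1] = a * ym[k] + b * u[k]                       # model, no delay
    y[k + 1] = a * y[k] + b * (u[k - d] if k >= d else 0.0)  # real plant

print(round(y[-1], 3))  # settles near 1.0 despite the 10-sample dead time
```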
NASA Astrophysics Data System (ADS)
Duran, Volkan; Gençten, Azmi
2016-03-01
In this research the aim is to analyze quantum qutrit entanglement from a new perspective, in terms of an n-dimensional sphere: the set of points equidistant from a fixed central point in a three-dimensional Euclidean space that has both real and imaginary dimensions, which can likewise be depicted as two unit spheres sharing the same centre in a dome-shaped projection. To analyze qutrit entanglement: (i) a new type of n-dimensional hypersphere, an extension of the Bloch sphere to hyperspace, is defined; (ii) new operators and products in this space, such as a rotation operator and combining and gluing products, are defined; (iii) the entangled states are analyzed in terms of those products in order to reach a general formula describing qutrit entanglement, together with new patterns between spheres that simplify the analysis of entanglement along different routes in a four-dimensional, time-independent hypersphere.
Evaluating Diagnostic Point-of-Care Tests in Resource-Limited Settings
Drain, Paul K; Hyle, Emily P; Noubary, Farzad; Freedberg, Kenneth A; Wilson, Douglas; Bishai, William; Rodriguez, William; Bassett, Ingrid V
2014-01-01
Diagnostic point-of-care (POC) testing is intended to minimize the time to obtain a test result, thereby allowing clinicians and patients to make an expeditious clinical decision. As POC tests expand into resource-limited settings (RLS), the benefits must outweigh the costs. To optimize POC testing in RLS, diagnostic POC tests need rigorous evaluations focused on relevant clinical outcomes and operational costs, which differ from evaluations of conventional diagnostic tests. Here, we reviewed published studies on POC testing in RLS, and found no clearly defined metric for the clinical utility of POC testing. Therefore, we propose a framework for evaluating POC tests, and suggest and define the term “test efficacy” to describe a diagnostic test’s capacity to support a clinical decision within its operational context. We also propose revised criteria for an ideal diagnostic POC test in resource-limited settings. Through systematic evaluations, comparisons between centralized diagnostic testing and novel POC technologies can be more formalized, and health officials can better determine which POC technologies represent valuable additions to their clinical programs. PMID:24332389
Lorenz, Kevin S.; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
2013-01-01
Digital image analysis is a fundamental component of quantitative microscopy. However, intravital microscopy presents many challenges for digital image analysis. In general, microscopy volumes are inherently anisotropic, suffer from decreasing contrast with tissue depth, lack object edge detail, and characteristically have low signal levels. Intravital microscopy introduces the additional problem of motion artifacts, resulting from respiratory motion and heartbeat from specimens imaged in vivo. This paper describes an image registration technique for use with sequences of intravital microscopy images collected in time-series or in 3D volumes. Our registration method involves both rigid and non-rigid components. The rigid registration component corrects global image translations, while the non-rigid component manipulates a uniform grid of control points defined by B-splines. Each control point is optimized by minimizing a cost function consisting of two parts: a term to define image similarity, and a term to ensure deformation grid smoothness. Experimental results indicate that this approach is promising based on the analysis of several image volumes collected from the kidney, lung, and salivary gland of living rodents. PMID:22092443
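A minimal sketch of that two-part cost function is given below. Bilinear upsampling of the control-point grid stands in for the cubic B-spline interpolation of the paper, and the SSD similarity term and second-difference smoothness penalty are simplified placeholders for the authors' exact choices.

```python
# Hedged sketch of the non-rigid cost described above: a coarse grid of
# control-point displacements is upsampled to pixel resolution (bilinear
# here for brevity; the paper uses B-splines), and the cost combines an
# image-similarity term with a smoothness penalty on the grid.
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def warp(image, grid_dy, grid_dx):
    fy = image.shape[0] / grid_dy.shape[0]
    fx = image.shape[1] / grid_dy.shape[1]
    dy = zoom(grid_dy, (fy, fx), order=1)   # dense displacement fields
    dx = zoom(grid_dx, (fy, fx), order=1)
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(float)
    return map_coordinates(image, [yy + dy, xx + dx], order=1, mode="nearest")

def cost(fixed, moving, grid_dy, grid_dx, lam=1.0):
    similarity = np.mean((fixed - warp(moving, grid_dy, grid_dx)) ** 2)
    # Smoothness: squared second differences of the control grid.
    smooth = sum(np.sum(np.diff(g, n=2, axis=ax) ** 2)
                 for g in (grid_dy, grid_dx) for ax in (0, 1))
    return similarity + lam * smooth

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(cost(img, img, np.zeros((8, 8)), np.zeros((8, 8))))  # 0.0 at identity
```

An optimizer would then adjust the control-point displacements to minimize this cost, which is the role the paper assigns to its per-control-point optimization.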
Modeling hard clinical end-point data in economic analyses.
Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V
2013-11-01
The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states (<7). Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data is reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are more appropriate to accurately reflect the trial data.
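As a concrete illustration of the two simplest event-rate treatments mentioned above, the snippet below converts a constant annual rate into a per-cycle transition probability and, alternatively, derives a time-dependent per-cycle risk from a Weibull baseline. All numbers are hypothetical, not taken from any reviewed study.

```python
# Illustrative event-rate handling for a cohort Markov model: constant
# rate versus time-dependent risk. Parameter values are made up.
import math

def prob_from_rate(rate_per_year, cycle_years=1.0):
    """Constant-rate assumption: p = 1 - exp(-r * t)."""
    return 1.0 - math.exp(-rate_per_year * cycle_years)

def prob_weibull(t_years, shape=1.5, scale=20.0, cycle_years=1.0):
    """Time-dependent risk: conditional probability of an event in the
    next cycle given survival to time t, from a Weibull baseline."""
    surv = lambda t: math.exp(-((t / scale) ** shape))
    return 1.0 - surv(t_years + cycle_years) / surv(t_years)

print(prob_from_rate(0.02))   # ~0.0198 per one-year cycle
print(prob_weibull(5.0))      # per-cycle risk that rises with model time
```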
Mohrs, Oliver K; Petersen, Steffen E; Voigtlaender, Thomas; Peters, Jutta; Nowak, Bernd; Heinemann, Markus K; Kauczor, Hans-Ulrich
2006-10-01
The aim of this study was to evaluate the diagnostic value of time-resolved contrast-enhanced MR angiography in adults with congenital heart disease. Twenty patients with congenital heart disease (mean age, 38 +/- 14 years; range, 16-73 years) underwent contrast-enhanced turbo fast low-angle shot MR angiography. Thirty consecutive coronal 3D slabs were acquired at a frame rate of one per second. The mask, defined as the first data set, was subtracted from subsequent images. Image quality was evaluated using a 5-point scale (from 1, not assessable, to 5, excellent image quality). Twelve diagnostic parameters yielded 1 point each in case of correct diagnosis (binary analysis into normal or abnormal) and were summarized into three categories: anatomy of the main thoracic vessels (maximum, 5 points), sequential cardiac anatomy (maximum, 5 points), and shunt detection (maximum, 2 points). The results were compared with a combined clinical reference comprising medical or surgical reports and other imaging studies. Diagnostic accuracies were calculated for each of the parameters as well as for the three categories. The mean image quality was 3.7 +/- 1.0. Using a binary approach, 220 (92%) of the 240 single diagnostic parameters could be analyzed. The percentage of maximum diagnostic points, the sensitivity, the specificity, and the positive and the negative predictive values were all 100% for the anatomy of the main thoracic vessels; 97%, 87%, 100%, 100%, and 96% for sequential cardiac anatomy; and 93%, 93%, 92%, 88%, and 96% for shunt detection. Time-resolved contrast-enhanced MR angiography provides, in one breath-hold, anatomic and qualitative functional information in adult patients with congenital heart disease. The high diagnostic accuracy allows the investigator to tailor subsequent specific MR sequences within the same session.
NASA Astrophysics Data System (ADS)
Martinez, Ralph; Nam, Jiseung
1992-07-01
Picture Archiving and Communication Systems (PACS) integrate digital image formation in a hospital, encompassing various imaging equipment, image viewing workstations, image databases, and a high-speed network. The integration requires a standardization of communication protocols to connect devices from different vendors. The American College of Radiology and the National Electrical Manufacturers Association (ACR-NEMA) standard Version 2.0 provides a point-to-point hardware interface, a set of software commands, and a consistent set of data formats for PACS. But it is inadequate for PACS networking environments, because of its point-to-point nature and its inflexibility to accommodate other services and protocols in the future. Based on previous experience of PACS developments at The University of Arizona, a new communication protocol for PACS networks and an approach were proposed to ACR-NEMA Working Group VI. The defined PACS protocol is intended to facilitate the development of PACS installations capable of interfacing with other hospital information systems. Also, it is intended to allow the creation of diagnostic information databases which can be interrogated by a variety of distributed devices. A particularly important goal is to support communications in a multivendor environment. The new protocol specifications are defined primarily as a combination of the International Organization for Standardization/Open Systems Interconnection (ISO/OSI) and TCP/IP protocols and the data format portion of the ACR-NEMA standard. This paper addresses the specification and implementation of the ISO-based protocol into a PACS prototype. The protocol specification, which covers the Presentation, Session, Transport, and Network layers, is summarized briefly. The protocol implementation is discussed based on our implementation efforts in the UNIX operating system environment. At the same time, results of a performance comparison between the ISO and TCP/IP implementations are presented to demonstrate the implementation of the defined protocol. The performance testing was done by prototyping PACS on available platforms: Micro VAX II, DECstation, and SUN Workstation.
NASA Astrophysics Data System (ADS)
Wu, Bin; Yin, Hongxi; Qin, Jie; Liu, Chang; Liu, Anliang; Shao, Qi; Xu, Xiaoguang
2016-09-01
Aiming at the increasing demand for diversified services and flexible bandwidth allocation in future access networks, a flexible passive optical network (PON) scheme combining time and wavelength division multiplexing (TWDM) with a point-to-point wavelength division multiplexing (PtP WDM) overlay is proposed for next-generation optical access networks in this paper. A novel software-defined optical distribution network (ODN) structure is designed based on wavelength selective switches (WSS), which can implement dynamic wavelength and bandwidth allocation and suits bursty traffic. The experimental results reveal that the TWDM-PON can provide 40 Gb/s downstream and 10 Gb/s upstream data transmission, while the PtP WDM-PON can support 10 GHz point-to-point dedicated bandwidth as the overlay complement system. The wavelengths of the TWDM-PON and PtP WDM-PON are allocated dynamically based on WSS, which verifies the feasibility of the proposed structure.
Sampled control stability of the ESA instrument pointing system
NASA Astrophysics Data System (ADS)
Thieme, G.; Rogers, P.; Sciacovelli, D.
Stability analysis and simulation results are presented for the ESA Instrument Pointing System (IPS) that is to be used in Spacelab's second launch. Of the two IPS plant dynamic models used in the ESA and NASA activities, one is based on six interconnected rigid bodies that represent the IPS and its payload, while the other follows the NASA practice of defining an IPS-Spacelab 2 plant configuration through a structural finite element model, which is then used to generate modal data for various pointing directions. In both cases, the IPS dynamic plant model is truncated, then discretized at the sampling frequency and interfaced to a PID-based control law. A stability analysis has been carried out in the discrete domain for various instrument pointing directions, taking into account suitable parameter variation ranges. A number of time simulations are presented.
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Litt, Jonathan S.
2010-01-01
This paper presents an algorithm that automatically identifies and extracts steady-state engine operating points from engine flight data. It calculates the mean and standard deviation of select parameters contained in the incoming flight data stream. If the standard deviation of the data falls below defined constraints, the engine is assumed to be at a steady-state operating point, and the mean measurement data at that point are archived for subsequent condition monitoring purposes. The fundamental design of the steady-state data filter is completely generic and applicable for any dynamic system. Additional domain-specific logic constraints are applied to reduce data outliers and variance within the collected steady-state data. The filter is designed for on-line real-time processing of streaming data as opposed to post-processing of the data in batch mode. Results of applying the steady-state data filter to recorded helicopter engine flight data are shown, demonstrating its utility for engine condition monitoring applications.
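A minimal sketch of such a filter is shown below: it slides a window over the incoming stream, computes mean and standard deviation, and archives the window mean whenever every monitored parameter's standard deviation falls below its constraint. Window length and thresholds are placeholders, and the paper's domain-specific outlier logic is omitted.

```python
# Hedged sketch of a generic steady-state detector of the kind described
# above. Thresholds and window length are illustrative, not the paper's.
import numpy as np

def steady_state_points(data, window=50, thresholds=None):
    """data: (n_samples, n_params) stream; returns archived mean vectors."""
    data = np.asarray(data, dtype=float)
    thresholds = thresholds or [0.01] * data.shape[1]
    points = []
    for start in range(0, data.shape[0] - window + 1, window):
        chunk = data[start : start + window]
        # Steady state: every parameter's std falls below its constraint.
        if np.all(chunk.std(axis=0) < thresholds):
            points.append(chunk.mean(axis=0))
    return points

# Noisy signal that ramps up and then settles after sample 300:
x = np.concatenate([np.linspace(0, 1, 300), np.ones(700)])
x += np.random.default_rng(1).normal(0, 0.002, x.size)
print(len(steady_state_points(x[:, None], window=100, thresholds=[0.005])))
```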
Optical flow versus retinal flow as sources of information for flight guidance
NASA Technical Reports Server (NTRS)
Cutting, James E.
1991-01-01
The appropriate description of visual information for flight guidance is considered: optical flow versus retinal flow. Most descriptions in the psychological literature are based on optical flow. However, human eyes move, and this movement complicates the issues at stake, particularly when movement of the observer is involved. The question addressed is whether an observer, whose eyes register only retinal flow, can use information in optical flow. It is suggested that the observer cannot and does not reconstruct the image in optical flow; instead, the observer uses retinal flow. The retinal array is defined as the projection of a three-space onto a point and beyond, to a movable, nearly hemispheric sensing device like the retina. The optical array is defined as the projection of a three-space environment to a point within that space. Flow is defined as global motion in the form of a field of vectors, best placed on a spherical projection surface. Specifically, flow is the mapping onto a point of the field of changes in position of corresponding points on objects in three-space, where that point has moved in position.
Real-Time Tropospheric Delay Estimation using IGS Products
NASA Astrophysics Data System (ADS)
Stürze, Andrea; Liu, Sha; Söhne, Wolfgang
2014-05-01
The Federal Agency for Cartography and Geodesy (BKG) has routinely provided zenith tropospheric delay (ZTD) parameters for assimilation in numerical weather models for more than 10 years. Up to now the results flowing into the EUREF Permanent Network (EPN) or E-GVAP (EUMETNET EIG GNSS water vapour programme) analysis are based on batch processing of GPS+GLONASS observations in differential network mode. For the recently started COST Action ES1206 on "Advanced Global Navigation Satellite Systems tropospheric products for monitoring severe weather events and climate" (GNSS4SWEC), however, rapid updates in the analysis of the atmospheric state for nowcasting applications require changing the processing strategy towards real-time. In the RTCM SC104 (Radio Technical Commission for Maritime Services, Special Committee 104) a format combining the advantages of Precise Point Positioning (PPP) and Real-Time Kinematic (RTK) is under development. The so-called State Space Representation approach defines corrections which are transferred in real-time to the user, e.g. via NTRIP (Network Transport of RTCM via Internet Protocol). Meanwhile, messages for precise orbits, satellite clocks, and code biases compatible with the basic PPP mode using IGS products have been defined. Consequently, the IGS Real-Time Service (RTS) was launched in 2013 in order to extend the well-known precise orbit and clock products by a real-time component. Further messages, e.g. with respect to ionosphere or phase biases, are foreseen. Depending on the level of refinement, different accuracies up to the RTK level should be reachable. In co-operation between BKG and the Technical University of Darmstadt, the real-time software GEMon (GREF EUREF Monitoring) is under development. GEMon is able to process GPS and GLONASS observation and RTS product data streams in PPP mode. Furthermore, several state-of-the-art troposphere models, for example based on numerical weather prediction data, are implemented. Hence, it opens the possibility to evaluate the potential of troposphere parameter determination in real-time and its effect on Precise Point Positioning. Starting with an offline investigation of the influence of different RTS products and a priori troposphere models, the configuration delivering the best results is used for real-time processing of the GREF (German Geodetic Reference) network over a suitable period of time. The evaluation of the derived ZTD parameters and station heights is done with respect to well-proven GREF, EUREF, IGS, and E-GVAP analysis results. Keywords: GNSS, Zenith Tropospheric Delay, Real-time Precise Point Positioning
Xu, Lin; Wang, Hai-Xiao; Xu, Ya-Dong; Chen, Huan-Yang; Jiang, Jian-Hua
2016-08-08
A simple core-shell two-dimensional photonic crystal is studied where the triangular lattice symmetry and the C6 point group symmetry give rich physics in accidental touching points of photonic bands. We systematically evaluate different types of accidental nodal points at the Brillouin zone center for transverse-magnetic harmonic modes when the geometry and permittivity of the core-shell material are continuously tuned. The accidental nodal points can have different dispersions and topological properties (i.e., Berry phases). These accidental nodal points can be the critical states lying between a topological phase and a normal phase of the photonic crystal. They are thus very important for the study of topological photonic states. We show that, without breaking time-reversal symmetry, by tuning the geometry of the core-shell material, a phase transition into the photonic quantum spin Hall insulator can be achieved. Here the "spin" is defined as the orbital angular momentum of a photon. We study the topological phase transition as well as the properties of the edge and bulk states and their application potentials in optics.
Scheid, Adam D; Van Keulen, Virginia P; Felts, Sara J; Neier, Steven C; Middha, Sumit; Nair, Asha A; Techentin, Robert W; Gilbert, Barry K; Jen, Jin; Neuhauser, Claudia; Zhang, Yuji; Pease, Larry R
2018-03-01
Human immunity exhibits remarkable heterogeneity among individuals, which engenders variable responses to immune perturbations in human populations. Population studies reveal that, in addition to interindividual heterogeneity, systemic immune signatures display longitudinal stability within individuals, and these signatures may reliably dictate how given individuals respond to immune perturbations. We hypothesize that analyzing relationships among these signatures at the population level may uncover baseline immune phenotypes that correspond with response outcomes to immune stimuli. To test this, we quantified global gene expression in peripheral blood CD4 + cells from healthy individuals at baseline and following CD3/CD28 stimulation at two time points 1 mo apart. Systemic CD4 + cell baseline and poststimulation molecular immune response signatures (MIRS) were defined by identifying genes expressed at levels that were stable between time points within individuals and differential among individuals in each state. Iterative differential gene expression analyses between all possible phenotypic groupings of at least three individuals using the baseline and stimulated MIRS gene sets revealed shared baseline and response phenotypic groupings, indicating the baseline MIRS contained determinants of immune responsiveness. Furthermore, significant numbers of shared phenotype-defining sets of determinants were identified in baseline data across independent healthy cohorts. Combining the cohorts and repeating the analyses resulted in identification of over 6000 baseline immune phenotypic groups, implying that the MIRS concept may be useful in many immune perturbation contexts. These findings demonstrate that patterns in complex gene expression variability can be used to define immune phenotypes and discover determinants of immune responsiveness. Copyright © 2018 by The American Association of Immunologists, Inc.
Vibration Pattern Imager (VPI): A control and data acquisition system for scanning laser vibrometers
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Brown, Donald E.; Shaffer, Thomas A.
1993-01-01
The Vibration Pattern Imager (VPI) system was designed to control and acquire data from scanning laser vibrometer sensors. The PC-based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor, but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. The sensor itself is not part of the VPI system. A graphical interface program, which runs on a PC under the MS-DOS operating system, functions in an interactive mode and communicates with the DSP and I/O boards through pop-up menus. Two types of data may be acquired with the VPI system: single point or 'full field.' In the single point mode, time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and is stored by the PC. The position of the measuring point (adjusted by mirrors in the sensor) is controlled via a mouse input. The mouse input is translated to output voltages by the D/A converter on the I/O board to control the mirror servos. In the 'full field' mode, the measurement point is moved over a user-selectable rectangular area. The time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and converted to a root-mean-square (rms) value by the DSP board. The rms 'full field' velocity distribution is then uploaded for display and storage on the PC.
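The sketch below mimics the 'full field' mode: stepping the measurement point over a rectangular grid and reducing each record to an rms value, as the DSP board does. The acquire function here only simulates the sensor, since the real system commands the mirror servos through the D/A outputs and samples the vibrometer signal through the A/D converter.

```python
# Hedged sketch of the "full field" scan logic. The sensor read is faked
# with a synthetic mode shape plus noise; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def acquire(x, y, n_samples=1024, fs=10000.0):
    """Stand-in for positioning the mirrors at (x, y) and sampling."""
    t = np.arange(n_samples) / fs
    amp = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05)  # fake mode shape
    return amp * np.sin(2 * np.pi * 120.0 * t) + rng.normal(0, 0.01, n_samples)

def full_field_rms(nx=16, ny=16):
    grid = np.zeros((ny, nx))
    for j, y in enumerate(np.linspace(0, 1, ny)):
        for i, x in enumerate(np.linspace(0, 1, nx)):
            v = acquire(x, y)
            grid[j, i] = np.sqrt(np.mean(v ** 2))  # rms, as done on the DSP
    return grid

print(full_field_rms().max())  # peak rms near the center of the fake mode
```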
Figure/ground segregation from temporal delay is best at high spatial frequencies.
Kojima, H
1998-12-01
Two experiments investigated the role of spatial frequency in performance of a figure/ground segregation task based on temporal cues. Figure orientation was much easier to judge when figure and ground portions of the target were defined exclusively by random texture composed entirely of high spatial frequencies. When target components were defined by low spatial frequencies only, the task was nearly impossible except with long temporal delay between figure and ground. These results are inconsistent with the hypothesis that M-cell activity is primarily responsible for figure/ground segregation from temporal delay. Instead, these results point to a distinction between temporal integration and temporal differentiation. Additionally, the present results can be related to recent work on the binding of spatial features over time.
Drosophila Heartless Acts with Heartbroken/Dof in Muscle Founder Differentiation
Dutta, Devkanya; Shaw, Sanjeev; Maqbool, Tariq; Pandya, Hetal
2005-01-01
The formation of a multi-nucleate myofibre is directed, in Drosophila, by a founder cell. In the embryo, founders are selected by Notch-mediated lateral inhibition, while during adult myogenesis this mechanism of selection does not appear to operate. We show, in the muscles of the adult abdomen, that the Fibroblast growth factor pathway mediates founder cell choice in a novel manner. We suggest that the developmental patterns of Heartbroken/Dof and Sprouty result in defining the domain and timing of activation of the Fibroblast growth factor receptor Heartless in specific myoblasts, thereby converting them into founder cells. Our results point to a way in which muscle differentiation could be initiated and define a critical developmental function for Heartbroken/Dof in myogenesis. PMID:16207075
A definition of depletion of fish stocks
Van Oosten, John
1949-01-01
Attention was focused on the need for a common and better understanding of the term depletion as applied to the fisheries, in order to eliminate if possible the existing inexactness of thought on the subject. Depletion has been confused at various times with at least ten different ideas associated with it but which, as has been pointed out, are not synonymous with it at all. In defining depletion we must recognize that the term represents a condition and must not be confounded with the cause (overfishing) that leads to this condition or with the symptoms that identify it. Depletion was defined as a reduction, through overfishing, in the level of abundance of the exploitable segment of a stock that prevents the realization of the maximum productive capacity.
Pulsar timing and general relativity
NASA Technical Reports Server (NTRS)
Backer, D. C.; Hellings, R. W.
1986-01-01
Techniques are described for accounting for relativistic effects in the analysis of pulsar signals. Design features of instrumentation used to achieve millisecond accuracy in the signal measurements are discussed. The accuracy of the data permits modeling the pulsar physical characteristics from the natural glitches in the emissions. Relativistic corrections are defined for adjusting for differences between the pulsar motion in its spacetime coordinate system relative to the terrestrial coordinate system, the earth's motion, and the gravitational potentials of solar system bodies. Modifications of the model to allow for a binary pulsar system are outlined, including treatment of the system as a point mass. Finally, a quadrupole model is presented for gravitational radiation and techniques are defined for using pulsars in the search for gravitational waves.
Direction and Integration of Experimental Ground Test Capabilities and Computational Methods
NASA Technical Reports Server (NTRS)
Dunn, Steven C.
2016-01-01
This paper groups and summarizes the salient points and findings from two AIAA conference panels targeted at defining the direction, with associated key issues and recommendations, for the integration of experimental ground testing and computational methods. Each panel session utilized rapporteurs to capture comments from both the panel members and the audience. Additionally, a virtual panel of several experts were consulted between the two sessions and their comments were also captured. The information is organized into three time-based groupings, as well as by subject area. These panel sessions were designed to provide guidance to both researchers/developers and experimental/computational service providers in defining the future of ground testing, which will be inextricably integrated with the advancement of computational tools.
Role of Erosion in Shaping Point Bars
NASA Astrophysics Data System (ADS)
Moody, J.; Meade, R.
2012-04-01
A powerful metaphor in fluvial geomorphology has been that depositional features such as point bars (and other floodplain features) constitute the river's historical memory in the form of uniformly thick sedimentary deposits waiting for the geomorphologist to dissect and interpret the past. For the past three decades, along the channel of Powder River (Montana USA) we have documented (with annual cross-sectional surveys and pit trenches) the evolution of the shape of three point bars that were created when an extreme flood in 1978 cut new channels across the necks of two former meander bends and radically shifted the location of a third bend. Subsequent erosion has substantially reshaped, at different time scales, the relic sediment deposits of varying age. At the weekly to monthly time scale (i.e., floods from snowmelt or floods from convective or cyclonic storms), the maximum scour depth was computed (by using a numerical model) at locations spaced 1 m apart across the entire point bar for a couple of the largest floods. The maximum predicted scour is about 0.22 m. At the annual time scale, repeated cross-section topographic surveys (25 during 32 years) indicate that net annual erosion at a single location can be as great as 0.5 m, and that the net erosion is greater than net deposition during 8, 16, and 32% of the years for the three point bars. On average, the median annual net erosion was 21, 36, and 51% of the net deposition. At the decadal time scale, an index of point bar preservation often referred to as completeness was defined for each cross section as the percentage of the initial deposit (older than 10 years) that was still remaining in 2011; computations indicate that 19, 41, and 36% of the initial deposits of sediment were eroded. Initial deposits were not uniform in thickness and often represented thicker pods of sediment connected by thin layers of sediment or even isolated pods at different elevations across the point bar in response to multiple floods during a water year. Erosion often was preferential and removed part or all of pods at lower elevations, and in time left what appears to be a random arrangement of sediment pods forming the point bar. Thus, we conclude that the erosional process is as important as the deposition process in shaping the final form of the point bar, and that point bars are not uniformly aggradational or transgressive deposits of sediment in which the age of the deposit increases monotonically downward at all locations across the point bar.
Absence of time-reversal symmetry breaking in the noncentrosymmetric superconductor Mo3Al2C
NASA Astrophysics Data System (ADS)
Bauer, E.; Sekine, C.; Sai, U.; Rogl, P.; Biswas, P. K.; Amato, A.
2014-08-01
Zero-field muon spin rotation and relaxation (μSR) studies carried out on the strongly coupled, noncentrosymmetric superconductor Mo3Al2C (Tc = 9 K) did not reveal hints of time-reversal symmetry breaking as was found for a number of other noncentrosymmetric systems. Transverse-field measurements performed above and below the superconducting transition temperature defined the temperature-dependent London penetration depth, which in turn served to derive, from a microscopic point of view, a simple s-wave superconducting state in Mo3Al2C. The present investigations also provide fairly solid grounds to conclude that time-reversal symmetry breaking is not an immanent feature of noncentrosymmetric superconductors.
NASA Astrophysics Data System (ADS)
Rerikh, K. V.
1998-02-01
Using classic results of algebraic geometry for birational plane mappings in the plane CP^2, we present a general approach to algebraic integrability of autonomous dynamical systems in C^2 with discrete time and of systems of two autonomous functional equations for meromorphic functions in one complex variable defined by birational maps in C^2. General theorems defining the invariant curves and the dynamics of a birational mapping, and a general theorem about necessary and sufficient conditions for integrability of birational plane mappings, are proved on the basis of a new idea: a decomposition of the orbit set of indeterminacy points of direct maps relative to the action of the inverse mappings. A general method of generating integrable mappings and their rational integrals (invariants) I is proposed. Numerical characteristics N_k of the intersections of the orbits Φ_n^{-k}(O_i) of fundamental or indeterminacy points O_i ∈ O ∩ S of the mapping Φ_n, where O = {O_i} is the set of indeterminacy points of Φ_n and S is a similar set for the invariant I, with the corresponding set O' ∩ S, where O' = {O'_i} is the set of indeterminacy points of the inverse mapping Φ_n^{-1}, are introduced. Using the proposed method we obtain all nine integrable multiparameter quadratic birational reversible mappings with the zero fixed point and linear projective symmetry S = CΛC^{-1}, Λ = diag(±1), with rational invariants generated by invariant straight lines and conics. The relations of the numbers N_k to such numerical characteristics of discrete dynamical systems as the Arnold complexity, and to their integrability, are established for the integrable mappings obtained, and the Arnold complexities of these mappings are determined. The main results are presented in Theorems 2-5, in Tables 1 and 2, and in Appendix A.
Defining the Role of Alpha-Synuclein in Enteric Dysfunction in Parkinson's Disease
2017-10-01
direction.
What were the major goals of the project?
- Animal use approvals: accomplished pre-funding.
- Vector production: 1st round of vector...August 2017. 100% complete.
- Vector injections: We injected all animals for the long-term survival group as well as additional subjects for shorter...time points. However, as noted below, the transgene expression seen in these animals was below that which was expected/intended. Thus, we are currently
2013-04-08
defined as p(x_s, t), to the flow state, which is modeled by the time coefficients of a POD truncation (a_j^f(t) in equation 17) (Note: the f superscript...spatially to desired flow features (e.g. vortex shedding, vortex pairing, boundary layer growth, separation points, etc.) are chosen and defined as (x_s...within the numeric simulation. A surface POD analysis, p(x_s, t) ≈ Σ_{p=1}^{k} a_p^s(t) φ_p^s(x_s), yields surface POD modes φ_p^s(x_s). The resulting
Thinking About the Unthinkable: Tokyo’s Nuclear Option
2009-01-01
Sagan. Sounding a Thucydidean note, he maintains that "strong states do what they can . . . adopting the costly, but self-sufficient, policy of developing...arsenal within a reasonable amount of time. This is not an uncommon approach for governments. Notes Ariel Levite, "Would-be proliferants rarely make...low.26 More to the point, Levite defines "'nuclear hedging' as a national strategy lying between nuclear pursuit and nuclear rollback."27 John F
A Custom Approach for a Flexible, Real-Time and Reliable Software Defined Utility.
Zaballos, Agustín; Navarro, Joan; Martín De Pozuelo, Ramon
2018-02-28
Information and communication technologies (ICTs) have enabled the evolution of traditional electric power distribution networks towards a new paradigm referred to as the smart grid. However, the different elements that compose the ICT plane of a smart grid are usually conceived as isolated systems that typically result in rigid hardware architectures, which are hard to interoperate, manage and adapt to new situations. In recent years, software-defined systems that take advantage of software and high-speed data network infrastructures have emerged as a promising alternative to classic ad hoc approaches in terms of integration, automation, real-time reconfiguration and resource reusability. The purpose of this paper is to propose the usage of software-defined utilities (SDUs) to address the latent deployment and management limitations of smart grids. More specifically, the implementation of a smart grid's data storage and management system prototype by means of SDUs is introduced, which exhibits the feasibility of this alternative approach. This system features a hybrid cloud architecture able to meet the data storage requirements of electric utilities and adapt itself to their ever-evolving needs. Conducted experimentations endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction.
Li, Xiaohong; Blount, Patricia L; Vaughan, Thomas L; Reid, Brian J
2011-02-01
Aside from primary prevention, early detection remains the most effective way to decrease mortality associated with the majority of solid cancers. Previous cancer screening models are largely based on classification of at-risk populations into three conceptually defined groups (normal, cancer without symptoms, and cancer with symptoms). Unfortunately, this approach has achieved limited successes in reducing cancer mortality. With advances in molecular biology and genomic technologies, many candidate somatic genetic and epigenetic "biomarkers" have been identified as potential predictors of cancer risk. However, none have yet been validated as robust predictors of progression to cancer or shown to reduce cancer mortality. In this Perspective, we first define the necessary and sufficient conditions for precise prediction of future cancer development and early cancer detection within a simple physical model framework. We then evaluate cancer risk prediction and early detection from a dynamic clonal evolution point of view, examining the implications of dynamic clonal evolution of biomarkers and the application of clonal evolution for cancer risk management in clinical practice. Finally, we propose a framework to guide future collaborative research between mathematical modelers and biomarker researchers to design studies to investigate and model dynamic clonal evolution. This approach will allow optimization of available resources for cancer control and intervention timing based on molecular biomarkers in predicting cancer among various risk subsets that dynamically evolve over time.
Eclipse-Free-Time Assessment Tool for IRIS
NASA Technical Reports Server (NTRS)
Eagle, David
2012-01-01
IRIS_EFT is a scientific simulation that can be used to perform an Eclipse-Free-Time (EFT) assessment of IRIS (Infrared Imaging Surveyor) mission orbits. EFT is defined to be those time intervals longer than one day during which the IRIS spacecraft is not in the Earth's shadow. Program IRIS_EFT implements a special perturbation of orbital motion to numerically integrate Cowell's form of the system of differential equations. Shadow conditions are predicted by embedding this integrator within Brent's method for finding the root of a nonlinear equation. The IRIS_EFT software models the effects of the following types of orbit perturbations on the long-term evolution and shadow characteristics of IRIS mission orbits: (1) non-spherical Earth gravity, (2) atmospheric drag, (3) point-mass gravity of the Sun, and (4) point-mass gravity of the Moon. The objective of this effort was to create an in-house computer program that would perform eclipse-free-time analysis of candidate IRIS spacecraft mission orbits in an accurate and timely fashion. The software is a suite of Fortran subroutines and data files organized as a "computational" engine that is used to accurately predict the long-term orbit evolution of IRIS mission orbits while searching for Earth shadow conditions.
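The root-finding idea, embedding an orbit propagator inside Brent's method to locate shadow entry and exit, can be sketched as follows. The sketch propagates an unperturbed two-body orbit (omitting the geopotential, drag, and lunisolar terms that IRIS_EFT models), uses a cylindrical Earth-shadow model, and holds the Sun direction fixed; all of these are simplifying assumptions, and the orbit itself is a generic LEO example rather than an IRIS candidate.

```python
# Hedged sketch of shadow-crossing detection: propagate a two-body orbit,
# evaluate a cylindrical-umbra shadow function along the dense solution,
# and refine each sign change with Brent's method.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

MU, R_E = 398600.4418, 6378.137            # km^3/s^2, km
SUN = np.array([1.0, 0.0, 0.0])            # fixed unit vector to the Sun

def two_body(t, y):
    r = y[:3]
    return np.hstack([y[3:], -MU * r / np.linalg.norm(r) ** 3])

def shadow(t, traj):
    """Negative inside the cylindrical umbra, positive in sunlight."""
    r = traj(t)[:3]
    if r @ SUN >= 0.0:                     # sunlit side of the Earth
        return np.linalg.norm(r)
    d_perp = np.linalg.norm(r - (r @ SUN) * SUN)
    return d_perp - R_E

y0 = [7000.0, 0.0, 0.0, 0.0, 7.546, 0.0]   # near-circular LEO crossing shadow
period = 2 * np.pi * np.sqrt(7000.0 ** 3 / MU)
sol = solve_ivp(two_body, (0, 2 * period), y0, dense_output=True, rtol=1e-9)

ts = np.linspace(0, 2 * period, 2000)
f = np.array([shadow(t, sol.sol) for t in ts])
crossings = [brentq(shadow, ts[i], ts[i + 1], args=(sol.sol,))
             for i in range(len(ts) - 1) if f[i] * f[i + 1] < 0]
print([round(c) for c in crossings])       # shadow entry/exit times (s)
```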
Guidance, Navigation, and Control Performance for the GOES-R Spacecraft
NASA Technical Reports Server (NTRS)
Chapel, Jim; Stancliffe, Devin; Bevacqua, Tim; Winkler, Stephen; Clapp, Brian; Rood, Tim; Gaylor, David; Freesland, Doug; Krimchansky, Alexander
2014-01-01
The Geostationary Operational Environmental Satellite-R Series (GOES-R) is the first of the next generation geostationary weather satellites. The series represents a dramatic increase in Earth observation capabilities, with 4 times the resolution, 5 times the observation rate, and 3 times the number of spectral bands. GOES-R also provides unprecedented availability, with less than 120 minutes per year of lost observation time. This paper presents the Guidance Navigation & Control (GN&C) requirements necessary to realize the ambitious pointing, knowledge, and Image Navigation and Registration (INR) objectives of GOES-R. Because the suite of instruments is sensitive to disturbances over a broad spectral range, a high fidelity simulation of the vehicle has been created with modal content over 500 Hz to assess the pointing stability requirements. Simulation results are presented showing acceleration, shock response spectra (SRS), and line of sight (LOS) responses for various disturbances from 0 Hz to 512 Hz. Simulation results demonstrate excellent performance relative to the pointing and pointing stability requirements, with LOS jitter for the isolated instrument platform of approximately 1 micro-rad. Attitude and attitude rate knowledge are provided directly to the instrument with an accuracy defined by the Integrated Rate Error (IRE) requirements. The data are used internally for motion compensation. The final piece of the INR performance is orbit knowledge, which GOES-R achieves with GPS navigation. Performance results are shown demonstrating compliance with the 50 to 75 m orbit position accuracy requirements. As presented in this paper, the GN&C performance supports the challenging mission objectives of GOES-R.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsh, I; Otto, M; Weichert, J
Purpose: The focus of this work is to perform Monte Carlo-based dosimetry for several pediatric cancer xenografts in mice treated with a novel radiopharmaceutical, 131I-CLR1404. Methods: Four mice for each tumor cell line were injected with 8–13 µCi/g of 124I-CLR1404. PET/CT images of each individual mouse were acquired at 5–6 time points over the span of 96–170 hours post-injection. Following acquisition, the images were co-registered, resampled, rescaled, corrected for partial volume effects (PVE), and masked. For this work the pre-treatment PET images of 124I-CLR1404 were used to predict therapeutic doses from 131I-CLR1404 at each time point by assuming the same injection activity and accounting for the difference in physical decay rates. Tumors and normal tissues were manually contoured using anatomical and functional images. The CT and the PET images were used in the Geant4 (v9.6) Monte Carlo simulation to define the geometry and source distribution, respectively. The total cumulated absorbed dose was calculated by numerically integrating the dose-rate at each time point over all time on a voxel-by-voxel basis. Results: Spatial distributions of the absorbed dose rates and dose volume histograms, as well as mean, minimum, maximum, and total dose values for each ROI, were generated for each time point. Conclusion: This work demonstrates how mouse-specific MC-based dosimetry could potentially provide more accurate characterization of the efficacy of novel radiopharmaceuticals in radionuclide therapy. This work is partially funded by NIH grant CA198392.
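The last step, voxel-wise numerical integration of the dose rate over time, might look like the following sketch: trapezoidal integration over the sampled interval plus an analytic tail that assumes the dose rate thereafter decays with the physical half-life of 131I. The array shapes and time points are illustrative, not those of the study.

```python
# Hedged sketch of cumulated-dose integration from a few dose-rate maps.
import numpy as np

T_HALF_H = 8.02 * 24.0                     # 131I physical half-life, hours
LAMBDA = np.log(2.0) / T_HALF_H

def cumulated_dose(dose_rates, times_h):
    """dose_rates: (n_times, nx, ny, nz) in Gy/h; times_h: scan times."""
    dt = np.diff(times_h)
    # Trapezoidal rule over the sampled interval, voxel by voxel.
    trapz = np.sum(0.5 * (dose_rates[1:] + dose_rates[:-1])
                   * dt[:, None, None, None], axis=0)
    # Analytic tail: exponential decay at the physical rate thereafter.
    tail = dose_rates[-1] / LAMBDA
    return trapz + tail

times = np.array([5.0, 24.0, 48.0, 96.0, 170.0])
rates = np.exp(-LAMBDA * times)[:, None, None, None] * np.ones((5, 4, 4, 4))
dose = cumulated_dose(rates, times)
print(dose.shape, round(float(dose.max()), 2))
```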
Donovan, P
1983-01-01
Participants at the Human Life Symposium: An Interdisciplinary Approach to the Concept of Person, held in Houston in March, 1982, considered the question of when life and personhood begin. Previously all such discussions had been held in the political arena and in right-to-life publications. In 1973 the Supreme Court had refused to resolve the question. In 1981 Senator Helms' Human Life Amendment (2038) to debt ceiling legislation stated that life begins at conception and the fetus was entitled to protection under the law. This would have created severe abortion funding restrictions and has not yet been passed. From the scientific point of view it was concluded that biology alone is not able to determine the point at which personhood is established. Several scientists expressed their views on personhood, covering such areas as subjective awareness including personality, a sense of self and consciousness, and social status rights and obligations. Reasons for not defining the fetus as a person included the negative impact on providing medical services to the mother and the fetus, and ethical issues in fetal surgery. Bestowing legal personhood on the fetus would not resolve the abortion issue, and historically the law has treated the fetus differently for different purposes. If a fetus were legally defined as a person, additional areas of conflict in constitutional law, tax law, and others would arise. A final area discussed was whether any of the criteria which define death could help define life; results were inconclusive. The participants agreed that while further explorations of the question are necessary, legislative action seems inappropriate at this time.
Simplicity constraints: A 3D toy model for loop quantum gravity
NASA Astrophysics Data System (ADS)
Charles, Christoph
2018-05-01
In loop quantum gravity, tremendous progress has been made using the Ashtekar-Barbero variables. These variables, defined in a gauge fixing of the theory, correspond to a parametrization of the solutions of the so-called simplicity constraints. Their geometrical interpretation is however unsatisfactory as they do not constitute a space-time connection. It would be possible to resolve this point by using a full Lorentz connection or, equivalently, by using the self-dual Ashtekar variables. This leads however to simplicity constraints or reality conditions which are notoriously difficult to implement in the quantum theory. We explore in this paper the possibility of using completely degenerate actions to impose such constraints at the quantum level in the context of canonical quantization. To do so, we define a simpler model, in 3D, with similar constraints by extending the phase space to include an independent vielbein. We define the classical model and show that a precise quantum theory by gauge unfixing can be defined out of it, completely equivalent to the standard 3D Euclidean quantum gravity. We discuss possible future explorations around this model as it could help as a stepping stone to define full-fledged covariant loop quantum gravity.
The evolving block universe and the meshing together of times.
Ellis, George F R
2014-10-01
It has been proposed that spacetime should be regarded as an evolving block universe, bounded to the future by the present time, which continually extends to the future. This future boundary is defined at each time by measuring proper time along Ricci eigenlines from the start of the universe. A key point, then, is that physical reality can be represented at many different scales: hence, the passage of time may be seen as different at different scales, with quantum gravity determining the evolution of spacetime itself at the Planck scale, but quantum field theory and classical physics determining the evolution of events within spacetime at larger scales. The fundamental issue then arises as to how the effective times at different scales mesh together, leading to the concepts of global and local times. © 2014 New York Academy of Sciences.
Oshikiri, Taro; Yasuda, Takashi; Yamamoto, Masashi; Kanaji, Shingo; Yamashita, Kimihiro; Matsuda, Takeru; Sumi, Yasuo; Nakamura, Tetsu; Fujino, Yasuhiro; Tominaga, Masahiro; Suzuki, Satoshi; Kakeji, Yoshihiro
2016-09-01
Minimally invasive esophagectomy (MIE) has less morbidity than the open approach. In particular, thoracoscopic esophagectomy in the prone position (TEP) has been performed worldwide. Using the cumulative sum control chart (CUSUM) method, this study aimed to confirm whether a trainee surgeon who learned established standards would become skilled in TEP with a shorter learning curve than that of the mentoring surgeon. Surgeon A performed TEP in 100 patients; the first 22 patients comprised period 1. His learning curve, defined based on the operation time (OT) of the thoracic procedure, was evaluated using the CUSUM method, and short-term outcomes were assessed. Another 22 patients underwent TEP performed by surgeon B, with outcomes compared to those of surgeon A's period 1. On the CUSUM chart, the peak point of the thoracic procedure OT occurred at the 44th case of surgeon A's experience of 100 cases. Within surgeon A's first 22 cases (period 1), the peak point of the thoracic procedure OT could not be confirmed, and the chart was still rising. The CUSUM chart of surgeon B's experience of 22 cases clearly indicated that the peak point of the thoracic procedure OT occurred at the 17th case. The rate of recurrent laryngeal nerve palsy for surgeon B (9 %) was significantly lower than for surgeon A in period 1 (36 %) (p = 0.0266). It is thus possible for a trainee surgeon to attain the basic skills required to perform TEP in a relatively short period of time using a standardized procedure developed by a mentoring surgeon. The CUSUM method should be useful in evaluating trainee competence during an initial series of procedures, by assessing the learning curve defined by OT.
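A minimal sketch of the CUSUM learning-curve construction follows: each case contributes its deviation of operative time from the series mean, and the peak of the cumulative sum marks where cases stop running longer than average, i.e. the end of the learning phase. The operative times below are fabricated for illustration only.

```python
# Hedged sketch of a CUSUM learning curve over operative times.
import numpy as np

def cusum(op_times_min):
    x = np.asarray(op_times_min, dtype=float)
    # Cumulative sum of deviations from the overall mean OT.
    return np.cumsum(x - x.mean())

rng = np.random.default_rng(3)
ot = np.concatenate([rng.normal(300, 20, 17),    # learning phase, longer OTs
                     rng.normal(240, 20, 33)])   # post-learning cases
chart = cusum(ot)
print(int(np.argmax(chart)) + 1)  # case number at the CUSUM peak (~17)
```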
Vanbinst, Kiran; Ghesquière, Pol; De Smedt, Bert
2014-11-01
Deficits in arithmetic fact retrieval constitute the hallmark of children with mathematical learning difficulties (MLD). It remains, however, unclear which cognitive deficits underpin these difficulties in arithmetic fact retrieval. Many prior studies defined MLD by considering low achievement criteria and not by additionally taking the persistence of the MLD into account. Therefore, the present longitudinal study contrasted children with persistent MLD (MLD-p; mean age: 9 years 2 months) and typically developing (TD) children (mean age: 9 years 6 months) at three time points, to explore whether differences in arithmetic strategy development were associated with differences in numerical magnitude processing, working memory and phonological processing. Our longitudinal data revealed that children with MLD-p had persistent arithmetic fact retrieval deficits at each time point. Children with MLD-p showed persistent impairments in symbolic, but not in nonsymbolic, magnitude processing at each time point. The two groups differed in phonological processing, but not in working memory. Our data indicate that both domain-specific and domain-general cognitive abilities contribute to individual differences in children's arithmetic strategy development, and that the symbolic processing of numerical magnitudes might be a particular risk factor for children with MLD-p. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Arkin, C. Richard; Ottens, Andrew K.; Diaz, Jorge A.; Griffin, Timothy P.; Follestein, Duke; Adams, Fredrick; Steinrock, T. (Technical Monitor)
2001-01-01
For Space Shuttle launch safety, there is a need to monitor the concentration of H2, He, O2 and Ar around the launch vehicle. Currently a large mass spectrometry system performs this task, using long transport lines to draw in samples. There is great interest in replacing this stationary system with several miniature, portable, rugged mass spectrometers that act as point sensors placed at the sampling points. Five commercial and two non-commercial analyzers are evaluated. The five commercial systems include the Leybold Inficon XPR-2 linear quadrupole, the Stanford Research (SRS-100) linear quadrupole, the Ferran linear quadrupole array, the ThermoQuest Polaris-Q quadrupole ion trap, and the IonWerks Time-of-Flight (TOF). The non-commercial systems include a compact double focusing sector (CDFMS) developed at the University of Minnesota and a quadrupole ion trap (UF-IT) developed at the University of Florida. The System Volume is determined by measuring the entire system volume, including the mass analyzer, its associated electronics, the associated vacuum system, the high vacuum pump, and the rough pump. Also measured are any ion gauge controllers or other required equipment. Computers are not included. Scan Time is the time required for one scan to be acquired and the data to be transferred. It is determined by measuring the time required to acquire a known number of scans and dividing by that number of scans. Limit of Detection is determined first by performing a zero-span calibration (using a 10-point data set). The limit of detection (LOD) is then defined as 3 times the standard deviation of the zero data set. (An LOD of 10 ppm or less is considered acceptable.)
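The stated LOD criterion reduces to a one-liner, sketched below with simulated zero-gas readings standing in for actual analyzer output; the 10 ppm acceptance threshold is the one quoted above.

```python
# Hedged sketch of the LOD criterion: 3 times the standard deviation of a
# 10-point zero-span data set. The readings are simulated placeholders.
import numpy as np

zero_readings_ppm = np.random.default_rng(7).normal(0.0, 2.5, 10)
lod_ppm = 3.0 * zero_readings_ppm.std(ddof=1)
print(f"LOD = {lod_ppm:.1f} ppm, acceptable: {lod_ppm <= 10.0}")
```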
Improved-resolution real-time skin-dose mapping for interventional fluoroscopic procedures
NASA Astrophysics Data System (ADS)
Rana, Vijay K.; Rudin, Stephen; Bednarek, Daniel R.
2014-03-01
We have developed a dose-tracking system (DTS) that provides a real-time display of the skin-dose distribution on a 3D patient graphic during fluoroscopic procedures. Radiation dose to individual points on the skin is calculated using exposure and geometry parameters from the digital bus on a Toshiba C-arm unit. To accurately define the distribution of dose, it is necessary to use a high-resolution patient graphic consisting of a large number of elements. In the original DTS version, the patient graphics were obtained from a library of population body scans, which consisted of larger triangular elements resulting in poor congruence between the graphic points and the x-ray beam boundary. To improve the resolution without impacting real-time performance, the number of calculations must be reduced, and so we created software-designed human models and modified the DTS to read the graphic as a list of vertices of the triangular elements such that common vertices of adjacent triangles are listed once. Dose is calculated for each vertex point once, instead of the number of times that a given vertex appears in multiple triangles. By reformatting the graphic file, we were able to subdivide the triangular elements 64-fold with an increase in file size of only 1.3 times. This allows a much greater number of smaller triangular elements and improves the resolution of the patient graphic without compromising the real-time performance of the DTS, and also gives a smoother graphic display for better visualization of the dose distribution.
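The reformatting idea, listing shared vertices once and indexing triangles into that list so dose is computed once per unique vertex, can be sketched as follows; the dose function here is a trivial stand-in for the DTS calculation.

```python
# Hedged sketch of an indexed mesh: shared vertex list plus triangle
# indices, so per-vertex work is done once per unique vertex.
import numpy as np

def index_mesh(triangles):
    """triangles: (n_tri, 3, 3) corner coordinates -> (vertices, faces)."""
    flat = triangles.reshape(-1, 3)
    vertices, inverse = np.unique(flat, axis=0, return_inverse=True)
    faces = inverse.reshape(-1, 3)          # 3 vertex indices per triangle
    return vertices, faces

def dose_per_vertex(vertices, dose_fn):
    # One dose evaluation per unique vertex, not per triangle corner.
    return np.apply_along_axis(dose_fn, 1, vertices)

# Two triangles sharing an edge: 6 corners but only 4 unique vertices.
tris = np.array([[[0, 0, 0], [1, 0, 0], [0, 1, 0]],
                 [[1, 0, 0], [1, 1, 0], [0, 1, 0]]], dtype=float)
v, f = index_mesh(tris)
print(len(v), dose_per_vertex(v, lambda p: p.sum()))   # 4 unique vertices
```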
Trinka, Eugen; Cock, Hannah; Hesdorffer, Dale; Rossetti, Andrea O; Scheffer, Ingrid E; Shinnar, Shlomo; Shorvon, Simon; Lowenstein, Daniel H
2015-10-01
The Commission on Classification and Terminology and the Commission on Epidemiology of the International League Against Epilepsy (ILAE) have charged a Task Force to revise concepts, definition, and classification of status epilepticus (SE). The proposed new definition of SE is as follows: Status epilepticus is a condition resulting either from the failure of the mechanisms responsible for seizure termination or from the initiation of mechanisms which lead to abnormally prolonged seizures (after time point t1). It is a condition that can have long-term consequences (after time point t2), including neuronal death, neuronal injury, and alteration of neuronal networks, depending on the type and duration of seizures. This definition is conceptual, with two operational dimensions: the first is the length of the seizure and the time point (t1) beyond which the seizure should be regarded as "continuous seizure activity." The second time point (t2) is the time of ongoing seizure activity after which there is a risk of long-term consequences. In the case of convulsive (tonic-clonic) SE, both time points (t1 at 5 min and t2 at 30 min) are based on animal experiments and clinical research. This evidence is incomplete, and there is furthermore considerable variation, so these time points should be considered the best estimates currently available. Data are not yet available for other forms of SE, but as knowledge and understanding increase, time points can be defined for specific forms of SE based on scientific evidence and incorporated into the definition, without changing the underlying concepts. A new diagnostic classification system of SE is proposed, which will provide a framework for clinical diagnosis, investigation, and therapeutic approaches for each patient. There are four axes: (1) semiology; (2) etiology; (3) electroencephalography (EEG) correlates; and (4) age. Axis 1 (semiology) lists different forms of SE divided into those with prominent motor symptoms, those without prominent motor symptoms, and currently indeterminate conditions (such as acute confusional states with epileptiform EEG patterns). Axis 2 (etiology) is divided into subcategories of known and unknown causes. Axis 3 (EEG correlates) adopts the latest recommendations by consensus panels to use the following descriptors for the EEG: name of pattern, morphology, location, time-related features, modulation, and effect of intervention. Finally, axis 4 divides age groups into neonatal, infancy, childhood, adolescent and adulthood, and elderly. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.
[UV-radiation--sources, wavelength, environment].
Hölzle, Erhard; Hönigsmann, Herbert
2005-09-01
The UV radiation in our environment is part of the electromagnetic radiation that emanates from the sun. It is designated optical radiation and, at the earth's surface, spans 290-4,000 nm. According to international definitions, UV radiation is divided into short-wave UVC (200-280 nm), medium-wave UVB (280-320 nm), and long-wave UVA (320-400 nm). Solar radiation which reaches the surface of the globe at a defined geographical site and a defined time point is called global radiation. It is modified quantitatively and qualitatively while penetrating the atmosphere. Besides atmospheric conditions such as the ozone layer and air pollution, the modifying factors include geographic latitude, elevation, season, time of day, cloudiness, and indirect radiation arising from scattering in the atmosphere and reflection from the ground; together these determine the biologically effective radiation. The radiation's distribution over the body surface varies according to sun angle and body posture. Cumulative UV exposure is mainly influenced by outdoor profession and recreational activities; the use of sun beds and phototherapeutic measures may additionally contribute to the cumulative UV dose.
Dynamical emergence of Markovianity in local time scheme.
Jeknić-Dugić, J; Arsenijević, M; Dugić, M
2016-06-01
Recently we pointed out the so-called local time scheme as a novel approach to quantum foundations that solves the preferred pointer-basis problem. In this paper, we introduce and analyse in depth a rather non-standard dynamical map that is imposed by the scheme. On the one hand, the map does not allow for introducing a properly defined generator of the evolution nor does it represent a quantum channel. On the other hand, the map is linear, positive, trace preserving and unital as well as completely positive, but is not divisible and therefore non-Markovian. Nevertheless, we provide quantitative criteria for dynamical emergence of time-coarse-grained Markovianity, for exact dynamics of an open system, as well as for operationally defined approximation of a closed or open many-particle system. A closed system never reaches a steady state, whereas an open system may reach a unique steady state given by the Lüders-von Neumann formula; where the smaller the open system, the faster a steady state is attained. These generic findings extend the standard open quantum systems theory and substantially tackle certain cosmological issues.
A Concept for Run-Time Support of the Chapel Language
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
A document presents a concept for run-time implementation of other concepts embodied in the Chapel programming language. (Now undergoing development, Chapel is intended to become a standard language for parallel computing that would surpass older such languages both in computational performance and in the efficiency with which pre-existing code can be reused and new code written.) The aforementioned other concepts are those of distributions, domains, allocations, and access, as defined in a separate document called "A Semantic Framework for Domains and Distributions in Chapel" and linked to a language specification defined in another separate document called "Chapel Specification 0.3." The concept presented in the instant report is recognition that a data domain that was invented for Chapel offers a novel approach to distributing and processing data in a massively parallel environment. The concept is offered as a starting point for development of working descriptions of functions and data structures that would be necessary to implement interfaces to a compiler for transforming the aforementioned other concepts from their representations in Chapel source code to their run-time implementations.
Defining human death: an intersection of bioethics and metaphysics.
Manninen, Bertha Alvarez
2009-01-01
For many years now, bioethicists, physicians, and others in the medical field have disagreed concerning how to best define human death. Different theories range from the Harvard Criteria of Brain Death, which defines death as the cessation of all brain activity, to the Cognitive Criteria, which is based on the loss of almost all core mental properties, e.g., memory, self-consciousness, moral agency, and the capacity for reason. A middle ground is the Irreversibility Standard, which defines death as occurring when the capacity for consciousness is forever lost. Given all these different theories, how can we begin to approach solving the issue of how to define death? I propose that a necessary starting point is discussing an even more fundamental question that properly belongs in the philosophical field of metaphysics: we must first address the issue of diachronic identity over time, and the persistence conditions of personal identity. In this paper, I illustrate the interdependent relationship between this metaphysical question and questions concerning the definition of death. I also illustrate how it is necessary to antecedently attend to the metaphysical issue of defining death before addressing certain issues in medical ethics, e.g., whether it is morally permissible to euthanize patients in persistent vegetative states or procure organs from anencephalic infants.
Stochastically gated local and occupation times of a Brownian particle
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.
2017-01-01
We generalize the Feynman-Kac formula to analyze the local and occupation times of a Brownian particle moving in a stochastically gated one-dimensional domain. (i) The gated local time is defined as the amount of time spent by the particle in the neighborhood of a point in space where there is some target that only receives resources from (or detects) the particle when the gate is open; the target does not interfere with the motion of the Brownian particle. (ii) The gated occupation time is defined as the amount of time spent by the particle in the positive half of the real line, given that it can only cross the origin when a gate placed at the origin is open; in the closed state the particle is reflected. In both scenarios, the gate randomly switches between the open and closed states according to a two-state Markov process. We derive a stochastic, backward Fokker-Planck equation (FPE) for the moment-generating function of the two types of gated Brownian functional, given a particular realization of the stochastic gate, and analyze the resulting stochastic FPE using a moments method recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment-generating function, averaged with respect to realizations of the stochastic gate.
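As a complement to the analytical treatment summarized above, scenario (ii) is straightforward to simulate directly. The following is a minimal Euler-Maruyama sketch, assuming illustrative diffusion and gate-switching rates; it accumulates the gated occupation time in the positive half-line with a reflecting closed gate at the origin, and is not the paper's moment-equation method.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 1.0                   # diffusion coefficient (illustrative)
alpha, beta = 1.0, 1.0    # gate closing / opening rates (illustrative)
dt, n_steps = 1e-3, 200_000

x = 1.0                   # particle starts on the positive half-line
gate_open = True
occupation = 0.0          # gated occupation time in x > 0

for _ in range(n_steps):
    # Two-state Markov gate: switch with probability rate*dt per step.
    if gate_open:
        if rng.random() < alpha * dt:
            gate_open = False
    elif rng.random() < beta * dt:
        gate_open = True

    x_new = x + np.sqrt(2 * D * dt) * rng.standard_normal()

    # A closed gate at the origin reflects any crossing attempt.
    if not gate_open and (x > 0) != (x_new > 0):
        x_new = -x_new
    x = x_new

    if x > 0:
        occupation += dt

print(f"gated occupation time: {occupation:.2f} of {n_steps * dt:.0f} s")
```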
Marketing: The roots of your business
Susan S. Franko
2008-01-01
These tools will help you turn the features of your products and services into benefits. A feature is defined from your point of view; a benefit is defined from the customer's point of view. The potential customer has to be helped to understand why you are the right choice for him or her. In this way, you lead them to the decision you want them to make, that is,...
Substance P signalling in primary motor cortex facilitates motor learning in rats
Hertler, Benjamin; Hosp, Jonas Aurel; Blanco, Manuel Buitrago
2017-01-01
Among the genes that are up-regulated in response to reaching training in rats, Tachykinin 1 (Tac1)—a gene that encodes the neuropeptide Substance P (Sub P)—shows an especially strong expression. Using real-time RT-PCR, a detailed time course of Tac1 expression could be defined: a significant peak occurs 7 hours after the end of training at the first and second training sessions, whereas no up-regulation could be detected at a later time point (sixth training session). To assess the physiological role of Sub P during movement acquisition, microinjections into the primary motor cortex (M1) contralateral to the trained paw were performed. When Sub P was injected before the first three sessions of reaching training, the effectiveness of motor learning was significantly increased. Injections at a time point when rats already knew the task (i.e. training sessions ten and eleven) had no effect on reaching performance. Sub P injections did not influence the improvement of performance within a single training session, but retention of performance between sessions was strengthened at a very early stage (i.e. between baseline training and the first training session). Thus, Sub P facilitates motor learning in the very early phase of skill acquisition by supporting memory consolidation. In line with these findings, learning-related expression of the precursor Tac1 occurs at early but not at later time points during reaching training. PMID:29281692
Powers, John H.; Patrick, Donald L.; Walton, Marc K.; Marquis, Patrick; Cano, Stefan; Hobart, Jeremy; Isaac, Maria; Vamvakas, Spiros; Slagle, Ashley; Molsen, Elizabeth; Burke, Laurie B.
2017-01-01
A clinician-reported outcome (ClinRO) assessment is a type of clinical outcome assessment (COA). ClinRO assessments, like all COAs (patient-reported, observer-reported, or performance outcome assessments), are used to 1) measure patients’ health status and 2) define end points that can be interpreted as treatment benefits of medical interventions on how patients feel, function, or survive in clinical trials. Like other COAs, ClinRO assessments can be influenced by human choices, judgment, or motivation. A ClinRO assessment is conducted and reported by a trained health care professional and requires specialized professional training to evaluate the patient’s health status. This is the second of two reports by the ISPOR Clinical Outcomes Assessment—Emerging Good Practices for Outcomes Research Task Force. The first report provided an overview of COAs including definitions important for an understanding of COA measurement practices. This report focuses specifically on issues related to ClinRO assessments. In this report, we define three types of ClinRO assessments (readings, ratings, and clinician global assessments) and describe emerging good measurement practices in their development and evaluation. The good measurement practices include 1) defining the context of use; 2) identifying the concept of interest measured; 3) defining the intended treatment benefit on how patients feel, function, or survive reflected by the ClinRO assessment and evaluating the relationship between that intended treatment benefit and the concept of interest; 4) documenting content validity; 5) evaluating other measurement properties once content validity is established (including intra- and inter-rater reliability); 6) defining study objectives and end point(s) objectives, and defining study end points and placing study end points within the hierarchy of end points; 7) establishing interpretability in trial results; and 8) evaluating operational considerations for the implementation of ClinRO assessments used as end points in clinical trials. Applying good measurement practices to ClinRO assessment development and evaluation will lead to more efficient and accurate measurement of treatment effects. This is important beyond regulatory approval in that it provides evidence for the uptake of new interventions into clinical practice and provides justification to payers for reimbursement on the basis of the clearly demonstrated added value of the new intervention. PMID:28212963
Real Time Correction of Aircraft Flight Configuration
NASA Technical Reports Server (NTRS)
Schipper, John F. (Inventor)
2009-01-01
Method and system for monitoring and analyzing, in real time, the variation with time of an aircraft flight parameter. A time-dependent recovery band, defined by first and second recovery band boundaries that are spaced apart at at least one time point, is constructed for a selected flight parameter and for a selected recovery time interval length Δt(FP;rec). A flight parameter having a value FP(t=t_p) at a time t=t_p is likely to be able to recover to a reference flight parameter value FP(t';ref), lying in a band of reference flight parameter values FP(t';ref;CB), within a time interval given by t_p ≤ t' ≤ t_p + Δt(FP;rec), if (or only if) the flight parameter value lies between the first and second recovery band boundary traces.
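A conformance test against such a band reduces to a bounds check at each time point. The sketch below assumes hypothetical boundary traces that narrow toward a target value; the function names and the airspeed numbers are illustrative, not the patent's interfaces.

```python
def within_recovery_band(t, fp_value, lower_boundary, upper_boundary):
    """Check whether a flight parameter value lies between the first and
    second recovery band boundary traces at time t.

    lower_boundary / upper_boundary are callables t -> boundary value;
    all names here are illustrative, not the patent's interfaces."""
    return lower_boundary(t) <= fp_value <= upper_boundary(t)

# Example: a hypothetical time-dependent band that narrows toward a target.
target = 250.0  # e.g., reference airspeed in knots (invented value)
lower = lambda t: target - 40.0 / (1.0 + t)
upper = lambda t: target + 40.0 / (1.0 + t)

print(within_recovery_band(2.0, 245.0, lower, upper))  # True: band still wide
print(within_recovery_band(9.0, 245.0, lower, upper))  # False: band has narrowed
```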
Longer wait times affect future use of VHA primary care.
Wong, Edwin S; Liu, Chuan-Fen; Hernandez, Susan E; Augustine, Matthew R; Nelson, Karin; Fihn, Stephan D; Hebert, Paul L
2017-07-29
Improving access to the Veterans Health Administration (VHA) is a high priority, particularly given statutory mandates of the Veterans Access, Choice and Accountability Act. This study examined whether patient-reported wait times for VHA appointments were associated with future reliance on VHA primary care services. This observational study examined 13,595 VHA patients dually enrolled in fee-for-service Medicare. Data sources included VHA administrative data, Medicare claims and the Survey of Healthcare Experiences of Patients (SHEP). Primary care use was defined as the number of face-to-face visits from VHA and Medicare in the 12 months following SHEP completion. VHA reliance was defined as the number of VHA visits divided by total visits (VHA+Medicare). Wait times were derived from SHEP responses measuring the usual number of days to a VHA appointment with patients' primary care provider for those seeking immediate care. We defined appointment wait times categorically: 0 days, 1 day, 2-3 days, 4-7 days and >7 days. We used fractional logistic regression to examine the relationship between wait times and reliance. Mean VHA reliance was 88.1% (95% CI = 86.7% to 89.5%) for patients reporting 0-day waits. Compared with these patients, reliance over the subsequent year was 1.4 (p = 0.041), 2.8 (p = 0.001) and 1.6 (p = 0.014) percentage points lower for patients waiting 2-3 days, 4-7 days and >7 days, respectively. Patients reporting longer usual wait times for immediate VHA care exhibited lower future reliance on VHA primary care. Longer wait times may reduce care continuity and impact cost shifting across two federal health programs. Copyright © 2017. Published by Elsevier Inc.
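Fractional logistic regression treats a share in [0, 1], here reliance, as the outcome of a binomial-family GLM with a logit link. A minimal sketch on simulated data follows; the wait-time categories, effect sizes, and the use of statsmodels' quasi-likelihood binomial GLM are assumptions for illustration, not the study's dataset or code (statsmodels may warn about non-integer outcomes, which is expected for fractional logit).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500

# Hypothetical data: wait-time category (0 = same-day, ..., 4 = >7 days)
# and VHA reliance as a fraction of total primary care visits in [0, 1].
wait_cat = rng.integers(0, 5, size=n)
dummies = np.eye(5)[wait_cat][:, 1:]   # 0-day waits as the reference level
X = sm.add_constant(dummies)

eta = 2.0 - 0.15 * wait_cat + rng.normal(0.0, 0.5, n)
reliance = 1.0 / (1.0 + np.exp(-eta))  # simulated fractional outcome

# Fractional logit: binomial GLM on a [0, 1] outcome (quasi-likelihood).
fit = sm.GLM(reliance, X, family=sm.families.Binomial()).fit()
print(fit.params)  # negative dummy coefficients -> lower reliance vs 0-day
```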
Trajectory Specification for Automation of Terminal Air Traffic Control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.
2016-01-01
"Trajectory specification" is the explicit bounding and control of aircraft tra- jectories such that the position at each point in time is constrained to a precisely defined volume of space. The bounding space is defined by cross-track, along-track, and vertical tolerances relative to a reference trajectory that specifies position as a function of time. The tolerances are dynamic and will be based on the aircraft nav- igation capabilities and the current traffic situation. A standard language will be developed to represent these specifications and to communicate them by datalink. Assuming conformance, trajectory specification can guarantee safe separation for an arbitrary period of time even in the event of an air traffic control (ATC) sys- tem or datalink failure, hence it can help to achieve the high level of safety and reliability needed for ATC automation. As a more proactive form of ATC, it can also maximize airspace capacity and reduce the reliance on tactical backup systems during normal operation. It applies to both enroute airspace and the terminal area around airports, but this paper focuses on arrival spacing in the terminal area and presents ATC algorithms and software for achieving a specified delay of runway arrival time.
NASA Astrophysics Data System (ADS)
Zinke, Stephan
2017-02-01
Memory sensitive applications for remote sensing data require memory-optimized data types in remote sensing products. Hierarchical Data Format version 5 (HDF5) offers user defined floating point numbers and integers and the n-bit filter to create data types optimized for memory consumption. The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) applies a compaction scheme to the disseminated products of the Day and Night Band (DNB) data of Suomi National Polar-orbiting Partnership (S-NPP) satellite's instrument Visible Infrared Imager Radiometer Suite (VIIRS) through the EUMETSAT Advanced Retransmission Service, converting the original 32 bits floating point numbers to user defined floating point numbers in combination with the n-bit filter for the radiance dataset of the product. The radiance dataset requires a floating point representation due to the high dynamic range of the DNB. A compression factor of 1.96 is reached by using an automatically determined exponent size and an 8 bits trailing significand and thus reducing the bandwidth requirements for dissemination. It is shown how the parameters needed for user defined floating point numbers are derived or determined automatically based on the data present in a product.
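The parameter derivation described, choosing an exponent size automatically from the dynamic range of the data while fixing an 8-bit trailing significand, can be sketched as follows. This illustrates the idea on synthetic data and is not EUMETSAT's implementation; the margin convention and bit accounting are assumptions.

```python
import numpy as np

def float_params(data, significand_bits=8):
    """Derive user-defined float parameters from the data in a product:
    the smallest exponent range covering all finite nonzero values, plus
    a fixed trailing significand. A sketch of the idea only."""
    vals = np.abs(data[np.isfinite(data) & (data != 0)])
    e_min = int(np.floor(np.log2(vals.min())))
    e_max = int(np.ceil(np.log2(vals.max())))
    n_exponents = e_max - e_min + 1            # distinct exponents needed
    # +1 reserves one exponent code (e.g., for zero); an assumption here.
    exponent_bits = max(1, int(np.ceil(np.log2(n_exponents + 1))))
    total_bits = 1 + exponent_bits + significand_bits  # sign + exp + signif.
    return exponent_bits, total_bits

# Synthetic high-dynamic-range "radiances" standing in for DNB data.
radiance = np.random.default_rng(2).lognormal(mean=-18, sigma=4, size=10_000)
exp_bits, n_bits = float_params(radiance.astype(np.float32))
print(f"exponent bits: {exp_bits}, packed width: {n_bits} bits "
      f"(vs 32; factor {32 / n_bits:.2f})")
```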
Predict or classify: The deceptive role of time-locking in brain signal classification
NASA Astrophysics Data System (ADS)
Rusconi, Marco; Valleriani, Angelo
2016-06-01
Several experimental studies claim to be able to predict the outcome of simple decisions from brain signals measured before subjects are aware of their decision. Often, these studies use multivariate pattern recognition methods with the underlying assumption that the ability to classify the brain signal is equivalent to predict the decision itself. Here we show instead that it is possible to correctly classify a signal even if it does not contain any predictive information about the decision. We first define a simple stochastic model that mimics the random decision process between two equivalent alternatives, and generate a large number of independent trials that contain no choice-predictive information. The trials are first time-locked to the time point of the final event and then classified using standard machine-learning techniques. The resulting classification accuracy is above chance level long before the time point of time-locking. We then analyze the same trials using information theory. We demonstrate that the high classification accuracy is a consequence of time-locking and that its time behavior is simply related to the large relaxation time of the process. We conclude that when time-locking is a crucial step in the analysis of neural activity patterns, both the emergence and the timing of the classification accuracy are affected by structural properties of the network that generates the signal.
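The effect is easy to reproduce with a toy version of the stochastic model: unbiased random-walk trials carry no choice-predictive information, yet once trials are time-locked to the threshold-crossing event, the signal sign at earlier time points "classifies" the outcome above chance. A self-contained sketch, with illustrative thresholds and trial counts:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, max_len, threshold = 2000, 5000, 30
lock_window = 200  # samples kept before the time-locking event

segments, labels = [], []
for _ in range(n_trials):
    steps = rng.choice([-1, 1], size=max_len)
    walk = np.cumsum(steps)  # unbiased walk: no predictive information
    hits = np.nonzero(np.abs(walk) >= threshold)[0]
    if len(hits) == 0 or hits[0] < lock_window:
        continue
    t = hits[0]                               # the "decision" time point
    segments.append(walk[t - lock_window:t])  # time-lock to the event
    labels.append(walk[t] > 0)

segments = np.array(segments)
labels = np.array(labels)

# "Classify" each trial from the signal value at a fixed pre-event lag:
for lag in (200, 100, 20):
    accuracy = np.mean((segments[:, -lag] > 0) == labels)
    print(f"{lag:4d} samples before event: accuracy {accuracy:.2f}")
```

Accuracy rises above 0.5 well before the event purely because of time-locking, exactly the artifact the paper describes.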
Pinto Pereira, Snehal M; Li, Leah; Power, Chris
2014-12-01
Much adult physical inactivity research ignores early-life factors from which later influences may originate. In the 1958 British birth cohort (followed from 1958 to 2008), leisure-time inactivity, defined as activity frequency of less than once a week, was assessed at ages 33, 42, and 50 years (n = 12,776). Early-life factors (at ages 0-16 years) were categorized into 3 domains (i.e., physical, social, and behavioral). We assessed associations of adult inactivity 1) with factors within domains, 2) with the 3 domains combined, and 3) allowing for adult factors. At each age, approximately 32% of subjects were inactive. When domains were combined, factors associated with inactivity (e.g., at age 50 years) were prepubertal stature (5% lower odds per 1-standard deviation higher height), hand control/coordination problems (14% higher odds per 1-point increase on a 4-point scale), cognition (10% lower odds per 1-standard deviation greater ability), parental divorce (21% higher odds), institutional care (29% higher odds), parental social class at child's birth (9% higher odds per 1-point reduction on a 4-point scale), minimal parental education (13% higher odds), household amenities (2% higher odds per increase (representing poorer amenities) on a 19-point scale), inactivity (8% higher odds per 1-point reduction in activity on a 4-point scale), low sports aptitude (13% higher odds), and externalizing behaviors (i.e., conduct problems) (5% higher odds per 1-standard deviation higher score). Adjustment for adult covariates weakened associations slightly. Factors from early life were associated with adult leisure-time inactivity, allowing for early identification of groups vulnerable to inactivity. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Trends in the Prevalence and Disparity in Cognitive Limitations of Americans 55-69 Years Old.
Choi, HwaJung; Schoeni, Robert F; Martin, Linda G; Langa, Kenneth M
2018-04-16
To determine whether the prevalence of cognitive limitation (CL) among Americans ages 55 to 69 years changed between 1998 and 2014, and to assess the trends in socioeconomic disparities in CL among groups defined by race/ethnicity, education, income, and wealth. Logistic regression using 1998-2014 data from the biennial Health and Retirement Study, a nationally representative data set. CL is defined as a score of 0-11 on a 27-point cognitive battery of items focused on memory. Socioeconomic status (SES) measures are classified as quartiles. In models controlling for age, gender, and previous cognitive testing, we find no significant change over time in the overall prevalence of CL, widening disparities in limitation by income and, in some cases, wealth, and improvements among non-Hispanic whites but not other racial/ethnic groups. Among people 55-69, rates of CL are many times higher for groups with lower SES than those with higher SES, and recent trends show little indication that the gaps are narrowing.
Critical space-time networks and geometric phase transitions from frustrated edge antiferromagnetism
NASA Astrophysics Data System (ADS)
Trugenberger, Carlo A.
2015-12-01
Recently I proposed a simple dynamical network model for discrete space-time that self-organizes as a graph with Hausdorff dimension dH=4 . The model has a geometric quantum phase transition with disorder parameter (dH-ds) , where ds is the spectral dimension of the dynamical graph. Self-organization in this network model is based on a competition between a ferromagnetic Ising model for vertices and an antiferromagnetic Ising model for edges. In this paper I solve a toy version of this model defined on a bipartite graph in the mean-field approximation. I show that the geometric phase transition corresponds exactly to the antiferromagnetic transition for edges, the dimensional disorder parameter of the former being mapped to the staggered magnetization order parameter of the latter. The model has a critical point with long-range correlations between edges, where a continuum random geometry can be defined, exactly as in Kazakov's famed 2D random lattice Ising model but now in any number of dimensions.
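For readers who want the flavor of the mean-field calculation, the sketch below solves the self-consistency equations of a generic two-sublattice antiferromagnet and reports the staggered magnetization. It is a standard textbook toy in the spirit of the paper, not the paper's specific vertex-edge model; coupling and coordination number are illustrative.

```python
import numpy as np

def staggered_magnetization(T, J=1.0, z=4, n_iter=2000):
    """Self-consistent mean field for a two-sublattice (bipartite)
    antiferromagnet, H = J * sum_<ij> s_i s_j with J > 0.
    A generic toy calculation, not the paper's edge model."""
    mA, mB = 0.1, -0.1  # asymmetric seed so AF order can develop
    for _ in range(n_iter):
        mA, mB = np.tanh(-J * z * mB / T), np.tanh(-J * z * mA / T)
    return 0.5 * (mA - mB)  # staggered order parameter

for T in (1.0, 2.0, 3.9, 4.1, 6.0):  # mean-field Neel point at T = z*J = 4
    print(f"T = {T:>4}: m_s = {staggered_magnetization(T):.3f}")
```

Below the transition the staggered magnetization is nonzero (ordered, "geometric" phase in the paper's mapping); above it, the iteration collapses to zero.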
NASA Technical Reports Server (NTRS)
Leskovar, B.; Turko, B.
1977-01-01
The development of a high-precision time interval digitizer is described. The time digitizer is a 10 psec resolution stopwatch covering a range of up to 340 msec. The measured time interval is determined as the separation between the leading edges of a pair of pulses applied externally to the start input and the stop input of the digitizer. Employing an interpolation technique and a 50 MHz high-precision master oscillator, the equivalent of a 100 GHz clock frequency standard is achieved. Absolute accuracy and stability of the digitizer are determined by the external 50 MHz master oscillator, which serves as a standard time marker. The start and stop pulses are fast 1 nsec rise time signals conforming to the Nuclear Instrument Module convention, detected by means of tunnel diode discriminators. The firing levels of the discriminators define the start and stop points between which the time interval is digitized.
Three-axis asymmetric radiation detector system
Martini, Mario Pierangelo; Gedcke, Dale A.; Raudorf, Thomas W.; Sangsingkeow, Pat
2000-01-01
A three-axis radiation detection system whose inner and outer electrodes are shaped and positioned so that the shortest path between any point on the inner electrode and the outer electrode is a different length whereby the rise time of a pulse derived from a detected radiation event can uniquely define the azimuthal and radial position of that event, and the outer electrode is divided into a plurality of segments in the longitudinal axial direction for locating the axial location of a radiation detection event occurring in the diode.
Optimal Recovery Trajectories for Automatic Ground Collision Avoidance Systems (Auto GCAS)
2015-03-01
the Multi-Trajectory path uses a sphere buffer (with a 350 ft radius) around each time point in the propagated path. Hence, the yellow Xs indicate the... the HUD as well as a matrix/line of Xs on the radar electro-optical (REO) display. Enhanced ground clobber (EGC) mechanization was integrated on the F... reachable in the timespan t ∈ [t0, tf], and d_threshold is a scalar user-defined terrain buffer. For the work developed herein, d_threshold was set to 350
2013-04-08
estimator will relate an array of surface-mounted sensor signals, defined as p(x_s, t), to the flow state, which is modeled by the time coefficients of a POD... layer growth, separation points, etc.) are chosen and defined as (x_s) within the numeric simulation. A surface POD analysis, p(x_s, t) ≃ Σ_{p=1}^{k} a_p^s(t) φ_p^s(x_s) (Eq. 30), yields surface POD modes φ_p^s(x_s). The resulting locations of the maxima and minima of the surface modes show where the largest
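The surface POD decomposition in Eq. 30 is conventionally computed from an SVD of the mean-subtracted snapshot matrix. A minimal sketch on synthetic sensor data follows; the signal model and dimensions are invented for illustration, and the extrema of the leading modes are reported as candidate sensor locations in the spirit of the passage.

```python
import numpy as np

# Snapshot matrix of hypothetical surface sensor signals p(x_s, t):
# rows = sensor locations x_s, columns = time samples.
rng = np.random.default_rng(4)
n_sensors, n_times = 64, 400
t = np.linspace(0, 10, n_times)
x = np.linspace(0, 1, n_sensors)[:, None]
P = (np.sin(2 * np.pi * x) * np.cos(3 * t)
     + 0.3 * np.cos(5 * np.pi * x) * np.sin(7 * t)
     + 0.05 * rng.standard_normal((n_sensors, n_times)))

# Surface POD by SVD of the mean-subtracted snapshots:
# P(x_s, t) ~= sum_p a_p(t) * phi_p(x_s)
P0 = P - P.mean(axis=1, keepdims=True)
phi, s, vt = np.linalg.svd(P0, full_matrices=False)
a = s[:, None] * vt  # time coefficients a_p(t)

k = 2
print("energy captured by first", k, "modes:", (s[:k]**2).sum() / (s**2).sum())
# Extrema of the leading modes suggest where sensors are most informative.
print("mode-1 extremum at x index:", np.abs(phi[:, 0]).argmax())
```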
Method for resource control in parallel environments using program organization and run-time support
NASA Technical Reports Server (NTRS)
Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)
2001-01-01
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
Method for resource control in parallel environments using program organization and run-time support
NASA Technical Reports Server (NTRS)
Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)
1999-01-01
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
Development of the Expert System Domain Advisor and Analysis Tool
1991-09-01
analysis. Typical of the current methods in use at this time is the "TAROT metric". This method defines a decision rule whose output is whether to go... APPENDIX B - TAROT METRIC. B. INTRODUCTION: The system chart of ESEM, Figure 1, shows the following three risk-based decision points: i. At project initiation... decisions. Table B-1, Evaluation Factors for ES Development, lists FACTORS with POSSIBLE VALUE RATINGS; for the TAROT metric (overall suitability): Poor, Fair
Design criteria for a self-actuated shutdown system to ensure limitation of core damage. [LMFBR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deane, N.A.; Atcheson, D.B.
1981-09-01
Safety-based functional requirements and design criteria for a self-actuated shutdown system (SASS) are derived in accordance with LOA-2 success criteria and reliability goals. The design basis transients have been defined and evaluated for the CDS Phase II design, which is a 2550 MWt mixed oxide heterogeneous core reactor. A partial set of reactor responses for selected transients is provided as a function of SASS characteristics such as reactivity worth, trip points, and insertion times.
Alternative general-aircraft engines
NASA Technical Reports Server (NTRS)
Tomazic, W. A.
1976-01-01
The most promising alternative engine (or engines) for application to general aircraft in the post-1985 time period was defined, and the level of technology was cited to the point where confident development of a new engine can begin early in the 1980's. Low emissions, multifuel capability, and fuel economy were emphasized. Six alternative propulsion concepts were considered to be viable candidates for future general-aircraft application: the advanced spark-ignition piston, rotary combustion, two- and four-stroke diesel, Stirling, and gas turbine engines.
Omega System Performance Assessment
1989-03-01
defined locally (i.e., a point function of space and time), a weighted average of P(X3) over the earth's surface is taken to match the definition... operations which follow, product indicates set intersection and addition indicates set union. Bi = event that only station i is off-air, i = 1, 2... case, event X3 could not occur under any circumstances. By expressing the set universe as the union of all possible (mutually exclusive) B-events, it is
1978-03-31
detailed analysis of the data is made in an attempt to reach a more definitive conclusion on that matter. Analysis of Data: The largest foreshock (OT 11:22... represented with a trapezoid of unit area defined with three time segments (2.5, 1.0, 2.5 seconds). The same pattern is seen in the foreshock as shown in... parameters were taken to be the same as in the case of the aftershock. In the previous report it was pointed out that the foreshock shows a secondary arrival
Elevation-relief ratio, hypsometric integral, and geomorphic area-altitude analysis.
NASA Technical Reports Server (NTRS)
Pike, R. J.; Wilson, S. E.
1971-01-01
Mathematical proof establishes identity of hypsometric integral and elevation-relief ratio, two quantitative topographic descriptors developed independently of one another for entirely different purposes. Operationally, values of both measures are in excellent agreement for arbitrarily bounded topographic samples, as well as for low-order fluvial watersheds. By using a point-sampling technique rather than planimetry, elevation-relief ratio (defined as mean elevation minus minimum elevation divided by relief) is calculated manually in about a third of the time required for the hypsometric integral.
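Since the abstract gives the full definition, the manual procedure reduces to a few arithmetic operations on point-sampled elevations. A minimal sketch, with sample values invented for illustration:

```python
import numpy as np

def elevation_relief_ratio(elevations):
    """Elevation-relief ratio: (mean - min) / (max - min).
    By the identity established in the paper, this equals the
    hypsometric integral for the same topographic sample."""
    z = np.asarray(elevations, dtype=float)
    return (z.mean() - z.min()) / (z.max() - z.min())

# Point-sampled elevations (illustrative values, e.g., metres from a grid):
samples = np.array([312, 340, 295, 410, 388, 305, 360, 330, 299, 375])
print(f"E = {elevation_relief_ratio(samples):.3f}")
```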
Sandia Higher Order Elements (SHOE) v 0.5 alpha
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-09-24
SHOE is research code for characterizing and visualizing higher-order finite elements; it contains a framework for defining classes of interpolation techniques and element shapes; methods for interpolating triangular, quadrilateral, tetrahedral, and hexahedral cells using Lagrange and Legendre polynomial bases of arbitrary order; methods to decompose each element into domains of constant gradient flow (using a polynomial solver to identify critical points); and an isocontouring technique that uses this decomposition to guarantee topological correctness. Please note that this is an alpha release of research software and that some time has passed since it was actively developed; build- and run-time issues likely exist.
Mallik, Tanuja; Aneja, S; Tope, R; Muralidhar, V
2012-01-01
Background: In the administration of minimal flow anesthesia, traditionally a fixed time period of high flow has been used before changing over to minimal flow. However, newer studies have used "equilibration time" of a volatile anesthetic agent as the change-over point. Materials and Methods: A randomized prospective study was conducted on 60 patients, who were divided into two groups of 30 patients each. Two volatile inhalational anesthetic agents were compared. Group I received desflurane (n = 30) and group II isoflurane (n = 30). Both groups received an initial high flow until equilibration between inspired (Fi) and expired (Fe) agent concentrations was achieved, which was defined as Fe/Fi = 0.8. The mean (SD) equilibration time was obtained for both agents. Then, the drift in end-tidal agent concentration during minimal flow anesthesia and the recovery profile were noted. Results: The mean equilibration times obtained for desflurane and isoflurane were 4.96 ± 1.60 and 16.96 ± 9.64 min (P < 0.001). The drift in end-tidal agent concentration over time was minimal in the desflurane group (P = 0.065). Recovery time was 5.70 ± 2.78 min in the desflurane group and 8.06 ± 31 min in the isoflurane group (P = 0.004). Conclusion: Use of the equilibration time of the volatile anesthetic agent as the change-over point from high flow to minimal flow can help us use minimal flow anesthesia in a more efficient way. PMID:23225926
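Detecting the change-over point amounts to finding the first sample at which Fe/Fi reaches 0.8. A minimal sketch, with a simulated exponential wash-in curve standing in for monitor data (the 3-minute time constant is invented for illustration):

```python
import numpy as np

def equilibration_time(fi, fe, t, ratio=0.8):
    """Return the first time at which Fe/Fi reaches the given ratio
    (the study's change-over criterion, Fe/Fi = 0.8)."""
    r = np.asarray(fe, float) / np.asarray(fi, float)
    idx = np.argmax(r >= ratio)
    return t[idx] if r[idx] >= ratio else None  # None if never reached

t = np.arange(0, 20, 0.25)          # minutes
fi = np.full_like(t, 6.0)           # inspired agent concentration (%)
fe = 6.0 * (1 - np.exp(-t / 3.0))   # simulated wash-in of expired conc.
print(f"change-over point at {equilibration_time(fi, fe, t)} min")
```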
Self-Similar Spin Images for Point Cloud Matching
NASA Astrophysics Data System (ADS)
Pulido, Daniel
The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds has allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time-consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) has provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research will address the problem of fusing two point clouds from potentially different sources. Specifically, we will consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds. The specific focus of this research will be in developing a novel local descriptor based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two point clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and stereo-image derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, and thereby define the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case, being just the maximum order statistic. Therefore, by studying the entire histogram of these nearest neighbors it is expected to yield a more robust method to detect points that are present in one cloud but not the other. This approach is applied at multiple resolutions.
Therefore, changes detected at the coarsest level will yield large missing targets and at finer levels will yield smaller targets.
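The Nearest Neighbor Order Statistic idea sketches naturally with a k-d tree: compute nearest-neighbor distances from one cloud into the other, then inspect the whole sorted distance distribution rather than only its maximum (the Hausdorff distance). The clouds, noise levels, and the 99th-percentile decision rule below are illustrative assumptions, not the dissertation's parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)

# Two hypothetical point clouds of the same scene; cloud_b is cloud_a with
# noise plus a small cluster of "new" points (the change to detect).
cloud_a = rng.uniform(0, 10, size=(5000, 3))
cloud_b = np.vstack([cloud_a + rng.normal(0, 0.02, cloud_a.shape),
                     rng.normal([5, 5, 12], 0.2, size=(50, 3))])

# Nearest-neighbor distances from each point of B into A.
d, _ = cKDTree(cloud_a).query(cloud_b)
order = np.sort(d)  # the full set of order statistics

print("Hausdorff (max order statistic): %.2f" % order[-1])
print("95th / 99th percentiles: %.3f / %.3f" % tuple(np.percentile(d, [95, 99])))

# Flag as changed the points whose NN distance sits far in the upper tail.
changed = d > np.percentile(d, 99)
print("points flagged as change:", changed.sum())
```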
Modeling a Mathematical Function to Quantify the Degree of Emergency Department Crowding
NASA Astrophysics Data System (ADS)
Chang, Y.; Pan, C.; Wen, J.
2012-12-01
The purpose of this study is to deduce a function from the admissions/discharge rate of patient flow to estimate a "Critical Point" that provides a reference for warning systems in regard to crowding in the emergency department (ED) of a hospital or medical clinic. In this study, a model of "Input-Throughput-Output" was used in our established mathematical function to evaluate the critical point. The function was defined as ∂ρ/∂t = -K×∂ρ/∂x, where ρ = number of patients per unit distance (also called density), t = time, x = distance, and K = distance of patient movement per unit time. Using the average K of ED crowding, we could initiate the warning system at an appropriate time and plan the necessary emergency response to keep the patient process flowing smoothly. It was concluded that ED crowding can be quantified using the average value of K, and that this value can be used as a reference for medical staff to give optimal emergency medical treatment to patients. Therefore, additional practical work should be launched to collect more precise quantitative data.
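The governing relation is a one-dimensional advection equation, which discretizes with a simple upwind scheme. The sketch below propagates a patient density ρ with a constant K and flags a hypothetical crowding threshold; all parameter values are invented for illustration and are not calibrated to any ED.

```python
import numpy as np

# Upwind discretization of d(rho)/dt = -K * d(rho)/dx, with rho = patients
# per unit distance along the ED process and K the average movement speed.
K, dx, dt = 0.5, 1.0, 0.5        # K*dt/dx <= 1 for stability (CFL condition)
n_cells, n_steps = 20, 40

rho = np.zeros(n_cells)
inflow = 3.0                      # admissions per unit distance (illustrative)

for _ in range(n_steps):
    rho[1:] -= K * dt / dx * (rho[1:] - rho[:-1])
    rho[0] = inflow               # upstream boundary: arriving patients

critical_density = 4.0            # hypothetical crowding threshold
print("max density %.2f -> %s" % (rho.max(),
      "warning" if rho.max() >= critical_density else "normal"))
```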
Brain-heart linear and nonlinear dynamics during visual emotional elicitation in healthy subjects.
Valenza, G; Greco, A; Gentili, C; Lanata, A; Toschi, N; Barbieri, R; Sebastiani, L; Menicucci, D; Gemignani, A; Scilingo, E P
2016-08-01
This study investigates brain-heart dynamics during visual emotional elicitation in healthy subjects through linear and nonlinear coupling measures of EEG spectrogram and instantaneous heart rate estimates. To this end, affective pictures including different combinations of arousal and valence levels, gathered from the International Affective Picture System, were administered to twenty-two healthy subjects. Time-varying maps of cortical activation were obtained through EEG spectral analysis, whereas the associated instantaneous heartbeat dynamics was estimated using inhomogeneous point-process linear models. Brain-heart linear and nonlinear coupling was estimated through the Maximal Information Coefficient (MIC), considering EEG time-varying spectra and point-process estimates defined in the time and frequency domains. As a proof of concept, we here show preliminary results considering EEG oscillations in the θ band (4-8 Hz). This band, indeed, is known in the literature to be involved in emotional processes. MIC highlighted significant arousal-dependent changes, mediated by the prefrontal cortex interplay, especially occurring at intermediate arousal levels. Furthermore, lower and higher arousing elicitations were not associated with significant brain-heart coupling changes in response to pleasant/unpleasant elicitations.
Szarpak, Łukasz; Czyżewski, Łukasz; Kurowski, Andrzej
2015-03-01
The study was designed to compare the effectiveness of 3 video laryngoscopes with the Miller laryngoscope during pediatric resuscitation. This was a randomized crossover study involving 87 paramedics and 54 nurses. The primary end point of the study was the success rate of blind tracheal intubation, whereas the secondary end point was defined as the time from insertion of a device to the first manual ventilation of the manikin's lungs. The median times to intubation using the Pentax, Truview, GlideScope, and Miller were 20.6 (interquartile range [IQR], 18-27), 20.1 (IQR, 18-23.3), 30.2 (IQR, 29.6-35), and 41.3 (IQR, 33-45.2) seconds, respectively. The overall success rates of intubation for the devices were 100%, 100%, 100%, and 79.4%, respectively. We concluded that, in a pediatric manikin scenario, the video laryngoscopes are safe devices and can be used for pediatric intubation during uninterrupted chest compressions. Further clinical studies are necessary to confirm these initial positive findings. Copyright © 2015 Elsevier Inc. All rights reserved.
Non-reciprocity in nonlinear elastodynamics
NASA Astrophysics Data System (ADS)
Blanchard, Antoine; Sapsis, Themistoklis P.; Vakakis, Alexander F.
2018-01-01
Reciprocity is a fundamental property of linear time-invariant (LTI) acoustic waveguides governed by self-adjoint operators with symmetric Green's functions. The break of reciprocity in LTI elastodynamics is only possible through the break of time reversal symmetry on the micro-level, and this can be achieved by imposing external biases, adding nonlinearities or allowing for time-varying system properties. We present a Volterra-series based asymptotic analysis for studying spatial non-reciprocity in a class of one-dimensional (1D), time-invariant elastic systems with weak stiffness nonlinearities. We show that nonlinearity is neither necessary nor sufficient for breaking reciprocity in this class of systems; rather, it depends on the boundary conditions, the symmetries of the governing linear and nonlinear operators, and the choice of the spatial points where the non-reciprocity criterion is tested. Extension of the analysis to higher dimensions and time-varying systems is straightforward from a mathematical point of view (but not in terms of new non-reciprocal physical phenomena), whereas the connection of non-reciprocity and time irreversibility can be studied as well. Finally, we show that suitably defined non-reciprocity measures enable optimization, and can provide physical understanding of the nonlinear effects in the dynamics, enabling one to establish regimes of "maximum nonlinearity." We highlight the theoretical developments by means of a numerical example.
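Reciprocity in the LTI case can be checked numerically: in a symmetric spring-mass chain, the response at mass j to a pulse applied at mass i equals the response at i to the same pulse at j. The sketch below performs that check; setting eps > 0 adds a cubic inter-mass spring so the reader can probe the nonlinear case, where (consistent with the paper) asymmetry may or may not appear depending on the configuration. All parameters are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 10                  # masses in a 1-D chain, fixed at both ends
k_lin, eps = 1.0, 0.0   # set eps > 0 to add a cubic (nonlinear) spring

def rhs(t, y, forced):
    x, v = y[:n], y[n:]
    xp = np.concatenate([[0.0], x, [0.0]])  # fixed boundary displacements
    f = (k_lin * (xp[2:] - 2 * x + xp[:-2])
         + eps * (xp[2:] - x) ** 3 - eps * (x - xp[:-2]) ** 3)
    f[forced] += np.exp(-50 * (t - 0.5) ** 2)  # smooth pulse as the "impulse"
    return np.concatenate([v, f])

def response(src, rec):
    sol = solve_ivp(rhs, (0, 40), np.zeros(2 * n), args=(src,),
                    dense_output=True, max_step=0.05)
    return sol.sol(np.linspace(0, 40, 800))[rec]  # displacement of mass rec

# Reciprocity test: output at j from a source at i vs. the swapped pair.
r_ij, r_ji = response(2, 7), response(7, 2)
print("max |difference|:", np.max(np.abs(r_ij - r_ji)))  # ~0 in the LTI case
```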
Mestdagh, Inge; Bonicelli, Bernard; Laplana, Ramon; Roettele, Manfred
2009-01-01
Based on the results and lessons learned from the TOPPS project (Training the Operators to prevent Pollution from Point Sources), a proposal on a sustainable strategy to avoid point source pollution from Plant Protection Products (PPPs) was made. Within this TOPPS project (2005-2008), stakeholders were interviewed and research and analysis were done in 6 pilot catchment areas (BE, FR, DE, DK, IT, PL). Next, there was a repeated survey on operators' perception and opinion to measure changes resulting from TOPPS activities, and good and bad practices were defined based on the Best Management Practices (risk analysis). The aim of the proposal is to suggest a strategy, considering the differences between countries, which can be implemented at Member State level in order to avoid PPP pollution of water through point sources. The methodology used for the up-scaling proposal consists of an analysis of the current situation, a gap analysis, a consistency analysis and organisational structures for implementation. The up-scaling proposal focuses on the behaviour of the operators and on the equipment and infrastructure available to the operators. The proposal defines implementation structures to support correct behaviour through the development and updating of Best Management Practices (BMPs) and through the transfer and implementation of these BMPs. Next, the proposal also defines requirements for the improvement of equipment and infrastructure based on the defined key factors related to point source pollution. It also contains cost estimates for technical and infrastructure upgrades to comply with BMPs.
Intraoperative measurements on the mitral apparatus using optical tracking: a feasibility study
NASA Astrophysics Data System (ADS)
Engelhardt, Sandy; De Simone, Raffaele; Wald, Diana; Zimmermann, Norbert; Al Maisary, Sameer; Beller, Carsten J.; Karck, Matthias; Meinzer, Hans-Peter; Wolf, Ivo
2014-03-01
Mitral valve reconstruction is a widespread surgical method to repair incompetent mitral valves. During reconstructive surgery the judgement of mitral valve geometry and the subvalvular apparatus is mandatory in order to choose the appropriate repair strategy. To date, intraoperative analysis of the mitral valve is merely based on visual assessment and inaccurate sizer devices, which do not allow for any accurate and standardized measurement of the complex three-dimensional anatomy. We propose a new intraoperative computer-assisted method for mitral valve measurements using a pointing instrument together with an optical tracking system. Sixteen anatomical points were defined on the mitral apparatus. The feasibility and the reproducibility of the measurements have been tested on a rapid prototyping (RP) heart model and a freshly excised porcine heart. Four heart surgeons repeated the measurements three times on each heart. Morphologically important distances between the measured points are calculated. We achieved a mean interexpert variability of 2.28 +/- 1.13 mm for the 3D-printed heart and 2.45 +/- 0.75 mm for the porcine heart. The overall time to perform a complete measurement is 1-2 minutes, which makes the method viable for virtual annuloplasty during an intervention.
Revealing plant cryptotypes: defining meaningful phenotypes among infinite traits.
Chitwood, Daniel H; Topp, Christopher N
2015-04-01
The plant phenotype is infinite. Plants vary morphologically and molecularly over developmental time, in response to the environment, and genetically. Exhaustive phenotyping remains not only out of reach, but is also the limiting factor to interpreting the wealth of genetic information currently available. Although phenotyping methods are always improving, an impasse remains: even if we could measure the entirety of phenotype, how would we interpret it? We propose the concept of cryptotype to describe latent, multivariate phenotypes that maximize the separation of a priori classes. Whether the infinite points comprising a leaf outline or shape descriptors defining root architecture, statistical methods to discern the quantitative essence of an organism will be required as we approach measuring the totality of phenotype. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hamiltonian indices and rational spectral densities
NASA Technical Reports Server (NTRS)
Byrnes, C. I.; Duncan, T. E.
1980-01-01
Several (global) topological properties of various spaces of linear systems, particularly symmetric, lossless, and Hamiltonian systems, and multivariable spectral densities of fixed McMillan degree are announced. The study is motivated by a result asserting that on a connected but not simply connected manifold, it is not possible to find a vector field having a sink as its only critical point. In the scalar case, this is illustrated by showing that only on the space of McMillan degree = |Cauchy index| = n scalar transfer functions can one define a globally convergent vector field. This result holds both in discrete time and in the nonautonomous case. With these motivations in mind, theorems of Bochner and Fogarty are used in showing that spaces of transfer functions defined by symmetry conditions are, in fact, smooth algebraic manifolds.
Inhomogeneous point-process entropy: An instantaneous measure of complexity in discrete systems
NASA Astrophysics Data System (ADS)
Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2014-05-01
Measures of entropy have been widely used to characterize complexity, particularly in physiological dynamical systems modeled in discrete time. Current approaches associate these measures to finite single values within an observation window, thus not being able to characterize the system evolution at each moment in time. Here, we propose a new definition of approximate and sample entropy based on the inhomogeneous point-process theory. The discrete time series is modeled through probability density functions, which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through probability functions, the novel indices are able to provide instantaneous tracking of the system complexity. The new measures are tested on synthetic data, as well as on real data gathered from heartbeat dynamics of healthy subjects and patients with cardiac heart failure and gait recordings from short walks of young and elderly subjects. Results show that instantaneous complexity is able to effectively track the system dynamics and is not affected by statistical noise properties.
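For orientation, the conventional window-based sample entropy that the instantaneous formulation generalizes can be written in a few lines. The sketch below is the standard algorithm (Chebyshev distance, self-matches excluded), not the paper's point-process estimator; the tolerance and embedding settings are the usual defaults.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Conventional (single-value, window-based) sample entropy, i.e., the
    quantity the paper generalizes to an instantaneous point-process form."""
    x = np.asarray(x, float)
    r = r_frac * x.std()

    def match_pairs(mm):
        # All templates of length mm and their pairwise Chebyshev distances.
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return ((d <= r).sum() - len(templ)) / 2  # exclude self-matches

    B, A = match_pairs(m), match_pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(6)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))  # low complexity
noisy = rng.standard_normal(400)                   # high complexity
print("SampEn regular: %.2f  noisy: %.2f"
      % (sample_entropy(regular), sample_entropy(noisy)))
```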
Position and volume estimation of atmospheric nuclear detonations from video reconstruction
NASA Astrophysics Data System (ADS)
Schmitt, Daniel T.
Recent work in digitizing films of foundational atmospheric nuclear detonations from the 1950s provides an opportunity to perform deeper analysis on these historical tests. This work leverages multi-view geometry and computer vision techniques to provide an automated means to perform three-dimensional analysis of the blasts for several points in time. The accomplishment of this requires careful alignment of the films in time, detection of features in the images, matching of features, and multi-view reconstruction. Sub-explosion features can be detected with a 67% hit rate and 22% false alarm rate. Hotspot features can be detected with a 71.95% hit rate, 86.03% precision and a 0.015% false positive rate. Detected hotspots are matched across 57-109 degree viewpoints with 76.63% average correct matching by defining their location relative to the center of the explosion, rotating them to the alternative viewpoint, and matching them collectively. When 3D reconstruction is applied to the hotspot matching it completes an automated process that has been used to create 168 3D point clouds with 31.6 points per reconstruction with each point having an accuracy of 0.62 meters with 0.35, 0.24, and 0.34 meters of accuracy in the x-, y- and z-direction respectively. As a demonstration of using the point clouds for analysis, volumes are estimated and shown to be consistent with radius-based models and in some cases improve on the level of uncertainty in the yield calculation.
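Given one of the reconstructed hotspot point clouds, a volume estimate can be obtained from its convex hull. The sketch below fabricates a cloud with roughly the reported point count and accuracy purely for illustration and compares the hull volume with a simple radius-based sphere model; the 60 m radius is invented, not a measured fireball size.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(7)

# Stand-in for a reconstructed fireball: ~32 hotspot points (the reported
# average is 31.6 per reconstruction) scattered about a spherical shell.
pts = rng.normal(size=(32, 3))
pts = 60.0 * pts / np.linalg.norm(pts, axis=1, keepdims=True)  # ~60 m radius
pts += rng.normal(0, 0.6, pts.shape)   # ~0.6 m point accuracy, as reported

hull = ConvexHull(pts)
print("hull volume: %.0f m^3" % hull.volume)

# Sphere of the same nominal radius, for comparison with a radius model
# (the hull of a sparse sample underestimates the enclosing sphere).
print("sphere volume: %.0f m^3" % (4 / 3 * np.pi * 60.0 ** 3))
```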
Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data
NASA Astrophysics Data System (ADS)
Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.
2017-12-01
The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance, and achieves significant reductions in digitisation time (35-65 %) compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
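A minimal sketch of the underlying idea, assuming a Dijkstra least-cost-path solver over a 2-D cost raster in which cells lying on candidate structures are cheap (e.g., cost inversely related to local image gradient). The 8-connectivity and cost convention are assumptions, not the paper's specially tailored cost functions.

```python
import heapq
import numpy as np

def least_cost_path(cost, start, end):
    """Dijkstra shortest path between two pixels of a 2-D cost raster,
    8-connected; edge weight is the mean cost of the two cells."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + 0.5 * (cost[r, c] + cost[nr, nc])
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
    path = [end]                       # walk back from end to start
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```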
Covassin, Tracey; Petit, Kyle M; Savage, Jennifer L; Bretzin, Abigail C; Fox, Meghan E; Walker, Lauren F; Gould, Daniel
2018-06-01
Sports-related concussion (SRC) injury rates, and identifying the athletes at highest risk, have been a primary research focus. However, no studies have evaluated at which time point during an athletic event athletes are most susceptible to SRCs. The objective was to determine the clinical incidence of SRCs during the start, middle, and end of practice and competition among high school male and female athletes in the state of Michigan. Descriptive epidemiological study. There were 110,774 male and 71,945 female student-athletes in grades 9 through 12 (mean time in high school, 2.32 ± 1.1 years) who participated in sponsored athletic activities (13 sports) during the 2015-2016 academic year. An SRC was diagnosed and managed by a medical professional (ie, MD, DO, PA, NP). SRC injuries were reported by certified athletic trainers, athletic administrators, and coaches using the Michigan High School Athletic Association Head Injury Reporting System. Time of SRC was defined as the beginning, middle, or end of practice/competition. Clinical incidence was calculated by dividing the number of SRCs at a time point (eg, beginning) by the total number of participants in a sport, per 100 student-athletes (95% CI). Risk ratios were calculated by dividing one time point by another. There were 4314 SRCs reported, with the highest counts in football, women's basketball, and women's soccer. The total clinical incidence for all sports was 2.36 (95% CI, 2.29-2.43) per 100 student-athletes. The most common time for SRCs was the middle, followed by the end, of all events. Athletes had a 4.90 (95% CI, 4.44-5.41) and 1.50 (95% CI, 1.40-1.60) times greater risk during the middle of all events when compared with the beginning and end, respectively. There was a 3.28 (95% CI, 2.96-3.63) times greater risk at the end of all events when compared with the beginning. Athletes were at the greatest risk for SRCs in the middle of practice and competition compared with the beginning and end. The current study suggests that medical attention is particularly important during the middle of all athletic events, and that intervention measures to limit SRCs may be most beneficial then.
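The incidence and risk-ratio arithmetic follows directly from the definitions in the abstract; the per-time-point counts below are hypothetical, chosen only to be consistent with the reported totals and ratios.

```python
def clinical_incidence(n_src, n_participants):
    """SRCs per 100 student-athletes."""
    return 100.0 * n_src / n_participants

participants = 110_774 + 71_945                  # 182,719 student-athletes
print(clinical_incidence(4314, participants))    # ~2.36, as reported

# Hypothetical beginning/middle/end split consistent with the ratios:
beg, mid, end = 470, 2303, 1541                  # sums to 4314
print(mid / beg, mid / end, end / beg)           # ~4.90, ~1.50, ~3.28
```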
Robinson, Lucy F; Atlas, Lauren Y; Wager, Tor D
2015-03-01
We present a new method, State-based Dynamic Community Structure, that detects time-dependent community structure in networks of brain regions. Most analyses of functional connectivity assume that network behavior is static in time, or differs between task conditions with known timing. Our goal is to determine whether brain network topology remains stationary over time, or if changes in network organization occur at unknown time points. Changes in network organization may be related to shifts in neurological state, such as those associated with learning, drug uptake or experimental conditions. Using a hidden Markov stochastic blockmodel, we define a time-dependent community structure. We apply this approach to data from a functional magnetic resonance imaging experiment examining how contextual factors influence drug-induced analgesia. Results reveal that networks involved in pain, working memory, and emotion show distinct profiles of time-varying connectivity. Copyright © 2014 Elsevier Inc. All rights reserved.
Ott, T; Schmidtmann, I; Limbach, T; Gottschling, P F; Buggenhagen, H; Kurz, S; Pestel, G
2016-11-01
Simulation-based training (SBT) has developed into an established method of medical training. Studies focusing on the education of medical students have used simulation as an evaluation tool for defined skills. A small number of studies provide evidence that SBT improves medical students' skills in the clinical setting, but they were limited to a few areas, such as the diagnosis of heart murmurs or the correct application of cricoid pressure. Other studies could not demonstrate adequate transfer of the skills gained in SBT to the patient. Whether SBT improves medical students' anesthesiology skills in the clinical setting therefore remains controversial. To explore this issue, we designed a prospective, randomized, single-blind trial, integrated into the mandatory undergraduate anesthesiology curriculum of our department during the second year of the clinical phase of medical school, to assess the effect of SBT on basic anesthesiology skills in the operating room. After ethical approval was obtained, the participating students of the third clinical semester were randomized into two groups: the SIM-OR group received 225 min of SBT in basic anesthesiology skills before attending the operating room (OR) apprenticeship, whereas the OR-SIM group received the same SBT after the OR apprenticeship. During SBT the students were trained in the five clinical skills detailed below, and two clinical scenarios were simulated on a full-scale simulator: the students had to prepare the patient and perform induction of anesthesia, including bag-mask ventilation after induction in scenario 1 and rapid sequence induction in scenario 2. The five defined skills were each rated on a five-point Likert scale at defined time points during the study period: 1) application of the safety checklist, 2) application of basic patient monitoring, 3) establishment of intravenous access, 4) bag-and-mask ventilation, and 5) adjustment of ventilatory parameters after the patient's airway was secured. A cumulative score of 5 points was defined as the best and 25 points as the worst rating for a given time point. The primary endpoint was the cumulative score after day 1 of the OR apprenticeship and the change in cumulative score from day 1 to day 4. Our hypothesis was that the SIM-OR group would achieve a better score after day 1 and a larger improvement from day 1 to day 4 than the OR-SIM group. In total, 73 students were allocated to the OR-SIM group and 70 students to the SIM-OR group. There was no significant difference between the two groups after day 1 of the OR apprenticeship and no difference in the change of the cumulative score from day 1 to day 4 (median cumulative score on day 1: SIM-OR 11.2 points vs. OR-SIM 14.6 points, p = 0.067; median change from day 1 to day 4: SIM-OR -3.7 vs. OR-SIM -6.4, p = 0.110). With the methods applied, this study could not demonstrate that 225 min of SBT before the OR apprenticeship improved the medical students' clinical skills as evaluated in the operating room. Secondary endpoints indicate that medical students have better clinical skills at the end of the entire curriculum when they have been trained through SBT before the operating room apprenticeship.
However, the authors believe that simulator training has a positive impact on students' acquisition of procedural and patient safety skills, even if the methods applied in this study may not mirror this aspect sufficiently.
Aging and the discrimination of 3-D shape from motion and binocular disparity.
Norman, J Farley; Holmin, Jessica S; Beers, Amanda M; Cheeseman, Jacob R; Ronning, Cecilia; Stethen, Angela G; Frost, Adam L
2012-10-01
Two experiments evaluated the ability of younger and older adults to visually discriminate 3-D shape as a function of surface coherence. The coherence was manipulated by embedding the 3-D surfaces in volumetric noise (e.g., for a 55 % coherent surface, 55 % of the stimulus points fell on a 3-D surface, while 45 % of the points occupied random locations within the same volume of space). The 3-D surfaces were defined by static binocular disparity, dynamic binocular disparity, and motion. The results of both experiments demonstrated significant effects of age: Older adults required more coherence (tolerated volumetric noise less) for reliable shape discrimination than did younger adults. Motion-defined and static-binocular-disparity-defined surfaces resulted in similar coherence thresholds. However, performance for dynamic-binocular-disparity-defined surfaces was superior (i.e., the observers' surface coherence thresholds were lowest for these stimuli). The results of both experiments showed that younger and older adults possess considerable tolerance to the disrupting effects of volumetric noise; the observers could reliably discriminate 3-D surface shape even when 45 % of the stimulus points (or more) constituted noise.
Defining the end-point of mastication: A conceptual model.
Gray-Stuart, Eli M; Jones, Jim R; Bronlund, John E
2017-10-01
The great risks of swallowing are choking and aspiration of food into the lungs. Both are rare in normal functioning humans, which is remarkable given the diversity of foods and the estimated 10 million swallows performed in a lifetime. Nevertheless, it remains a major challenge to define the food properties that are necessary to ensure a safe swallow. Here, the mouth is viewed as a well-controlled processor where mechanical sensory assessment occurs throughout the occlusion-circulation cycle of mastication. Swallowing is a subsequent action. It is proposed here that, during mastication, temporal maps of interfacial property data are generated, which the central nervous system compares against a series of criteria in order to be sure that the bolus is safe to swallow. To determine these criteria, an engineering hazard analysis tool, alongside an understanding of fluid and particle mechanics, is used to deduce the mechanisms by which food may deposit or become stranded during swallowing. These mechanisms define the food properties that must be avoided. By inverting the thinking, from hazards to ensuring safety, six criteria arise which are necessary for a safe-to-swallow bolus. A new conceptual model is proposed to define when food is safe to swallow during mastication. This significantly advances earlier mouth models. The conceptual model proposed in this work provides a framework of decision-making to define when food is safe to swallow. This will be of interest to designers of dietary foods, foods for dysphagia sufferers and will aid the further development of mastication robots for preparation of artificial boluses for digestion research. It enables food designers to influence the swallow-point properties of their products. For example, a product may be designed to satisfy five of the criteria for a safe-to-swallow bolus, which means the sixth criterion and its attendant food properties define the swallow-point. Alongside other organoleptic factors, these properties define the end-point texture and enduring sensory perception of the food. © 2017 Wiley Periodicals, Inc.
Parametric motion control of robotic arms: A biologically based approach using neural networks
NASA Technical Reports Server (NTRS)
Bock, O.; D'Eleuterio, G. M. T.; Lipitkas, J.; Grodski, J. J.
1993-01-01
A neural network based system is presented which is able to generate point-to-point movements of robotic manipulators. The foundation of this approach is the use of prototypical control torque signals which are defined by a set of parameters. The parameter set is used for scaling and shaping of these prototypical torque signals to effect a desired outcome of the system. This approach is based on neurophysiological findings that the central nervous system stores generalized cognitive representations of movements called synergies, schemas, or motor programs. It has been proposed that these motor programs may be stored as torque-time functions in central pattern generators which can be scaled with appropriate time and magnitude parameters. The central pattern generators use these parameters to generate stereotypical torque-time profiles, which are then sent to the joint actuators. Hence, only a small number of parameters need to be determined for each point-to-point movement instead of the entire torque-time trajectory. This same principle is implemented for controlling the joint torques of robotic manipulators where a neural network is used to identify the relationship between the task requirements and the torque parameters. Movements are specified by the initial robot position in joint coordinates and the desired final end-effector position in Cartesian coordinates. This information is provided to the neural network which calculates six torque parameters for a two-link system. The prototypical torque profiles (one per joint) are then scaled by those parameters. After appropriate training of the network, our parametric control design allowed the reproduction of a trained set of movements with relatively high accuracy, and the production of previously untrained movements with comparable accuracy. We conclude that our approach was successful in discriminating between trained movements and in generalizing to untrained movements.
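A minimal sketch of the parametric principle: a stored prototypical torque-time profile is scaled in amplitude and duration by a small set of parameters, which in the paper a trained network maps from the movement specification. The bell-shaped profile and parameter names here are illustrative assumptions.

```python
import numpy as np

def scaled_torque(prototype, amp, t_scale, t):
    """Evaluate a prototypical torque profile (sampled on [0, 1]) at
    times t, stretched to duration t_scale and scaled by amp."""
    phase = np.clip(t / t_scale, 0.0, 1.0)
    idx = np.round(phase * (len(prototype) - 1)).astype(int)
    return amp * prototype[idx]

prototype = np.sin(np.linspace(0.0, np.pi, 100))  # bell-shaped torque pulse
t = np.linspace(0.0, 1.0, 50)
# hypothetical per-joint parameters, as a network might output them
tau_joint1 = scaled_torque(prototype, amp=2.5, t_scale=0.8, t=t)
```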
Critical fluid thermal equilibration experiment (19-IML-1)
NASA Technical Reports Server (NTRS)
Wilkinson, R. Allen
1992-01-01
Gravity sometimes blocks all experimental techniques for making a desired measurement. Any pure fluid possesses a liquid-vapor critical point, defined by a temperature, pressure, and density state in thermodynamics. The critical issue that this experiment attempts to understand is the time it takes for a sample to reach temperature and density equilibrium as the critical point is approached: does it diverge because of slow mass and thermal diffusion, or do pressure waves speed up energy transport while mass remains under diffusion control? The objectives are to observe: (1) large phase-domain homogenization without and with stirring; (2) the time evolution of heat and mass after a temperature step is applied to a one-phase equilibrium sample; (3) phase evolution and configuration upon going two-phase from a one-phase equilibrium state; (4) effects of stirring on a low-g two-phase configuration; (5) two-phase to one-phase healing dynamics starting from a two-phase low-g configuration; and (6) effects of shuttle acceleration events on spatially and temporally varying compressible critical fluid dynamics.
Bias correction factors for near-Earth asteroids
NASA Technical Reports Server (NTRS)
Benedix, Gretchen K.; Mcfadden, Lucy Ann; Morrow, Esther M.; Fomenkova, Marina N.
1992-01-01
Knowledge of the population size and physical characteristics (albedo, size, and rotation rate) of near-Earth asteroids (NEAs) is biased by observational selection effects, which are functions of the population's intrinsic properties, the size of the telescope, detector sensitivity, and the search strategy used. The NEA population is modeled in terms of orbital and physical elements: a, e, i, omega, Omega, M, albedo, and diameter, and an asteroid search program is simulated using actual telescope pointings of right ascension, declination, date, and time. The position of each object in the model population is calculated at the date and time of each telescope pointing. The program tests whether that object is within the field of view (FOV = 8.75 degrees) of the telescope and above the limiting magnitude (V = +1.65) of the film. The simulated discoveries from a given starting population are compared with the actual discoveries in order to define a most probable starting population.
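The per-pointing test can be sketched as below, using the standard spherical law of cosines for angular separation. Whether the 8.75-degree FOV is a full width (half-width used here) is an assumption, and the limiting magnitude is left to the caller because the value printed in the abstract (V = +1.65) appears garbled.

```python
import numpy as np

def detectable(ra, dec, vmag, ra0, dec0, v_limit, fov_deg=8.75):
    """True if an object at (ra, dec) with magnitude vmag falls inside the
    field of view centered on (ra0, dec0) and is brighter than v_limit.
    All angles in degrees."""
    ra, dec, ra0, dec0 = np.radians([ra, dec, ra0, dec0])
    cos_sep = (np.sin(dec) * np.sin(dec0)
               + np.cos(dec) * np.cos(dec0) * np.cos(ra - ra0))
    sep = np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))
    return sep <= fov_deg / 2.0 and vmag <= v_limit
```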
Digital computer program for generating dynamic turbofan engine models (DIGTEM)
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Krosel, S. M.; Szuch, J. R.; Westerkamp, E. J.
1983-01-01
This report describes DIGTEM, a digital computer program that simulates two-spool, two-stream turbofan engines. The turbofan engine model in DIGTEM contains steady-state performance maps for all of the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. Altogether there are 16 state variables and state equations. DIGTEM features a backward-difference integration scheme for integrating stiff systems. It trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off-design points and iterates to a balanced engine condition. Transients can also be run; they are generated by defining controls as a function of time (open-loop control) in a user-written subroutine (TMRSP). DIGTEM has run on the IBM 370/3033 computer using implicit integration with time steps ranging from 1.0 msec to 1.0 sec. DIGTEM is generalized in the aerothermodynamic treatment of components.
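The abstract names only a backward-difference scheme for stiff systems; one generic realization is implicit (backward) Euler with a Newton solve per step, sketched below as an assumption rather than DIGTEM's actual update.

```python
import numpy as np

def backward_euler_step(f, jac, x, u, dt, tol=1e-9, max_iter=20):
    """One backward-difference step for x' = f(x, u): solve
    g(x_new) = x_new - x - dt*f(x_new, u) = 0 by Newton iteration."""
    x_new = x.copy()
    for _ in range(max_iter):
        g = x_new - x - dt * f(x_new, u)
        if np.linalg.norm(g) < tol:
            break
        jg = np.eye(len(x)) - dt * jac(x_new, u)   # Jacobian of g
        x_new = x_new - np.linalg.solve(jg, g)
    return x_new
```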
Numerical modeling of thermal conductive heating in fractured bedrock.
Baston, Daniel P; Falta, Ronald W; Kueper, Bernard H
2010-01-01
Numerical modeling was employed to study the performance of thermal conductive heating (TCH) in fractured shale under a variety of hydrogeological conditions. Model results show that groundwater flow in fractures does not significantly affect the minimum treatment zone temperature, except near the beginning of heating or when groundwater influx is high. However, fracture and rock matrix properties can significantly influence the time necessary to remove all liquid water (i.e., reach superheated steam conditions) in the treatment area. Low matrix permeability, high matrix porosity, and wide fracture spacing can contribute to boiling point elevation in the rock matrix. Consequently, knowledge of these properties is important for the estimation of treatment times. Because of the variability in boiling point throughout a fractured rock treatment zone and the absence of a well-defined constant temperature boiling plateau in the rock matrix, it may be difficult to monitor the progress of thermal treatment using temperature measurements alone. Copyright © 2010 The Author(s). Journal compilation © 2010 National Ground Water Association.
Outcry Consistency and Prosecutorial Decisions in Child Sexual Abuse Cases.
Bracewell, Tammy E
2018-05-18
This study examines the correlation between the consistency of a child's sexual abuse outcry and the prosecutorial decision to accept or reject cases of child sexual abuse. Case-specific information was obtained from one Texas Children's Advocacy Center on all cases from 2010 to 2013. After the necessary deletions, 309 cases remained in the analysis. An outcry was defined as a sexual abuse disclosure. Consistency was measured at both the forensic interview and the sexual assault exam. Logistic regression was used to evaluate whether a correlation existed between disclosure and prosecutorial decisions. Disclosure consistency was statistically significant: partial disclosure (disclosure at one point in time and denial at another) versus full disclosure (disclosure at both points in time) had a statistically significant odds ratio of 4.801. Implications are discussed; specifically, how the different disciplines involved in child protection should take advantage of the expertise of both forensic interviewers and forensic nurses to inform their decisions.
Comparison of tablet-based strategies for incision planning in laser microsurgery
NASA Astrophysics Data System (ADS)
Schoob, Andreas; Lekon, Stefan; Kundrat, Dennis; Kahrs, Lüder A.; Mattos, Leonardo S.; Ortmaier, Tobias
2015-03-01
Recent research has revealed that incision planning in laser surgery deploying stylus and tablet outperforms state-of-the-art micro-manipulator-based laser control. To provide more detailed quantitation of that approach, a comparative study of six tablet-based strategies for laser path planning is presented. The reference strategy is defined by monoscopic visualization and continuous path drawing on a graphics tablet. Further concepts deploying stereoscopic or a synthesized laser view, point-based path definition, real-time teleoperation, or a pen display are compared with the reference scenario. Volunteers were asked to redraw and ablate stamped lines on a sample. Performance is assessed by measuring planning accuracy, completion time, and ease of use. Results demonstrate that significant differences exist between the proposed concepts. The reference strategy provides more accurate incision planning than the stereo or laser view scenarios. Real-time teleoperation performs best with respect to completion time, without any significant deviation in accuracy or usability. Point-based planning as well as the pen display provide the most accurate planning and increased ease of use compared to the reference strategy. As a result, combining the pen display approach with point-based planning has the potential to become a powerful strategy, because it benefits from improved hand-eye coordination as well as from a simple but accurate technique for path definition. These findings, together with the overall usability scale indicating high acceptance and consistency of the proposed strategies, motivate further advanced tablet-based planning in laser microsurgery.
Duc, Myriam; Gaboriaud, Fabien; Thomas, Fabien
2005-09-01
The effects of experimental procedures on the acid-base consumption titration curves of montmorillonite suspensions were studied using continuous potentiometric titration. The hysteresis amplitude between the acid and base branches proved useful for systematically evaluating the effects of storage conditions (wet or dried), the atmosphere in the titration reactor, the solid-liquid ratio, the time interval between successive increments, and the ionic strength. Regarding storage conditions, the hysteresis increase was significantly higher after longer storage of clay in suspension and after drying procedures than for a "fresh" clay suspension. Titration carried out under air demonstrated carbonate contamination that could be eliminated only by performing experiments under inert gas. Interestingly, increasing the time interval between successive increments of titrant markedly increased the amplitude of the hysteresis, which could be correlated with the slow kinetic process specifically observed for acid addition in acid media; such kinetic behavior is probably associated with dissolution of clay particles. However, the curves recorded at different ionic strengths under optimized conditions did not show the common intersection point required to define a point of zero charge. Nevertheless, the ionic strength dependence of the point of zero net proton charge suggested that the point of zero charge of sodic montmorillonite is lower than 5.
NASA Astrophysics Data System (ADS)
Inaba, Hideo; Morita, Shin-Ichi
This paper deals with the flow and cold heat storage characteristics of an oil (tetradecane, C14H30, freezing point 278.9 K, latent heat 229 kJ/kg)/water emulsion as a latent heat storage material having a low melting point. The test emulsion includes a water-urea solution as the continuous phase. The freezing-point depression of the continuous phase enhances the heat transfer rate of the emulsion, owing to the large temperature difference between the latent heat storage material and the water-urea solution. The emulsion flow velocity and the coolant inlet temperature in a coiled double-tube heat exchanger are chosen as the experimental parameters. The pressure drop and the heat transfer coefficient of the emulsion in the coiled tube are measured in the temperature region spanning the solid and liquid phases of the latent heat storage material. The finishing time of the cold heat storage is defined experimentally in the range of sensible and latent heat storage. It is clarified that the flow behavior of the emulsion as a non-Newtonian fluid plays an important role in cold heat storage. Useful nondimensional correlation equations for the additional pressure loss coefficient, the heat transfer coefficient, and the finishing time of the cold heat storage are derived in terms of the Dean number and the heat capacity ratio.
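The correlations are cast in terms of the Dean number, which characterizes secondary flow in a coiled tube. A minimal sketch using one common definition follows; conventions vary across the literature, so this form is an assumption rather than the authors' exact one.

```python
import numpy as np

def dean_number(rho, velocity, d_tube, mu, d_coil):
    """De = Re * sqrt(d_tube / d_coil), with Re = rho*v*d_tube/mu
    (one common convention; d_tube = tube diameter, d_coil = coil diameter)."""
    reynolds = rho * velocity * d_tube / mu
    return reynolds * np.sqrt(d_tube / d_coil)

# e.g., a water-like emulsion in a 10 mm tube coiled at 200 mm diameter
print(dean_number(rho=1000.0, velocity=0.5, d_tube=0.01, mu=1e-3, d_coil=0.2))
```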
Automatic Registration of Terrestrial Laser Scanner Point Clouds Using Natural Planar Surfaces
NASA Astrophysics Data System (ADS)
Theiler, P. W.; Schindler, K.
2012-07-01
Terrestrial laser scanners have become a standard piece of surveying equipment, used in diverse fields like geomatics, manufacturing and medicine. However, the processing of today's large point clouds is time-consuming, cumbersome and not automated enough. A basic step of post-processing is the registration of scans from different viewpoints. At present this is still done using artificial targets or tie points, mostly by manual clicking. The aim of this registration step is a coarse alignment, which can then be improved with existing algorithms for fine registration. The focus of this paper is to provide such a coarse registration in a fully automatic fashion, and without placing any target objects in the scene. The basic idea is to use virtual tie points generated by intersecting planar surfaces in the scene. Such planes are detected in the data with RANSAC and optimally fitted using least squares estimation. Owing to the large number of recorded points, planes can be determined very accurately, resulting in well-defined tie points. Given two sets of potential tie points recovered in two different scans, registration is performed by searching for the assignment which preserves the geometric configuration of the largest possible subset of all tie points. Since exhaustive search over all possible assignments is intractable even for moderate numbers of points, the search is guided by matching individual pairs of tie points with the help of a novel descriptor based on the properties of a point's parent planes. Experiments show that the proposed method is able to successfully coarse-register TLS point clouds without the need for artificial targets.
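Each virtual tie point is the intersection of three fitted planes; writing each plane as n·x = d, the intersection solves a 3×3 linear system. A minimal sketch (names illustrative):

```python
import numpy as np

def virtual_tie_point(planes):
    """Intersect three planes, each given as (normal, d) with n.x = d.
    Fails (singular matrix) when the planes are mutually near-parallel."""
    normals = np.array([n for n, _ in planes], dtype=float)
    offsets = np.array([d for _, d in planes], dtype=float)
    return np.linalg.solve(normals, offsets)

# three mutually orthogonal planes meeting at (2, 1, 3)
p = virtual_tie_point([((1, 0, 0), 2.0), ((0, 1, 0), 1.0), ((0, 0, 1), 3.0)])
```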
NASA Technical Reports Server (NTRS)
Mulqueen, J. A.; Addona, B. M.; Gwaltney, D. A.; Holt, K. A.; Hopkins, R. C.; Matis, J. A.; McRight, P. S.; Popp, C. G.; Sutherlin, S. G.; Thomas, H. D.;
2012-01-01
The primary purpose of this study was to define a point-of-departure, pre-phase A mission concept for the cryogenic propellant storage and transfer technology demonstration mission to be conducted by the NASA Office of the Chief Technologist (OCT). The mission concept includes identification of the cryogenic propellant management technologies to be demonstrated, definition of a representative mission timeline, and definition of a viable flight system design concept. The resulting mission concept will serve as a point of departure for evaluating alternative mission concepts and synthesizing the results of industry-defined mission concepts developed under the OCT-contracted studies.
The Egyptian geomagnetic reference field to the Epoch, 2010.0
NASA Astrophysics Data System (ADS)
Deebes, H. A.; Abd Elaal, E. M.; Arafa, T.; Lethy, A.; El Emam, A.; Ghamry, E.; Odah, H.
2017-06-01
The present work is a compilation of two tasks within the frame of the project "Geomagnetic Survey & Detailed Geomagnetic Measurements within the Egyptian Territory", funded by the Science and Technology Development Fund agency (STDF). The National Research Institute of Astronomy and Geophysics (NRIAG) has conducted a new extensive land geomagnetic survey that covers the whole Egyptian territory. The field measurements were made at 3212 points along all the asphalted roads, defined tracks, and ill-defined tracks in Egypt, with a total length of 11,586 km. The measurements cover for the first time new areas such as the southeastern borders of Egypt, including Halayeb and Shlatin, the Qattara Depression in the western desert, and the new roads between the Farafra and Baharia oases. A marine geomagnetic survey has also been carried out for the first time in Lake Nasser. The Misallat and Abu Simbel geomagnetic observatories have been used to reduce the field data to the epoch 2010.0. During the field measurements, whenever possible, the old stations occupied by previous observers were re-occupied to determine the secular variations at these points. The geomagnetic anomaly maps, the normal geomagnetic field maps with their corresponding secular variation maps, and the normal geomagnetic field equations of the geomagnetic elements (EGRF) with their corresponding secular variation equations are outlined. The anomalous sites discovered from the anomaly maps are only mentioned. In addition, a correlation between the International Geomagnetic Reference Field (IGRF) 2010.0 and the Egyptian Geomagnetic Reference Field (EGRF) 2010 is indicated.
A conceptual ground-water-quality monitoring network for San Fernando Valley, California
Setmire, J.G.
1985-01-01
A conceptual groundwater-quality monitoring network was developed for San Fernando Valley to provide the California State Water Resources Control Board with an integrated, basinwide control system to monitor the quality of groundwater. The geology, occurrence and movement of groundwater, land use, background water quality, and potential sources of pollution were described and then considered in designing the conceptual monitoring network. The network was designed to monitor major known and potential point and nonpoint sources of groundwater contamination over time. The network is composed of 291 sites where wells are needed to define the groundwater quality. The ideal network includes four specific-purpose networks to monitor (1) ambient water quality, (2) nonpoint sources of pollution, (3) point sources of pollution, and (4) line sources of pollution. (USGS)
Higuchi, Takahiro; Ishizaki, Yuko; Noritake, Atsushi; Yanagimoto, Yoshitoki; Kobayashi, Hodaka; Nakamura, Kae; Kaneko, Kazunari
2017-01-01
Children with autism spectrum disorders (ASD), who have neurodevelopmental impairments in social communication, often refuse to go to school because of difficulties in learning in class. The exact cause of maladaptation to school in such children is unknown. We hypothesized that these children have difficulty in paying attention to objects at which teachers are pointing. We performed gaze behavior analysis of children with ASD to understand their difficulties in the classroom. The subjects were 26 children with ASD (19 boys and 7 girls; mean age, 8.6 years) and 27 age-matched children with typical development (TD) (14 boys and 13 girls; mean age, 8.2 years). We measured eye movements of the children while they freely viewed two movies depicting actual classes: a Japanese class in which a teacher pointed at cartoon characters and an arithmetic class in which the teacher pointed at geometric figures. In the analysis, we defined the regions of interest (ROIs) as the teacher's face and finger, the cartoon characters and geometric figures at which the teacher pointed, and the classroom wall that contained no objects. We then compared total gaze time for each ROI between the children with ASD and those with TD by two-way ANOVA. Children with ASD spent less gaze time on the cartoon characters pointed at by the teacher and more gaze time on the wall in both classroom scenes. We could differentiate children with ASD from those with TD almost perfectly by the proportion of total gaze time spent looking at the wall. These results suggest that children with ASD do not follow the teacher's instructions in class and persist in gazing at inappropriate visual areas such as walls. Thus, they may have difficulties in understanding content in class, leading to maladaptation to school.
Power of tests for comparing trend curves with application to national immunization survey (NIS).
Zhao, Zhen
2011-02-28
Three statistical tests were developed for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and their statistical power was compared under different trend-curve scenarios. For large sample sizes, assuming independence and normality among strata and across consecutive time points, Z and Chi-square test statistics were developed; these are functions of the outcome estimates and their standard errors at each of the study time points for the two strata. For small sample sizes under the independent normal assumption, an F-test statistic was derived as a function of the sample sizes of the two strata and the parameters estimated across the study period. If the two trend curves are approximately parallel, the power of the Z-test is consistently higher than that of both the Chi-square and F-tests. If the two trend curves cross with low interaction, the power of the Z-test is higher than or equal to that of both the Chi-square and F-tests; at high interaction, however, the powers of the Chi-square and F-tests exceed that of the Z-test. A measure of the interaction of two trend curves was defined. These tests were applied to the comparison of trend curves of vaccination coverage estimates of standard vaccine series with National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.
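The abstract does not print the test statistics themselves; one plausible form of the Z-test for an overall difference between two trend curves, assuming independence across strata and time points, is sketched below as an assumption rather than the paper's exact statistic.

```python
import numpy as np
from scipy.stats import norm

def z_trend_test(est1, se1, est2, se2):
    """Aggregate Z statistic for the difference of two trend curves given
    point estimates and standard errors at each time point."""
    est1, se1, est2, se2 = map(np.asarray, (est1, se1, est2, se2))
    z = (est1 - est2).sum() / np.sqrt((se1**2 + se2**2).sum())
    p = 2.0 * (1.0 - norm.cdf(abs(z)))   # two-sided p-value
    return z, p

# e.g., coverage estimates (%) for two strata over four survey years
z, p = z_trend_test([81, 83, 85, 88], [1.2, 1.1, 1.0, 0.9],
                    [78, 80, 81, 84], [1.3, 1.2, 1.1, 1.0])
```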
NASA Technical Reports Server (NTRS)
2002-01-01
Dramatic losses of bone mineral density (BMD) and muscle strength are two of the best-documented changes observed in humans after prolonged exposure to microgravity. Recovery of muscle upon return to a 1-G environment is well studied; however, far less is known about the rate and completeness of BMD recovery to pre-flight values. Using the mature tail-suspended adult rat model, this proposal will focus on the temporal course of recovery in tibial bone following a 28-d period of skeletal unloading. Through the study of bone density and muscle strength in the same animal, time-points during recovery from simulated microgravity will be identified when bone is at an elevated risk for fracture. These will occur due to the rapid recovery of muscle strength coupled with a slower recovery of bone, producing a significant mismatch in the functional strength of these two tissues. Once the time-point of maximal mismatch is defined, various mechanical and pharmacological interventions will be tested at and around this time-point in an attempt to minimize the functional difference between bone and muscle. The outcomes of this research will have high relevance for optimizing the rehabilitation of astronauts upon return to Earth, as well as upon landing on the Martian surface before assuming arduous physical tasks. Further, it will impact significantly on rehabilitation issues common to patients experiencing long periods of limb immobilization or bed rest.
Tsur, Noga; Defrin, Ruth; Lahav, Yael; Solomon, Zahava
2018-03-01
Orientation to bodily signals is defined as the way somatic sensations are attended to, perceived, and interpreted. Research suggests that trauma exposure, and particularly the pathological reaction to trauma (i.e., PTSD), is associated with a catastrophic and fearful orientation to bodily signals. However, little is known regarding the long-term ramifications of trauma exposure and PTSD for orientation to bodily signals, and less still regarding which PTSD symptom cluster manifests in the 'somatic route' through which orientation to bodily signals is altered. The current study examined the long-term implications of trauma and PTSD trajectories for orientation to bodily signals. Fifty-nine ex-prisoners of war (ex-POWs) and 44 controls were assessed for PTSD at three time-points (18, 30 and 35 years post-war). Orientation to bodily signals (pain catastrophizing and anxiety sensitivity-physical concerns) was assessed at T3. Participants with a chronic PTSD trajectory had higher pain catastrophizing compared to participants with no PTSD. PTSD symptom severity at T2 and T3 mediated the association between captivity and orientation. Among PTSD symptom clusters, hyperarousal at two time-points and intrusion at three time-points mediated the association between captivity and orientation. These findings point to the cardinal role of long-term PTSD in the subjective experience of the body following trauma. Copyright © 2018 Elsevier B.V. All rights reserved.
Classical evolution of fractal measures on the lattice
NASA Astrophysics Data System (ADS)
Antoniou, N. G.; Diakonos, F. K.; Saridakis, E. N.; Tsolias, G. A.
2007-04-01
We consider the classical evolution of a lattice of nonlinear coupled oscillators for a special case of initial conditions resembling the equilibrium state of a macroscopic thermal system at the critical point. The displacements of the oscillators define initially a fractal measure on the lattice associated with the scaling properties of the order parameter fluctuations in the corresponding critical system. Assuming a sudden symmetry breaking (quench), leading to a change in the equilibrium position of each oscillator, we investigate in some detail the deformation of the initial fractal geometry as time evolves. In particular, we show that traces of the critical fractal measure can be sustained for large times, and we extract the properties of the chain that determine the associated time scales. Our analysis applies generally to critical systems for which, after a slow developing phase where equilibrium conditions are justified, a rapid evolution, induced by a sudden symmetry breaking, emerges on time scales much shorter than the corresponding relaxation or observation time. In particular, it can be used in the fireball evolution in a heavy-ion collision experiment, where the QCD critical point emerges, or in the study of evolving fractals of astrophysical and cosmological scales, and may lead to determination of the initial critical properties of the Universe through observations in the symmetry-broken phase.
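A minimal sketch of the kind of dynamics described, assuming a phi^4-type chain whose sudden symmetry breaking is represented by switching on an external tilt field; the potential form, parameters, and leapfrog integrator are illustrative assumptions, not the authors' model.

```python
import numpy as np

def force(u, k=1.0, m2=1.0, h=0.1):
    """Force on each oscillator: nearest-neighbor coupling plus the
    gradient of a double-well potential V = -m2*u^2/2 + u^4/4 - h*u,
    where h is the symmetry-breaking tilt switched on at the quench."""
    laplacian = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u
    return k * laplacian + m2 * u - u**3 + h

def evolve(u, v, dt=0.01, steps=10_000):
    """Leapfrog (velocity Verlet) evolution of displacements u, velocities v."""
    a = force(u)
    for _ in range(steps):
        v += 0.5 * dt * a
        u += dt * v
        a = force(u)
        v += 0.5 * dt * a
    return u, v
```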
Fast global image smoothing based on weighted least squares.
Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N
2014-12-01
This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves results whose quality matches the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ-norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
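Each 1D pass reduces to a tridiagonal system solvable in linear time by the Thomas algorithm. A simplified sketch of one pass, minimizing sum (u_i - f_i)^2 + lam * sum w_i (u_{i+1} - u_i)^2 with guidance weights w, is given below; the paper's exact weighting differs in detail.

```python
import numpy as np

def smooth_1d(f, w, lam):
    """Solve the tridiagonal normal equations of 1D weighted-least-squares
    smoothing via the Thomas algorithm (O(n)); w has length n-1."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    sub = np.zeros(n); sup = np.zeros(n)
    sup[:-1] = -lam * w          # superdiagonal
    sub[1:] = -lam * w           # subdiagonal
    diag = 1.0 - sub - sup       # 1 + lam*(w_{i-1} + w_i) at each i
    # forward elimination
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = sup[0] / diag[0]; dp[0] = f[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (f[i] - sub[i] * dp[i - 1]) / m
    # back substitution
    u = np.empty(n)
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u
```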
Experimental estimating deflection of a simple beam bridge model using grating eddy current sensors.
Lü, Chunfeng; Liu, Weiwen; Zhang, Yongjie; Zhao, Hui
2012-01-01
A novel three-point method using a grating eddy current absolute position sensor (GECS) for bridge deflection estimation is proposed in this paper. The real spatial positions of the measuring points along the span axis are used directly as relative reference points for each other, rather than relying on auxiliary static reference points for the measuring devices as in conventional methods. Every three adjacent measuring points are defined as a measuring unit, and a straight connecting bar, with a GECS fixed at its center section, links the two endpoints. In each measuring unit, the displacement of the mid-measuring point relative to the connecting bar, measured by the GECS, is defined as the relative deflection. Absolute deflections of each measuring point can be calculated from the relative deflections of all the measuring units directly, without any correcting approaches. The principles of the three-point method and of displacement measurement with the GECS are introduced in detail. Both static and dynamic experiments have been carried out on a simple beam bridge model, which demonstrate that the three-point deflection estimation method using the GECS is effective and offers a reliable way for bridge deflection estimation, especially for long-term monitoring.
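The reconstruction of absolute deflections from the unit-wise relative deflections can be sketched as a small linear system, assuming fixed end supports: each unit contributes y_i - (y_{i-1} + y_{i+1})/2 = r_i. This reading of the three-point geometry is an assumption for illustration, not the authors' published algorithm.

```python
import numpy as np

def absolute_deflections(rel, y_left=0.0, y_right=0.0):
    """Solve for interior absolute deflections y_1..y_m given the relative
    deflections r_i = y_i - (y_{i-1} + y_{i+1})/2 of each measuring unit,
    with known support deflections at the two ends."""
    m = len(rel)
    A = np.zeros((m, m))
    b = np.asarray(rel, dtype=float).copy()
    for i in range(m):
        A[i, i] = 1.0
        if i > 0:
            A[i, i - 1] = -0.5
        if i < m - 1:
            A[i, i + 1] = -0.5
    b[0] += 0.5 * y_left
    b[-1] += 0.5 * y_right
    return np.linalg.solve(A, b)
```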
Brasil, Albert Vincent Berthier; Teles, Alisson R; Roxo, Marcelo Ricardo; Schuster, Marcelo Neutzling; Zauk, Eduardo Ballverdu; Barcellos, Gabriel da Costa; Costa, Pablo Ramon Fruett da; Ferreira, Nelson Pires; Kraemer, Jorge Luiz; Ferreira, Marcelo Paglioli; Gobbato, Pedro Luis; Worm, Paulo Valdeci
2016-10-01
To analyze the cumulative effect of risk factors associated with early major complications in postoperative spine surgery. Retrospective analysis of 583 surgically treated patients. Early "major" complications were defined as those that may lead to permanent detrimental effects or require further significant intervention. A balanced risk score was built using multiple logistic regression. Ninety-two early major complications occurred in 76 patients (13%). Age > 60 years and surgery of three or more levels proved to be significant independent risk factors in the multivariate analysis. The balanced scoring system was defined as: 0 points (no risk factor), 2 points (1 factor), or 4 points (2 factors). The incidence of early major complications in each category was 7% (0 points), 15% (2 points), and 29% (4 points), respectively. This balanced scoring system, based on two risk factors, represents an important tool both for surgical indication and for patient counseling before surgery.
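The two-factor balanced score translates directly into code; thresholds and the observed complication rates are exactly those reported in the abstract.

```python
def complication_risk_score(age_years, operated_levels):
    """Balanced score: 2 points per risk factor
    (age > 60 years; surgery of three or more levels)."""
    score = 0
    if age_years > 60:
        score += 2
    if operated_levels >= 3:
        score += 2
    return score  # 0, 2 or 4; observed complication rates 7%, 15%, 29%

print(complication_risk_score(67, 4))  # -> 4 (highest-risk category)
```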
Khanna, Sankalp; Boyle, Justin; Good, Norm; Lind, James
2012-10-01
To investigate the effect of hospital occupancy levels on inpatient and ED patient flow parameters, and to simulate the impact of shifting discharge timing on occupancy levels. Retrospective analysis of hospital inpatient data and ED data from 23 reporting public hospitals in Queensland, Australia, across 30 months. Relationships between outcome measures were explored through the aggregation of the historic data into 21 912 hourly intervals. Main outcome measures included admission and discharge rates, occupancy levels, length of stay for admitted and emergency patients, and the occurrence of access block. The impact of shifting discharge timing on occupancy levels was quantified using observed and simulated data. The study identified three stages of system performance decline, or choke points, as hospital occupancy increased. These choke points were found to be dependent on hospital size, and reflect a system change from 'business-as-usual' to 'crisis'. Effecting early discharge of patients was also found to significantly (P < 0.001) impact overcrowding levels and improve patient flow. Modern hospital systems have the ability to operate efficiently above an often-prescribed 85% occupancy level, with optimal levels varying across hospitals of different size. Operating over these optimal levels leads to performance deterioration defined around occupancy choke points. Understanding these choke points and designing strategies around alleviating these flow bottlenecks would improve capacity management, reduce access block and improve patient outcomes. Effecting early discharge also helps alleviate overcrowding and related stress on the system. © 2012 CSIRO. EMA © 2012 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
Kasteleijn-Nolst Trenité, Dorotheé G A; Biton, Victor; French, Jacqueline A; Abou-Khalil, Bassel; Rosenfeld, William E; Diventura, Bree; Moore, Elizabeth L; Hetherington, Seth V; Rigdon, Greg C
2013-08-01
To assess the effects of ICA-105665, an agonist of neuronal Kv7 potassium channels, on epileptiform EEG discharges, evoked by intermittent photic stimulation (IPS), the so-called photoparoxysmal responses (PPRs) in patients with epilepsy. Male and female patients aged 18-60 years with reproducible PPRs were eligible for enrollment. The study was conducted as a single-blind, single-dose, multiple-cohort study. Four patients were enrolled in each of the first three cohorts. Six patients were enrolled in the fourth cohort and one patient was enrolled in the fifth cohort. PPR responses to 14 IPS frequencies (steps) were used to determine the standard photosensitivity range (SPR) following placebo on day 1 and ICA-105665 on day 2. The SPR was quantified for three eye conditions (eyes closing, eyes closed, and eyes open), and the most sensitive condition was used for assessment of efficacy. A partial response was defined as a reduction in the SPR of at least three units at three separate time points following ICA-105665 compared to the same time points following placebo with no time points with more than three units of increase. Complete suppression was defined by no PPRs in any eye condition at one or more time points. Six individual patients participated in the first three cohorts (100, 200, and 400 mg). Six patients participated in the fourth cohort (500 mg), and one patient participated in the fifth cohort (600 mg). Decreases in SPR occurred in one patient at 100 mg, two patients receiving 400 mg ICA-105665 (complete abolishment of SPR occurred in one patient at 400 mg), and in four of six patients receiving 500 mg. The most common adverse events (AEs) were those related to the nervous system, and dizziness appeared to be the first emerging AE. The single patient in the 600 mg cohort developed a brief generalized seizure within 1 h of dosing, leading to the discontinuation of additional patients at this dose, per the predefined protocol stopping rules. ICA-105665 reduced the SPR in patients at single doses of 100 (one of four), 400 (two of four), and 500 mg (four of six). This is the first assessment of the effects of activation of Kv7 potassium channels in the photosensitivity proof of concept model. The reduction of SPR in this patient population provides evidence of central nervous system (CNS) penetration by ICA-105665, and preliminary evidence that engagement with neuronal Kv7 potassium channels has antiseizure effects. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
Weberpals, Janick; Jansen, Lina; van Herk-Sukel, Myrthe P P; Kuiper, Josephina G; Aarts, Mieke J; Vissers, Pauline A J; Brenner, Hermann
2017-11-01
Immortal time bias (ITB) is still seen frequently in the medical literature, yet little is known about this bias in the field of cancer (pharmaco-)epidemiology. In the context of a hypothetical beneficial effect of beta-blocker use among cancer patients, we aimed to demonstrate the magnitude of ITB among 9876 prostate, colorectal, lung and pancreatic cancer patients diagnosed between 1998 and 2011, who were selected from a database linkage of the Netherlands Cancer Registry and the PHARMO Database Network. Hazard ratios (HR) and 95% confidence intervals from three ITB scenarios, defining exposure at a defined point after diagnosis (model 1), at any point after diagnosis (model 2), and as multiple exposures after diagnosis (model 3), were calculated to investigate the association between beta-blockers and cancer prognosis using Cox proportional hazards regression. Results were compared to unbiased estimates derived from the Mantel-Byar model. Ignoring ITB led to substantially smaller HRs for beta-blocker use, suggesting a significant protective association in all cancer types [e.g. HR 0.18 (0.07-0.43) for pancreatic cancer in model 1], whereas estimates derived from the Mantel-Byar model mainly suggested no association [e.g. HR 1.10 (0.84-1.44)]. The magnitude of bias was consistently larger among cancer types with worse prognosis [overall median HR differences between all scenarios in model 1 and the Mantel-Byar model of 0.56 (prostate), 0.72 (colorectal), 0.77 (lung) and 0.85 (pancreas)]. In conclusion, ITB led to spurious beneficial associations of beta-blocker use among cancer patients. The magnitude of ITB depends on the duration of excluded immortal time and the prognosis of each cancer.
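The bias mechanism is easy to reproduce by simulation: classifying patients as "ever-exposed" from diagnosis credits the exposed group with the pre-exposure survival time that was required for them to become exposed at all. The numpy-only sketch below has no true drug effect, yet the naive analysis shows a strong spurious benefit that vanishes under a Mantel-Byar-style, time-dependent person-time split. All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
death = rng.exponential(2.0, n)       # years from diagnosis to death
rx_start = rng.exponential(1.0, n)    # time a prescription would begin
exposed = rx_start < death            # only long-enough survivors get exposed

# Naive 'ever-exposed' analysis: immortal time counted as exposed.
naive_ratio = ((exposed.sum() / death[exposed].sum())
               / ((~exposed).sum() / death[~exposed].sum()))

# Time-dependent analysis: person-time before rx_start is unexposed.
pt_exposed = (death[exposed] - rx_start[exposed]).sum()
pt_unexposed = death[~exposed].sum() + rx_start[exposed].sum()
td_ratio = ((exposed.sum() / pt_exposed)
            / ((~exposed).sum() / pt_unexposed))

print(naive_ratio)  # well below 1: spurious protection
print(td_ratio)     # close to 1: no true effect
```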
Hayn, Matthew H; Hussain, Abid; Mansour, Ahmed M; Andrews, Paul E; Carpentier, Paul; Castle, Erik; Dasgupta, Prokar; Rimington, Peter; Thomas, Raju; Khan, Shamim; Kibel, Adam; Kim, Hyung; Manoharan, Murugesan; Menon, Mani; Mottrie, Alex; Ornstein, David; Peabody, James; Pruthi, Raj; Palou Redorta, Joan; Richstone, Lee; Schanne, Francis; Stricker, Hans; Wiklund, Peter; Chandrasekhar, Rameela; Wilding, Greg E; Guru, Khurshid A
2010-08-01
Robot-assisted radical cystectomy (RARC) has evolved as a minimally invasive alternative to open radical cystectomy for patients with invasive bladder cancer. We sought to define the learning curve for RARC by evaluating results from a multicenter, contemporary, consecutive series of patients who underwent this procedure. Utilizing the International Robotic Cystectomy Consortium database, which is prospectively maintained and institutional review board-approved, we identified 496 patients who underwent RARC by 21 surgeons at 14 institutions from 2003 to 2009. Cut-off points for operative time, lymph node yield (LNY), estimated blood loss (EBL), and margin positivity were identified. Using specifically designed statistical mixed models, we were able to inversely predict the number of patients required for an institution to reach the predetermined cut-off points. Mean operative time was 386 min, mean EBL was 408 ml, and mean LNY was 18. Overall, 34 of 482 patients (7%) had a positive surgical margin (PSM). Using statistical models, it was estimated that 21 patients were required for operative time to reach 6.5 h, and 8, 20, and 30 patients were required to reach an LNY of 12, 16, and 20, respectively. For all patients, PSM rates of <5% were achieved after 30 patients. For patients with pathologic stage higher than T2, PSM rates of <15% were achieved after 24 patients. RARC is a challenging procedure but is a technique that is reproducible throughout multiple centers. This report helps to define the learning curve for RARC and demonstrates an acceptable level of proficiency by the 30th case for proxy measures of RARC quality. Copyright (c) 2010 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Defining service and education: the first step to developing the correct balance.
Reines, H David; Robinson, Linda; Nitzchke, Stephanie; Rizzo, Anne
2007-08-01
Service and education activities have not been well defined or studied. The purpose of this study is to describe how attendings and residents categorize common resident activities on a service-education continuum. A web-based survey was designed to categorize resident activities; a panel of residents and surgical educators reviewed the survey for content validity. Residents and attendings categorized 27 resident activities on a 5-point scale from 1 (pure service) to 5 (pure education). Data analysis was performed using SPSS ver. 12. In total, 125 residents and 71 attendings from eight residency programs participated; 66% of residents and 90% of attendings were male. On average, attendings had practiced 14.3 years. Residents' post-graduate year ranged from PGY-1 to PGY-6 (mean, 2.78). Attendings and residents agreed on the categorization of most activities. Residents felt more time should be devoted to pure education than did attendings. Forty percent of residents, versus 10% of attendings, felt that more than half of resident time was spent in pure service. Twenty-five percent of residents and 23% of attendings were dissatisfied with the service-education balance. The Residency Review Committee mandates that education is the central purpose of the surgical residency without clearly defining the balance between education and service. Attendings and residents agree on the educational value of most activities and that the balance between education and service is acceptable. Compared with attendings, residents feel they need significantly more time in education. Adequate learning can be facilitated by the development of clear definitions of service and education and guidelines for the distribution of resident time.
NASA Astrophysics Data System (ADS)
Temme, Francis P.
For uniform spins and their indistinguishable point sets of tensorial bases defining automorphic group-based Liouvillian NMR spin dynamics, the role of recursively derived coefficients of fractional parentage (CFP) bijections and Schur duality-defined CFP(0)(n) ≡ |GI|(n) group invariant cardinality is central both to understanding the impact of time-reversal invariance (TRI) spin physics, and to analysis as density-matrix formalisms over democratic recoupled (DR) dual tensorial sets, {T
Simple reaction time to the onset of time-varying sounds.
Schlittenlacher, Josef; Ellermeier, Wolfgang
2015-10-01
Although auditory simple reaction time (RT) is usually defined as the time elapsing between the onset of a stimulus and a recorded reaction, a sound cannot be specified by a single point in time. Therefore, the present work investigates how the period of time immediately after onset affects RT. By varying the stimulus duration between 10 and 500 msec, this critical duration was determined to fall between 32 and 40 msec for a 1-kHz pure tone at 70 dB SPL. In a second experiment, the role of the buildup was further investigated by varying the rise time and its shape. The increment in RT for extending the rise time by a factor of ten was about 7 to 8 msec. There was no statistically significant difference in RT between a Gaussian and a linear rise shape. A third experiment varied the modulation frequency and point of onset of amplitude-modulated tones, producing onsets at different initial levels with differently rapid increases or decreases immediately afterwards. The results of all three experiments were explained very well by a straightforward extension of the parallel grains model (Miller and Ulrich, Cogn. Psychol. 46, 101-151, 2003), a probabilistic race model employing many parallel channels. The extension of the model to time-varying sounds made the activation of such a grain depend on intensity as a function of time rather than on a constant level. A second approach based on mechanisms known from loudness produced less accurate predictions.
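As a rough illustration of that extension, the sketch below races many parallel grains whose activation hazard follows the stimulus envelope; the envelope shape, rates, grain count, decision criterion, and motor time are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def envelope(t_ms, rise_ms=10.0, level=1.0):
    """Linear rise to a plateau: a toy stand-in for the stimulus buildup."""
    return level * np.clip(t_ms / rise_ms, 0.0, 1.0)

def simulate_rt(n_grains=200, k=25, alpha=0.02, dt=0.5, t_max=500.0,
                motor_ms=60.0, **env_kw):
    """One trial: each grain activates with hazard alpha * envelope(t);
    the decision is triggered when the k-th grain has activated."""
    t = np.arange(0.0, t_max, dt)
    p_act = alpha * envelope(t, **env_kw) * dt        # per-step activation prob.
    fired = rng.random((n_grains, t.size)) < p_act    # grain x time activations
    # first activation step of each grain (censored at the last step)
    first = np.where(fired.any(axis=1), fired.argmax(axis=1), t.size - 1)
    kth_step = np.sort(first)[k - 1]
    return t[kth_step] + motor_ms                     # add residual motor time

for rise in (10.0, 100.0):
    rts = np.array([simulate_rt(rise_ms=rise) for _ in range(200)])
    print(f"rise time {rise:5.1f} ms -> mean simulated RT {rts.mean():6.1f} ms")
```

Lengthening the rise time delays the early activations and raises the simulated mean RT, qualitatively matching the reported growth of RT with rise time.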
Patel, Madhukar S; De La Cruz, Salvador; Sally, Mitchell B; Groat, Tahnee; Malinoski, Darren J
2017-10-01
Meeting donor management goals when caring for potential organ donors has been associated with more organs transplanted per donor (OTPD). Concern persists, however, as to whether this indicates that younger/healthier donors are more likely to meet donor management goals or whether active management affects outcomes. A prospective observational study of all standard criteria donors was conducted by 10 organ procurement organizations across United Network for Organ Sharing Regions 4, 5, and 6. Donor management goals representing normal critical care end points were measured at 2 time points: when a catastrophic brain injury was recognized and a referral was made to the organ procurement organization by the donor hospital (DH); and after brain death was declared and authorization for organ donation was obtained. Donor management goals Bundle "met" was defined as achieving any 7 of 9 end points. A positive Bundle status change was defined as not meeting the Bundle at referral and subsequently achieving it at authorization. The primary outcomes measure was having ≥4 OTPD. Data were collected for 1,398 standard criteria donors. Of the 1,166 (83%) who did not meet the Bundle at referral, only 254 (22%) had a positive Bundle status change. On adjusted analysis, positive Bundle status change increased the odds of achieving ≥4 OTPD significantly (odds ratio 2.04; 95% CI 1.49 to 2.81; p < 0.001). A positive donor management goal Bundle status change during donor hospital management is associated with a 2-fold increase in achieving ≥4 OTPD. Active critical care management of the potential organ donor, as evidenced by improvement in routinely measured critical care end points, can be a means by which to substantially increase the number of organs available for transplantation. Published by Elsevier Inc.
Aronson, Ronnie; Cohen, Ohad; Conget, Ignacio; Runzis, Sarah; Castaneda, Javier; de Portu, Simona; Lee, Scott; Reznik, Yves
2014-07-01
In insulin-requiring type 2 diabetes patients, current insulin therapy approaches such as basal-alone or basal-bolus multiple daily injections (MDI) have not consistently provided achievement of optimal glycemic control. Previous studies have suggested a potential benefit of continuous subcutaneous insulin infusion (CSII) in these patients. The OpT2mise study is a multicenter, randomized trial comparing CSII with MDI in a large cohort of subjects with evidence of persistent hyperglycemia despite previous MDI therapy. Subjects were enrolled into a run-in period for optimization of their MDI insulin regimen. Subjects showing persistent hyperglycemia (glycated hemoglobin [HbA1c] ≥8% and ≤12%) were then randomly assigned to CSII or continuation of an MDI regimen for a 6-month phase, followed by a single crossover in which the MDI arm switched to CSII. The primary end point is the between-group difference in mean change in HbA1c from baseline to 6 months. Secondary end points include change in mean 24-h glucose values, area under the curve and time spent in hypoglycemia and hyperglycemia, measures of glycemic excursions, change in postprandial hyperglycemia, and evaluation of treatment satisfaction. Safety end points include hypoglycemia, hospital admissions, and emergency room visits. When subject enrollment was completed in May 2013, 495 subjects had been enrolled in the study. Study completion for the primary end point is expected in January 2014. OpT2mise will represent the largest studied homogeneous cohort of type 2 diabetes patients with persistent hyperglycemia despite optimized MDI therapy. OpT2mise will help define the role of CSII in insulin intensification and define its safety, rate of hypoglycemia, patient adherence, and patient satisfaction.
Is the DLQI appropriate for medical decision-making in psoriasis patients?
Poór, Adrienn Katalin; Brodszky, Valentin; Péntek, Márta; Gulácsi, László; Ruzsa, Gábor; Hidvégi, Bernadett; Holló, Péter; Kárpáti, Sarolta; Sárdy, Miklós; Rencz, Fanni
2018-01-01
The Dermatology Life Quality Index (DLQI) is the most commonly applied measure of health-related quality of life (HRQoL) in psoriasis patients. It is among the defining criteria of moderate-to-severe psoriasis and is present in treatment guidelines. Our objective was to estimate preference-based HRQoL values (i.e., utilities) for hypothetical health states described by the 10 items of the DLQI in psoriasis patients. Moreover, we compare results to findings of a similar study previously conducted among the general public. A cross-sectional survey was carried out among 238 psoriasis patients. Seven hypothetical DLQI-defined health states with total scores of 6, 11, and 16 (three states each for scores 6 and 11, and one state for score 16) were evaluated by the time trade-off method. The difference in DLQI scores between hypothetical health states was set at 5 points, as it exceeds the minimal clinically important difference (MCID). Utility scores were found to be homogeneous across the seven hypothetical health states (range of means for the 6-point states 0.85-0.91, range of means for the 11-point states 0.83-0.85, and mean of 0.84 for the 16-point state). Overall, mean utilities assessed by psoriasis patients were higher for all seven states compared with the general public (mean difference 0.16-0.28; p < 0.001). In 11 out of the 15 comparisons between health states with DLQI scores differing by more than the MCID, there was no statistically significant difference in utility. Thus, in clinical settings, patients with DLQI scores differing by more than the MCID may have identical HRQoL. Improving the definition of moderate-to-severe disease and reconsideration of the DLQI in the clinical assessment of psoriasis patients are suggested.
Shaw, Leslee J; Blankstein, Ron; Jacobs, Jill E; Leipsic, Jonathon A; Kwong, Raymond Y; Taqueti, Viviany R; Beanlands, Rob S B; Mieres, Jennifer H; Flamm, Scott D; Gerber, Thomas C; Spertus, John; Di Carli, Marcelo F
2017-12-01
The aims of the current statement are to refine the definition of quality in cardiovascular imaging and to propose novel methodological approaches to inform the demonstration of quality in imaging in future clinical trials and registries. We propose defining quality in cardiovascular imaging using an analytical framework put forth by the Institute of Medicine whereby quality was defined as testing being safe, effective, patient-centered, timely, equitable, and efficient. The implications of each of these components of quality health care are as essential for cardiovascular imaging as they are for other areas within health care. Our proposed statement may serve as the foundation for integrating these quality indicators into establishing designations of quality laboratory practices and developing standards for value-based payment reform for imaging services. We also include recommendations for future clinical research to fulfill quality aims within cardiovascular imaging, including clinical hypotheses of improving patient outcomes, the importance of health status as an end point, and deferred testing options. Future research should evolve to define novel methods optimized for the role of cardiovascular imaging for detecting disease and guiding treatment and to demonstrate the role of cardiovascular imaging in facilitating healthcare quality. © 2017 American Heart Association, Inc.
Trajectory Specification for Terminal Air Traffic: Pairwise Conflict Detection and Resolution
NASA Technical Reports Server (NTRS)
Paielli, Russ; Erzberger, Heinz
2017-01-01
Trajectory specification is the explicit bounding and control of aircraft trajectories such that the position at each point in time is constrained to a precisely defined volume of space. The bounding space is defined by cross-track, along-track, and vertical tolerances relative to a reference trajectory that specifies position as a function of time. The tolerances are dynamic and will be based on the aircraft navigation capabilities and the current traffic situation. A standard language will be developed to represent these specifications and to communicate them by datalink. Assuming conformance, trajectory specification can guarantee safe separation for an arbitrary period of time even in the event of an air traffic control (ATC) system or datalink failure, hence it can help to achieve the high level of safety and reliability needed for ATC automation. As a more proactive form of ATC, it can also maximize airspace capacity and reduce the reliance on tactical backup systems during normal operation. It applies to both enroute airspace and the terminal area around airports, but this paper focuses on the terminal area and presents algorithms and software for spacing arrivals and deconflicting both arrivals and departures.
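To make the bounding-volume idea concrete, here is a minimal conformance-check sketch in which the deviation from the reference trajectory is resolved into along-track, cross-track, and vertical components and compared against the tolerances. The function names, the finite-difference heading estimate, and the toy reference trajectory are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def conforms(pos, t, ref, tol_along, tol_cross, tol_vert, dt=1.0):
    """True if `pos` = (x, y, z) at time t lies inside the moving tolerance
    box around the reference trajectory ref(t) -> (x, y, z).  The horizontal
    deviation is resolved along and across the reference ground track."""
    r = np.asarray(ref(t), dtype=float)
    # along-track unit vector from a central finite difference of ref
    v = (np.asarray(ref(t + dt), float) - np.asarray(ref(t - dt), float)) / (2 * dt)
    u = v[:2] / (np.linalg.norm(v[:2]) + 1e-12)
    d = np.asarray(pos, dtype=float) - r
    along = d[0] * u[0] + d[1] * u[1]
    cross = -d[0] * u[1] + d[1] * u[0]
    return (abs(along) <= tol_along and abs(cross) <= tol_cross
            and abs(d[2]) <= tol_vert)

# toy reference: straight-in descent at 70 m/s ground speed, 5 m/s sink rate
ref = lambda t: (70.0 * t, 0.0, 3000.0 - 5.0 * t)
print(conforms((7010.0, 150.0, 2480.0), 100.0, ref,
               tol_along=300.0, tol_cross=200.0, tol_vert=60.0))   # True
```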
NASA Astrophysics Data System (ADS)
Earlie, C. S.; Masselink, G.; Russell, P.; Shail, R.; Kingston, K.
2013-12-01
Our understanding of the evolution of hard rock coastlines is limited due to the episodic nature and 'slow' rate at which changes occur. High-resolution surveying techniques, such as Terrestrial Laser Scanning (TLS), have just begun to be adopted as a method of obtaining detailed point cloud data to monitor topographical changes over short periods of time (weeks to months). However, the difficulties involved in comparing consecutive point cloud data sets in a complex three-dimensional plane, such as occlusion due to surface roughness and positioning of the data capture point as a result of a constantly changing environment (a beach profile), mean that comparing data sets can lead to errors in the region of 10-20 cm. Meshing techniques are often used for point cloud data analysis of simple surfaces, but for surfaces such as rocky cliff faces this technique has been found to be ineffective. Recession rates of hard rock coastlines in the UK are typically determined using aerial photography or airborne LiDAR data, yet the detail of the important changes occurring to the cliff face and toe is missed using such techniques. In this study we apply an algorithm (M3C2 - Multiscale Model to Model Cloud Comparison), initially developed for analysing fluvial morphological change, that directly compares point cloud data using surface normals that are consistent with surface roughness and measures the change that occurs along the normal direction (Lague et al., 2013). The surface changes are analysed using a set of user-defined scales based on surface roughness and registration error. Once the correct parameters are defined, the volumetric cliff face changes are calculated by integrating the mean distance between the point clouds. The analysis has been undertaken at two hard rock sites identified for their active erosion, located on the UK's south west peninsula at Porthleven in south west Cornwall and Godrevy in north Cornwall. Alongside TLS point cloud data, in-situ measurements of the nearshore wave climate, using a pressure transducer, the offshore wave climate, from a directional wave buoy, and rainfall records from nearby weather stations were collected. Combining beach elevation information from the georeferenced point clouds with a continuous time series of wave climate provides an indication of the variation in wave energy delivered to the cliff face. The rates of retreat were found to agree with the existing rates that are currently used in shoreline management. The additional geotechnical detail afforded by applying the M3C2 method to a hard rock environment provides not only a means of obtaining volumetric changes with confidence, but also a clear illustration of the locations of failure on the cliff face. Monthly cliff scans help to narrow down the timing of failures under energetic wave conditions or periods of heavy rainfall. Volumetric changes and regions sensitive to failure established using this method allow us to capture episodic changes to the cliff face at a high resolution (1-2 cm) that are otherwise missed using the lower-resolution techniques typically used for shoreline management, and to understand in greater detail the geotechnical behaviour of hard rock cliffs and determine rates of erosion with greater accuracy.
Efficient generation of discontinuity-preserving adaptive triangulations from range images.
Garcia, Miguel Angel; Sappa, Angel Domingo
2004-10-01
This paper presents an efficient technique for generating adaptive triangular meshes from range images. The algorithm consists of two stages. First, a user-defined number of points is adaptively sampled from the given range image. Those points are chosen by taking into account the surface shapes represented in the range image in such a way that points tend to group in areas of high curvature and to disperse in low-variation regions. This selection process is done through a noniterative, inherently parallel algorithm in order to gain efficiency. Once the image has been subsampled, the second stage applies a two-and-one-half-dimensional (2.5D) Delaunay triangulation to obtain an initial triangular mesh. To favor the preservation of surface and orientation discontinuities (jump and crease edges) present in the original range image, the aforementioned triangular mesh is iteratively modified by applying an efficient edge flipping technique. Results with real range images show accurate triangular approximations of the given range images with low processing times.
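A minimal sketch of the two-stage pipeline, with a probability-based sampler standing in for the paper's noniterative parallel selection and the discontinuity-preserving edge-flipping stage omitted; all names and parameters are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

def adaptive_mesh(depth, n_points=2000, seed=1):
    """Sample more points where the range image bends and fewer on flat
    regions, then build a 2.5D Delaunay mesh over the image plane."""
    rng = np.random.default_rng(seed)
    # curvature proxy: magnitude of the discrete Laplacian of the depth map
    lap = np.abs(4.0 * depth
                 - np.roll(depth, 1, axis=0) - np.roll(depth, -1, axis=0)
                 - np.roll(depth, 1, axis=1) - np.roll(depth, -1, axis=1))
    w = lap.ravel() + 1e-3 * lap.max()       # keep some mass on flat areas
    idx = rng.choice(depth.size, size=n_points, replace=False, p=w / w.sum())
    ys, xs = np.unravel_index(idx, depth.shape)
    tri = Delaunay(np.column_stack([xs, ys]))
    vertices = np.column_stack([xs, ys, depth[ys, xs]])
    return vertices, tri.simplices

# toy range image: a cone, whose apex and rim attract most of the samples
z = np.fromfunction(lambda y, x: np.hypot(x - 64.0, y - 64.0), (128, 128))
vertices, faces = adaptive_mesh(z)
print(vertices.shape, faces.shape)
```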
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skala, Vaclav
There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed up. In the case of a convex polygon in E^2, a simple point-in-polygon test is of O(N) complexity, and the optimal algorithm is of O(log N) computational complexity. In the E^3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New point-in-convex-polygon and point-in-convex-polyhedron algorithms are presented, based on space subdivision in the preprocessing stage and resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection or line clipping, can be solved in a similar way.
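As an illustration of the preprocessing idea in the planar case, here is a minimal sketch that builds uniform angular sectors around an interior point so that each query inspects only a constant expected number of edges. It follows the spirit of an O(1)-after-preprocessing test, not necessarily the paper's exact construction; all names are ours.

```python
import numpy as np

class ConvexPolygonLocator:
    """Point-in-convex-polygon queries in O(1) expected time after O(N)
    preprocessing: the angles around an interior point are cut into uniform
    sectors, and each sector remembers where its edge search starts."""

    def __init__(self, vertices, bins_per_vertex=4):
        v = np.asarray(vertices, dtype=float)
        self.c = v.mean(axis=0)                     # interior reference point
        ang = np.arctan2(v[:, 1] - self.c[1], v[:, 0] - self.c[0])
        order = np.argsort(ang)                     # CCW order around c
        self.v, self.ang = v[order], ang[order]
        self.nbins = bins_per_vertex * len(v)
        starts = -np.pi + 2.0 * np.pi * np.arange(self.nbins) / self.nbins
        # last vertex whose angle is at or below each sector's start angle
        self.first = np.searchsorted(self.ang, starts, side='right') - 1

    def contains(self, p):
        p = np.asarray(p, dtype=float)
        a = np.arctan2(p[1] - self.c[1], p[0] - self.c[0])
        n = len(self.v)
        if a < self.ang[0]:                         # wrap-around sector
            j = n - 1
        else:
            b = min(int((a + np.pi) / (2.0 * np.pi) * self.nbins), self.nbins - 1)
            j = max(int(self.first[b]), 0)
            while j + 1 < n and self.ang[j + 1] <= a:
                j += 1                              # O(1) expected steps
        a0, a1 = self.v[j], self.v[(j + 1) % n]
        # inside iff p lies left of the directed edge a0 -> a1 (CCW polygon)
        return ((a1[0] - a0[0]) * (p[1] - a0[1])
                - (a1[1] - a0[1]) * (p[0] - a0[0])) >= 0.0

octagon = [(np.cos(t), np.sin(t)) for t in np.linspace(0.0, 2.0 * np.pi, 9)[:-1]]
loc = ConvexPolygonLocator(octagon)
print(loc.contains((0.2, 0.1)), loc.contains((1.5, 0.0)))   # True False
```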
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) The stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability, should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
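Stated symbolically (our notation, not the paper's), with σ_A the analytical and σ_I the intra-individual standard deviation and ΔSE_90 the systematic error that the control program detects with 90% probability, the two requirements read:

```latex
\text{bias}_{\text{stable}} \approx 0 ,
\qquad
\Delta SE_{90} \;\le\; \tfrac{1}{2}\sqrt{\sigma_A^{2}+\sigma_I^{2}}
\quad\Longrightarrow\quad
\sigma_A \;\le\; 0.15\,\sigma_I .
```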
Burgos, Soledad; Tenorio, Marcela; Zapata, Pamela; Cáceres, Dante D.; Klarian, José; Alvarez, Nancy; Oviedo, Renato; Toro-Campos, Rosario; Claudio, Luz; Iglesias, Verónica
2017-01-01
Between 1984 and 1998, people living in Arica were involuntarily exposed to metal-containing waste stored in the urban area. The study aims to determine whether children who lived near the waste disposal site during early childhood experienced negative effects on their cognitive development. Cognitive performance was assessed using the Wechsler Intelligence Scale for Children. The exposure variable was defined by the year of birth, in three categories: (1) pre-remediation (born before 1999); (2) during remediation (born between 1999 and 2003); and (3) post-remediation (born after 2003). In the crude analysis, a difference of 10 points in average IQ was observed between the groups born in the pre-remediation (81.9 points) and post-remediation (91.1 points) periods. The difference between the two groups was five times higher than that observed among children of similar age and socioeconomic status in other cities of Chile. This result could be related to a period of high potential for exposure to this contaminated site.
Warburton, William K.; Momayezi, Michael
2006-06-20
A method and apparatus for processing step-like output signals (primary signals) generated by non-ideal, for example, nominally single-pole ("N1P") devices. An exemplary method includes creating a set of secondary signals by directing the primary signal along a plurality of signal paths to a signal summation point, summing the secondary signals reaching the signal summation point after propagating along the signal paths to provide a summed signal, performing a filtering or delaying operation in at least one of said signal paths so that the secondary signals reaching said summing point have a defined time correlation with respect to one another, applying a set of weighting coefficients to the secondary signals propagating along said signal paths, and performing a capturing operation after any filtering or delaying operations so as to provide a weighted signal sum value as a measure of the integrated area QgT of the input signal.
Advances in analytical methodologies to guide bioprocess engineering for bio-therapeutics.
Saldova, Radka; Kilcoyne, Michelle; Stöckmann, Henning; Millán Martín, Silvia; Lewis, Amanda M; Tuite, Catherine M E; Gerlach, Jared Q; Le Berre, Marie; Borys, Michael C; Li, Zheng Jian; Abu-Absi, Nicholas R; Leister, Kirk; Joshi, Lokesh; Rudd, Pauline M
2017-03-01
This study was performed to monitor the glycoform distribution of a recombinant antibody fusion protein expressed in CHO cells over the course of fed-batch bioreactor runs using high-throughput methods to accurately determine the glycosylation status of the cell culture and its product. Three different bioreactors running similar conditions were analysed at the same five time-points using the advanced methods described here. N-glycans from cell and secreted glycoproteins from CHO cells were analysed by HILIC-UPLC and MS, and the total glycosylation (both N- and O-linked glycans) secreted from the CHO cells were analysed by lectin microarrays. Cell glycoproteins contained mostly high mannose type N-linked glycans with some complex glycans; sialic acid was α-(2,3)-linked, galactose β-(1,4)-linked, with core fucose. Glycans attached to secreted glycoproteins were mostly complex with sialic acid α-(2,3)-linked, galactose β-(1,4)-linked, with mostly core fucose. There were no significant differences noted among the bioreactors in either the cell pellets or supernatants using the HILIC-UPLC method and only minor differences at the early time-points of days 1 and 3 by the lectin microarray method. In comparing different time-points, significant decreases in sialylation and branching with time were observed for glycans attached to both cell and secreted glycoproteins. Additionally, there was a significant decrease over time in high mannose type N-glycans from the cell glycoproteins. A combination of the complementary methods HILIC-UPLC and lectin microarrays could provide a powerful and rapid HTP profiling tool capable of yielding qualitative and quantitative data for a defined biopharmaceutical process, which would allow valuable near 'real-time' monitoring of the biopharmaceutical product. Copyright © 2016 Elsevier Inc. All rights reserved.
The Creep of Laminated Synthetic Resin Plastics
NASA Technical Reports Server (NTRS)
Perkuhn, H
1941-01-01
The long-time loading strength of a number of laminated synthetic resin plastics was ascertained and the effect of molding pressure and resin content determined. The best value was observed with a 30 to 40 percent resin content. The long-time loading strength also increases with increasing molding pressure up to 250 kg/cm^2; a further rise in pressure affords no further substantial improvement. The creep strength is defined as the load which in the hundredth hour of loading produces a rate of elongation of 5 × 10^-4 percent per hour. The creep strength values of different materials were determined and tabulated. The effect of humidity during long-term tests is pointed out.
NASA Technical Reports Server (NTRS)
Batterson, James G. (Technical Monitor); Morelli, E. A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed-loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Thrust Vectoring (TV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
Topological phases in a Kitaev chain with imbalanced pairing
NASA Astrophysics Data System (ADS)
Li, C.; Zhang, X. Z.; Zhang, G.; Song, Z.
2018-03-01
We systematically study a Kitaev chain with imbalanced pair creation and annihilation, which is introduced by non-Hermitian pairing terms. An exact phase diagram shows that the topological phase is still robust under the influence of the conditional imbalance. The gapped phases are characterized by a topological invariant, the extended Zak phase, which is defined by the biorthonormal inner product. Such phases are destroyed at the points where the coalescence of ground states occurs, associated with the time-reversal symmetry breaking. We find that the Majorana edge modes also exist in an open chain in the time-reversal symmetry-unbroken region, demonstrating the bulk-edge correspondence in such a non-Hermitian system.
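For readers unfamiliar with the biorthonormal construction, one standard way of writing such an extended Zak phase for a non-Hermitian Bloch Hamiltonian H(k) is the following (the notation is ours; the paper's conventions may differ):

```latex
H(k)\,|\psi(k)\rangle = E(k)\,|\psi(k)\rangle,\qquad
H^{\dagger}(k)\,|\chi(k)\rangle = E^{*}(k)\,|\chi(k)\rangle,\qquad
\langle\chi(k)|\psi(k)\rangle = 1,
\qquad
\mathcal{Z} \;=\; i\oint_{\mathrm{BZ}}\langle\chi(k)|\,\partial_{k}\,|\psi(k)\rangle\,dk .
```

Here the left and right eigenvectors are normalized against each other; in the Hermitian limit χ = ψ and the ordinary Zak phase is recovered.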
Symmetry Relations in Chemical Kinetics Arising from Microscopic Reversibility
NASA Astrophysics Data System (ADS)
Adib, Artur B.
2006-01-01
It is shown that the kinetics of time-reversible chemical reactions having the same equilibrium constant but different initial conditions are closely related to one another by a directly measurable symmetry relation analogous to chemical detailed balance. In contrast to detailed balance, however, this relation does not require knowledge of the elementary steps that underlie the reaction, and remains valid in regimes where the concept of rate constants is ill defined, such as at very short times and in the presence of low activation barriers. Numerical simulations of a model of isomerization in solution are provided to illustrate the symmetry under such conditions, and potential applications in protein folding or unfolding are pointed out.
Adoption of evidence into practice: can change be sustainable?
Cockburn, Jill
2004-03-15
Few studies have monitored change in professional practice over time to determine the sustainability of change. Research from other behavioural change literature shows that initial change is difficult to maintain, with reported relapse rates as high as 80%. Interventions most likely to succeed are based on a clear understanding of target behaviours and the environmental context. Facilitators and barriers are usually multifaceted and occur at a number of interrelated levels. The issue targeted for intervention must be clearly defined at the outset, so that antecedents, determinants and supporting mechanisms can be defined, suggesting points for intervention and strategies for initial and sustainable change. The target population's readiness to change is an important factor at both an individual and organisational level. In most cases, a combination of different interventions will be needed to achieve lasting change.
Disease management of dairy calves and heifers.
McGuirk, Sheila M
2008-03-01
This article focuses on the most important diseases of dairy calves and heifers and presents clinical approaches that can improve detection, diagnosis, and treatment of herd-based problems. A systematic herd investigation strategy is pivotal to define the problems, understand important risk factors, develop a plan, and make recommendations for disease management accurately. A review of records, colostrum and feeding routines, housing and bedding management, routine procedures, vaccination, and treatment protocols begins the investigation and determines which diagnostic procedures and testing strategies are most useful. Disease management is most effective when the problem source is well defined and the exposure can be limited, calf immunity can be enhanced, or a combination of both. Screening examinations performed regularly or done at strategic time points improves detection of disease, can be used to monitor treatment outcomes, and can avoid disease outbreaks.
How many upper Eocene microspherule layers: More than we thought
NASA Technical Reports Server (NTRS)
Hazel, Joseph E.
1988-01-01
The scientific controversy over the origin of upper Eocene tektites, microtektites and other microspherules cannot be logically resolved until it is determined just how many events are involved. The microspherule-bearing beds in marine sediments have been dated using standard biozonal techniques. Although a powerful stratigraphic tool, zonal biostratigraphy has its limitations. One is that if an event, such as a microspherule occurrence, is observed to occur in a zone at one locality and a similar event is then observed in the same zone at another locality, it may still be unwarranted to conclude that these events correlate exactly. To be in a zone, a sample need only lie between the fossil events that define the zone boundaries. It is often very difficult to determine accurately where within a zone one might be. Further, the zone-defining events do not everywhere occur at the same points in time. That is, the ranges of the defining taxa are not always filled. Thus, the length of time represented by a zone (but not, of course, its chronozone) can vary from place to place. These problems can be offset by the use of chronostratigraphic modelling techniques such as Graphic Correlation. This technique was used to build a Cretaceous and Cenozoic model containing fossil, magnetopolarity, and other events. The scale of the model can be demonstrated to be linear with time. This model was used to determine the chronostratigraphic position of the upper Eocene microspherule layers.
Estimation of the time since death--reconsidering the re-establishment of rigor mortis.
Anders, Sven; Kunz, Michaela; Gehl, Axel; Sehner, Susanne; Raupach, Tobias; Beck-Bornholdt, Hans-Peter
2013-01-01
In forensic medicine, there is an undefined data background for the phenomenon of re-establishment of rigor mortis after mechanical loosening, a method used in establishing time since death in forensic casework that is thought to occur up to 8 h post-mortem. Nevertheless, the method is widely described in textbooks on forensic medicine. We examined 314 joints (elbow and knee) of 79 deceased at defined time points up to 21 h post-mortem (hpm). Data were analysed using a random intercept model. Here, we show that re-establishment occurred in 38.5% of joints at 7.5 to 19 hpm. Therefore, the maximum time span for the re-establishment of rigor mortis appears to be 2.5-fold longer than thought so far. These findings have major impact on the estimation of time since death in forensic casework.
Ordinary differential equation for local accumulation time.
Berezhkovskii, Alexander M
2011-08-21
Cell differentiation in a developing tissue is controlled by the concentration fields of signaling molecules called morphogens. Formation of these concentration fields can be described by the reaction-diffusion mechanism in which locally produced molecules diffuse through the patterned tissue and are degraded. The formation kinetics at a given point of the patterned tissue can be characterized by the local accumulation time, defined in terms of the local relaxation function. Here, we show that this time satisfies an ordinary differential equation. Using this equation one can straightforwardly determine the local accumulation time, i.e., without preliminary calculation of the relaxation function by solving the partial differential equation, as was done in previous studies. We derive this ordinary differential equation together with the accompanying boundary conditions and demonstrate that the earlier obtained results for the local accumulation time can be recovered by solving this equation. © 2011 American Institute of Physics
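As a sketch of where such an ordinary differential equation comes from (our one-dimensional derivation; the paper's exact form, generality, and boundary conditions may differ), consider the model ∂c/∂t = D ∂²c/∂x² − kc + q(x) with steady state c_s(x) and c(x,0) = 0. The relaxation function and local accumulation time are

```latex
R(x,t) = 1-\frac{c(x,t)}{c_{s}(x)}, \qquad \tau(x)=\int_{0}^{\infty} R(x,t)\,dt .
```

Integrating the equation obeyed by c_s − c over all time, and using the steady-state relation D c_s'' − k c_s + q = 0, then yields

```latex
D\,\tau'' \;+\; 2D\,\frac{c_{s}'}{c_{s}}\,\tau' \;-\; \frac{q}{c_{s}}\,\tau \;+\; 1 \;=\; 0 ,
```

so τ(x) can indeed be found by solving an ODE, without first computing the full time-dependent concentration field.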
An industrial robot singular trajectories planning based on graphs and neural networks
NASA Astrophysics Data System (ADS)
Łęgowski, Adrian; Niezabitowski, Michał
2016-06-01
Singular trajectories are rarely used because of the difficulties that arise during their realization. A method of planning trajectories for a given set of points in task space with the use of graphs and neural networks is presented. At every desired point the inverse kinematics problem is solved in order to derive all possible solutions. A graph of solutions is constructed, and the shortest path through it is determined to define the required nodes in joint space. Neural networks are used to define the path between these nodes.
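The graph-of-solutions step can be sketched for a planar two-link (2R) arm, where every reachable task-space point admits an elbow-up and an elbow-down solution; the dynamic program below is equivalent to a shortest path through the layered solution graph. The arm, the joint-space metric (which ignores angle wrapping), and all names are illustrative assumptions, and the neural-network stage is omitted.

```python
import numpy as np

def ik_2r(x, y, l1=1.0, l2=1.0):
    """Both inverse-kinematics solutions (elbow-up / elbow-down) of a planar
    2R arm for end-effector position (x, y); returns [] if unreachable."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1.0:
        return []
    sols = []
    for s2 in (np.sqrt(1 - c2 * c2), -np.sqrt(1 - c2 * c2)):
        t2 = np.arctan2(s2, c2)
        t1 = np.arctan2(y, x) - np.arctan2(l2 * s2, l1 + l2 * c2)
        sols.append((t1, t2))
    return sols

def plan(points):
    """Pick one IK solution per task-space point so the summed joint-space
    travel is minimal: dynamic programming over the layered solution graph
    (equivalent to a shortest path from the first layer to the last)."""
    layers = [ik_2r(*p) for p in points]
    cost, back = [0.0] * len(layers[0]), []
    for prev, cur in zip(layers, layers[1:]):
        newc, arg = [], []
        for b in cur:
            cands = [cost[i] + np.hypot(b[0] - a[0], b[1] - a[1])
                     for i, a in enumerate(prev)]
            i_best = int(np.argmin(cands))
            arg.append(i_best)
            newc.append(cands[i_best])
        back.append(arg)
        cost = newc
    j = int(np.argmin(cost))
    path = [j]
    for arg in reversed(back):        # backtrack the cheapest path
        j = arg[j]
        path.append(j)
    path.reverse()
    return [layers[k][path[k]] for k in range(len(layers))]

for q in plan([(1.2, 0.3), (1.0, 0.8), (0.4, 1.3)]):
    print(tuple(round(float(v), 3) for v in q))
```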
Accuracy analysis of pointing control system of solar power station
NASA Technical Reports Server (NTRS)
Hung, J. C.; Peebles, P. Z., Jr.
1978-01-01
The first-phase effort concentrated on defining the minimum basic functions that the retrodirective array must perform, identifying circuits that are capable of satisfying the basic functions, and looking at some of the error sources in the system and how they affect accuracy. The initial effort also examined three methods for generating torques for mechanical antenna control, performed a rough analysis of the flexible body characteristics of the solar collector, and defined a control system configuration for mechanical pointing control of the array.
A Defense of Semantic Minimalism
ERIC Educational Resources Information Center
Kim, Su
2012-01-01
Semantic Minimalism is a position about the semantic content of declarative sentences, i.e., the content that is determined entirely by syntax. It is defined by the following two points: "Point 1": The semantic content is a complete/truth-conditional proposition. "Point 2": The semantic content is useful to a theory of…
Using Technology to Unify Geometric Theorems about the Power of a Point
ERIC Educational Resources Information Center
Contreras, Jose N.
2011-01-01
In this article, I describe a classroom investigation in which a group of prospective secondary mathematics teachers discovered theorems related to the power of a point using "The Geometer's Sketchpad" (GSP). The power of a point is defined as follows: Let "P" be a fixed point coplanar with a circle. If line "PA" is a secant line that intersects…
Oversampling of digitized images. [effects on interpolation in signal processing
NASA Technical Reports Server (NTRS)
Fischel, D.
1976-01-01
Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
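The recommended route, interpolation via the Sampling Theorem, amounts to Whittaker-Shannon sinc reconstruction. A minimal sketch, assuming a band-limited signal sampled at or above the Nyquist rate; names and parameters are illustrative.

```python
import numpy as np

def sinc_interp(samples, T, times):
    """Whittaker-Shannon reconstruction: estimate x(t) at arbitrary `times`
    from samples x[n] taken every T seconds.  Assumes the signal is
    band-limited below the Nyquist frequency 1/(2T)."""
    n = np.arange(len(samples))
    # np.sinc is the normalized sinc: sin(pi x) / (pi x)
    return np.array([float(np.dot(samples, np.sinc((t - n * T) / T)))
                     for t in times])

T = 1.0 / 8.0                          # a 1 Hz tone sampled at 8 Hz
ts = np.arange(0.0, 2.0, T)
x = np.sin(2.0 * np.pi * ts)
fine = np.linspace(0.2, 1.8, 9)        # query points away from the edges
err = np.abs(sinc_interp(x, T, fine) - np.sin(2.0 * np.pi * fine))
print(err.max())                       # small, despite the finite record
```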
General constraints on sampling wildlife on FIA plots
Bailey, L.L.; Sauer, J.R.; Nichols, J.D.; Geissler, P.H.; McRoberts, Ronald E.; Reams, Gregory A.; Van Deusen, Paul C.; McWilliams, William H.; Cieszewski, Chris J.
2005-01-01
This paper reviews the constraints to sampling wildlife populations at FIA points. Wildlife sampling programs must have well-defined goals and provide information adequate to meet those goals. Investigators should choose a state variable based on information needs and the spatial sampling scale. We discuss estimation-based methods for three state variables: species richness, abundance, and patch occupancy. All methods incorporate two essential sources of variation: detectability estimation and spatial variation. FIA sampling imposes specific space and time criteria that may need to be adjusted to meet local wildlife objectives.
NASA Technical Reports Server (NTRS)
Englander, Jacob; Vavrina, Matthew
2015-01-01
The customer (scientist or project manager) most often does not want just one point solution to the mission design problem. Instead, an exploration of a multi-objective trade space is required. For a typical main-belt asteroid mission the customer might wish to see the trade space of launch date vs. flight time vs. deliverable mass, while varying the destination asteroid, planetary flybys, launch year, etcetera. To address this question we use a multi-objective discrete outer loop which defines many single-objective, real-valued inner-loop problems.
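The outer-loop/inner-loop split can be illustrated with a toy sketch: enumerate the discrete choices, let a stand-in inner loop return one optimized design per choice, and keep the non-dominated set. The cost model below is invented purely for illustration.

```python
import itertools

def pareto_front(solutions, keys):
    """Keep the non-dominated designs (all listed objectives minimized)."""
    return [s for s in solutions
            if not any(all(o[k] <= s[k] for k in keys) and
                       any(o[k] < s[k] for k in keys) for o in solutions)]

def inner_loop(asteroid, launch_year):
    """Toy stand-in for a single-objective trajectory optimization."""
    flight_time = 2.0 + 0.3 * asteroid + 0.1 * (launch_year - 2025)
    mass = 1000.0 - 40.0 * asteroid - 15.0 * (launch_year - 2025) ** 2
    return {"asteroid": asteroid, "year": launch_year,
            "flight_time": flight_time, "neg_mass": -mass}

# the discrete outer loop: every (asteroid, launch year) combination
runs = [inner_loop(a, y)
        for a, y in itertools.product(range(4), range(2025, 2029))]
for s in pareto_front(runs, keys=("flight_time", "neg_mass")):
    print(s)
```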
Farthing, William Earl [Pinson, AL; Felix, Larry Gordon [Pelham, AL; Snyder, Todd Robert [Birmingham, AL
2008-02-12
An apparatus and method for diluting and cooling gas that is extracted from high-temperature and/or high-pressure industrial processes. Through a feedback process, a specialized, CFD-modeled dilution cooler is employed, along with real-time estimations of the point at which condensation will occur within the dilution cooler, to define a level of dilution and a diluted gas temperature that result in a gas, containing no condensed hydrocarbon compounds or condensed moisture, that can be conveyed to standard gas analyzers.
Farthing, William Earl; Felix, Larry Gordon; Snyder, Todd Robert
2009-12-15
An apparatus and method for diluting and cooling gas that is extracted from high-temperature and/or high-pressure industrial processes. Through a feedback process, a specialized, CFD-modeled dilution cooler is employed, along with real-time estimations of the point at which condensation will occur within the dilution cooler, to define a level of dilution and a diluted gas temperature that result in a gas, containing no condensed hydrocarbon compounds or condensed moisture, that can be conveyed to standard gas analyzers.
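The condensation-point feedback can be illustrated with a small sketch that estimates the diluted gas dew point from the Magnus approximation and raises the dilution ratio until that dew point sits safely below the cooler temperature. The margin, constants, and function names are our assumptions; the patent relies on a far more detailed CFD-based estimate.

```python
import math

def dew_point_c(p_w_pa):
    """Magnus approximation: dew point (deg C) for a water vapour partial
    pressure in Pa; a reasonable fit in roughly the 0-60 deg C range."""
    a, b, p0 = 17.62, 243.12, 611.2
    g = math.log(p_w_pa / p0)
    return b * g / (a - g)

def min_dilution(p_w_sample_pa, cooler_temp_c, margin_c=5.0):
    """Smallest integer dilution ratio N (1 part sample in N parts total)
    keeping the diluted dew point `margin_c` below the cooler temperature."""
    n = 1
    while dew_point_c(p_w_sample_pa / n) > cooler_temp_c - margin_c:
        n += 1
    return n

# sample gas carrying 20 kPa of water vapour, dilution cooler held at 25 C
print(min_dilution(20e3, 25.0))   # -> 9 under these assumptions
```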
Hieronymi Fracastorii: the Italian scientist who described the "French disease"*
Pesapane, Filippo; Marcelli, Stefano; Nazzaro, Gianluca
2015-01-01
Girolamo Fracastoro was a true Italian Renaissance man: he excelled in literature, poetry, music, geography, geology, philosophy, astronomy and, of course, medicine, to the point that Charles-Edward Amory Winslow defined him as "a peak unequaled by anyone between Hippocrates and Pasteur". In 1521 Fracastoro wrote the poem "Syphilis Sive de Morbo Gallico", in which the use of the term "syphilis" was established for this terrible and inexplicably transmitted disease, often referred to as the "French disease" by the people of the time and by Fracastoro himself.
Time-series Analysis of Heat Waves and Emergency Department Visits in Atlanta, 1993 to 2012
Chen, Tianqi; Sarnat, Stefanie E.; Grundstein, Andrew J.; Winquist, Andrea
2017-01-01
Background: Heat waves are extreme weather events that have been associated with adverse health outcomes. However, there is limited knowledge of heat waves' impact on population morbidity, such as emergency department (ED) visits. Objectives: We investigated associations between heat waves and ED visits for 17 outcomes in Atlanta over a 20-year period, 1993–2012. Methods: Associations were estimated using Poisson log-linear models controlling for continuous air temperature, dew-point temperature, day of week, holidays, and time trends. We defined heat waves as periods of ≥2 consecutive days with temperatures beyond the 98th percentile of the temperature distribution over the period from 1945–2012. We considered six heat wave definitions using maximum, minimum, and average air temperatures and apparent temperatures. Associations by heat wave characteristics were examined. Results: Among all outcome-heat wave combinations, associations were strongest between ED visits for acute renal failure and heat waves defined by maximum apparent temperature at lag 0 [relative risk (RR) = 1.15; 95% confidence interval (CI): 1.03–1.29], ED visits for ischemic stroke and heat waves defined by minimum temperature at lag 0 (RR = 1.09; 95% CI: 1.02–1.17), and ED visits for intestinal infection and heat waves defined by average temperature at lag 1 (RR = 1.10; 95% CI: 1.00–1.21). ED visits for all internal causes were associated with heat waves defined by maximum temperature at lag 1 (RR = 1.02; 95% CI: 1.00, 1.04). Conclusions: Heat waves can confer additional risks of ED visits beyond those of daily air temperature, even in a region with high air-conditioning prevalence. https://doi.org/10.1289/EHP44
Time-series Analysis of Heat Waves and Emergency Department Visits in Atlanta, 1993 to 2012.
Chen, Tianqi; Sarnat, Stefanie E; Grundstein, Andrew J; Winquist, Andrea; Chang, Howard H
2017-05-31
Heat waves are extreme weather events that have been associated with adverse health outcomes. However, there is limited knowledge of heat waves' impact on population morbidity, such as emergency department (ED) visits. We investigated associations between heat waves and ED visits for 17 outcomes in Atlanta over a 20-year period, 1993-2012. Associations were estimated using Poisson log-linear models controlling for continuous air temperature, dew-point temperature, day of week, holidays, and time trends. We defined heat waves as periods of ≥2 consecutive days with temperatures beyond the 98th percentile of the temperature distribution over the period from 1945-2012. We considered six heat wave definitions using maximum, minimum, and average air temperatures and apparent temperatures. Associations by heat wave characteristics were examined. Among all outcome-heat wave combinations, associations were strongest between ED visits for acute renal failure and heat waves defined by maximum apparent temperature at lag 0 [relative risk (RR) = 1.15; 95% confidence interval (CI): 1.03-1.29], ED visits for ischemic stroke and heat waves defined by minimum temperature at lag 0 (RR = 1.09; 95% CI: 1.02-1.17), and ED visits for intestinal infection and heat waves defined by average temperature at lag 1 (RR = 1.10; 95% CI: 1.00-1.21). ED visits for all internal causes were associated with heat waves defined by maximum temperature at lag 1 (RR = 1.02; 95% CI: 1.00, 1.04). Heat waves can confer additional risks of ED visits beyond those of daily air temperature, even in a region with high air-conditioning prevalence. https://doi.org/10.1289/EHP44.
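A minimal sketch of this type of Poisson log-linear analysis on synthetic data; a single-day exceedance stands in for the ≥2-consecutive-day heat wave definition, and all variable names and effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3 * 365
df = pd.DataFrame({
    "temp": 22 + 8 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n),
    "dow": np.arange(n) % 7,
    "t": np.arange(n),
})
df["heatwave"] = (df["temp"] > df["temp"].quantile(0.98)).astype(int)
lam = np.exp(3.0 + 0.01 * df["temp"] + 0.10 * df["heatwave"])  # truth: RR = e^0.1
df["visits"] = rng.poisson(lam)

# Poisson log-linear model: heat-wave effect adjusted for continuous
# temperature, day of week, and a linear time trend
fit = smf.glm("visits ~ heatwave + temp + C(dow) + t",
              data=df, family=sm.families.Poisson()).fit()
print(np.exp(fit.params["heatwave"]))   # estimated heat-wave rate ratio
```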
Lunar Surface Architecture Utilization and Logistics Support Assessment
NASA Astrophysics Data System (ADS)
Bienhoff, Dallas; Findiesen, William; Bayer, Martin; Born, Andrew; McCormick, David
2008-01-01
Crew and equipment utilization and logistics support needs for the point of departure lunar outpost as presented by the NASA Lunar Architecture Team (LAT) and alternative surface architectures were assessed for the first ten years of operation. The lunar surface architectures were evaluated and manifests created for each mission. Distances between Lunar Surface Access Module (LSAM) landing sites and emplacement locations were estimated. Physical characteristics were assigned to each surface element and operational characteristics were assigned to each surface mobility element. Stochastic analysis was conducted to assess probable times to deploy surface elements, conduct exploration excursions, and perform defined crew activities. Crew time is divided into Outpost-related, exploration and science, overhead, and personal activities. Outpost-related time includes element deployment, EVA maintenance, IVA maintenance, and logistics resupply. Exploration and science activities include mapping, geological surveys, science experiment deployment, sample analysis and categorizing, and physiological and biological tests in the lunar environment. Personal activities include sleeping, eating, hygiene, exercising, and time off. Overhead activities include precursor or close-out tasks that must be accomplished but don't fit into the other three categories, such as suit donning and doffing, airlock cycle time, suit cleaning, suit maintenance, post-landing safing actions, and pre-departure preparations. Equipment usage time, spares, maintenance actions, and Outpost consumables are also estimated to provide input into logistics support planning. Results are normalized relative to the NASA LAT point of departure lunar surface architecture.
Some limitations of frequency as a component of risk: an expository note.
Cox, Louis Anthony
2009-02-01
Students of risk analysis are often taught that "risk is frequency times consequence" or, more generally, that risk is determined by the frequency and severity of adverse consequences. But is it? This expository note reviews the concepts of frequency as average annual occurrence rate and as the reciprocal of mean time to failure (MTTF) or mean time between failures (MTBF) in a renewal process. It points out that if two risks (represented as two (frequency, severity) pairs for adverse consequences) have identical values for severity but different values of frequency, then it is not necessarily true that the one with the smaller value of frequency is preferable, and this is true no matter how frequency is defined. In general, there is not necessarily an increasing relation between the reciprocal of the mean time until an event occurs, its long-run average occurrences per year, and other criteria, such as the probability or expected number of times that it will happen over a specific interval of interest, such as the design life of a system. Risk depends on more than the frequency and severity of consequences. It also depends on other information about the probability distribution for the time of a risk event that can become lost in simple measures of event "frequency." More flexible descriptions of risky processes, such as point process models, can avoid these limitations.
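A small numerical illustration of the central point: two failure-time models with the same MTTF, and hence the same "frequency" 1/MTTF, can imply very different probabilities of failure within a fixed design life. The distributions and horizon are our choices for illustration.

```python
import math

# Two failure-time models with the SAME mean time to failure (10 years),
# i.e. the same "frequency" 1/MTTF, but different design-life risk.
mttf = 10.0
design_life = 2.0

# exponential: F(t) = 1 - exp(-t / mttf)
p_exp = 1 - math.exp(-design_life / mttf)

# Weibull with shape k = 3 and the same mean: scale = mttf / Gamma(1 + 1/k)
k = 3.0
scale = mttf / math.gamma(1 + 1 / k)
p_wbl = 1 - math.exp(-((design_life / scale) ** k))

print(f"P(failure within {design_life:.0f} y): "
      f"exponential {p_exp:.3f}, Weibull {p_wbl:.4f}")   # ~0.181 vs ~0.006
```

Both processes have the same long-run failure frequency, yet the probability of a failure within the two-year design life differs by more than a factor of thirty, which is exactly the information a bare "frequency" discards.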
Validation of the WristOx 3100 oximeter for the diagnosis of sleep apnea/hypopnea syndrome.
Nigro, Carlos Alberto; Aimaretti, Silvia; Gonzalez, Sergio; Rhodius, Edgardo
2009-05-01
To evaluate the diagnostic accuracy of the Nonin WristOx 3100 and its software (nVision 5.0) in patients with suspicion of sleep apnea/hypopnea syndrome (SAHS). All participants (168) had the oximetry and polysomnography performed simultaneously. The two recordings were interpreted blindly. The software calculated the adjusted O2 desaturation index (ADI), the mean number of O2 desaturations per hour of total recording analyzed time of ≥2%, 3%, 4%, 5%, and 6% (ADI2, 3, 4, 5, and 6), and AT90, the accumulated time at SO2 <90%. The ADI2, 3, 4, 5, and 6 and the AT90 cutoff points that best discriminated between subjects with or without SAHS arose from the receiver operating characteristic curve analysis. The sensitivity (S), specificity (E), and positive and negative likelihood ratios (LR+, LR-) for the different ADI thresholds were calculated. One hundred and fifty-four patients were included (119 men, mean age 51, median apnea/hypopnea index [AHI] 14, median body mass index [BMI] 28.3 kg/m^2). The best cutoff points of ADI were: SAHS = AHI ≥5: ADI2 >19.3 (S 89%, E 94%, LR+ 15.5, LR- 0.11); SAHS = AHI ≥10: ADI3 >10.5 (S 88%, E 94%, LR+ 15, LR- 0.12); SAHS = AHI ≥15: ADI3 >13.4 (S 88%, E 90%, LR+ 8.9, LR- 0.14). AT90 had the lowest diagnostic accuracy. An ADI2 ≤12.2 excluded SAHS (AHI ≥5 and 10; S 100%, LR- 0) and an ADI3 >4.3 (AHI ≥5 and 10) or >32 (AHI ≥15) confirmed SAHS (E 100%). A negative oximetry, defined as ADI2 ≤12.2, excluded SAHS defined as AHI ≥5 or 10 with a sensitivity and negative likelihood ratio of 100% and 0, respectively. Furthermore, a positive oximetry, defined as an ADI3 >32 (SAHS = AHI ≥15), had a specificity of 100% to confirm the pathology.
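The accuracy measures quoted here follow directly from the 2x2 table against polysomnography; a minimal helper, with illustrative counts rather than the study's raw data:

```python
def diagnostic_accuracy(tp, fn, fp, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table,
    as used to judge each oximetry cutoff against polysomnography."""
    se, sp = tp / (tp + fn), tn / (tn + fp)
    return {"Se": se, "Sp": sp,
            "LR+": se / (1 - sp) if sp < 1 else float("inf"),
            "LR-": (1 - se) / sp}

# illustrative counts only; they roughly reproduce S 89%, E 94% as reported
print(diagnostic_accuracy(tp=89, fn=11, fp=3, tn=51))
```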
The symmetry of single-molecule conduction.
Solomon, Gemma C; Gagliardi, Alessio; Pecchia, Alessandro; Frauenheim, Thomas; Di Carlo, Aldo; Reimers, Jeffrey R; Hush, Noel S
2006-11-14
We introduce the conductance point group which defines the symmetry of single-molecule conduction within the nonequilibrium Green's function formalism. It is shown, either rigorously or to within a very good approximation, to correspond to a molecular-conductance point group defined purely in terms of the properties of the conducting molecule. This enables single-molecule conductivity to be described in terms of key qualitative chemical descriptors that are independent of the nature of the molecule-conductor interfaces. We apply this to demonstrate how symmetry controls the conduction through 1,4-benzenedithiol chemisorbed to gold electrodes as an example system, listing also the molecular-conductance point groups for a range of molecules commonly used in molecular electronics research.
Sagittal focusing Laue monochromator
Zhong, Zhong [Stony Brook, NY]; Hanson, Jonathan [Wading River, NY]; Hastings, Jerome [Stanford, CA]; Kao, Chi-Chang [Setauket, NY]; Lenhard, Anthony [Medford, NY]; Siddons, David Peter [Cutchogue, NY]; Zhong, Hui [Coram, NY]
2009-03-24
An x-ray focusing device generally includes a slide pivotable about a pivot point defined at a forward end thereof, a rail unit fixed with respect to the pivotable slide, a forward crystal for focusing x-rays disposed at the forward end of the pivotable slide and a rearward crystal for focusing x-rays movably coupled to the pivotable slide and the fixed rail unit at a distance rearward from the forward crystal. The forward and rearward crystals define reciprocal angles of incidence with respect to the pivot point, wherein pivoting of the slide about the pivot point changes the incidence angles of the forward and rearward crystals while simultaneously changing the distance between the forward and rearward crystals.
Herrera, VM; Casas, JP; Miranda, JJ; Perel, P; Pichardo, R; González, A; Sanchez, JR; Ferreccio, C; Aguilera, X; Silva, E; Oróstegui, M; Gómez, LF; Chirinos, JA; Medina-Lezama, J; Pérez, CM; Suárez, E; Ortiz, AP; Rosero, L; Schapochnik, N; Ortiz, Z; Ferrante, D; Diaz, M; Bautista, LE
2009-01-01
Background: Cut points for defining obesity have been derived from mortality data among Whites from Europe and the United States, and their accuracy in screening for high risk of coronary heart disease (CHD) in other ethnic groups has been questioned. Objective: To compare the accuracy and to define ethnic- and gender-specific optimal cut points for body mass index (BMI), waist circumference (WC) and waist-to-hip ratio (WHR) when they are used in screening for high risk of CHD in the Latin-American and the US populations. Methods: We estimated the accuracy and optimal cut points for BMI, WC and WHR to screen for CHD risk in Latin Americans (n=18 976), non-Hispanic Whites (Whites; n=8956), non-Hispanic Blacks (Blacks; n=5205) and Hispanics (n=5803). High risk of CHD was defined as a 10-year risk ≥20% (Framingham equation). The area under the receiver operator characteristic curve (AUC) and the misclassification-cost term were used to assess accuracy and to identify optimal cut points. Results: WHR had the highest AUC in all ethnic groups (from 0.75 to 0.82) and BMI had the lowest (from 0.50 to 0.59). The optimal cut point for BMI was similar across ethnic/gender groups (27 kg/m^2). In women, cut points for WC (94 cm) and WHR (0.91) were consistent by ethnicity. In men, cut points for WC and WHR varied significantly with ethnicity: from 91 cm in Latin Americans to 102 cm in Whites, and from 0.94 in Latin Americans to 0.99 in Hispanics, respectively. Conclusion: WHR is the most accurate anthropometric indicator to screen for high risk of CHD, whereas BMI is almost uninformative. The same BMI cut point should be used in all men and women. Unique cut points for WC and WHR should be used in all women, but ethnic-specific cut points seem warranted among men.
3D change detection in staggered voxels model for robotic sensing and navigation
NASA Astrophysics Data System (ADS)
Liu, Ruixu; Hampshire, Brandon; Asari, Vijayan K.
2016-05-01
3D scene change detection is a challenging problem in robotic sensing and navigation, with several unpredictable aspects. A change detection method which can support various applications in varying environmental conditions is proposed. Point cloud models are acquired from an RGB-D sensor, which provides the required color and depth information, and change detection is performed on the robot-view point cloud model. A bilateral filter smooths the surface and fills holes while preserving edge details in the depth image. Registration of the point cloud model is implemented using the Random Sample Consensus (RANSAC) algorithm, with surface normals used in a preliminary stage to estimate the ground and walls. After preprocessing the data, we create a point voxel model which labels each voxel as surface or free space, and a color model which assigns each occupied voxel the mean color of all points it contains. Preliminary changes are detected by an XOR subtraction on the point voxel model. Next, the eight neighbors of each center voxel are examined: if they are neither all 'changed' voxels nor all 'unchanged' voxels, a histogram of location and hue-channel color is estimated. The experimental evaluations performed to assess the capability of our algorithm show promising results for novel change detection, indicating all the changing objects with a very limited false alarm rate.
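The occupancy-XOR core of such a pipeline is easy to sketch on a regular boolean grid; registration, bilateral filtering, the eight-neighbor consistency check, and the color histograms are omitted, and all sizes are illustrative assumptions.

```python
import numpy as np

def voxelize(points, origin, size, dims):
    """Boolean occupancy grid: a voxel is 'surface' if any point falls in it."""
    occ = np.zeros(dims, dtype=bool)
    idx = np.floor((points - origin) / size).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(dims)), axis=1)
    occ[tuple(idx[ok].T)] = True
    return occ

rng = np.random.default_rng(2)
origin, size, dims = np.zeros(3), 0.1, (40, 40, 40)
scene = rng.uniform(0.0, 4.0, (5000, 3))
# second epoch: the last 500 points (an "object") are displaced by 0.8 m in x
moved = np.vstack([scene[:-500], scene[-500:] + np.array([0.8, 0.0, 0.0])])

a = voxelize(scene, origin, size, dims)
b = voxelize(moved, origin, size, dims)
changed = a ^ b            # XOR: occupied in exactly one of the two models
print(int(changed.sum()), "changed voxels out of", a.size)
```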
Serial MRI evaluation following arthroscopic rotator cuff repair in double-row technique.
Stahnke, Katharina; Nikulka, Constanze; Diederichs, Gerd; Haneveld, Hendrik; Scheibel, Markus; Gerhardt, Christian
2016-05-01
So far, recurrent rotator cuff defects have been described as occurring in the early postoperative period after arthroscopic repair. The aim of this study was to evaluate the musculotendinous structure of the supraspinatus, as well as bone marrow edema or osteolysis, after arthroscopic double-row repair. Therefore, magnetic resonance (MR) images were acquired at defined intervals up to 2 years postoperatively. Case series; Level of evidence, 3. MR imaging was performed within 7 days and at 3, 6, 12, 26, 52 and 108 weeks after surgery. All patients were operated on using an arthroscopic modified suture bridge technique. Tendon integrity, tendon retraction ["footprint coverage" (FPC)], muscular atrophy and fatty infiltration (signal intensity analysis) were measured at all time points. Furthermore, postoperative bone marrow edema and signs of osteolysis were assessed. MR images of 13 non-consecutive patients (6 female/7 male, mean age 61.05 ± 7.7 years) could be evaluated at all time points up to a mean of 108 weeks postoperatively. 5 of 6 patients with a recurrent defect at final follow-up displayed a time of failure between 12 and 24 months after surgery. The predominant mode of failure was medial cuff failure, in 4 of 6 cases. The initial FPC increased significantly up to the 2-year follow-up (p = 0.004). Evaluations of muscular atrophy or fatty infiltration were not significantly different when comparing the results of all time points (p > 0.05). Postoperative bone marrow edema disappeared completely by 6 months after surgery, whereas signs of osteolysis appeared at the 3-month follow-up and increased to final follow-up. Recurrent defects after arthroscopic reconstruction of supraspinatus tears in modified suture bridge technique seem to occur between 12 and 24 months after surgery. Serial MRI evaluation shows good muscle structure at all time points. Postoperative bone marrow edema disappears completely several months after surgery. Signs of osteolysis seem to appear caused by bio-absorbable anchor implantations.
Gill, Thomas M; Han, Ling; Gahbauer, Evelyne A; Leo-Summers, Linda; Allore, Heather G
2018-05-02
To evaluate the prognostic effect of changes in physical function at different intervals over the prior year on subsequent outcomes after accounting for present function. Prospective longitudinal study. Greater New Haven, Connecticut, from March 1998 to January 2006. Community-living persons aged 71 and older who completed an 18-month comprehensive assessment (N=658). Disability in 13 activities of daily living, instrumental activities of daily living, and mobility activities was assessed at the 18-month comprehensive assessment and at 12, 6, and 3 months before 18 months. Time to death and long-term nursing home admission, defined as 3 months and longer, were ascertained for up to 5 years after 18 months. In the bivariate models, disability at 18 months and change in disability between 18 months and each of the 3 prior time-points (12, 6, 3 months) were significantly associated with time to death. The risk of death, for example, increased by 24% for each 1-point increase in 18-month disability score (on a scale from 0 to 13) and by 22% for each 1-point change in disability score between 18 months and prior 12 months (on a scale from -13 to 13). In a set of multivariable models with and without covariates, the associations were maintained for 18-month disability but not for change in disability between 18 months and each of the 3 prior time-points. The results were comparable for time to long-term nursing home admission except that 2 of the associations were not statistically significant. When evaluating risk of adverse outcomes, such as death and long-term nursing home admission, an assessment of change in physical function at different intervals over the prior year, although a strong bivariate predictor, did not provide useful prognostic information beyond that available from current level of function. © 2018, Copyright the Authors Journal compilation © 2018, The American Geriatrics Society.
Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan
2014-01-01
Purpose: The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Methods and Materials: Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used as the endpoint evaluation. The primary endpoint (grade 3+ xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R², chi-squared test, Omnibus, Hosmer-Lemeshow test, and the AUC. Results: Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R² was satisfactory and corresponded well with the expected values. Conclusions: Multivariate NTCP models with LASSO can be used to predict patient-rated xerostomia after IMRT. PMID:24586971
Lee, Tsair-Fwu; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan
2014-01-01
The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used as the endpoint evaluation. The primary endpoint (grade 3+ xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R², chi-squared test, Omnibus, Hosmer-Lemeshow test, and the AUC. Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R² was satisfactory and corresponded well with the expected values. Multivariate NTCP models with LASSO can be used to predict patient-rated xerostomia after IMRT.
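As an illustration of the modeling step described in these two abstracts, the sketch below fits an L1-penalized (LASSO) logistic regression with bootstrap resampling and reports how often each factor is selected. It is a generic sketch on synthetic data, not the authors' pipeline; the feature names are borrowed from the abstract purely for readability.

```python
# Illustrative sketch (not the authors' code) of LASSO-penalized logistic
# regression with bootstrap-based selection of prognostic factors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, features = 206, ["Dmean_c", "Dmean_i", "age", "T_stage", "smoking"]
X = rng.normal(size=(n, len(features)))
# Synthetic endpoint: moderate-to-severe xerostomia driven mainly by dose terms.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

n_boot, counts = 200, np.zeros(len(features))
for _ in range(n_boot):
    idx = rng.integers(0, n, n)                      # bootstrap resample
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    model.fit(X[idx], y[idx])
    counts += (model.coef_.ravel() != 0)             # track selected factors

for name, freq in zip(features, counts / n_boot):
    print(f"{name}: selected in {freq:.0%} of bootstrap fits")
```

Factors with high selection frequency across bootstrap fits are the kind of stable predictors the study retains in its final NTCP model.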
Compensation for unfavorable characteristics of irregular individual shift rotas.
Knauth, Peter; Jung, Detlev; Bopp, Winfried; Gauderer, Patric C; Gissel, Andreas
2006-01-01
Some employees of TV companies, such as those who produce remote TV programs, have to cope with very irregular rotas and many short-term schedule deviations. Many of these employees complain about the negative effects of such schedules on their well-being and private lives. Therefore, a working group of employers, council representatives, and researchers developed a so-called bonus system. Based on the criteria of the BESIAK system, the following list of criteria for the ergonomic assessment of irregular shift systems was developed: proportion of night hours worked between 22:00 and 01:00 h and between 06:00 and 07:00 h, proportion of night hours worked between 01:00 and 06:00 h, number of successive night shifts, number of successive working days, number of shifts longer than 9 h, proportion of phase advances, off hours on weekends, work hours between 17:00 and 23:00 h from Monday to Friday, number of working days with leisure time at remote places, and sudden deviations from the planned shift rota. Each individual rota was evaluated in retrospect. If pre-defined thresholds of criteria were surpassed, bonus points were added to the worker's account. In general, more bonus points add up to more free time. Only in particular cases was monetary compensation possible for some criteria. The bonus point system, which was implemented in the year 2002 for about 850 employees of the TV company, has the advantages of greater transparency concerning the unfavorable characteristics of working-time arrangements, an incentive for superiors to design "good" rosters that avoid the bonus point thresholds (to reduce costs), positive short-term effects on employees' social lives, and expected positive long-term effects on employees' health. In general, the most promising approach to cope with the problems of shift workers in irregular and flexible shift systems seems to be to increase their influence on the arrangement of working times. If this is not possible, bonus point systems may help to achieve greater transparency and fairness in the distribution of unfavorable working-time arrangements within a team, and even reduce the unnecessary unfavorable aspects of shift systems.
An iterative approach to optimize change classification in SAR time series data
NASA Astrophysics Data System (ADS)
Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan
2016-10-01
The detection of changes using remote sensing imagery has become a broad field of research with many approaches for many different applications. Besides the simple detection of changes between at least two images acquired at different times, analyses which aim at the change type or category are at least equally important. In this study, an approach for a semi-automatic classification of change segments is presented. A sparse dataset is considered to ensure fast and simple applicability for practical issues. The dataset is given by 15 high resolution (HR) TerraSAR-X (TSX) amplitude images acquired over a time period of one year (11/2013 to 11/2014). The scenery contains the airport of Stuttgart (GER) and its surroundings, including urban, rural, and suburban areas. Time series imagery offers the advantage of analyzing the change frequency of selected areas. In this study, the focus is set on the analysis of small, frequently changing regions such as parking areas, construction sites, and collection points, which consist of high-activity (HA) change objects. For each HA change object, suitable features are extracted and k-means clustering is applied as the categorization step. The resulting clusters are finally compared to a previously introduced knowledge-based class catalogue, which is modified until an optimal class description results. In other words, the subjective understanding of the scenery semantics is optimized against the reality reflected in the data. In this way, even a sparse dataset containing only amplitude imagery can be evaluated without requiring comprehensive training datasets. Falsely defined classes might be rejected, classes which were defined too coarsely might be divided into sub-classes, and, conversely, classes which were initially defined too narrowly might be merged. An optimal classification results when the combination of previously defined key indicators (e.g., number of clusters per class) reaches an optimum.
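A minimal sketch of the categorization step just described, assuming synthetic per-object features (the feature set is invented for illustration; the paper's actual features are not listed in the abstract):

```python
# Sketch: cluster per-object features with k-means and inspect cluster
# sizes while iterating on a (hypothetical) class catalogue.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# One row per high-activity (HA) change object, e.g. [area, mean amplitude,
# change frequency over the 15 acquisitions, elongation] -- assumed features.
features = rng.normal(size=(300, 4))

X = StandardScaler().fit_transform(features)   # put features on comparable scales
for k in (3, 4, 5):                            # vary k while refining the catalogue
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, np.bincount(km.labels_))          # cluster sizes vs. catalogue classes
```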
Cubature versus Fekete-Gauss nodes for spectral element methods on simplicial meshes
NASA Astrophysics Data System (ADS)
Pasquetti, Richard; Rapetti, Francesca
2017-10-01
In a recent JCP paper [9], a higher-order triangular spectral element method (TSEM) is proposed to address seismic wave field modeling. The main interest of this TSEM is that the mass matrix is diagonal, so that explicit time marching becomes very cheap. This property results from the fact that, similarly to the usual SEM (say QSEM), the basis functions are Lagrange polynomials based on a set of points that shows both nice interpolation and quadrature properties. In the quadrangle, i.e. for the QSEM, the set of points is simply obtained by tensorial product of Gauss-Lobatto-Legendre (GLL) points. In the triangle, finding such an appropriate set of points is however not trivial. Thus, the work of [9] follows earlier works dating from the 2000s [2,6,11] and now provides cubature nodes and weights up to N = 9, where N is the total degree of the polynomial approximation. Here we wish to evaluate the accuracy of this cubature-nodes TSEM with respect to the Fekete-Gauss one, see e.g. [12], which makes use of two sets of points, namely the Fekete points and the Gauss points of the triangle for interpolation and quadrature, respectively. Because the Fekete-Gauss TSEM is in the spirit of nodal hp-finite element methods, one may expect that the conclusions of this Note will remain relevant if using other sets of carefully defined interpolation points.
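For readers unfamiliar with the GLL construction mentioned above, the sketch below computes one-dimensional GLL nodes and weights; a tensor product of these gives the quadrangle (QSEM) point set. This is the standard textbook construction, not code from [9].

```python
# Sketch: 1-D Gauss-Lobatto-Legendre (GLL) nodes and weights on [-1, 1].
import numpy as np
from numpy.polynomial import legendre as L

def gll(N):
    """Nodes and weights of the (N+1)-point GLL rule of degree N."""
    PN = L.Legendre.basis(N)
    # Interior nodes are the roots of P_N'; endpoints are included by design.
    nodes = np.concatenate(([-1.0], PN.deriv().roots(), [1.0]))
    weights = 2.0 / (N * (N + 1) * PN(nodes) ** 2)
    return nodes, weights

x, w = gll(4)
print(x)          # includes the endpoints -1 and +1
print(w.sum())    # weights sum to 2, the length of [-1, 1]
```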
Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...
2014-08-05
A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.
Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph
2018-06-01
Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering the exchange of code difficult; second, most pipelines are implemented in special hardware, resulting in limited flexibility of the implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to the output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run time. The pipeline shows image quality comparable to that of a clinical system and, as point spread function measurements confirm, comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
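SUPRA itself is a C++/GPU pipeline; purely as an illustration of the first processing stage it covers, here is a generic textbook delay-and-sum beamformer on synthetic channel data, assuming a plane-wave transmit. None of this is SUPRA's actual code.

```python
# Generic delay-and-sum (DAS) beamforming sketch for raw channel data.
import numpy as np

def das_beamform(rf, element_x, fs, c, z_points, x_point=0.0):
    """Beamform one image line at lateral position x_point.
    rf: (n_elements, n_samples) raw channel data."""
    n_el, n_samp = rf.shape
    t = np.arange(n_samp) / fs
    line = np.zeros(len(z_points))
    for i, z in enumerate(z_points):
        # Two-way travel time: plane wave down to depth z, back to each element.
        d_rx = np.hypot(element_x - x_point, z)
        delays = (z + d_rx) / c
        for e in range(n_el):
            line[i] += np.interp(delays[e], t, rf[e])   # sample-and-sum
    return line

rf = np.random.randn(64, 2048)                          # synthetic channel data
elem = (np.arange(64) - 31.5) * 0.3e-3                  # 0.3 mm element pitch
print(das_beamform(rf, elem, fs=40e6, c=1540.0,
                   z_points=np.linspace(5e-3, 30e-3, 8)))
```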
Redox gradients in distribution systems influence water quality, corrosion, and microbial ecology.
Masters, Sheldon; Wang, Hong; Pruden, Amy; Edwards, Marc A
2015-01-01
Simulated distribution systems (SDSs) were used to characterize the interplay among disinfectant type (free chlorine and chloramines), water age (1-10.2 days), and pipe material (PVC, iron, and cement surfaces) with respect to water chemistry, redox zones, and infrastructure degradation. Redox gradients that developed as a function of water age and pipe material affected the quality of the water consumers would receive. Free chlorine was most stable in the presence of PVC, while chloramine was most stable in the presence of cement. At a 3.6-day water age, the residual in the chlorinated PVC SDS was more than 3.5 times higher than in the chlorinated iron or cement systems. In contrast, the residual in the chloraminated cement SDS was more than 10 times greater than in the chloraminated iron or PVC systems. Near the point of entry to the SDSs, where disinfectant residuals were present, free chlorine tended to cause as much as 4 times more iron corrosion than chloramines. Facultative denitrifying bacteria were ubiquitous and caused complete loss of nitrogen at distal points in systems with iron; these bacteria co-occurred with very severe pitting attack (1.6-1.9 mm/year) at high water age.
Dynamic laser speckle analyzed considering inhomogeneities in the biological sample
NASA Astrophysics Data System (ADS)
Braga, Roberto A.; González-Peña, Rolando J.; Viana, Dimitri Campos; Rivera, Fernando Pujaico
2017-04-01
The dynamic laser speckle phenomenon provides a contactless and nondestructive way to monitor biological changes, which are quantified by second-order statistics applied to images in time using a secondary matrix known as the time history of the speckle pattern (THSP). To save computation time, the traditional way to build the THSP restricts the data to a single line or column. Our hypothesis is that this spatial restriction of the information could compromise the results, particularly when undesirable and unexpected optical inhomogeneities occur, such as in cell culture media. We tested a spatially random approach to collecting the points that form a THSP. Cells in a culture medium and drying paint, representing homogeneous samples at different levels, were tested, and a comparison with the traditional method was carried out. An alternative random selection based on a Gaussian distribution around a desired position was also presented. The results showed that the traditional protocol presented higher variation than the random method. The higher the inhomogeneity of the activity map, the higher the efficiency of the proposed method using random points. The Gaussian distribution proved to be useful when there was a well-defined area to monitor.
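A minimal sketch of the three THSP construction strategies compared above (traditional single column, uniform random points, Gaussian-distributed points), on a synthetic image stack; image sizes and parameters are arbitrary:

```python
# Sketch: build a THSP (points x time) from a stack of speckle images.
import numpy as np

rng = np.random.default_rng(2)
stack = rng.integers(0, 256, size=(128, 256, 256))   # (time, rows, cols), synthetic

n_pts = 256

# Traditional: one fixed column -> spatially restricted THSP
thsp_column = stack[:, :n_pts, 100].T

# Random: points drawn uniformly over the whole image
r = rng.integers(0, 256, n_pts)
c = rng.integers(0, 256, n_pts)
thsp_random = stack[:, r, c].T

# Gaussian: points clustered around a region of interest (here row 128, col 128)
rg = np.clip(rng.normal(128, 20, n_pts).astype(int), 0, 255)
cg = np.clip(rng.normal(128, 20, n_pts).astype(int), 0, 255)
thsp_gauss = stack[:, rg, cg].T

print(thsp_column.shape, thsp_random.shape, thsp_gauss.shape)  # (256, 128) each
```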
Virtual walks in spin space: A study in a family of two-parameter models
NASA Astrophysics Data System (ADS)
Mullick, Pratik; Sen, Parongama
2018-05-01
We investigate the dynamics of classical spins mapped as walkers in a virtual "spin" space using a generalized two-parameter family of spin models characterized by parameters y and z [de Oliveira et al., J. Phys. A 26, 2317 (1993), 10.1088/0305-4470/26/10/006]. The behavior of S(x,t), the probability that the walker is at position x at time t, is studied in detail. In general S(x,t) ~ t^(-α) f(x/t^α) with α ≃ 1 or 0.5 at large times, depending on the parameters. In particular, S(x,t) for the point y = 1, z = 0.5, corresponding to the Voter model, shows a crossover in time; associated with this crossover, two timescales can be defined which vary with the system size L as L^2 log L. We also show that as the Voter model point is approached from the disordered regions along different directions, the width of the Gaussian distribution S(x,t) diverges in a power-law manner with different exponents. For the majority Voter case, the results indicate that the virtual walk can detect the phase transition perhaps more efficiently than other nonequilibrium methods.
NASA Astrophysics Data System (ADS)
Doko, Tomoko; Chen, Wenbo; Higuchi, Hiroyoshi
2016-06-01
Satellite tracking technology has been used to reveal the migration patterns and flyways of migratory birds. In general, bird migration can be classified according to migration status: the wintering period, spring migration, the breeding period, and autumn migration. To determine migration status, the periods of these statuses should be individually determined, but there is no objective method to define a 'threshold date' at which an individual bird changes its status. The research objective is to develop an effective and objective method to determine threshold dates of migration status based on satellite-tracked data. The developed method was named the "MATCHED (Migratory Analytical Time Change Easy Detection) method". To demonstrate the method, data acquired from satellite-tracked Tundra Swans were used. The MATCHED method is composed of six steps: 1) dataset preparation, 2) time frame creation, 3) automatic identification, 4) visualization of change points, 5) interpretation, and 6) manual correction. Accuracy was tested. In general, the MATCHED method proved powerful in identifying the change points between migration statuses as well as stopovers. Nevertheless, identifying "exact" threshold dates remains challenging. Limitations and applications of this method are discussed.
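The abstract does not spell out the MATCHED algorithm; as a loose illustration of steps 2-3 (time-frame creation and automatic identification), the sketch below flags change points in a synthetic latitude track by contrasting adjacent weekly means. The window length and threshold are invented for the example.

```python
# Sketch: flag candidate threshold dates where daily latitude shifts
# between stable plateaus (wintering -> migration -> breeding).
import numpy as np

rng = np.random.default_rng(3)
lat = np.concatenate([np.full(60, 35.0),            # wintering latitude
                      np.linspace(35, 65, 20),      # spring migration
                      np.full(60, 65.0)])           # breeding latitude
lat += rng.normal(scale=0.3, size=lat.size)

w = 7                                          # one-week time frame
means = np.convolve(lat, np.ones(w) / w, mode="valid")
jump = np.abs(means[w:] - means[:-w])          # contrast of adjacent windows
candidates = np.where(jump > 2.0)[0] + w       # days flagged as change points
print(candidates.min(), candidates.max())      # roughly bracket the migration
```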
Current State of the Art Historic Building Information Modelling
NASA Astrophysics Data System (ADS)
Dore, C.; Murphy, M.
2017-08-01
In an extensive review of existing literature a number of observations were made in relation to the current approaches for recording and modelling existing buildings and environments: Data collection and pre-processing techniques are becoming increasingly automated to allow for near real-time data capture and fast processing of this data for later modelling applications. Current BIM software is almost completely focused on new buildings and has very limited tools and pre-defined libraries for modelling existing and historic buildings. The development of reusable parametric library objects for existing and historic buildings supports modelling with high levels of detail while decreasing the modelling time. Mapping these parametric objects to survey data, however, is still a time-consuming task that requires further research. Promising developments have been made towards automatic object recognition and feature extraction from point clouds for as-built BIM. However, results are currently limited to simple and planar features. Further work is required for automatic accurate and reliable reconstruction of complex geometries from point cloud data. Procedural modelling can provide an automated solution for generating 3D geometries but lacks the detail and accuracy required for most as-built applications in AEC and heritage fields.
On Pfaffian Random Point Fields
NASA Astrophysics Data System (ADS)
Kargin, V.
2014-02-01
We study Pfaffian random point fields by using the Moore-Dyson quaternion determinants. First, we give sufficient conditions that ensure that a self-dual quaternion kernel defines a valid random point field, and then we prove a CLT for Pfaffian point fields. The proofs are based on a new quaternion extension of the Cauchy-Binet determinantal identity. In addition, we derive the Fredholm determinantal formulas for the Pfaffian point fields which use the quaternion determinant.
Complex Event Recognition Architecture
NASA Technical Reports Server (NTRS)
Fitzgerald, William A.; Firby, R. James
2009-01-01
Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
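As a toy illustration of the pattern-language idea (conjunction, disjunction, and negation over point events), and much simpler than CERA's actual declarative language, consider:

```python
# Toy event-pattern matching sketch, in the spirit of (not equivalent to)
# the CERA pattern language described above. All names are invented.
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    time: float

def match_and(seen, *names):      # conjunction: all named events observed
    return all(n in seen for n in names)

def match_or(seen, *names):       # disjunction: at least one observed
    return any(n in seen for n in names)

def match_not(seen, name):        # negation: named event absent
    return name not in seen

stream = [Event("pressure_high", 1.0), Event("valve_open", 2.5)]
seen = {e.name for e in stream}

# Pattern: pressure_high AND valve_open AND NOT coolant_flow -> anomaly
if match_and(seen, "pressure_high", "valve_open") and match_not(seen, "coolant_flow"):
    print("anomalous condition recognized")
```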
GC/MS analysis of pesticides in the Ferrara area (Italy) surface water: a chemometric study.
Pasti, Luisa; Nava, Elisabetta; Morelli, Marco; Bignami, Silvia; Dondi, Francesco
2007-01-01
The development of a network to monitor surface waters is a critical element in the assessment, restoration and protection of water quality. In this study, concentrations of 42 pesticides--determined by GC-MS on samples from 11 points along the Ferrara area rivers--were analyzed by chemometric tools. The data were collected over a three-year period (2002-2004). Principal component analysis of the detected pesticides was carried out in order to define the best spatial locations for the sampling points. The results obtained have been interpreted in view of agricultural land use. Time series data on pesticide content in surface waters were analyzed using the autocorrelation function. This chemometric tool reveals seasonal trends and makes it possible to optimize the sampling frequency in order to detect the effective maximum pesticide content.
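A minimal sketch of the autocorrelation analysis on a synthetic monthly concentration series with an annual cycle (the real study used the measured pesticide data):

```python
# Sketch: autocorrelation function (ACF) exposing a 12-month seasonal cycle.
import numpy as np

rng = np.random.default_rng(4)
months = np.arange(36)                               # 3 years, as in the study
conc = 1.0 + 0.8 * np.sin(2 * np.pi * months / 12) + rng.normal(scale=0.2, size=36)

x = conc - conc.mean()
acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..35
acf /= acf[0]                                        # normalize so ACF(0) = 1
print(np.round(acf[[6, 12]], 2))                     # negative at lag 6, peak near 12
```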
Organic-inorganic hybrid foams with diatomite addition: Effect on functional properties
NASA Astrophysics Data System (ADS)
Verdolotti, L.; D'Auria, M.; Lavorgna, M.; Vollaro, P.; Iannace, S.; Capasso, I.; Galzerano, B.; Caputo, D.; Liguori, B.
2016-05-01
Organic-inorganic hybrid foams were prepared using metakaolin as the matrix, with diatomite as a partial (or total) replacement for metakaolin, and silicon and whipped protein as pore-forming agents. The foamed systems were hardened at a defined temperature and time and then characterized mechanically, through compression tests, and functionally, through fire-reaction and acoustic tests. The experimental findings highlighted that the replacement of metakaolin by diatomite affected the morphological structure of the foams and consequently their mechanical properties. In particular, the consolidation mechanism in the diatomite-based hybrid foams changed from geopolymerization to a silicate polycondensation mechanism. Accordingly, mechanical performance improved with increasing diatomite content. Fire-reaction tests, such as non-combustibility and cone calorimeter tests, showed positive thermal inertia of the samples regardless of the diatomite content.
Reconstruction phases in the planar three- and four-vortex problems
NASA Astrophysics Data System (ADS)
Hernández-Garduño, Antonio; Shashikanth, Banavara N.
2018-03-01
Pure reconstruction phases—geometric and dynamic—are computed in the N-point-vortex model in the plane, for the cases N = 3 and N = 4. The phases are computed relative to a metric-orthogonal connection on appropriately defined principal fiber bundles. The metric is similar to the kinetic energy metric for point masses but with the masses replaced by vortex strengths. The geometric phases are shown to be proportional to areas enclosed by the closed orbit on the symmetry-reduced spaces. More interestingly, simple formulae are obtained for the dynamic phases, analogous to Montgomery's result for the free rigid body, which show them to be proportional to the time period of the symmetry-reduced closed orbits. For the case N = 3 a non-zero total vortex strength is assumed. For the case N = 4 the vortex strengths are assumed equal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damast, Shari, E-mail: shari.damast@yale.edu; Alektiar, Kaled M.; Goldfarb, Shari
Purpose: We used the Female Sexual Function Index (FSFI) to investigate the prevalence of sexual dysfunction (SD) and factors associated with diminished sexual functioning in early stage endometrial cancer (EC) patients treated with simple hysterectomy and adjuvant brachytherapy. Methods and Materials: A cohort of 104 patients followed in a radiation oncology clinic completed questionnaires to quantify current levels of sexual functioning. The time interval between hysterectomy and questionnaire completion ranged from <6 months to >5 years. Multivariate regression was performed using the FSFI as a continuous variable (score range, 1.2-35.4). SD was defined as an FSFI score of <26, based on the published validation study. Results: SD was reported by 81% of respondents. The mean (± standard deviation) domain scores in order of highest-to-lowest functioning were: satisfaction, 2.9 (±2.0); orgasm, 2.5 (±2.4); desire, 2.4 (±1.3); arousal, 2.2 (±2.0); dryness, 2.1 (±2.1); and pain, 1.9 (±2.3). Compared to the index population in which the FSFI cut-score was validated (healthy women ages 18-74), all scores were low. Compared to published scores of a postmenopausal population, scores were not statistically different. Multivariate analysis isolated factors associated with lower FSFI scores, including having laparotomy as opposed to minimally invasive surgery (effect size, -7.1 points; 95% CI, -11.2 to -3.1; P<.001), lack of vaginal lubricant use (effect size, -4.4 points; 95% CI, -8.7 to -0.2; P=.040), and short time interval (<6 months) from hysterectomy to questionnaire completion (effect size, -4.6 points; 95% CI, -9.3 to 0.2; P=.059). Conclusions: SD, as defined by an FSFI score <26, was prevalent. The postmenopausal status of EC patients alone is a known risk factor for SD. Additional factors associated with poor sexual functioning following treatment for EC included receipt of laparotomy and lack of vaginal lubricant use.
Clinically Relevant Cut-off Points for the Diagnosis of Sarcopenia in Older Korean People.
Choe, Yu-Ri; Joh, Ju-Youn; Kim, Yeon-Pyo
2017-11-09
Optimal criteria for the diagnosis of sarcopenia in older Korean people have not been defined. We aimed to define clinically relevant cut-off points for older Korean people and to compare their predictive validity with other definitions of sarcopenia. Nine hundred and sixteen older Koreans (≥65 years) were included in this cross-sectional observational study. We used conditional inference tree analysis to determine cut-off points for height-adjusted grip strength (GS) and appendicular skeletal muscle mass (ASM), for use in the diagnosis of sarcopenia. We then compared the Korean sarcopenia criteria with the Foundation for the National Institutes of Health and Asian Working Group for Sarcopenia criteria, using frailty, assessed with the Korean Frailty Index, as an outcome variable. For men, a residual GS (GSre) of ≤0.25 was defined as weak, and a residual ASM (ASMre) of ≤1.29 was defined as low. Corresponding cut-off points for women were a GSre of ≤0.17 and an ASMre of ≤0.69. GSre and ASMre values were adjusted for height. In logistic regression analysis with the new cut-off points, the adjusted odds ratios for pre-frail or frail status in the sarcopenia group were 3.23 (95% confidence interval [CI] 1.33-7.83) for men and 1.74 (95% CI 0.91-3.35) for women. In receiver operating characteristic curve analysis, the unadjusted areas under the curve for the Korean sarcopenia criteria in men and women were 0.653 and 0.608, respectively (p < .001). Our proposed cut-off points for low GS and low ASM should be useful in the diagnosis of sarcopenia in older Korean people. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
ERIC Educational Resources Information Center
Johnson, Michael R.
2006-01-01
In most general chemistry and introductory physical chemistry classes, critical point is defined as that temperature-pressure point on a phase diagram where the liquid-gas interface disappears, a phenomenon that generally occurs at relatively high temperatures or high pressures. Two examples are: water, with a critical point at 647 K (critical…
Stability and chaos in Kustaanheimo-Stiefel space induced by the Hopf fibration
NASA Astrophysics Data System (ADS)
Roa, Javier; Urrutxua, Hodei; Peláez, Jesús
2016-07-01
The need for the extra dimension in Kustaanheimo-Stiefel (KS) regularization is explained by the topology of the Hopf fibration, which defines the geometry and structure of KS space. A trajectory in Cartesian space is represented by a four-dimensional manifold called the fundamental manifold. Based on geometric and topological aspects classical concepts of stability are translated to KS language. The separation between manifolds of solutions generalizes the concept of Lyapunov stability. The dimension-raising nature of the fibration transforms fixed points, limit cycles, attractive sets, and Poincaré sections to higher dimensional subspaces. From these concepts chaotic systems are studied. In strongly perturbed problems, the numerical error can break the topological structure of KS space: points in a fibre are no longer transformed to the same point in Cartesian space. An observer in three dimensions will see orbits departing from the same initial conditions but diverging in time. This apparent randomness of the integration can only be understood in four dimensions. The concept of topological stability results in a simple method for estimating the time-scale in which numerical simulations can be trusted. Ideally, all trajectories departing from the same fibre should be KS transformed to a unique trajectory in three-dimensional space, because the fundamental manifold that they constitute is unique. By monitoring how trajectories departing from one fibre separate from the fundamental manifold a critical time, equivalent to the Lyapunov time, is estimated. These concepts are tested on N-body examples: the Pythagorean problem, and an example of field stars interacting with a binary.
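The fibre structure underlying this discussion can be checked numerically. The sketch below uses the standard complex form of the Hopf map (one common convention; KS coordinates differ only by such conventions) and verifies that all points e^(iθ)u on a fibre project to the same Cartesian point, with |x| = |u|².

```python
# Numerical sketch of Hopf-fibre invariance: every point on a fibre in R^4
# maps to the same point in R^3, as the topological argument above requires.
import numpy as np

def hopf(u):
    """Map u in R^4 (as z1 = u0 + i*u1, z2 = u2 + i*u3) to x in R^3."""
    z1, z2 = u[0] + 1j * u[1], u[2] + 1j * u[3]
    return np.array([2 * (z1 * z2.conjugate()).real,
                     2 * (z1 * z2.conjugate()).imag,
                     abs(z1) ** 2 - abs(z2) ** 2])

u = np.array([0.3, -0.7, 0.5, 0.4])
x0 = hopf(u)
for theta in np.linspace(0, 2 * np.pi, 5):
    w = np.exp(1j * theta)
    z1, z2 = (u[0] + 1j * u[1]) * w, (u[2] + 1j * u[3]) * w
    u_rot = np.array([z1.real, z1.imag, z2.real, z2.imag])
    assert np.allclose(hopf(u_rot), x0)          # same Cartesian point
print(x0, np.linalg.norm(x0), np.dot(u, u))      # |x| equals |u|^2
```

Numerical error that moves two points of one fibre to slightly different Cartesian images is exactly the breakdown of this invariance that the abstract uses to define topological stability.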
Empirical study on human acupuncture point network
NASA Astrophysics Data System (ADS)
Li, Jian; Shen, Dan; Chang, Hui; He, Da-Ren
2007-03-01
Chinese medical theory is ancient and profound; however, it remains confined to a qualitative and vague understanding. Chinese acupuncture is unique and effective in clinical practice, and human acupuncture points play a mysterious and special role, yet to date there is no modern scientific understanding of human acupuncture points. For this reason, we use complex network theory, one of the frontiers of statistical physics, to describe the human acupuncture points and their connections. In the network, nodes are defined as acupuncture points, and two nodes are connected by an edge when they are used in the treatment of a common disease; a disease is defined as an act. Some statistical properties have been obtained. The results show that the degree distribution, the act degree distribution, and the dependence of the clustering coefficient on both of them obey a shifted power law (SPL) distribution, a function interpolating between a power law and an exponential decay. The results may be helpful for understanding Chinese medical theory.
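A toy sketch of the network construction described above, with invented point and disease names; each disease (an "act") links all points used to treat it:

```python
# Sketch: build the acupuncture-point network and compute the statistics
# named in the abstract (degree, act degree, clustering coefficient).
from itertools import combinations
import networkx as nx

# Hypothetical prescriptions: disease (act) -> acupuncture points used
acts = {
    "headache": ["LI4", "GB20", "TaiYang"],
    "insomnia": ["HT7", "SP6", "GB20"],
    "nausea":   ["PC6", "ST36", "LI4"],
}

G = nx.Graph()
for points in acts.values():                 # each disease is one act
    G.add_edges_from(combinations(points, 2))

degrees = dict(G.degree())
act_degrees = {p: sum(p in pts for pts in acts.values()) for p in G.nodes}
print(degrees)           # node degree distribution
print(act_degrees)       # act degree: number of diseases a point treats
print(nx.clustering(G))  # clustering coefficient per node
```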
Analytical approximation of a distorted reflector surface defined by a discrete set of points
NASA Technical Reports Server (NTRS)
Acosta, Roberto J.; Zaman, Afroz A.
1988-01-01
Reflector antennas on Earth-orbiting spacecraft generally cannot be described analytically. The reflector surface is subjected to large temperature fluctuations and gradients, and is thus warped from its true geometrical shape. Aside from distortion by thermal stresses, reflector surfaces are often purposely shaped to minimize phase aberrations and scanning losses. To analyze distorted reflector antennas defined by discrete surface points, a numerical technique must be applied to compute an interpolatory surface passing through a grid of discrete points. In this paper, the distorted reflector surface points are approximated by two analytical components: an undistorted surface component and a surface error component. The undistorted surface component is a best-fit paraboloid polynomial for the given set of points, and the surface error component is a Fourier series expansion of the deviation of the actual surface points from the best-fit paraboloid. By applying the numerical technique to approximate the surface normals of the distorted reflector surface, the induced surface currents can be obtained using the physical optics technique. These surface currents are integrated to find the far-field radiation pattern.
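A minimal sketch of the first component, the best-fit paraboloid, posed as a linear least-squares problem on synthetic surface points (the Fourier-series expansion of the residual is omitted; the focal length and noise level are invented):

```python
# Sketch: least-squares fit of a paraboloid z = (x^2 + y^2)/(4F) + c to
# discrete surface points; the residual is the "surface error component".
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(-1, 1, 400)
y = rng.uniform(-1, 1, 400)
F_true = 0.75                                           # focal length of test surface
z = (x**2 + y**2) / (4 * F_true) + 0.01 * rng.normal(size=400)  # "distorted" points

# Linear least squares in the unknowns [1/(4F), c]
A = np.column_stack([x**2 + y**2, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
F_fit = 1.0 / (4 * coef[0])
residual = z - A @ coef                                 # input to the Fourier expansion
print(F_fit, residual.std())                            # ~0.75 and ~0.01
```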
Distress or no distress, that's the question: A cutoff point for distress in a working population
van Rhenen, Willem; van Dijk, Frank JH; Schaufeli, Wilmar B; Blonk, Roland WB
2008-01-01
Background: The objective of the present study is to establish an optimal cutoff point for distress measured with the corresponding scale of the 4DSQ, using the prediction of sickness absence as a criterion. The cutoff point should result in a measure that can be used as a credible selection instrument for sickness absence in occupational health practice and in future studies on distress and mental disorders. Methods: Distress is measured using the Four Dimensional Symptom Questionnaire (4DSQ), a 50-item self-report questionnaire, in a working population with and without sickness absence due to distress. Sensitivity and specificity were compared for various potential cutoff points, and a receiver operating characteristics analysis was conducted. Results and conclusion: A distress cutoff point of ≥11 was defined. The choice was based on a challenging specificity and negative predictive value and indicates a distress level at which an employee is presumably at risk for subsequent sick leave on psychological grounds. The defined distress cutoff point is appropriate for use in occupational health practice and in studies of distress in working populations. PMID:18205912
Distress or no distress, that's the question: A cutoff point for distress in a working population.
van Rhenen, Willem; van Dijk, Frank Jh; Schaufeli, Wilmar B; Blonk, Roland Wb
2008-01-18
The objective of the present study is to establish an optimal cutoff point for distress measured with the corresponding scale of the 4DSQ, using the prediction of sickness absence as a criterion. The cutoff point should result in a measure that can be used as a credible selection instrument for sickness absence in occupational health practice and in future studies on distress and mental disorders. Distress is measured using the Four Dimensional Symptom Questionnaire (4DSQ), a 50-item self-report questionnaire, in a working population with and without sickness absence due to distress. Sensitivity and specificity were compared for various potential cutoff points, and a receiver operating characteristics analysis was conducted. A distress cutoff point of ≥11 was defined. The choice was based on a challenging specificity and negative predictive value and indicates a distress level at which an employee is presumably at risk for subsequent sick leave on psychological grounds. The defined distress cutoff point is appropriate for use in occupational health practice and in studies of distress in working populations.
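A sketch of the cutoff-selection logic on synthetic score distributions; the group means and sizes below are invented, whereas the study used the observed 4DSQ data:

```python
# Sketch: scan candidate cutoffs and report sensitivity, specificity, and
# negative predictive value (NPV), the quantities the study optimized.
import numpy as np

rng = np.random.default_rng(6)
# Synthetic 4DSQ-like distress scores (0-32) for non-absent vs. sick-listed groups
score_no_absence = np.clip(rng.normal(8, 5, 500).round(), 0, 32)
score_absence = np.clip(rng.normal(16, 5, 120).round(), 0, 32)

for cut in range(5, 20):
    sens = (score_absence >= cut).mean()
    spec = (score_no_absence < cut).mean()
    n_neg = (score_no_absence < cut).sum() + (score_absence < cut).sum()
    npv = (score_no_absence < cut).sum() / n_neg
    print(f"cutoff >= {cut}: sens={sens:.2f} spec={spec:.2f} npv={npv:.2f}")
```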
Normal aging reduces motor synergies in manual pointing.
Verrel, Julius; Lövdén, Martin; Lindenberger, Ulman
2012-01-01
Depending upon its organization, movement variability may reflect poor or flexible control of a motor task. We studied adult age-related differences in the structure of postural variability in manual pointing using the uncontrolled manifold (UCM) method. Participants from 2 age groups (younger: 20-30 years; older: 70-80 years; 12 subjects per group) completed a total of 120 pointing trials to 2 different targets presented according to 3 schedules: blocked, alternating, and random. The age groups were similar with respect to basic kinematic variables, end point precision, as well as the accuracy of the biomechanical forward model of the arm. Following the uncontrolled manifold approach, goal-equivalent and nongoal-equivalent components of postural variability (goal-equivalent variability [GEV] and nongoal-equivalent variability [NGEV]) were determined for 5 time points of the movements (start, 10%, 50%, 90%, and end) and used to define a synergy index reflecting the flexibility/stability aspect of motor synergies. Toward the end of the movement, younger adults showed higher synergy indexes than older adults. Effects of target schedule were not reliable. We conclude that normal aging alters the organization of common multidegree-of-freedom movements, with older adults making less flexible use of motor abundance than younger adults. Copyright © 2012 Elsevier Inc. All rights reserved.
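A generic sketch of the UCM variance decomposition, assuming a planar three-joint arm as the forward model (a stand-in for illustration, not the authors' biomechanical model): deviations of the joint configuration are projected onto the null space of the task Jacobian (goal-equivalent) and its orthogonal complement (nongoal-equivalent).

```python
# Sketch: UCM decomposition of postural variability into GEV and NGEV.
import numpy as np
from scipy.linalg import null_space

def fingertip(q, l=(0.3, 0.25, 0.2)):
    """2-D endpoint of a planar 3-joint arm (stand-in forward model)."""
    a = np.cumsum(q)
    return np.array([np.sum(l * np.cos(a)), np.sum(l * np.sin(a))])

rng = np.random.default_rng(7)
q_mean = np.array([0.4, 0.6, 0.5])
trials = q_mean + rng.normal(scale=0.05, size=(120, 3))   # synthetic postures

# Numerical Jacobian of the task variable at the mean posture
eps, J = 1e-6, np.zeros((2, 3))
for j in range(3):
    dq = np.zeros(3)
    dq[j] = eps
    J[:, j] = (fingertip(q_mean + dq) - fingertip(q_mean - dq)) / (2 * eps)

N = null_space(J)                                  # basis of the UCM (1-D here)
dev = trials - q_mean
gev = ((dev @ N) ** 2).sum() / (N.shape[1] * len(trials))          # per-DOF GEV
ngev = ((dev - (dev @ N) @ N.T) ** 2).sum() / ((3 - N.shape[1]) * len(trials))
print(gev, ngev, (gev - ngev) / (gev + ngev))      # synergy-type index
```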
NASA Astrophysics Data System (ADS)
Gonor, Alexander; Hooton, Irene
2006-07-01
The impact of a rigid projectile (impactor) against a metal target or a condensed-explosive surface, an important process accompanying the normal entry of a rigid projectile into a target, was overlooked in preceding studies. Within the framework of accurate shock wave theory, the flow field behind the shock wave attached to the perimeter of the adjoined surface was determined. An important result is that the peak pressure rises at points along the target surface away from the stagnation point. The maximum values of the peak pressure are 2.2 to 3.2 times higher for the metallic and soft targets (nitromethane, PBX 9502) than the peak pressure values at the stagnation point. This effect changes the commonly held notion that the maximum peak pressure is reached at the projectile stagnation point. In the present study, the interaction of a spherical decaying blast wave, caused by an underwater explosion, with a piecewise-plane target having corner configurations is investigated. The numerical calculation results in the determination of the vulnerable spots on the target, where the maximum peak overpressure surpassed that for head-on shock wave reflection by a factor of 4.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezák, Viktor, E-mail: bezak@fmph.uniba.sk
Quantum theory of the non-harmonic oscillator defined by the energy operator proposed by Yurke and Buks (2006) is presented. Although these authors considered a specific problem related to a model of transmission lines in a Kerr medium, our ambition is not to discuss the physical substantiation of their model. Instead, we consider the problem from an abstract, logically deductive viewpoint. Using the Yurke-Buks energy operator, we focus attention on the imaginary-time propagator. We derive it as a functional of the Mehler kernel and, alternatively, as an exact series involving Hermite polynomials. For a statistical ensemble of identical oscillators defined by the Yurke-Buks energy operator, we calculate the partition function, average energy, free energy and entropy. Using the diagonal element of the canonical density matrix of this ensemble in the coordinate representation, we define a probability density, which appears to be a deformed Gaussian distribution. A peculiarity of this probability density is that it may reveal, when plotted as a function of the position variable, a shape with two peaks located symmetrically with respect to the central point.
NASA Technical Reports Server (NTRS)
Sahai, Ranjana; Pierce, Larry; Cicolani, Luigi; Tischler, Mark
1998-01-01
Helicopter slung load operations are common in both military and civil contexts. The slung load adds load rigid body modes, sling stretching, and load aerodynamics to the system dynamics, which can degrade system stability and handling qualities, and reduce the operating envelope of the combined system below that of the helicopter alone. Further, the effects of the load on system dynamics vary significantly among the large range of loads, slings, and flight conditions that a utility helicopter will encounter in its operating life. In this context, military helicopters and loads are often qualified for slung load operations via flight tests which can be time consuming and expensive. One way to reduce the cost and time required to carry out these tests and generate quantitative data more readily is to provide an efficient method for analysis during the flight, so that numerous test points can be evaluated in a single flight test, with evaluations performed in near real time following each test point and prior to clearing the aircraft to the next point. Methodology for this was implemented at Ames and demonstrated in slung load flight tests in 1997 and was improved for additional flight tests in 1999. The parameters of interest for the slung load tests are aircraft handling qualities parameters (bandwidth and phase delay), stability margins (gain and phase margin), and load pendulum roots (damping and natural frequency). A procedure for the identification of these parameters from frequency sweep data was defined using the CIFER software package. CIFER is a comprehensive interactive package of utilities for frequency domain analysis previously developed at Ames for aeronautical flight test applications. It has been widely used in the US on a variety of aircraft, including some primitive flight time analysis applications.
Forecasting Global Point Rainfall using ECMWF's Ensemble Forecasting System
NASA Astrophysics Data System (ADS)
Pillosu, Fatima; Hewson, Timothy; Zsoter, Ervin; Baugh, Calum
2017-04-01
ECMWF (the European Centre for Medium-range Weather Forecasts), in collaboration with the EFAS (European Flood Awareness System) and GLOFAS (GLObal Flood Awareness System) teams, has developed a new operational system that post-processes grid-box rainfall forecasts from its ensemble forecasting system to provide global probabilistic point-rainfall predictions. The project attains higher forecasting skill by applying an understanding of how different rainfall generation mechanisms lead to different degrees of sub-grid variability in rainfall totals. In turn, this approach facilitates the identification of cases in which very localized extreme totals are much more likely. The approach also aims to improve the rainfall input required in different hydro-meteorological applications. Flash flood forecasting, in particular in urban areas, is a good example: in flash flood scenarios, precipitation is typically characterised by high spatial variability and response times are short. In this case, to move beyond radar-based nowcasting, the classical approach has been to use very high resolution hydro-meteorological models. Of course these models are valuable, but they can represent only very limited areas, may not be spatially accurate, and may give reasonable results only for limited lead times. On the other hand, our method aims to use a very cost-effective approach to downscale global rainfall forecasts to a point scale. It needs only rainfall totals from standard global reporting stations and forecasts over a relatively short period to train it, and it can give good results even up to day 5. For these reasons we believe that this approach better satisfies user needs around the world. This presentation describes two phases of the project. The first phase, already completed, is the implementation of this new system to provide 6- and 12-hourly point-rainfall accumulation probabilities. To do this we use a limited number of physically relevant global model parameters (i.e., convective precipitation ratio, speed of steering winds, CAPE - Convective Available Potential Energy - and solar radiation), alongside the rainfall forecasts themselves, to define the "weather types" that in turn define the expected sub-grid variability. The calibration and computational strategy intrinsic to the system will be illustrated. The quality of the global point-rainfall forecasts is also illustrated by analysing recent case studies in which extreme totals and a greatly elevated flash flood risk could be foreseen some days in advance, and especially by a longer-term verification based on retrospective global point-rainfall forecasts for 2016. The second phase, currently in development, focuses on the relationships with other relevant geographical aspects, for instance orography and coastlines. Preliminary results will be presented; these are promising but need further study to fully understand their impact on the spatial distribution of point-rainfall totals.
Challenges in early clinical development of adjuvanted vaccines.
Della Cioppa, Giovanni; Jonsdottir, Ingileif; Lewis, David
2015-06-08
A three-step approach to the early development of adjuvanted vaccine candidates is proposed, the goal of which is to allow ample space for exploratory and hypothesis-generating human experiments and to select dose(s) and dosing schedule(s) to bring into full development. Although the proposed approach is more extensive than the traditional early development program, the authors suggest that by addressing key questions upfront the overall time, size and cost of development will be reduced and the probability of public health advancement enhanced. The immunogenicity end-points chosen for early development should be critically selected: an established immunological parameter with a well characterized assay should be selected as primary end-point for dose and schedule finding; exploratory information-rich end-points should be limited in number and based on pre-defined hypothesis generating plans, including system biology and pathway analyses. Building a pharmacodynamic profile is an important aspect of early development: to this end, multiple early (within 24h) and late (up to one year) sampling is necessary, which can be accomplished by sampling subgroups of subjects at different time points. In most cases the final target population, even if vulnerable, should be considered for inclusion in early development. In order to obtain the multiple formulations necessary for the dose and schedule finding, "bed-side mixing" of various components of the vaccine is often necessary: this is a complex and underestimated area that deserves serious research and logistical support. Copyright © 2015 Elsevier Ltd. All rights reserved.
Generation Mechanisms UV and X-ray Emissions During SL9 Impact
NASA Technical Reports Server (NTRS)
Waite, J. Hunter, Jr.
1997-01-01
The purpose of this grant was to study the ultraviolet and X-ray emissions associated with the impact of comet Shoemaker-Levy 9 with Jupiter. The University of Michigan task was primarily focused on theoretical calculations. The NAGW-4788 subtask was to be largely devoted to determining the constraints placed by the X-ray observations on the physical mechanisms responsible for the generation of the X-rays. The author summarizes below the ROSAT observations and suggests a physical mechanism that can plausibly account for the observed emissions. It is hoped that the full set of activities can be completed at a later date. Further analysis of the ROSAT data acquired at the time of the impact was necessary to define the observational constraints on the magnetospheric-ionospheric processes involved in the excitation of the X-ray emissions associated with the fragment impacts. This analysis centered around improvements in the pointing accuracy and improvements in the timing information. Additional pointing information was made possible by the identification of the optical counterparts to the X-ray sources in the ROSAT field-of-view. Owing to the large number of worldwide observers of the impacts, a serendipitous visible plate image from an observer in Venezuela provided a very accurate location of the X-ray source, virtually eliminating pointing errors in the data. Once refined, the pointing indicated that the two observed X-ray brightenings that were highly correlated in time with the K and P2 events were brightenings of the X-ray aurora (as identified in images prior to the impact). Appendix A, "ROSAT observations of X-ray emissions from Jupiter during the impact of comet Shoemaker-Levy 9," is also included.
EUV brightness variations in the quiet Sun
NASA Astrophysics Data System (ADS)
Brković, A.; Rüedi, I.; Solanki, S. K.; Fludra, A.; Harrison, R. A.; Huber, M. C. E.; Stenflo, J. O.; Stucki, K.
2000-01-01
The Coronal Diagnostic Spectrometer (CDS) onboard the SOHO satellite has been used to obtain movies of quiet Sun regions at disc centre. These movies were used to study brightness variations of solar features at three different temperatures, sampled simultaneously in the chromospheric He I 584.3 Å (2 × 10^4 K), the transition region O V 629.7 Å (2.5 × 10^5 K) and the coronal Mg IX 368.1 Å (10^6 K) lines. In all parts of the quiet Sun, from the darkest intranetwork to the brightest network, we find significant variability in the He I and O V lines, while the variability in the Mg IX line is more marginal. The relative variability, defined as the rms of intensity normalised to the local intensity, is independent of brightness and strongest in the transition region line. Thus the relative variability is the same in the network and the intranetwork. More than half of the points on the solar surface show a relative variability, determined over a period of 4 hours, greater than 15.5% for the O V line, but only 5% of the points exhibit a variability above 25%. Most of the variability appears to take place on time-scales between 5 and 80 minutes for the He I and O V lines. Clear signs of "high variability" events are found. For these events the variability as a function of time seen in the different lines shows a good correlation; the correlation is higher for more variable events. These events coincide with the (time-averaged) brightest points on the solar surface, i.e. they occur in the network.
NASA Astrophysics Data System (ADS)
Vitiello, Giuseppe
In closed systems, energy is conserved. The origin of the time axis is completely arbitrary due to the invariance under continuous time-translations. The flowing of time swallows those fictitious origins one might assign on its axis, as Kronos ate his sons. Dissipation breaks such a scenario. It implies a non-forgettable origin of time. Open systems need their complement (their "double") in order to become, together, a closed system. Time emerges as an observable measured by the evolution of the open system complement, which acts as a clock. The conservation of the energy-momentum tensor in electrodynamics is considered and its relation with dissipative systems and self-similar fractal structures is discussed. The isomorphism with coherent states in quantum field theory (QFT) is established and the generator of transitions among unitarily inequivalent representations of the canonical commutation relations (CCR) is shown to provide sequences in time of phases, which defines the arrow of time. Merging properties of electrodynamics, fractal self-similarity, dissipation and coherent states point to an integrated vision of Nature.
NASA Technical Reports Server (NTRS)
Riehl, John P.; Sjauw, Waldy K.
2004-01-01
Trajectory, mission, and vehicle engineers concern themselves with finding the best way for an object to get from one place to another. These engineers rely upon special software to assist them in this. For a number of years, many engineers have used the OTIS program for this assistance. With OTIS, an engineer can fully optimize trajectories for airplanes, launch vehicles like the space shuttle, interplanetary spacecraft, and orbital transfer vehicles. OTIS provides four modes of operation, with each mode providing successively stronger optimization capability. The most powerful mode uses a mathematical method called implicit integration to solve what engineers and mathematicians call the optimal control problem. OTIS 3.2, which was developed at the NASA Glenn Research Center, is the latest release of this industry workhorse and features new capabilities for parameter optimization and mission design. OTIS stands for Optimal Control by Implicit Simulation, and it is implicit integration that makes OTIS so powerful at solving trajectory optimization problems. Why is this so important? The optimization process not only determines how to get from point A to point B, but it can also determine how to do this with the least amount of propellant, with the lightest starting weight, or in the fastest time possible while avoiding certain obstacles along the way. There are numerous conditions that engineers can use to define optimal, or best. OTIS provides a framework for defining the starting and ending points of the trajectory (point A and point B), the constraints on the trajectory (requirements like "avoid these regions where obstacles occur"), and what is being optimized (e.g., minimize propellant). The implicit integration method can find solutions to very complicated problems when there is not a lot of information available about what the optimal trajectory might be. The method was first developed for solving two-point boundary value problems and was adapted for use in OTIS. Implicit integration usually allows OTIS to find solutions to problems much faster than programs that use explicit integration and parametric methods. Consequently, OTIS is best suited to solving very complicated and highly constrained problems.
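OTIS's own sources are not reproduced here; purely as a generic illustration of the implicit (collocation) approach to two-point boundary value problems that the abstract mentions, the sketch below solves a toy BVP with SciPy's collocation-based solve_bvp. The problem and tolerances are invented for the example.

```python
# Generic sketch of implicit/collocation solution of a two-point BVP,
# the class of problem that implicit integration in OTIS addresses.
import numpy as np
from scipy.integrate import solve_bvp

# Toy BVP: y'' = -y, with y(0) = 0 and y(pi/2) = 1 (exact solution: y = sin(x))
def rhs(x, y):
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])

x = np.linspace(0, np.pi / 2, 11)
y0 = np.zeros((2, x.size))                       # rough initial guess
sol = solve_bvp(rhs, bc, x, y0)
print(sol.status, sol.sol(np.pi / 4)[0])         # ~ sin(pi/4) = 0.707
```

The collocation solver enforces the dynamics and boundary conditions simultaneously over the whole trajectory, which is why such methods can converge even from poor initial guesses, the property the abstract highlights for OTIS.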
Impact of spatial organization on a novel auxotrophic interaction among soil microbes
Jiang, Xue; Zerfaß, Christian; Feng, Song; ...
2018-03-23
Here, a key prerequisite to achieve a deeper understanding of microbial communities and to engineer synthetic ones is to identify the individual metabolic interactions among key species and how these interactions are affected by different environmental factors. Deciphering the physiological basis of species-species and species-environment interactions in spatially organized environments requires reductionist approaches using ecologically and functionally relevant species. To this end, we focus here on a defined system to study the metabolic interactions in a spatial context among the plant-beneficial endophytic fungus Serendipita indica, and the soil-dwelling model bacterium Bacillus subtilis. Focusing on the growth dynamics of S. indica under defined conditions, we identified an auxotrophy in this organism for thiamine, which is a key co-factor for essential reactions in the central carbon metabolism. We found that S. indica growth is restored in thiamine-free media, when co-cultured with B. subtilis. The success of this auxotrophic interaction, however, was dependent on the spatial and temporal organization of the system; the beneficial impact of B. subtilis was only visible when its inoculation was separated from that of S. indica either in time or space. These findings describe a key auxotrophic interaction in the soil among organisms that are shown to be important for plant ecosystem functioning, and point to the potential importance of spatial and temporal organization for the success of auxotrophic interactions. These points can be particularly important for engineering of minimal functional synthetic communities as plant seed treatments and for vertical farming under defined conditions.
Impact of spatial organization on a novel auxotrophic interaction among soil microbes.
Jiang, Xue; Zerfaß, Christian; Feng, Song; Eichmann, Ruth; Asally, Munehiro; Schäfer, Patrick; Soyer, Orkun S
2018-06-01
A key prerequisite to achieve a deeper understanding of microbial communities and to engineer synthetic ones is to identify the individual metabolic interactions among key species and how these interactions are affected by different environmental factors. Deciphering the physiological basis of species-species and species-environment interactions in spatially organized environments requires reductionist approaches using ecologically and functionally relevant species. To this end, we focus here on a defined system to study the metabolic interactions in a spatial context among the plant-beneficial endophytic fungus Serendipita indica, and the soil-dwelling model bacterium Bacillus subtilis. Focusing on the growth dynamics of S. indica under defined conditions, we identified an auxotrophy in this organism for thiamine, which is a key co-factor for essential reactions in the central carbon metabolism. We found that S. indica growth is restored in thiamine-free media, when co-cultured with B. subtilis. The success of this auxotrophic interaction, however, was dependent on the spatial and temporal organization of the system; the beneficial impact of B. subtilis was only visible when its inoculation was separated from that of S. indica either in time or space. These findings describe a key auxotrophic interaction in the soil among organisms that are shown to be important for plant ecosystem functioning, and point to the potential importance of spatial and temporal organization for the success of auxotrophic interactions. These points can be particularly important for engineering of minimal functional synthetic communities as plant seed treatments and for vertical farming under defined conditions.
Human Guidance Behavior Decomposition and Modeling
NASA Astrophysics Data System (ADS)
Feit, Andrew James
Trained humans are capable of high-performance, adaptable, and robust first-person dynamic motion guidance behavior. This behavior is exhibited in a wide variety of activities such as driving, piloting aircraft, skiing, biking, and many others. Human performance in such activities far exceeds the current capability of autonomous systems in terms of adaptability to new tasks, real-time motion planning, robustness, and trading safety for performance. The present work investigates the structure of human dynamic motion guidance that enables these performance qualities. This work uses a first-person experimental framework that presents a driving task to the subject, measuring control inputs, vehicle motion, and operator visual gaze movement. The resulting data are decomposed into subspace segment clusters that form primitive elements of action-perception interactive behavior. Subspace clusters are defined by both agent-environment system dynamic constraints and operator control strategies. A key contribution of this work is to define transitions between subspace cluster segments, or subgoals, as points where the set of active constraints, either system or operator defined, changes. This definition provides necessary conditions to determine transition points for a given task-environment scenario that allow a solution trajectory to be planned from known behavior elements. In addition, human gaze behavior during this task contains predictive behavior elements, indicating that the identified control modes are internally modeled. Based on these ideas, a generative, autonomous guidance framework is introduced that efficiently generates optimal dynamic motion behavior in new tasks. The new subgoal planning algorithm is shown to generate solutions to certain tasks more quickly than existing approaches currently used in robotics.
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-10-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
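For context, here is a hedged sketch of the classical continuous-time time-rescaling test that the paper adapts to discrete time: integrate the model's conditional intensity over each interspike interval, map the rescaled intervals to (0,1), and KS-test against uniformity. The constant-rate intensity stands in for a fitted GLM, and all parameter values are invented.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
rate = 20.0                                        # modeled rate (Hz)
spike_times = np.cumsum(rng.exponential(1 / rate, 200))

def rescaled_intervals(spikes, intensity, dt=1e-4):
    """tau_k = integral of intensity(t) over each interspike interval."""
    taus = []
    for t0, t1 in zip(spikes[:-1], spikes[1:]):
        grid = np.arange(t0, t1, dt)
        taus.append(intensity(grid).sum() * dt)    # Riemann sum of lambda(t)
    return np.array(taus)

taus = rescaled_intervals(spike_times, lambda t: np.full_like(t, rate))
u = 1.0 - np.exp(-taus)       # exponential -> uniform(0,1) if model is right
stat, p = kstest(u, 'uniform')
print(f"KS statistic {stat:.3f}, p = {p:.3f}")
```

The paper's point is that when time is discretized into bins, the rescaled intervals above are no longer exactly exponential, so either the reference distribution must be simulated or the analytic discrete-time correction applied.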
Jacoby, Ann; Lane, Steven; Marson, Anthony; Baker, Gus A
2011-05-01
We defined a series of clinical trajectories represented among adult patients with new-onset seizures across a 4-year follow-up period, and linked these clinical trajectories to the quality of life (QOL) profiles and trajectories of those experiencing them. We examined both between- and within-group differences. Analyses were based on 253 individuals completing QOL questionnaires at baseline and 2 and 4 years subsequently. Based on patient self-report, we defined five "clinical trajectory" groups: individuals experiencing a single seizure only; individuals entering early remission; individuals experiencing late remission; individuals initially becoming seizure-free but subsequently relapsing; and individuals with seizures persisting throughout follow-up. QOL profiles at each time point were compared using a validated QOL battery, NEWQOL. Even at baseline, there were significant between-group differences, with patients experiencing a single seizure only reporting the best QOL profile and those with seizures persisting across all time points reporting the worst. By 2 years, the QOL profiles of individuals experiencing early remission were similar to those of single-seizure patients, as were those for late remission and relapse patients. A consistent pattern was seen, with "single seizure" individuals doing best and individuals with persistent seizures doing worst. Of particular concern is that even at baseline, individuals whose seizures persisted were doing poorly in terms of QOL, suggesting the possibility that underlying neurobiologic mechanisms were operating. In contrast, our findings support previous reports of only short-lived and small QOL decrements for individuals experiencing a single seizure or few seizures. Wiley Periodicals, Inc. © 2011 International League Against Epilepsy.
NASA Astrophysics Data System (ADS)
Gillies, J. A.; Nield, J. M.; Nickling, W. G.; Furtak-Cole, E.
2014-12-01
Wind erosion and dust emissions occur in many dryland environments from a range of surfaces with different types and amounts of vegetation. Understanding how vegetation modulates these processes remains a research challenge. Here we present results from a study that examines the relationship between an index of shelter (SI = distance from a point to the nearest upwind vegetation divided by the vegetation height) and both particle threshold (expressed as the ratio of the wind speed measured at 0.45 times the mean plant height to the wind speed at 17 m when saltation commences) and saltation flux. The results are used to evaluate SI as a parameter to characterize the influence of vegetation on local winds and sediment transport conditions. Wind speed, wind direction, saltation activity and point saltation flux were measured at 35 locations in defined test areas (~13,000 m²) in two vegetation communities: mature streets of mesquite-covered nebkhas and incipient nebkhas dominated by low mesquite plants. Measurement positions represent the most open areas, and hence those places most susceptible to wind erosion among the vegetation elements. Shelter index was calculated for each measurement position for each 10° wind direction bin using digital elevation models for each site acquired by terrestrial laser scanning. SI can show the susceptibility to wind erosion at different time scales, i.e., event, seasonal, or annual, but in a supply-limited system it can fail to define actual flux amounts due to a lack of knowledge of the distribution of sediment across the surface of interest with respect to the patterns of SI.
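A hedged sketch of how SI could be evaluated for one measurement position and wind-direction bin is given below; the sector half-width, the meteorological wind convention, and the plant list are illustrative assumptions, whereas the study itself derived the geometry from terrestrial-laser-scanning DEMs.

```python
import numpy as np

def shelter_index(point, plants_xy, plants_h, wind_dir_deg, half_sector=5.0):
    """SI = distance to nearest upwind plant / that plant's height.
    wind_dir_deg is the direction the wind blows FROM (0 = north)."""
    theta = np.deg2rad(wind_dir_deg)
    upwind = np.array([np.sin(theta), np.cos(theta)])   # unit vector upwind
    rel = plants_xy - point
    dist = np.linalg.norm(rel, axis=1)
    with np.errstate(invalid='ignore'):
        ang = np.degrees(np.arccos(np.clip(rel @ upwind / dist, -1, 1)))
    mask = (ang <= half_sector) & (dist > 0)   # plants inside the 10-deg bin
    if not mask.any():
        return np.inf                          # fully exposed in this bin
    k = np.argmin(np.where(mask, dist, np.inf))
    return dist[k] / plants_h[k]

pt = np.array([0.0, 0.0])
xy = np.array([[0.0, 8.0], [3.0, 20.0]])      # plant positions (m)
h = np.array([1.2, 2.0])                      # plant heights (m)
print(shelter_index(pt, xy, h, wind_dir_deg=0.0))  # wind from north: SI ~ 6.7
```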
Trabeculectomy versus Ahmed Glaucoma Valve implantation in neovascular glaucoma
Shen, Christopher C; Salim, Sarwat; Du, Haiming; Netland, Peter A
2011-01-01
Purpose: To compare surgical outcomes in neovascular glaucoma patients who underwent trabeculectomy with mitomycin C versus Ahmed Glaucoma Valve implantation. Patients and methods: This was a retrospective comparative case series. We reviewed 40 eyes of 39 patients with an underlying diagnosis of neovascular glaucoma, divided into two groups: Ahmed Glaucoma Valve (N = 20) and trabeculectomy with mitomycin C (N = 20). Surgical success was defined as 6 mm Hg ≤ intraocular pressure ≤ 21 mm Hg, with or without the use of glaucoma medications, with no further glaucoma surgery, and light perception or better vision. Early postoperative hypotony was defined as intraocular pressure <5 mm Hg during the first postoperative week. Results: The average follow-up was 31 months (range 6–87 months) for the Ahmed Glaucoma Valve group and 25 months (range 6–77 months) for the trabeculectomy group. Although the mean number of postoperative intraocular pressure-lowering medications was significantly higher in the trabeculectomy group than in the Ahmed Glaucoma Valve group at the 3- and 6-month time points, there was no statistically significant difference at any other time point. There was no statistically significant difference between the two groups in postoperative visual acuity and intraocular pressure. Success was 70% and 65% at 1 year and 60% and 55% at 2 years after Ahmed Glaucoma Valve and trabeculectomy, respectively. Kaplan–Meier survival curve analysis showed no significant difference in success between the two groups (P = 0.815). Hyphema was the most common complication in both groups. Conclusion: We found similar results after trabeculectomy with mitomycin C and Ahmed Glaucoma Valve implantation in eyes with neovascular glaucoma. PMID:21468334
Transcriptomic changes throughout post-hatch development in Gallus gallus pituitary
Lamont, Susan J; Schmidt, Carl J
2016-01-01
The pituitary gland is a neuroendocrine organ that works closely with the hypothalamus to affect multiple processes within the body including the stress response, metabolism, growth and immune function. Relative tissue expression (rEx) is a transcriptome analysis method that compares the genes expressed in a particular tissue to the genes expressed in all other tissues with available data. Using rEx, the aim of this study was to identify genes that are uniquely or more abundantly expressed in the pituitary when compared to all other collected chicken tissues. We applied rEx to define genes enriched in the chicken pituitaries at days 21, 22 and 42 post-hatch. rEx analysis identified 25 genes shared between all time points, 295 genes shared between days 21 and 22 and 407 genes unique to day 42. The 25 genes shared by all time points are involved in morphogenesis and general nervous tissue development. The 295 shared genes between days 21 and 22 are involved in neurogenesis and nervous system development and differentiation. The 407 unique day 42 genes are involved in pituitary development, endocrine system development and other hormonally related gene ontology terms. Overall, rEx analysis indicates a focus on nervous system/tissue development at days 21 and 22. By day 42, in addition to nervous tissue development, there is expression of genes involved in the endocrine system, possibly for maturation and preparation for reproduction. This study defines the transcriptome of the chicken pituitary gland and aids in understanding the expressed genes critical to its function and maturation. PMID:27856505
Comparing current definitions of return to work: a measurement approach.
Steenstra, I A; Lee, H; de Vroome, E M M; Busse, J W; Hogg-Johnson, S J
2012-09-01
Return-to-work (RTW) status is an often used outcome in work and health research. In low back pain, work is regarded as a normal activity a worker should return to in order to fully recover. Comparing outcomes across studies and even jurisdictions using different definitions of RTW can be challenging for readers in general and when performing a systematic review in particular. In this study, the measurement properties of previously defined RTW outcomes were examined with data from two studies from two countries. Data on RTW in low back pain (LBP) from the Canadian Early Claimant Cohort (ECC); a workers' compensation based study, and the Dutch Amsterdam Sherbrooke Evaluation (ASE) study were analyzed. Correlations between outcomes, differences in predictive validity when using different outcomes and construct validity when comparing outcomes to a functional status outcome were analyzed. In the ECC all definitions were highly correlated and performed similarly in predictive validity. When compared to functional status, RTW definitions in the ECC study performed fair to good on all time points. In the ASE study all definitions were highly correlated and performed similarly in predictive validity. The RTW definitions, however, failed to compare or compared poorly with functional status. Only one definition compared fairly on one time point. Differently defined outcomes are highly correlated, give similar results in prediction, but seem to differ in construct validity when compared to functional status depending on societal context or possibly birth cohort. Comparison of studies using different RTW definitions appears valid as long as RTW status is not considered as a measure of functional status.
Energy-efficiency based classification of the manufacturing workstation
NASA Astrophysics Data System (ADS)
Frumuşanu, G.; Afteni, C.; Badea, N.; Epureanu, A.
2017-08-01
EU Directive 92/75/EC established for the first time an energy consumption labelling scheme, further implemented by several other directives. As a consequence, many products nowadays (e.g. home appliances, tyres, light bulbs, houses) carry an EU Energy Label when offered for sale or rent. Several energy consumption models of manufacturing equipment have also been developed. This paper proposes an energy efficiency-based classification of the manufacturing workstation, aiming to characterize its energetic behaviour. The concept of energy efficiency of the manufacturing workstation is defined. On this basis, a classification methodology has been developed. It covers specific criteria and their evaluation modalities, together with the definition and delimitation of energy efficiency classes. The energy class position is defined by the amount of energy needed by the workstation at the middle point of its operating domain, while its extension is determined by the value of the first coefficient of the Taylor series that approximates the dependence between the energy consumption and the chosen parameter of the working regime. The main domain of interest for this classification appears to be the optimization of manufacturing activity planning and programming. A case study regarding the classification of an actual lathe from the energy efficiency point of view, based on two different approaches (analytical and numerical), is also included.
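The two classification quantities can be read directly off an energy model of the workstation. The sketch below is a minimal interpretation of the abstract's description (class position = energy demand at the midpoint of the operating domain, class extension = the first Taylor coefficient there); the quadratic energy-versus-feed model for a lathe is purely hypothetical.

```python
import numpy as np

def classify(energy_model, p_min, p_max, eps=1e-6):
    """Return (class position, class extension) per the described scheme."""
    p_mid = 0.5 * (p_min + p_max)              # middle of operating domain
    position = energy_model(p_mid)             # energy needed at midpoint
    # first Taylor coefficient = slope dE/dp at the midpoint (central diff.)
    extension = (energy_model(p_mid + eps) - energy_model(p_mid - eps)) / (2 * eps)
    return position, extension

# hypothetical energy (kWh) vs. feed (mm/rev) model for a lathe workstation
pos, ext = classify(lambda f: 0.8 + 2.1 * f + 0.4 * f**2, 0.1, 0.5)
print(f"class position {pos:.3f} kWh, class extension {ext:.3f} kWh per unit feed")
```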
Entrenched obesity in childhood: findings from a national cohort study.
Cunningham, Solveig A; Datar, Ashlesha; Narayan, K M Venkat; Kramer, Michael R
2017-07-01
Given the high levels of obesity among U.S. children, we examine whether obesity in childhood is a passing phenomenon or remains entrenched into adolescence. Data are from the prospective nationally representative Early Childhood Longitudinal Study, Kindergarten Class of 1998-1999 (analytic sample = 6600). Anthropometrics were measured six times during 1998-2007. Overweight and obesity were defined using CDC cut-points. Entrenched obesity was defined as obesity between ages 5-9 coupled with persistent obesity at ages 11 and 14. Almost 30% of children experienced obesity at some point between ages 5.6 and 14.1 years; 63% of children who ever had obesity between ages 5.6 and 9.1 and 72% of those who had obesity at kindergarten entry experienced entrenched obesity. Children with severe obesity in kindergarten or who had obesity at more than 1 year during early elementary school were very likely to experience obesity through age 14, regardless of their sex, race, or socioeconomic backgrounds. Prevention should focus on early childhood, as obesity at school entry is not often a passing phenomenon. Even one time point of obesity measured during the early elementary school years may be an indicator of risk for long-term obesity. Copyright © 2017 Elsevier Inc. All rights reserved.
Effect of Reynolds number and turbulence on airfoil aerodynamics at -90-degree incidence
NASA Technical Reports Server (NTRS)
Stremel, Paul M.
1994-01-01
A method has been developed for calculating the viscous flow about airfoils with and without deflected flaps at -90 deg incidence. This method provides for the solution of the unsteady incompressible Navier-Stokes equations by means of an implicit technique. The solution is calculated on a body-fitted computational mesh using a staggered-grid method. The vorticity is defined at the node points, and the velocity components are defined at the mesh-cell sides. The staggered-grid orientation provides for accurate representation of vorticity at the node points and the continuity equation at the mesh-cell centers. The method provides for the noniterative solution of the flowfield and satisfies the continuity equation to machine zero at each time step. The method is evaluated in terms of its ability to predict two-dimensional flow about an airfoil at -90-deg incidence for varying Reynolds number and laminar/turbulent models. The variations of the average loading and surface pressure distribution due to flap deflection, Reynolds number, and laminar or turbulent flow are presented and compared with experimental results. The comparisons indicate that the calculated drag and drag reduction caused by flap deflection and the calculated average surface pressure are in excellent agreement with the measured results at a similar Reynolds number.
Defining, treating and preventing hospital acquired pneumonia: European perspective.
Torres, Antoni; Ewig, Santiago; Lode, Harmut; Carlet, Jean
2009-01-01
Many controversies still remain in the management of hospital-acquired pneumonia (HAP) and ventilator-associated pneumonia (VAP). Three European societies, the European Respiratory Society (ERS), the European Society of Clinical Microbiology and Infectious Diseases (ESCMID) and the European Society of Intensive Care Medicine (ESICM), were interested in producing a document on HAP and VAP with a European perspective. The scientific committee of each society designated one chairman: Antoni Torres (ERS), Harmut Lode (ESCMID) and Jean Carlet (ESICM). The chairmen of this Task Force suggested names from each society to serve as members of the panel. They also chose controversial topics in the field and others that were not covered by the last IDSA/ATS guidelines. Each topic was assigned to a pair of members to be reviewed and written up. Finally, the panel defined 20 consensual points that were circulated several times among the members of the panel until total agreement was reached. A combination of evidence and clinical experience was used to reach this consensus. This manuscript reviews in depth several controversial or new topics in HAP and VAP. In addition, 20 consensual points are presented. This manuscript may be useful for the development of future guidelines and to stimulate clinical research by laying out what is currently accepted and what is unknown or controversial.
NASA Astrophysics Data System (ADS)
Tournes, C.; Aucouturier, J.; Arnaud, B.; Brasile, J. P.; Convert, G.; Simon, M.
1992-07-01
A current-driven wiggler is the cornerstone of an innovative, compact, high-efficiency, transportable, tunable free-electron laser (FEL), the feasibility of which is currently being evaluated by Thomson-CSF. The salient advantages are: compactness of the FEL, along with the possibility to accelerate the beam through several successive passes through the accelerating section (the number of passes being defined by the final wavelength of the radiation, i.e., visible, MWIR, LWIR); the wiggler can be turned off and remain transparent to the beam until the last pass. Wiggler periodicities as small as 5 mm can be achieved, contributing to FEL compactness. To achieve overall efficiencies in the range of 10% at visible wavelengths, not only must the wiggler periodicity be variable, but the strength of the magnetic field of each period must be separately adjustable and fine-tuned versus time during the macropulse, so as to take into account the growing contribution of the wave energy in the cavity to the total ponderomotive force. The salient theoretical point of this design is the optimization of the parameters defining each period of the wiggler for each micropacket of the macropulse. The salient technology point is the mechanical and thermal design of the wiggler, which allows the high currents required to achieve magnetic fields up to 2 T.
Horridge, Karen A; Mcgarry, Kenneth; Williams, Jane; Whitlingum, Gabriel
2016-06-01
To pilot prospective data collection by paediatricians at the point of care across England using a defined terminology set; demonstrate feasibility of data collection and utility of data outputs; and confirm that counting the number of needs per child is valid for quantifying complexity. Paediatricians in 16 hospital and community settings collected and anonymized data. Participants completed a survey regarding the process. Data were analysed using R version 3.1.2. Overall, 8117 needs captured from 1224 consultations were recorded. Sixteen clinicians responded positively about the process and utility of data collection. The sum of needs varied significantly (p<0.01) by level of gross motor function ascertained using the Gross Motor Function Classification System for children with cerebral palsy; epilepsy severity as defined by level of expertise required to manage it; and by severity of intellectual disability. Prospective data collection at the point of clinical care proved possible without disrupting clinics, even for those with the most complex needs, and took the least time when done electronically. Counting the number of needs was easy to do, and quantified complexity in a way that informed clinical care for individuals and related directly to validated scales of functioning. Data outputs could inform more appropriate design and commissioning of quality services. © 2016 Mac Keith Press.
Valenza, Gaetano; Faes, Luca; Citi, Luca; Orini, Michele; Barbieri, Riccardo
2018-05-01
Measures of transfer entropy (TE) quantify the direction and strength of coupling between two complex systems. Standard approaches assume stationarity of the observations and are therefore unable to track time-varying changes in nonlinear information transfer with high temporal resolution. In this study, we aim to define and validate novel instantaneous measures of TE to provide an improved assessment of complex nonstationary cardiorespiratory interactions. We here propose a novel instantaneous point-process TE (ipTE) and validate its assessment as applied to cardiovascular and cardiorespiratory dynamics. In particular, heartbeat and respiratory dynamics are characterized through discrete time series, and modeled with probability density functions predicting the time of the next physiological event as a function of the past history. Likewise, nonstationary interactions between heartbeat and blood pressure dynamics are characterized as well. Furthermore, we propose a new measure of information transfer, the instantaneous point-process information transfer (ipInfTr), which is directly derived from point-process-based definitions of the Kolmogorov-Smirnov distance. Analysis on synthetic data, as well as on experimental data gathered from healthy subjects undergoing postural changes, confirms that ipTE as well as ipInfTr measures are able to dynamically track changes in physiological system coupling. This novel approach opens new avenues in the study of hidden, transient, nonstationary physiological states involving multivariate autonomic dynamics in cardiovascular health and disease. The proposed method can also be tailored for the study of complex multisystem physiology (e.g., brain-heart or, more generally, brain-body interactions).
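As background for readers, the sketch below computes plain stationary transfer entropy between two binarized event series with order-1 histories and plug-in probabilities. It is not the authors' instantaneous point-process estimator, only the simpler quantity that ipTE generalizes; the coupled test series are synthetic.

```python
import numpy as np

def transfer_entropy(x, y):
    """TE from x to y in bits, binary series, one-step histories."""
    counts = np.zeros((2, 2, 2))                # (y_next, y_past, x_past)
    for yn, yp, xp in zip(y[1:], y[:-1], x[:-1]):
        counts[yn, yp, xp] += 1
    p = counts / counts.sum()
    p_ypxp = p.sum(axis=0, keepdims=True)       # P(y_past, x_past)
    p_yp = p.sum(axis=(0, 2), keepdims=True)    # P(y_past)
    p_ynyp = p.sum(axis=2, keepdims=True)       # P(y_next, y_past)
    cond_full = p / np.where(p_ypxp > 0, p_ypxp, 1)     # P(yn | yp, xp)
    cond_self = p_ynyp / np.where(p_yp > 0, p_yp, 1)    # P(yn | yp)
    with np.errstate(divide='ignore', invalid='ignore'):
        log_ratio = np.where(p > 0, np.log2(cond_full / cond_self), 0.0)
    return float((p * log_ratio).sum())

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1) ^ (rng.random(5000) < 0.1)    # y copies x with lag, 10% flips
print("TE x->y:", round(transfer_entropy(x, y), 3),
      "| TE y->x:", round(transfer_entropy(y, x), 3))
```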
Stability of Alprostadil in 0.9% Sodium Chloride Stored in Polyvinyl Chloride Containers.
McCluskey, Susan V; Kirkham, Kylian; Munson, Jessica M
2017-01-01
The stability of alprostadil diluted in 0.9% sodium chloride stored in polyvinyl chloride (VIAFLEX) containers at refrigerated temperature, protected from light, is reported. Five solutions of alprostadil 11 mcg/mL were prepared in 250 mL 0.9% sodium chloride polyvinyl chloride (PL146) containers. The final concentration of alcohol was 2%. Samples were stored under refrigeration (2°C to 8°C) with protection from light. Two containers were submitted for potency testing and analyzed in duplicate with the stability-indicating high-performance liquid chromatography assay at specific time points over 14 days. Three containers were submitted for pH and visual testing at specific time points over 14 days. Stability was defined as retention of 90% to 110% of initial alprostadil concentration, with maintenance of the original clear, colorless, and visually particulate-free solution. Study results reported retention of 90% to 110% initial alprostadil concentration at all time points through day 10. One sample exceeded 110% potency at day 14. pH values did not change appreciably over the 14 days. There were no color changes or particle formation detected in the solutions over the study period. This study concluded that during refrigerated, light-protected storage in polyvinyl chloride (VIAFLEX) containers, a commercial alcohol-containing alprostadil formulation diluted to 11 mcg/mL with 0.9% sodium chloride 250 mL was stable for 10 days. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
Sullivan, Julie M.; Prasanna, Pataje G. S.; Grace, Marcy B.; Wathen, Lynne; Wallace, Rodney L.; Koerner, John F.; Coleman, C. Norman
2013-01-01
Following a mass-casualty nuclear disaster, effective medical triage has the potential to save tens of thousands of lives. In order to best use the available scarce resources, there is an urgent need for biodosimetry tools to determine an individual’s radiation dose. Initial triage for radiation exposure will include location during the incident, symptoms, and physical examination. Stepwise triage will include point of care assessment of less than or greater than 2 Gy, followed by secondary assessment, possibly with high throughput screening, to further define an individual’s dose. Given the multisystem nature of radiation injury, it is unlikely that any single biodosimetry assay can be used as a stand-alone tool to meet the surge in capacity with the timeliness and accuracy needed. As part of the national preparedness and planning for a nuclear or radiological incident, we reviewed the primary literature to determine the capabilities and limitations of a number of biodosimetry assays currently available or under development for use in the initial and secondary triage of patients. Understanding the requirements from a response standpoint and the capability and logistics for the various assays will help inform future biodosimetry technology development and acquisition. Factors considered include: type of sample required, dose detection limit, time interval when the assay is feasible biologically, time for sample preparation and analysis, ease of use, logistical requirements, potential throughput, point-of-care capability, and the ability to support patient diagnosis and treatment within a therapeutically relevant time point. PMID:24162058
Estimating the Critical Point of Crowding in the Emergency Department for the Warning System
NASA Astrophysics Data System (ADS)
Chang, Y.; Pan, C.; Tseng, C.; Wen, J.
2011-12-01
The purpose of this study is to deduce a function from the admission/discharge rates of patient flow to estimate a "Critical Point" that provides a reference for warning systems with regard to crowding in the emergency department (ED) of a hospital or medical clinic. In this study, a model of "Input-Throughput-Output" was used in our established mathematical function to evaluate the critical point. The function is defined as dPin/dt = dPwait/dt + Cp×B + dPout/dt, where Pin = number of registered patients, Pwait = number of waiting patients, Cp = retention rate per bed (calculated for the critical point), B = number of licensed beds in the treatment area, and Pout = number of patients discharged from the treatment area. Using the average Cp of ED crowding, we could start the warning system at an appropriate time and then plan the necessary emergency response to facilitate patient flow more smoothly. It was concluded that ED crowding could be quantified using the average value of Cp and that this value could be used as a reference for medical staff to give optimal emergency medical treatment to patients. Therefore, additional practical work should be launched to collect more precise quantitative data.
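Rearranging the flow balance gives Cp = (dPin/dt - dPwait/dt - dPout/dt) / B, so Cp can be estimated directly from time-stamped patient counts. The sketch below illustrates this with finite differences on invented hourly data; the numbers and the hourly resolution are assumptions, not values from the study.

```python
import numpy as np

B = 30                                         # licensed treatment beds
t = np.arange(6.0)                             # time (hours)
P_in   = np.array([0, 12, 26, 43, 61, 80.0])   # cumulative registrations
P_wait = np.array([0,  2,  5,  9, 12, 16.0])   # patients currently waiting
P_out  = np.array([0,  8, 17, 27, 38, 50.0])   # cumulative discharges

# finite-difference rates, then solve the balance for the retention rate
dPin, dPwait, dPout = (np.gradient(P, t) for P in (P_in, P_wait, P_out))
Cp = (dPin - dPwait - dPout) / B
print("hourly Cp:", np.round(Cp, 3))
print("average Cp (warning-system reference):", round(float(Cp.mean()), 3))
```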
Quantum group spin nets: Refinement limit and relation to spin foams
NASA Astrophysics Data System (ADS)
Dittrich, Bianca; Martin-Benito, Mercedes; Steinhaus, Sebastian
2014-07-01
So far spin foam models are hardly understood beyond a few of their basic building blocks. To make progress on this question, we define analogue spin foam models, so-called "spin nets," for quantum groups SU(2)k and examine their effective continuum dynamics via tensor network renormalization. In the refinement limit of this coarse-graining procedure, we find a vast nontrivial fixed-point structure beyond the degenerate and the BF phase. In comparison to previous work, we use fixed-point intertwiners, inspired by Reisenberger's construction principle [M. P. Reisenberger, J. Math. Phys. (N.Y.) 40, 2046 (1999)] and the recent work [B. Dittrich and W. Kaminski, arXiv:1311.1798], as the initial parametrization. In this new parametrization fine-tuning is not required in order to flow to these new fixed points. Encouragingly, each fixed point has an associated extended phase, which allows for the study of phase transitions in the future. Finally we also present an interpretation of spin nets in terms of melonic spin foams. The coarse-graining flow of spin nets can thus be interpreted as describing the effective coupling between two spin foam vertices or space time atoms.
Single toxin dose-response models revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demidenko, Eugene, E-mail: eugened@dartmouth.edu
The goal of this paper is to offer a rigorous analysis of the sigmoid-shaped single toxin dose-response relationship. The toxin efficacy function is introduced and four special points, including maximum toxin efficacy and inflection points, on the dose-response curve are defined. The special points define three phases of the toxin effect on mortality: (1) toxin concentrations smaller than the first inflection point or (2) larger than the second inflection point imply a low mortality rate, and (3) concentrations between the first and the second inflection points imply a high mortality rate. Probabilistic interpretation and mathematical analysis for each of the four models, Hill, logit, probit, and Weibull, are provided. Two general model extensions are introduced: (1) the multi-target hit model that accounts for the existence of several vital receptors affected by the toxin, and (2) a model with nonzero mortality at zero concentration to account for natural mortality. Special attention is given to statistical estimation in the framework of the generalized linear model with the binomial dependent variable as the mortality count in each experiment, contrary to the widespread nonlinear regression treating the mortality rate as a continuous variable. The models are illustrated using standard EPA Daphnia acute (48 h) toxicity tests with mortality as a function of NiCl or CuSO4 toxin. Highlights: • The paper offers a rigorous study of a sigmoid dose-response relationship. • The concentration with the highest mortality rate is rigorously defined. • A table with four special points for five mortality curves is presented. • Two new sigmoid dose-response models have been introduced. • The generalized linear model is advocated for estimation of the sigmoid dose-response relationship.
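A minimal sketch of the estimation framework the paper advocates, a binomial GLM on mortality counts rather than nonlinear regression on rates, is shown below with a probit link via statsmodels; the concentrations and counts are invented for illustration and are not the paper's EPA Daphnia data.

```python
import numpy as np
import statsmodels.api as sm

conc   = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # toxin concentration
n      = np.array([20, 20, 20, 20, 20])        # animals per experiment
deaths = np.array([1, 3, 9, 16, 19])           # observed mortality counts

X = sm.add_constant(np.log(conc))              # probit on log-concentration
endog = np.column_stack([deaths, n - deaths])  # (successes, failures)
model = sm.GLM(endog, X,
               family=sm.families.Binomial(link=sm.families.links.Probit()))
fit = model.fit()
print("coefficients:", np.round(fit.params, 3))
lc50 = np.exp(-fit.params[0] / fit.params[1])  # 50% mortality concentration
print("estimated LC50:", round(float(lc50), 2))
```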
NASA Technical Reports Server (NTRS)
Frew, A. M.; Eisenhut, D. F.; Farrenkopf, R. L.; Gates, R. F.; Iwens, R. P.; Kirby, D. K.; Mann, R. J.; Spencer, D. J.; Tsou, H. S.; Zaremba, J. G.
1972-01-01
The precision pointing control system (PPCS) is an integrated system for precision attitude determination and orientation of gimbaled experiment platforms. The PPCS concept configures the system to perform orientation of up to six independent gimbaled experiment platforms to design goal accuracy of 0.001 degrees, and to operate in conjunction with a three-axis stabilized earth-oriented spacecraft in orbits ranging from low altitude (200-2500 n.m., sun synchronous) to 24 hour geosynchronous, with a design goal life of 3 to 5 years. The system comprises two complementary functions: (1) attitude determination where the attitude of a defined set of body-fixed reference axes is determined relative to a known set of reference axes fixed in inertial space; and (2) pointing control where gimbal orientation is controlled, open-loop (without use of payload error/feedback) with respect to a defined set of body-fixed reference axes to produce pointing to a desired target.
A mesh gradient technique for numerical optimization
NASA Technical Reports Server (NTRS)
Willis, E. A., Jr.
1973-01-01
A class of successive-improvement optimization methods, in which directions of descent are defined in the state space along each trial trajectory, is considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data are used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.
Spatial and temporal dependence of the convective electric field in Saturn’s inner magnetosphere
NASA Astrophysics Data System (ADS)
Andriopoulou, M.; Roussos, E.; Krupp, N.; Paranicas, C.; Thomsen, M.; Krimigis, S.; Dougherty, M. K.; Glassmeier, K.-H.
2014-02-01
The recently established presence of a convective electric field in Saturn's inner and middle magnetosphere, with an average pointing approximately towards midnight and an intensity less than 1 mV/m, is one of the most puzzling findings by the Cassini spacecraft. In order to better characterize the properties of this electric field, we augmented the original analysis method used to identify it (Andriopoulou et al., 2012) and applied it to an extended energetic electron microsignature dataset, constructed from observations in the vicinity of four saturnian moons. We study the average characteristics of the convective pattern and additionally its temporal and spatial variations. In our updated dataset we include data from the recent Cassini orbits and also microsignatures from the two moons Rhea and Enceladus, allowing us to further extend this analysis to cover a greater time period as well as larger radial distances within the saturnian magnetosphere. When data from the larger radial range and more recent orbits are included, we find that the originally inferred electric field pattern persists, and in fact penetrates at least as far in as the orbit of Enceladus, a region of particular interest due to the plasma loading that takes place there. We perform our electric field calculations by setting the orientation of the electric field as a free, time-dependent parameter, removing the pointing constraints of previous works. Both analytical and numerical techniques have been employed, which help us overcome possible errors that could have been introduced by simplified assumptions used previously. We find that the average electric field pointing is not directed exactly at midnight, as we initially assumed, but is stably displaced by approximately 12-32° from midnight, towards dawn. The fact, however, that the field's pointing is much more variable on short time scales, in addition to our observation that it penetrates inside the orbit of Enceladus (∼4 Rs), may suggest that the convective pattern dominates all the way down to the main rings (2.2 Rs), when data from the Saturn Orbit Insertion are factored in. We also report changes in the electric field strength and pointing over the course of time, possibly related to seasonal effects, with the largest changes occurring during a period that envelopes the saturnian equinox. Finally, the average electric field strength seems to be sensitive to radial distance, exhibiting a drop as we move further out in the magnetosphere, confirming earlier results. This drop-off, however, appears to be more intense in the earlier years of the mission. Between 2010 and 2012 the electric field is quasi-uniform, at least between the L-shells of Tethys and Dione. These new findings provide constraints on the possible electric field sources that might be causing such a convection pattern, which has not been observed before in other planetary magnetospheres. The very well defined values of the field's average properties may suggest a periodic variation of the convective pattern, which can average out very effectively the much larger changes in both pointing and intensity over short time scales, although this period cannot be defined. The slight evidence of changes in the properties across the equinox (seasonal control) may also hint that the source of the electric field resides in the planet's atmosphere/ionosphere system.
NASA Astrophysics Data System (ADS)
Cristescu, Constantin P.; Stan, Cristina; Scarlat, Eugen I.; Minea, Teofil; Cristescu, Cristina M.
2012-04-01
We present a novel method for the parameter-oriented analysis of mutual correlation between independent time series or between equivalent structures such as ordered data sets. The proposed method is based on the sliding window technique, defines a new type of correlation measure and can be applied to time series from all domains of science and technology, experimental or simulated. A specific parameter that can characterize the time series is computed for each window, and a cross-correlation analysis is carried out on the sets of values obtained for the time series under investigation. We apply this method to the study of some daily currency exchange rates from the point of view of the Hurst exponent and the intermittency parameter. Interesting correlation relationships are revealed and a tentative crisis prediction is presented.
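A minimal sketch of the windowed-parameter correlation idea, under stated assumptions: a crude rescaled-range Hurst estimate serves as the per-window parameter (the paper also uses an intermittency parameter), the window and step sizes are arbitrary, and the two series are synthetic stand-ins for exchange rates.

```python
import numpy as np

def hurst_rs(x):
    """Crude single-window rescaled-range (R/S) Hurst estimate."""
    y = np.cumsum(x - x.mean())
    r, s = y.max() - y.min(), x.std()
    return np.log(r / s) / np.log(len(x)) if s > 0 else np.nan

def windowed(series, f, win=250, step=25):
    return np.array([f(series[i:i + win])
                     for i in range(0, len(series) - win + 1, step)])

rng = np.random.default_rng(1)
a = rng.standard_normal(3000).cumsum()             # synthetic "rate" 1
b = a + 0.5 * rng.standard_normal(3000).cumsum()   # coupled "rate" 2

ha = windowed(np.diff(a), hurst_rs)                # parameter per window
hb = windowed(np.diff(b), hurst_rs)
print("correlation of windowed Hurst exponents:",
      round(float(np.corrcoef(ha, hb)[0, 1]), 3))
```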
Time to rethink the neural mechanisms of learning and memory
Gallistel, Charles R.; Balsam, Peter D
2014-01-01
Most studies in the neurobiology of learning assume that the underlying learning process is a pairing-dependent change in synaptic strength that requires repeated experience of events presented in close temporal contiguity. However, much learning is rapid and does not depend on temporal contiguity, which has never been precisely defined. These points are well illustrated by studies showing that temporal relationships between events are rapidly learned, even over long delays, and that this knowledge governs the form and timing of behavior. The speed with which anticipatory responses emerge in conditioning paradigms is determined by the information that cues provide about the timing of rewards. The challenge for understanding the neurobiology of learning is to understand the mechanisms in the nervous system that encode information from even a single experience, the nature of the memory mechanisms that can encode quantities such as time, and how the brain can flexibly perform computations based on this information. PMID:24309167
Exploring biomedical ontology mappings with graph theory methods.
Kocbek, Simon; Kim, Jin-Dong
2017-01-01
In the era of the semantic web, life science ontologies play an important role in tasks such as annotating biological objects, linking relevant data pieces, and verifying data consistency. Understanding ontology structures and overlapping ontologies is essential for tasks such as ontology reuse and development. We present an exploratory study where we examine structure and look for patterns in BioPortal, a comprehensive publicly available repository of life science ontologies. We report an analysis of biomedical ontology mapping data over time. We apply graph theory methods such as modularity analysis and betweenness centrality to analyse data gathered at five different time points. We identify communities, i.e., sets of overlapping ontologies, and define similar and closest communities. We demonstrate the evolution of identified communities over time and identify core ontologies of the closest communities. We use BioPortal project and category data to measure community coherence. We also validate identified communities with their mutual mentions in the scientific literature. By comparing mapping data gathered at five different time points, we identified similar and closest communities of overlapping ontologies, and demonstrated the evolution of communities over time. Results showed that anatomy and health ontologies tend to form more isolated communities compared to other categories. We also showed that communities contain all or the majority of ontologies being used in narrower projects. In addition, we identified major changes in mapping data after migration to BioPortal Version 4.
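A hedged sketch of the graph-theoretic pipeline on a toy mapping graph is shown below using networkx; the ontology names and edge weights are invented, whereas the actual study ran on BioPortal mapping data.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# edges: pairs of ontologies that share term mappings (weight = #mappings)
edges = [("GO", "PRO", 120), ("GO", "CHEBI", 80), ("PRO", "CHEBI", 60),
         ("FMA", "UBERON", 200), ("UBERON", "MA", 150), ("FMA", "MA", 90),
         ("GO", "UBERON", 5)]
G = nx.Graph()
G.add_weighted_edges_from(edges)

# communities of overlapping ontologies via modularity maximization
for i, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"community {i}: {sorted(community)}")

# betweenness centrality flags 'bridging' (core) ontologies
bc = nx.betweenness_centrality(G)
core = max(bc, key=bc.get)
print("highest betweenness:", core, round(bc[core], 3))
```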
Wetterling, Friedrich; Gallagher, Lindsay; Mullin, Jim; Holmes, William M; McCabe, Chris; Macrae, I Mhairi; Fagan, Andrew J
2015-01-01
Tissue sodium concentration increases in irreversibly damaged (core) tissue following ischemic stroke and can potentially help to differentiate the core from the adjacent hypoperfused but viable penumbra. To test this, multinuclear hydrogen-1/sodium-23 magnetic resonance imaging (MRI) was used to measure the changing sodium signal and hydrogen-apparent diffusion coefficient (ADC) in the ischemic core and penumbra after rat middle cerebral artery occlusion (MCAO). Penumbra and core were defined from perfusion imaging and histologically defined irreversibly damaged tissue. The sodium signal in the core increased linearly with time, whereas the ADC rapidly decreased by >30% within 20 minutes of stroke onset, with very little change thereafter (0.5–6 hours after MCAO). Previous reports suggest that the time point at which tissue sodium signal starts to rise above normal (onset of elevated tissue sodium, OETS) represents stroke onset time (SOT). However, extrapolating core data back in time resulted in a delay of 72±24 minutes in OETS compared with actual SOT. At the OETS in the core, penumbra sodium signal was significantly decreased (88±6%, P=0.0008), whereas penumbra ADC was not significantly different (92±18%, P=0.2) from contralateral tissue. In conclusion, reduced sodium-MRI signal may serve as a viability marker for penumbra detection and can complement hydrogen ADC and perfusion MRI in the time-independent assessment of tissue fate in acute stroke patients. PMID:25335803
NASA Astrophysics Data System (ADS)
Özcan, Abdullah; Rivière-Lorphèvre, Edouard; Ducobu, François
2018-05-01
In part manufacturing, an efficient process should minimize the cycle time needed to reach the prescribed quality on the part. In order to optimize it, the machining time needs to be as low as possible and the quality needs to meet the requirements. For a 2D milling toolpath defined by sharp corners, the reachable feedrate is different from the programmed feedrate due to the kinematic limits of the motor drives. This phenomenon leads to a loss of productivity. Smoothing the toolpath significantly reduces the machining time, but the dimensional accuracy should not be neglected. Therefore, a way to address the problem of optimizing a toolpath in part manufacturing is to take into account both the manufacturing time and the part quality. On one hand, maximizing the feedrate will minimize the manufacturing time and, on the other hand, the maximum of the contour error needs to be kept under a threshold to meet the quality requirements. This paper presents a method to optimize sharp corner smoothing using B-spline curves by adjusting the control points defining the curve. The objective function used in the optimization process is based on the contour error and the difference between the programmed feedrate and an estimate of the reachable feedrate. The estimate of the reachable feedrate is based on geometrical information. Some simulation results are presented in the paper and the machining times are compared in each case.
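The sketch below illustrates the underlying trade-off in a hedged form (it is not the paper's exact objective function): a clamped cubic B-spline bridges a sharp 90° corner, and the control-point offset is reduced until the contour error, the spline's closest approach to the programmed corner, meets a tolerance. The geometry, tolerance, and search strategy are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import BSpline

def corner_spline(corner, p_in, p_out, d):
    """Clamped cubic B-spline bridging the corner; d = control offset (mm)."""
    u_in = (p_in - corner) / np.linalg.norm(p_in - corner)
    u_out = (p_out - corner) / np.linalg.norm(p_out - corner)
    ctrl = np.array([corner + 2 * d * u_in, corner + d * u_in,
                     corner + d * u_out, corner + 2 * d * u_out])
    knots = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
    return BSpline(knots, ctrl, 3)

corner = np.array([50.0, 0.0])                  # sharp 90-degree corner
p_in, p_out = np.array([0.0, 0.0]), np.array([50.0, 50.0])
tol = 0.1                                       # contour tolerance (mm)

for d in np.linspace(2.0, 0.05, 40):            # shrink the smoothing offset
    pts = corner_spline(corner, p_in, p_out, d)(np.linspace(0, 1, 200))
    err = np.min(np.linalg.norm(pts - corner, axis=1))  # closest approach
    if err <= tol:
        break
print(f"offset d = {d:.2f} mm gives contour error {err:.3f} mm")
```

Larger offsets give smoother corners (hence higher reachable feedrates) at the price of larger contour error, which is exactly the tension the described objective function balances.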
Monte Carlo simulation of induction time and metastable zone width; stochastic or deterministic?
NASA Astrophysics Data System (ADS)
Kubota, Noriaki
2018-03-01
The induction time and metastable zone width (MSZW) measured for small samples (say 1 mL or less) both scatter widely; these two quantities are thus observed as stochastic. For large samples (say 1000 mL or more), by contrast, the induction time and MSZW are observed as deterministic quantities. The reason for this experimental difference is investigated with Monte Carlo simulation. In the simulation, the time (under isothermal conditions) and supercooling (under polythermal conditions) at which a first single crystal is detected are defined as the induction time t and the MSZW ΔT for small samples, respectively. The number of crystals just at the moment of t and ΔT is unity. A first crystal emerges at random due to the intrinsic nature of nucleation; accordingly, t and ΔT become stochastic. For large samples, the time and supercooling at which the number density of crystals N/V reaches a detector sensitivity (N/V)det are defined as t and ΔT for isothermal and polythermal conditions, respectively. The points of t and ΔT are those at which a large number of crystals have accumulated. Consequently, t and ΔT become deterministic according to the law of large numbers. Whether t and ΔT are stochastic or deterministic in actual experiments should not be attributed to a change in nucleation mechanisms at the molecular level. It could be just a consequence of differences in the experimental definition of t and ΔT.
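A compact Monte Carlo sketch of this argument treats nucleation as a homogeneous Poisson process with rate J per unit volume; detection occurs once N/V reaches the sensitivity (N/V)det. The rate and detector values below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(42)
J = 1.0e3                   # nucleation rate, crystals / (L * min)
n_runs = 1000

def induction_time(V, detect_density):
    """Time for crystal number density to reach detect_density (1/L)."""
    n_detect = max(1, int(np.ceil(detect_density * V)))
    # sum of n_detect exponential waiting times at rate J*V ~ Gamma-distributed
    return rng.gamma(n_detect, 1.0 / (J * V))

for V in (1e-3, 1.0, 1e3):                      # 1 mL, 1 L, 1000 L samples
    t = np.array([induction_time(V, 1e3) for _ in range(n_runs)])
    print(f"V = {V:8.0e} L: mean t = {t.mean():6.3f} min, "
          f"CV = {t.std() / t.mean():.3f}")
```

With these numbers the mean induction time is the same at every volume, but its relative scatter falls from order one for the 1 mL sample to a fraction of a percent at 1000 L, exactly the law-of-large-numbers effect the abstract invokes.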
Boundary|Time|Surface: Art and Geology Meet in Gros Morne National Park, NL, Canada
NASA Astrophysics Data System (ADS)
Lancaster, Sydney; Waldron, John
2015-04-01
Environmental Art works range in scope from major permanent interventions in the landscape to less intrusive, more ephemeral site-specific installations constructed of materials from the local environment. Despite this range of intervention, however, these works all share in a tradition of art making that situates the artwork in direct response to the surrounding landscape. Andy Goldsworthy and Richard Long, for example, both favour methods that combine elements of both sculpture and performance in the creation of non-permanent interventions in the landscape, and both rely upon photographic, text-based, or video documentation as the only lasting indication of the works' existence. Similarly, Earth Scientists are responsible for interventions in the landscape, both physical and conceptual. For example, in Earth science, the periods of the geologic timescale - Cambrian, Ordovician, Silurian, etc. - were established by 19th century pioneers of geology at a time when they were believed to represent natural chapters in Earth history. Since the mid-20th century, stratigraphers have attempted to resolve ambiguities in the original definitions by defining stratotypes: sections of continuously deposited strata where a single horizon is chosen as a boundary. One such international stratotype, marking the Cambrian-Ordovician boundary, is defined at Green Point in Gros Morne National Park, Newfoundland. Boundary|Time|Surface was an ephemeral sculptural installation work constructed in June 2014. The main installation work was a fence of 52 vertical driftwood poles, 2-3 m tall, positioned precisely along the boundary stratotype horizon at Green Point in Newfoundland. The fence extended across a 150 m wave-cut platform from sea cliffs to the low-water mark, separating Ordovician from Cambrian strata. The installation was constructed by hand (with volunteer assistance) on June 22, as the wave-cut platform was exposed by the falling tide. During the remainder of the tidal cycle, and the following days, we allowed the fence to be dismantled by wave action and the incoming flood tide. The cycle of construction and destruction was documented in video and with time-lapse still photography. This project provided an opportunity for viewers to contemplate the brevity of human experience relative to the enormity of time, and the fragile and arbitrary nature of human-defined boundaries of all types. Future exhibitions of the documentation of this work are envisaged, which will provide opportunities for the public to interact with still and video images of the work directly, both as aesthetic objects and as sources of information regarding the geological and socio-political history of the site.
Event and Apparent Horizon Finders for 3 + 1 Numerical Relativity.
Thornburg, Jonathan
2007-01-01
Event and apparent horizons are key diagnostics for the presence and properties of black holes. In this article I review numerical algorithms and codes for finding event and apparent horizons in numerically-computed spacetimes, focusing on calculations done using the 3 + 1 ADM formalism. The event horizon of an asymptotically-flat spacetime is the boundary between those events from which a future-pointing null geodesic can reach future null infinity and those events from which no such geodesic exists. The event horizon is a (continuous) null surface in spacetime. The event horizon is defined nonlocally in time: it is a global property of the entire spacetime and must be found in a separate post-processing phase after all (or at least the nonstationary part) of spacetime has been numerically computed. There are three basic algorithms for finding event horizons, based on integrating null geodesics forwards in time, integrating null geodesics backwards in time, and integrating null surfaces backwards in time. The last of these is generally the most efficient and accurate. In contrast to an event horizon, an apparent horizon is defined locally in time in a spacelike slice and depends only on data in that slice, so it can be (and usually is) found during the numerical computation of a spacetime. A marginally outer trapped surface (MOTS) in a slice is a smooth closed 2-surface whose future-pointing outgoing null geodesics have zero expansion Θ. An apparent horizon is then defined as a MOTS not contained in any other MOTS. The MOTS condition is a nonlinear elliptic partial differential equation (PDE) for the surface shape, containing the ADM 3-metric, its spatial derivatives, and the extrinsic curvature as coefficients. Most "apparent horizon" finders actually find MOTSs. There are a large number of apparent horizon finding algorithms, with differing trade-offs between speed, robustness, accuracy, and ease of programming. In axisymmetry, shooting algorithms work well and are fairly easy to program. In slices with no continuous symmetries, spectral integral-iteration algorithms and elliptic-PDE algorithms are fast and accurate, but require good initial guesses to converge. In many cases, Schnetter's "pretracking" algorithm can greatly improve an elliptic-PDE algorithm's robustness. Flow algorithms are generally quite slow but can be very robust in their convergence. Minimization methods are slow and relatively inaccurate in the context of a finite differencing simulation, but in a spectral code they can be relatively faster and more robust.
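For reference, the MOTS condition described above is the vanishing of the outgoing null expansion, which in ADM variables reads

\Theta = D_i s^i + K_{ij} s^i s^j - K = 0,

where s^i is the outward-pointing unit normal to the surface within the slice, D_i the covariant derivative compatible with the 3-metric, K_{ij} the extrinsic curvature, and K its trace; apparent-horizon finders solve this elliptic equation for the surface shape.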
The existence of negative absolute temperatures in Axelrod’s social influence model
NASA Astrophysics Data System (ADS)
Villegas-Febres, J. C.; Olivares-Rivas, W.
2008-06-01
We introduce the concept of temperature as an order parameter in the standard Axelrod social influence model. It is defined as the relation between suitably defined entropy and energy functions, T = (∂E/∂S). We show that at the critical point, where the order/disorder transition occurs, this absolute temperature changes sign. At this point, which corresponds to the homogeneous/heterogeneous culture transition, the entropy of the system shows a maximum. We discuss the relationship between the temperature and other properties of the model in terms of cultural traits.
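For readers unfamiliar with the underlying dynamics, a minimal sketch of one interaction step of the standard Axelrod model follows (the paper's entropy and energy functions are not reproduced here; lattice size and trait counts are illustrative):

```python
# One Monte Carlo step of the standard Axelrod model: each site holds F
# cultural features with q possible traits; neighbours interact with
# probability equal to their cultural overlap and copy one differing trait.
import numpy as np

rng = np.random.default_rng(0)
L, F, q = 20, 5, 10                                  # lattice size, features, traits
culture = rng.integers(0, q, size=(L, L, F))
NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step():
    i, j = rng.integers(0, L, size=2)                # pick a random site
    di, dj = NEIGHBOURS[rng.integers(4)]             # and a random neighbour
    a, b = culture[i, j], culture[(i + di) % L, (j + dj) % L]
    overlap = np.mean(a == b)                        # fraction of shared traits
    if 0 < overlap < 1 and rng.random() < overlap:   # interact w.p. = overlap
        k = rng.choice(np.flatnonzero(a != b))       # a feature they differ on
        a[k] = b[k]                                  # adopt the neighbour's trait

for _ in range(200_000):                             # relax toward (in)homogeneity
    step()
```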
A Sixteen Node Shell Element with a Matrix Stabilization Scheme.
1987-04-22
coordinates with components x, y and z are defined on the shell midsurface in addition to global coordinates with components X, Y and Z. The x, y and z axes... midsurface, while a3 is normal to the surface. The a1, a2 and a3 vectors are given at each node as an input. In addition, they are defined at each integra...drawn from the point on the midsurface to the generic material point, t is the shell thickness and the nondimensional coordinate ζ runs from -1 to 1
When is a theory a theory? A case example.
Alkin, Marvin C
2017-08-01
This discussion comments on the approximately 20-year history of writings on the prescriptive theory called Empowerment Evaluation. Doing so involves examining how "Empowerment Evaluation Theory" has been defined at various points in time (particularly in 1996 and now in 2015). Defining a theory is different from judging the success of a theory. This latter topic has been addressed elsewhere by Michael Scriven, Michael Patton, and Brad Cousins. I am initially guided by the work of Robin Miller (2010), who has written on the issue of how to judge the success of a theory. In doing so, she provided potential standards for judging the adequacy of theories. My task is not judging the adequacy or success of the Empowerment Evaluation prescriptive theory in practice, but determining how well the theory is delineated. That is, to what extent do the writings qualify as a prescriptive theory? Copyright © 2016 Elsevier Ltd. All rights reserved.
Jeong, Sunho; Song, Hae Chun; Lee, Won Woo; Lee, Sun Sook; Choi, Youngmin; Son, Wonil; Kim, Eui Duk; Paik, Choon Hoon; Oh, Seok Heon; Ryu, Beyong-Hwan
2011-03-15
With the aim of inkjet printing highly conductive and well-defined Cu features on plastic substrates, an aqueous-based Cu ink is prepared for the first time using water-soluble Cu nanoparticles with a very thin surface oxide layer. Owing to the specific properties of water, its high surface tension and low boiling point, the aqueous-based Cu ink offers a variety of advantages over conventional Cu inks based on organic solvents in printing narrow conductive patterns without irregular morphologies. It is demonstrated how the design of the aqueous-based ink affects the basic properties of printed conductive features such as surface morphology, microstructure, conductivity, and line width. The long-term stability of the aqueous-based Cu ink against oxidation is analyzed through an X-ray photoelectron spectroscopy (XPS) based investigation of the evolution of the surface oxide layer in the ink.
Current state of the art for enhancing urine biomarker discovery
Harpole, Michael; Davis, Justin; Espina, Virginia
2016-01-01
Urine is a highly desirable biospecimen for biomarker analysis because it can be collected recurrently by non-invasive techniques, in relatively large volumes. Urine contains cellular elements, biochemicals, and proteins derived from glomerular filtration of plasma, renal tubule excretion, and urogenital tract secretions that reflect, at a given time point, an individual's metabolic and pathophysiologic state. High-resolution mass spectrometry, coupled with state-of-the-art fractionation systems, is revealing the plethora of diagnostic/prognostic proteomic information existing within urinary exosomes, glycoproteins, and proteins. Affinity capture pre-processing techniques such as combinatorial peptide ligand libraries and biomarker-harvesting hydrogel nanoparticles are enabling measurement/identification of previously undetectable urinary proteins. Future challenges in the urinary proteomics field include a) defining either single or multiple, universally applicable data normalization methods for comparing results within and between individual patients/data sets, and b) defining expected urinary protein levels in healthy individuals. PMID:27232439
Chung, Kian Fan
2017-09-30
Asthma is a heterogeneous disease comprising several phenotypes driven by different pathways. To define these phenotypes or endotypes (phenotypes defined by mechanisms), an unbiased approach to clustering of various omics platforms will yield molecular phenotypes from which composite biomarkers can be obtained. Biomarkers can help differentiate between these phenotypes and pinpoint patients suitable for specific targeted therapies, the basis for personalised medicine. Biomarkers need to be linked to point-of-care biomarkers that may be measured readily in exhaled breath, blood or urine. The potential for using mobile healthcare approaches will help patient empowerment, an essential tool for personalised medicine. Personalised medicine in asthma is not far off; it is already here, but we need more tools and implements to carry it out for the benefit of our patients. Copyright ©ERS 2017.
Observation Planning Made Simple with Science Opportunity Analyzer (SOA)
NASA Technical Reports Server (NTRS)
Streiffert, Barbara A.; Polanskey, Carol A.
2004-01-01
As NASA undertakes the exploration of the Moon and Mars as well as the rest of the Solar System, while continuing to investigate Earth's oceans, winds, atmosphere, weather, etc., the need to allow operations users to easily define their observations keeps increasing. Operations teams need to be able to determine the best time to perform an observation, as well as its duration and other parameters such as the observation target. In addition, operations teams need to be able to check the observation for validity against objectives and intent as well as spacecraft constraints such as turn rates and acceleration or pointing exclusion zones. Science Opportunity Analyzer (SOA), in development for the last six years, is a multi-mission toolset that has been built to meet those needs. Operations team members can follow six simple steps to define an observation without having to know the complexities of orbital mechanics, coordinate transformations, or the spacecraft itself.
Shen, Peiping; Zhang, Tongli; Wang, Chunfeng
2017-01-01
This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the grid points of interest. Based on a computational complexity analysis, it is proved that the proposed algorithm is a fully polynomial time approximation scheme when the number of ratio terms in the objective function of problem (P) is fixed. In contrast to existing results in the literature, the algorithm does not require assumptions of quasi-concavity or low rank of the objective function of problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.
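The grid-plus-LP-feasibility idea can be illustrated on the simplest member of the class, a single ratio of affine functions over a polytope. The sketch below is my own construction with invented data and a plain geometric grid rather than the paper's nonuniform one: it keeps the largest grid value t for which a linear program certifies that ratio level t is attainable.

```python
# Scan candidate ratio levels t for max (a.x + b)/(c.x + d) over G x <= h,
# assuming c.x + d > 0 on the polytope: level t is attainable iff
# max over x of (a - t*c).x + (b - t*d) is >= 0, which is one LP per t.
import numpy as np
from scipy.optimize import linprog

a, b = np.array([3.0, 1.0]), 1.0
c, d = np.array([1.0, 2.0]), 4.0
G = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])   # x1 + x2 <= 1, x >= 0
h = np.array([1.0, 0.0, 0.0])

def attainable(t):
    res = linprog(-(a - t * c), A_ub=G, b_ub=h, bounds=[(None, None)] * 2)
    return res.status == 0 and -res.fun + (b - t * d) >= 0.0

grid = np.geomspace(0.01, 10.0, 400)      # geometric (1 + eps)-style grid
best = max(t for t in grid if attainable(t))
print(best)                               # ~0.8, the true optimum at vertex (1, 0)
```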
Single-Molecule Reaction Chemistry in Patterned Nanowells
2016-01-01
A new approach to synthetic chemistry is performed in ultraminiaturized, nanofabricated reaction chambers. Using lithographically defined nanowells, we achieve single-point covalent chemistry on hundreds of individual carbon nanotube transistors, providing robust statistics and unprecedented spatial resolution in adduct position. Each device acts as a sensor to detect, in real-time and through quantized changes in conductance, single-point functionalization of the nanotube as well as consecutive chemical reactions, molecular interactions, and molecular conformational changes occurring on the resulting single-molecule probe. In particular, we use a set of sequential bioconjugation reactions to tether a single strand of DNA to the device and record its repeated, reversible folding into a G-quadruplex structure. The stable covalent tether allows us to measure the same molecule in different solutions, revealing the characteristic increased stability of the G-quadruplex structure in the presence of potassium ions (K+) versus sodium ions (Na+). Nanowell-confined reaction chemistry on carbon nanotube devices offers a versatile method to isolate and monitor individual molecules during successive chemical reactions over an extended period of time. PMID:27270004
Sébastian, C; Barraud, S; Ribun, S; Zoropogui, A; Blaha, D; Becouze-Lareure, C; Kouyi, G Lipeme; Cournoyer, B
2014-04-01
Accumulated sediments in a 32,000 m³ detention basin linked to a separate stormwater system were characterized in order to infer their health hazards. A sampling scheme of 15 points was defined according to the hydrological behaviour of the basin. Physical parameters (particle size and volatile organic matter content) were in the range of those previously reported for stormwater sediments. Chemical analyses on hydrocarbons, PAHs, PCBs and heavy metals showed high pollutant concentrations. Microbiological analyses of these points highlighted the presence of faecal indicator bacteria (Escherichia coli and intestinal enterococci) and actinomycetes of the genus Nocardia. These are indicative of the presence of human pathogens. E. coli and enterococcal numbers in the sediments were higher at the proximity of the low-flow gutter receiving waters from the catchment. These bacteria appeared to persist over time among urban sediments. Samples highly contaminated by hydrocarbons were also shown to be heavily contaminated by these bacteria. These results demonstrated for the first time the presence of Nocardial actinomycetes in such an urban context with concentrations as high as 11,400 cfu g⁻¹.
Compton, C W R; Young, L; McDougall, S
2015-09-01
Firstly, to define, in dairy cows in the first 5 weeks post-calving fed a predominantly pasture-based diet, cut-points of concentrations of beta-hydroxybutyrate (BHBA) in blood, above which there were associations with purulent vaginal discharge (PVD), reduced pregnancy rates (PR) and decreased milk production, in order to better define subclinical ketosis (SCK) in such cattle; and secondly, to determine the prevalence, incidence and risk factors for SCK. An observational field study was conducted in 565 cows from 15 spring-calving and predominantly pasture-fed dairy herds in two regions of New Zealand during the 2010-2011 dairy season. Within each herd, a cohort of randomly selected cows (approximately 40 per herd) was blood sampled to determine concentrations of BHBA on six occasions at weekly intervals starting within 5 days of calving. The key outcome variables were the presence/absence of PVD at 5 weeks post-calving, PR after 6 weeks (6-week PR) and after the completion of the breeding season (final PR), and mean daily milk solids production. Two cut-points for defining SCK were identified: firstly, concentration of BHBA in blood ≥1.2 mmol/L within 5 days post-calving, which was associated with an increased diagnosis of PVD (24 vs. 8%); and secondly, concentration of BHBA in blood ≥1.2 mmol/L at any stage within 5 weeks post-calving, which was associated with decreased 6-week PR (78 vs. 85%). The mean herd-level incidence of SCK within 5 weeks post-calving was 68 (min 12, max 100)% and large variations existed between herds in peak prevalence of SCK and the interval post-calving at which such peaks occurred. Cows >8 years of age and cows losing body condition were at increased risk of SCK within 5 weeks of calving. Cows with concentration of BHBA in blood ≥1.2 mmol/L in early lactation had a higher risk of PVD and lower 6-week PR. Cow- and herd-level prevalence of SCK varied widely in early lactation. Subclinical ketosis is common and is significantly associated with reproductive performance in mainly pasture-fed New Zealand dairy cattle. Controlling SCK may therefore result in improvements in herd reproductive performance. However, considerable variation exists among herds in the incidence of SCK and in the timing of peak prevalence, which means that herd-specific monitoring programmes are required to define herd SCK status accurately.
Efficient terrestrial laser scan segmentation exploiting data structure
NASA Astrophysics Data System (ADS)
Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa
2016-09-01
New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which proceeds at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and consequently avoids setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as another source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) quality of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient than Random Sample Consensus (RANSAC), which is a common approach for point cloud segmentation.
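A minimal sketch of the panorama-building step (function and layer names are illustrative, not from the paper): map each point's azimuth and elevation to pixel indices, then fill range and intensity layers that 2D segmentation algorithms can consume. The same (row, column) indices later map segment labels back onto the 3D points.

```python
# Bin an unordered (x, y, z) cloud into azimuth x elevation image layers.
import numpy as np

def make_panorama(xyz, intensity, n_az=3600, n_el=900):
    x, y, z = xyz.T
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)                              # [-pi, pi)
    el = np.arcsin(z / r)                              # [-pi/2, pi/2]
    col = ((az + np.pi) / (2 * np.pi) * (n_az - 1)).astype(int)
    row = ((el + np.pi / 2) / np.pi * (n_el - 1)).astype(int)
    pano_range = np.full((n_el, n_az), np.nan)         # one layer per attribute
    pano_intensity = np.full((n_el, n_az), np.nan)
    pano_range[row, col] = r                           # last point wins on collisions
    pano_intensity[row, col] = intensity
    return pano_range, pano_intensity, (row, col)      # indices map labels back to 3D
```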
Error reduction in three-dimensional metrology combining optical and touch probe data
NASA Astrophysics Data System (ADS)
Gerde, Janice R.; Christens-Barry, William A.
2010-08-01
Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data obtained using a touch probe device, a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The defined parting line location is sensitive to the polynomial used to fit the experimental data. Extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three error sources are considered: measurement error in the point cloud coordinates, error from fitting a polynomial surface to a point cloud and then extrapolating beyond the data set, and error from the reference frame transformation. These error sources can influence calculated surface areas. We describe experiments to assess error magnitude, the sensitivity of calculated results to these errors, and ways of minimizing error impact on calculated quantities. Ultimately, we must ensure that statistical error from these procedures is minimized and within acceptance criteria.
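The insole-surface step can be sketched as an ordinary least-squares polynomial fit (naming and degree are my own choices); the fitted function can then be evaluated, including extrapolated, at outer-mesh vertices, where sign changes of z - f(x, y) bracket the parting-line intersection.

```python
# Fit a low-order polynomial surface z = f(x, y) to CMM touch-probe points.
import numpy as np

def fit_poly_surface(pts, deg=3):
    """Least-squares polynomial surface through (x, y, z) points."""
    x, y, z = pts.T
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    def f(xq, yq):
        return sum(c * xq**i * yq**j for c, (i, j) in zip(coef, terms))
    return f   # evaluate on outer-mesh vertices; z - f(x, y) = 0 is the boundary
```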
Wind shear modeling for aircraft hazard definition
NASA Technical Reports Server (NTRS)
Frost, W.; Camp, D. W.; Wang, S. T.
1978-01-01
Mathematical models of wind profiles were developed for use in fast-time and manned flight simulation studies aimed at defining and eliminating wind shear hazards. A set of wind profiles and associated wind shear characteristics for stable and neutral boundary layers, thunderstorms, and frontal winds potentially encounterable by aircraft in the terminal area is given. Engineering models of wind shear for direct hazard analysis are presented as mathematical formulae, graphs, tables, and computer lookup routines. The wind profile data utilized to establish the models are described in terms of location, acquisition method, time of observation, and number of data points up to 500 m. Recommendations, engineering interpretations, and guidelines for use of the data are given, and the range of applicability of the wind shear models is described.
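As one concrete example of such engineering models, the standard surface-layer forms for neutral and stable boundary layers (a log law with a simple linear stability correction) can be coded directly. These are textbook expressions offered for illustration under my own parameter choices, not the report's specific curves or lookup tables.

```python
# Log-law wind profile with a simple stability correction (surface-layer
# form; accuracy degrades well above the Obukhov length L).
import numpy as np

KAPPA = 0.4          # von Karman constant

def wind_profile(z, u_star, z0, L=np.inf):
    """Mean wind speed at height z (m): neutral if L = inf, stable if L > 0."""
    return (u_star / KAPPA) * (np.log(z / z0) + 5.0 * z / L)

z = np.array([10.0, 50.0, 100.0, 300.0, 500.0])
print(wind_profile(z, u_star=0.4, z0=0.1))             # neutral profile
print(wind_profile(z, u_star=0.4, z0=0.1, L=200.0))    # stable boundary layer
# Shear between successive heights, a simple hazard measure:
print(np.diff(wind_profile(z, 0.4, 0.1)) / np.diff(z))
```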
NASA Technical Reports Server (NTRS)
Morelli, E. A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for lateral linear model parameter estimation at 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Strake (S) model and Strake/Thrust Vectoring (STV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specification of the time/amplitude points defining each input are included, along with plots of the input time histories.
NASA Astrophysics Data System (ADS)
Joshi, Pankaj S.; Narayan, Ramesh
2016-10-01
We propose here that the well-known black hole paradoxes such as the information loss and the teleological nature of the event horizon are restricted to a particular idealized case, which is the homogeneous dust collapse model. In this case, the event horizon, which defines the boundary of the black hole, forms first, and the singularity in the interior of the black hole forms at a later time. We show that, in contrast, gravitational collapse from physically more realistic initial conditions typically leads to a scenario in which the event horizon and the space-time singularity form simultaneously. We point out that this apparently simple modification can mitigate the causality and teleological paradoxes, and also lends support to two recently suggested solutions to the information paradox, namely, the ‘firewall’ and ‘classical chaos’ proposals.
Topological properties of a curved spacetime
NASA Astrophysics Data System (ADS)
Agrawal, Gunjan; Shrivastava, Sampada; Godani, Nisha; Sinha, Soami Pyari
2017-12-01
The present paper aims at the study of a topology on Lorentzian manifolds, defined by Göbel [4] using the ideas of Zeeman [16]. Observing that on the Minkowski space it is the same as Zeeman's time topology, it has been found that a Lorentzian manifold with this topology is path connected, non-first-countable and non-simply connected, while the Minkowski space with the time topology is, in addition, nonregular and separable. Furthermore, using the notion of Zeno sequences, it is obtained that a compact set does not contain a nonempty open set, and that a set is compact if and only if each of its infinite subsets has a limit point, if and only if each of its sequences has a convergent subsequence.
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1995-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
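A square-wave OBES input of this kind is fully described by its time/amplitude points; a small sketch (sample values invented, not taken from the report) reconstructs the commanded signal as a piecewise-constant function of time.

```python
# Piecewise-constant input: amplitudes[k] holds on [times[k], times[k+1]).
import numpy as np

def square_wave_input(t, times, amplitudes):
    idx = np.searchsorted(times, t, side="right") - 1
    idx = np.clip(idx, 0, len(amplitudes) - 1)
    return np.asarray(amplitudes)[idx]

# e.g. a doublet-style command on one control effector, in degrees:
times      = [0.0, 2.0, 3.0, 4.0]       # switch times (s)
amplitudes = [5.0, -5.0, 5.0, 0.0]      # commanded deflection after each switch
t = np.linspace(0.0, 5.0, 11)
print(square_wave_input(t, times, amplitudes))
```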
A first theoretical realization of honeycomb topological magnon insulator.
Owerre, S A
2016-09-28
It has been recently shown that in the Heisenberg (anti)ferromagnet on the honeycomb lattice, the magnons (spin-wave quasiparticles) realize a massless two-dimensional (2D) Dirac-like Hamiltonian. It was shown that the Dirac magnon Hamiltonian preserves time-reversal symmetry defined with the sublattice pseudo spins and the Dirac points are robust against magnon-magnon interactions. The Dirac points also occur at nonzero energy. In this paper, we propose a simple realization of nontrivial topology (magnon edge states) in this system. We show that the Dirac points are gapped when the inversion symmetry of the lattice is broken by introducing a next-nearest-neighbour Dzyaloshinskii-Moriya (DM) interaction. Thus, the system realizes magnon edge states similar to the Haldane model for the quantum anomalous Hall effect in electronic systems. However, in contrast to electronic spin current, where dissipation can be very large due to Ohmic heating, noninteracting topological magnons can propagate for a long time without dissipation as magnons are uncharged particles. We observe the same magnon edge states for the XY model on the honeycomb lattice. Remarkably, in this case the model maps to interacting hardcore bosons on the honeycomb lattice. Quantum magnetic systems with nontrivial magnon edge states are called topological magnon insulators. They have been studied theoretically on the kagome lattice and recently observed experimentally on the kagome magnet Cu(1,3-bdc) with three magnon bulk bands. Our results for the honeycomb lattice suggest an experimental procedure to search for honeycomb topological magnon insulators within a class of 2D quantum magnets and ultracold atoms trapped in honeycomb optical lattices. In 3D lattices, Dirac and Weyl points were recently studied theoretically; however, the criteria that give rise to them were not well understood. We argue that the low-energy Hamiltonian near the Weyl points should break the time-reversal symmetry of the pseudo spins, thus recovering the same criterion as in electronic systems.
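The mechanism can be checked with a generic Haldane-type 2x2 Bloch Hamiltonian (my parameterization, for illustration): the nearest-neighbour term gives Dirac points at K, and a next-nearest-neighbour DM term D enters exactly like the Haldane mass, opening a gap of 6*sqrt(3)*D there.

```python
# Honeycomb magnon bands: NN hopping f(k) plus a Haldane-like DM mass m(k).
import numpy as np

t, D = 1.0, 0.1                                    # NN exchange, DM strength
d1 = [np.array([0.0, 1.0]), np.array([np.sqrt(3)/2, -0.5]),
      np.array([-np.sqrt(3)/2, -0.5])]             # nearest-neighbour vectors
d2 = [np.array([np.sqrt(3), 0.0]), np.array([-np.sqrt(3)/2, 1.5]),
      np.array([-np.sqrt(3)/2, -1.5])]             # next-nearest-neighbour vectors

def bands(k):
    f = t * sum(np.exp(1j * (k @ d)) for d in d1)  # graphene-like off-diagonal
    m = 2 * D * sum(np.sin(k @ d) for d in d2)     # DM term = Haldane mass
    e = np.sqrt(abs(f)**2 + m**2)
    return -e, e                                   # bands about the Dirac energy

K = np.array([4 * np.pi / (3 * np.sqrt(3)), 0.0])  # Dirac point of the lattice
print(bands(K))   # f(K) = 0, so the gap is 6*sqrt(3)*D ~ 1.039 once D != 0
```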
Croome, K P; Lee, D D; Nguyen, J H; Keaveny, A P; Taner, C B
2017-09-01
Understanding outcomes for patients relisted for ischemic cholangiopathy following a donation after cardiac death (DCD) liver transplant (LT) will help standardize a Model for End-Stage Liver Disease exception scheme for retransplantation. Early relisting (E-RL) for DCD graft failure caused by primary nonfunction (PNF) or hepatic artery thrombosis (HAT) was defined as relisting ≤14 days after DCD LT, and late relisting (L-RL) due to biliary complications was defined as relisting 14 days to 3 years after DCD LT. Of 3908 DCD LTs performed nationally between 2002 and 2016, 540 (13.8%) patients were relisted within 3 years of transplant (168 [4.3%] in the E-RL group, 372 [9.5%] in the L-RL group). The E-RL and L-RL groups had waitlist mortality rates of 15.4% and 10.5%, respectively, at 3 mo and 16.1% and 14.3%, respectively, at 1 year. Waitlist mortality in the L-RL group was higher than the mortality and delisting rates for patients with exception points for both hepatocellular carcinoma (HCC) and hepatopulmonary syndrome (HPS) at 3- to 12-mo time points (p < 0.001). Waitlist outcomes differed in patients with early DCD graft failure caused by PNF or HAT compared with those with late DCD graft failure attributed to biliary complications. In L-RL, higher rates of waitlist mortality were noted compared with patients listed with exception points for HCC or HPS. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
Flexibility of the elderly after one-year practice of yoga and calisthenics.
Farinatti, Paulo T V; Rubini, Ercole C; Silva, Elirez B; Vanfraechem, Jacques H
Flexibility training responses to distinct stretching techniques are not well defined, especially in the elderly. This study compared the flexibility of elderly individuals before and after having practiced hatha yoga or calisthenics for 1 year (52 weeks), at least 3 times/week. Sixty-six subjects (12 men) were measured and assigned to 3 groups: control (n = 24, age = 67.7±6.9 years), hatha yoga (n = 22, age = 61.2±4.8 years), and calisthenics (n = 20, age = 69.0±5.8 years). The maximal range of passive motion of 13 movements in 7 joints was assessed by the Flexitest, comparing the range obtained with standard charts representing each arc of movement on a discontinuous and non-dimensional scale from 0 to 4. Results of individual movements were summed to define 4 indexes (ankle+knee, hip+trunk, wrist+elbow, and shoulder) and total flexibility (Flexindex). Results showed significant increases of total flexibility in the hatha yoga group (by 22.5 points) and the calisthenics group (by 5.8 points) (p < 0.01 for each) and a decrease in the control group (by 2.1 points) (p < 0.01) after one year of intervention. Between-group comparison showed that increases in the hatha yoga group were greater than in the calisthenics group for most flexibility indexes, particularly overall flexibility (p < 0.05). In conclusion, the practice of hatha yoga (i.e., slow/passive movements) was more effective in improving flexibility than calisthenics (i.e., fast/dynamic movements), but calisthenics was able to prevent the flexibility losses observed in sedentary elderly subjects.
Streaming Multiframe Deconvolutions on GPUs
NASA Astrophysics Data System (ADS)
Lee, M. A.; Budavári, T.
2015-09-01
Atmospheric turbulence distorts all ground-based observations, which is especially detrimental to faint detections. The point spread function (PSF) defining this blur is unknown for each exposure and varies significantly over time, making image analysis difficult. Lucky imaging and traditional co-adding throw away a lot of information. We developed blind deconvolution algorithms that can simultaneously obtain robust solutions for the background image and all the PSFs. This is done in a streaming setting, which makes it practical for large numbers of big images. We implemented a new tool that runs on GPUs and achieves exceptional running times that can scale to the new time-domain surveys. Our code can quickly and effectively recover high-resolution images exceeding the quality of traditional co-adds. We demonstrate the power of the method on the repeated exposures in the Sloan Digital Sky Survey's Stripe 82.
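The flavour of the approach can be conveyed by a toy streaming multiframe blind-deconvolution loop (my own minimal Richardson-Lucy-style sketch, not the authors' GPU code): each new exposure refines its own PSF and then updates the shared latent image, so frames are visited once and never co-added.

```python
# Blind RL-style updates with full-frame PSF arrays (same shape as the image).
import numpy as np
from scipy.signal import fftconvolve

def flip(a):
    return a[::-1, ::-1]

def process_frame(x, y, h, inner=3):
    """One streaming update: refine this frame's PSF h, then the image x."""
    for _ in range(inner):
        est = fftconvolve(x, h, mode="same") + 1e-12   # current model of frame y
        h = np.clip(h * fftconvolve(y / est, flip(x), mode="same"), 0.0, None)
        h /= h.sum()                                   # keep PSF normalized
        est = fftconvolve(x, h, mode="same") + 1e-12
        x = np.clip(x * fftconvolve(y / est, flip(h), mode="same"), 0.0, None)
    return x, h

# Streaming use: x starts as a first exposure; for each new frame y_t, seed
# h_t with a seeing-width Gaussian and call x, h_t = process_frame(x, y_t, h_t).
```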
Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets
NASA Astrophysics Data System (ADS)
Gold, P. O.; Cowgill, E.; Kreylos, O.
2009-12-01
Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaiced LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is essential for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point cloud, we scan from multiple locations an object of known geometry (a cylinder mounted above a square box). Preliminary results show that even in a controlled experimental scan of an object of known dimensions, there is significant variability in the precision of the registered point cloud. For example, when 3 scans of the central object are registered using 4 known points (maximum time, maximum equipment), the point clouds align to within ~1 cm (normal to the object surface). However, when the same point clouds are registered with only 1 known point (minimum time, minimum equipment), misalignment of the point clouds can range from 2.5 to 5 cm, depending on target type. The greater misalignment of the 3 point clouds when registered with fewer known points stems from the field method employed in acquiring the dataset and demonstrates the impact of field workflow on LiDAR dataset precision. By quantifying the degree of scan mismatch in results such as this, we can provide users with the information needed to maximize efficiency in remote field surveys.
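The registration-error metric itself is simple to state: rigidly align the control points seen from two scanner setups and report the per-point residuals. A sketch follows (my own helper, standing in for the Realworks computation) using the standard Kabsch/Procrustes solution.

```python
# Rigid registration of corresponding control points and its residuals.
import numpy as np

def register_rigid(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard reflections
    R = (U @ S @ Vt).T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

def registration_errors(P, Q):
    """Per-control-point misfit after the best rigid alignment (the precision
    metric discussed above); P, Q are N x 3 arrays of the same points."""
    R, t = register_rigid(P, Q)
    return np.linalg.norm(P @ R.T + t - Q, axis=1)
```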
Arrows of time in the bouncing universes of the no-boundary quantum state
NASA Astrophysics Data System (ADS)
Hartle, James; Hertog, Thomas
2012-05-01
We derive the arrows of time of our universe that follow from the no-boundary theory of its quantum state (NBWF) in a minisuperspace model. Arrows of time are viewed four-dimensionally as properties of the four-dimensional Lorentzian histories of the universe. Probabilities for these histories are predicted by the NBWF. For histories with a regular “bounce” at a minimum radius fluctuations are small at the bounce and grow in the direction of expansion on either side. For recollapsing classical histories with big bang and big crunch singularities the fluctuations are small near one singularity and grow through the expansion and recontraction to the other singularity. The arrow of time defined by the growth in fluctuations thus points in one direction over the whole of a recollapsing spacetime but is bidirectional in a bouncing spacetime. We argue that the electromagnetic, thermodynamic, and psychological arrows of time are aligned with the fluctuation arrow. The implications of a bidirectional arrow of time for causality are discussed.
Tseng, Wen-Hung; Huang, Yi-Jiun; Gotoh, Tadahiro; Hobiger, Thomas; Fujieda, Miho; Aida, Masanori; Li, Tingyu; Lin, Shinn-Yan; Lin, Huang-Tien; Feng, Kai-Ming
2012-03-01
Two-way satellite time and frequency transfer (TWSTFT) is one of the main techniques used to compare atomic time scales over long distances. To both improve the precision of TWSTFT and decrease the satellite link fee, a new software-defined modem with dual pseudo-random noise (DPN) codes has been developed. In this paper, we demonstrate the first international DPN-based TWSTFT experiment over a period of 6 months. The DPN results exhibit excellent performance, competitive with the Global Positioning System (GPS) precise point positioning (PPP) technique in the short term and consistent with conventional TWSTFT in the long term. Time deviations of less than 75 ps are achieved for averaging times from 1 s to 1 d. Moreover, the DPN data show less diurnal variation than those of conventional TWSTFT. Because the DPN-based system has the advantages of higher precision and lower bandwidth cost, it is one of the most promising methods to improve international time-transfer links.
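Statements such as "time deviations below 75 ps from 1 s to 1 d" refer to TDEV, which can be estimated from phase data with the standard modified-Allan-variance estimator; a sketch follows (array names and the white-noise test signal are mine).

```python
# TDEV at averaging time tau = n*tau0 from phase samples x (seconds).
import numpy as np

def tdev(x, tau0, n):
    x = np.asarray(x, dtype=float)
    N = x.size
    if N < 3 * n:
        raise ValueError("need at least 3n phase samples")
    d = x[2 * n:] - 2.0 * x[n:-n] + x[:-2 * n]     # second differences of phase
    csum = np.concatenate(([0.0], np.cumsum(d)))
    s = csum[n:] - csum[:-n]                       # sums of n consecutive d's
    mod_avar = np.mean(s**2) / (2.0 * n**4 * tau0**2)
    return (n * tau0 / np.sqrt(3.0)) * np.sqrt(mod_avar)

# e.g. white phase noise of 50 ps RMS sampled once per second:
rng = np.random.default_rng(1)
x = 50e-12 * rng.standard_normal(86_400)
print(tdev(x, tau0=1.0, n=1))     # ~5e-11 s (50 ps) at tau = 1 s
print(tdev(x, tau0=1.0, n=100))   # smaller at larger tau for white phase noise
```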
NASA Astrophysics Data System (ADS)
Tenkès, Lucille-Marie; Hollerbach, Rainer; Kim, Eun-jin
2017-12-01
A probabilistic description is essential for understanding growth processes in non-stationary states. In this paper, we compute time-dependent probability density functions (PDFs) in order to investigate stochastic logistic and Gompertz models, which are two of the most popular growth models. We consider different types of short-correlated multiplicative and additive noise sources and compare the time-dependent PDFs in the two models, elucidating the effects of the additive and multiplicative noises on the form of the PDFs. We demonstrate an interesting transition from a unimodal to a bimodal PDF as the multiplicative noise increases for a fixed value of the additive noise. A much weaker (leaky) attractor in the Gompertz model leads to a significant (singular) growth of populations of very small size. We point out the limitations of using stationary PDFs, mean value and variance in understanding statistical properties of growth in non-stationary states, highlighting the importance of time-dependent PDFs. We further compare these two models from the perspective of the information change that occurs during the growth process. Specifically, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart and sum these distances in time. The total distance along the trajectory quantifies the total number of different states that the system undergoes in time, and is called the information length. We show that the time evolution of the two models becomes more similar when measured in units of the information length, and point out the merit of using the information length in unifying and understanding the dynamic evolution of different growth processes.
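A numerical sketch of these diagnostics on the stochastic logistic model (coefficients, grid and ensemble sizes are mine; histogram noise biases the estimate upward, so this is qualitative only): evolve an ensemble, histogram it into time-dependent PDFs p(x, t), and accumulate the information length L = integral dt sqrt( integral dx (d_t p)^2 / p ).

```python
# dx = x(1 - x) dt + sigma_m * x dW1 + sigma_a dW2, Euler-Maruyama ensemble.
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 50_000, 1e-2, 1000           # ensemble size, time step, steps
sigma_m, sigma_a = 0.3, 0.05                # multiplicative / additive noise
x = np.full(n, 0.05)                        # small initial population
bins = np.linspace(-0.5, 2.0, 251)
dxb = bins[1] - bins[0]

def pdf(x):
    p, _ = np.histogram(x, bins=bins, density=True)
    return p

L, p_old = 0.0, pdf(x)
for _ in range(steps):
    dW1, dW2 = np.sqrt(dt) * rng.standard_normal((2, n))
    x += x * (1.0 - x) * dt + sigma_m * x * dW1 + sigma_a * dW2
    p_new = pdf(x)
    pm = 0.5 * (p_new + p_old)
    m = pm > 0
    dpdt = (p_new[m] - p_old[m]) / dt
    L += dt * np.sqrt(np.sum(dpdt**2 / pm[m]) * dxb)   # dL = sqrt(E) dt
    p_old = p_new
print("information length:", L)             # qualitative estimate only
```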
In-Flight Guidance, Navigation, and Control Performance Results for the GOES-16 Spacecraft
NASA Technical Reports Server (NTRS)
Chapel, Jim; Stancliffe, Devin; Bevacqua, Tim; Winkler, Stephen; Clapp, Brian; Rood, Tim; Freesland, Doug; Reth, Alan; Early, Derrick; Walsh, Tim;
2017-01-01
The Geostationary Operational Environmental Satellite-R Series (GOES-R), which launched in November 2016, is the first of the next generation geostationary weather satellites. GOES-R provides 4 times the resolution, 5 times the observation rate, and 3 times the number of spectral bands for Earth observations compared with its predecessor spacecraft. Additionally, Earth-relative and Sun-relative pointing and pointing stability requirements are maintained throughout reaction wheel desaturation events and station keeping activities, allowing GOES-R to provide continuous Earth and Sun observations. This paper reviews the pointing control, pointing stability, attitude knowledge, and orbit knowledge requirements necessary to realize the ambitious Image Navigation and Registration (INR) objectives of GOES-R. This paper presents a comparison between low-frequency on-orbit pointing results and simulation predictions for both the Earth Pointed Platform (EPP) and Sun Pointed Platform (SPP). Results indicate excellent agreement between simulation predictions and observed on-orbit performance, and compliance with pointing performance requirements. The EPP instrument suite includes 6 seismic accelerometers sampled at 2 kHz, allowing in-flight verification of jitter responses and comparison back to simulation predictions. This paper presents flight results of acceleration, shock response spectrum (SRS), and instrument line of sight responses for various operational scenarios and instrument observation modes. The results demonstrate the effectiveness of the dual-isolation approach employed on GOES-R. The spacecraft provides attitude and rate data to the primary Earth-observing instrument at 100 Hz, which are used to adjust instrument scanning. The data must meet accuracy and latency numbers defined by the Integrated Rate Error (IRE) requirements. This paper discusses the on-orbit IRE results, showing compliance to these requirements with margin. During the spacecraft checkout period, IRE disturbances were observed and subsequently attributed to thermal control of the Inertial Measurement Unit (IMU) mounting interface. Adjustments of IMU thermal control and the resulting improvements in IRE are presented. Orbit knowledge represents the final element of INR performance. Extremely accurate orbital position is achieved by GPS navigation at Geosynchronous Earth Orbit (GEO). On-orbit performance results are shown demonstrating compliance with the stringent orbit position accuracy requirements of GOES-R, including during station keeping activities and momentum desaturation events. As we show in this paper, the on-orbit performance of the GNC design provides the necessary capabilities to achieve GOES-R mission objectives.
Carlson, Eric R; Schaefferkoetter, Josh; Townsend, David; McCoy, J Michael; Campbell, Paul D; Long, Misty
2013-01-01
To determine whether the time course of 18-fluorine fluorodeoxyglucose (18F-FDG) activity in multiple consecutively obtained 18F-FDG positron emission tomography (PET)/computed tomography (CT) scans predictably identifies metastatic cervical adenopathy in patients with oral/head and neck cancer. It is hypothesized that the activity will increase significantly over time only in those lymph nodes harboring metastatic cancer. A prospective cohort study was performed whereby patients with oral/head and neck cancer underwent consecutive imaging at 9 time points with PET/CT from 60 to 115 minutes after injection with 18F-FDG. The primary predictor variable was the status of the lymph nodes based on dynamic PET/CT imaging. Metastatic lymph nodes were defined as those that showed an increase greater than or equal to 10% over the baseline standard uptake values. The primary outcome variable was the pathologic status of the lymph node. A total of 2,237 lymph nodes were evaluated histopathologically in the 83 neck dissections that were performed in 74 patients. A total of 119 lymph nodes were noted to have hypermetabolic activity on the 90-minute (static) portion of the study and could be assessed across time points. When we compared the PET/CT time point (dynamic) data with the histopathologic analysis of the lymph nodes, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 60.3%, 70.5%, 66.0%, 65.2%, and 65.5%, respectively. The use of dynamic PET/CT imaging does not permit the ablative surgeon to depend only on the results of the PET/CT study to determine which patients will benefit from neck dissection. As such, we maintain that surgeons should continue to rely on clinical judgment and maintain a low threshold for executing neck dissection in patients with oral/head and neck cancer, including those patients with N0 neck designations. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bielik, M.; Vozar, J.; Hegedus, E.; Celebration Working Group
2003-04-01
This contribution reports preliminary results from the first-arrival P-wave seismic tomographic processing of data measured along the profiles CEL01, CEL04, CEL05, CEL06, CEL09 and CEL11. These profiles were measured in the framework of the seismic project CELEBRATION 2000. Data acquisition and geometric parameters of the processed profiles, the principle of the tomographic processing, the particular processing steps and the program parameters are described. Characteristic data of the observation profiles (shot points, geophone points, total profile lengths, sampling, sensors and record lengths) are given. The fast program package developed by C. Zelt was applied for the tomographic velocity inversion. This process consists of several steps. The first step is the creation of a starting velocity field, for which the calculated arrival times are modelled by the method of finite differences. The next step is the minimization of the differences between the measured and modelled arrival times until the misfit is small. The equivalence problem was reduced by including a priori information in the starting velocity field. This a priori information consists of the depth to the pre-Tertiary basement, estimates of the velocity of the overlying sediments from well logging and other seismic velocity data, etc. After checking the reciprocal times, the picks were corrected. The result of this stage of processing is a reliable set of travel time curves consistent with the reciprocal times. Picking of the travel time curves and enhancement of the signal-to-noise ratio of the seismograms were carried out using the PROMAX program system. The tomographic inversion was carried out by a so-called 3D/2D procedure taking 3D wave propagation into account: a corridor along the profile, containing the outlying shot points and geophone points, was defined, and 3D processing was carried out within this corridor. The preliminary results indicate seismically anomalous zones within the crust and the uppermost part of the upper mantle in an area comprising the Western Carpathians, the North European platform, the Pannonian basin and the Bohemian Massif.
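The forward-modelling step, computing first-arrival times on a gridded velocity field by finite differences, can be sketched with a basic fast-sweeping eikonal solver (grid, velocities and source below are invented for illustration, not the Zelt package's implementation).

```python
# First-arrival travel times via fast sweeping on |grad T| = 1/vel.
import numpy as np

def first_arrivals(vel, h, src, n_pass=3):
    """Travel times T (s) on a grid of speeds vel (m/s) with spacing h (m)."""
    ny, nx = vel.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    inf = np.inf
    sweeps = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_pass):                      # Gauss-Seidel sweeps, 4 orders
        for rows, cols in sweeps:
            for i in rows:
                for j in cols:
                    a = min(T[i - 1, j] if i > 0 else inf,
                            T[i + 1, j] if i < ny - 1 else inf)
                    b = min(T[i, j - 1] if j > 0 else inf,
                            T[i, j + 1] if j < nx - 1 else inf)
                    if not np.isfinite(min(a, b)):
                        continue
                    f = h / vel[i, j]            # slowness times spacing
                    if abs(a - b) >= f:          # causal update from one side
                        t = min(a, b) + f
                    else:                        # two-sided upwind quadratic
                        t = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    if t < T[i, j]:
                        T[i, j] = t
    return T

vel = np.full((60, 120), 3000.0)   # 3 km/s sediments...
vel[35:, :] = 5500.0               # ...over a faster pre-Tertiary "basement"
T = first_arrivals(vel, h=100.0, src=(0, 0))
```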
Saussele, Susanne; Hehlmann, Rüdiger; Fabarius, Alice; Jeromin, Sabine; Proetel, Ulrike; Rinaldetti, Sebastien; Kohlbrenner, Katharina; Einsele, Hermann; Falge, Christiane; Kanz, Lothar; Neubauer, Andreas; Kneba, Michael; Stegelmann, Frank; Pfreundschuh, Michael; Waller, Cornelius F; Oppliger Leibundgut, Elisabeth; Heim, Dominik; Krause, Stefan W; Hofmann, Wolf-Karsten; Hasford, Joerg; Pfirrmann, Markus; Müller, Martin C; Hochhaus, Andreas; Lauseker, Michael
2018-05-01
Major molecular remission (MMR) is an important therapy goal in chronic myeloid leukemia (CML). So far, MMR is not a failure criterion according to the ELN management recommendations, leading to uncertainty about when to change therapy in CML patients not reaching MMR after 12 months. At monthly landmarks, hazard ratios (HR) were estimated for different molecular remission statuses for patients registered to CML study IV, who were divided into a learning and a validation sample. The minimum HR for MMR was found at 2.5 years, with 0.28 (compared to patients without remission). In the validation sample, a significant advantage in progression-free survival (PFS) for patients in MMR could be detected (p-value 0.007). The optimal time to predict PFS in patients with MMR could be validated in an independent sample at 2.5 years. With our model we suggest when lack of MMR should be defined as therapy failure and thus when a treatment change should be considered. The optimal response time for 1% BCR-ABL at about 12-15 months was confirmed, and for deep molecular remission no specific time point was detected. Nevertheless, it was demonstrated that the earlier MMR is achieved, the higher is the chance of attaining a deep molecular response later.
Bainbridge, Melissa L.; Cersosimo, Laura M.; Wright, André-Denis G.; Kraft, Jana
2016-01-01
Dairy products contain bioactive fatty acids (FA) and are a unique dietary source of an emerging class of bioactive FA, branched-chain fatty acids (BCFA). The objective of this study was to compare the content and profile of bioactive FA in milk, with emphasis on BCFA, among Holstein (HO), Jersey (JE), and first generation HO x JE crossbreeds (CB) across a lactation to better understand the impact of these factors on FA of interest to human health. Twenty-two primiparous cows (n = 7 HO, n = 7 CB, n = 8 JE) were followed across a lactation. All cows were fed a consistent total mixed ration (TMR) at a 70:30 forage to concentrate ratio. Time points were defined as 5 days in milk (DIM), 95 DIM, 185 DIM, and 275 DIM. HO and CB had a higher content of n-3 FA at 5 DIM than JE and a lower n-6:n-3 ratio. Time point had an effect on the n-6:n-3 ratio, with the lowest value observed at 5 DIM and the highest at 185 DIM. The content of vaccenic acid was highest at 5 DIM, yet rumenic acid was unaffected by time point or breed. Total odd and BCFA (OBCFA) were higher in JE than HO and CB at 185 and 275 DIM. Breed affected the content of individual BCFA. The content of iso-14:0 and iso-16:0 in milk was higher in JE than HO and CB from 95 to 275 DIM. Total OBCFA were affected by time point, with the highest content in milk at 275 DIM. In conclusion, HO and CB exhibited a higher content of several bioactive FA in milk than JE. Across a lactation the greatest content of bioactive FA in milk occurred at 5 DIM and OBCFA were highest at 275 DIM. PMID:26930646
Quantitative analysis of eyes and other optical systems in linear optics.
Harris, William F; Evans, Tanya; van Gool, Radboud D
2017-05-01
To show that the 14-dimensional spaces of augmented point P and angle Q characteristics, matrices obtained from the ray transference, are suitable for quantitative analysis, although only the latter define an inner-product space on which distances and angles can be defined. The paper examines the nature of the spaces and their relationships to other spaces, including symmetric dioptric power space. The paper makes use of linear optics, a three-dimensional generalization of Gaussian optics. Symmetric 2 × 2 dioptric power matrices F define a three-dimensional inner-product space which provides a sound basis for quantitative analysis (calculation of changes, arithmetic means, etc.) of refractive errors and thin systems. For general systems, the optical character is defined by the dimensionally heterogeneous 4 × 4 symplectic matrix S, the transference, or, if explicit allowance is made for heterocentricity, the 5 × 5 augmented symplectic matrix T. Ordinary quantitative analysis cannot be performed on them because matrices of neither of these types constitute vector spaces. Suitable transformations have been proposed, but because the transforms are dimensionally heterogeneous, the spaces are not naturally inner-product spaces. The paper obtains 14-dimensional spaces of augmented point P and angle Q characteristics. The 14-dimensional space defined by the augmented angle characteristics Q is dimensionally homogeneous and an inner-product space. A 10-dimensional subspace of the space of augmented point characteristics P is also an inner-product space. The spaces are suitable for quantitative analysis of the optical character of eyes and many other systems. Distances and angles can be defined in the inner-product spaces. The optical systems may have multiple separated astigmatic and decentred refracting elements. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
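The algebra underlying the transference can be illustrated numerically (my construction, with invented powers and gaps): elementary transferences for refraction by a symmetric power matrix F and for transfer across a gap multiply into a system transference S, which satisfies the symplectic condition S^T J S = J.

```python
# Build a 4x4 transference from elementary refractions and transfers and
# verify it is symplectic (the reason these matrices form no vector space).
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
J = np.block([[Z2, I2], [-I2, Z2]])

def refraction(F):             # thin astigmatic element, power matrix F (D)
    return np.block([[I2, Z2], [-F, I2]])

def transfer(z):               # homogeneous gap of reduced width z (m)
    return np.block([[I2, z * I2], [Z2, I2]])

F = np.array([[5.0, 1.0], [1.0, 4.0]])    # symmetric dioptric power matrix
S = transfer(0.02) @ refraction(F) @ transfer(0.015)
print(np.allclose(S.T @ J @ S, J))         # True: S is symplectic
```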
NASA Astrophysics Data System (ADS)
Wong, Wing-Chun Godwin
This dissertation focused on Kant's conception of physical matter in the Opus postumum. In this work, Kant postulates the existence of an ether which fills the whole of space and time with its moving forces. Kant's arguments for the existence of an ether in the so-called Übergang have been acutely criticized by commentators. Guyer, for instance, thinks that Kant pushes the technique of transcendental deduction too far in trying to deduce the empirical ether. In defense of Kant, I held that it is not the actual existence of the empirical ether, but the concept of the ether as a space-time filler that is subject to a transcendental deduction. I suggested that Kant is doing three things in the Übergang: First, he deduces the pure concept of a space-time filler as a conceptual hybrid of the transcendental object and permanent substance to replace the category of substance in the Critique. Then he tries to prove the existence of such a space-time filler as a reworking of the First Analogy. Finally, he takes into consideration the empirical determinations of the ether by adding the concept of moving forces to the space-time filler. In reconstructing Kant's proofs, I pointed out that Kant is absolutely committed to the impossibility of action-at-a-distance. If we add this new principle of no-action-at-a-distance to the Third Analogy, the existence of a space-time filler follows. I argued with textual evidence that Kant's conception of ether satisfies the basic structure of a field: (1) the ether is a material continuum; (2) a physical quantity is definable on each point in the continuum; and (3) the ether provides a medium to support the continuous transmission of action. The thrust of Kant's conception of ether is to provide a holistic ontology for the transition to physics, which can best be understood from a field-theoretical point of view. This is the main thesis I attempted to establish in this dissertation.
Slow updating of the achromatic point after a change in illumination
Lee, R. J.; Dawson, K. A.; Smithson, H. E.
2015-01-01
For a colour constant observer, the colour appearance of a surface is independent of the spectral composition of the light illuminating it. We ask how rapidly colour appearance judgements are updated following a change in illumination. We obtained repeated binary colour classifications for a set of stimuli defined by their reflectance functions and rendered under either sunlight or skylight. We used these classifications to derive boundaries in colour space that identify the observer’s achromatic point. In steady-state conditions of illumination, the achromatic point lay close to the illuminant chromaticity. In our experiment the illuminant changed abruptly every 21 seconds (at the onset of every 10th trial), allowing us to track changes in the achromatic point that were caused by the cycle of illuminant changes. In one condition, the test reflectance was embedded in a spatial pattern of reflectance samples under consistent illumination. The achromatic point migrated across colour space between the chromaticities of the steady-state achromatic points. This update took several trials rather than being immediate. To identify the factors that governed perceptual updating of appearance judgements we used two further conditions, one in which the test reflectance was presented in isolation and one in which the surrounding reflectances were rendered under an inconsistent and unchanging illumination. Achromatic settings were not well predicted by the information available from scenes at a single time-point. Instead the achromatic points showed a strong dependence on the history of chromatic samples. The strength of this dependence differed between observers and was modulated by the spatial context. PMID:22275468
Models for the indices of thermal comfort
Adrian, Streinu-Cercel; Sergiu, Costoiu; Maria, Mârza; Anca, Streinu-Cercel; Monica, Mârza
2008-01-01
The current paper proposes the analysis and extended formulation required for supporting decisions in the management of the national medical system from the point of view of quality and efficiency, including: conceiving models for the indices of thermal comfort, defining the predicted mean vote (on the thermal sensation scale) "PMV", defining the metabolism "M", modelling heat transfer between the human body and the environment, defining the predicted percentage of dissatisfied people "PPD", and defining all indices of thermal comfort. PMID:20108461
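Of these indices, the PPD-from-PMV relation of Fanger's model (ISO 7730) has a simple closed form, shown below; the full PMV computation from metabolism M, clothing, air speed and humidity involves an iterative solve and is omitted here.

```python
# PPD = 100 - 95*exp(-0.03353*PMV^4 - 0.2179*PMV^2), per Fanger / ISO 7730.
import numpy as np

def ppd(pmv):
    """Predicted percentage of dissatisfied occupants from the PMV index."""
    pmv = np.asarray(pmv, dtype=float)
    return 100.0 - 95.0 * np.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

print(ppd([-2.0, -1.0, 0.0, 1.0, 2.0]))   # 5% minimum at PMV = 0, ~77% at |PMV| = 2
```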
Urban Growth Detection Using Filtered Landsat Dense Time Trajectory in an Arid City
NASA Astrophysics Data System (ADS)
Ye, Z.; Schneider, A.
2014-12-01
Among remote sensing techniques for environmental monitoring, time series analysis of biophysical indices is drawing increasing attention. Although many studies have examined forest disturbance and land cover change detection, few have focused on urban growth mapping at medium spatial resolution. As the Landsat archive has become openly accessible, methods using Landsat time-series imagery to detect urban growth have become possible. It is found that the time trajectory of a newly developed urban area shows a dramatic drop in vegetation index. This enables the use of time trajectory analysis to distinguish impervious surfaces from cropland, which has a different temporal biophysical pattern. The time of change can also be estimated, yet many challenges remain. Landsat data have lower temporal resolution, which may be degraded further by cloud-contaminated pixels and the SLC-off effect. It is difficult to tease apart intra-annual, inter-annual, and land cover differences in a time series. Here, several methods of time trajectory analysis are utilized and compared to find a computationally efficient and accurate way to detect urban growth. A case study city, Ankara, Turkey, was chosen for its arid climate and varied landscape. For this preliminary research, Landsat TM and ETM+ scenes from 1998 to 2002 were chosen, and NDVI, EVI, and SAVI were selected as the biophysical indices. The procedure starts with seasonality filtering: only areas with seasonality need to be filtered, so as to decompose the seasonal signal and extract the overall trend. A harmonic transform, a wavelet transform, and a pre-defined bell-shaped filter are used to estimate the overall trend in the time trajectory of each pixel. A point with a significant drop in the trajectory is tagged as a change point. After an urban change is detected, forward and backward checking is undertaken to make sure it is really new urban expansion rather than short-term crop fallow or forest disturbance. The method proposed here can capture most of the urban growth during the study period, although the accuracy of the change-time determination is somewhat lower. Results from the different biophysical indices and filtering methods are similar. Some fallows and bare lands in arid areas are easily confused with urban impervious surfaces.
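A toy version of the trajectory test (thresholds and the synthetic pixel below are invented): remove one annual harmonic, then flag a drop in the deseasonalized series that persists for several observations.

```python
# Persistent-drop detection in a deseasonalized vegetation-index trajectory.
import numpy as np

def detect_conversion(t, vi, drop=0.25, persist=4):
    """Index of a lasting VI drop (candidate urban conversion), or None."""
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    beta, *_ = np.linalg.lstsq(X, vi, rcond=None)
    des = vi - X[:, 2:] @ beta[2:]                 # remove the annual harmonic
    for k in range(2, len(t) - persist):
        if des[:k].mean() - des[k:k + persist].max() > drop:
            return k                               # drop persists `persist` obs
    return None

# synthetic pixel: vegetated until mid-2000, then impervious
t = np.arange(1998.0, 2003.0, 0.125)
vi = (0.45 + 0.15 * np.sin(2 * np.pi * t)
      + 0.01 * np.random.default_rng(2).standard_normal(t.size))
vi[t >= 2000.5] -= 0.35
print(t[detect_conversion(t, vi)])                 # ~2000.5
```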
Understanding survival analysis: Kaplan-Meier estimate.
Goel, Manish Kumar; Khanna, Pardeep; Kishore, Jugal
2010-10-01
The Kaplan-Meier estimate is one of the best options for measuring the fraction of subjects living for a certain amount of time after treatment. In clinical trials or community trials, the effect of an intervention is assessed by measuring the number of subjects that survived or were saved by that intervention over a period of time. The time starting from a defined point to the occurrence of a given event, for example death, is called the survival time, and the analysis of such group data is called survival analysis. This can be affected by subjects under study who are uncooperative and refuse to remain in the study, or by subjects who do not experience the event or death before the end of the study although they would have experienced or died of it had observation continued, or by subjects with whom we lose touch midway through the study. We label these situations as censored observations. The Kaplan-Meier estimate is the simplest way of computing survival over time in spite of all these difficulties associated with subjects or situations. The survival curve can be created assuming various situations. It involves computing the probability of occurrence of the event at a certain point in time and multiplying these successive probabilities by any earlier computed probabilities to get the final estimate. This can be calculated for two groups of subjects, as can the statistical difference between their survival curves. This can be used in Ayurveda research when comparing two drugs and looking at the survival of subjects.
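A compact implementation makes the product-of-probabilities construction concrete (the event times and censoring flags below are invented):

```python
# Kaplan-Meier survival curve: S(t) is the product over event times of
# (1 - d_i / n_i), where d_i = events at t_i and n_i = subjects at risk.
import numpy as np

def kaplan_meier(time, event):
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    s, curve = 1.0, []
    for t in np.unique(time[event == 1]):
        n_at_risk = np.sum(time >= t)           # still under observation at t
        d = np.sum((time == t) & (event == 1))  # events (e.g. deaths) at t
        s *= 1.0 - d / n_at_risk
        curve.append((t, s))
    return curve

# 1 = event occurred, 0 = censored (lost to follow-up, study ended):
print(kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1]))
```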
Scripting Module for the Satellite Orbit Analysis Program (SOAP)
NASA Technical Reports Server (NTRS)
Carnright, Robert; Paget, Jim; Coggi, John; Stodden, David
2008-01-01
This add-on module to the SOAP software can make changes to simulation objects based on the occurrence of specific conditions, allowing the software to encompass the simulation response to scheduled or physical events. Users can manipulate objects in the simulation environment under programmatic control. Inputs to the scripting module are Actions, Conditions, and the Script. Actions are arbitrary modifications to constructs such as Platform Objects (i.e., satellites), Sensor Objects (representing instruments or communication links), or Analysis Objects (user-defined logical or numeric variables). Examples of actions include changes to a satellite orbit (a Δv maneuver), changing a sensor-pointing direction, and the manipulation of a numerical expression. Conditions represent the circumstances under which Actions are performed and can be couched in If-Then-Else logic, such as performing a Δv at specific times or adding to the spacecraft power only when it is being illuminated by the Sun. The SOAP script represents the entire set of conditions being considered over a specific time interval. The output of the scripting module is a series of events, which are changes to objects at specific times. As the SOAP simulation clock runs forward, the scheduled events are performed; if the user sets the clock back in time, the events within that interval are automatically undone. The scripting module offers an interface for defining scripts in which the user does not have to remember the vocabulary of various keywords: Actions can be captured with the same user interface that is used to define the objects themselves, Conditions can be set to invoke Actions by selecting them from pull-down lists, and users define the script by selecting from the pool of defined Conditions. Many space systems must react to arbitrary events arising from scheduling or from the environment. For example, an instrument may cease to draw power when the area it is tasked to observe is not in view: the contingency of the planetary body blocking the line of sight is a condition upon which the power drawn is set to zero, and it remains at zero until the observation target is again in view. Computing the total power drawn by the instrument over a period of days or weeks can now take such factors into consideration. What makes the architecture especially powerful is that the scripting module can look ahead and behind in simulation time, and this temporal versatility can be leveraged in displays such as x-y plots. For example, a plot of a satellite's altitude as a function of time can take changes to the orbit into account.
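The following is a minimal sketch, in Python, of the condition-driven event architecture described above: scheduled events are applied as the clock runs forward and undone when it is set back. The class and method names are our own illustrative assumptions and do not reflect the actual SOAP interface.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Event:
        time: float                     # simulation time at which the change occurs
        apply: Callable[[], None]       # forward change to a simulation object
        undo: Callable[[], None]        # inverse change, for rewinding the clock

    @dataclass
    class ScriptEngine:
        events: List[Event] = field(default_factory=list)
        clock: float = 0.0

        def schedule(self, event: Event) -> None:
            self.events.append(event)
            self.events.sort(key=lambda e: e.time)

        def set_clock(self, t: float) -> None:
            if t >= self.clock:                         # running forward: apply events
                for e in self.events:
                    if self.clock < e.time <= t:
                        e.apply()
            else:                                       # running backward: undo in reverse
                for e in reversed(self.events):
                    if t < e.time <= self.clock:
                        e.undo()
            self.clock = t

Because every event carries its own inverse, the engine can move the clock to any time in either direction and the object states stay consistent, which is the property that enables look-ahead and look-behind displays.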
Global tectonic reconstructions with continuously deforming and evolving rigid plates
NASA Astrophysics Data System (ADS)
Gurnis, Michael; Yang, Ting; Cannon, John; Turner, Mark; Williams, Simon; Flament, Nicolas; Müller, R. Dietmar
2018-07-01
Traditional plate reconstruction methodologies do not allow for plate deformation to be considered. Here we present software to construct and visualize global tectonic reconstructions with deforming plates within the context of rigid plates. Both deforming and rigid plates are defined by continuously evolving polygons. The deforming regions are tessellated with triangular meshes such that either strain rate or cumulative strain can be followed. The finite strain history, crustal thickness and stretching factor of points within the deformation zones are tracked as Lagrangian points. Integrating these tools within the interactive platform GPlates enables specialized users to build and refine deforming plate models and integrate them with other models in time and space. We demonstrate the integrated platform with regional reconstructions of Cenozoic western North America, the Mesozoic South American Atlantic margin, and Cenozoic southeast Asia, embedded within global reconstructions, using different data and reconstruction strategies.
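As an illustration of tracking a Lagrangian point through a deformation zone, the sketch below integrates a horizontal dilatation rate into a cumulative finite strain, converts it to a stretching factor, and thins the crust by volume conservation. This is our own simplified model under stated assumptions, not GPlates code; the thinning law and units are illustrative.

    import numpy as np

    def track_point(h0, strain_rates, dt):
        """h0: initial crustal thickness (km); strain_rates: horizontal
        dilatation rates (1/Myr), one per time step; dt: step length (Myr)."""
        cumulative_strain = 0.0
        history = []
        for rate in strain_rates:
            cumulative_strain += rate * dt        # finite strain history
            beta = np.exp(cumulative_strain)      # stretching factor
            thickness = h0 / beta                 # crustal volume conservation
            history.append((cumulative_strain, beta, thickness))
        return history

Each tuple records the three quantities the abstract says are tracked per Lagrangian point: finite strain, stretching factor, and crustal thickness.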
Synthesis of regional crust and upper-mantle structure from seismic and gravity data
NASA Technical Reports Server (NTRS)
Alexander, S. S.; Lavin, P. M.
1979-01-01
Available seismic and ground-based gravity data are combined to infer the three-dimensional crust and upper-mantle structure in selected regions. This synthesis and interpretation proceeds from large-scale average models, suitable for early comparison with high-altitude satellite potential-field data, to more detailed delineation of structural boundaries and other variations that may be significant in natural resource assessment. Seismic and ground-based gravity data are the primary focus, but other relevant information (e.g., magnetic field, heat flow, Landsat imagery, geodetic leveling, and natural resource maps) is used to constrain the inferred structure and to assist in defining structural domains and boundaries. The seismic data consist of regional refraction lines, limited reflection coverage, surface-wave dispersion, teleseismic P- and S-wave delay times, anelastic absorption, and regional seismicity patterns. The gravity database consists of available point gravity determinations for the areas considered.
Boundary control of elliptic solutions to enforce local constraints
NASA Astrophysics Data System (ADS)
Bal, G.; Courdurier, M.
We present a constructive method to devise boundary conditions for solutions of second-order elliptic equations so that these solutions satisfy specific qualitative properties, such as: (i) the norm of the gradient of one solution is bounded from below by a positive constant in the vicinity of a finite number of prescribed points; (ii) the determinant of the gradients of n solutions is bounded from below in the vicinity of a finite number of prescribed points. Such constructions find applications in recent hybrid medical imaging modalities. The methodology starts from a controlled setting in which the constraints are satisfied and continuously modifies the coefficients in the second-order elliptic equation. The boundary condition is evolved by solving an ordinary differential equation (ODE) defined via appropriate optimality conditions. Unique continuation and standard regularity results for elliptic equations are used to show that the ODE admits a solution for sufficiently long times.
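In symbols (our notation, not the authors'), the two qualitative constraints can be stated for solutions u_1, ..., u_n of the elliptic equation, a constant c_0 > 0, and balls B(x_k, r) around the prescribed points x_1, ..., x_N:

\[
\text{(i)}\quad |\nabla u_1(x)| \;\ge\; c_0 \;>\; 0 \qquad \text{for } x \in \bigcup_{k=1}^{N} B(x_k, r),
\]
\[
\text{(ii)}\quad \det\big(\nabla u_1(x), \ldots, \nabla u_n(x)\big) \;\ge\; c_0 \;>\; 0 \qquad \text{for } x \in \bigcup_{k=1}^{N} B(x_k, r).
\]

Constraint (ii) asks that the n gradients remain uniformly linearly independent near each prescribed point, which is the non-degeneracy condition needed by the hybrid imaging reconstructions mentioned above.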