Sample records for time point finally

  1. Longitudinal employment outcomes of an early intervention vocational rehabilitation service for people admitted to rehabilitation with a traumatic spinal cord injury.

    PubMed

    Hilton, G; Unsworth, C A; Murphy, G C; Browne, M; Olver, J

    2017-08-01

    Longitudinal cohort design. First, to explore the longitudinal outcomes for people who received early intervention vocational rehabilitation (EIVR); second, to examine the nature and extent of relationships between contextual factors and employment outcomes over time. Both inpatient and community-based clients of a Spinal Community Integration Service (SCIS). People of workforce age undergoing inpatient rehabilitation for traumatic spinal cord injury were invited to participate in EIVR as part of SCIS. Data were collected at the following three time points: discharge, 1 year post discharge and 2+ years post discharge. Measures included the spinal cord independence measure, hospital anxiety and depression scale, impact on participation and autonomy scale, numerical pain-rating scale and personal wellbeing index. A range of chi-square, correlation and regression tests was undertaken to look for relationships between employment outcomes and demographic, emotional and physical characteristics. Ninety-seven participants were recruited and 60 were available at the final time point, at which 33% (95% confidence interval (CI): 24-42%) had achieved an employment outcome. Greater social participation was strongly correlated with wellbeing (ρ=0.692), and with reduced anxiety (ρ=-0.522), depression (ρ=-0.643) and pain (ρ=-0.427) at the final time point. In a generalised linear mixed-effects model, education status, relationship status and subjective wellbeing significantly increased the odds of being employed at the final time point. Tertiary education prior to injury was associated with an eightfold increase in the odds of being in employment at the final time point; being in a relationship at the time of injury was associated with a more than 3.5-fold increase in the odds of being in employment; subjective wellbeing, while the least powerful predictor, was still associated with increased odds (1.8 times) of being employed at the final time point. 
EIVR shows promise in delivering similar return-to-work rates as those traditionally reported, but sooner. The dynamics around relationships, subjective wellbeing, social participation and employment outcomes require further exploration.

  2. Orbital Maneuvers for Spacecrafts Travelling to/from the Lagrangian Points

    NASA Astrophysics Data System (ADS)

    Bertachini, A.

    The well-known Lagrangian points that appear in the planar restricted three-body problem (Szebehely, 1967) are very important for astronautical applications. They are five equilibrium points of the equations of motion, which means that a particle located at one of those points with zero velocity will remain there indefinitely. The collinear points (L1, L2 and L3) are always unstable and the triangular points (L4 and L5) are stable in the case studied here (the Sun-Earth system). They are all good locations for a space station, since they require only a small amount of ΔV (and fuel) for station-keeping control. The triangular points are especially good for this purpose, since they are stable equilibrium points. In this paper, the planar restricted three-body problem is regularized (using Lemaître regularization) and combined with numerical integration and gradient methods to solve the two-point boundary value problem (Lambert's three-body problem). This combination is applied to the search for families of transfer orbits between the Lagrangian points and the Earth, in the Sun-Earth system, with the minimum possible cost of the control used. So, the final goal of this paper is to find the magnitude and direction of the two impulses to be applied to the spacecraft to complete the transfer: the first one when leaving/arriving at the Lagrangian point and the second one when arriving at/leaving the Earth. This paper is a continuation of two previous papers that studied transfers in the Earth-Moon system: Broucke (1979), who studied transfer orbits between the Lagrangian points and the Moon, and Prado (1996), who studied transfer orbits between the Lagrangian points and the Earth. 
So, the equations of motion are x'' − 2y' = ∂Ω/∂x and y'' + 2x' = ∂Ω/∂y, where Ω is the pseudo-potential given by Ω = (x² + y²)/2 + (1 − μ)/r₁ + μ/r₂. To solve the TPBVP in the regularized variables the following steps are used: i) Guess an initial velocity Vi, so that together with the initial prescribed position ri the complete initial state is known; ii) Guess a final regularized time τf and integrate the regularized equations of motion from τ0 = 0 until τf; iii) Check the final position rf obtained from the numerical integration against the prescribed final position, and the final real time against the specified time of flight. If there is agreement (a difference less than a specified allowed error) the solution is found and the process can stop here. If there is no agreement, an increment is made in the initial guessed velocity Vi and in the guessed final regularized time, and the process goes back to step i). The method used to find the increment in the guessed variables is the standard gradient method, as described in Press et al. (1989). The routines available in this reference are also used in this research with minor modifications. After this algorithm is implemented, Lambert's three-body problem between the Earth and the Lagrangian points is solved for several values of the time of flight. Since the regularized system is used to solve this problem, there is no need to specify the final position of M3 as lying in a parking orbit around the primary (to avoid the singularity). Then, to make a comparison with previous papers (Broucke, 1979 and Prado, 1996), the centre of the primary is used as the final position for M3. The results are organized in plots of the energy and the initial flight path angle (the control to be used) in the rotating frame against the time of flight. The angle is defined so that zero lies along the "x" axis (pointing in the positive direction) and it increases in the counterclockwise sense. 
This problem, like Lambert's original version, has two solutions for a given transfer time: one in the counterclockwise direction and one in the clockwise direction in the inertial frame. In this paper, emphasis is given to finding the families with the smallest possible energy (and velocity), although many other families do exist. Broucke, R. (1979), Travelling Between the Lagrange Points and the Moon, Journal of Guidance and Control, Vol. 2; Prado, A.F.B.A. (1996), Travelling Between the Lagrangian Points and the Earth, Acta Astronautica, Vol. 39, No. 7, pp.; Press, W. H.; B. P. Flannery; S. A. Teukolsky and W. T. Vetterling (1989), Numerical Recipes, Cambridge University Press; Szebehely, V. (1967), Theory of Orbits, Academic Press, New York.
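The shooting scheme in steps i)-iii) can be illustrated on a toy two-point boundary value problem. The sketch below is a minimal stand-in, not the regularized three-body formulation: it adjusts a guessed initial velocity by finite-difference Newton steps until the integrated trajectory reaches a prescribed final position (all names and values are illustrative).

```python
def integrate(v0, T, g=9.81, steps=200):
    """RK4-integrate y'' = -g from y(0)=0, y'(0)=v0; return y(T)."""
    dt = T / steps
    y, v = 0.0, v0
    for _ in range(steps):
        # state derivatives: dy/dt = v, dv/dt = -g
        k1y, k1v = v, -g
        k2y, k2v = v + 0.5 * dt * k1v, -g
        k3y, k3v = v + 0.5 * dt * k2v, -g
        k4y, k4v = v + dt * k3v, -g
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return y

def shoot(y_target, T, v_guess=0.0, tol=1e-9, h=1e-6):
    """Adjust v0 until y(T) matches y_target (steps i-iii of the text)."""
    v = v_guess
    for _ in range(50):
        miss = integrate(v, T) - y_target      # step iii: compare final position
        if abs(miss) < tol:
            break
        # sensitivity dy(T)/dv0 by finite difference, then Newton update of the guess
        slope = (integrate(v + h, T) - integrate(v, T)) / h
        v -= miss / slope
    return v

v0 = shoot(y_target=10.0, T=2.0)
# analytic check: y(T) = v0*T - g*T^2/2  =>  v0 = (10 + 9.81*2)/2 = 14.81
print(round(v0, 4))
```

In the paper's setting the same loop runs with the regularized three-body equations and a gradient update of both the initial velocity and the final regularized time.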

  3. 40 CFR 265.142 - Cost estimate for closure.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... must equal the cost of final closure at the point in the facility's active life when the extent and... at all times over the life of the facility. (3) The closure cost estimate may not incorporate any... facility at the time of partial or final closure. (4) The owner or operator may not incorporate a zero cost...

  4. 40 CFR 265.142 - Cost estimate for closure.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... must equal the cost of final closure at the point in the facility's active life when the extent and... at all times over the life of the facility. (3) The closure cost estimate may not incorporate any... facility at the time of partial or final closure. (4) The owner or operator may not incorporate a zero cost...

  5. Study of Huizhou architecture component point cloud in surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Runmei; Wang, Guangyin; Ma, Jixiang; Wu, Yulu; Zhang, Guangbin

    2017-06-01

    Surface reconstruction software has many problems, such as complicated operation on point cloud data, too many interaction definitions, and overly stringent requirements on input data; thus, it has not been widely adopted so far. This paper selects the distinctive chuandou wooden beam framework of Huizhou architecture as the research object and presents a complete implementation pipeline, from point cloud data acquisition through point cloud preprocessing to the final surface reconstruction. First, the acquired point cloud data are preprocessed, including segmentation and filtering. Second, the surface normals are deduced directly from the point cloud dataset. Finally, surface reconstruction is studied using the Greedy Projection Triangulation algorithm. Comparing the reconstructed model against those of existing three-dimensional surface reconstruction software, the results show that the proposed scheme is smoother, more time efficient and more portable.
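The normal-deduction step mentioned above is commonly done by local PCA: fit a plane to each point's k nearest neighbours and take the eigenvector of the neighbourhood covariance with the smallest eigenvalue. A minimal numpy sketch of that standard technique (illustrative, not the authors' code):

```python
import numpy as np

def estimate_normals(points, k=8):
    """Per-point normal = eigenvector of the k-NN covariance with the
    smallest eigenvalue (the direction of least local spread)."""
    normals = np.empty_like(points)
    for i in range(len(points)):
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(d)[:k]]   # k nearest neighbours (brute force)
        cov = np.cov(nbrs.T)               # 3x3 local covariance
        w, v = np.linalg.eigh(cov)         # eigenvalues in ascending order
        normals[i] = v[:, 0]               # smallest-eigenvalue direction
    return normals

# sanity check on a flat patch in the z=0 plane: normals should be ±z
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 1, 50), rng.uniform(0, 1, 50), np.zeros(50)])
nrm = estimate_normals(pts)
print(np.allclose(np.abs(nrm[:, 2]), 1.0, atol=1e-6))
```

Production pipelines (e.g. PCL, which also hosts Greedy Projection Triangulation) use a k-d tree instead of the brute-force neighbour search shown here.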

  6. Instantons in Self-Organizing Logic Gates

    NASA Astrophysics Data System (ADS)

    Bearden, Sean R. B.; Manukian, Haik; Traversa, Fabio L.; Di Ventra, Massimiliano

    2018-03-01

    Self-organizing logic is a recently suggested framework that allows the solution of Boolean truth tables "in reverse"; i.e., it is able to satisfy the logical proposition of gates regardless of which terminal(s) the truth value is assigned ("terminal-agnostic logic"). It can be realized if time nonlocality (memory) is present. A practical realization of self-organizing logic gates (SOLGs) can be done by combining circuit elements with and without memory. By employing one such realization, we show, numerically, that SOLGs exploit elementary instantons to reach equilibrium points. Instantons are classical trajectories of the nonlinear equations of motion describing SOLGs and connect topologically distinct critical points in the phase space. By linear analysis at those points, we show that these instantons connect the initial critical point of the dynamics, with at least one unstable direction, directly to the final fixed point. We also show that the memory content of these gates affects only the relaxation time to reach the logically consistent solution. Finally, we demonstrate, by solving the corresponding stochastic differential equations, that, since instantons connect critical points, noise and perturbations may change the instanton trajectory in the phase space but not the initial and final critical points. Therefore, even for extremely large noise levels, the gates self-organize to the correct solution. Our work provides a physical understanding of, and can serve as an inspiration for, models of bidirectional logic gates that are emerging as important tools in physics-inspired, unconventional computing.

  7. FPFH-based graph matching for 3D point cloud registration

    NASA Astrophysics Data System (ADS)

    Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua

    2018-04-01

    Correspondence detection is a vital step in point cloud registration, as it can help obtain a reliable initial alignment. In this paper, we put forward an advanced point feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial possible correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by the simulated annealing algorithm to obtain the final group of correct correspondences. Finally, we present a novel set partitioning method which can transform the NP-hard optimization problem into an O(n³)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method can obtain better results in terms of both accuracy and time cost compared with other point cloud registration methods.
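Once a group of correct correspondences is in hand, the rigid transform that aligns them has a closed-form least-squares solution (the Kabsch/SVD method). A self-contained sketch of that final alignment step, independent of the FPFH and graph-matching machinery described above:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ≈ dst_i
    (Kabsch algorithm via SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

# recover a known rotation about z plus a translation
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
rng = np.random.default_rng(1)
src = rng.normal(size=(20, 3))
dst = src @ R_true.T + np.array([0.5, -1.0, 2.0])
R, t = rigid_fit(src, dst)
print(np.allclose(R, R_true, atol=1e-8), np.allclose(src @ R.T + t, dst, atol=1e-8))
```

The determinant guard matters in practice: without it, a degenerate or mismatched correspondence set can yield a reflection rather than a proper rotation.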

  8. Analyzing survival curves at a fixed point in time for paired and clustered right-censored data

    PubMed Central

    Su, Pei-Fang; Chi, Yunchan; Lee, Chun-Yi; Shyr, Yu; Liao, Yi-De

    2018-01-01

    In clinical trials, information about certain time points may be of interest in making decisions about treatment effectiveness. Rather than comparing entire survival curves, researchers can focus on the comparison at fixed time points that may have a clinical utility for patients. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on some transformations of the Kaplan-Meier estimators of the survival function. However, to compare the survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data, their approach would need to be modified. In this paper, we extend the statistics to accommodate the possible within-paired correlation and within-clustered correlation, respectively. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets. PMID:29456280
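The quantity compared at the fixed time point is the Kaplan-Meier estimate Ŝ(t0), the product over event times t_i ≤ t0 of (1 − d_i/n_i). A minimal sketch for a single independent sample (the paired/clustered variance adjustments described above are not included):

```python
def kaplan_meier(times, events, t0):
    """Kaplan-Meier survival estimate S(t0).
    times: observed times; events: 1 = event occurred, 0 = right-censored."""
    s = 1.0
    for t in sorted(set(t for t, e in zip(times, events) if e == 1)):
        if t > t0:
            break
        at_risk = sum(1 for x in times if x >= t)   # n_i: still under observation
        deaths = sum(1 for x, e in zip(times, events) if x == t and e == 1)  # d_i
        s *= 1.0 - deaths / at_risk                 # product-limit update
    return s

# toy data: events at t=1 (1 of 5 at risk) and t=3 (1 of 3 at risk)
times = [1, 2, 3, 4, 5]
events = [1, 0, 1, 0, 1]
print(kaplan_meier(times, events, t0=3))   # (1 - 1/5) * (1 - 1/3) ≈ 0.5333
```

Fixed-time-point tests such as Klein et al.'s then compare transformations of this estimate (e.g. log or complementary log-log) between groups.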

  9. Deformation of angle profiles in forward kinematics for nullifying end-point offset while preserving movement properties.

    PubMed

    Zhang, Xudong

    2002-10-01

    This work describes a new approach that allows an angle-domain human movement model to generate, via forward kinematics, Cartesian-space human movement representation with otherwise inevitable end-point offset nullified but much of the kinematic authenticity retained. The approach incorporates a rectification procedure that determines the minimum postural angle change at the final frame to correct the end-point offset, and a deformation procedure that deforms the angle profile accordingly to preserve maximum original kinematic authenticity. Two alternative deformation schemes, named amplitude-proportional (AP) and time-proportional (TP) schemes, are proposed and formulated. As an illustration and empirical evaluation, the proposed approach, along with two deformation schemes, was applied to a set of target-directed right-hand reaching movements that had been previously measured and modeled. The evaluation showed that both deformation schemes nullified the final frame end-point offset and significantly reduced time-averaged position errors for the end-point as well as the most distal intermediate joint while causing essentially no change in the remaining joints. A comparison between the two schemes based on time-averaged joint and end-point position errors indicated that overall the TP scheme outperformed the AP scheme. In addition, no statistically significant difference in time-averaged angle error was identified between the raw prediction and either of the deformation schemes, nor between the two schemes themselves, suggesting minimal angle-domain distortion incurred by the deformation.

  10. Effects of Point Count Duration, Time-of-Day, and Aural Stimuli on Detectability of Migratory and Resident Bird Species in Quintana Roo, Mexico

    Treesearch

    James F. Lynch

    1995-01-01

    Effects of count duration, time-of-day, and aural stimuli were studied in a series of unlimited-radius point counts conducted during winter in Quintana Roo, Mexico. The rate at which new species were detected was approximately three times higher during the first 5 minutes of each 15-minute count than in the final 5 minutes. The number of individuals and species...

  11. Point-of-care diagnostics: will the hurdles be overcome this time?

    PubMed

    Huckle, David

    2006-07-01

    Point-of-care diagnostics have been proposed as the latest development in clinical diagnostics several times in the last 30 years; however, they have not yet fully developed into a business sector to match the projections. This perspective examines the reasons for past failures and the failure of technology to meet user needs. Advances have taken place in the last few years that effectively remove technology as a barrier to the development of point-of-care testing. Even regulatory issues regarding how products are developed and claims supported have been absorbed, understood and now accepted. The emphasis here is on the possible favorable aspects that are novel this time around. These changes have arisen as a result of the situation with global healthcare economics and the pressure from patients to be treated more like customers. The final hurdles relate to the conflict between diagnosis with the patient present and treated as soon as the point-of-care result is available and the entrenched positions of the central laboratory, the suppliers and their established distribution chains, and the way in which healthcare budgets are allocated. The ultimate hurdle that encapsulates all of these issues is reimbursement, which is the final barrier to a significant point-of-care diagnostics market--without reimbursement there will be no market.

  12. Lie symmetry analysis, conservation laws and exact solutions of the time-fractional generalized Hirota-Satsuma coupled KdV system

    NASA Astrophysics Data System (ADS)

    Saberi, Elaheh; Reza Hejazi, S.

    2018-02-01

    In the present paper, Lie point symmetries of the time-fractional generalized Hirota-Satsuma coupled KdV (HS-cKdV) system based on the Riemann-Liouville derivative are obtained. Using the derived Lie point symmetries, we obtain similarity reductions and conservation laws of the considered system. Finally, some analytic solutions are furnished by means of the invariant subspace method in the Caputo sense.
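For reference, the Riemann-Liouville fractional derivative on which the symmetry analysis is based is defined, for n − 1 < α ≤ n, by:

```latex
{}^{RL}D_t^{\alpha} f(t)
  = \frac{1}{\Gamma(n-\alpha)}\,\frac{d^n}{dt^n}
    \int_0^t (t-s)^{\,n-\alpha-1} f(s)\,ds,
  \qquad n-1 < \alpha \le n,\ n \in \mathbb{N}.
```

The Caputo derivative, used above for the invariant-subspace solutions, instead applies the n-th order derivative to f(s) inside the integral, which is why the two settings are distinguished in the abstract.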

  13. 49 CFR 376.11 - General leasing requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... transported, and the address at which the original lease is kept by the authorized carrier. This statement... the equipment is used in its service. These documents shall contain the name and address of the owner of the equipment, the point of origin, the time and date of departure, and the point of final...

  14. Impulsive time-free transfers between halo orbits

    NASA Astrophysics Data System (ADS)

    Hiday, L. A.; Howell, K. C.

    1992-08-01

    A methodology is developed to design optimal time-free impulsive transfers between three-dimensional halo orbits in the vicinity of the interior L1 libration point of the Sun-Earth/Moon barycenter system. The transfer trajectories are optimal in the sense that the total characteristic velocity required to implement the transfer exhibits a local minimum. Criteria are established whereby the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The optimality of a reference two-impulse transfer can be determined by examining the slope at the endpoints of a plot of the magnitude of the primer vector on the reference trajectory. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The optimal time of flight on the time-free transfer, and consequently, the departure and arrival locations on the halo orbits are determined by the unconstrained minimization of a function of two variables using a multivariable search technique. Results indicate that the cost can be substantially diminished by the allowance for coasts in the initial and final libration-point orbits.

  15. Impulsive Time-Free Transfers Between Halo Orbits

    NASA Astrophysics Data System (ADS)

    Hiday-Johnston, L. A.; Howell, K. C.

    1996-12-01

    A methodology is developed to design optimal time-free impulsive transfers between three-dimensional halo orbits in the vicinity of the interior L 1 libration point of the Sun-Earth/Moon barycenter system. The transfer trajectories are optimal in the sense that the total characteristic velocity required to implement the transfer exhibits a local minimum. Criteria are established whereby the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The optimality of a reference two-impulse transfer can be determined by examining the slope at the endpoints of a plot of the magnitude of the primer vector on the reference trajectory. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The optimal time of flight on the time-free transfer, and consequently, the departure and arrival locations on the halo orbits are determined by the unconstrained minimization of a function of two variables using a multivariable search technique. Results indicate that the cost can be substantially diminished by the allowance for coasts in the initial and final libration-point orbits.

  16. Simulator study of a pictorial display for general aviation instrument flight

    NASA Technical Reports Server (NTRS)

    Adams, J. J.

    1982-01-01

    A simulation study of a computer-drawn pictorial display involved a flight task that included an en route segment, terminal area maneuvering, a final approach, a missed approach, and a hold. The pictorial display consists of the drawing of boxes which either move along the desired path or are fixed at designated way points. Two boxes may be shown at all times, one related to the active way point and the other related to the standby way point. Ground tracks and vertical profiles of the flights, time histories of the final approach, and comments were obtained from the pilots. The results demonstrate the accuracy and consistency with which the segments of the flight are executed. The pilots found that the display is easy to learn and to use, that it provides good situation awareness, and that it could improve the safety of flight. The small size of the display, the lack of numerical information on pitch, roll, and heading angles, and the lack of definition of the boundaries of the conventional glide slope and localizer areas were criticized.

  17. User's guide to four-body and three-body trajectory optimization programs

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1974-01-01

    A collection of computer programs and subroutines written in FORTRAN to calculate 4-body (sun-earth-moon-space) and 3-body (earth-moon-space) optimal trajectories is presented. The programs incorporate a variable-step integration technique and a quadrature formula to correct single-step errors. The programs provide the capability to solve the initial value problem; the two-point boundary value problem of a transfer from a given initial position to a given final position in fixed time; the optimal 2-impulse transfer from an earth parking orbit of given inclination to a given final position and velocity in fixed time; and the optimal 3-impulse transfer from a given position to a given final position and velocity in fixed time.

  18. Assessment of Surgical Learning Curves in Transoral Robotic Surgery for Squamous Cell Carcinoma of the Oropharynx

    PubMed Central

    Albergotti, William G.; Gooding, William E.; Kubik, Mark W.; Geltzeiler, Mathew; Kim, Seungwon; Duvvuri, Umamaheswar; Ferris, Robert L.

    2017-01-01

    IMPORTANCE Transoral robotic surgery (TORS) is increasingly employed as a treatment option for squamous cell carcinoma of the oropharynx (OPSCC). Measures of surgical learning curves are needed particularly as clinical trials using this technology continue to evolve. OBJECTIVE To assess learning curves for the oncologic TORS surgeon and to identify the number of cases needed to identify the learning phase. DESIGN, SETTING, AND PARTICIPANTS A retrospective review of all patients who underwent TORS for OPSCC at the University of Pittsburgh Medical Center between March 2010 and March 2016. Cases were excluded for involvement of a subsite outside of the oropharynx, for nonmalignant abnormality or nonsquamous histology, unknown primary, no tumor in the main specimen, free flap reconstruction, and for an inability to define margin status. EXPOSURES Transoral robotic surgery for OPSCC. MAIN OUTCOMES AND MEASURES Primary learning measures defined by the authors include the initial and final margin status and time to resection of main surgical specimen. A cumulative sum learning curve was developed for each surgeon for each of the study variables. The inflection point of each surgeon’s curve was considered to be the point signaling the completion of the learning phase. RESULTS There were 382 transoral robotic procedures identified. Of 382 cases, 160 met our inclusion criteria: 68 for surgeon A, 37 for surgeon B, and 55 for surgeon C. Of the 160 included patients, 125 were men and 35 were women. The mean (SD) age of participants was 59.4 (9.5) years. Mean (SD) time to resection including robot set-up was 79 (36) minutes. The inflection points for the final margin status learning curves were 27 cases (surgeon A) and 25 cases (surgeon C). There was no inflection point for surgeon B for final margin status. Inflection points for mean time to resection were: 39 cases (surgeon A), 30 cases (surgeon B), and 27 cases (surgeon C). 
CONCLUSIONS AND RELEVANCE Using metrics of positive margin rate and time to resection of the main surgical specimen, the learning curve for TORS for OPSCC is surgeon-specific. Inflection points for most learning curves peak between 20 and 30 cases. PMID:28196200
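The cumulative-sum (CUSUM) curve used in this study can be sketched simply: for each consecutive case, add (outcome − acceptable failure rate), so the curve climbs while the failure rate (e.g. positive margins) exceeds the target and falls once performance improves; the peak marks the end of the learning phase. A hypothetical illustration (the target rate p0=0.2 and the outcome sequence are assumptions for the sketch, not values from the study):

```python
def cusum(outcomes, p0=0.2):
    """Running sum of (failure - p0); failure = 1 (e.g. positive margin), else 0."""
    curve, s = [], 0.0
    for x in outcomes:
        s += x - p0
        curve.append(s)
    return curve

# early failures push the curve up; later successes pull it back down
outcomes = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
curve = cusum(outcomes)
peak = curve.index(max(curve))   # inflection: last case of the learning phase
print(peak, [round(v, 1) for v in curve])
```

Risk-adjusted CUSUM variants weight each case by its predicted difficulty, but the inflection-point logic is the same.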

  19. Computation of the target state and feedback controls for time optimal consensus in multi-agent systems

    NASA Astrophysics Data System (ADS)

    Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj

    2018-02-01

    N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms for computing this time-optimal consensus point, the control law to be used by each agent and the time taken for the consensus to occur, are proposed. Two types of multi-agent systems are considered, namely (1) coupled single-integrator agents on a plane and (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm, using convexity of attainable sets and Helly's theorem, is proposed to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N²) run-time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.

  20. Speckle noise suppression method in holographic display using time multiplexing

    NASA Astrophysics Data System (ADS)

    Liu, Su-Juan; Wang, Di; Li, Song-Jie; Wang, Qiong-Hua

    2017-06-01

    We propose a method to suppress the speckle noise in holographic display using time multiplexing. The diffractive optical elements (DOEs) and the sub-computer-generated holograms (sub-CGHs) are generated, respectively. The final image is reconstructed using time multiplexing of the subimages and the final subimages. Meanwhile, the speckle noise of the final image is suppressed by reducing the coherence of the reconstructed light and separating the adjacent image points in space. Compared with the pixel separation method, the experiments demonstrate that the proposed method suppresses the speckle noise effectively with less calculation burden and a lower demand on the frame rate of the spatial light modulator. In addition, as the numbers of DOEs and sub-CGHs increase, the speckle noise is further suppressed.
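The benefit of time multiplexing can be quantified with the speckle contrast C = σ/⟨I⟩: averaging M statistically independent subimages reduces C by roughly 1/√M. A toy Monte-Carlo check, modelling fully developed speckle as exponentially distributed intensity (an illustration of the averaging principle, not the authors' DOE/sub-CGH pipeline):

```python
import numpy as np

def speckle_contrast(img):
    """Speckle contrast C = std / mean of the intensity pattern."""
    return img.std() / img.mean()

rng = np.random.default_rng(42)
# one frame of fully developed speckle has contrast ~ 1
single = rng.exponential(size=(256, 256))
# time multiplexing: average M independent speckle realizations -> C ~ 1/sqrt(M)
M = 16
multiplexed = np.mean(rng.exponential(size=(M, 256, 256)), axis=0)
print(round(speckle_contrast(single), 2), round(speckle_contrast(multiplexed), 2))
```

With M = 16 the contrast drops to about a quarter of its single-frame value, which is why the method's demand on display frame rate matters.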

  1. Accuracy of heart rate variability estimation by photoplethysmography using a smartphone: Processing optimization and fiducial point selection.

    PubMed

    Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A

    2015-08-01

    This work compares several fiducial points for detecting the arrival of a new pulse in a photoplethysmographic signal acquired with the built-in camera of a smartphone or with a photoplethysmograph. An optimization of the signal preprocessing stage has also been carried out. Finally, we characterize the error produced when the best cutoff frequencies and fiducial point are used for smartphones and the photoplethysmograph, and assess whether the error of smartphones can reasonably be explained by variations in pulse transit time. The results have revealed that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal with a cutoff between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz are the best cutoff frequencies. Finally, the error in smartphones is slightly higher than in a photoplethysmograph.
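The two winning fiducial points, the peak of the first derivative and the minimum of the second derivative, are easy to locate on a beat once the signal is filtered. A numpy sketch on a synthetic pulse (a Gaussian upstroke stands in for a real PPG beat; values are illustrative only):

```python
import numpy as np

fs = 100.0                           # assumed sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
pulse = np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)   # synthetic systolic upstroke

d1 = np.gradient(pulse, 1 / fs)      # first derivative of the pulse wave
d2 = np.gradient(d1, 1 / fs)         # second derivative

i_d1 = int(np.argmax(d1))            # fiducial 1: steepest upslope
i_d2 = int(np.argmin(d2))            # fiducial 2: minimum of the 2nd derivative
print(t[i_d1], t[i_d2])
```

For this Gaussian the steepest upslope lands one standard deviation before the peak (t = 0.25 s) and the second-derivative minimum at the peak itself (t = 0.30 s); on real signals both points sit on the systolic rise, which is why they localize pulse arrival well.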

  2. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problem of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on parallax constraint and clustering analysis is proposed. First, the Harris corner detection algorithm is used to extract the feature points of the two images. Second, the Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes the feature point pairs with obvious errors from the approximate matching process. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final feature point matching result, realizing fast and accurate image registration. The experimental results show that the image registration algorithm proposed in this paper can improve the accuracy of the image matching while ensuring the real-time performance of the algorithm.
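The RANSAC stage can be illustrated with a bare-bones version estimating a 2D translation from noisy correspondences. This is a deliberately simplified motion model (one pair per hypothesis), not the paper's full pipeline; the loop structure of hypothesize, count inliers, refit is the same:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, thresh=0.05, seed=0):
    """Robustly estimate t with dst ≈ src + t despite wrong correspondences."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        i = rng.integers(len(src))             # minimal sample: one pair
        t = dst[i] - src[i]                    # translation hypothesis
        resid = np.linalg.norm(dst - (src + t), axis=1)
        inliers = int((resid < thresh).sum())  # consensus set size
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    mask = np.linalg.norm(dst - (src + best_t), axis=1) < thresh
    return (dst[mask] - src[mask]).mean(axis=0), mask   # refit on inliers

rng = np.random.default_rng(3)
src = rng.uniform(0, 10, size=(40, 2))
dst = src + np.array([1.5, -0.7]) + rng.normal(0, 0.01, size=(40, 2))
dst[:8] = rng.uniform(0, 10, size=(8, 2))      # 8 gross mismatches (outliers)
t_est, mask = ransac_translation(src, dst)
print(np.allclose(t_est, [1.5, -0.7], atol=0.02), int(mask.sum()))
```

The K-means pre-filtering described above reduces the outlier fraction before this stage, which directly lowers the number of RANSAC iterations needed.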

  3. The assessment of lower face morphology changes in edentulous patients after prosthodontic rehabilitation, using two methods of measurement.

    PubMed

    Jivănescu, Anca; Bratu, Dana Cristina; Tomescu, Lucian; Măroiu, Alexandra Cristina; Popa, George; Bratu, Emanuel Adrian

    2015-01-01

    Using two measurement methods (a three-dimensional laser scanning system and a digital caliper), this study compares the lower face morphology of complete edentulous patients, before and after prosthodontic rehabilitation with bimaxillary complete dentures. Fourteen edentulous patients were randomly selected from the Department of Prosthodontics, at the Faculty of Dental Medicine, "Victor Babes" University of Medicine and Pharmacy, Timisoara, Romania. The changes that occurred in the lower third of the face after prosthodontic treatment were assessed quantitatively by measuring the vertical projection of the distances between two sets of anthropometric landmarks: Subnasale - cutaneous Pogonion (D1) and Labiale superius - Labiale inferius (D2). A two-way repeated measures ANOVA model design was carried out to test for significant interactions, main effects and differences between the two types of measuring devices and between the initial and final rehabilitation time points. The main effect of the type of measuring device showed no statistically significant differences in the measured distances (p=0.24 for D1 and p=0.39 for D2), between the initial and the final rehabilitation time points. Regarding the main effect of time, there were statistically significant differences in both the measured distances D1 and D2 (p=0.001), between the initial and the final rehabilitation time points. The two methods of measurement were equally reliable in the assessment of lower face morphology changes in edentulous patients after prosthodontic rehabilitation with bimaxillary complete dentures. The differences between the measurements taken before and after prosthodontic rehabilitation proved to be statistically significant.

  4. 77 FR 26474 - Approval and Promulgation of Air Quality Implementation Plans; Maryland; Approval of 2011 Consent...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-04

    ...EPA proposes to approve State Implementation Plan (SIP) revisions submitted by the Maryland Department of the Environment (MDE). These revisions approve specific provisions of a 2011 Consent Decree between MDE and GenOn to reduce particulate matter (PM), sulfur oxides (SOX), and nitrogen oxides (NOX) from the GenOn Chalk Point generating station (Chalk Point). These revisions also remove the 1978 and 1979 Consent Orders for the Chalk Point generating station from the Maryland SIP as those Consent Orders have been superseded by the 2011 Consent Decree. In the Final Rules section of this Federal Register, EPA is approving the State's SIP submittal as a direct final rule without prior proposal because the Agency views this as a noncontroversial submittal and anticipates no adverse comments. A detailed rationale for the approval is set forth in the direct final rule. If no adverse comments are received in response to this action, no further activity is contemplated. If EPA receives adverse comments, the direct final rule will be withdrawn and all public comments received will be addressed in a subsequent final rule based on this proposed rule. EPA will not institute a second comment period. Any parties interested in commenting on this action should do so at this time.

  5. Causality in time-neutral cosmologies

    NASA Astrophysics Data System (ADS)

    Kent, Adrian

    1999-02-01

    Gell-Mann and Hartle (GMH) have recently considered time-neutral cosmological models in which the initial and final conditions are independently specified, and several authors have investigated experimental tests of such models. We point out here that GMH time-neutral models can allow superluminal signaling, in the sense that it can be possible for observers in those cosmologies, by detecting and exploiting regularities in the final state, to construct devices which send and receive signals between space-like separated points. In suitable cosmologies, any single superluminal message can be transmitted with probability arbitrarily close to one by the use of redundant signals. However, the outcome probabilities of quantum measurements generally depend on precisely which past and future measurements take place. As the transmission of any signal relies on quantum measurements, its transmission probability is similarly context dependent. As a result, the standard superluminal signaling paradoxes do not apply. Despite their unusual features, the models are internally consistent. These results illustrate an interesting conceptual point. The standard view of Minkowski causality is not an absolutely indispensable part of the mathematical formalism of relativistic quantum theory. It is contingent on the empirical observation that naturally occurring ensembles can be naturally pre-selected but not post-selected.

  6. Impacts of Satellite Orbit and Clock on Real-Time GPS Point and Relative Positioning.

    PubMed

    Shi, Junbo; Wang, Gaojing; Han, Xianquan; Guo, Jiming

    2017-06-12

    Satellite orbit and clock corrections are always treated as known quantities in GPS positioning models. Therefore, any error in the satellite orbit and clock products will probably cause significant consequences for GPS positioning, especially for real-time applications. Currently three types of satellite products have been made available for real-time positioning, including the broadcast ephemeris, the International GNSS Service (IGS) predicted ultra-rapid product, and the real-time product. In this study, these three predicted/real-time satellite orbit and clock products are first evaluated with respect to the post-mission IGS final product, which demonstrates cm to m level orbit accuracies and sub-ns to ns level clock accuracies. Impacts of real-time satellite orbit and clock products on GPS point and relative positioning are then investigated using the P3 and GAMIT software packages, respectively. Numerical results show that the real-time satellite clock corrections affect the point positioning more significantly than the orbit corrections. In contrast, only the real-time orbit corrections impact the relative positioning. Compared with the positioning solution using the IGS final product with the nominal orbit accuracy of ~2.5 cm, the real-time broadcast ephemeris with ~2 m orbit accuracy provided <2 cm relative positioning error for baselines no longer than 216 km. As for the baselines ranging from 574 to 2982 km, cm-dm level positioning error was identified for the relative positioning solution using the broadcast ephemeris. The real-time product could result in <5 mm relative positioning accuracy for baselines within 2982 km, slightly better than the predicted ultra-rapid product.
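
    The baseline results above are consistent with a standard rule of thumb for how orbit error propagates into relative positioning, sketched below; the receiver-satellite range is an assumed typical value, not a number from the paper:

```python
# Rule of thumb: a satellite orbit error dr maps into a baseline error
# db ≈ (b / rho) * dr, where b is the baseline length and rho the
# receiver-satellite range (assumed here ~22,000 km for GPS).

def baseline_error(baseline_m: float, orbit_error_m: float,
                   sat_range_m: float = 22_000e3) -> float:
    """Approximate relative-positioning error caused by an orbit error."""
    return baseline_m / sat_range_m * orbit_error_m

# Broadcast ephemeris (~2 m orbit error) over the 216 km baseline:
print(f"{baseline_error(216e3, 2.0) * 100:.1f} cm")  # ~2 cm, as reported
```

    By the same estimate, a 2982 km baseline with a 2 m orbit error gives roughly 27 cm, i.e. dm-level error, which is why only the much more accurate real-time and ultra-rapid orbits keep long-baseline errors small.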

  7. Voice outcomes after concurrent chemoradiotherapy for advanced nonlaryngeal head and neck cancer: a prospective study.

    PubMed

    Paleri, Vinidh; Carding, Paul; Chatterjee, Sanjoy; Kelly, Charles; Wilson, Janet Ann; Welch, Andrew; Drinnan, Michael

    2012-12-01

    The voice impact of treatment for nonlaryngeal head and neck primary sites remains unknown. We conducted a prospective study of a consecutive sample of patients undergoing chemoradiation for nonlaryngeal head and neck cancer. The Voice Symptom Scale (VoiSS) was completed, and voice recordings were made at 3 time-points. Of 42 recruited patients, 34 completed the measures before and in the early posttreatment phase (mean 16.5 weeks), while 21 patients were assessed at the final time-point (mean, 20.4 months). VoiSS scores showed statistically significant progressive deterioration in the total score (p = .02) and impairment subscale (p < .0001) through to the final assessment. Acoustic measures and perceptual ratings deteriorated significantly (p < .001) in the early posttreatment weeks and improved at the final assessment, but not to the baseline. Interrater agreement was excellent for expert measures. To the best of our knowledge, this is the first prospective study to show that chemoradiation therapy for nonlaryngeal head and neck cancer has a significant effect on the patients' self-reported voice quality, even in the long term. Copyright © 2012 Wiley Periodicals, Inc.

  8. Connections between Transcription Downstream of Genes and cis-SAGe Chimeric RNA.

    PubMed

    Chwalenia, Katarzyna; Qin, Fujun; Singh, Sandeep; Tangtrongstittikul, Panjapon; Li, Hui

    2017-11-22

    cis-Splicing between adjacent genes (cis-SAGe) is being recognized as one way to produce chimeric fusion RNAs. However, its detailed mechanism is not clear. A recent study revealed induction of transcription downstream of genes (DoGs) under osmotic stress. Here, we investigated the influence of osmotic stress on cis-SAGe chimeric RNAs and their connection to DoGs. We found an absence of induction of at least some cis-SAGe fusions and/or their corresponding DoGs at early time point(s). In fact, these DoGs and their cis-SAGe fusions are inversely correlated. This negative correlation changed to positive at a later time point. These results suggest a direct competition between the two categories of transcripts when the total pool of readthrough transcripts is limited at an early time point. At a later time point, DoGs and corresponding cis-SAGe fusions are both induced, indicating that total readthrough transcripts become more abundant. Finally, we observed an overall enhancement of cis-SAGe chimeric RNAs in KCl-treated samples by RNA-Seq analysis.

  9. Collective efficacy: How is it conceptualized, how is it measured, and does it really matter for understanding perceived neighborhood crime and disorder?

    PubMed Central

    Hipp, John R.

    2016-01-01

    Building on the insights of the self-efficacy literature, this study highlights that collective efficacy is a collective perception that comes from a process. This study emphasizes that 1) there is updating, as there are feedback effects from success or failure by the group to the perception of collective efficacy, and 2) this updating raises the importance of accounting for members' degree of uncertainty regarding neighborhood collective efficacy. Using a sample of 113 block groups in three rural North Carolina counties, this study finds evidence of updating as neighborhoods perceiving more crime or disorder reported less collective efficacy at the next time point. Furthermore, collective efficacy was only associated with lower perceived disorder at the next time point when it occurred in highly cohesive neighborhoods. Finally, neighborhoods with more perceived disorder and uncertainty regarding collective efficacy at one time point had lower levels of collective efficacy at the next time point, illustrating the importance of uncertainty along with updating. PMID:27069285

  10. Time-free transfers between libration-point orbits in the elliptic restricted problem

    NASA Astrophysics Data System (ADS)

    Howell, K. C.; Hiday-Johnston, L. A.

    This work is part of a larger research effort directed toward the formulation of a strategy to design optimal time-free impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L1 libration point of the Sun-Earth/Moon barycenter system. Inferior transfers that move a spacecraft from a large halo orbit to a smaller halo orbit are considered here. Primer vector theory is applied to non-optimal impulsive trajectories in the elliptic restricted three-body problem in order to establish whether the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The addition of interior impulses is also considered. Results indicate that a substantial savings in fuel can be achieved by allowing for coast periods on the specified libration-point orbits. The resulting time-free inferior transfers are compared to time-free superior transfers between halo orbits of equal z-amplitude separation.

  11. Time-free transfers between libration-point orbits in the elliptic restricted problem

    NASA Astrophysics Data System (ADS)

    Howell, K. C.; Hiday, L. A.

    1992-08-01

    This work is directed toward the formulation of a strategy to design optimal time-free impulsive transfers between 3D libration-point orbits in the vicinity of the interior L1 libration point of the sun-earth/moon barycenter system. Inferior transfers that move a spacecraft from a large halo orbit to a smaller halo orbit are considered here. Primer vector theory is applied to nonoptimal impulsive trajectories in the elliptic restricted three-body problem in order to establish whether the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The addition of interior impulses is also considered. Results indicate that a substantial savings in fuel can be achieved by allowing for coast periods on the specified libration-point orbits. The resulting time-free inferior transfers are compared to time-free superior transfers between halo orbits of equal z-amplitude separation.

  12. Stability Analysis of Continuous-Time and Discrete-Time Quaternion-Valued Neural Networks With Linear Threshold Neurons.

    PubMed

    Chen, Xiaofeng; Song, Qiankun; Li, Zhongshan; Zhao, Zhenjiang; Liu, Yurong

    2018-07-01

    This paper addresses the problem of stability for continuous-time and discrete-time quaternion-valued neural networks (QVNNs) with linear threshold neurons. Applying the semidiscretization technique to the continuous-time QVNNs, the discrete-time analogs are obtained, which preserve the dynamical characteristics of their continuous-time counterparts. Via the plural decomposition method of quaternions, the homeomorphic mapping theorem, and the Lyapunov theorem, some sufficient conditions on the existence, uniqueness, and global asymptotic stability of the equilibrium point are derived for the continuous-time QVNNs and their discrete-time analogs, respectively. Furthermore, a uniform sufficient condition on the existence, uniqueness, and global asymptotic stability of the equilibrium point is obtained for both the continuous-time QVNNs and their discrete-time version. Finally, two numerical examples are provided to substantiate the effectiveness of the proposed results.

  13. The effect of different control point sampling sequences on convergence of VMAT inverse planning

    NASA Astrophysics Data System (ADS)

    Pardo Montero, Juan; Fenwick, John D.

    2011-04-01

    A key component of some volumetric-modulated arc therapy (VMAT) optimization algorithms is the progressive addition of control points to the optimization. This idea was introduced in Otto's seminal VMAT paper, in which a coarse sampling of control points was used at the beginning of the optimization and new control points were progressively added one at a time. A different form of the methodology is also present in the RapidArc optimizer, which adds new control points in groups called 'multiresolution levels', each doubling the number of control points in the optimization. This progressive sampling accelerates convergence, improving the results obtained, and has similarities with the ordered subset algorithm used to accelerate iterative image reconstruction. In this work we have used a VMAT optimizer developed in-house to study the performance of optimization algorithms which use different control point sampling sequences, most of which fall into three different classes: doubling sequences, which add new control points in groups such that the number of control points in the optimization is (roughly) doubled; Otto-like progressive sampling which adds one control point at a time, and equi-length sequences which contain several multiresolution levels each with the same number of control points. Results are presented in this study for two clinical geometries, prostate and head-and-neck treatments. A dependence of the quality of the final solution on the number of starting control points has been observed, in agreement with previous works. We have found that some sequences, especially E20 and E30 (equi-length sequences with 20 and 30 multiresolution levels, respectively), generate better results than a 5 multiresolution level RapidArc-like sequence. 
The final value of the cost function is reduced by up to 20%, with these reductions leading to small improvements in dosimetric parameters characterizing the treatments: slightly more homogeneous target doses and better sparing of the organs at risk.
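
    The three sequence classes compared above can be sketched as follows. This is an illustrative reconstruction, not the authors' in-house optimizer: the starting count of 11 and the target of 177 control points are assumed values, and the equi-length variant is read here as levels that each add an equal increment of control points.

```python
# Illustrative control-point sampling sequences for VMAT optimization.
# Each list gives the number of control points active in the
# optimization at successive multiresolution levels (assumed numbers).

def doubling_sequence(start: int, total: int) -> list[int]:
    """RapidArc-like: roughly double the control points at each level."""
    seq, n = [], start
    while n < total:
        seq.append(n)
        n = min(2 * n, total)
    seq.append(total)
    return seq

def otto_sequence(start: int, total: int) -> list[int]:
    """Otto-like progressive sampling: add one control point at a time."""
    return list(range(start, total + 1))

def equi_length_sequence(levels: int, total: int) -> list[int]:
    """E20/E30-style: a fixed number of levels with equal increments."""
    return [round(total * (i + 1) / levels) for i in range(levels)]

print(doubling_sequence(11, 177))  # → [11, 22, 44, 88, 176, 177]
print(equi_length_sequence(20, 177)[:4])
```

    The doubling and equi-length variants both end at full sampling; an E20 sequence simply reaches it through twenty smaller steps rather than a handful of doublings.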

  14. Voronoi Tessellation for reducing the processing time of correlation functions

    NASA Astrophysics Data System (ADS)

    Cárdenas-Montes, Miguel; Sevilla-Noarbe, Ignacio

    2018-01-01

    The increase of data volume in Cosmology is motivating the search for new solutions to the difficulties associated with long processing times and the precision of calculations. This is especially true for several relevant statistics of the galaxy distribution of the Large Scale Structure of the Universe, namely the two- and three-point angular correlation functions, whose processing time has grown critically with the increase of the size of the data sample. Beyond parallel implementations to overcome the barrier of processing time, space partitioning algorithms are necessary to reduce the computational load. These can delimit the elements involved in the correlation function estimation to those that can potentially contribute to the final result. In this work, Voronoi Tessellation is used to reduce the processing time of the two-point and three-point angular correlation functions. The results of this proof-of-concept show a significant reduction of the processing time when preprocessing the galaxy positions with Voronoi Tessellation.
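
    The abstract gives no implementation details, so the following is only a minimal sketch of the underlying idea, that space partitioning restricts pair counting to nearby elements, using a uniform grid as a stand-in for the Voronoi tessellation:

```python
import numpy as np

# Pair counting (the kernel of two-point correlation estimators) with a
# spatial partition: only points in a cell and its neighbors can lie
# within the maximum separation, so distant pairs are never visited.

def count_pairs_grid(xy: np.ndarray, r_max: float) -> int:
    """Count pairs with separation < r_max using grid cells of size r_max."""
    cells: dict[tuple[int, int], list[int]] = {}
    for i, p in enumerate(xy):
        cells.setdefault((int(p[0] // r_max), int(p[1] // r_max)), []).append(i)
    count = 0
    for (cx, cy), idx in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), []):
                    for i in idx:
                        if i < j and np.hypot(*(xy[i] - xy[j])) < r_max:
                            count += 1
    return count

rng = np.random.default_rng(0)
pts = rng.random((500, 2))
# Brute force over all pairs, for comparison:
d = np.hypot(pts[:, None, 0] - pts[None, :, 0], pts[:, None, 1] - pts[None, :, 1])
brute = int(np.triu(d < 0.05, k=1).sum())
print(count_pairs_grid(pts, 0.05), brute)  # equal counts, far fewer distance checks
```

    A Voronoi tessellation plays the same role as the grid here, delimiting which galaxy pairs can possibly contribute, at the cost of a more involved neighbor lookup.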

  15. Conformal structure of massless scalar amplitudes beyond tree level

    NASA Astrophysics Data System (ADS)

    Banerjee, Nabamita; Banerjee, Shamik; Bhatkar, Sayali Atul; Jain, Sachin

    2018-04-01

    We show that the one-loop on-shell four-point scattering amplitude of massless ϕ⁴ scalar field theory in 4D Minkowski spacetime, when Mellin transformed to the celestial sphere at infinity, transforms covariantly under the global conformal group (SL(2, ℂ)) on the sphere. The unitarity of the four-point scalar amplitudes is recast into this Mellin basis. We show that the same conformal structure also appears for the two-loop Mellin amplitude. Finally, we comment on some universal structure for all-loop four-point Mellin amplitudes specific to this theory.

  16. Determination of the boiling-point distribution by simulated distillation from n-pentane through n-tetratetracontane in 70 to 80 seconds.

    PubMed

    Lubkowitz, Joaquin A; Meneghini, Roberto I

    2002-01-01

    This work presents boiling-point distributions determined by simulated distillation with direct-column heating rather than oven-column heating. Column-heating rates of 300 degrees C/min are obtained, yielding retention times of 73 s for n-tetratetracontane. The calibration curves of retention time versus boiling point, in the range of n-pentane to n-tetratetracontane, are identical to those obtained at slower oven-heating rates. The boiling-point distribution of the reference gas oil is compared with that obtained with column oven heating at rates of 15 to 40 degrees C/min. The results show boiling-point distribution values nearly the same (1-2 degrees F) as those obtained with oven-column heating from the initial boiling point to 80% distilled off. Slightly higher differences (3-4 degrees F) are obtained for the interval from 80% distilled to the final boiling point. Nonetheless, allowed consensus differences are never exceeded. The precision of the boiling-point distributions (expressed as standard deviations) is 0.1-0.3% for the data obtained in the direct column-heating mode.
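
    The calibration step described above can be sketched as a simple interpolation. The retention times below are assumed for illustration (only the 73 s value for n-tetratetracontane comes from the abstract), and the boiling points are approximate literature values for the named n-alkanes:

```python
import numpy as np

# Simulated-distillation calibration sketch: known n-alkane boiling
# points are paired with their retention times, and a sample component's
# boiling point is read off the curve by interpolation.

# (retention time s, boiling point °C) for calibration n-alkanes
rt = np.array([5.0, 18.0, 33.0, 52.0, 73.0])       # assumed; C44 at 73 s
bp = np.array([36.0, 174.0, 287.0, 431.0, 545.0])  # n-C5, n-C10, n-C16, n-C28, n-C44

def boiling_point(sample_rt: float) -> float:
    """Interpolate a boiling point from a component's retention time."""
    return float(np.interp(sample_rt, rt, bp))

print(boiling_point(40.0))
```

    The paper's point is that this curve is the same whether the 73 s run uses direct-column heating or a much slower conventional oven ramp.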

  17. An approach of point cloud denoising based on improved bilateral filtering

    NASA Astrophysics Data System (ADS)

    Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin

    2018-04-01

    An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm, which is employed to handle the depth images. First, the mobile platform can move flexibly and its control interface is convenient to operate. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process the depth images obtained by the Kinect sensor. The results show that noise removal is improved compared with standard bilateral filtering. In the offline condition, the color images and processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method improves both the processing speed for depth images and the quality of the resulting point clouds.

  18. SOFIA pointing history

    NASA Astrophysics Data System (ADS)

    Kärcher, Hans J.; Kunz, Nans; Temi, Pasquale; Krabbe, Alfred; Wagner, Jörg; Süß, Martin

    2014-07-01

    The original pointing accuracy requirement of the Stratospheric Observatory for Infrared Astronomy (SOFIA) was defined at the beginning of the program in the late 1980s as a very challenging 0.2 arcsec rms. The early science flights of the observatory started in December 2010, and the observatory has in the meantime reached nearly 0.7 arcsec rms, which is sufficient for most of the SOFIA science instruments. NASA and DLR, the owners of SOFIA, are now planning a four-year program to bring the pointing down to the ultimate 0.2 arcsec rms. This may be the right time to recall the history of the pointing requirement and its verification: first via early computer models and wind tunnel tests, later via computer-aided end-to-end simulations, and finally through the first commissioning flights some years ago. The paper recollects the tools used in the different project phases for the verification of the pointing performance, explains the achievements, and may give hints for the planning of the upcoming final pointing improvement phase.

  19. Moments in time: metacognition, trust, and outcomes in dyadic negotiations.

    PubMed

    Olekalns, Mara; Smith, Philip L

    2005-12-01

    This research tested the relationships between turning points, cognitive and affective trust, and negotiation outcomes. After completing a simulated negotiation, participants identified turning points from videotape. Turning points were then classified as substantive (interest, offer), characterization (positive, negative), or procedural (positive, negative). Prenegotiation affective trust predicted subsequent turning points, whereas prenegotiation cognitive trust did not, suggesting that different cues influence the two types of trust. Postnegotiation cognitive trust was increased by the occurrence of interest, positive characterization, and positive procedural turning points and decreased by negative characterization turning points. Affective trust was increased by positive procedural turning points. Finally, interest turning points resulted in higher joint outcomes, whereas negative characterization turning points resulted in lower joint outcomes. We conclude that there are two paths to building trust and increasing joint gain, one through insight and one through signaling good faith intentions.

  20. Exploring the utility of measures of critical thinking dispositions and professional behavior development in an audiology education program.

    PubMed

    Ng, Stella L; Bartlett, Doreen J; Lucy, S Deborah

    2013-05-01

    Discussions about professional behaviors are growing increasingly prevalent across health professions, especially as a central component to education programs. A strong critical thinking disposition, paired with critical consciousness, may provide future health professionals with a foundation for solving challenging practice problems through the application of sound technical skill and scientific knowledge without sacrificing sensitive, empathic, client-centered practice. In this article, we describe an approach to monitoring student development of critical thinking dispositions and key professional behaviors as a way to inform faculty members' and clinical supervisors' support of students and ongoing curriculum development. We designed this exploratory study to describe the trajectory of change for a cohort of audiology students' critical thinking dispositions (measured by the California Critical Thinking Disposition Inventory [CCTDI]) and professional behaviors (using the Comprehensive Professional Behaviors Development Log-Audiology [CPBDL-A]) in an audiology program. Implications for the CCTDI and CPBDL-A in audiology entry-to-practice curricula and professional development will be discussed. This exploratory study involved a cohort of audiology students, studied over a two-year period, using a one-group repeated measures design. Eighteen audiology students (two male and 16 female) began the study. At the third and final data collection point, 15 students completed the CCTDI, and nine students completed the CPBDL-A. The CCTDI and CPBDL-A were each completed at three time points: at the beginning, at the middle, and near the end of the audiology education program. Data are presented descriptively in box plots to examine the trends of development for each critical thinking disposition dimension and each key professional behavior as well as for an overall critical thinking disposition score. 
For the CCTDI, there was a general downward trend from time point 1 to time point 2 and a general upward trend from time point 2 to time point 3. Students demonstrated upward trends from the initial to the final time point for their self-assessed development of professional behaviors as indicated on the CPBDL-A. The CCTDI and CPBDL-A can be used by audiology education programs as mechanisms for inspiring, fostering, and monitoring the development of critical thinking dispositions and key professional behaviors in students. Feedback and mentoring about dispositions and behaviors in conjunction with completion of these measures are recommended for inspiring and fostering these key professional attributes. American Academy of Audiology.

  1. Functional renormalization group for the U(1)-T56 tensorial group field theory with closure constraint

    NASA Astrophysics Data System (ADS)

    Lahoche, Vincent; Ousmane Samary, Dine

    2017-02-01

    This paper is focused on the functional renormalization group applied to the T56 tensor model on the Abelian group U(1) with closure constraint. First, we derive the flow equations for the couplings and mass parameters in a suitable truncation around the marginal interactions with respect to the perturbative power counting. Second, we study the behavior around the Gaussian fixed point and show that the theory is not asymptotically free. Finally, we discuss the UV completion of the theory. We show the existence of several nontrivial fixed points, study the behavior of the renormalization group flow around them, and point out evidence in favor of an asymptotically safe theory.

  2. 78 FR 8577 - Final Environmental Impact Statement; Izembek National Wildlife Refuge Proposed Land Exchange...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-06

    ... world. In recognition of this, and for its importance to internationally migrating birds, it was.... During that time, we held four face-to-face public meetings in Anchorage, Sand Point, Cold Bay, and King...

  3. Comparison Between One-Point Calibration and Two-Point Calibration Approaches in a Continuous Glucose Monitoring Algorithm

    PubMed Central

    Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl

    2014-01-01

    Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated both in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) in the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach showed higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
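
    The published algorithm is not reproduced here, but the difference between the two calibration approaches, and the MARD metric used to compare them, can be sketched as follows (all sensor currents and glucose values are made up):

```python
# Sketch of CGM calibration. A sensor current i (nA) is mapped to
# glucose g (mg/dL) via g = (i - i0) / s, with sensitivity s and
# background current i0. Two-point calibration fits both parameters;
# one-point calibration assumes i0 (here zero, as the abstract's
# background-current estimates suggest) and fits only s.

def two_point(i1, g1, i2, g2):
    """Return (sensitivity, background current) from two references."""
    s = (i1 - i2) / (g1 - g2)
    return s, i1 - s * g1

def one_point(i1, g1, background=0.0):
    """Return the sensitivity only, with an assumed background current."""
    return (i1 - background) / g1

def mard(cgm, ref):
    """Median absolute relative difference (%), the accuracy metric above."""
    diffs = sorted(abs(c - r) / r for c, r in zip(cgm, ref))
    mid = len(diffs) // 2
    m = diffs[mid] if len(diffs) % 2 else (diffs[mid - 1] + diffs[mid]) / 2
    return 100 * m

s, i0 = two_point(12.0, 120.0, 7.0, 70.0)  # s ≈ 0.1 nA per mg/dL, i0 ≈ 0
print(s, i0)
```

    With a negligible true background current, the one-point fit avoids spending a second reference measurement on a parameter that is effectively zero, which is one plausible reading of why it helped most in hypoglycemia.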

  4. Characteristic Boundary Conditions for ARO-1

    DTIC Science & Technology

    1983-05-01

    As shown in Fig. 3, the point designated II is the interior point that was used to define the barred coordinate system, evaluated at time t=... L. Jacocks, Calspan Field Services, Inc. May 1983. Final Report for Period October 1981 - September 1982. Approved for public release; distribution unlimited. Arnold Engineering Development Center, Arnold Air Force Station, Tennessee; Air Force Systems Command, United States Air Force.

  5. The point of no return: A fundamental limit on the ability to control thought and action.

    PubMed

    Logan, Gordon D

    2015-01-01

    Bartlett (1958. Thinking. New York: Basic Books) described the point of no return as a point of irrevocable commitment to action, which was preceded by a period of gradually increasing commitment. As such, the point of no return reflects a fundamental limit on the ability to control thought and action. I review the literature on the point of no return, taking three perspectives. First, I consider the point of no return from the perspective of the controlled act, as a locus in the architecture and anatomy of the underlying processes. I review experiments from the stop-signal paradigm that suggest that the point of no return is located late in the response system. Then I consider the point of no return from the perspective of the act of control that tries to change the controlled act before it becomes irrevocable. From this perspective, the point of no return is a point in time that provides enough "lead time" for the act of control to take effect. I review experiments that measure the response time to the stop signal as the lead time required for response inhibition in the stop-signal paradigm. Finally, I consider the point of no return in hierarchically controlled tasks, in which there may be many points of no return at different levels of the hierarchy. I review experiments on skilled typing that suggest different points of no return for the commands that determine what is typed and the countermands that inhibit typing, with increasing commitment to action the lower the level in the hierarchy. I end by considering the point of no return in perception and thought as well as action.
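
    The stop-signal measurements mentioned above are commonly analyzed with the integration method of the stop-signal literature, which estimates the latency of the act of control from the go-RT distribution; this is a minimal sketch with hypothetical numbers, not data from the review:

```python
# Integration method for the stop-signal response time (SSRT): take the
# p-th quantile of the no-signal (go) RT distribution, where p is the
# probability of responding despite a stop signal, and subtract the mean
# stop-signal delay (SSD). All numbers below are hypothetical.

def ssrt_integration(go_rts, p_respond, mean_ssd):
    """Estimate SSRT: p-th go-RT quantile minus the mean SSD (ms)."""
    rts = sorted(go_rts)
    k = max(0, min(len(rts) - 1, int(p_respond * len(rts)) - 1))
    return rts[k] - mean_ssd

go_rts = [380, 400, 410, 425, 440, 455, 470, 490, 510, 560]  # ms, hypothetical
print(ssrt_integration(go_rts, 0.5, 200))  # → 240
```

    The resulting latency is the "lead time" in the sense used above: a stop signal must arrive at least this long before the go response becomes irrevocable for inhibition to succeed.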

  6. Combined Vocal Exercises for Rehabilitation After Supracricoid Laryngectomy: Evaluation of Different Execution Times.

    PubMed

    Silveira, Hevely Saray Lima; Simões-Zenari, Marcia; Kulcsar, Marco Aurélio; Cernea, Claudio Roberto; Nemr, Kátia

    2017-10-27

    The supracricoid partial laryngectomy allows the preservation of laryngeal functions with good local cancer control. To assess laryngeal configuration and voice analysis data following the performance of a combination of two vocal exercises, the prolonged /b/ vocal exercise combined with the vowel /e/ using chest and arm pushing, executed for different durations, among individuals who have undergone supracricoid laryngectomy. Eleven patients who had undergone supracricoid partial laryngectomy with cricohyoidoepiglottopexy (CHEP) were evaluated using voice recordings. Four judges separately performed an auditory-perceptual analysis of the voices, presented in random order. For the analysis of intrajudge reliability, 70% of the voices were repeated. The intraclass correlation coefficient was used to analyze the reliability of the judges. For each judge, the comparisons between baseline (time point 0), after the first series of exercises (time point 1), after the second series (time point 2), after the third series (time point 3), after the fourth series (time point 4), and after the fifth and final series (time point 5) were made using the Friedman test with a significance level of 5%. The data on laryngeal configuration were subjected to a descriptive analysis. The results of judge 1, who showed the greatest reliability, were considered in the evaluation. There was an improvement in the general grade of vocal deviation, roughness, and breathiness from time point 4 (T4) onward. The prolonged /b/ vocal exercise, combined with the vowel /e/ using chest- and arm-pushing exercises, was associated with an improvement in the overall grade of vocal deviation, roughness, and breathiness starting at minute 4 (time point 4) among patients who had undergone supracricoid laryngectomy with CHEP reconstruction. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  7. An interpretation model of GPR point data in tunnel geological prediction

    NASA Astrophysics Data System (ADS)

    He, Yu-yao; Li, Bao-qi; Guo, Yuan-shu; Wang, Teng-na; Zhu, Ya

    2017-02-01

    GPR (Ground Penetrating Radar) point data plays an indispensable role in tunnel geological prediction. However, research on GPR point data remains scarce, and existing results do not meet the actual requirements of engineering projects. In this paper, a GPR point data interpretation model based on the WD (Wigner distribution) and a deep CNN (convolutional neural network) is proposed. Firstly, the GPR point data are transformed by the WD to obtain a joint time-frequency distribution map; secondly, the joint distribution maps are classified by the deep CNN, while the approximate location of the geological target is determined by inspecting the time-frequency map in parallel; finally, the GPR point data are interpreted according to the classification results and the position information from the map. The simulation results show that the classification accuracy on the test dataset (comprising 1200 GPR point records) is 91.83% at the 200th iteration. Our model has the advantages of high accuracy and fast training speed, and can provide a scientific basis for developing tunnel construction and excavation plans.
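
    As a rough illustration of the first step, the discrete pseudo Wigner-Ville distribution of a single 1-D trace can be sketched as follows (a minimal NumPy version applied to a synthetic chirp, not the paper's implementation; the trace and all parameters are invented):

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution of a 1-D trace.

    Returns an (N, N) time-frequency map: row t is the spectrum of the
    lag product x[t+tau] * conj(x[t-tau]) at time sample t.
    """
    x = np.asarray(x, dtype=complex)
    n = len(x)
    wvd = np.zeros((n, n))
    for t in range(n):
        taumax = min(t, n - 1 - t)          # largest usable lag at this t
        tau = np.arange(-taumax, taumax + 1)
        kernel = np.zeros(n, dtype=complex)
        kernel[tau % n] = x[t + tau] * np.conj(x[t - tau])
        wvd[t] = np.fft.fft(kernel).real    # lag product is Hermitian in tau
    return wvd

# synthetic "GPR trace": a chirp whose instantaneous frequency rises with time
t = np.linspace(0.0, 1.0, 128)
trace = np.cos(2 * np.pi * (5 + 10 * t) * t)
tf_map = wigner_ville(trace)
```

    Each row of `tf_map` is the time-frequency slice for one sample; in the paper's pipeline such maps would then be fed to the CNN classifier.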

  8. Instance-Based Learning: Integrating Sampling and Repeated Decisions from Experience

    ERIC Educational Resources Information Center

    Gonzalez, Cleotilde; Dutt, Varun

    2011-01-01

    In decisions from experience, there are 2 experimental paradigms: sampling and repeated-choice. In the sampling paradigm, participants sample between 2 options as many times as they want (i.e., the stopping point is variable), observe the outcome with no real consequences each time, and finally select 1 of the 2 options that cause them to earn or…

  9. 77 FR 421 - Drawbridge Operation Regulation; Long Island, New York Inland Waterway From East Rockaway Inlet...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-05

    ..., mile 6.1, across Narrow Bay, between Smith Point and Fire Island, New York. This temporary final rule..., (76 FR 60733) and that work was unexpectedly delayed. This rule provides a time extension so that the rehabilitation can be completed in the shortest possible time frame. Without this rule the work would have to be...

  10. Pointing to double-step visual stimuli from a standing position: motor corrections when the speed-accuracy trade-off is unexpectedly modified in-flight. A breakdown of the perception-action coupling.

    PubMed

    Fautrelle, L; Barbieri, G; Ballay, Y; Bonnetblanc, F

    2011-10-27

    The time required to complete a fast and accurate movement is a function of its amplitude and the target size. This phenomenon refers to the well-known speed-accuracy trade-off. Some interpretations have suggested that the speed-accuracy trade-off is already integrated into the movement planning phase. More specifically, pointing movements may be planned to minimize the variance of the final hand position. However, goal-directed movements can be altered at any time if, for instance, the target location is changed during execution. Thus, one possible limitation of these interpretations may be that they underestimate feedback processes. To further investigate this hypothesis, we designed an experiment in which the speed-accuracy trade-off was unexpectedly varied at hand movement onset by modifying separately the target distance or size, or by modifying both of them simultaneously. These pointing movements were executed from an upright standing position. Our main results showed that the movement time increased when there was a change to the size or location of the target, while the terminal variability of finger position did not change. In other words, the movement velocity is modulated according to the target size and distance during motor programming or during the final approach, independently of the final variability of the hand position. This suggests that when the speed-accuracy trade-off is unexpectedly modified, terminal feedback based on intermediate representations of the endpoint velocity is used to monitor and control the hand displacement. There is no obvious perception-action coupling in this case; rather, intermediate processing may be involved. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.

  11. Detecting P and S-wave of Mt. Rinjani seismic based on a locally stationary autoregressive (LSAR) model

    NASA Astrophysics Data System (ADS)

    Nurhaida, Subanar, Abdurakhman, Abadi, Agus Maman

    2017-08-01

    Seismic data are usually modelled using autoregressive processes. The aim of this paper is to find the arrival times of the seismic waves of Mt. Rinjani in Indonesia. Kitagawa's algorithm is used to detect the seismic P and S-waves. The Householder transformation used in the algorithm makes it effective at finding the number of change points and the parameters of the autoregressive models. The results show that applying the Box-Cox transformation at the variable selection stage makes the algorithm work well in detecting the change points. Furthermore, when the basic span of the subintervals is set to 200 seconds and the maximum AR order is 20, there are 8 change points, which occur at 1601, 2001, 7401, 7601, 7801, 8001, 8201 and 9601. Finally, the P and S-wave arrival times are detected at times 1671 and 2045, respectively, using a precise detection algorithm.
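
    The underlying idea, fitting separate AR models to candidate segments and choosing the split that minimizes an information criterion, can be sketched in miniature (a single-change-point toy with least-squares AR fits and AIC; the data and every parameter below are invented, and this is far simpler than the Householder-based multi-segment algorithm in the paper):

```python
import numpy as np

def ar_aic(x, p=2):
    """AIC of a least-squares AR(p) fit to a 1-D series."""
    y = x[p:]
    X = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ coef) ** 2)
    return len(y) * np.log(sigma2) + 2 * (p + 1)

def change_point(x, p=2, margin=20):
    """Index that minimizes the summed AIC of AR fits on the two segments."""
    costs = {c: ar_aic(x[:c], p) + ar_aic(x[c:], p)
             for c in range(margin, len(x) - margin)}
    return min(costs, key=costs.get)

# toy record: quiet background noise, then a louder (wave-arrival-like) segment
rng = np.random.default_rng(0)
quiet = rng.normal(0.0, 0.1, 150)
loud = rng.normal(0.0, 1.0, 150)
cp = change_point(np.concatenate([quiet, loud]))
```

    The true boundary here is at sample 150; minimizing the two-segment AIC recovers it to within a few samples.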

  12. The Point of No Return

    PubMed Central

    Logan, Gordon D.

    2015-01-01

    Bartlett (1958) described the point of no return as a point of irrevocable commitment to action, which was preceded by a period of gradually increasing commitment. As such, the point of no return reflects a fundamental limit on the ability to control thought and action. I review the literature on the point of no return, taking three perspectives. First, I consider the point of no return from the perspective of the controlled act, as a locus in the architecture and anatomy of the underlying processes. I review experiments from the stop-signal paradigm that suggest that the point of no return is located late in the response system. Then I consider the point of no return from the perspective of the act of control that tries to change the controlled act before it becomes irrevocable. From this perspective, the point of no return is a point in time that provides enough “lead time” for the act of control to take effect. I review experiments that measure the response time to the stop signal as the lead time required for response inhibition in the stop-signal paradigm. Finally, I consider the point of no return in hierarchically controlled tasks, in which there may be many points of no return at different levels of the hierarchy. I review experiments on skilled typing that suggest different points of no return for the commands that determine what is typed and the countermands that inhibit typing, with increasing commitment to action the lower the level in the hierarchy. I end by considering the point of no return in perception and thought as well as action. PMID:25633089

  13. Real-time EEG-based detection of fatigue driving danger for accident prediction.

    PubMed

    Wang, Hong; Zhang, Chi; Shi, Tianwei; Wang, Fuwang; Ma, Shujun

    2015-03-01

    This paper proposes a real-time electroencephalogram (EEG)-based method for detecting potential danger during fatigue driving. To determine driver fatigue in real time, wavelet entropy with a sliding window and a pulse-coupled neural network (PCNN) were used to process the EEG signals in the visual area (the main information input route). To detect fatigue danger, the neural mechanism of driver fatigue was analyzed. Functional brain networks were employed to track the impact of fatigue on the brain's processing capacity. The results show that the overall functional connectivity of the subjects is weakened after long-duration driving tasks; this regularity is summarized as the fatigue convergence phenomenon. Based on the fatigue convergence phenomenon, we combined the input and global synchronization of the brain to calculate the brain's residual information-processing capacity and obtain the dangerous points in real time. Finally, the danger detection system based on this neural mechanism was validated using accident EEG recordings. The time distributions of the danger points output by the system agree well with those of the real accident points.
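
    A bare-bones version of the sliding-window wavelet entropy feature might look like the following (a hand-rolled Haar decomposition rather than whatever wavelet family the authors used; the signal, window length, and step are invented):

```python
import numpy as np

def haar_band_energies(x, levels=4):
    """Relative energy in each Haar detail band plus the final approximation."""
    a = np.asarray(x, float)
    energies = []
    for _ in range(levels):
        pairs = a[:len(a) // 2 * 2].reshape(-1, 2)
        d = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)   # detail coefficients
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)   # approximation
        energies.append(np.sum(d * d))
    energies.append(np.sum(a * a))
    e = np.array(energies)
    return e / e.sum()

def wavelet_entropy(x):
    """Shannon entropy of the relative wavelet band energies."""
    p = haar_band_energies(x)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def sliding_wavelet_entropy(sig, win=256, step=64):
    """Wavelet entropy over a sliding window, as a coarse EEG feature."""
    return np.array([wavelet_entropy(sig[i:i + win])
                     for i in range(0, len(sig) - win + 1, step)])

# a broadband (noise-like) signal spreads energy across bands and therefore
# has a higher wavelet entropy than a narrowband oscillation
rng = np.random.default_rng(1)
noise_entropy = wavelet_entropy(rng.normal(size=1024))
sine_entropy = wavelet_entropy(np.sin(2 * np.pi * np.arange(1024) / 8))
```

    In a fatigue-monitoring setting, `sliding_wavelet_entropy` would be applied per channel and the resulting feature track fed to the downstream detector.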

  14. An adaptive clustering algorithm for image matching based on corner feature

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-04-01

    Traditional image matching algorithms struggle to balance real-time performance and accuracy. To solve this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method is based on the similarity of the vectors formed by matching point pairs, and adaptive clustering is performed on these pairs. Harris corner detection is carried out first to extract the feature points of the reference image and the perceived image, and the feature points of the two images are initially matched using the normalized cross-correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to reduce ineffective operations and improve matching speed and robustness. Finally, the Random Sample Consensus (RANSAC) algorithm is applied to the matching points that remain after clustering. The experimental results show that the proposed algorithm effectively eliminates most incorrect matching points while retaining the correct ones, improves the accuracy of RANSAC matching, and at the same time reduces the computational load of the whole matching process.
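
    The initial NCC matching stage can be sketched in a few lines (a toy greedy matcher on small patches around given corner coordinates; the images, corner lists, and threshold are fabricated, and a real pipeline would add Harris detection, the clustering step, and RANSAC on top):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_corners(img1, img2, corners1, corners2, half=4, thresh=0.8):
    """Greedily pair each corner of img1 with its best NCC match in img2."""
    matches = []
    for (y1, x1) in corners1:
        p1 = img1[y1 - half:y1 + half + 1, x1 - half:x1 + half + 1]
        scores = [ncc(p1, img2[y2 - half:y2 + half + 1,
                               x2 - half:x2 + half + 1])
                  for (y2, x2) in corners2]
        j = int(np.argmax(scores))
        if scores[j] >= thresh:
            matches.append(((y1, x1), corners2[j]))
    return matches

# toy check: img2 is img1 rolled by (3, 5), so corners should map accordingly
rng = np.random.default_rng(2)
img1 = rng.random((64, 64))
img2 = np.roll(img1, (3, 5), axis=(0, 1))
matches = match_corners(img1, img2, [(20, 20), (40, 30)],
                        [(23, 25), (43, 35), (10, 50)])
```

    Here each true correspondence scores an NCC of 1.0, while unrelated random patches score near zero, so the threshold cleanly separates them.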

  15. Dynamic analysis of suspension cable based on vector form intrinsic finite element method

    NASA Astrophysics Data System (ADS)

    Qin, Jian; Qiao, Liang; Wan, Jiancheng; Jiang, Ming; Xia, Yongjun

    2017-10-01

    A vector finite element method is presented for the dynamic analysis of cable structures based on the vector form intrinsic finite element (VFIFE) method and the mechanical properties of suspension cables. Firstly, the suspension cable is discretized into elements by space points, and the mass and external forces of the cable are lumped at these points. The structural form of the cable is described by the space points at different times, and the equations of motion for the space points are established according to Newton's second law. Then, the element internal forces between the space points are derived from the flexible truss structure. Finally, the motion equations of the space points are solved by the central difference method with a reasonable time integration step. The tangential tension of the bearing rope in a test ropeway under moving concentrated loads is calculated and compared with experimental data. The results show that the computed tangential tension of the suspension cable with moving loads is consistent with the experimental data. The method has high computational precision and meets the requirements of engineering applications.
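
    The time-stepping scheme named at the end, explicit central differences on lumped point masses, can be sketched generically (a minimal version checked on a single mass-spring oscillator; the force model and step size are invented stand-ins, not the paper's ropeway model):

```python
import numpy as np

def central_difference(masses, forces, x0, v0, dt, steps):
    """Explicit central-difference integration of lumped point masses.

    forces(x, t) must return the total force on every point (same shape as x).
    """
    # back-step to get the fictitious position at t = -dt
    x_prev = x0 - v0 * dt + 0.5 * forces(x0, 0.0) / masses * dt**2
    x = x0.copy()
    history = [x0.copy()]
    for n in range(1, steps + 1):
        a = forces(x, n * dt) / masses
        x_next = 2 * x - x_prev + a * dt**2
        x_prev, x = x, x_next
        history.append(x.copy())
    return np.array(history)

# demo: unit mass on a unit spring, integrated over one full period (2*pi)
m = np.array([1.0])
hist = central_difference(m, lambda x, t: -x, np.array([1.0]),
                          np.array([0.0]), dt=0.01, steps=628)
```

    After one period the mass returns very close to its starting position, which is the usual sanity check for the scheme's accuracy at a given step size.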

  16. Job Loss and Infrastructure Job Creation Spending During the Recession

    DTIC Science & Technology

    2010-05-26

    the clay product and refractory manufacturing industry used by the construction industry to erect residential buildings. The output requirements from...a particular point in time. The total employment requirement associated with a given type of final demand (e.g., a water reuse program) is the

  17. On the Jupiter's ephemeris in the Ch'i-Yao Jang-Tsai-Chüch.

    NASA Astrophysics Data System (ADS)

    Niu, Weixing; Jiang, Xiaoyuan

    Jupiter's ephemeris preserved in the Ch'i-Yao Jang-Tsai-Chüch is interpreted. The time and position coordinates of Jupiter's first stationary point, second stationary point, first visibility in the east, and last visibility in the west, as recorded in the ephemeris, are then analysed, and the accuracy of the ephemeris is discussed. Finally, it is shown that the ephemeris was used as an astrological handbook by Japanese astrologers in 973 - 1132.

  18. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

    The traditional timing discrimination technique for laser rangefinding in remote sensing suffers from limited measurement performance and relatively large errors, and can no longer meet the demands of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement of timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. Firstly, the method moves the timing point corresponding to a given threshold forward by amplifying the received signal multiple times. Then, the timing information is sampled and the timing points are fitted using algorithms in MATLAB. Finally, the minimum timing error is calculated from the fitting function. Thereby, the timing error of the received lidar signal is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by the multiple amplification of the received signal and the parameter-fitting algorithm, and a timing accuracy of 4.63 ps is achieved.

  19. Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor

    NASA Astrophysics Data System (ADS)

    Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul

    2017-05-01

    Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation reduces the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element is larger than for a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes must be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth-layer approach can also be adopted. This technique is appropriate for fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth-layer approach is convenient for real-time applications, but the point cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
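
    The plane-to-plane propagation underlying the depth-layer approach can be sketched with the angular spectrum method (a minimal FFT version; the wavelength, sampling pitch, field, and distance below are arbitrary stand-ins, not values from the study):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z between parallel planes."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)     # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# demo: a Gaussian spot propagated 1 cm; propagating waves conserve power
n, dx, wl = 128, 10e-6, 633e-9
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r2 = (x * dx)**2 + (y * dx)**2
u0 = np.exp(-r2 / (2 * (8 * dx)**2)).astype(complex)
u1 = angular_spectrum(u0, wl, dx, z=0.01)
```

    Because the transfer function is a pure phase for propagating components, total power is conserved, which makes a convenient self-test; one such propagation per depth layer is what makes the depth-layer approach fast.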

  20. Serial MRI evaluation following arthroscopic rotator cuff repair in double-row technique.

    PubMed

    Stahnke, Katharina; Nikulka, Constanze; Diederichs, Gerd; Haneveld, Hendrik; Scheibel, Markus; Gerhardt, Christian

    2016-05-01

    So far, recurrent rotator cuff defects have been described as occurring in the early postoperative period after arthroscopic repair. The aim of this study was to evaluate the musculotendinous structure of the supraspinatus, as well as bone marrow edema and osteolysis, after arthroscopic double-row repair. Therefore, magnetic resonance (MR) imaging was performed at defined intervals up to 2 years postoperatively. Case series; Level of evidence, 3. MR imaging was performed within 7 days and at 3, 6, 12, 26, 52 and 108 weeks after surgery. All patients were operated on using an arthroscopic modified suture bridge technique. Tendon integrity, tendon retraction ["footprint coverage" (FPC)], muscular atrophy and fatty infiltration (signal intensity analysis) were measured at all time points. Furthermore, postoperative bone marrow edema and signs of osteolysis were assessed. MR images of 13 non-consecutive patients (6 female/7 male, mean age 61.05 ± 7.7 years) could be evaluated at all time points, until a mean of 108 weeks postoperatively. Five of the six patients with a recurrent defect at final follow-up displayed a time of failure between 12 and 24 months after surgery. The predominant mode of failure was medial cuff failure (4/6 cases). The initial FPC increased significantly up to the 2-year follow-up (p = 0.004). Evaluations of muscular atrophy and fatty infiltration did not differ significantly across time points (p > 0.05). Postoperative bone marrow edema disappeared completely by 6 months after surgery, whereas signs of osteolysis appeared at the 3-month follow-up and increased up to final follow-up. Recurrent defects after arthroscopic reconstruction of supraspinatus tears in the modified suture bridge technique seem to occur between 12 and 24 months after surgery. Serial MRI evaluation shows good muscle structure at all time points. Postoperative bone marrow edema disappears completely several months after surgery. Signs of osteolysis seem to appear owing to bio-absorbable anchor implantation.

  1. Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.

    PubMed

    Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
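
    The flag-map idea of keeping at most one point per occupied 3D grid cell can be imitated in batch form (here with NumPy's `np.unique` instead of the paper's incremental voxel flags; the coordinates and voxel size are made up):

```python
import numpy as np

def voxel_filter(points, voxel_size):
    """Keep the first point that lands in each voxel of a regular 3D grid,
    mimicking a flag map that marks occupied cells."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

# three points, two of them inside the same 0.1 m voxel
pts = np.array([[0.01, 0.02, 0.00],
                [0.03, 0.01, 0.05],
                [1.50, 0.00, 0.00]])
kept = voxel_filter(pts, 0.1)
```

    In the real-time system the same test is done incrementally per incoming point, so redundant returns are rejected before they ever reach the nonground PDB or the mesh.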

  2. Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface

    PubMed Central

    Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots. PMID:25101321

  3. Trajectory data privacy protection based on differential privacy mechanism

    NASA Astrophysics Data System (ADS)

    Gu, Ke; Yang, Lihao; Liu, Yongzhi; Liao, Niandong

    2018-05-01

    In this paper, we propose a trajectory data privacy protection scheme based on a differential privacy mechanism. In the proposed scheme, the algorithm first selects the points to be protected from the user's trajectory data; secondly, it forms a polygon from each protected point and the adjacent, frequently accessed points selected from the accessing-point database, and calculates the polygon centroid; finally, noise is added to the polygon centroids by the differential privacy method, the noisy centroids replace the protected points, and the algorithm constructs and issues the new trajectory data. The experiments show that the proposed algorithms run quickly, the privacy protection of the scheme is effective, and the data usability of the scheme is high.
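
    The final perturbation step can be sketched with the standard Laplace mechanism (a toy version; the anchor points, epsilon, and sensitivity are invented, and the polygon construction here only approximates the scheme described in the paper):

```python
import numpy as np

def laplace_perturb(point, epsilon, sensitivity, rng):
    """Laplace mechanism: noise with scale sensitivity/epsilon per coordinate."""
    return point + rng.laplace(0.0, sensitivity / epsilon, size=point.shape)

def protect_trajectory(traj, protected_idx, anchors, epsilon, sensitivity, rng):
    """Replace each protected point by the noisy centroid of the polygon it
    forms with nearby frequently accessed anchor points."""
    out = traj.copy()
    for i in protected_idx:
        polygon = np.vstack([traj[i], anchors])
        out[i] = laplace_perturb(polygon.mean(axis=0), epsilon, sensitivity, rng)
    return out

rng = np.random.default_rng(3)
traj = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5], [3.0, 2.0]])
anchors = np.array([[1.2, 0.8], [0.9, 1.3]])   # invented "frequent" points
released = protect_trajectory(traj, [1], anchors,
                              epsilon=0.5, sensitivity=0.1, rng=rng)
```

    Only the protected point moves; the rest of the trajectory is released unchanged, which is what preserves data usability.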

  4. Experimental results for the rapid determination of the freezing point of fuels

    NASA Technical Reports Server (NTRS)

    Mathiprakasam, B.

    1984-01-01

    Two methods for the rapid determination of the freezing point of fuels were investigated: an optical method, which detected the change in light transmission upon the disappearance of solid particles in the melted fuel; and a differential thermal analysis (DTA) method, which sensed the latent heat of fusion. A laboratory apparatus was fabricated to test the two methods. Cooling was provided by thermoelectric modules using an ice-water bath as a heat sink. The DTA method was later modified to eliminate the reference fuel. The data from the sample were digitized, and a point of inflection, which corresponds to the ASTM D-2386 freezing point (final melting point), was identified from the derivative. The apparatus was modified to cool the fuel to -60 C, and controls were added for maintaining a constant cooling rate, rewarming rate, and hold time at minimum temperature. A parametric series of tests was run for twelve fuels with freezing points from -10 C to -50 C, varying the cooling rate, rewarming rate, and hold time. Based on the results, an optimum test procedure was established. The results showed good agreement with ASTM D-2386 freezing points and differential scanning calorimetry results.
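
    Picking the inflection point out of a digitized rewarming curve amounts to locating the extremum of the smoothed first derivative; a rough sketch on a synthetic sigmoid trace (the curve shape and smoothing width are invented, not the apparatus data):

```python
import numpy as np

def inflection_temp(temps, signal, k=5):
    """Temperature at the extremum of the smoothed first derivative,
    taken here as the analogue of the DTA final-melting point."""
    d = np.gradient(signal, temps)
    d = np.convolve(d, np.ones(k) / k, mode="same")  # moving-average smoothing
    return temps[int(np.argmax(np.abs(d)))]

# synthetic rewarming trace whose steepest change sits at -40 C
temps = np.linspace(-60.0, -10.0, 501)
trace = 1.0 / (1.0 + np.exp(-2.0 * (temps + 40.0)))
t_freeze = inflection_temp(temps, trace)
```

    On this synthetic trace the detected inflection lands at -40 C, the temperature where the sigmoid is steepest.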

  5. Time-frequency approach to underdetermined blind source separation.

    PubMed

    Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong

    2012-02-01

    This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on the Wigner-Ville distribution (WVD) and the Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, in which the negative values of the auto WVDs of the sources are fully taken into account. Then, after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be determined exactly with the proposed approach, no matter how many active sources there are, as long as N ≤ 2M-1. Further discussion of the extraction of auto-term TF points is provided, and finally numerical simulation results are presented to show the superiority of the proposed algorithm in comparison with existing ones.

  6. Forecast Method of Solar Irradiance with Just-In-Time Modeling

    NASA Astrophysics Data System (ADS)

    Suzuki, Takanobu; Goto, Yusuke; Terazono, Takahiro; Wakao, Shinji; Oozeki, Takashi

    PV power output mainly depends on the solar irradiance, which is affected by various meteorological factors, so predicting future solar irradiance is required for the efficient operation of PV systems. In this paper, we develop a novel approach to solar irradiance forecasting in which a black-box model (JIT modeling) is combined with a physical model (GPV data). We investigate the predictive accuracy of solar irradiance over the wide control area of each electric power company by utilizing measured data from the 44 observation points throughout Japan offered by the JMA and the 64 points around Kanto offered by NEDO. Finally, we propose a method for applying the solar irradiance forecast to points for which compiling a database is difficult, and we consider the influence of different GPV default times on solar irradiance prediction.

  7. Fundamentals of continuum mechanics – classical approaches and new trends

    NASA Astrophysics Data System (ADS)

    Altenbach, H.

    2018-04-01

    Continuum mechanics is a branch of mechanics that deals with the analysis of the mechanical behavior of materials modeled as a continuous manifold. Continuum mechanics models mostly begin by introducing a three-dimensional Euclidean space. The points within this region are defined as material points with prescribed properties. Each material point is characterized by a position vector which is continuous in time. Thus, the body deforms in a way which is realistic: globally invertible at all times, so that the body cannot intersect itself, and orientation-preserving, since transformations which produce mirror reflections are not possible in nature. For the mathematical formulation of the model, the motion is also assumed to be twice continuously differentiable, so that differential equations describing it may be formulated. Finally, the kinematical relations, the balance equations, the constitutive and evolution equations, and the boundary and/or initial conditions should be defined. If the physical fields are non-smooth, jump conditions must be taken into account. The basic equations of continuum mechanics are presented following a short introduction. Additionally, some examples of solid deformable continua are discussed within the presentation. Finally, advanced models of continuum mechanics are introduced. The paper is dedicated to Alexander Manzhirov's 60th birthday.
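
    The regularity requirements above are usually stated compactly in terms of the motion and its deformation gradient (a standard textbook formulation, not taken from the abstract itself):

```latex
\mathbf{x} = \boldsymbol{\chi}(\mathbf{X}, t), \qquad
\mathbf{F} = \frac{\partial \boldsymbol{\chi}}{\partial \mathbf{X}}, \qquad
J = \det \mathbf{F} > 0 ,
```

    where invertibility of the motion \(\boldsymbol{\chi}(\cdot, t)\) at every instant rules out self-intersection, and the condition \(J > 0\) excludes mirror reflections while guaranteeing that local volumes stay positive.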

  8. NMR diffusion simulation based on conditional random walk.

    PubMed

    Gudbjartsson, H; Patz, S

    1995-01-01

    The authors introduce a new, very fast simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method and of the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR diffusion simulation methods, such as the finite difference (FD) method, the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step, the computation time can therefore be reduced. Finally, the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
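
    For contrast, the conventional MC approach that the paper extends can be sketched as follows: each spin performs a random walk and accumulates phase in the gradient, and the signal is the ensemble average of the phase factors. All parameters below are invented; for free diffusion the result can be checked against the exp(-D(γg)²t³/3) attenuation law:

```python
import numpy as np

def mc_diffusion_signal(D, gamma_g, t_total, n_steps, n_spins, rng):
    """Conventional Monte Carlo NMR diffusion signal in a constant gradient.

    Each spin performs a Gaussian random walk; its phase accumulates as the
    time integral of gamma*g*x(t). Returns |<exp(i*phase)>|.
    """
    dt = t_total / n_steps
    x = np.zeros(n_spins)
    phase = np.zeros(n_spins)
    for _ in range(n_steps):
        x += rng.normal(0.0, np.sqrt(2.0 * D * dt), n_spins)
        phase += gamma_g * x * dt
    return float(np.abs(np.mean(np.exp(1j * phase))))

rng = np.random.default_rng(4)
sig = mc_diffusion_signal(D=1e-3, gamma_g=10.0, t_total=1.0,
                          n_steps=200, n_spins=20000, rng=rng)
expected = np.exp(-1e-3 * 10.0**2 * 1.0**3 / 3.0)  # free-diffusion law
```

    The discretization error of this scheme shrinks only as the step count grows, which is exactly the time-step dependence that the authors' method removes.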

  9. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  10. Human action recognition based on spatial-temporal descriptors using key poses

    NASA Astrophysics Data System (ADS)

    Hu, Shuo; Chen, Yuxin; Wang, Huaibao; Zuo, Yaqing

    2014-11-01

    Human action recognition is an important area of pattern recognition today due to its direct applications in various settings like surveillance and virtual reality. In this paper, a simple and effective human action recognition method is presented based on the key poses of the human silhouette and a spatio-temporal feature. Firstly, the contour points of the human silhouette are extracted, the key poses are learned by K-means clustering based on the Euclidean distance between each contour point and the centre point of the human silhouette, and the type of each action is labeled for later matching. Secondly, we obtain the trajectories of the centre point of each frame and create a spatio-temporal feature value, denoted W, to describe the motion direction and speed of each action; the value W contains the information of location and temporal order of each point on the trajectories. Finally, the matching stage is performed by comparing the key poses and W between training sequences and test sequences; the nearest-neighbour sequence is found and its label supplies the final result. Experiments on the publicly available Weizmann dataset show the proposed method can improve accuracy by distinguishing ambiguous poses and increase suitability for real-time applications by reducing the computational cost.
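
    The key-pose learning step can be sketched as K-means over centre-to-contour distance descriptors (a toy NumPy version; the descriptor format, the farthest-point seeding, and the fabricated data are all assumptions, and a real system would compute descriptors from segmented silhouettes):

```python
import numpy as np

def pose_descriptor(contour, n_bins=36):
    """Centre-to-contour distances resampled to a fixed length and
    normalized, so silhouettes of different sizes are comparable."""
    contour = np.asarray(contour, float)
    d = np.linalg.norm(contour - contour.mean(axis=0), axis=1)
    idx = np.linspace(0, len(d) - 1, n_bins).astype(int)
    return d[idx] / d.max()

def learn_key_poses(descriptors, k, iters=50):
    """Plain K-means with deterministic farthest-point initialization."""
    X = np.asarray(descriptors, float)
    centres = [X[0]]
    for _ in range(k - 1):                       # farthest-point seeding
        dist = np.min([((X - c)**2).sum(1) for c in centres], axis=0)
        centres.append(X[int(np.argmax(dist))])
    centres = np.array(centres)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres[None])**2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres, labels

# fabricated descriptors: five "compact" poses and five "extended" poses
rng = np.random.default_rng(5)
compact_poses = 0.2 + 0.01 * rng.standard_normal((5, 36))
extended_poses = 0.8 + 0.01 * rng.standard_normal((5, 36))
centres, labels = learn_key_poses(np.vstack([compact_poses, extended_poses]), k=2)
```

    The cluster centres play the role of the learned key poses; at recognition time each frame's descriptor would be assigned to its nearest centre.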

  11. Dynamic Control of Particle Deposition in Evaporating Droplets by an External Point Source of Vapor.

    PubMed

    Malinowski, Robert; Volpe, Giovanni; Parkin, Ivan P; Volpe, Giorgio

    2018-02-01

    The deposition of particles on a surface by an evaporating sessile droplet is important for phenomena as diverse as printing, thin-film deposition, and self-assembly. The shape of the final deposit depends on the flows within the droplet during evaporation. These flows are typically determined at the onset of the process by the intrinsic physical, chemical, and geometrical properties of the droplet and its environment. Here, we demonstrate deterministic emergence and real-time control of Marangoni flows within the evaporating droplet by an external point source of vapor. By varying the source location, we can modulate these flows in space and time to pattern colloids on surfaces in a controllable manner.

  12. Plastic catalytic pyrolysis to fuels as tertiary polymer recycling method: effect of process conditions.

    PubMed

    Gulab, Hussain; Jan, Muhammad Rasul; Shah, Jasmin; Manos, George

    2010-01-01

    This paper presents results regarding the effect of various process conditions on the performance of a zeolite catalyst in the pyrolysis of high density polyethylene. The results show that polymer catalytic degradation can be operated at relatively low catalyst content, reducing the cost of a potential industrial process. As the polymer to catalyst mass ratio increases, the system becomes less active, but high temperatures compensate for this activity loss, resulting in high conversion values at usual batch times and even higher yields of liquid products due to less overcracking. The results also show that a high flow rate of carrier gas causes evaporation of liquid products, falsifying results, as was obvious from liquid yield results at different reaction times as well as the corresponding boiling point distributions. Furthermore, results are presented regarding temperature effects on liquid selectivity. Similar values resulted from different final reactor temperatures, which is attributed to the batch operation of the experimental equipment: polymer and catalyst both undergo the same temperature profile, identical up to a specific time regardless of the final temperature, and this common temperature step evidently determines the selectivity to specific products. However, selectivity to specific products is affected by the temperature, as shown in the corresponding boiling point distributions, with higher temperatures showing an increased selectivity to middle boiling point components (C(8)-C(9)) and lower temperatures an increased selectivity to heavy components (C(14)-C(18)).

  13. Thermalization of Wightman functions in AdS/CFT and quasinormal modes

    NASA Astrophysics Data System (ADS)

    Keränen, Ville; Kleinert, Philipp

    2016-07-01

    We study the time evolution of Wightman two-point functions of scalar fields in AdS3 -Vaidya, a spacetime undergoing gravitational collapse. In the boundary field theory, the collapse corresponds to a quench process where the dual 1 +1 -dimensional CFT is taken out of equilibrium and subsequently thermalizes. From the two-point function, we extract an effective occupation number in the boundary theory and study how it approaches the thermal Bose-Einstein distribution. We find that the Wightman functions, as well as the effective occupation numbers, thermalize with a rate set by the lowest quasinormal mode of the scalar field in the BTZ black hole background. We give a heuristic argument for the quasinormal decay, which is expected to apply to more general Vaidya spacetimes also in higher dimensions. This suggests a unified picture in which thermalization times of one- and two-point functions are determined by the lowest quasinormal mode. Finally, we study how these results compare to previous calculations of two-point functions based on the geodesic approximation.

  14. Dynamic Analysis of a Reaction-Diffusion Rumor Propagation Model

    NASA Astrophysics Data System (ADS)

    Zhao, Hongyong; Zhu, Linhe

    2016-06-01

    The rapid development of the Internet, especially the emergence of the social networks, leads rumor propagation into a new media era. Rumor propagation in social networks has brought new challenges to network security and social stability. This paper, based on partial differential equations (PDEs), proposes a new SIS rumor propagation model by considering the effect of the communication between the different rumor infected users on rumor propagation. The stabilities of a nonrumor equilibrium point and a rumor-spreading equilibrium point are discussed by linearization technique and the upper and lower solutions method, and the existence of a traveling wave solution is established by the cross-iteration scheme accompanied by the technique of upper and lower solutions and Schauder’s fixed point theorem. Furthermore, we add the time delay to rumor propagation and deduce the conditions of Hopf bifurcation and stability switches for the rumor-spreading equilibrium point by taking the time delay as the bifurcation parameter. Finally, numerical simulations are performed to illustrate the theoretical results.
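    The behaviour of the model class discussed above can be illustrated numerically. The sketch below integrates a generic 1-D SIS-type reaction-diffusion equation, u_t = d·u_xx + β·u(1-u) - γ·u, with an explicit finite-difference scheme; this is an illustration of the model family, not the paper's exact system, and all parameter values are made up.

```python
import numpy as np

# Generic 1-D SIS reaction-diffusion sketch: u(x,t) is the density of
# rumor-spreading users. Explicit Euler in time, central differences in
# space, zero-flux (Neumann) boundaries. Parameters are illustrative.
d, beta, gamma = 0.1, 0.6, 0.2
nx, dx, dt, steps = 100, 0.1, 0.004, 5000   # dt < dx**2/(2*d) for stability

u = np.zeros(nx)
u[:5] = 0.1                                  # small rumor seed at one edge
for _ in range(steps):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2       # ghost-node Neumann boundary
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
    u = u + dt * (d * lap + beta * u * (1 - u) - gamma * u)

endemic = 1 - gamma / beta   # rumor-spreading equilibrium when beta > gamma
print(u[0], endemic)
```

    When β > γ the seeded region converges to the rumor-spreading equilibrium and the rumor invades the domain as a travelling front, mirroring the equilibrium and travelling-wave analysis in the abstract.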

  15. Precise Point Positioning technique for short and long baselines time transfer

    NASA Astrophysics Data System (ADS)

    Lejba, Pawel; Nawrocki, Jerzy; Lemanski, Dariusz; Foks-Ryznar, Anna; Nogas, Pawel; Dunst, Piotr

    2013-04-01

    In this work the determination of clock parameters of several timing receivers, TTS-4 (AOS), ASHTECH Z-XII3T (OP, ORB, PTB, USNO) and SEPTENTRIO POLARX4TR (ORB, since February 11, 2012), by use of the Precise Point Positioning (PPP) technique is presented. The clock parameters were determined for several time links based on the data delivered by the time and frequency laboratories mentioned above. The computations cover the period from January 1 to December 31, 2012 and were performed in two modes, with 7-day and one-month solutions for all links. All RINEX data files, which include phase and code GPS data, were recorded in 30-second intervals. All calculations were performed by means of Natural Resources Canada's GPS Precise Point Positioning (GPS-PPP) software, based on the high-quality precise satellite coordinates and satellite clocks delivered by IGS as final products. The independent PPP technique is a powerful and simple method that allows for better control of antenna positions at AOS and for verification of other time transfer techniques such as GPS CV, GLONASS CV and TWSTFT. The PPP technique is also a very good alternative for calibration of the PL-AOS glass fiber link currently realized by AOS. At present, PPP is one of the main time transfer methods used at AOS, which considerably improves and strengthens the quality of the Polish time scales UTC(AOS), UTC(PL) and TA(PL). KEYWORDS: Precise Point Positioning, time transfer, IGS products, GNSS, time scales.

  16. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining

    PubMed Central

    Truccolo, Wilson

    2017-01-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete-time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics (“order parameters”) inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. PMID:28336305
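    The discrete-time nonlinear Hawkes process at the core of the PP-GLM framework can be simulated in a few lines. The sketch below is a generic toy model, not the paper's fitted model: two units, an exponential link, exponentially filtered spike histories, and all parameter values invented for illustration.

```python
import numpy as np

# Toy discrete-time multivariate nonlinear Hawkes process in PP-GLM form:
# conditional intensity lambda_i[t] = exp(b_i + sum_j W[i, j] * s_j[t]),
# where s_j is an exponentially filtered spike history of unit j.
rng = np.random.default_rng(1)
T, dt = 20000, 0.001                        # bins and bin width (s)
b = np.array([np.log(5.0), np.log(5.0)])    # baseline log-rates (5 spikes/s)
W = np.array([[0.5, 1.5],                   # made-up coupling weights:
              [1.5, 0.5]])                  # mutual excitation dominates
tau = 0.02                                  # history filter time constant (s)
decay = np.exp(-dt / tau)

spikes = np.zeros((T, 2), dtype=int)
s = np.zeros(2)                             # filtered spike histories
for t in range(T):
    lam = np.exp(b + W @ s)                 # intensities (spikes/s)
    p = 1 - np.exp(-lam * dt)               # spike probability per bin
    spikes[t] = rng.random(2) < p
    s = decay * s + spikes[t]

rates = spikes.mean(axis=0) / dt
print(rates)   # typically above baseline because of mutual excitation
```

    Fitting such a model to recorded spike trains would maximize the corresponding point-process likelihood over b and W; the simulation direction shown here is just the generative side of the same PP-GLM.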

  17. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining.

    PubMed

    Truccolo, Wilson

    2016-11-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete-time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. Published by Elsevier Ltd.

  18. A Time-Domain CMOS Oscillator-Based Thermostat with Digital Set-Point Programming

    PubMed Central

    Chen, Chun-Chi; Lin, Shih-Hao

    2013-01-01

    This paper presents a time-domain CMOS oscillator-based thermostat with digital set-point programming [without a digital-to-analog converter (DAC) or external resistor] to achieve on-chip thermal management of modern VLSI systems. A time-domain delay-line-based thermostat with multiplexers (MUXs) was used to substantially reduce the power consumption and chip size, and can benefit from the performance enhancement due to the scaling down of fabrication processes. For further cost reduction and accuracy enhancement, this paper proposes a thermostat using two oscillators that are suitable for time-domain curvature compensation instead of longer linear delay lines. The final time comparison was achieved using a time comparator with a built-in custom hysteresis to generate the corresponding temperature alarm and control. The chip size of the circuit was reduced to 0.12 mm² in a 0.35-μm TSMC CMOS process. The thermostat operates from 0 to 90 °C, and achieved a fine resolution better than 0.05 °C and an improved inaccuracy of ±0.6 °C after two-point calibration for eight packaged chips. The power consumption was 30 μW at a sample rate of 10 samples/s. PMID:23385403

  19. A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising

    NASA Astrophysics Data System (ADS)

    Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua

    2018-04-01

    In order to remove noise from three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. Firstly, large-scale outliers are removed using the statistics of the r-radius neighboring points. Then, the algorithm estimates the curvature of the point cloud data by using a conicoid (paraboloid) fitting method and calculates the curvature feature value. Finally, the proposed clustering algorithm is applied to calculate the weighted cluster centres, which are regarded as the new points. The experimental results show that this approach handles noise of different scales and intensities in point clouds with high precision while preserving sharp features, and is robust to different noise models.
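    The first stage, removing large-scale outliers by r-radius neighbour statistics, can be sketched as follows. The radius, the neighbour threshold, and the synthetic cloud are illustrative assumptions; brute-force distances are used for clarity where a k-d tree would be used on real point clouds.

```python
import numpy as np

# Outlier removal sketch: points with fewer than `min_pts` neighbours
# within radius r are discarded. Thresholds are illustrative.
def remove_outliers(points, r=0.3, min_pts=3):
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    neighbours = (dist < r).sum(axis=1) - 1     # exclude the point itself
    return points[neighbours >= min_pts]

rng = np.random.default_rng(0)
surface = rng.normal(scale=0.05, size=(200, 3))  # dense "surface" patch
outliers = rng.uniform(-5, 5, size=(10, 3))      # sparse far-away noise
cloud = np.vstack([surface, outliers])
clean = remove_outliers(cloud)
print(len(cloud), "->", len(clean))
```

    The dense patch survives while isolated points are dropped; the curvature-weighted clustering would then operate on the cleaned cloud.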

  20. Nasal symptoms following endoscopic transsphenoidal pituitary surgery: assessment using the General Nasal Patient Inventory.

    PubMed

    Wang, Yi Yuen; Srirathan, Vinothan; Tirr, Erica; Kearney, Tara; Gnanalingham, Kanna K

    2011-04-01

    The endoscopic approach for pituitary tumors is a recent innovation and is said to reduce the nasal trauma associated with transnasal transsphenoidal surgery. The authors assessed the temporal changes in the rhinological symptoms following endoscopic transsphenoidal surgery for pituitary lesions, using the General Nasal Patient Inventory (GNPI). The GNPI was administered to 88 consecutive patients undergoing endoscopic transsphenoidal surgery at 3 time points (presurgery, 3-6 months postsurgery, and at final follow-up). The total GNPI score and the scores for the individual GNPI questions were calculated and differences between groups were assessed once before surgery, several months after surgery, and at final follow-up. Of a maximum possible score of 135, the mean GNPI score at 3-6 months postsurgery was only 12.9 ± 12 and was not significantly different from the preoperative score (10.4 ± 13) or final follow-up score (10.3 ± 10). Patients with functioning tumors had higher GNPI scores than those with nonfunctioning tumors for each of these time points (p < 0.05). Individually, a mild increase in symptom severity was seen for symptoms attributable to the nasal trauma of surgery, with partial recovery (nasal sores and bleeding) or complete recovery (nasal blockage, painful sinuses, and unpleasant nasal smell) by final follow-up (p < 0.05). Progressive improvements in symptom severity were seen for symptoms more attributable to tumor mass preoperatively (for example, headaches and painkiller use [p < 0.05]). In total, by final follow-up 8 patients (9%) required further treatment or advice for ongoing nasal symptoms. Endoscopic transsphenoidal surgery is a well-tolerated minimally invasive procedure for pituitary fossa lesions. Overall patient-assessed nasal symptoms do not change, but some individual symptoms may show a mild worsening or overall improvement.

  1. The Influence of Academic Momentum and Associate Degree Credit Requirements on Community College Student Degree Completion

    ERIC Educational Resources Information Center

    Knox, Daniel

    2017-01-01

    This study addresses gaps in the theoretical and policy literature by examining the relationship between associate degree program credit requirements and four student outcomes: associate degree attainment, time to degree, final associate degree grade point average, and persistence. Using student unit record data, a longitudinal quantitative study…

  2. 14 CFR 1216.312 - Timing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... EPA notice of a NASA draft EIS. (2) Thirty days after publication of an EPA notice of a NASA final EIS. (b) When necessary to comply with other specific statutory requirements, NASA shall consult with and... the proposed action (the major decision point of § 1216.304(b)) shall be made by NASA until the later...

  3. 14 CFR 1216.312 - Timing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... EPA notice of a NASA draft EIS. (2) Thirty days after publication of an EPA notice of a NASA final EIS. (b) When necessary to comply with other specific statutory requirements, NASA shall consult with and... the proposed action (the major decision point of § 1216.304(b)) shall be made by NASA until the later...

  4. 14 CFR 1216.312 - Timing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... EPA notice of a NASA draft EIS. (2) Thirty days after publication of an EPA notice of a NASA final EIS. (b) When necessary to comply with other specific statutory requirements, NASA shall consult with and... the proposed action (the major decision point of § 1216.304(b)) shall be made by NASA until the later...

  5. Differential games.

    NASA Technical Reports Server (NTRS)

    Varaiya, P. P.

    1972-01-01

    General discussion of the theory of differential games with two players and zero sum. Games starting at a fixed initial state and ending at a fixed final time are analyzed. Strategies for the games are defined. The existence of saddle values and saddle points is considered. A stochastic version of a differential game is used to examine the synthesis problem.

  6. Using phenomenological models for forecasting the 2015 Ebola challenge.

    PubMed

    Pell, Bruce; Kuang, Yang; Viboud, Cecile; Chowell, Gerardo

    2018-03-01

    The rising number of novel pathogens threatening the human population has motivated the application of mathematical modeling for forecasting the trajectory and size of epidemics. We summarize the real-time forecasting results of the logistic equation during the 2015 Ebola challenge, which focused on predicting synthetic data derived from a detailed individual-based model of Ebola transmission dynamics and control. We also carry out a post-challenge comparison of two simple phenomenological models. In particular, we systematically compare the logistic growth model and a recently introduced generalized Richards model (GRM) that captures a range of early epidemic growth profiles, from sub-exponential to exponential growth. Specifically, we assess the performance of each model in estimating the reproduction number, generating short-term forecasts of the epidemic trajectory, and predicting the final epidemic size. During the challenge, the logistic equation consistently underestimated the final epidemic size, peak timing and the number of cases at peak with an average mean absolute percentage error (MAPE) of 0.49, 0.36 and 0.40, respectively. Post-challenge, the GRM, which has the flexibility to reproduce a range of epidemic growth profiles from early sub-exponential to exponential growth dynamics, outperformed the logistic growth model in ascertaining the final epidemic size as more incidence data were made available, while the logistic model underestimated the final epidemic size even with an increasing amount of data from the evolving epidemic. Incidence forecasts provided by the generalized Richards model performed better across all scenarios and time points than the logistic growth model, with mean RMS decreasing from 78.00 (logistic) to 60.80 (GRM). Both models provided reasonable predictions of the effective reproduction number, but the GRM slightly outperformed the logistic growth model with a MAPE of 0.08 compared to 0.10, averaged across all scenarios and time points. Our findings further support the consideration of transmission models that incorporate flexible early epidemic growth profiles in the forecasting toolkit. Such models are particularly useful for quickly evaluating a developing infectious disease outbreak using only case incidence time series from the early phase of the outbreak. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
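    Fitting the logistic growth model to cumulative case counts to forecast the final epidemic size can be sketched as below. This is a minimal illustration on synthetic, noise-free data, not the challenge pipeline: it exploits the fact that for C(t) = K / (1 + exp(-r (t - t0))), the quantity log(K/C - 1) is linear in t, so K can be grid-searched with an inner linear least-squares fit.

```python
import numpy as np

# Grid-search K (final size), solve the inner linear fit for r by least
# squares. Data and parameter values below are synthetic/illustrative.
def fit_logistic(t, C):
    best = None
    for K in np.linspace(C.max() * 1.01, C.max() * 3, 400):
        y = np.log(K / C - 1)                 # linear in t if K is right
        slope, intercept = np.polyfit(t, y, 1)
        resid = ((y - (slope * t + intercept)) ** 2).sum()
        if best is None or resid < best[0]:
            best = (resid, K, -slope)         # r = -slope
    _, K, r = best
    return K, r

t = np.arange(1, 40, dtype=float)
true_K, true_r, t0 = 1000.0, 0.25, 20.0
C = true_K / (1 + np.exp(-true_r * (t - t0)))  # noise-free cumulative curve
K_hat, r_hat = fit_logistic(t, C)
print(K_hat, r_hat)
```

    With noisy or truncated early-phase data the estimated K is far less stable, which is exactly the underestimation behaviour of the logistic model reported in the abstract.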

  7. Can the displacement of a conservatively treated distal radius fracture be predicted at the beginning of treatment?

    PubMed Central

    Einsiedel, T.; Freund, W.; Sander, S.; Trnavac, S.; Gebhard, F.

    2008-01-01

    The aim of this study was to investigate whether the final displacement of conservatively treated distal radius fractures can be predicted after primary reduction. We analysed the radiographic documents of 311 patients with a conservatively treated distal radius fracture at the time of injury, after reduction and after bony consolidation. We measured the dorsal angulation (DA), the radial angle (RA) and the radial shortening (RS) at each time point. The parameters were analysed separately for metaphyseally “stable” (A2, C1) and “unstable” (A3, C2, C3) fractures, according to the AO classification system. Spearman’s rank correlations and regression functions were determined for the analysis. The highest correlations were found for the DA between the time points ‘reduction’ and ‘complete healing’ (r = 0.75) and for the RA between the time points ‘reduction’ and ‘complete healing’ (r = 0.80). The DA and the RA after complete healing can be predicted from the regression functions. PMID:18504577
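    The core of the analysis, Spearman rank correlation between a parameter at reduction and at complete healing plus a regression for prediction, can be sketched as follows. The data here are synthetic and the variable names (dorsal angulation in degrees) are illustrative assumptions; the paper's r-values came from its 311 patients.

```python
import numpy as np

# Spearman's rho via ranking (ties ignored, fine for continuous
# measurements), plus the linear regression used for prediction.
def spearman_rho(x, y):
    def rank(a):
        r = np.empty(len(a))
        r[np.argsort(a)] = np.arange(1, len(a) + 1)
        return r
    return np.corrcoef(rank(x), rank(y))[0, 1]

rng = np.random.default_rng(42)
da_reduction = rng.normal(5, 4, size=100)                 # degrees, made up
da_healing = 0.8 * da_reduction + rng.normal(0, 2, 100)   # correlated outcome

rho = spearman_rho(da_reduction, da_healing)
slope, intercept = np.polyfit(da_reduction, da_healing, 1)
predicted = slope * da_reduction + intercept
print(round(rho, 2), round(slope, 2))
```

    The fitted regression line plays the role of the paper's regression functions for predicting the angle after complete healing from the post-reduction radiograph.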

  8. Migrational Characteristics of Columbia Basin Salmon and Steelhead Trout, Part II, Smolt Monitoring Program, 1984 Annual Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McConnaha, Willis E.

    1985-07-01

    The report describes the travel time of marked yearling and sub-yearling chinook salmon (Oncorhynchus tshawytscha), sockeye salmon (O. nerka), and steelhead trout (Salmo gairdneri) between points within the system, and reports the arrival timing and duration of the migrations for these species as well as coho salmon (O. kisutch). A final listing of 1984 hatchery releases is also included. 8 refs., 26 figs., 20 tabs.

  9. Iterative algorithms for computing the feedback Nash equilibrium point for positive systems

    NASA Astrophysics Data System (ADS)

    Ivanov, I.; Imsland, Lars; Bogdanova, B.

    2017-03-01

    The paper studies N-player linear quadratic differential games on an infinite time horizon with deterministic feedback information structure. It introduces two iterative methods (the Newton method as well as its accelerated modification) in order to compute the stabilising solution of a set of generalised algebraic Riccati equations. The latter is related to the Nash equilibrium point of the considered game model. Moreover, we derive the sufficient conditions for convergence of the proposed methods. Finally, we discuss two numerical examples so as to illustrate the performance of both of the algorithms.
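    The flavour of the Newton iteration can be shown on a simplified stand-in: the paper solves coupled game Riccati equations, whereas the sketch below runs the classical Newton-Kleinman iteration for a single-player algebraic Riccati equation A'P + PA - P B R⁻¹ B' P + Q = 0, where each Newton step reduces to a linear Lyapunov equation (solved here by Kronecker vectorisation). The system matrices are made up.

```python
import numpy as np

def solve_lyapunov(Ac, M):
    """Solve Ac' P + P Ac = -M via the vectorisation (Kronecker) trick."""
    n = Ac.shape[0]
    I = np.eye(n)
    lhs = np.kron(Ac.T, I) + np.kron(I, Ac.T)
    return np.linalg.solve(lhs, -M.reshape(-1)).reshape(n, n)

def newton_kleinman(A, B, Q, R, K0, iters=20):
    """Newton iteration for the single-player ARE; K0 must stabilise A - B K0."""
    K = K0
    P = None
    for _ in range(iters):
        Ac = A - B @ K
        P = solve_lyapunov(Ac, Q + K.T @ R @ K)   # linear Newton step
        K = np.linalg.solve(R, B.T @ P)           # updated feedback gain
    return P, K

A = np.array([[0.0, 1.0], [-2.0, -1.0]])   # already stable, so K0 = 0 works
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P, K = newton_kleinman(A, B, Q, R, np.zeros((1, 2)))

res = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
print(np.linalg.norm(res))   # Riccati residual after convergence
```

    In the game setting each player's Riccati equation is coupled to the others' gains, so the analogous Newton step is applied to the whole set of equations simultaneously.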

  10. Chaos and complexity by design

    DOE PAGES

    Roberts, Daniel A.; Yoshida, Beni

    2017-04-20

    We study the relationship between quantum chaos and pseudorandomness by developing probes of unitary design. A natural probe of randomness is the “frame potential,” which is minimized by unitary k-designs and measures the 2-norm distance between the Haar random unitary ensemble and another ensemble. A natural probe of quantum chaos is out-of-time-order (OTO) four-point correlation functions. We also show that the norm squared of a generalization of out-of-time-order 2k-point correlators is proportional to the kth frame potential, providing a quantitative connection between chaos and pseudorandomness. In addition, we prove that these 2k-point correlators for Pauli operators completely determine the k-fold channel of an ensemble of unitary operators. Finally, we use a counting argument to obtain a lower bound on the quantum circuit complexity in terms of the frame potential. This provides a direct link between chaos, complexity, and randomness.

  11. Another Look at the Great Area-Coverage Controversy of the 1950's

    NASA Astrophysics Data System (ADS)

    Blanchard, Walter

    2005-09-01

    In the immediate aftermath of WW2 there sprang up an international argument over the relative merits for aerial navigation of area-coverage radio navaids versus point-source systems. The United States was in favour of point-source, whereas the UK proposed area-coverage systems, which had been successfully demonstrated under very adverse conditions during the war. The argument rumbled on for many years and was not finally settled until the ICAO Montreal Conference of 1959 decided for point-source. Since then, VOR/DME/ADF/ILS have been the standard aviation radio navaids and there seems little likelihood of any change in the near future, GNSS notwithstanding, if one discounts the phasing-out of ADF. It now seems sufficiently in the past to perhaps allow a dispassionate evaluation of the technical arguments used at the time; the political ones can be left to another place and time.

  12. Research on fully distributed optical fiber sensing security system localization algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen

    2013-12-01

    A new fully distributed optical fiber sensing and location technology based on Mach-Zehnder interferometers is studied. For this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of multiple data groups separately, it not only exploits the advantages of frequency analysis to determine the most effective data group more accurately, but also meets the requirements of a real-time monitoring system. Supplemented with a short-time energy calculation on the grouped signal, the most effective data group can be quickly picked out. Finally, the accurate location of the climbing point is obtained through a cross-correlation localization algorithm. The experimental results show that the proposed algorithm can accurately locate the climbing point while effectively filtering out interference noise from non-climbing behavior.
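    The two detection features named above, short-time average zero-crossing rate (ZCR) and short-time energy, can be computed per frame as sketched below. The synthetic signal, frame length, and thresholds are illustrative assumptions; a climbing disturbance shows up as frames whose ZCR and energy deviate from the quiet baseline.

```python
import numpy as np

# Frame-wise short-time ZCR and energy over a sensor signal.
def frame_features(x, frame_len):
    frames = x[: len(x) // frame_len * frame_len].reshape(-1, frame_len)
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)
    energy = (frames ** 2).mean(axis=1)
    return zcr, energy

fs = 1000
t = np.arange(0, 2, 1 / fs)
signal = 0.05 * np.sin(2 * np.pi * 5 * t)            # quiet low-frequency drift
rng = np.random.default_rng(0)
signal[1200:1400] += rng.normal(scale=0.5, size=200)  # high-frequency "event"

zcr, energy = frame_features(signal, frame_len=100)
event_frames = np.where((zcr > 3 * np.median(zcr)) &
                        (energy > 3 * np.median(energy)))[0]
print(event_frames)
```

    The flagged frames cover the disturbed samples; in the full system, the cross-correlation between the two interferometer outputs on the selected data group then yields the location along the fiber.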

  13. Chaos control in delayed phase space constructed by the Takens embedding theory

    NASA Astrophysics Data System (ADS)

    Hajiloo, R.; Salarieh, H.; Alasty, A.

    2018-01-01

    In this paper, the problem of chaos control in discrete-time chaotic systems with unknown governing equations and limited measurable states is investigated. Using the time-series of only one measurable state, an algorithm is proposed to stabilize unstable fixed points. The approach consists of three steps: first, using Takens embedding theory, a delayed phase space preserving the topological characteristics of the unknown system is reconstructed. Second, a dynamic model is identified by recursive least squares method to estimate the time-series data in the delayed phase space. Finally, based on the reconstructed model, an appropriate linear delayed feedback controller is obtained for stabilizing unstable fixed points of the system. Controller gains are computed using a systematic approach. The effectiveness of the proposed algorithm is examined by applying it to the generalized hyperchaotic Henon system, prey-predator population map, and the discrete-time Lorenz system.
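    The first two steps, reconstructing a delayed phase space from one measurable state and identifying a predictive model, can be sketched as follows. The Hénon map stands in for an "unknown" system with only x observed, and ordinary least squares is used in place of the paper's recursive formulation; the quadratic feature set is an assumption that happens to be exact for Hénon.

```python
import numpy as np

# Generate a scalar time series from the Henon map (only x is "measured").
a, b = 1.4, 0.3
x, y = 0.1, 0.1
series = []
for _ in range(600):
    x, y = 1 - a * x * x + y, b * x
    series.append(x)
series = np.array(series[100:])            # drop the transient

# Takens-style delay embedding with dimension 2: state s_n = (x_n, x_{n-1});
# fit a one-step quadratic predictor by least squares.
x0, x1, target = series[1:-1], series[:-2], series[2:]
features = np.c_[np.ones_like(x0), x0, x1, x0**2, x0 * x1, x1**2]
coef, *_ = np.linalg.lstsq(features, target, rcond=None)
pred = features @ coef
print(np.abs(pred - target).max())   # essentially zero: the model is exact here
```

    The recovered coefficients reproduce the map (coefficient -1.4 on x_n², 0.3 on x_{n-1}); with the identified model in hand, a delayed feedback gain stabilizing a fixed point can then be designed as in the paper.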

  14. Wind-influenced projectile motion

    NASA Astrophysics Data System (ADS)

    Bernardo, Reginald Christian; Perico Esguerra, Jose; Day Vallejos, Jazmine; Jerard Canda, Jeff

    2015-03-01

    We solved the wind-influenced projectile motion problem with the same initial and final heights and obtained exact analytical expressions for the shape of the trajectory, range, maximum height, time of flight, time of ascent, and time of descent with the help of the Lambert W function. It turns out that the range and maximum horizontal displacement are not always equal. When launched at a critical angle, the projectile returns to its starting position. A launch angle of 90° maximizes the time of flight, time of ascent, time of descent, and maximum height, while the launch angle corresponding to maximum range can be obtained by solving a transcendental equation. Finally, we expressed in a parametric equation the locus of points corresponding to maximum heights for projectiles launched from the ground with the same initial speed in all directions. We used the results to estimate how much a moderate wind can modify a golf ball’s range and suggested other possible applications.
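    How the Lambert W function enters such problems can be illustrated on a simpler standard case than the paper's full wind-influenced one: purely vertical motion with linear drag. With terminal speed v_T, launch speed v0 and equal launch/landing heights, the time of flight has the closed form T = (v_T/g)(a + W0(-a e^{-a})) with a = (v0 + v_T)/v_T. W0 is computed below by Newton's method to keep the sketch self-contained; scipy.special.lambertw would do the same job.

```python
import numpy as np

def lambert_w0(z, tol=1e-12):
    """Principal branch W0 by Newton's method (valid for z > -1/e)."""
    w = 0.0
    for _ in range(100):
        ew = np.exp(w)
        step = (w * ew - z) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

g, v0, vT = 9.8, 10.0, 30.0            # illustrative values
a = (v0 + vT) / vT
T = (vT / g) * (a + lambert_w0(-a * np.exp(-a)))   # time of flight

# Check against the linear-drag trajectory
# y(t) = (vT/g)*(v0+vT)*(1 - exp(-g*t/vT)) - vT*t, which should vanish at T.
y_T = (vT / g) * (v0 + vT) * (1 - np.exp(-g * T / vT)) - vT * T
print(T, y_T)
```

    As expected, T comes out shorter than the drag-free value 2·v0/g, and y(T) vanishes to numerical precision.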

  15. Real-time global illumination on mobile device

    NASA Astrophysics Data System (ADS)

    Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.

    2014-02-01

    We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates the local illumination with a shadow map on the GPU. The second stage uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al.1 and add the indirect illumination to the local illumination on the GPU. With the limited computing resources of mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method over 3D geometry and attributes simultaneously to reduce the total number of virtual point lights. We also use a hybrid strategy that collaboratively combines the CPUs and GPUs available in a mobile SoC. Experimental results demonstrate the global illumination performance of the proposed method.

  16. Universality of fast quenches from the conformal perturbation theory

    NASA Astrophysics Data System (ADS)

    Dymarsky, Anatoly; Smolkin, Michael

    2018-01-01

    We consider global quantum quenches, a protocol when a continuous field theoretic system in the ground state is driven by a homogeneous time-dependent external interaction. When the typical inverse time scale of the interaction is much larger than all relevant scales except for the UV-cutoff the system's response exhibits universal scaling behavior. We provide both qualitative and quantitative explanations of this universality and argue that physics of the response during and shortly after the quench is governed by the conformal perturbation theory around the UV fixed point. We proceed to calculate the response of one and two-point correlation functions confirming and generalizing universal scalings found previously. Finally, we discuss late time behavior after the quench and argue that all local quantities will equilibrate to their thermal values specified by an excess energy acquired by the system during the quench.

  17. Dynamic Control of Particle Deposition in Evaporating Droplets by an External Point Source of Vapor

    PubMed Central

    2018-01-01

    The deposition of particles on a surface by an evaporating sessile droplet is important for phenomena as diverse as printing, thin-film deposition, and self-assembly. The shape of the final deposit depends on the flows within the droplet during evaporation. These flows are typically determined at the onset of the process by the intrinsic physical, chemical, and geometrical properties of the droplet and its environment. Here, we demonstrate deterministic emergence and real-time control of Marangoni flows within the evaporating droplet by an external point source of vapor. By varying the source location, we can modulate these flows in space and time to pattern colloids on surfaces in a controllable manner. PMID:29363979

  18. Parcellation of the Healthy Neonatal Brain into 107 Regions Using Atlas Propagation through Intermediate Time Points in Childhood.

    PubMed

    Blesa, Manuel; Serag, Ahmed; Wilkinson, Alastair G; Anblagan, Devasuda; Telford, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Macnaught, Gillian; Semple, Scott I; Bastin, Mark E; Boardman, James P

    2016-01-01

    Neuroimage analysis pipelines rely on parcellated atlases generated from healthy individuals to provide anatomic context to structural and diffusion MRI data. Atlases constructed using adult data introduce bias into studies of early brain development. We aimed to create a neonatal brain atlas of healthy subjects that can be applied to multi-modal MRI data. Structural and diffusion 3T MRI scans were acquired soon after birth from 33 typically developing neonates born at term (mean postmenstrual age at birth 39+5 weeks, range 37+2 to 41+6). An adult brain atlas (SRI24/TZO) was propagated to the neonatal data using temporal registration via childhood templates with dense temporal samples (NIH Pediatric Database), with the final atlas (Edinburgh Neonatal Atlas, ENA33) constructed using the Symmetric Group Normalization (SyGN) method. After this step, the computed final transformations were applied to T2-weighted data, and fractional anisotropy, mean diffusivity, and tissue segmentations to provide a multi-modal atlas with 107 anatomical regions; a symmetric version was also created to facilitate studies of laterality. Volumes of each region of interest were measured to provide reference data from normal subjects. Because this atlas is generated from step-wise propagation of adult labels through intermediate time points in childhood, it may serve as a useful starting point for modeling brain growth during development.

  19. The Thurgood Marshall School of Law Empirical Findings: A Report of the Relationship between Graduate GPAs and First-Time Texas Bar Scores of February 2010 and July 2009

    ERIC Educational Resources Information Center

    Kadhi, T.; Holley, D.; Palasota, A.

    2010-01-01

    The following report gives descriptive and correlational statistical findings of the Grade Point Averages (GPAs) of the February 2010 and July 2009 TMSL First Time Texas Bar Test Takers to their TMSL Final GPA. Data was pre-existing and was given to the Evaluator by email from the Dean and Registrar. Statistical analyses were run using SPSS 17 to…

  20. Radar - ESRL Wind Profiler with RASS, Wasco Airport - Derived Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaffrey, Katherine

    Profiles of turbulence dissipation rate for 15-minute intervals, time-stamped at the beginning of the 15-minute period, during the final 30 minutes of each hour. During that time, the 915-MHz wind profiling radar was in an optimized configuration with a vertically pointing beam only for measuring accurate spectral widths of vertical velocity. A bias-corrected dissipation rate also was profiled (described in McCaffrey et al. 2017). Hourly files contain two 15-minute profiles.

  1. 40 CFR 264.142 - Cost estimate for closure.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the cost of final closure at the point in the facility's active life when the extent and manner of its... costs for on-site disposal if he can demonstrate that on-site disposal capacity will exist at all times over the life of the facility. (3) The closure cost estimate may not incorporate any salvage value that...

  2. Extended protection capabilities of an immature dendritic-cell targeting malaria sporozoite vaccine.

    PubMed

    Luo, Kun; Zavala, Fidel; Gordy, James; Zhang, Hong; Markham, Richard B

    2017-04-25

    Mouse studies evaluating candidate malaria vaccines have typically examined protective efficacy over the relatively short time frames of several weeks after the final of multiple immunizations. The current study examines the protective ability in a mouse model system of a novel protein vaccine construct in which the adjuvant polyinosinic polycytidilic acid (poly(I:C)) is used in combination with a vaccine in which the immature dendritic cell targeting chemokine, macrophage inflammatory protein 3 alpha (MIP3α), is fused to the circumsporozoite protein (CSP) of Plasmodium falciparum (P. falciparum). Two vaccinations, three weeks apart, elicited extraordinarily high, MIP3α-dependent antibody responses. MIP3α was able to target the vaccine to the CCR6 receptor found predominantly on immature dendritic cells and significantly enhanced the cellular influx at the vaccination site. At three and 23 weeks after the final of two immunizations, mice were challenged by intravenous injection of 5×10³ transgenic Plasmodium berghei sporozoites expressing P. falciparum CSP, a challenge dose approximately one order of magnitude greater than that which is encountered after mosquito bite in the clinical setting. A ninety-seven percent reduction in liver sporozoite load was observed at both time points, 23 weeks being the last time point tested. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Grade inflation at a North American college of veterinary medicine: 1985-2006.

    PubMed

    Rush, Bonnie R; Elmore, Ronnie G; Sanderson, Michael W

    2009-01-01

    Grade inflation, an upward shift in student grade-point averages without a similar rise in achievement, is considered pervasive by most experts in post-secondary education in the United States. Grade-point averages (GPAs) at US universities have increased by roughly 0.15 points per decade since the 1960s, with a 0.6-point increase since 1967. In medical education, grade inflation has been documented and is particularly evident in the clinical setting. The purpose of this study was to evaluate grade inflation over a 22-year period in a college of veterinary medicine. Academic records from 2,060 students who graduated from the College of Veterinary Medicine at Kansas State University between 1985 and 2006 were evaluated, including cumulative GPAs earned during pre-clinical professional coursework, during clinical rotations, and at graduation. Grade inflation was documented at a rate of approximately 0.2 points per decade at this college of veterinary medicine. The difference in mean final GPA between the minimum (1986) and maximum (2003) years of graduation was 0.47 points. Grade inflation was similar for didactic coursework (years 1-3) and clinical rotations (final year). Demographic shifts, student qualifications, and tuition do not appear to have contributed to grade inflation over time. A change in academic standards and student evaluation of teaching may have contributed to relaxed grading standards, and technology in the classroom may have led to higher (earned) grades as a result of improved student learning.
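    The per-decade rates quoted above are simple linear trends in GPA against graduation year. A minimal sketch of how such a rate can be estimated by ordinary least squares; the cohort-mean GPAs below are hypothetical, not the study's data:

```python
def gpa_trend_per_decade(years, gpas):
    """Ordinary least-squares slope of GPA on graduation year, scaled to points per decade."""
    n = len(years)
    my = sum(years) / n
    mg = sum(gpas) / n
    cov = sum((y - my) * (g - mg) for y, g in zip(years, gpas))
    var = sum((y - my) ** 2 for y in years)
    return 10 * cov / var

# Hypothetical cohort means drifting upward by 0.02 points per year, 1985-2006
years = list(range(1985, 2007))
gpas = [3.0 + 0.02 * (y - 1985) for y in years]
print(round(gpa_trend_per_decade(years, gpas), 2))  # 0.2 points per decade
```

The same slope, fitted to the college's real cumulative GPAs, is what yields the roughly 0.2-point-per-decade figure reported in the abstract.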

  4. Comparison of distal soft-tissue procedures combined with a distal chevron osteotomy for moderate to severe hallux valgus: first web-space versus transarticular approach.

    PubMed

    Park, Yu-Bok; Lee, Keun-Bae; Kim, Sung-Kyu; Seon, Jong-Keun; Lee, Jun-Young

    2013-11-06

    There are two surgical approaches for distal soft-tissue procedures for the correction of hallux valgus: the dorsal first web-space approach and the medial transarticular approach. The purpose of this study was to compare the outcomes achieved after use of either of these approaches combined with a distal chevron osteotomy in patients with moderate to severe hallux valgus. One hundred and twenty-two female patients (122 feet) who underwent a distal chevron osteotomy as part of a distal soft-tissue procedure for the treatment of symptomatic unilateral moderate to severe hallux valgus constituted the study cohort. The 122 feet were randomly divided into two groups: a dorsal first web-space approach (group D; sixty feet) and a medial transarticular approach (group M; sixty-two feet). The clinical and radiographic results of the two groups were compared at a mean follow-up time of thirty-eight months. The American Orthopaedic Foot & Ankle Society (AOFAS) hindfoot scale hallux metatarsophalangeal-interphalangeal scores improved from a mean and standard deviation of 55.5 ± 12.8 points preoperatively to 93.5 ± 6.3 points at the final follow-up in group D and from 54.9 ± 12.6 points preoperatively to 93.6 ± 6.2 points at the final follow-up in group M. The mean hallux valgus angle in groups D and M was reduced from 32.2° ± 6.3° and 33.1° ± 8.4° preoperatively to 10.5° ± 5.5° and 9.9° ± 5.5°, respectively, at the time of final follow-up. The mean first intermetatarsal angle in groups D and M was reduced from 15.0° ± 2.8° and 15.3° ± 2.7° preoperatively to 6.5° ± 2.2° and 6.3° ± 2.4°, respectively, at the final follow-up. The clinical and radiographic outcomes were not significantly different between the two groups. The final clinical and radiographic outcomes between the two approaches for distal soft-tissue procedures were comparable and equally successful. 
Accordingly, the results of this study suggest that the medial transarticular approach is an effective and reliable means of lateral soft-tissue release compared with the dorsal first web-space approach. Therapeutic level II. See Instructions for Authors for a complete description of levels of evidence.

  5. Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation

    NASA Astrophysics Data System (ADS)

    An, Lu; Guo, Baolong

    2018-03-01

    Recently, illegal constructions have been appearing frequently in our surroundings, seriously restricting the orderly development of urban modernization. 3D point cloud data can be used to identify illegal buildings and thus address this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. Initially, in order to save memory space and reduce processing time, a lossless point cloud compression method based on a minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground are obtained using a region growing method, and as a result the illegal constructions can be marked. The effectiveness of the proposed algorithm is verified on a publicly available data set from the International Society for Photogrammetry and Remote Sensing (ISPRS).
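    The final region-growing step can be illustrated with a simple Euclidean clustering sketch: after ground removal, points within a distance threshold of an existing region are absorbed into it. This is not the authors' implementation; the radius and the points below are hypothetical:

```python
from collections import deque

def region_grow(points, radius):
    """Group 3D points into clusters: any point within `radius` of a
    cluster member joins that cluster (breadth-first growth from seeds)."""
    r2 = radius * radius
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue = deque([seed])
        cluster = [seed]
        while queue:
            i = queue.popleft()
            px, py, pz = points[i]
            for j in list(unvisited):
                qx, qy, qz = points[j]
                if (px - qx) ** 2 + (py - qy) ** 2 + (pz - qz) ** 2 <= r2:
                    unvisited.remove(j)
                    queue.append(j)
                    cluster.append(j)
        clusters.append(cluster)
    return clusters

# Two well-separated "buildings" after ground removal
pts = [(0, 0, 0), (0.5, 0, 0), (0, 0.5, 0), (10, 10, 0), (10.4, 10, 0)]
print(len(region_grow(pts, 1.0)))  # 2 clusters
```

Each resulting cluster can then be checked against a building register to flag candidate illegal constructions.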

  6. Simple Synthesis of Hydrogenated Castor Oil Fatty Amide Wax and Its Coating Characterization.

    PubMed

    Yu, Xiuzhu; Wang, Ning; Zhang, Rui; Zhao, Zhong

    2017-07-01

    A simple method for incorporating amide groups into hydrogenated castor oil (HCO) to produce a wax that can substitute for beeswax or carnauba wax in packaging and coating was developed. Based on the conversion rate of the products, HCO was reacted with ethanolamine at 150°C for 5 h at an HCO-to-ethanolamine molar ratio of 1:4. The hardness of the final product was seven times that of beeswax; its cohesiveness was 1.3 times that of beeswax and approximately one half that of carnauba wax; and its melting point was 98°C. Fourier transform infrared spectroscopy confirmed that amide groups were incorporated to form the amide products. In coating applications, the coating force of the final product on cardboard was higher than that of beeswax and paraffin wax and lower than that of carnauba wax. After 24 h of soaking, the compression forces decreased. HCO fatty amide wax can thus serve as an alternative to carnauba wax and beeswax in coating applications.

  7. Evaluating the Variations in the Flood Susceptibility Maps Accuracies due to the Alterations in the Type and Extent of the Flood Inventory

    NASA Astrophysics Data System (ADS)

    Tehrany, M. Sh.; Jones, S.

    2017-10-01

    This paper explores the influence of the extent, density, and format of the flood inventory data on the final susceptibility map. The extreme 2011 Brisbane flood event was used as the case study. Logistic regression (LR) was selected for the modelling because it is a well-known algorithm in natural hazard modelling, owing to its interpretability, rapid processing time, and accurate measurement approach. The LR model was applied using both polygon and point formats of the inventory data. Random subsets of 1000, 700, 500, 300, 100, and 50 points were selected, and susceptibility mapping was undertaken using each group of random points. The resultant maps were assessed visually and statistically using the Area Under the Curve (AUC) method. The prediction rates measured for the susceptibility maps produced by the polygon format and by 1000, 700, 500, 300, 100, and 50 random points were 63 %, 76 %, 88 %, 80 %, 74 %, 71 % and 65 %, respectively. Evidently, using the polygon format of the inventory data did not lead to reasonable outcomes. In the case of random points, raising the number of points increased the prediction rates, except for 1000 points. Hence, minimum and maximum thresholds for the extent of the inventory must be set prior to the analysis. It is concluded that the extent and format of the inventory data are two of the influential components in the precision of the modelling.
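    The AUC prediction rates quoted above compare a map's susceptibility scores against observed flood occurrence; AUC can be computed directly with the Mann-Whitney formulation. A minimal sketch with made-up scores and labels (not the study's data):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (flooded, non-flooded) pairs ranked correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical susceptibility scores vs. observed flood cells (1 = flooded)
scores = [0.9, 0.8, 0.4, 0.7, 0.2, 0.1]
labels = [1,   1,   1,   0,   0,   0]
print(round(auc(scores, labels), 3))  # 0.889
```

A value of 0.5 means the map ranks flooded and non-flooded cells no better than chance; the 88 % figure above corresponds to an AUC of 0.88.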

  8. A systematic approach for studying the signs and symptoms of fever in adult patients: the fever assessment tool (FAST).

    PubMed

    Ames, Nancy J; Powers, John H; Ranucci, Alexandra; Gartrell, Kyungsook; Yang, Li; VanRaden, Mark; Leidy, Nancy Kline; Wallen, Gwenyth R

    2017-04-27

    Although body temperature is one of four key vital signs routinely monitored and treated in clinical practice, relatively little is known about the symptoms associated with febrile states. The purpose of this study was to assess the validity, reliability and feasibility of the Fever Assessment Tool (FAST) in an acute care research setting. Qualitative: To assess content validity and finalize the FAST instrument, 12 adults from an inpatient medical-surgical unit at the National Institutes of Health (NIH) Clinical Center participated in cognitive interviews within approximately 12 h of a febrile state (tympanic temperature ≥ 38° Celsius). Quantitative: To test reliability, validity and feasibility, 56 new adult inpatients completed the 21-item FAST. The cognitive interviews clarified and validated the content of the final 21-item FAST. Fifty-six patients completed the FAST from two to 133 times during routine vital sign assessment, yielding 1,699 temperature time points. Thirty-four percent of the patients (N = 19) experienced fever at one or more time points, with a total of 125 febrile time points. Kuder-Richardson 20 (KR-20) reliability of the FAST was 0.70. Four nonspecific symptom categories, Tired or Run-Down (12), Sleepy (13), Weak or Lacking Energy (11), and Thirsty (9) were among the most frequently reported symptoms in all participants. Using Generalized Estimating Equations (GEE), the odds of reporting eight symptoms, Warm (4), Sweating (5), Thirsty (9), General Body Aches (10), Weak or Lacking Energy (11), Tired or Run Down (12) and Difficulty Breathing (17), were increased when patients had a fever (Fever Now), compared to the two other subgroups-patients who had a fever, but not at that particular time point, (Fever Not Now) and patients who never had a fever (Fever Never). Many, but not all, of the comparisons were significant in both groups. Results suggest the FAST is reliable, valid and easy to administer. 
In addition to symptoms usually associated with fever (e.g. feeling warm), symptoms such as Difficulty Breathing (17) were identified with fever. Further study in a larger, more diverse patient population is warranted. Clinical Trials Number: NCT01287143 (January 2011).
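    The KR-20 reliability of 0.70 reported above is the standard Kuder-Richardson formula for dichotomous items. A sketch with a small hypothetical response matrix (one row per respondent, one 0/1 column per symptom item):

```python
def kr20(responses):
    """Kuder-Richardson 20 for dichotomous (0/1) item responses.
    responses: list of per-subject lists, one 0/1 entry per item.
    Assumes at least two items and non-zero total-score variance."""
    n = len(responses)
    k = len(responses[0])
    # sum of item variances p_i * (1 - p_i)
    pq = 0.0
    for i in range(k):
        p = sum(r[i] for r in responses) / n
        pq += p * (1 - p)
    totals = [sum(r) for r in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # population variance
    return k / (k - 1) * (1 - pq / var)

# Hypothetical data: 4 respondents, 3 items
responses = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(kr20(responses))  # 0.75
```

Values around 0.7 and above are conventionally taken as acceptable internal consistency for a screening instrument.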

  9. One-loop gravitational wave spectrum in de Sitter spacetime

    NASA Astrophysics Data System (ADS)

    Fröb, Markus B.; Roura, Albert; Verdaguer, Enric

    2012-08-01

    The two-point function for tensor metric perturbations around de Sitter spacetime including one-loop corrections from massless conformally coupled scalar fields is calculated exactly. We work in the Poincaré patch (with spatially flat sections) and employ dimensional regularization for the renormalization process. Unlike previous studies we obtain the result for arbitrary time separations rather than just equal times. Moreover, in contrast to existing results for tensor perturbations, ours is manifestly invariant with respect to the subgroup of de Sitter isometries corresponding to a simultaneous time translation and rescaling of the spatial coordinates. Having selected the right initial state for the interacting theory via an appropriate iε prescription is crucial for that. Finally, we show that although the two-point function is a well-defined spacetime distribution, the equal-time limit of its spatial Fourier transform is divergent. Therefore, contrary to the well-defined distribution for arbitrary time separations, the power spectrum is strictly speaking ill-defined when loop corrections are included.

  10. A real-time ionospheric model based on GNSS Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Zhang, Hongping; Ge, Maorong; Huang, Guanwen

    2013-09-01

    This paper proposes a method of real-time monitoring and modeling the ionospheric Total Electron Content (TEC) by Precise Point Positioning (PPP). Firstly, the ionospheric TEC and the receiver's Differential Code Biases (DCB) are estimated with the undifferenced raw observations in real time; then the ionospheric TEC model is established based on the Single Layer Model (SLM) assumption and the recovered ionospheric TEC. In this study, phase observations with high precision are directly used instead of phase-smoothed code observations. In addition, the DCB estimation is separated from the establishment of the ionospheric model, which limits the impact of the SLM assumption. The ionospheric model is established at every epoch for real-time application. The method is validated with three different GNSS networks on a local, regional, and global basis. The results show that the method is feasible and effective; the real-time ionosphere and DCB results are very consistent with the IGS final products, with biases of 1-2 TECU and 0.4 ns, respectively.
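    The SLM assumption treats the ionosphere as a thin shell at a fixed height and maps slant TEC along the line of sight to vertical TEC at the pierce point. A sketch of the standard mapping function; the shell height and Earth radius values are illustrative defaults, not necessarily those used in the paper:

```python
import math

def slant_to_vertical_tec(stec, elevation_deg,
                          shell_height_km=450.0, earth_radius_km=6371.0):
    """Single-layer-model mapping: sin z' = R/(R+H) * sin z,
    then VTEC = STEC * cos z', where z is the zenith angle at the receiver."""
    z = math.radians(90.0 - elevation_deg)
    sin_zp = earth_radius_km / (earth_radius_km + shell_height_km) * math.sin(z)
    return stec * math.cos(math.asin(sin_zp))

# A zenith ray (elevation 90°) maps one-to-one; low-elevation rays shrink
print(slant_to_vertical_tec(20.0, 90.0))  # 20.0
```

In the paper's pipeline this mapping connects the PPP-recovered slant TEC observations to the vertical TEC model fitted at each epoch.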

  11. The Improvement of the Closed Bounded Volume (CBV) Evaluation Methods to Compute a Feasible Rough Machining Area Based on Faceted Models

    NASA Astrophysics Data System (ADS)

    Hadi Sutrisno, Himawan; Kiswanto, Gandjar; Istiyanto, Jos

    2017-06-01

    Rough machining is aimed at shaping a workpiece toward its final form. Because it removes the bulk of the material, this process takes up a large proportion of the total machining time. For certain models, rough machining has limitations, especially on surfaces such as turbine blades and impellers. CBV evaluation is one of the concepts used to detect areas admissible in the machining process. Whereas previous research detected the CBV area using a pair of normal vectors, this research simplifies the process by detecting the CBV area with a slicing line for each point cloud formed. The simulation resulted in three steps for this method: 1. triangulation of the CAD design models; 2. development of CC points from the point cloud; 3. the slicing-line method, used to evaluate each point cloud position (under CBV or outside CBV). The result of this evaluation method can be used as a tool for orientation set-up at each CC point position in feasible areas of rough machining.
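    The slicing-line evaluation of each point's position is, in spirit, an even-odd crossing test: a line cast from the point either crosses the bounding geometry an odd number of times (inside) or an even number (outside). A 2D ray-casting analogue of that test, illustrative only and not the paper's 3D implementation:

```python
def inside_polygon(pt, poly):
    """Even-odd test: cast a horizontal ray from pt to +x and count how
    many polygon edges it crosses; an odd count means pt is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                            # crossing is to the right
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(inside_polygon((2, 2), square))  # True
print(inside_polygon((5, 2), square))  # False
```

In 3D, the same parity idea applied along a slicing line through each cloud point distinguishes "under CBV" from "outside CBV" positions.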

  12. Preconditioning 2D Integer Data for Fast Convex Hull Computations.

    PubMed

    Cadenas, José Oswaldo; Megson, Graham M; Luengo Hendriks, Cris L

    2016-01-01

    In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speed up gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved.
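    A sketch of the idea under stated assumptions: bucketing integer points by x column and keeping only each column's extreme y values yields a reduced, x-ordered set without any explicit sort, which a standard monotone-chain pass can then close into the same hull. The exact preconditioning rules in the paper may differ; this is an illustration of the principle:

```python
def precondition(points, p):
    """Keep only the lowest and highest y in each integer x column (0 <= x < p).
    Iterating columns in order yields the reduced set already sorted by x."""
    lo = [None] * p
    hi = [None] * p
    for x, y in points:
        if lo[x] is None or y < lo[x]: lo[x] = y
        if hi[x] is None or y > hi[x]: hi[x] = y
    reduced = []
    for x in range(p):
        if lo[x] is not None:
            reduced.append((x, lo[x]))
            if hi[x] != lo[x]:
                reduced.append((x, hi[x]))
    return reduced

def cross(o, a, b):
    return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

def hull(pts):
    """Andrew's monotone chain convex hull."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for pt in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], pt) <= 0:
            lower.pop()
        lower.append(pt)
    for pt in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], pt) <= 0:
            upper.pop()
        upper.append(pt)
    return lower[:-1] + upper[:-1]

pts = [(0, 0), (4, 0), (4, 4), (0, 4), (1, 1), (2, 1), (2, 2), (2, 3), (3, 3)]
reduced = precondition(pts, 5)
print(len(reduced))                                # 8: interior (2, 2) dropped
print(sorted(hull(reduced)) == sorted(hull(pts)))  # True: same hull
```

Dropping interior column points can never remove a hull vertex, since a hull vertex is by construction extreme in y within its column.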

  13. Optimal transfers between libration-point orbits in the elliptic restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Hiday, Lisa Ann

    1992-09-01

    A strategy is formulated to design optimal impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L(1) libration point of the Sun-Earth/Moon barycenter system. Two methods of constructing nominal transfers, for which the fuel cost is to be minimized, are developed; both inferior and superior transfers between two halo orbits are considered. The necessary conditions for an optimal transfer trajectory are stated in terms of the primer vector. The adjoint equation relating reference and perturbed trajectories in this formulation of the elliptic restricted three-body problem is shown to be distinctly different from that obtained in the analysis of trajectories in the two-body problem. Criteria are established whereby the cost on a nominal transfer can be improved by the addition of an interior impulse or by the implementation of coasting arcs in the initial and final orbits. The necessary conditions for the local optimality of a time-fixed transfer trajectory possessing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. The optimality of a time-free transfer containing coasting arcs is surmised by examination of the slopes at the endpoints of a plot of the magnitude of the primer vector over the duration of the transfer path. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The position and timing of each interior impulse applied to a time-fixed transfer, as well as the direction and length of coasting periods implemented on a time-free transfer, are specified by the unconstrained minimization of the appropriate variation in cost utilizing a multivariable search technique. 
Although optimal solutions in some instances are elusive, the time-fixed and time-free optimization algorithms prove to be very successful in diminishing costs on nominal transfer trajectories. The inclusion of coasting arcs on time-free superior and inferior transfers results in significant modification of the transfer time of flight caused by shifts in departure and arrival locations on the halo orbits.

  14. A unified procedure for meta-analytic evaluation of surrogate end points in randomized clinical trials

    PubMed Central

    Dai, James Y.; Hughes, James P.

    2012-01-01

    The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448
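    The third step, the parametric bootstrap confidence interval, can be sketched as follows. The fitted normal model and its parameters here are hypothetical stand-ins for the REML estimates of step two, and the summary statistic is a simple mean rather than a surrogacy metric:

```python
import random
import statistics

def parametric_bootstrap_ci(est_mean, est_sd, n_trials,
                            stat=statistics.mean,
                            n_boot=2000, alpha=0.05, seed=1):
    """Percentile CI for a summary statistic of trial-level treatment effects,
    resampling from the fitted normal model (illustrative parameters)."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        sample = [rng.gauss(est_mean, est_sd) for _ in range(n_trials)]
        reps.append(stat(sample))
    reps.sort()
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = parametric_bootstrap_ci(0.5, 0.2, n_trials=12)
print(lo < 0.5 < hi)  # True: the CI brackets the fitted mean
```

In the actual procedure the resampled quantities would be the random treatment effects under the fitted mixed model, and the statistic would be one of the surrogacy metrics discussed in the paper.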

  15. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

    This paper proposes an improved image registration method combining Hu invariant moment contour information with feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as a time-consuming feature point extraction process, overload of redundant invalid information, and inefficiency. First, the neighborhood of pixels is used to extract contour information, employing the Hu invariant moments as the similarity measure, and SIFT feature points are extracted in the similar regions. Then the Euclidean distance is replaced with the Hellinger kernel function to improve the initial matching efficiency and obtain fewer mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm is used to fuse the mosaic images and achieve seamless stitching. Experimental results confirm the high accuracy and efficiency of the method proposed in this paper.
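    The Hellinger kernel that replaces Euclidean distance for descriptor comparison treats L1-normalised descriptors as discrete distributions. A minimal sketch with hypothetical 4-bin descriptors (real SIFT descriptors have 128 bins):

```python
import math

def hellinger_similarity(d1, d2):
    """Hellinger kernel on L1-normalised descriptors: sum_i sqrt(a_i * b_i).
    Equals 1 for identical distributions and is smaller otherwise."""
    s1, s2 = sum(d1), sum(d2)
    return sum(math.sqrt((a / s1) * (b / s2)) for a, b in zip(d1, d2))

a = [4, 0, 2, 2]
b = [4, 0, 2, 2]
c = [0, 4, 2, 2]
print(round(hellinger_similarity(a, b), 3))  # 1.0
print(hellinger_similarity(a, c) < 1)        # True: mismatched bins penalised
```

Compared with Euclidean distance, the square root damps the influence of single dominant bins, which is why Hellinger-style comparison tends to produce fewer false matches between gradient-histogram descriptors.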

  16. Ancho Canyon RF Collect, March 2, 2017: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Junor, William; Layne, John Preston; Gamble, Thomas Kirk

    2017-09-21

    We report the results from the March 2, 2017, Ancho Canyon RF collection. While bright electromagnetic signals were seen nearby the firing point, there were no detections of signals from the explosively-fired fuse at a collection point about 570m distant on the East Mesa. However, "liveness" tests of the East Mesa data acquisition system and checks of the timing both suggest that the collection system was working correctly. We examine possible reasons for the lack of detection. Principal among these is that the impulsive signal may be small compared to the radio frequency background on the East Mesa.

  17. Activity versus outcome maximization in time management.

    PubMed

    Malkoc, Selin A; Tonietto, Gabriela N

    2018-04-30

    Feeling time-pressed has become ubiquitous. Time management strategies have emerged to help individuals fit in more of their desired and necessary activities. We provide a review of these strategies. In doing so, we distinguish between two, often competing, motives people have in managing their time: activity maximization and outcome maximization. The emerging literature points to an important dilemma: a given strategy that maximizes the number of activities might be detrimental to outcome maximization. We discuss such factors that might hinder performance in work tasks and enjoyment in leisure tasks. Finally, we provide theoretically grounded recommendations that can help balance these two important goals in time management. Published by Elsevier Ltd.

  18. A shape-based segmentation method for mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen

    2013-07-01

    Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to geometric features using support vector machines (SVMs). Second, a set of rules are defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and acceptable computational cost, and that it segments pole-like objects particularly well.
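    Geometric features derived from a point's neighborhood are commonly the eigenvalue-based linearity, planarity, and scattering descriptors computed from the local covariance matrix; the paper's exact feature set may differ. A sketch using NumPy:

```python
import numpy as np

def shape_features(neighborhood):
    """Eigenvalue-based shape descriptors of a local 3D neighborhood:
    linearity, planarity and scattering from the sorted covariance eigenvalues
    (l1 >= l2 >= l3)."""
    pts = np.asarray(neighborhood, dtype=float)
    cov = np.cov(pts.T)
    l3, l2, l1 = np.sort(np.linalg.eigvalsh(cov))  # ascending order
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    scattering = l3 / l1
    return linearity, planarity, scattering

# Points along a line score maximal linearity; a flat grid scores planarity
line = [(t, 0.0, 0.0) for t in range(10)]
plane = [(i, j, 0.0) for i in range(5) for j in range(5)]
lin, _, _ = shape_features(line)
_, pla, _ = shape_features(plane)
print(round(lin, 3))  # 1.0
print(round(pla, 3))  # 1.0
```

Feature vectors like these, computed at each point's optimal neighborhood size, are what an SVM can use to separate pole-like, planar, and scattered (vegetation-like) structures.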

  19. Series-nonuniform rational B-spline signal feedback: From chaos to any embedded periodic orbit or target point.

    PubMed

    Shao, Chenxi; Xue, Yong; Fang, Fang; Bai, Fangzhou; Yin, Peifeng; Wang, Binghong

    2015-07-01

    The self-controlling feedback control method requires an external periodic oscillator with special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to an expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to obtain the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.
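    The core idea, feeding back the weighted difference between the system's current state and a target, can be illustrated on the Lorenz system with a plain proportional difference feedback toward the fixed point C+. This is a simplified stand-in for the SNURBS periodic-signal feedback, not the paper's method; the gain, step size, and duration are illustrative:

```python
import math

def lorenz_with_feedback(k, steps=20000, dt=0.001,
                         sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrate the Lorenz system with a difference-feedback term
    u = -k * (state - target) steering the state toward the fixed point C+.
    Returns the final distance to the target."""
    xs = math.sqrt(beta * (rho - 1.0))
    tx, ty, tz = xs, xs, rho - 1.0          # fixed point C+
    x, y, z = 1.0, 1.0, 1.0
    for _ in range(steps):
        dx = sigma * (y - x) - k * (x - tx)
        dy = x * (rho - z) - y - k * (y - ty)
        dz = x * y - beta * z - k * (z - tz)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return math.dist((x, y, z), (tx, ty, tz))

print(lorenz_with_feedback(60.0) < 0.01)  # True: state settles onto C+
```

With the gain set to zero the trajectory stays on the chaotic attractor; as in the paper, the feedback weight determines how quickly the controlled system is led to the chosen target.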

  20. A work-family conflict/subjective well-being process model: a test of competing theories of longitudinal effects.

    PubMed

    Matthews, Russell A; Wayne, Julie Holliday; Ford, Michael T

    2014-11-01

    In the present study, we examine competing predictions of stress reaction models and adaptation theories regarding the longitudinal relationship between work-family conflict and subjective well-being. Based on data from 432 participants over 3 time points with 2 lags of varying lengths (i.e., 1 month, 6 months), our findings suggest that in the short term, consistent with prior theory and research, work-family conflict is associated with poorer subjective well-being. Counter to traditional work-family predictions but consistent with adaptation theories, after accounting for concurrent levels of work-family conflict as well as past levels of subjective well-being, past exposure to work-family conflict was associated with higher levels of subjective well-being over time. Moreover, evidence was found for reverse causation in that greater subjective well-being at 1 point in time was associated with reduced work-family conflict at a subsequent point in time. Finally, the pattern of results did not vary as a function of using different temporal lags. We discuss the theoretical, research, and practical implications of our findings. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  1. Series-nonuniform rational B-spline signal feedback: From chaos to any embedded periodic orbit or target point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Chenxi, E-mail: cxshao@ustc.edu.cn; Xue, Yong; Fang, Fang

    2015-07-15

    The self-controlling feedback control method requires an external periodic oscillator with special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to an expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to obtain the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.

  2. Wall shear stress fixed points in cardiovascular fluid mechanics.

    PubMed

    Arzani, Amirhossein; Shadden, Shawn C

    2018-05-17

    Complex blood flow in large arteries creates rich wall shear stress (WSS) vectorial features. WSS acts as a link between blood flow dynamics and the biology of various cardiovascular diseases. WSS has been of great interest in a wide range of studies and has been the most popular measure to correlate blood flow to cardiovascular disease. Recent studies have emphasized different vectorial features of WSS. However, fixed points in the WSS vector field have not received much attention. A WSS fixed point is a point on the vessel wall where the WSS vector vanishes. In this article, WSS fixed points are classified and the aspects by which they could influence cardiovascular disease are reviewed. First, the connection between WSS fixed points and the flow topology away from the vessel wall is discussed. Second, the potential role of time-averaged WSS fixed points in biochemical mass transport is demonstrated using the recent concept of Lagrangian WSS structures. Finally, simple measures are proposed to quantify the exposure of the endothelial cells to WSS fixed points. Examples from various arterial flow applications are demonstrated.
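
    The definition above (a wall point where the WSS vector vanishes), together with classification via the local Jacobian, can be sketched on a synthetic 2-D field, with an analytic saddle standing in for CFD-derived wall data:

```python
import numpy as np

# Synthetic 2-D "wall shear stress" field with a saddle-type fixed point at the
# origin: tau = (x, -y). Real WSS would come from a CFD solution on the wall mesh.
n = 201
xs = np.linspace(-1.0, 1.0, n)
ys = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, ys, indexing="ij")
U, V = X, -Y

# Fixed point: the grid node where |tau| is (numerically) minimal.
mag = np.hypot(U, V)
i, j = np.unravel_index(np.argmin(mag), mag.shape)
x0, y0 = xs[i], ys[j]

# Classify via the Jacobian of the field at the fixed point (central differences).
h = xs[1] - xs[0]
J = np.array([
    [(U[i + 1, j] - U[i - 1, j]) / (2 * h), (U[i, j + 1] - U[i, j - 1]) / (2 * h)],
    [(V[i + 1, j] - V[i - 1, j]) / (2 * h), (V[i, j + 1] - V[i, j - 1]) / (2 * h)],
])
eigs = np.linalg.eigvals(J)
kind = "saddle" if np.isreal(eigs).all() and eigs.real.min() < 0 < eigs.real.max() else "node/focus"
print((x0, y0), kind)
```

Real eigenvalues of opposite sign mark a saddle; a stable node or focus (eigenvalues with negative real parts) is the flow-attachment pattern associated with mass-transport accumulation in the abstract's discussion.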

  3. Time efficient Gabor fused master slave optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Cernat, Ramona; Bradu, Adrian; Rivet, Sylvain; Podoleanu, Adrian

    2018-02-01

    In this paper, the benefits in operation time that the Master/Slave (MS) implementation of optical coherence tomography (OCT) can bring in comparison to Gabor-fused (GF) OCT employing the conventional fast Fourier transform are presented. The Gabor Fusion/Master Slave OCT architecture proposed here does not need any data stitching. Instead, a subset of en-face images is produced for each focus position inside the sample to be imaged, using a reduced number of theoretically inferred Master masks. These en-face images are then assembled into a final volume. When the channelled spectra are digitized into 1024 sampling points and more than 4 focus positions are required to produce the final volume, the Master Slave implementation of the instrument is faster than the conventional fast-Fourier-transform-based procedure.

  4. A new procedure for the determination of distillation temperature distribution of high-boiling petroleum products and fractions.

    PubMed

    Boczkaj, Grzegorz; Przyjazny, Andrzej; Kamiński, Marian

    2011-03-01

    The distribution of distillation temperatures of liquid and semi-fluid products, including petroleum fractions and products, is an important process and practical parameter. It provides information on the properties of crude oil and the content of particular fractions, classified on the basis of their boiling points, as well as the optimum conditions of atmospheric or vacuum distillation. At present, the distribution of distillation temperatures is often investigated by simulated distillation (SIMDIS) using capillary gas chromatography (CGC) with a short capillary column with polydimethylsiloxane as the stationary phase. This paper presents the results of investigations on the possibility of replacing the CGC columns currently used for SIMDIS with a deactivated fused silica capillary tube without any stationary phase. The SIMDIS technique making use of such an empty fused silica column allows a considerable lowering of the elution temperature of the analytes, which results in a decrease of the final oven temperature while ensuring a complete separation of the mixture. This eliminates the possibility of decomposition of less thermally stable mixture components and bleeding of the stationary phase, which would result in an increase of the detector signal. It also improves the stability of the baseline, which is especially important in the determination of the end point of elution, the basis for finding the final temperature of distillation. This is a key parameter for the safe operation of hydrocracking, where an excessively high final temperature of distillation of a batch can result in serious damage to an expensive catalyst bed. This paper compares the distribution of distillation temperatures of the fraction from vacuum distillation of petroleum obtained using SIMDIS with that obtained by the proposed procedure. A good agreement between the two procedures was observed. In addition, typical values of elution temperatures of n-paraffin standards obtained by the two procedures were compared. Finally, the agreement between the boiling points of polar compounds determined from their retention times and their actual boiling points was investigated.

  5. Asymmetries in the Acquisition of Word-Initial and Word-Final Consonant Clusters

    ERIC Educational Resources Information Center

    Kirk, Cecilia; Demuth, Katherine

    2005-01-01

    Effects of negative input for 13 categories of grammatical error were assessed in a longitudinal study of naturalistic adult-child discourse. Two-hour samples of conversational interaction were obtained at two points in time, separated by a lag of 12 weeks, for 12 children (mean age 2;0 at the start). The data were interpreted within the framework…

  6. Longitudinal Effects of Job Change upon Interest, Utilization, and Satisfaction Attitudes. Final Report.

    ERIC Educational Resources Information Center

    Finstuen, Kenn; Edwards, John O., Jr.

    This research was designed to identify and assess the effects of a job's context and naturally occurring job content changes upon the job attitudes of 709 Air Force radio operators. The first of two phases of the investigation identified specific job types within the radio operator career field at two points in time, and determined the flow of…

  7. An approximate, maximum terminal velocity descent to a point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisler, G.R.; Hull, D.G.

    1987-01-01

    No closed-form control solution exists for maximizing the terminal velocity of a hypersonic glider at an arbitrary point. As an alternative, this study uses neighboring extremal theory to provide a sampled-data feedback law to guide the vehicle to a constrained ground range and altitude. The guidance algorithm is divided into two parts: 1) computation of a nominal, approximate, maximum terminal velocity trajectory to a constrained final altitude and computation of the resulting unconstrained ground range, and 2) computation of the neighboring extremal control perturbation at the sample value of flight path angle to compensate for changes in the approximate physical model and enable the vehicle to reach the on-board computed ground range. The trajectories are characterized by glide and dive flight to the target to minimize the time spent in the denser parts of the atmosphere. The proposed on-line scheme successfully brings the final altitude and range constraints together, as well as compensates for differences in flight model, atmosphere, and aerodynamics at the expense of guidance update computation time. Comparison with an independent, parameter optimization solution for the terminal velocity is excellent.

  8. Multivariate random regression analysis for body weight and main morphological traits in genetically improved farmed tilapia (Oreochromis niloticus).

    PubMed

    He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Han, Dandan; Xu, Pao; Yang, Runqing

    2017-11-02

    Because of their high economic importance, growth traits in fish are under continuous improvement. For growth traits that are recorded at multiple time points in life, the use of univariate and multivariate animal models is limited because of the variable and irregular timing of these measures. Thus, the univariate random regression model (RRM) was introduced for the genetic analysis of dynamic growth traits in fish breeding. We used a multivariate random regression model (MRRM) to analyze genetic changes in growth traits recorded at multiple time points in genetically improved farmed tilapia. Legendre polynomials of different orders were applied to characterize the influences of fixed and random effects on growth trajectories. The final MRRM was determined by optimizing the univariate RRM for each analyzed trait separately via an adaptively penalized likelihood criterion, which is superior to both the Akaike information criterion and the Bayesian information criterion. In the selected MRRM, the additive genetic effects were modeled by Legendre polynomials of order three for body weight (BWE) and body length (BL) and of order two for body depth (BD). By using the covariance functions of the MRRM, estimated heritabilities were between 0.086 and 0.628 for BWE, 0.155 and 0.556 for BL, and 0.056 and 0.607 for BD. Only heritabilities for BD measured from 60 to 140 days of age were consistently higher than those estimated by the univariate RRM. All genetic correlations between growth time points exceeded 0.5 for either single or pairwise time points; moreover, correlations between early and late growth time points were lower. Thus, for phenotypes that are measured repeatedly in aquaculture, an MRRM can enhance the efficiency of comprehensive selection for BWE and the main morphological traits.
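
    The Legendre-polynomial machinery underlying such random regression models can be sketched for the fixed-regression part alone; the data and coefficients below are synthetic and illustrative, and a full RRM would add animal-specific random coefficients:

```python
import numpy as np
from numpy.polynomial import legendre

# Ages (days) standardized to [-1, 1], as is conventional for Legendre-based
# random regression models.
age = np.linspace(60, 140, 40)
t = 2 * (age - age.min()) / (age.max() - age.min()) - 1

# Synthetic growth curve: third-order Legendre coefficients (illustrative values).
true_coef = np.array([50.0, 20.0, 5.0, -2.0])
rng = np.random.default_rng(0)
y = legendre.legval(t, true_coef) + rng.normal(0, 0.5, t.size)

# Least-squares fit of the Legendre basis (the fixed-regression part of an RRM).
basis = legendre.legvander(t, 3)          # shape (40, 4): P0..P3 evaluated at t
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
print(np.round(coef, 1))
```

The fitted coefficients recover the generating ones up to noise; in the genetic setting the same basis carries both the mean trajectory and the (co)variance functions from which the reported heritabilities are derived.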

  9. Comparison of viscous-shock-layer solutions by time-asymptotic and steady-state methods. [flow distribution around a Jupiter entry probe

    NASA Technical Reports Server (NTRS)

    Gupta, R. N.; Moss, J. N.; Simmonds, A. L.

    1982-01-01

    Two flow-field codes employing the time- and space-marching numerical techniques were evaluated. Both methods were used to analyze the flow field around a massively blown Jupiter entry probe under perfect-gas conditions. In order to obtain a direct point-by-point comparison, the computations were made by using identical grids and turbulence models. For the same degree of accuracy, the space-marching scheme takes much less time as compared to the time-marching method and would appear to provide accurate results for the problems with nonequilibrium chemistry, free from the effect of local differences in time on the final solution which is inherent in time-marching methods. With the time-marching method, however, the solutions are obtainable for the realistic entry probe shapes with massive or uniform surface blowing rates; whereas, with the space-marching technique, it is difficult to obtain converged solutions for such flow conditions. The choice of the numerical method is, therefore, problem dependent. Both methods give equally good results for the cases where results are compared with experimental data.

  10. Effects of exercise training and detraining in patients with fibromyalgia syndrome: a 3-yr longitudinal study.

    PubMed

    Sañudo, Borja; Carrasco, Luis; de Hoyo, Moisés; McVeigh, Joseph G

    2012-07-01

    This study aimed to evaluate the immediate effects of a 6-mo combined exercise program on quality-of-life, physical function, depression, and aerobic capacity in women with fibromyalgia syndrome and to determine the impact of repeated delivery of the intervention. Forty-one women with fibromyalgia were randomly assigned to a training group (EG; n = 21) and a control group (CG; n = 20). Quality-of-life and physical function were assessed using the 36-item Short-Form Health Survey (SF-36) and the Fibromyalgia Impact Questionnaire, and depression was measured using the Beck Depression Inventory. Physical fitness was measured using the 6-min Walk Test. Outcomes were assessed at baseline and after each 6-mo intervention, which was delivered over 30 mos (6 mos of training and 6 mos of detraining). After a 6-mo combined exercise program, there was a significant improvement in the Fibromyalgia Impact Questionnaire (P < 0.0005) for the training group over the control group. Repeated-measures analysis of variance across all time points demonstrated significant main effects for time for the Fibromyalgia Impact Questionnaire, SF-36, Beck Depression Inventory, and the 6-min Walk Test, but there were no between-group interaction effects. For the EG, there were significant within-group changes in the Fibromyalgia Impact Questionnaire, SF-36, and Beck Depression Inventory at the final time point; however, there were no within-group changes for the control group. Improvements achieved by the training group were maintained during the detraining period. A long-term exercise program can produce immediate improvements in key health domains in women with fibromyalgia. The benefits achieved with regular training can be maintained for 30 mos. The lack of difference between groups over time may be caused by attrition and consequent lack of power at the final time point.

  11. Whole-body tissue distribution of total radioactivity in rats after oral administration of [¹⁴C]-bilastine.

    PubMed

    Lucero, María Luisa; Patterson, Andrew B

    2012-06-01

    This study evaluated the tissue distribution of total radioactivity in male albino, male pigmented, and time-mated female albino rats after oral administration of a single dose of [¹⁴C]-bilastine (20 mg/kg). Although only 1 animal was analyzed at each time point, there were apparent differences in bilastine distribution. Radioactivity was distributed to only a few tissues at low levels in male rats, whereas distribution was more extensive and at higher levels in female rats. This may be a simple sex-related difference. In each group and at each time point, concentrations of radioactivity were high in the liver and kidney, reflecting the role of these organs in the elimination process. In male albino rats, no radioactivity was measurable by 72 hours postdose. In male pigmented rats, only the eye and uveal tract had measurable levels of radioactivity at 24 hours. Measurable levels of radioactivity were retained in these tissues at the final sampling time point (336 hours postdose), indicating a degree of melanin-associated binding. In time-mated female rats, but not in albino or pigmented male rats, there was evidence of low-level passage of radioactivity across the placental barrier into fetal tissues as well as low-level transfer of radioactivity into the brain.

  12. Critical Slowing Down in Time-to-Extinction: An Example of Critical Phenomena in Ecology

    NASA Technical Reports Server (NTRS)

    Gandhi, Amar; Levin, Simon; Orszag, Steven

    1998-01-01

    We study a model for two competing species that explicitly accounts for effects due to discreteness, stochasticity and spatial extension of populations. The two species are equally preferred by the environment and do better when surrounded by others of the same species. We observe that the final outcome depends on the initial densities (uniformly distributed in space) of the two species. The observed phase transition is a continuous one, and key macroscopic quantities like the correlation length of clusters and the time-to-extinction diverge at a critical point. Away from the critical point, the dynamics can be described by a mean-field approximation. Close to the critical point, however, there is a crossover to power-law behavior because of the gross mismatch between the largest and smallest scales in the system. We have developed a theory based on surface effects, which is in good agreement with the observed behavior. The coarse-grained reaction-diffusion system obtained from the mean-field dynamics agrees well with the particle system.

  13. A simple, remote, video based breathing monitor.

    PubMed

    Regev, Nir; Wulich, Dov

    2017-07-01

    Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist in the market, some with vital-signs monitoring capabilities, but none remote. This paper presents a simple yet efficient real-time method of extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked using the well-known Lucas-Kanade algorithm on a frame-by-frame basis. A generalized likelihood ratio test is then utilized on each of the many interest points to detect which is moving in harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a fusion maximum likelihood algorithm optimally estimates the breathing rate using all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's calculated rate, based on experiments on two babies and three adults.
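
    The spectral stage, Pisarenko harmonic decomposition of a single tone in noise, reduces for a 3x3 autocorrelation matrix to reading the frequency off the smallest-eigenvalue eigenvector. A minimal sketch follows (the optical-flow tracking and GLRT stages are omitted, and the sample rate is an assumed value):

```python
import numpy as np

def pisarenko_freq(x, fs):
    """Estimate the frequency of a single real sinusoid in noise via
    Pisarenko harmonic decomposition (3x3 autocorrelation matrix)."""
    n = len(x)
    # Biased sample autocorrelations r(0), r(1), r(2).
    r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(3)])
    R = np.array([[r[0], r[1], r[2]],
                  [r[1], r[0], r[1]],
                  [r[2], r[1], r[0]]])
    w, V = np.linalg.eigh(R)
    v = V[:, 0]                     # eigenvector of the smallest eigenvalue
    # Its polynomial has roots e^{+-j omega}:
    # v0 e^{j w} + v1 + v2 e^{-j w} = 0  =>  cos(w) = -v1 / (v0 + v2).
    cosw = -v[1] / (v[0] + v[2])
    return np.arccos(np.clip(cosw, -1.0, 1.0)) * fs / (2 * np.pi)

fs = 10.0                           # assumed video-derived sample rate, Hz
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 0.3 * t) + rng.normal(0, 0.05, t.size)  # 0.3 Hz = 18 BPM
print(round(60 * pisarenko_freq(x, fs), 1))                    # breaths per minute
```

Needing only three autocorrelation lags per update is what makes this decomposition attractive for frame-rate tracking of a breathing tone.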

  14. Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo

    2018-04-01

    In this paper, a novel adaptive dynamic programming (ADP) algorithm, called the "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee that the upper and lower iterative value functions converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, it is emphasized that both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, where the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, where the upper and lower performance index functions are proved not to be equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.
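
    The upper/lower iteration scheme can be illustrated on a toy finite zero-sum Markov game, a simplified analogue of the paper's nonlinear-systems setting with invented payoffs and transitions: the upper iteration applies min-max, the lower applies max-min, and the lower iterative value never exceeds the upper.

```python
import numpy as np

# Toy zero-sum Markov game: 2 states, 2 actions each, deterministic transitions.
gamma = 0.9
R = np.array([[[1.0, -2.0], [3.0, 0.0]],     # R[s, a, b]: payoff to the maximizer
              [[0.0, 2.0], [-1.0, 1.0]]])
next_state = (np.arange(2)[:, None, None]
              + np.arange(2)[None, :, None]
              + np.arange(2)[None, None, :]) % 2   # s' = (s + a + b) mod 2

def iterate(vup, vlo):
    Qup = R + gamma * vup[next_state]
    Qlo = R + gamma * vlo[next_state]
    # Upper iteration: min over b of max over a; lower: max over a of min over b.
    return Qup.max(axis=1).min(axis=1), Qlo.min(axis=2).max(axis=1)

vup = np.zeros(2)   # arbitrary (here zero) initial value functions
vlo = np.zeros(2)
for _ in range(200):
    vup, vlo = iterate(vup, vlo)
print(vlo, vup)     # lower value <= upper value, both at their fixed points
```

When the two fixed points coincide, a saddle-point equilibrium exists for this game; a gap between them is the finite-game counterpart of the non-equivalent upper and lower performance index functions discussed in the abstract.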

  15. Flippin' Fluid Mechanics - Using Online Technology to Enhance the In-Class Learning Experience

    NASA Astrophysics Data System (ADS)

    Webster, D. R.; Majerich, D. M.

    2013-11-01

    This study provides an empirical analysis of using online technologies and team problem-solving sessions to shift an undergraduate fluid mechanics course from a traditional lecture format to a collaborative learning environment. Students were from two consecutive semesters of the same course taught by the same professor. One group used online technologies and solved problems in class and the other did not. Out of class, the treatment group watched 72 short (11 minutes, on average) video lectures covering course topics and example problems being solved. Three times a week, students worked in teams of two to solve problems on desktop whiteboard tablets while the instructor and graduate assistants provided "just-in-time" tutoring. The number of team problems assigned during the semester exceeded 100. Weekly online homework was assigned to reinforce topics. The WileyPlus online system generated unique problem parameters for each student. The control group received three 50-minute weekly lectures. Data include three midterms and a final exam. Regression results indicate that, controlling for all of the entered variables, each additional problem-solving session a student attended raised the final grade by 0.327 points. Thus, if a student participated in all 25 of the team problem-solving sessions, the final grade would have been 8.2 points higher, a difference of nearly a grade. Using online technologies and teamwork appeared to result in improved achievement, but more research is needed to support these findings.

  16. A Statistical Guide to the Design of Deep Mutational Scanning Experiments

    PubMed Central

    Matuszewski, Sebastian; Hildebrandt, Marcel E.; Ghenu, Ana-Hermina; Jensen, Jeffrey D.; Bank, Claudia

    2016-01-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. PMID:27412710
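
    The core estimate in such bulk competitions, a mutant's selection coefficient, is commonly obtained as the slope of the log-ratio of mutant to wild-type counts across the sampled time points. A minimal sketch on synthetic data follows (a crude Poisson stand-in for sequencing noise, not the paper's error model):

```python
import numpy as np

rng = np.random.default_rng(2)
times = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # sampled time points (generations)
s_true = 0.15                                  # selection coefficient per generation

# Expected mutant:wild-type ratio grows as exp(s * t); observed counts are
# Poisson draws around that expectation (a rough sequencing-noise stand-in).
depth = 100_000
wt = rng.poisson(depth, times.size)
mut = rng.poisson(0.01 * depth * np.exp(s_true * times))

# Estimate s as the slope of log(mutant / wild-type) against time.
s_hat = np.polyfit(times, np.log(mut / wt), 1)[0]
print(round(s_hat, 3))
```

Because the estimate is a regression slope, its precision improves with a longer time span and more sampled time points, which is the intuition behind the abstract's finding that extra time points beat extra sequencing depth.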

  17. Sediment storage time in a simulated meandering river's floodplain, comparisons of point bar and overbank deposits

    NASA Astrophysics Data System (ADS)

    Ackerman, T. R.; Pizzuto, J. E.

    2016-12-01

    Sediment may be stored briefly or for long periods in alluvial deposits adjacent to rivers. The duration of sediment storage may affect diagenesis, and controls the timing of sediment delivery, affecting the propagation of upland sediment signals caused by tectonics, climate change, and land use, and the efficacy of watershed management strategies designed to reduce sediment loading to estuaries and reservoirs. Understanding the functional form of storage time distributions can help to extrapolate from limited field observations and improve forecasts of sediment loading. We simulate stratigraphy adjacent to a modeled river where meander migration is driven by channel curvature. The basal unit is built immediately as the channel migrates away, analogous to a point bar; rules for overbank (flood) deposition create thicker deposits at low elevations and near the channel, forming topographic features analogous to natural levees, scroll bars, and terraces. Deposit age is tracked everywhere throughout the simulation, and the storage time is recorded when the channel returns and erodes the sediment at each pixel. 210 ky of simulated run time is sufficient for the channel to migrate 10,500 channel widths, but only the final 90 ky are analyzed. Storage time survivor functions are well fit by exponential functions until 500 years (point bar) or 600 years (overbank), representing the youngest 50% of eroded sediment. Then, until an age of 12 ky, representing the next 48% (point bar) or 45% (overbank) of eroding sediment, the distributions are well fit by heavy-tailed power-law functions with slopes of -1 (point bar) and -0.75 (overbank). After 12 ky (6% of model run time) the remainder of the storage time distributions become exponential (light-tailed). Point bar sediment has the greatest chance (6%) of eroding at 120 years, as the river reworks recently deposited point bars. Overbank sediment has an 8% chance of eroding after 1 time step, a chance that declines by half after 3 time steps. The high probability of eroding young overbank deposits occurs as the river reworks recently formed natural levees. These results show that depositional environment affects river floodplain storage times shorter than a few centuries, and suggest that a power-law distribution with a truncated tail may be the most reasonable functional fit.
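
    The power-law portion of such a survivor function can be recovered from sampled storage times by a log-log slope fit. A sketch on synthetic Pareto-distributed ages with exponent 0.75, echoing the overbank fit (illustrative draws, not model output):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.75                                      # target power-law slope magnitude
u = rng.random(200_000)
ages = (1.0 - u) ** (-1.0 / alpha)                # Pareto: S(t) = t^-alpha, t >= 1

# Empirical survivor function S(t) = P(storage time > t) on a log-spaced grid.
t_grid = np.logspace(np.log10(2), np.log10(50), 20)
S = np.array([(ages > t).mean() for t in t_grid])

# Power-law fit: slope of log S against log t.
slope = np.polyfit(np.log(t_grid), np.log(S), 1)[0]
print(round(slope, 2))
```

In practice the same fit would be restricted to the intermediate age window (here, between the exponential young-sediment regime and the truncated tail) before reading off the exponent.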

  18. Optimal Low Energy Earth-Moon Transfers

    NASA Technical Reports Server (NTRS)

    Griesemer, Paul Ricord; Ocampo, Cesar; Cooley, D. S.

    2010-01-01

    The optimality of a low-energy Earth-Moon transfer is examined for the first time using primer vector theory. An optimal control problem is formed with the following free variables: the location, time, and magnitude of the transfer insertion burn, and the transfer time. A constraint is placed on the initial state of the spacecraft to bind it to a given initial orbit around a first body, and on the final state of the spacecraft to limit its Keplerian energy with respect to a second body. Optimal transfers in the system are shown to meet certain conditions placed on the primer vector and its time derivative. A two point boundary value problem containing these necessary conditions is created for use in targeting optimal transfers. The two point boundary value problem is then applied to the ballistic lunar capture problem, and an optimal trajectory is shown. Additionally, the ballistic lunar capture trajectory is examined to determine whether one or more additional impulses may improve on the cost of the transfer.

  19. A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.

    PubMed

    Quan, Quan; Cai, Kai-Yuan

    2016-02-01

    In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system whose solutions always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update law (that is, a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solutions. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
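
    The classical feasible-point construction that the paper starts from is a gradient flow restricted to the constraint set by the projection P = I - A^T (A A^T)^{-1} A, which is exactly where the full-row-rank regularity assumption enters (the paper's contribution is a projection that avoids this singularity). A minimal sketch for an affine constraint:

```python
import numpy as np

# Minimize f(x) = ||x - c||^2 subject to A x = b by integrating the projected
# gradient flow  dx/dt = -P grad f(x),  with  P = I - A^T (A A^T)^{-1} A.
# Forming P requires A to have full row rank (the regularity assumption).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])

P = np.eye(3) - A.T @ np.linalg.solve(A @ A.T, A)

x = np.linalg.lstsq(A, b, rcond=None)[0]      # feasible start: A x0 = b
dt = 0.05
for _ in range(500):
    x = x - dt * P @ (2 * (x - c))            # Euler step of the flow

x_star = c - A.T @ np.linalg.solve(A @ A.T, A @ c - b)   # analytic minimizer
print(np.round(x, 4), (A @ x)[0])
```

Because A P = 0, every Euler step stays on the constraint surface, so feasibility is maintained along the whole trajectory rather than restored after the fact.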

  20. A Note on the Problem of Proper Time in Weyl Space-Time

    NASA Astrophysics Data System (ADS)

    Avalos, R.; Dahia, F.; Romero, C.

    2018-02-01

    We discuss the question of whether or not a general Weyl structure is a suitable mathematical model of space-time. This is an issue that has been in debate since Weyl formulated his unified field theory for the first time. We do not present the discussion from the point of view of a particular unification theory, but instead from a more general standpoint, in which the viability of such a structure as a model of space-time is investigated. Our starting point is the well-known axiomatic approach to space-time given by Ehlers, Pirani and Schild (EPS). In this framework, we carry out an exhaustive analysis of what is required for a consistent definition of proper time and show that such a definition leads to the prediction of the so-called "second clock effect". We take the view that if, based on experience, we were to reject space-time models predicting this effect, this could be incorporated as the last axiom in the EPS approach. Finally, we provide a proof that, in this case, we are led to a Weyl integrable space-time as the most general structure that would be suitable to model space-time.

  1. Optimal ballistically captured Earth-Moon transfers

    NASA Astrophysics Data System (ADS)

    Ricord Griesemer, Paul; Ocampo, Cesar; Cooley, D. S.

    2012-07-01

    The optimality of a low-energy Earth-Moon transfer terminating in ballistic capture is examined for the first time using primer vector theory. An optimal control problem is formed with the following free variables: the location, time, and magnitude of the transfer insertion burn, and the transfer time. A constraint is placed on the initial state of the spacecraft to bind it to a given initial orbit around a first body, and on the final state of the spacecraft to limit its Keplerian energy with respect to a second body. Optimal transfers in the system are shown to meet certain conditions placed on the primer vector and its time derivative. A two point boundary value problem containing these necessary conditions is created for use in targeting optimal transfers. The two point boundary value problem is then applied to the ballistic lunar capture problem, and an optimal trajectory is shown. Additionally, the problem is then modified to fix the time of transfer, allowing for optimal multi-impulse transfers. The tradeoff between transfer time and fuel cost is shown for Earth-Moon ballistic lunar capture transfers.

  2. Prediction and Stability of Mathematics Skill and Difficulty

    PubMed Central

    Martin, Rebecca B.; Cirino, Paul T.; Barnes, Marcia A.; Ewing-Cobbs, Linda; Fuchs, Lynn S.; Stuebing, Karla K.; Fletcher, Jack M.

    2016-01-01

    The present study evaluated the stability of math learning difficulties over a 2-year period and investigated several factors that might influence this stability (categorical vs. continuous change, liberal vs. conservative cut point, broad vs. specific math assessment); the prediction of math performance over time and by performance level was also evaluated. Participants were 144 students initially identified as having a math difficulty (MD) or no learning difficulty according to low achievement criteria in the spring of Grade 3 or Grade 4. Students were reassessed 2 years later. For both measure types, a similar proportion of students changed whether assessed categorically or continuously. However, categorical change was heavily dependent on distance from the cut point and so more common for MD, who started closer to the cut point; reliable change index change was more similar across groups. There were few differences with regard to severity level of MD on continuous metrics or in terms of prediction. Final math performance on a broad computation measure was predicted by behavioral inattention and working memory while considering initial performance; for a specific fluency measure, working memory was not uniquely related, and behavioral inattention more variably related to final performance, again while considering initial performance. PMID:22392890

  3. Design and FPGA implementation for MAC layer of Ethernet PON

    NASA Astrophysics Data System (ADS)

    Zhu, Zengxi; Lin, Rujian; Chen, Jian; Ye, Jiajun; Chen, Xinqiao

    2004-04-01

    Ethernet passive optical network (EPON), which represents the convergence of low cost, high bandwidth, and support for multiple services, appears to be one of the best candidates for the next-generation access network. The work of standardizing EPON as a solution for the access network is still underway in the IEEE 802.3ah Ethernet in the First Mile (EFM) task force. The final release is expected in 2004. Up to now, there has been no standard application-specific integrated circuit (ASIC) chip available that fulfills the functions of the media access control (MAC) layer of EPON. The MAC layer in an EPON system has many functions, such as point-to-point emulation (P2PE), Ethernet MAC functionality, multi-point control protocol (MPCP), network operation, administration and maintenance (OAM), and link security. To implement the functions mentioned above, an embedded real-time operating system (RTOS) and a flexible programmable logic device (PLD) with an embedded processor are used. The software and hardware functions of the MAC layer are realized by programming the embedded microprocessor and a field programmable gate array (FPGA). Finally, some experimental results are given in this paper. The method stated here can provide a valuable reference for developing an EPON MAC layer ASIC.

  4. Prediction and stability of mathematics skill and difficulty.

    PubMed

    Martin, Rebecca B; Cirino, Paul T; Barnes, Marcia A; Ewing-Cobbs, Linda; Fuchs, Lynn S; Stuebing, Karla K; Fletcher, Jack M

    2013-01-01

    The present study evaluated the stability of math learning difficulties over a 2-year period and investigated several factors that might influence this stability (categorical vs. continuous change, liberal vs. conservative cut point, broad vs. specific math assessment); the prediction of math performance over time and by performance level was also evaluated. Participants were 144 students initially identified as having a math difficulty (MD) or no learning difficulty according to low achievement criteria in the spring of Grade 3 or Grade 4. Students were reassessed 2 years later. For both measure types, a similar proportion of students changed whether assessed categorically or continuously. However, categorical change was heavily dependent on distance from the cut point and so more common for MD, who started closer to the cut point; reliable change index change was more similar across groups. There were few differences with regard to severity level of MD on continuous metrics or in terms of prediction. Final math performance on a broad computation measure was predicted by behavioral inattention and working memory while considering initial performance; for a specific fluency measure, working memory was not uniquely related, and behavioral inattention more variably related to final performance, again while considering initial performance.

  5. Automatic registration of panoramic image sequence and mobile laser scanning data using semantic features

    NASA Astrophysics Data System (ADS)

    Li, Jianping; Yang, Bisheng; Chen, Chi; Huang, Ronggang; Dong, Zhen; Xiao, Wen

    2018-02-01

    Inaccurate exterior orientation parameters (EoPs) between sensors obtained by pre-calibration lead to failure of registration between a panoramic image sequence and mobile laser scanning data. To address this challenge, this paper proposes an automatic registration method based on semantic features extracted from panoramic images and point clouds. Firstly, accurate rotation parameters between the panoramic camera and the laser scanner are estimated using GPS- and IMU-aided structure from motion (SfM). The initial EoPs of the panoramic images are obtained at the same time. Secondly, vehicles in the panoramic images are extracted by Faster R-CNN as candidate primitives to be matched with potential corresponding primitives in the point clouds according to the initial EoPs. Finally, the translation between the panoramic camera and the laser scanner is refined by maximizing the overlapping area of corresponding primitive pairs using Particle Swarm Optimization (PSO), resulting in a finer registration between the panoramic image sequence and the point clouds. Experiments on two challenging urban scenes were conducted to assess the proposed method, and the final registration errors in both scenes were less than three pixels, which demonstrates a high level of automation, robustness and accuracy.
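
    The final refinement step is a generic stochastic search. A minimal PSO sketch is given below; the `score` callback stands in for the primitive-overlap measure, and the particle count, iteration budget and coefficients are textbook defaults, not the authors' settings:

```python
import random

def pso_refine(score, dim=3, n_particles=20, iters=60, bound=1.0, seed=3):
    """Minimal Particle Swarm Optimization: find the vector (e.g. a 3-D
    translation) maximizing 'score'.  Illustrative coefficients only."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [score(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + attraction to personal and global bests.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = score(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

    In the registration setting, `score` would reproject the matched vehicle primitives under the candidate translation and return their overlapping area.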

  6. Transition amplitude for two-time physics

    NASA Astrophysics Data System (ADS)

    Frederico, João E.; Rivelles, Victor O.

    2010-07-01

    We present the transition amplitude for a particle moving in a space with two times and D space dimensions having an Sp(2,R) local symmetry and an SO(D,2) rigid symmetry. It was obtained from the BRST-BFV quantization with a unique gauge choice. We show that by constraining the initial and final points of this amplitude to lie on some hypersurface of the D+2 space the resulting amplitude reproduces well-known systems in lower dimensions. This work provides an alternative way to derive the effects of two-time physics where all the results come from a single transition amplitude.

  7. Transition amplitude for two-time physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frederico, Joao E.; Rivelles, Victor O.; Instituto de Fisica, Universidade de Sao Paulo, Caixa Postal 66318, 05314-970, Sao Paulo, SP

    2010-07-15

    We present the transition amplitude for a particle moving in a space with two times and D space dimensions having an Sp(2,R) local symmetry and an SO(D,2) rigid symmetry. It was obtained from the BRST-BFV quantization with a unique gauge choice. We show that by constraining the initial and final points of this amplitude to lie on some hypersurface of the D+2 space the resulting amplitude reproduces well-known systems in lower dimensions. This work provides an alternative way to derive the effects of two-time physics where all the results come from a single transition amplitude.

  8. Pseudo-random bit generator based on lag time series

    NASA Astrophysics Data System (ADS)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

    In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map, using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we introduce a delay in the generation of the time series. When these new series are mapped as xn against xn+1, they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
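
    A minimal sketch of the lag-series construction follows; the lag, seeds, bifurcation parameters and bit-extraction thresholds are illustrative assumptions, not the published design:

```python
def logistic(x, mu):
    """One iterate of the logistic map x -> mu * x * (1 - x)."""
    return mu * x * (1.0 - x)

def prbg_bits(n_bits, lag=5, mu_pos=3.99, mu_neg=-1.99, s1=0.41, s2=0.73):
    """Bits from two lagged logistic series, one with a positive and one
    with a negative bifurcation parameter; the XOR and the lag together
    hide the underlying map from an observer of the output."""
    xs, ys = [s1], [s2]
    for _ in range(lag + n_bits):
        xs.append(logistic(xs[-1], mu_pos))
        ys.append(logistic(ys[-1], mu_neg))
    bits = []
    for i in range(n_bits):
        b1 = 1 if xs[i + lag] > 0.5 else 0   # delayed sample of series 1
        b2 = 1 if ys[i + lag] > 0.5 else 0   # delayed sample of series 2
        bits.append(b1 ^ b2)
    return bits
```

    A real generator would of course be validated with the NIST suite rather than by inspection.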

  9. Tuning the presence of dynamical phase transitions in a generalized XY spin chain.

    PubMed

    Divakaran, Uma; Sharma, Shraddha; Dutta, Amit

    2016-05-01

    We study an integrable spin chain with three-spin interactions and a staggered field (λ), where the latter is quenched either slowly [in a linear fashion in time (t) as t/τ, where t goes from a large negative value to a large positive value and τ is the inverse rate of quenching] or suddenly. In the process, the system crosses quantum critical points and gapless phases. We address the question of whether there exist nonanalyticities [known as dynamical phase transitions (DPTs)] in the subsequent real-time evolution of the state (reached following the quench) governed by the final time-independent Hamiltonian. In the case of sufficiently slow quenching (when τ exceeds a critical value τ_{1}), we show that DPTs, of the form similar to those occurring for quenching across an isolated critical point, can occur even when the system is slowly driven across more than one critical point and gapless phases. More interestingly, in the anisotropic situation we show that DPTs can completely disappear for some values of the anisotropy term (γ) and τ, thereby establishing the existence of boundaries in the (γ-τ) plane between the DPT and no-DPT regions in both isotropic and anisotropic cases. Our study therefore leads to a unique situation in which DPTs may not occur even when an integrable model is slowly ramped across a QCP. On the other hand, considering sudden quenches from an initial value λ_{i} to a final value λ_{f}, we show that the condition for the presence of DPTs is governed by relations involving λ_{i},λ_{f}, and γ, and the spin chain must be swept across λ=0 for DPTs to occur.

  10. Relationship Between Final Performance and Block Times with the Traditional and the New Starting Platforms with A Back Plate in International Swimming Championship 50-M and 100-M Freestyle Events

    PubMed Central

    Garcia-Hermoso, Antonio; Escalante, Yolanda; Arellano, Raul; Navarro, Fernando; Domínguez, Ana M.; Saavedra, Jose M.

    2013-01-01

    The purpose of this study was to investigate the association between block time and final performance for each sex in the 50-m and 100-m individual freestyle, distinguishing between classification (1st to 3rd, 4th to 8th, 9th to 16th) and type of starting platform (old and new) in international competitions. Twenty-six international competitions covering a 13-year period (2000-2012) were analysed retrospectively. The data corresponded to a total of 1657 swimmers’ competition histories. A two-way ANOVA (sex x classification) was performed for each event and starting platform with the Bonferroni post-hoc test, and another two-way ANOVA for sex and starting platform (sex x starting platform). Pearson’s simple correlation coefficient was used to determine correlations between the block time and the final performance. Finally, a simple linear regression analysis was done between the final time and the block time for each sex and platform. The men had shorter starting block times than the women in both events and from both platforms. For the 50-m event, medalists had shorter block times than semifinalists with the old starting platforms. Block times were directly related to performance with the old starting platforms. With the new starting platforms, however, the relationship was inverse, notably in the women's 50-m event. The block time was related to final performance in the men's 50-m event with the old starting platform, but with the new platform it was critical only for the women's 50-m event. Key Points The men had shorter block times than the women in both events and with both platforms. For both distances, the swimmers had shorter block times in their starts from the new starting platform with a back plate than with the old platform. For the 50-m event with the old starting platform, the medalists had shorter block times than the semifinalists. The new starting platform block time was determinant only in the women's 50-m event. In order to improve performance, specific training with the new platform with a back plate should be considered. PMID:24421729

  11. Smoke-Point Properties of Non-Buoyant Round Laminar Jet Diffusion Flames. Appendix J

    NASA Technical Reports Server (NTRS)

    Urban, D. L.; Yuan, Z.-G.; Sunderland, P. B.; Lin, K.-C.; Dai, Z.; Faeth, G. M.

    2000-01-01

    The laminar smoke-point properties of non-buoyant round laminar jet diffusion flames were studied emphasizing results from long-duration (100-230 s) experiments at microgravity carried out in orbit aboard the space shuttle Columbia. Experimental conditions included ethylene- and propane-fueled flames burning in still air at an ambient temperature of 300 K, pressures of 35-130 kPa, jet exit diameters of 1.6 and 2.7 mm, jet exit velocities of 170-690 mm/s, jet exit Reynolds numbers of 46-172, characteristic flame residence times of 40-302 ms, and luminous flame lengths of 15-63 mm. Contrary to the normal-gravity laminar smoke point, in microgravity, the onset of laminar smoke-point conditions involved two flame configurations: closed-tip flames with soot emissions along the flame axis and open-tip flames with soot emissions from an annular ring about the flame axis. Open-tip flames were observed at large characteristic flame residence times with the onset of soot emissions associated with radiative quenching near the flame tip: nevertheless, unified correlations of laminar smoke-point properties were obtained that included both flame configurations. Flame lengths at laminar smoke-point conditions were well correlated in terms of a corrected fuel flow rate suggested by a simplified analysis of flame shape. The present steady and non-buoyant flames emitted soot more readily than non-buoyant flames in earlier tests using ground-based microgravity facilities and than buoyant flames at normal gravity, as a result of reduced effects of unsteadiness, flame disturbances, and buoyant motion. For example, present measurements of laminar smoke-point flame lengths at comparable conditions were up to 2.3 times shorter than ground-based microgravity measurements and up to 6.4 times shorter than buoyant flame measurements. 
Finally, present laminar smoke-point flame lengths were roughly inversely proportional to pressure, to a degree somewhat smaller than observed during earlier tests both at microgravity (using ground-based facilities) and at normal gravity.

  12. The final word. OSHA's final ruling offers firm deadlines for infection control.

    PubMed

    West, K

    1992-03-01

    Departments that have put off program development while waiting for the final ruling to be published have a lot of work to do. Many departments have been cited and fined by OSHA in the past year for failure to begin infection-control programs or provide hepatitis-B vaccines to personnel. Under the new budget, OSHA was granted permission to up its fine structure sevenfold--thus, a small fine is $7,000, and the highest fine for a single violation is $70,000. Fines can have a greater impact on a department's budget than implementation of the program over time. A key point to remember is that a strong infection-control program will reduce exposure follow-up costs and worker-compensation claims. Infection control is a win-win situation.

  13. First-order reactant in homogeneous turbulence before the final period of decay. [contaminant fluctuations in chemical reaction

    NASA Technical Reports Server (NTRS)

    Kumar, P.; Patel, S. R.

    1974-01-01

    A method is described for studying theoretically the concentration fluctuations of a dilute contaminant undergoing a first-order chemical reaction. The method is based on Deissler's (1958) theory of homogeneous turbulence for times before the final period, and it follows the approach used by Loeffler and Deissler (1961) to study temperature fluctuations in homogeneous turbulence. Four-point correlation equations are obtained; it is assumed that terms containing fifth-order correlations are very small in comparison with those containing fourth-order correlations, and can therefore be neglected. A spectrum equation is obtained in a form which can be solved numerically, yielding the decay law for the concentration fluctuations in homogeneous turbulence for the period much before the final period of decay.

  14. Fourth-order numerical solutions of diffusion equation by using SOR method with Crank-Nicolson approach

    NASA Astrophysics Data System (ADS)

    Muhiddin, F. A.; Sulaiman, J.

    2017-09-01

    The aim of this paper is to investigate the effectiveness of the Successive Over-Relaxation (SOR) iterative method, using a fourth-order Crank-Nicolson (CN) discretization scheme to derive a five-point Crank-Nicolson approximation equation for solving the diffusion equation. From this approximation equation, it can be shown that the corresponding system of five-point approximation equations can be generated and then solved iteratively. In order to assess the performance of the proposed iterative method with the fourth-order CN scheme, another point iterative method, Gauss-Seidel (GS), is also presented as a reference method. Finally, from the numerical results obtained with the fourth-order CN discretization scheme, it can be concluded that the SOR iterative method is superior in terms of number of iterations, execution time, and maximum absolute error.
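
    The coupling of an implicit Crank-Nicolson time step with an inner SOR solve can be sketched as follows. For brevity this toy uses the standard second-order three-point CN stencil rather than the paper's fourth-order five-point scheme, and the relaxation factor ω is an arbitrary choice:

```python
import numpy as np

def cn_sor_step(u, r, omega=1.5, tol=1e-10, max_iter=10000):
    """Advance u_t = u_xx by one Crank-Nicolson step with mesh ratio
    r = dt/dx**2, solving the implicit tridiagonal system by SOR.
    Dirichlet end values are held fixed."""
    n = len(u)
    rhs = u.copy()                       # explicit half of the CN average
    rhs[1:-1] = u[1:-1] + 0.5 * r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    v = u.copy()                         # initial guess: previous time level
    for _ in range(max_iter):
        err = 0.0
        for i in range(1, n - 1):
            # Implicit half: (1 + r) v_i - (r/2)(v_{i-1} + v_{i+1}) = rhs_i
            gs = (rhs[i] + 0.5 * r * (v[i - 1] + v[i + 1])) / (1.0 + r)
            new = (1.0 - omega) * v[i] + omega * gs
            err = max(err, abs(new - v[i]))
            v[i] = new
        if err < tol:                    # SOR sweep converged
            break
    return v

# One step on a sine profile, which should decay without changing shape.
x = np.linspace(0.0, 1.0, 21)
u0 = np.sin(np.pi * x)
u1 = cn_sor_step(u0, r=0.5)
```

    The GS reference method of the abstract is the ω = 1 special case of the same sweep.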

  15. Bayesian model-emulation of stochastic gravitational-wave spectra for probes of the final-parsec problem with pulsar-timing arrays

    NASA Astrophysics Data System (ADS)

    Taylor, Stephen R.; Simon, Joseph; Sampson, Laura

    2017-01-01

    The final parsec of supermassive black-hole binary evolution is subject to the complex interplay of stellar loss-cone scattering, circumbinary disk accretion, and gravitational-wave emission, with binary eccentricity affected by all of these. The strain spectrum of gravitational-waves in the pulsar-timing band thus encodes rich information about the binary population's response to these various environmental mechanisms. Current spectral models have heretofore followed basic analytic prescriptions, and attempt to investigate these final-parsec mechanisms in an indirect fashion. Here we describe a new technique to directly probe the environmental properties of supermassive black-hole binaries through "Bayesian model-emulation". We perform black-hole binary population synthesis simulations at a restricted set of environmental parameter combinations, compute the strain spectra from these, then train a Gaussian process to learn the shape of the spectrum at any point in parameter space. We describe this technique, demonstrate its efficacy with a program of simulated datasets, then illustrate its power by directly constraining final-parsec physics in a Bayesian analysis of the NANOGrav 5-year dataset. The technique is fast, flexible, and robust.
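
    Stripped to its core, the emulation step is Gaussian-process regression over simulator outputs. Below is a minimal numpy sketch with a toy one-parameter "simulator" standing in for the population-synthesis code; the kernel, length scale and spectrum model are illustrative assumptions, not the NANOGrav analysis:

```python
import numpy as np

def rbf(a, b, length_scale=0.4):
    """Squared-exponential kernel between two 1-D parameter vectors."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

# Toy stand-in for the simulator: a log strain spectrum whose
# low-frequency turnover depends on an environment parameter 'env'.
freqs = np.logspace(-9.0, -7.0, 30)
def toy_spectrum(env):
    return np.log10((freqs / 1e-8) ** (-2.0 / 3.0)
                    / (1.0 + (env * 1e-8 / freqs) ** 2))

train_x = np.linspace(0.1, 2.0, 8)        # restricted set of parameter points
train_y = np.array([toy_spectrum(e) for e in train_x])

K = rbf(train_x, train_x) + 1e-10 * np.eye(len(train_x))
weights = np.linalg.solve(K, train_y)     # fit once, reuse for all queries

def emulate(env):
    """GP posterior-mean spectrum at an arbitrary parameter value."""
    return rbf(np.atleast_1d(env), train_x) @ weights
```

    Once trained, `emulate` can be called inside a Bayesian sampler at parameter values the simulator never visited, which is the point of the technique.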

  16. Bayesian model-emulation of stochastic gravitational-wave spectra for probes of the final-parsec problem with pulsar-timing arrays

    NASA Astrophysics Data System (ADS)

    Taylor, Stephen; Simon, Joseph; Sampson, Laura

    2017-01-01

    The final parsec of supermassive black-hole binary evolution is subject to the complex interplay of stellar loss-cone scattering, circumbinary disk accretion, and gravitational-wave emission, with binary eccentricity affected by all of these. The strain spectrum of gravitational-waves in the pulsar-timing band thus encodes rich information about the binary population's response to these various environmental mechanisms. Current spectral models have heretofore followed basic analytic prescriptions, and attempt to investigate these final-parsec mechanisms in an indirect fashion. Here we describe a new technique to directly probe the environmental properties of supermassive black-hole binaries through "Bayesian model-emulation". We perform black-hole binary population synthesis simulations at a restricted set of environmental parameter combinations, compute the strain spectra from these, then train a Gaussian process to learn the shape of the spectrum at any point in parameter space. We describe this technique, demonstrate its efficacy with a program of simulated datasets, then illustrate its power by directly constraining final-parsec physics in a Bayesian analysis of the NANOGrav 5-year dataset. The technique is fast, flexible, and robust.

  17. Surface acquisition through virtual milling

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1993-01-01

    Surface acquisition deals with the reconstruction of three dimensional objects from a set of data points. The most straightforward techniques require human intervention, a time consuming proposition. It is desirable to develop a fully automated alternative. Such a method is proposed in this paper. It makes use of surface measurements obtained from a 3-D laser digitizer - an instrument which provides the (x,y,z) coordinates of surface data points from various viewpoints. These points are assembled into several partial surfaces using a visibility constraint and a 2-D triangulation technique. Reconstruction of the final object requires merging these partial surfaces. This is accomplished through a procedure that emulates milling, a standard machining operation. From a geometrical standpoint the problem reduces to constructing the intersection of two or more non-convex polyhedra.

  18. Stereo camera based virtual cane system with identifiable distance tactile feedback for the blind.

    PubMed

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-06-13

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger-pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and a 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected, and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger-pointing gesture and the tactile distance feedback is perfectly identifiable to the blind.

  19. Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind

    PubMed Central

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-01-01

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger-pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and a 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected, and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger-pointing gesture and the tactile distance feedback is perfectly identifiable to the blind. PMID:24932864

  20. Combination of TLS Point Clouds and 3D Data from Kinect v2 Sensor to Complete Indoor Models

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    The combination of data coming from multiple sensors is increasingly applied in remote sensing (multi-sensor imagery) but also in cultural heritage and robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented. Their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor make it possible to reach a better level of detail in the final model, an adapted acquisition protocol may also provide several benefits, such as time savings. The paper aims at analyzing whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.

  1. Controlled chemical stabilization of polyvinyl precursor fiber, and high strength carbon fiber produced therefrom

    DOEpatents

    Naskar, Amit K.

    2016-12-27

    Method for the preparation of carbon fiber, which comprises: (i) immersing functionalized polyvinyl precursor fiber into a liquid solution having a boiling point of at least 60.degree. C.; (ii) heating the liquid solution to a first temperature of at least 25.degree. C. at which the functionalized precursor fiber engages in an elimination-addition equilibrium while a tension of at least 0.1 MPa is applied to the fiber; (iii) gradually raising the first temperature to a final temperature that is at least 20.degree. C. above the first temperature and up to the boiling point of the liquid solution for sufficient time to convert the functionalized precursor fiber to a pre-carbonized fiber; and (iv) subjecting the pre-carbonized fiber produced according to step (iii) to high temperature carbonization conditions to produce the final carbon fiber. Articles and devices containing the fibers, including woven and non-woven mats or paper forms of the fibers, are also described.

  2. An adhered-particle analysis system based on concave points

    NASA Astrophysics Data System (ADS)

    Wang, Wencheng; Guan, Fengnian; Feng, Lin

    2018-04-01

    Particles that adhere together will influence image analysis in a computer vision system. In this paper, a method based on concave points is designed. First, a corner detection algorithm is adopted to obtain a rough estimate of potential concave points after image segmentation. Then, the area ratio of the candidates is computed to accurately localize the final separation points. Finally, the separation points of each particle and the neighboring pixels are used to estimate the original particles before adhesion and to provide estimated profile images. The experimental results have shown that this approach can provide good results that match the human visual cognitive mechanism.
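
    The concave-point stage can be illustrated with a simple orientation test on contour vertices; this is a sketch of the general idea only, and the paper's corner detection and area-ratio refinement are not reproduced:

```python
def concave_points(contour, k=1):
    """Indices of contour vertices that turn clockwise on a counter-
    clockwise contour, i.e. candidate concave (adhesion) points."""
    n = len(contour)
    found = []
    for i in range(n):
        ax, ay = contour[(i - k) % n]      # neighbour k steps behind
        bx, by = contour[i]
        cx, cy = contour[(i + k) % n]      # neighbour k steps ahead
        # Signed area of triangle (a, b, c): negative => clockwise turn.
        cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        if cross < 0:
            found.append(i)
    return found

# A square with a notch pressed into its top edge: only the notch
# vertex (2, 2) should be flagged as concave.
notched = [(0, 0), (4, 0), (4, 4), (2, 2), (0, 4)]
```

    Pairs of such concave points are then natural endpoints for the separation lines between adhered particles.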

  3. Computer simulation of the probability that endangered whales will interact with oil spills, Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, M.; Jayko, K.; Bowles, A.

    1986-10-01

    A numerical model system was developed to assess quantitatively the probability that endangered bowhead and gray whales will encounter spilled oil in Alaskan waters. Bowhead and gray whale migration and diving-surfacing models, together with an oil-spill-trajectory model, comprise the system. The migration models were developed from conceptual considerations, then calibrated with and tested against observations. The distribution of animals is represented in space and time by discrete points, each of which may represent one or more whales. The movement of a whale point is governed by a random-walk algorithm which stochastically follows a migratory pathway.
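
    The pathway-following random walk can be sketched as follows; the step length, noise level and waypoints are illustrative assumptions, and the report's calibrated parameters and diving-surfacing component are not reproduced:

```python
import random

def migrate(waypoints, n_steps, step=1.0, noise=0.3, seed=7):
    """Random-walk a discrete 'whale point' along a migratory pathway:
    each move is a fixed-length step toward the current waypoint plus
    Gaussian scatter, advancing to the next waypoint when close."""
    rng = random.Random(seed)
    (x, y), target = waypoints[0], 1
    track = [(x, y)]
    for _ in range(n_steps):
        tx, ty = waypoints[target]
        dx, dy = tx - x, ty - y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist < step and target < len(waypoints) - 1:
            target += 1                      # advance to the next waypoint
            tx, ty = waypoints[target]
            dx, dy = tx - x, ty - y
            dist = (dx * dx + dy * dy) ** 0.5
        dist = max(dist, 1e-9)               # guard against a zero vector
        x += step * dx / dist + rng.gauss(0.0, noise)
        y += step * dy / dist + rng.gauss(0.0, noise)
        track.append((x, y))
    return track
```

    Intersecting many such tracks with simulated oil-spill trajectories yields the encounter probabilities the report estimates.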

  4. Preconditioning 2D Integer Data for Fast Convex Hull Computations

    PubMed Central

    2016-01-01

    In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved. PMID:26938221
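
    One way to realize this style of preconditioning is to keep, for each integer x-column, only the lowest and highest point, counted in a single pass without sorting. A sketch follows (non-negative integer coordinates are assumed here, and details of the published algorithm may differ):

```python
def precondition(points):
    """Keep only each x-column's lowest and highest point.  Every hull
    vertex survives (a vertex cannot lie strictly between two points of
    its own column), and the result is a simple polygonal chain: lower
    extremes left-to-right, then upper extremes right-to-left."""
    p = max(x for x, _ in points) + 1     # columns 0..p-1 (x >= 0 assumed)
    lo, hi = [None] * p, [None] * p
    for x, y in points:                   # single O(n) pass, no sorting
        if lo[x] is None or y < lo[x]:
            lo[x] = y
        if hi[x] is None or y > hi[x]:
            hi[x] = y
    lower = [(x, lo[x]) for x in range(p) if lo[x] is not None]
    upper = [(x, hi[x]) for x in range(p - 1, -1, -1) if hi[x] is not None]
    return lower + upper

# A 5x5 grid of 25 points reduces to a 10-point chain containing the hull.
grid = [(x, y) for x in range(5) for y in range(5)]
```

    Because the chain is already x-monotone, it can be fed directly into a linear-time hull algorithm such as Melkman's, which is what makes the overall bound O(n) when min(p, q) ≤ n.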

  5. Magnetic Reconnection during Turbulence: Statistics of X-Points and Heating

    NASA Astrophysics Data System (ADS)

    Shay, M. A.; Haggerty, C. C.; Parashar, T.; Matthaeus, W. H.; Phan, T.; Drake, J. F.; Servidio, S.; Wan, M.

    2017-12-01

    Magnetic reconnection is a ubiquitous plasma phenomenon that has been observed in turbulent plasma systems. It is an important part of the turbulent dynamics and heating of space, laboratory and astrophysical plasmas. Recent simulation and observational studies have detailed how magnetic reconnection heats plasma and this work has developed to the point where it can be applied to larger and more complex plasma systems. In this context, we examine the statistics of magnetic reconnection in fully kinetic PIC simulations to quantify the role of magnetic reconnection on energy dissipation and plasma heating. Most notably, we study the time evolution of these x-line statistics in decaying turbulence. First, we examine the distribution of reconnection rates at the x-points found in the simulation and find that their distribution is broader than the MHD counterpart, and the average value is approximately 0.1. Second, we study the time evolution of the x-points to determine when reconnection is most active in the turbulence. Finally, using our findings on these statistics, reconnection heating predictions are applied to the regions surrounding the identified x-points and this is used to study the role of magnetic reconnection in turbulent heating of plasma. The ratio of ion to electron heating rates is found to be consistent with magnetic reconnection predictions.

  6. Automatic Detection and Classification of Pole-Like Objects for Urban Cartography Using Mobile Laser Scanning Data

    PubMed Central

    Ordóñez, Celestino; Cabo, Carlos; Sanz-Ablanedo, Enoc

    2017-01-01

    Mobile laser scanning (MLS) is a modern and powerful technology capable of obtaining massive point clouds of objects in a short period of time. Although this technology is nowadays being widely applied in urban cartography and 3D city modelling, it has some drawbacks that need to be avoided in order to strengthen it. One of the most important shortcomings of MLS data is concerned with the fact that it provides an unstructured dataset whose processing is very time-consuming. Consequently, there is a growing interest in developing algorithms for the automatic extraction of useful information from MLS point clouds. This work is focused on establishing a methodology and developing an algorithm to detect pole-like objects and classify them into several categories using MLS datasets. The developed procedure starts with the discretization of the point cloud by means of a voxelization, in order to simplify and reduce the processing time in the segmentation process. In turn, a heuristic segmentation algorithm was developed to detect pole-like objects in the MLS point cloud. Finally, two supervised classification algorithms, linear discriminant analysis and support vector machines, were used to distinguish between the different types of poles in the point cloud. The predictors are the principal component eigenvalues obtained from the Cartesian coordinates of the laser points, the range of the Z coordinate, and some shape-related indexes. The performance of the method was tested in an urban area with 123 poles of different categories. Very encouraging results were obtained, since the accuracy rate was over 90%. PMID:28640189
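
    The eigenvalue-based predictors can be sketched as follows; this is an illustrative feature set in the spirit of the abstract, not the paper's exact predictor list:

```python
import numpy as np

def shape_features(cluster):
    """Sorted covariance eigenvalues, a linearity index, and the Z range
    for one segmented cluster of laser points (rows are x, y, z)."""
    pts = np.asarray(cluster, dtype=float)
    eig = np.linalg.eigvalsh(np.cov(pts.T))[::-1]   # l1 >= l2 >= l3
    linearity = (eig[0] - eig[1]) / eig[0]          # ~1 for pole-like shapes
    z_range = pts[:, 2].max() - pts[:, 2].min()
    return eig, linearity, z_range

# A synthetic vertical pole versus a flat ground patch.
rng = np.random.default_rng(0)
pole = np.column_stack([rng.normal(0, 0.01, 200),
                        rng.normal(0, 0.01, 200),
                        np.linspace(0.0, 4.0, 200)])
patch = np.column_stack([rng.uniform(0, 1, 200),
                         rng.uniform(0, 1, 200),
                         rng.normal(0, 0.01, 200)])
```

    Feature vectors of this kind are what the linear discriminant and support vector classifiers of the paper operate on.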

  7. Methane oxidation at a surface-sealed boreal landfill.

    PubMed

    Einola, Juha; Sormunen, Kai; Lensu, Anssi; Leiskallio, Antti; Ettala, Matti; Rintala, Jukka

    2009-07-01

    Methane oxidation was studied at a closed boreal landfill (area 3.9 ha, amount of deposited waste 200,000 tonnes) equipped with a passive gas collection and distribution system and a methane oxidative top soil cover integrated in a European Union landfill directive-compliant, multilayer final cover. Gas wells and distribution pipes with valves were installed to direct landfill gas through the water impermeable layer into the top soil cover. Mean methane emissions at the 25 measuring points at four measurement times (October 2005-June 2006) were 0.86-6.2 m(3) ha(-1) h(-1). Conservative estimates indicated that at least 25% of the methane flux entering the soil cover at the measuring points was oxidized in October and February, and at least 46% in June. At each measurement time, 1-3 points showed significantly higher methane fluxes into the soil cover (20-135 m(3) ha(-1) h(-1)) and methane emissions (6-135 m(3) ha(-1) h(-1)) compared to the other points (< 20 m(3) ha(-1) h(-1) and < 10 m(3) ha(-1) h(-1), respectively). These points of methane overload had a high impact on the mean methane oxidation at the measuring points, resulting in zero mean oxidation at one measurement time (November). However, it was found that by adjusting the valves in the gas distribution pipes the occurrence of methane overload can be moderated to some extent, which may increase methane oxidation. Overall, the investigated landfill gas treatment concept may be a feasible option for reducing methane emissions at landfills where a water impermeable cover system is used.

  8. The long-term outcome of tick-borne encephalitis in Central Europe.

    PubMed

    Bogovič, Petra; Stupica, Daša; Rojko, Tereza; Lotrič-Furlan, Stanka; Avšič-Županc, Tatjana; Kastrin, Andrej; Lusa, Lara; Strle, Franc

    2018-02-01

    Information on the long-term outcome of tick-borne encephalitis (TBE) is limited. To assess the frequency and severity of post-encephalitic syndrome (PES) at different time points after TBE, and to determine the parameters associated with unfavourable outcome. Adult patients diagnosed with TBE in Slovenia in the period 2007-2012 were followed up for 12 months and also examined 2-7 years after TBE. Each patient was asked to refer a person of similar age without a history of TBE to serve as control. A total of 420 patients and 295 control persons participated in the study. The proportion of patients with PES (defined as the presence of ≥ 2 subjective symptoms that newly developed or worsened since the onset of TBE and which had no other known medical explanation, and/or ≥ 1 objective neurological sign) was higher (P < 0.001) at the follow-up visit 6 months after the acute illness (127/304, 42%, 95% CI: 36-47%) than at 12 months (68/207, 33%, 95% CI: 26-40%); the proportion at 12 months was the same as at 2-7 years after TBE (137/420, 33%, 95% CI: 28-37%). However, the proportion of severe PES at the last two time points differed (9.7% vs 4.3%, P = 0.008). Multivariate logistic regression showed that unfavourable outcome at 6 months was associated with CSF leukocyte count (OR = 1.003, 95% CI: 1.001-1.005, P = 0.017), at 12 months with the disease outcome at 6 months (OR = 115.473, 95% CI: 26.009-512.667, P < 0.001), and at the final visit with disease outcome at 6 months (OR = 3.808, 95% CI: 1.151-12.593, P = 0.028) and 12 months (OR = 26.740, 95% CI: 8.648-82.680, P < 0.001). Non-specific symptoms that occurred within the four weeks before the final examination were more frequent and more constant in patients than in the control group. The frequency of PES diminished over time and stabilized 12 months after the acute illness, whereas the severity of PES continued to decline. 
Unfavourable outcomes at 12 months and at the final visit were strongly associated with the presence of PES at previous time points. Copyright © 2017 Elsevier GmbH. All rights reserved.

  9. Chiral dynamics in the low-temperature phase of QCD

    NASA Astrophysics Data System (ADS)

    Brandt, Bastian B.; Francis, Anthony; Meyer, Harvey B.; Robaina, Daniel

    2014-09-01

    We investigate the low-temperature phase of QCD and the crossover region with two light flavors of quarks. The chiral expansion around the point (T,m=0) in the temperature vs quark-mass plane indicates that a sharp real-time excitation exists with the quantum numbers of the pion. An exact sum rule is derived for the thermal modification of the spectral function associated with the axial charge density; the (dominant) pion pole contribution obeys the sum rule. We determine the two parameters of the pion dispersion relation using lattice QCD simulations and test the applicability of the chiral expansion. The time-dependent correlators are also analyzed using the maximum entropy method, yielding consistent results. Finally, we test the predictions of the chiral expansion around the point (T=0,m=0) for the temperature dependence of static observables.
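The "two parameters of the pion dispersion relation" referred to above are, in the chiral expansion, the pion quasiparticle velocity u and a mass parameter m; the dispersion relation then takes a quadratic form (notation assumed here, following standard chiral effective theory at finite temperature):

```latex
\omega_p^2 \;=\; u^2\left(p^2 + m^2\right)
```

where \omega_p is the pion pole frequency at spatial momentum p, and the lattice simulations fix u and m at each temperature.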

  10. Shape and rotational elements of comet 67P/ Churyumov-Gerasimenko derived by stereo-photogrammetric analysis of OSIRIS NAC image data

    NASA Astrophysics Data System (ADS)

    Preusker, Frank; Scholten, Frank; Matz, Klaus-Dieter; Roatsch, Thomas; Willner, Konrad; Hviid, Stubbe; Knollenberg, Jörg; Kührt, Ekkehard; Sierks, Holger

    2015-04-01

    The European Space Agency's Rosetta spacecraft is equipped with the OSIRIS imaging system, which consists of a wide-angle and a narrow-angle camera (WAC and NAC). After the approach phase, Rosetta was inserted into a descent trajectory of comet 67P/Churyumov-Gerasimenko (C-G) in early August 2014. Until early September, OSIRIS acquired several hundred NAC images of C-G's surface at different scales (from ~5 m/pixel during approach to ~0.9 m/pixel during descent). In that one-month observation period, the surface was imaged several times within different mapping sequences. With the comet's rotation period of ~12.4 h and the low spacecraft velocity (< 1 m/s), the entire NAC dataset provides multiple NAC stereo coverage, adequate for stereo-photogrammetric (SPG) analysis towards the derivation of 3D surface models. We constrained the OSIRIS NAC images with our stereo requirements (15° < stereo angles < 45°, incidence angles < 85°, emission angles < 45°, differences in illumination < 10°, scale better than 5 m/pixel) and extracted about 220 NAC images that provide at least triple stereo image coverage for the entire illuminated surface in about 250 independent multi-stereo image combinations. For each image combination we determined tie points by multi-image matching in order to set up a 3D control network and a dense surface point cloud for the precise reconstruction of C-G's shape. The control point network defines the input for a stereo-photogrammetric least-squares adjustment. Based on the statistical analysis of the adjustments, we first refined C-G's rotational state (pole orientation and rotational period) and its behavior over time. 
Based upon this description of the orientation of C-G's body-fixed reference frame, we derived corrections for the nominal navigation data (pointing and position) within a final stereo-photogrammetric block adjustment where the mean 3D point accuracy of more than 100 million surface points has been improved from ~10 m to the sub-meter range. We finally applied point filtering and interpolation techniques to these surface 3D points and show the resulting SPG-based 3D surface model with a lateral sampling rate of about 2 m.
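A selection step like the stereo screening described above reduces to checking each candidate image combination against the quoted thresholds. The sketch below illustrates that filter; the field names and sample values are assumptions, not the authors' data model.

```python
def passes_stereo_constraints(pair):
    """Check one image combination against the stereo requirements quoted
    above (angles in degrees, scale in m/pixel); 'scale better than
    5 m/pixel' is read as a scale value below 5."""
    return (15.0 < pair["stereo_angle"] < 45.0
            and pair["incidence_angle"] < 85.0
            and pair["emission_angle"] < 45.0
            and pair["illumination_diff"] < 10.0
            and pair["scale"] < 5.0)

candidates = [
    {"stereo_angle": 30, "incidence_angle": 60, "emission_angle": 20,
     "illumination_diff": 4, "scale": 2.5},   # acceptable geometry
    {"stereo_angle": 50, "incidence_angle": 60, "emission_angle": 20,
     "illumination_diff": 4, "scale": 2.5},   # stereo angle too large
]
selected = [p for p in candidates if passes_stereo_constraints(p)]
```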

  11. Two-phase strategy of neural control for planar reaching movements: I. XY coordination variability and its relation to end-point variability.

    PubMed

    Rand, Miya K; Shimansky, Yury P

    2013-03-01

    A quantitative model of optimal transport-aperture coordination (TAC) during reach-to-grasp movements has been developed in our previous studies. The utilization of that model for data analysis allowed us, for the first time, to examine the phase dependence of the precision demand specified by the CNS for neurocomputational information processing during an ongoing movement. It was shown that the CNS utilizes a two-phase strategy for movement control. That strategy consists of reducing the precision demand for neural computations during the initial phase, which decreases the cost of information processing at the expense of lower extent of control optimality. To successfully grasp the target object, the CNS increases precision demand during the final phase, resulting in higher extent of control optimality. In the present study, we generalized the model of optimal TAC to a model of optimal coordination between X and Y components of point-to-point planar movements (XYC). We investigated whether the CNS uses the two-phase control strategy for controlling those movements, and how the strategy parameters depend on the prescribed movement speed, movement amplitude and the size of the target area. The results indeed revealed a substantial similarity between the CNS's regulation of TAC and XYC. First, the variability of XYC within individual trials was minimal, meaning that execution noise during the movement was insignificant. Second, the inter-trial variability of XYC was considerable during the majority of the movement time, meaning that the precision demand for information processing was lowered, which is characteristic of the initial phase. That variability significantly decreased, indicating higher extent of control optimality, during the shorter final movement phase. The final phase was the longest (shortest) under the most (least) challenging combination of speed and accuracy requirements, fully consistent with the concept of the two-phase control strategy. 
This paper further discusses the relationship between motor variability and XYC variability.

  12. Microwave assisted extraction of iodine and bromine from edible seaweed for inductively coupled plasma-mass spectrometry determination.

    PubMed

    Romarís-Hortas, Vanessa; Moreda-Piñeiro, Antonio; Bermejo-Barrera, Pilar

    2009-08-15

    The feasibility of microwave energy to assist the solubilisation of edible seaweed samples by tetramethylammonium hydroxide (TMAH) has been investigated to extract iodine and bromine. Inductively coupled plasma-mass spectrometry (ICP-MS) has been used as a multi-element detector. Variables affecting the microwave-assisted extraction/solubilisation (temperature, TMAH volume, ramp time and hold time) were first screened by applying a fractional factorial design (2(5-1)+2) of resolution V with 2 centre points. When extracting both halogens, results showed statistical significance (confidence interval of 95%) for TMAH volume and temperature, and also for the two-way interaction between both variables. Therefore, these two variables were finally optimized by a 2(2)+star orthogonal central composite design with 5 centre points and 2 replicates, and optimum values of 200°C and 10 mL for temperature and TMAH volume, respectively, were found. The extraction time (ramp and hold times) was found to be statistically non-significant, and values of 10 and 5 min were chosen for the ramp time and the hold time, respectively. This means a fast microwave heating cycle. Repeatability of the overall procedure was found to be 6% for both elements, while limits of detection of 24.6 and 19.9 ng g(-1) were established for iodine and bromine, respectively. Accuracy of the method was assessed by analyzing the NIES-09 (Sargasso, Sargassum fulvellum) certified reference material (CRM), and the iodine and bromine concentrations found were in good agreement with the indicative values for this CRM. Finally, the method was applied to several edible dried and canned seaweed samples.
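The 2(2)+star central composite design used above can be sketched in coded units. The axial distance alpha = sqrt(2) (the rotatable choice) and the centre/half-range values used for decoding are assumptions for illustration; the paper's orthogonal alpha may differ.

```python
from itertools import product

def central_composite_design(alpha=2 ** 0.5, n_center=5):
    """Coded points for a two-factor central composite design:
    4 factorial corners, 4 axial (star) points, and n_center centre runs."""
    factorial = list(product((-1.0, 1.0), repeat=2))
    star = [(alpha, 0.0), (-alpha, 0.0), (0.0, alpha), (0.0, -alpha)]
    center = [(0.0, 0.0)] * n_center
    return factorial + star + center

def decode(points, centers, half_ranges):
    """Map coded levels back to real factor settings, e.g. temperature (C)
    and TMAH volume (mL); the centre and half-range values are illustrative."""
    return [tuple(c + x * h for x, c, h in zip(p, centers, half_ranges))
            for p in points]

design = central_composite_design()
runs = decode(design, centers=(160.0, 6.0), half_ranges=(40.0, 4.0))
# 4 factorial + 4 star + 5 centre = 13 runs per replicate; the (+1, +1)
# corner decodes to (200.0, 10.0), the reported optimum
```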

  13. Local spectrum analysis of field propagation in an anisotropic medium. Part II. Time-dependent fields.

    PubMed

    Tinkelman, Igor; Melamed, Timor

    2005-06-01

    In Part I of this two-part investigation [J. Opt. Soc. Am. A 22, 1200 (2005)], we presented a theory for phase-space propagation of time-harmonic electromagnetic fields in an anisotropic medium characterized by a generic wave-number profile. In this Part II, these investigations are extended to transient fields, setting a general analytical framework for local analysis and modeling of radiation from time-dependent extended-source distributions. In this formulation the field is expressed as a superposition of pulsed-beam propagators that emanate from all space-time points in the source domain and in all directions. Using time-dependent quadratic-Lorentzian windows, we represent the field by a phase-space spectral distribution in which the propagating elements are pulsed beams, which are formulated by a transient plane-wave spectrum over the extended-source plane. By applying saddle-point asymptotics, we extract the beam phenomenology in the anisotropic environment resulting from short-pulsed processing. Finally, the general results are applied to the special case of uniaxial crystal and compared with a reference solution.

  14. Stabilization of time domain acoustic boundary element method for the exterior problem avoiding the nonuniqueness.

    PubMed

    Jang, Hae-Won; Ih, Jeong-Guon

    2013-03-01

    The time domain boundary element method (TBEM) for calculating the exterior sound field using the Kirchhoff integral suffers from non-uniqueness and exponential divergence. In this work, a method to stabilize the TBEM calculation for the exterior problem is suggested. The time domain CHIEF (Combined Helmholtz Integral Equation Formulation) method is newly formulated to suppress low-order fictitious internal modes. This method constrains the surface Kirchhoff integral by forcing the pressures at additional interior points to be zero when the shortest retarded time between boundary nodes and an interior point elapses. However, even after using the CHIEF method, the TBEM calculation suffers from exponential divergence due to the remaining unstable high-order fictitious modes at frequencies higher than the frequency limit of the boundary element model. For complete stabilization, such troublesome modes are selectively adjusted by projecting the time response onto the eigenspace. In a test example of a transiently pulsating sphere, the final average error norm of the stabilized response compared to the analytic solution is 2.5%.

  15. Voronoi distance based prospective space-time scans for point data sets: a dengue fever cluster analysis in a southeast Brazilian town

    PubMed Central

    2011-01-01

    Background The Prospective Space-Time scan statistic (PST) is widely used for the evaluation of space-time clusters of point event data. Usually a window of cylindrical shape is employed, with a circular or elliptical base in the space domain. Recently, the concept of the Minimum Spanning Tree (MST) was applied to specify the set of potential clusters, through the Density-Equalizing Euclidean MST (DEEMST) method, for the detection of arbitrarily shaped clusters. The original map is cartogram-transformed, such that the control points are spread uniformly. That method is quite effective, but the cartogram construction is computationally expensive and complicated. Results A fast method for the detection and inference of point data set space-time disease clusters is presented, the Voronoi Based Scan (VBScan). A Voronoi diagram is built for points representing population individuals (cases and controls). The number of Voronoi cell boundaries intercepted by the line segment joining two case points defines the Voronoi distance between those points. That distance is used to approximate the density of the heterogeneous population and build the Voronoi distance MST linking the cases. The successive removal of edges from the Voronoi distance MST generates sub-trees which are the potential space-time clusters. Finally, those clusters are evaluated through the scan statistic. Monte Carlo replications of the original data are used to evaluate the significance of the clusters. An application for dengue fever in a small Brazilian city is presented. Conclusions The ability to promptly detect space-time clusters of disease outbreaks, when the number of individuals is large, was shown to be feasible, due to the reduced computational load of VBScan. Instead of changing the map, VBScan modifies the metric used to define the distance between cases, without requiring the cartogram construction. 
Numerical simulations showed that VBScan has higher power of detection, sensitivity and positive predicted value than the Elliptic PST. Furthermore, as VBScan also incorporates topological information from the point neighborhood structure, in addition to the usual geometric information, it is more robust than purely geometric methods such as the elliptic scan. Those advantages were illustrated in a real setting for dengue fever space-time clusters. PMID:21513556
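The Voronoi distance defined above (the number of cell boundaries crossed by the segment joining two case points) can be approximated without constructing the diagram explicitly, by counting changes of nearest population individual along the segment. This sampling shortcut is our illustration, not the VBScan implementation.

```python
import numpy as np

def voronoi_distance(a, b, population, n_samples=200):
    """Approximate Voronoi distance between case points `a` and `b`:
    sample points along the segment a-b, assign each sample to its
    nearest population individual (its Voronoi cell), and count owner
    changes, which equals the number of boundaries crossed when the
    sampling is dense enough."""
    pop = np.asarray(population, dtype=float)
    ts = np.linspace(0.0, 1.0, n_samples)
    samples = np.outer(1 - ts, np.asarray(a, float)) + np.outer(ts, np.asarray(b, float))
    d2 = ((samples[:, None, :] - pop[None, :, :]) ** 2).sum(axis=2)
    owners = d2.argmin(axis=1)                 # nearest individual per sample
    return int(np.count_nonzero(np.diff(owners)))

# four individuals on a line: the segment from the first to the last
# crosses the three boundaries between their cells
pop = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
crossings = voronoi_distance((0.0, 0.0), (3.0, 0.0), pop)
```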

  16. Entanglement in a model for Hawking radiation: An application of quadratic algebras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bambah, Bindu A., E-mail: bbsp@uohyd.ernet.in; Mukku, C., E-mail: mukku@iiit.ac.in; Shreecharan, T., E-mail: shreecharan@gmail.com

    2013-03-15

    Quadratic polynomially deformed su(1,1) and su(2) algebras are utilized in model Hamiltonians to show how the gravitational system consisting of a black hole, infalling radiation and outgoing (Hawking) radiation can be solved exactly. The models allow us to study the long-time behaviour of the black hole and its outgoing modes. In particular, we calculate the bipartite entanglement entropies of subsystems consisting of (a) infalling plus outgoing modes and (b) black hole modes plus the infalling modes, using the Janus-faced nature of the model. The long-time behaviour also gives us glimpses of modifications in the character of Hawking radiation. Finally, we study the phenomenon of superradiance in our model in analogy with atomic Dicke superradiance. Highlights: We examine a toy model for Hawking radiation with quantized black hole modes. We use quadratic polynomially deformed su(1,1) algebras to study its entanglement properties. We study the 'Dicke Superradiance' in black hole radiation using quadratically deformed su(2) algebras. We study the modification of the thermal character of Hawking radiation due to quantized black hole modes.

  17. Stepwise drying of medicinal plants as alternative to reduce time and energy processing

    NASA Astrophysics Data System (ADS)

    Cuervo-Andrade, S. P.; Hensel, O.

    2016-07-01

    The objective of drying medicinal plants is to extend shelf life while conserving the fresh characteristics. This is achieved by reducing the water activity (aw) of the product to a value which will inhibit the growth and development of pathogenic and spoilage microorganisms, significantly reducing enzyme activity and the rate at which undesirable chemical reactions occur. The technical drying process requires an enormous amount of thermal and electrical energy. An improvement in the quality of the product to be dried and, at the same time, a decrease in drying cost and time can be achieved through a controlled conventional drying method that makes good use of renewable energy, or by looking for other alternatives which achieve lower processing times without sacrificing final product quality. In this work the method of stepwise drying of medicinal plants is presented as an alternative to conventional drying, which uses a constant temperature during the whole process. The objective of stepwise drying is to decrease drying time and reduce energy consumption. Apart from the effects on effective drying time and energy, the influence of the different combinations of drying phases on several characteristics of the product is considered. The tests were carried out with Melissa officinalis L. variety citronella, grown in a greenhouse. For the stepwise drying process, different combinations of initial and final temperature (40/50°C) are evaluated, with different transition points associated with different moisture contents (20, 30, 40 and 50%) of the product during the process. Final quality is another important issue in food drying, since the drying process affects the quality attributes of the dried product. 
Colour changes and essential oil losses were determined with reference to measurements of the colour and essential oil content of the fresh product. Drying curves were obtained to observe the dynamics of the process for the different combinations of temperature and transition points, corresponding to different moisture contents of the product.
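A stepwise schedule of the kind evaluated above can be illustrated with a toy thin-layer drying model; the rate law, constants and moisture values below are assumptions for illustration, not values fitted to the Melissa officinalis data.

```python
def stepwise_drying(m0=80.0, m_final=10.0, m_switch=40.0,
                    t_low=40.0, t_high=50.0, dt=0.01):
    """Sketch of a two-step schedule: dry at `t_low` C until the moisture
    content (% wet basis) falls to the transition point `m_switch`, then
    finish at `t_high` C. A simple thin-layer model dM/dt = -k(T) * M is
    assumed, with k growing with temperature. Returns the drying time."""
    def k(temp):
        return 0.02 * 1.07 ** (temp - 40.0)   # illustrative rate constant
    m, t, temp = m0, 0.0, t_low
    while m > m_final:
        if m <= m_switch:
            temp = t_high                     # transition point reached
        m += -k(temp) * m * dt
        t += dt
    return t

t_step = stepwise_drying()
t_const = stepwise_drying(m_switch=0.0)       # never switches: constant 40 C
# the stepwise schedule finishes sooner than constant low-temperature drying
```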

  18. Optimization of fixed-range trajectories for supersonic transport aircraft

    NASA Astrophysics Data System (ADS)

    Windhorst, Robert Dennis

    1999-11-01

    This thesis develops near-optimal guidance laws that generate minimum fuel, time, or direct operating cost fixed-range trajectories for supersonic transport aircraft. The approach uses singular perturbation techniques to time-scale de-couple the equations of motion into three sets of dynamics, two of which are analyzed in the main body of this thesis and one of which is analyzed in the Appendix. The two-point-boundary-value-problems obtained by application of the maximum principle to the dynamic systems are solved using the method of matched asymptotic expansions. Finally, the two solutions are combined using the matching principle and an additive composition rule to form a uniformly valid approximation of the full fixed-range trajectory. The approach is used on two different time-scale formulations. The first holds weight constant, and the second allows weight and range dynamics to propagate on the same time-scale. Solutions for the first formulation are only carried out to zero order in the small parameter, while solutions for the second formulation are carried out to first order. Calculations for a HSCT design were made to illustrate the method. Results show that the minimum fuel trajectory consists of three segments: a minimum fuel energy-climb, a cruise-climb, and a minimum drag glide. The minimum time trajectory also has three segments: a maximum dynamic pressure ascent, a constant altitude cruise, and a maximum dynamic pressure glide. The minimum direct operating cost trajectory is an optimal combination of the two. For realistic costs of fuel and flight time, the minimum direct operating cost trajectory is very similar to the minimum fuel trajectory. Moreover, the HSCT has three local optimum cruise speeds, with the globally optimum cruise point at the highest allowable speed, if range is sufficiently long. The final range of the trajectory determines which locally optimal speed is best. 
Ranges of 500 to 6,000 nautical miles, subsonic and supersonic mixed flight, and varying fuel efficiency cases are analyzed. Finally, the payload-range curve of the HSCT design is determined.

  19. Size-exclusion chromatography for the determination of the boiling point distribution of high-boiling petroleum fractions.

    PubMed

    Boczkaj, Grzegorz; Przyjazny, Andrzej; Kamiński, Marian

    2015-03-01

    The paper describes a new procedure for the determination of the boiling point distribution of high-boiling petroleum fractions using size-exclusion chromatography with refractive index detection. Thus far, the determination of boiling range distribution by chromatography has been accomplished using simulated distillation with gas chromatography with flame ionization detection. This study revealed that, in spite of substantial differences in the separation mechanism and the detection mode, the size-exclusion chromatography technique yields results similar to simulated distillation and novel empty-column gas chromatography for the determination of boiling point distribution. The developed procedure using size-exclusion chromatography has substantial applicability, especially for the determination of exact final boiling point values for high-boiling mixtures, for which a standard high-temperature simulated distillation would otherwise have to be used. In that case, the precision of final boiling point determination is low due to the high final temperatures of the gas chromatograph oven and the insufficient thermal stability of both the gas chromatography stationary phase and the sample. Additionally, the use of high-performance liquid chromatography detectors more sensitive than refractive index detection allows a lower detection limit for high-molar-mass aromatic compounds, and thus increases the sensitivity of final boiling point determination. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
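The conversion from a chromatogram to a boiling point distribution can be sketched as follows, by interpolating the cumulative detector signal against a calibration of elution volume versus boiling point. The linear calibration and all numbers here are illustrative, not the paper's calibration data.

```python
import numpy as np

def boiling_point_distribution(signal, volumes, calib_volumes, calib_bp,
                               percents=(10, 50, 90)):
    """Map cumulative detector response (mass % recovered) to boiling
    points via a calibration curve (elution volume -> boiling point of
    standards), analogous to simulated distillation reporting."""
    cum = np.cumsum(signal)
    cum = 100.0 * cum / cum[-1]                  # cumulative % recovered
    v_at = np.interp(percents, cum, volumes)     # elution volume at each %
    order = np.argsort(calib_volumes)            # interp needs increasing x
    return np.interp(v_at, np.asarray(calib_volumes, float)[order],
                     np.asarray(calib_bp, float)[order])

signal = np.ones(100)                            # flat detector trace (toy)
volumes = np.linspace(10.0, 20.0, 100)           # elution volumes (mL)
bp = boiling_point_distribution(signal, volumes,
                                calib_volumes=[10.0, 20.0],
                                calib_bp=[700.0, 300.0])
# in SEC the high-boiling (large) species elute first, so the reported
# temperatures decrease from the 10% point to the 90% point
```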

  20. Observed Measures of Negative Parenting Predict Brain Development during Adolescence.

    PubMed

    Whittle, Sarah; Vijayakumar, Nandita; Dennison, Meg; Schwartz, Orli; Simmons, Julian G; Sheeber, Lisa; Allen, Nicholas B

    2016-01-01

    Limited attention has been directed toward the influence of non-abusive parenting behaviour on brain structure in adolescents. It has been suggested that environmental influences during this period are likely to impact the way that the brain develops over time. The aim of this study was to investigate the association between aggressive and positive parenting behaviors on brain development from early to late adolescence, and in turn, psychological and academic functioning during late adolescence, using a multi-wave longitudinal design. Three hundred and sixty seven magnetic resonance imaging (MRI) scans were obtained over three time points from 166 adolescents (11-20 years). At the first time point, observed measures of maternal aggressive and positive behaviors were obtained. At the final time point, measures of psychological and academic functioning were obtained. Results indicated that a higher frequency of maternal aggressive behavior was associated with alterations in the development of right superior frontal and lateral parietal cortical thickness, and of nucleus accumbens volume, in males. Development of the superior frontal cortex in males mediated the relationship between maternal aggressive behaviour and measures of late adolescent functioning. We suggest that our results support an association between negative parenting and adolescent functioning, which may be mediated by immature or delayed brain maturation.

  2. Research on facial expression simulation based on depth image

    NASA Astrophysics Data System (ADS)

    Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao

    2017-11-01

    Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction and many other fields. Facial expressions are captured with a Kinect camera. An AAM algorithm based on statistical information is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points. Facial feature points are detected automatically, while the feature points of the 3D cartoon model are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation, non-feature points are interpolated based on empirical models, and the mapping and interpolation are constrained by Bézier curves. The feature points on the cartoon face model can thus be driven as the facial expression varies, achieving real-time simulation of cartoon facial expressions. Experimental results show that the proposed method accurately simulates facial expressions. Finally, our method is compared with the previous method; actual data show that it greatly improves implementation efficiency.
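The Bézier-curve constraint used when interpolating non-feature points can be illustrated with a plain cubic Bézier evaluator; the control points and the two poses below are made-up examples, not values from the paper.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]. A curve of
    this kind can constrain the interpolated motion of a non-feature
    vertex between tracked feature-point poses."""
    u = 1.0 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# interpolate a mouth-corner vertex between a neutral and a smiling pose;
# the two inner control points shape the in-between frames
neutral, smiling = (0.0, 0.0), (3.0, 3.0)
path = [cubic_bezier(neutral, (1.0, 0.0), (2.0, 3.0), smiling, t / 10)
        for t in range(11)]
# path starts exactly at `neutral` and ends exactly at `smiling`
```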

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reister, D.B.; Pin, F.G.

    This paper addresses the problem of time-optimal motions for a mobile platform in a planar environment. The platform has two non-steerable independently driven wheels. The overall mission of the robot is expressed in terms of a sequence of via points at which the platform must be at rest in a given configuration (position and orientation). The objective is to plan time-optimal trajectories between these configurations assuming an unobstructed environment. Using Pontryagin's maximum principle (PMP), we formally demonstrate that all time-optimal motions of the platform for this problem occur for bang-bang controls on the wheels (at each instant, the acceleration on each wheel is either at its upper or lower limit). The PMP, however, only provides necessary conditions for time optimality. To find the time-optimal robot trajectories, we first parameterize the bang-bang trajectories using the switch times on the wheels (the times at which the wheel accelerations change sign). With this parameterization, we can fully search the robot trajectory space and find the switch times that will produce particular paths to a desired final configuration of the platform. We show numerically that robot trajectories with three switch times (two on one wheel, one on the other) can reach any position, while trajectories with four switch times can reach any configuration. By numerical comparison with other trajectories involving similar or greater numbers of switch times, we then identify the sets of time-optimal trajectories. These are uniquely defined using ranges of the parameters, and consist of subsets of trajectories with three switch times for the problem when the final orientation of the robot is not specified, and four switch times when a full final configuration is specified. We conclude with a description of the use of the method for trajectory planning for one of our robots.
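The bang-bang parameterization by switch times can be illustrated with a small simulator for a differential-drive platform; the geometry, the acceleration limit and the symmetric one-switch profile below are illustrative assumptions, not the authors' robot.

```python
import math

def simulate_bang_bang(switches_left, switches_right, t_final, a=1.0,
                       track=0.5, dt=1e-3):
    """Integrate a differential-drive platform under bang-bang wheel
    accelerations: each wheel starts at +a and flips sign at each of its
    switch times. Returns final (x, y, heading, v_left, v_right)."""
    x = y = th = vl = vr = 0.0
    t = 0.0
    while t < t_final:
        al = a * (-1.0) ** sum(t >= s for s in switches_left)
        ar = a * (-1.0) ** sum(t >= s for s in switches_right)
        vl += al * dt
        vr += ar * dt
        v, w = 0.5 * (vl + vr), (vr - vl) / track   # unicycle kinematics
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
        t += dt
    return x, y, th, vl, vr

# symmetric accelerate/decelerate profile on both wheels (one switch per
# wheel at the midpoint): a straight dash that is approximately at rest
# at t_final, having covered about 1 m
x, y, th, vl, vr = simulate_bang_bang([1.0], [1.0], 2.0)
```

Searching over such switch times, as the paper describes, amounts to optimizing the parameters of this forward simulation toward a desired final configuration.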

  5. Chaotic Dynamics in a Low-Energy Transfer Strategy to the Equilateral Equilibrium Points in the Earth-Moon System

    NASA Astrophysics Data System (ADS)

    Salazar, F. J. T.; Macau, E. E. N.; Winter, O. C.

In the frame of the equilateral equilibrium points exploration, numerous future space missions will require maximization of payload mass, simultaneously achieving reasonable transfer times. To fulfill this request, low-energy non-Keplerian orbits could be used to reach L4 and L5 in the Earth-Moon system instead of high-energy transfers. Previous studies have shown that chaos in physical systems like the restricted three-body Earth-Moon-particle problem can be used to direct a chaotic trajectory to a previously specified target. In this work, we propose to transfer a spacecraft from a circular Earth orbit in the chaotic region to the equilateral equilibrium points L4 and L5 in the Earth-Moon system, exploiting the chaotic region that connects the Earth with the Moon and changing the trajectory of the spacecraft (relative to the Earth) by using a gravity assist maneuver with the Moon. Choosing a sequence of small perturbations, the time of flight is reduced and the spacecraft is guided to a proper trajectory so that it uses the Moon's gravitational force to finally arrive at a desired target. In this study, the desired target will be an orbit about the Lagrangian equilibrium points L4 or L5. This strategy is not only more efficient with respect to thrust requirement, but its transfer time is also comparable to other known transfer techniques based on time optimization.

  6. Smoke-Point Properties of Nonbuoyant Round Laminar Jet Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Urban, D. L.; Yuan, Z.-G.; Sunderland, R. B.; Lin, K.-C.; Dai, Z.; Faeth, G. M.

    2000-01-01

The laminar smoke-point properties of nonbuoyant round laminar jet diffusion flames were studied emphasizing results from long duration (100-230 s) experiments at microgravity carried out on orbit in the Space Shuttle Columbia. Experimental conditions included ethylene- and propane-fueled flames burning in still air at an ambient temperature of 300 K, initial jet exit diameters of 1.6 and 2.7 mm, jet exit velocities of 170-1630 mm/s, jet exit Reynolds numbers of 46-172, characteristic flame residence times of 40-302 ms, and luminous flame lengths of 15-63 mm. The onset of laminar smoke-point conditions involved two flame configurations: closed-tip flames with first soot emissions along the flame axis and open-tip flames with first soot emissions from an annular ring about the flame axis. Open-tip flames were observed at large characteristic flame residence times with the onset of soot emissions associated with radiative quenching near the flame tip; nevertheless, unified correlations of laminar smoke-point properties were obtained that included both flame configurations. Flame lengths at laminar smoke-point conditions were well-correlated in terms of a corrected fuel flow rate suggested by a simplified analysis of flame shape. The present steady and nonbuoyant flames emitted soot more readily than earlier tests of nonbuoyant flames at microgravity using ground-based facilities and of buoyant flames at normal gravity due to reduced effects of unsteadiness, flame disturbances and buoyant motion. For example, laminar smoke-point flame lengths from ground-based microgravity measurements were up to 2.3 times longer and from buoyant flame measurements were up to 6.4 times longer than the present measurements at comparable conditions.
Finally, present laminar smoke-point flame lengths were roughly inversely proportional to pressure, which is a somewhat slower variation than observed during earlier tests both at microgravity using ground-based facilities and at normal gravity.

  7. Smoke-Point Properties of Nonbuoyant Round Laminar Jet Diffusion Flames. Appendix B

    NASA Technical Reports Server (NTRS)

    Urban, D. L.; Yuan, Z.-G.; Sunderland, P. B.; Lin, K.-C.; Dai, Z.; Faeth, G. M.; Ross, H. D. (Technical Monitor)

    2000-01-01

    The laminar smoke-point properties of non-buoyant round laminar jet diffusion flames were studied emphasizing results from long-duration (100-230 s) experiments at microgravity carried out in orbit aboard the space shuttle Columbia. Experimental conditions included ethylene- and propane-fueled flames burning in still air at an ambient temperature of 300 K, pressures of 35-130 kPa, jet exit diameters of 1.6 and 2.7 mm, jet exit velocities of 170-690 mm/s, jet exit Reynolds numbers of 46-172, characteristic flame residence times of 40-302 ms, and luminous flame lengths of 15-63 mm. Contrary to the normal-gravity laminar smoke point, in microgravity the onset of laminar smoke-point conditions involved two flame configurations: closed-tip flames with soot emissions along the flame axis and open-tip flames with soot emissions from an annular ring about the flame axis. Open-tip flames were observed at large characteristic flame residence times with the onset of soot emissions associated with radiative quenching near the flame tip: nevertheless, unified correlations of laminar smoke-point properties were obtained that included both flame configurations. Flame lengths at laminar smoke-point conditions were well correlated in terms of a corrected fuel flow rate suggested by a simplified analysis of flame shape. The present steady and nonbuoyant flames emitted soot more readily than non-buoyant flames in earlier tests using ground-based microgravity facilities and than buoyant flames at normal gravity, as a result of reduced effects of unsteadiness, flame disturbances, and buoyant motion. For example, present measurements of laminar smokepoint flame lengths at comparable conditions were up to 2.3 times shorter than ground-based microgravity measurements and up to 6.4 times shorter than buoyant flame measurements. 
Finally, present laminar smoke-point flame lengths were roughly inversely proportional to pressure, to a degree somewhat smaller than observed during earlier tests both at microgravity (using ground-based facilities) and at normal gravity.

  8. An Intrinsic Role of Beta Oscillations in Memory for Time Estimation.

    PubMed

    Wiener, Martin; Parikh, Alomi; Krakow, Arielle; Coslett, H Branch

    2018-05-22

    The neural mechanisms underlying time perception are of vital importance to a comprehensive understanding of behavior and cognition. Recent work has suggested a supramodal role for beta oscillations in measuring temporal intervals. However, the precise function of beta oscillations and whether their manipulation alters timing has yet to be determined. To accomplish this, we first re-analyzed two, separate EEG datasets and demonstrate that beta oscillations are associated with the retention and comparison of a memory standard for duration. We next conducted a study of 20 human participants using transcranial alternating current stimulation (tACS), over frontocentral cortex, at alpha and beta frequencies, during a visual temporal bisection task, finding that beta stimulation exclusively shifts the perception of time such that stimuli are reported as longer in duration. Finally, we decomposed trialwise choice data with a drift diffusion model of timing, revealing that the shift in timing is caused by a change in the starting point of accumulation, rather than the drift rate or threshold. Our results provide evidence for the intrinsic involvement of beta oscillations in the perception of time, and point to a specific role for beta oscillations in the encoding and retention of memory for temporal intervals.
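The drift-diffusion decomposition reported above can be illustrated with a toy simulation (a generic random-walk sketch with hypothetical parameters, not the authors' fitted model): shifting the starting point of accumulation toward the "long" boundary increases the proportion of "long" responses even with zero drift rate.

```python
import random

def p_long(start, drift=0.0, bound=1.0, n_trials=4000, dt=0.01, sigma=1.0, seed=1):
    """Fraction of trials in which accumulated evidence, starting at `start`
    (between 0 and `bound`), hits the upper ('long') boundary before the
    lower ('short') one."""
    rng = random.Random(seed)
    longs = 0
    for _ in range(n_trials):
        x = start
        while 0.0 < x < bound:
            # Euler-Maruyama step of the diffusion process.
            x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        if x >= bound:
            longs += 1
    return longs / n_trials
```

With zero drift, theory gives P(long) = start/bound, so moving the start from 0.5 to 0.6 raises "long" responses from about 50% to about 60% without any change in drift rate or threshold.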

  9. A Statistical Guide to the Design of Deep Mutational Scanning Experiments.

    PubMed

    Matuszewski, Sebastian; Hildebrandt, Marcel E; Ghenu, Ana-Hermina; Jensen, Jeffrey D; Bank, Claudia

    2016-09-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. Copyright © 2016 by the Genetics Society of America.
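The paper's exact estimator and confidence-interval formula are not reproduced here, but the core idea of estimating a fitness effect from time-sampled bulk-competition counts can be sketched as a log-linear regression (an illustrative simplification): the slope of log(mutant/wild-type count ratio) against time estimates the selection coefficient.

```python
import math

def selection_coefficient(mut_counts, wt_counts, times):
    """Least-squares slope of log(mutant/wild-type count ratio) vs. time;
    under exponential growth this slope estimates the selection coefficient."""
    y = [math.log(m / w) for m, w in zip(mut_counts, wt_counts)]
    n = len(times)
    t_bar = sum(times) / n
    y_bar = sum(y) / n
    num = sum((t - t_bar) * (yi - y_bar) for t, yi in zip(times, y))
    den = sum((t - t_bar) ** 2 for t in times)
    return num / den

# Noise-free counts generated with s = 0.1 are recovered exactly:
times = [0, 2, 4, 6, 8]
wt = [1000] * 5
mut = [100 * math.exp(0.1 * t) for t in times]
```

This also makes the paper's sampling result intuitive: points clustered at the ends of the experiment maximize the spread of `times` around its mean, which enlarges the regression denominator and thus tightens the slope estimate.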

  10. Final Report for Dynamic Models for Causal Analysis of Panel Data. Methodological Overview. Part II, Chapter 1.

    ERIC Educational Resources Information Center

    Hannan, Michael T.

    This technical document, part of a series of chapters described in SO 011 759, describes a basic model of panel analysis used in a study of the causes of institutional and structural change in nations. Panel analysis is defined as a record of state occupancy of a sample of units at two or more points in time; for example, voters disclose voting…

  11. La Stella di Betlemme in arte e scienza

    NASA Astrophysics Data System (ADS)

    Sigismondi, Costantino

    2014-05-01

The star of Bethlehem has been represented in many artworks, starting from the second century AD in the Priscilla Catacombs in Rome. The 14-pointed silver star of 1717, located at the place of birth of Jesus in Bethlehem, recalls the number of generations, 14, repeated three times from Abraham to Jesus in Matthew 1:1-17. Finally, the hypothesis of Mira Ceti as the star of Bethlehem is reviewed.

  12. CudaChain: an alternative algorithm for finding 2D convex hulls on the GPU.

    PubMed

    Mei, Gang

    2016-01-01

This paper presents an alternative GPU-accelerated convex hull algorithm and a novel Sorting-based Preprocessing Approach (SPA) for planar point sets. The proposed convex hull algorithm, termed CudaChain, consists of two stages: (1) two rounds of preprocessing performed on the GPU and (2) the finalization of calculating the expected convex hull on the CPU. Interior points lying inside a quadrilateral formed by four extreme points are first discarded, and the remaining points are distributed into several (typically four) subregions. Each subset of points is first sorted in parallel; then the second round of discarding is performed using SPA; and finally a simple chain is formed for the current remaining points. A simple polygon can be easily generated by directly connecting all the chains in the subregions. The expected convex hull of the input points is finally obtained by calculating the convex hull of this simple polygon. The library Thrust is utilized to realize the parallel sorting, reduction, and partitioning for better efficiency and simplicity. Experimental results show that: (1) SPA can very effectively detect and discard interior points; and (2) CudaChain achieves 5×-6× speedups over the well-known Qhull implementation for 20M points.
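CudaChain's GPU kernels are not reproduced here, but the serial analogue of a sorting-based hull construction is Andrew's monotone chain, sketched below: after lexicographic sorting, a cross-product turn test discards interior points while building the lower and upper chains.

```python
def convex_hull(points):
    """Andrew's monotone chain: sort points lexicographically, then build the
    lower and upper chains with a cross-product turn test.  Returns the hull
    vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means a clockwise or
        # collinear turn, so the middle point is not a hull vertex.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain

    lower = half(pts)
    upper = half(reversed(pts))
    # Drop the last point of each chain (it repeats the other chain's start).
    return lower[:-1] + upper[:-1]
```

The sorted order is what lets each chain be built in a single pass, which is the property the paper exploits on the GPU via Thrust's parallel sort.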

  13. Examining perceived stigma of children with newly-diagnosed epilepsy and their caregivers over a two-year period.

    PubMed

    Rood, Jennifer E; Schultz, Janet R; Rausch, Joseph R; Modi, Avani C

    2014-10-01

    The purpose of this study was to examine the following: 1) the course of perceived epilepsy-related stigma among children newly diagnosed with epilepsy (n=39) and their caregivers (n=97) over a two-year period, 2) the influence of seizure absence/presence on children and caregivers' perception of epilepsy-related stigma, and 3) the congruence of child and caregiver perception of child epilepsy-related stigma. Participants completed a measure of perceived epilepsy-related stigma at three time points, and seizure status was collected at the final time point. Results indicated that both caregivers (t(1,76)=-2.57, p<.01) and children with epilepsy (t(1,29)=-3.37, p<.01) reported decreasing epilepsy-related stigma from diagnosis to two years postdiagnosis. No significant differences were found in caregiver and child reports of perceived stigma for children experiencing seizures compared with children who have been seizure-free for the past year. Results revealed poor caregiver-child agreement of perceived epilepsy-related stigma at all three time points. These data suggest that while children with epilepsy initially perceive epilepsy-related stigma at diagnosis, their perception of stigma decreases over time. Having a better understanding of the course of epilepsy-related stigma provides clinicians with information regarding critical times to support families with stigma reduction interventions. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Examining Perceived Stigma of Children with Newly-Diagnosed Epilepsy and Their Caregivers Over a Two Year Period

    PubMed Central

    Rood, Jennifer E.; Schultz, Janet R.; Rausch, Joseph R.; Modi, Avani C.

    2014-01-01

The purpose of this study was to examine: 1) the course of perceived epilepsy-related stigma among children newly-diagnosed with epilepsy (n=39) and their caregivers (n=97) over a two year period, 2) the influence of seizure absence/presence on children and caregivers' perception of epilepsy-related stigma, and 3) congruence of child and caregiver perception of child epilepsy-related stigma. Participants completed a measure of perceived epilepsy-related stigma at three time points, and seizure status was collected at the final time point. Results indicated both caregivers (t(1,76) = −2.57, p < .01) and children with epilepsy (t(1,29) = −3.37, p < .01) reported decreasing epilepsy-related stigma from diagnosis to two years post-diagnosis. No significant differences were found in caregiver and child report of perceived stigma for children experiencing seizures as compared to children who have been seizure-free for the past year. Results revealed poor caregiver-child agreement of perceived epilepsy-related stigma at all three time points. These data suggest that while children with epilepsy initially perceive epilepsy-related stigma at diagnosis, their perception of stigma decreases over time. Having a better understanding of the course of epilepsy-related stigma provides clinicians with information regarding critical times to support families with stigma reduction interventions. PMID:25173098

  15. Sensitivity quantification of remote detection NMR and MRI

    NASA Astrophysics Data System (ADS)

    Granwehr, J.; Seeley, J. A.

    2006-04-01

    A sensitivity analysis is presented of the remote detection NMR technique, which facilitates the spatial separation of encoding and detection of spin magnetization. Three different cases are considered: remote detection of a transient signal that must be encoded point-by-point like a free induction decay, remote detection of an experiment where the transient dimension is reduced to one data point like phase encoding in an imaging experiment, and time-of-flight (TOF) flow visualization. For all cases, the sensitivity enhancement is proportional to the relative sensitivity between the remote detector and the circuit that is used for encoding. It is shown for the case of an encoded transient signal that the sensitivity does not scale unfavorably with the number of encoded points compared to direct detection. Remote enhancement scales as the square root of the ratio of corresponding relaxation times in the two detection environments. Thus, remote detection especially increases the sensitivity of imaging experiments of porous materials with large susceptibility gradients, which cause a rapid dephasing of transverse spin magnetization. Finally, TOF remote detection, in which the detection volume is smaller than the encoded fluid volume, allows partial images corresponding to different time intervals between encoding and detection to be recorded. These partial images, which contain information about the fluid displacement, can be recorded, in an ideal case, with the same sensitivity as the full image detected in a single step with a larger coil.

  16. A Composite Source Model With Fractal Subevent Size Distribution

    NASA Astrophysics Data System (ADS)

    Burjanek, J.; Zahradnik, J.

A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g., mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model in a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model in a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. The strong-ground-motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.

  17. Matrix quantum mechanics on S1 /Z2

    NASA Astrophysics Data System (ADS)

    Betzios, P.; Gürsoy, U.; Papadoulaki, O.

    2018-03-01

    We study Matrix Quantum Mechanics on the Euclidean time orbifold S1 /Z2. Upon Wick rotation to Lorentzian time and taking the double-scaling limit this theory provides a toy model for a big-bang/big crunch universe in two dimensional non-critical string theory where the orbifold fixed points become cosmological singularities. We derive the MQM partition function both in the canonical and grand canonical ensemble in two different formulations and demonstrate agreement between them. We pinpoint the contribution of twisted states in both of these formulations either in terms of bi-local operators acting at the end-points of time or branch-cuts on the complex plane. We calculate, in the matrix model, the contribution of the twisted states to the torus level partition function explicitly and show that it precisely matches the world-sheet result, providing a non-trivial test of the proposed duality. Finally we discuss some interesting features of the partition function and the possibility of realising it as a τ-function of an integrable hierarchy.

  18. The effects of practice on movement distance and final position reproduction: implications for the equilibrium-point control of movements.

    PubMed

    Jaric, S; Corcos, D M; Gottlieb, G L; Ilic, D B; Latash, M L

    1994-01-01

    Predictions of two views on single-joint motor control, namely programming of muscle force patterns and equilibrium-point control, were compared with the results of experiments with reproduction of movement distance and final location during fast unidirectional elbow flexions. Two groups of subjects were tested. The first group practiced movements over a fixed distance (36 degrees), starting from seven different initial positions (distance group, DG). The second group practiced movements from the same seven initial positions to a fixed final location (location group, LG). Later, all the subjects were tested at the practiced task with their eyes closed, and then, unexpectedly for the subjects, they were tested at the other, unpracticed task. In both groups, the task to reproduce final position had lower indices of final position variability than the task to reproduce movement distance. Analysis of the linear regression lines between initial position and final position (or movement distance) also demonstrated a better (more accurate) performance during final position reproduction than during distance reproduction. The data are in a good correspondence with the predictions of the equilibrium-point hypothesis, but not with the predictions of the force-pattern control approach.

  19. Variability of residue concentrations of ciprofloxacin in honey from treated hives.

    PubMed

    Chan, Danny; Macarthur, Roy; Fussell, Richard J; Wilford, Jack; Budge, Giles

    2017-04-01

Honey bees (Apis mellifera L.) were treated with a model veterinary drug compound (ciprofloxacin) in a 3-year study (2012-14) to investigate the variability of residue concentration in honey. Sucrose solution containing ciprofloxacin was administered to 45 hives (1 g of ciprofloxacin per hive) at the beginning of the honey flow in late May/mid-June 2012, 2013 and 2014. Buckfast honey bees (A. mellifera hybrid) were used in years 2012 and 2013. Carniolan honey bees (A. mellifera carnica) were used as a replacement for the Buckfast honey bees, due to unforeseen circumstances, in the final year of the study (2014). Honey was collected over nine scheduled time points from May/June till late October each year. Up to five hives were removed and their honey analysed per time point. Honey samples were analysed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) to determine ciprofloxacin concentration. Statistical assessment of the data shows that the inter-hive variation of ciprofloxacin concentrations in 2012/13 is very different compared with that of 2014, with relative standard deviations (RSDs) of 138% and 61%, respectively. The average ciprofloxacin concentration for 2014 at the last time point was more than 10 times that of the 2012/13 samples at the same time point. The difference between the 2012/13 data and the 2014 data is likely due to the different type of honey bees used in this study (2012/13 Buckfast versus 2014 Carniolan). Uncertainty estimates for honey with high ciprofloxacin concentration (upper 95th percentile) across all hives for 55-day withdrawal samples gave residual standard errors (RSEs) of 22%, 20% and 11% for 2012, 2013 and 2014, respectively. If the number of hives were to be reduced for future studies, RSEs were estimated to be 52% (2012), 54% (2013) and 26% (2014) for one hive per time point (nine total hives).

  20. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    DOE PAGES

    Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; ...

    2016-05-20

We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
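Christianson's reverse-accumulation algorithm and the OpenAD implementation are not reproduced here, but the key structure, differentiating through a converged fixed point via the implicit relation rather than unrolling every iteration, can be sketched on a scalar toy problem (the function f below is hypothetical and purely illustrative). The adjoint version solves the same (1 − ∂f/∂x) system in transposed form.

```python
import math

def solve_fixed_point(p, tol=1e-12):
    """Solve x = f(x, p) with f(x, p) = 0.5*cos(x) + p by Picard iteration
    (f is a contraction here, so the iteration converges)."""
    x = 0.0
    for _ in range(200):
        x_new = 0.5 * math.cos(x) + p
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new

def dx_dp(p):
    """Differentiate the converged state implicitly:
    from x = f(x, p), dx/dp = (df/dp) / (1 - df/dx) at the fixed point,
    avoiding storage of the whole iteration history."""
    x = solve_fixed_point(p)
    dfdx = -0.5 * math.sin(x)   # partial of f w.r.t. the state
    dfdp = 1.0                  # partial of f w.r.t. the parameter
    return dfdp / (1.0 - dfdx)
```

The memory saving in the abstract comes from exactly this structure: only the converged state is needed for the derivative, not every intermediate iterate.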

  1. Effect of target color and scanning geometry on terrestrial LiDAR point-cloud noise and plane fitting

    NASA Astrophysics Data System (ADS)

    Bolkas, Dimitrios; Martinez, Aaron

    2018-01-01

    Point-cloud coordinate information derived from terrestrial Light Detection And Ranging (LiDAR) is important for several applications in surveying and civil engineering. Plane fitting and segmentation of target-surfaces is an important step in several applications such as in the monitoring of structures. Reliable parametric modeling and segmentation relies on the underlying quality of the point-cloud. Therefore, understanding how point-cloud errors affect fitting of planes and segmentation is important. Point-cloud intensity, which accompanies the point-cloud data, often goes hand-in-hand with point-cloud noise. This study uses industrial particle boards painted with eight different colors (black, white, grey, red, green, blue, brown, and yellow) and two different sheens (flat and semi-gloss) to explore how noise and plane residuals vary with scanning geometry (i.e., distance and incidence angle) and target-color. Results show that darker colors, such as black and brown, can produce point clouds that are several times noisier than bright targets, such as white. In addition, semi-gloss targets manage to reduce noise in dark targets by about 2-3 times. The study of plane residuals with scanning geometry reveals that, in many of the cases tested, residuals decrease with increasing incidence angles, which can assist in understanding the distribution of plane residuals in a dataset. Finally, a scheme is developed to derive survey guidelines based on the data collected in this experiment. Three examples demonstrate that users should consider instrument specification, required precision of plane residuals, required point-spacing, target-color, and target-sheen, when selecting scanning locations. Outcomes of this study can aid users to select appropriate instrumentation and improve planning of terrestrial LiDAR data-acquisition.
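As an illustration of the plane-fitting step (a generic least-squares sketch over a hypothetical point cloud, not the processing pipeline used in the study), the RMS plane residual that serves as a noise measure can be computed as follows.

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to a list of (x, y, z) points;
    returns (a, b, c, rms_residual).  The 3x3 normal equations are solved
    with Gaussian elimination and partial pivoting."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points); syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(n)]]
    rhs = [sxz, syz, sz]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        rhs[i], rhs[piv] = rhs[piv], rhs[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        coef[i] = (rhs[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    a, b, c = coef
    rms = (sum((p[2] - (a * p[0] + b * p[1] + c)) ** 2 for p in points) / n) ** 0.5
    return a, b, c, rms
```

Comparing the RMS residual across patches scanned at different target colors, sheens, and incidence angles is one simple way to quantify the noise differences the study reports.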

  2. The role of photographic parameters in laser speckle or particle image displacement velocimetry

    NASA Technical Reports Server (NTRS)

    Lourenco, L.; Krothapalli, A.

    1987-01-01

The parameters involved in obtaining the multiple exposure photographs in the laser speckle velocimetry method (to record the light scattering by the seeding particles) were optimized. The effects of the type, concentration, and dimensions of the tracer, the exposure conditions (time between exposures, exposure time, and number of exposures), and the sensitivity and resolution of the film on the quality of the final results were investigated, photographing an experimental flow behind an impulsively started circular cylinder. The velocity data were acquired by digital processing of Young's fringes, produced by point-by-point scanning of a photographic negative. Using the optimal photographing conditions, the errors involved in the estimation of the fringe angle and spacing were of the order of 1 percent for the spacing and +/- 1 deg for the fringe orientation. The resulting accuracy in the velocity was of the order of 2-3 percent of the maximum velocity in the field.

  3. The Raptor Real-Time Processing Architecture

    NASA Astrophysics Data System (ADS)

    Galassi, M.; Starr, D.; Wozniak, P.; Brozdin, K.

    The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback, etc.) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as ``open source'' software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.

  4. Raptor -- Mining the Sky in Real Time

    NASA Astrophysics Data System (ADS)

    Galassi, M.; Borozdin, K.; Casperson, D.; McGowan, K.; Starr, D.; White, R.; Wozniak, P.; Wren, J.

    2004-06-01

The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback...) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as "open source" software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.

  5. Role of Erosion in Shaping Point Bars

    NASA Astrophysics Data System (ADS)

    Moody, J.; Meade, R.

    2012-04-01

    A powerful metaphor in fluvial geomorphology has been that depositional features such as point bars (and other floodplain features) constitute the river's historical memory in the form of uniformly thick sedimentary deposits waiting for the geomorphologist to dissect and interpret the past. For the past three decades, along the channel of Powder River (Montana USA) we have documented (with annual cross-sectional surveys and pit trenches) the evolution of the shape of three point bars that were created when an extreme flood in 1978 cut new channels across the necks of two former meander bends and radically shifted the location of a third bend. Subsequent erosion has substantially reshaped, at different time scales, the relic sediment deposits of varying age. At the weekly to monthly time scale (i.e., floods from snowmelt or floods from convective or cyclonic storms), the maximum scour depth was computed (by using a numerical model) at locations spaced 1 m apart across the entire point bar for a couple of the largest floods. The maximum predicted scour is about 0.22 m. At the annual time scale, repeated cross-section topographic surveys (25 during 32 years) indicate that net annual erosion at a single location can be as great as 0.5 m, and that the net erosion is greater than net deposition during 8, 16, and 32% of the years for the three point bars. On average, the median annual net erosion was 21, 36, and 51% of the net deposition. At the decadal time scale, an index of point bar preservation often referred to as completeness was defined for each cross section as the percentage of the initial deposit (older than 10 years) that was still remaining in 2011; computations indicate that 19, 41, and 36% of the initial deposits of sediment were eroded. 
Initial deposits were not uniform in thickness and often represented thicker pods of sediment connected by thin layers of sediment or even isolated pods at different elevations across the point bar in response to multiple floods during a water year. Erosion often was preferential and removed part or all of pods at lower elevations, and in time left what appears to be a random arrangement of sediment pods forming the point bar. Thus, we conclude that the erosional process is as important as the deposition process in shaping the final form of the point bar, and that point bars are not uniformly aggradational or transgressive deposits of sediment in which the age of the deposit increases monotonically downward at all locations across the point bar.
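
    The completeness index defined above is simple arithmetic; with made-up thicknesses (not the Powder River data), the percentage of an initial deposit still remaining, and hence the fraction eroded, can be computed as:

```python
def completeness(initial_thickness_m, remaining_thickness_m):
    """Percent of the initial (>10-yr-old) deposit still present at a cross section."""
    return 100.0 * remaining_thickness_m / initial_thickness_m

# Hypothetical example: if 0.81 m of an initially 1.0 m thick deposit remains,
# completeness is 81% and 19% has been eroded (the figure reported above for
# the first of the three point bars).
percent_eroded = 100.0 - completeness(1.0, 0.81)
```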

  6. The Relationship among Health Education Systems, Inc. Progression and Exit Examination Scores, Day or Evening Enrollment, Final Grade Point Average and NCLEX-RN® Success in Associate Degree Nursing Students

    ERIC Educational Resources Information Center

    Barnwell-Sanders, Pamela

    2015-01-01

    Graduates of associate degree (AD) nursing programs form the largest segment of first-time National Council Licensure Examination for Registered Nurses (NCLEX-RN®) test takers, yet also experience the highest rate of NCLEX-RN® failures. NCLEX-RN® failure delays entry into the profession, adding an emotional and financial toll to the unsuccessful…

  7. Real-Time Processing of Pressure-Sensitive Paint Images

    DTIC Science & Technology

    2006-12-01

    intermediate or final data to the hard disk in 3D grid format. In addition to the pressure or pressure coefficient at every grid point, the saved file may...occurs. Nevertheless, to achieve an accurate mapping between 2D image coordinates and 3D spatial coordinates, additional parameters must be introduced. A...improved mapping between the 2D and 3D coordinates. In a more sophisticated approach, additional terms corresponding to specific deformation modes

  8. Dynamic Resource Allocation to Improve Service Performance in Order Fulfillment Systems

    DTIC Science & Technology

    2009-01-01

    efficient system uses economies of scale at two points: orders are batched before processing, which reduces processing costs, and processed orders ...the effects of batching on order picking processes is well-researched and well-understood (van den Berg and Gademann, 1999). Because orders are...a final sojourn time distribution. Our work builds on existing research in matrix-geometric methods by Neuts (1981), Asmussen and Møller (2001

  9. Accurate documentation in cultural heritage by merging TLS and high-resolution photogrammetric data

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, Pierre; Alby, Emmanuel; Assali, Pierre; Poitevin, Valentin; Hullo, Jean-François; Smigiel, Eddie

    2011-07-01

    Several recording techniques are used together in Cultural Heritage Documentation projects. The main purpose of the documentation and conservation works is usually to generate geometric and photorealistic 3D models for both accurate reconstruction and visualization purposes. The recording approach discussed in this paper is based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and criteria such as geometry, texture, accuracy, resolution, and recording and processing time are often compared. TLS techniques (time of flight or phase shift systems) are often used for the recording of large and complex objects or sites. Point cloud generation from images by dense stereo or multi-image matching can be used as an alternative or a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low-cost one, as the acquisition system is limited to a digital camera and a few accessories. Indeed, the stereo matching process offers a cheap, flexible and accurate solution to get 3D point clouds and textured models. The calibration of the camera allows the processing of distortion-free images, accurate orientation of the images, and matching at the subpixel level. The main advantage of this photogrammetric methodology is to get at the same time a point cloud (whose resolution depends on the size of the pixel on the object), and therefore an accurate meshed object with its texture. After the matching and processing steps, we can use the resulting data in much the same way as a TLS point cloud, but with much better raster information for textures. The paper will address the automation of recording and processing steps, the assessment of the results, and the deliverables (e.g. PDF-3D files). Visualization aspects of the final 3D models are presented. 
Two case studies with merged photogrammetric and TLS data are finally presented: the Gallo-Roman theatre of Mandeure (France), and the medieval fortress of Châtel-sur-Moselle (France), where a network of underground galleries and vaults has been recorded.

  10. Ecological change points: The strength of density dependence and the loss of history.

    PubMed

    Ponciano, José M; Taper, Mark L; Dennis, Brian

    2018-05-01

    Change points in the dynamics of animal abundances have been recorded extensively in historical time series records. Little attention has been paid to the theoretical dynamic consequences of such change points. Here we propose a change-point model of stochastic population dynamics. This investigation embodies a shift of attention from the problem of detecting when a change will occur, to another non-trivial puzzle: using ecological theory to understand and predict the post-breakpoint behavior of the population dynamics. The proposed model and the explicit expressions derived here predict and quantify how density dependence modulates the influence of the pre-breakpoint parameters on the post-breakpoint dynamics. Time series transitioning from one stationary distribution to another contain information about where the process was before the change point, where it is heading, and how long it will take to transition, and here this information is explicitly stated. Importantly, our results provide a direct connection of the strength of density dependence with theoretical properties of dynamic systems, such as the concept of resilience. Finally, we illustrate how to harness such information through maximum likelihood estimation for state-space models, and test the model robustness to widely different forms of compensatory dynamics. The model can be used to estimate important quantities in the theory and practice of population recovery. Copyright © 2018 Elsevier Inc. All rights reserved.
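
    A minimal numerical illustration of such a change-point process (our own sketch, not the authors' exact model) is a stochastic Gompertz update on log-abundance, x_t = a + b·x_{t-1} + noise, where |b| < 1 measures the strength of density dependence. At the change point the parameters switch, and the process relaxes from the old stationary mean a1/(1-b1) toward the new one a2/(1-b2) at geometric rate b2, so stronger density dependence (smaller |b|) erases the pre-breakpoint history faster:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(a1, b1, a2, b2, t_change, t_total, sigma=0.05):
    """Stochastic Gompertz log-abundance with a parameter change point."""
    x = np.empty(t_total)
    x[0] = a1 / (1.0 - b1)              # start at the pre-change stationary mean
    for t in range(1, t_total):
        a, b = (a1, b1) if t < t_change else (a2, b2)
        x[t] = a + b * x[t - 1] + sigma * rng.standard_normal()
    return x

# pre-change stationary mean 1.0/(1-0.5) = 2.0; post-change 0.6/(1-0.8) = 3.0
x = simulate(a1=1.0, b1=0.5, a2=0.6, b2=0.8, t_change=100, t_total=400)
```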

  11. Registration of 4D cardiac CT sequences under trajectory constraints with multichannel diffeomorphic demons.

    PubMed

    Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Xu, Chenyang; Ayache, Nicholas

    2010-07-01

    We propose a framework for the nonlinear spatiotemporal registration of 4D time-series of images based on the Diffeomorphic Demons (DD) algorithm. In this framework, the 4D spatiotemporal registration is decoupled into a 4D temporal registration, defined as mapping physiological states, and a 4D spatial registration, defined as mapping trajectories of physical points. Our contribution focuses more specifically on the 4D spatial registration, which should be consistent over time, as opposed to 3D registration, which solely aims at mapping homologous points at a given time-point. First, we estimate in each sequence the motion displacement field, which is a dense representation of the point trajectories we want to register. Then, we simultaneously perform 3D registrations of corresponding time-points under the constraint of mapping the same physical points over time, called the trajectory constraints. Under these constraints, we show that the 4D spatial registration can be formulated as a multichannel registration of 3D images. To solve it, we propose a novel version of the Diffeomorphic Demons (DD) algorithm extended to vector-valued 3D images, the Multichannel Diffeomorphic Demons (MDD). For evaluation, this framework is applied to the registration of 4D cardiac computed tomography (CT) sequences and compared to other standard methods with real patient data and synthetic data simulated from a physiologically realistic electromechanical cardiac model. Results show that the trajectory constraints act as a temporal regularization consistent with motion, whereas the multichannel registration acts as a spatial regularization. Finally, using these trajectory constraints with multichannel registration yields the best compromise between registration accuracy, temporal and spatial smoothness, and computation times. 
A prospective example of application is also presented with the spatiotemporal registration of 4D cardiac CT sequences of the same patient before and after radiofrequency ablation (RFA) in a case of atrial fibrillation (AF). The intersequence spatial transformations over a cardiac cycle allow analysis and quantification of the regression of left ventricular hypertrophy and its impact on cardiac function.

  12. First Neutrino Point-Source Results from the 22 String Icecube Detector

    NASA Astrophysics Data System (ADS)

    Abbasi, R.; Abdou, Y.; Ackermann, M.; Adams, J.; Aguilar, J.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Bradley, L.; Braun, J.; Breder, D.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cohen, S.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Day, C. T.; De Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; De Young, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Gerhardt, L.; Gladstone, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hasegawa, Y.; Heise, J.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Imlay, R. L.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Klepser, S.; Knops, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Lauer, R.; Leich, H.; Lennarz, D.; Lucke, A.; Lundberg, J.; Lünemann, J.; Madsen, J.; Majumdar, P.; Maruyama, R.; Mase, K.; Matis, H. S.; McParland, C. 
P.; Meagher, K.; Merck, M.; Mészáros, P.; Middell, E.; Milke, N.; Miyamoto, H.; Mohr, A.; Montaruli, T.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Patton, S.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Potthoff, N.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Rutledge, D.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Satalecka, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schukraft, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K.-H.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Terranova, C.; Tilav, S.; Tluczykont, M.; Toale, P. A.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; Voigt, B.; Walck, C.; Waldenmaier, T.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebusch, C. H.; Wiedemann, A.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Ice Cube Collaboration

    2009-08-01

    We present new results of searches for neutrino point sources in the northern sky, using data recorded in 2007-2008 with 22 strings of the IceCube detector (approximately one-fourth of the planned total) and 275.7 days of live time. The final sample of 5114 neutrino candidate events agrees well with the expected background of atmospheric muon neutrinos and a small component of atmospheric muons. No evidence of a point source is found, with the most significant excess of events in the sky at 2.2σ after accounting for all trials. The average upper limit over the northern sky for point sources of muon neutrinos with an E^{-2} spectrum is E^{2} Φ_{ν_{μ}} < 1.4 × 10^{-11} TeV cm^{-2} s^{-1}, in the energy range from 3 TeV to 3 PeV, improving the previous best average upper limit, from the AMANDA-II detector, by a factor of 2.

  13. Prospective memory: A comparative perspective

    PubMed Central

    Crystal, Jonathon D.; Wilson, A. George

    2014-01-01

    Prospective memory consists of forming a representation of a future action, temporarily storing that representation in memory, and retrieving it at a future time point. Here we review the recent development of animal models of prospective memory. We review experiments using rats that focus on the development of time-based and event-based prospective memory. Next, we review a number of prospective-memory approaches that have been used with a variety of non-human primates. Finally, we review selected approaches from the human literature on prospective memory to identify targets for development of animal models of prospective memory. PMID:25101562

  14. Detecting recurrence domains of dynamical systems by symbolic dynamics.

    PubMed

    beim Graben, Peter; Hutt, Axel

    2013-04-12

    We propose an algorithm for the detection of recurrence domains of complex dynamical systems from time series. Our approach exploits the characteristic checkerboard texture of recurrence domains exhibited in recurrence plots. In phase space, recurrence plots yield intersecting balls around sampling points that can be merged into cells of a phase space partition. We construct this partition by a rewriting grammar applied to the symbolic dynamics of time indices. A maximum entropy principle defines the optimal size of the intersecting balls. A final application to high-dimensional brain signals yields an optimal symbolic recurrence plot revealing functional components of the signal.
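
    The starting object of the algorithm, the recurrence plot itself, is easy to construct: R[i, j] = 1 exactly when sampling points i and j lie within an ε-ball of each other. The sketch below shows only this first step; the rewriting grammar and the maximum-entropy choice of ball size are not reproduced:

```python
import numpy as np

def recurrence_plot(x, eps):
    """R[i, j] = 1 when points i and j of the series are within eps of each other."""
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]                  # treat a scalar series as 1-D points
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d <= eps).astype(int)

# a slowly sampled periodic signal shows the block/checkerboard texture
# that the partitioning algorithm exploits
t = np.linspace(0.0, 4.0 * np.pi, 100)
R = recurrence_plot(np.sin(t), eps=0.3)
```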

  15. Beach Point Test Site, Aberdeen Proving Ground, Edgewood Area, Maryland. Focused Feasibility Study, Final Project Work Plan

    DTIC Science & Technology

    1993-10-01

    Disposal Act (SWDA)/Resource Conservation and Recovery Act (RCRA). Jacobs Engineering Group Inc., FINAL PROJECT WORK PLAN ...shaded map of contaminated areas defined by chemical data; more than one map may be constructed if spatial and graphic constraints are encountered. Results...that a geophysical program down the axis of Beach Point, using geophysical technologies that are ...subsurface hydrostratigraphic ... beneath

  16. Validation of a new noniterative method for accurate position determination of a scanning laser vibrometer

    NASA Astrophysics Data System (ADS)

    Pauwels, Steven; Boucart, Nick; Dierckx, Benoit; Van Vlierberghe, Pieter

    2000-05-01

    The scanning laser Doppler vibrometer is becoming a popular instrument for vibration testing. It is a non-contacting transducer that can measure many points at high spatial resolution in a short time. Manually aiming the laser beam at the points that need to be measured is very time consuming. In order to use the instrument effectively, the position of the laser Doppler vibrometer needs to be determined relative to the structure. If the position of the laser Doppler vibrometer is known, any visible point on the structure can be hit and measured automatically. A new algorithm for this position determination is developed, based on a geometry model of the structure. After manually aiming the laser beam at 4 or more known points, the laser position and orientation relative to the structure are determined. Using this calculated position and orientation, a list with the mirror angles for every measurement point is generated, which is used during the measurement. The algorithm is validated using 3 practical cases. In the first case, a plate is used whose points are measured very accurately, so the geometry model is assumed to be perfect. The second case is a brake disc, whose geometry points are measured with a ruler and are thus less accurate. The final validation is done on a body-in-white of a car, with a reduced finite element model used as the geometry model. These cases show that the new algorithm is very effective and practically usable.
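
    The paper's noniterative method works from measured mirror angles; as a simplified stand-in for the underlying pose problem, the sketch below recovers the rotation R and translation t relating 4 or more known structure points P to the same points expressed in the scanner frame Q (Q ≈ RP + t) via the Kabsch/orthogonal-Procrustes construction. This illustrates the geometry only, not the algorithm validated above:

```python
import numpy as np

def rigid_pose(P, Q):
    """Least-squares R, t with Q ≈ P @ R.T + t (Kabsch algorithm)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                      # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t

# demo: recover a known 30-degree rotation about z plus a translation
th = np.radians(30.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([1.0, -2.0, 0.5])
P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_pose(P, Q)
```

    Four non-coplanar points, as in the demo, are the minimum that determines the pose uniquely, which matches the "4 or more known points" requirement above.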

  17. A Theoretical Framework for Lagrangian Descriptors

    NASA Astrophysics Data System (ADS)

    Lopesino, C.; Balibrea-Iniesta, F.; García-Garrido, V. J.; Wiggins, S.; Mancho, A. M.

    This paper provides a theoretical background for Lagrangian Descriptors (LDs). The goal of achieving rigorous proofs that justify the ability of LDs to detect invariant manifolds is simplified by introducing an alternative definition for LDs. The definition is stated for n-dimensional systems with general time dependence; however, we rigorously prove that this method reveals the stable and unstable manifolds of hyperbolic points in four particular 2D cases: a hyperbolic saddle point for linear autonomous systems, a hyperbolic saddle point for nonlinear autonomous systems, a hyperbolic saddle point for linear nonautonomous systems, and a hyperbolic saddle point for nonlinear nonautonomous systems. We also discuss further rigorous results which show the ability of LDs to highlight additional invariant sets, such as n-tori. These results are a simple extension of ergodic partition theory, which we illustrate by applying this methodology to well-known examples, such as the planar field of the harmonic oscillator and the 3D ABC flow. Finally, we provide a thorough discussion on the requirement of the objectivity (frame-invariance) property for tools designed to reveal phase space structures, and its implications for Lagrangian descriptors.
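
    A minimal numerical sketch of an arclength-type LD for the first of the four cases, the linear autonomous saddle x' = x, y' = -y: integrate the speed along the trajectory through each initial condition forward and backward over [-τ, τ]. Sharp transversal minima of M mark the stable and unstable manifolds, here the coordinate axes. The Euler integrator and parameter values are our own choices, not the paper's:

```python
import numpy as np

def lagrangian_descriptor(x0, y0, tau=2.0, dt=1e-3):
    """Arclength LD for the saddle x' = x, y' = -y via forward/backward Euler."""
    M = 0.0
    for direction in (+1.0, -1.0):      # forward, then backward in time
        x, y = x0, y0
        for _ in range(int(tau / dt)):
            vx, vy = x, -y              # saddle vector field
            M += np.hypot(vx, vy) * dt  # accumulate speed (arclength rate)
            x += direction * vx * dt
            y += direction * vy * dt
    return M
```

    On the stable manifold (x = 0) or the unstable manifold (y = 0), one exponentially growing component of the integrand is absent, so M is strictly smaller there than at nearby off-axis points.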

  18. Floquet Engineering in Quantum Chains

    NASA Astrophysics Data System (ADS)

    Kennes, D. M.; de la Torre, A.; Ron, A.; Hsieh, D.; Millis, A. J.

    2018-03-01

    We consider a one-dimensional interacting spinless fermion model, which displays the well-known Luttinger liquid (LL) to charge density wave (CDW) transition as a function of the ratio between the strength of the interaction U and the hopping J . We subject this system to a spatially uniform drive which is ramped up over a finite time interval and becomes time periodic in the long-time limit. We show that by using a density matrix renormalization group approach formulated for infinite system sizes, we can access the large-time limit even when the drive induces finite heating. When both the initial and long-time states are in the gapless (LL) phase, the final state has power-law correlations for all ramp speeds. However, when the initial and final state are gapped (CDW phase), we find a pseudothermal state with an effective temperature that depends on the ramp rate, both for the Magnus regime in which the drive frequency is very large compared to other scales in the system and in the opposite limit where the drive frequency is less than the gap. Remarkably, quantum defects (instantons) appear when the drive tunes the system through the quantum critical point, in a realization of the Kibble-Zurek mechanism.

  19. Improving pointing of Toruń 32-m radio telescope: effects of rail surface irregularities

    NASA Astrophysics Data System (ADS)

    Lew, Bartosz

    2018-03-01

    Over the last few years a number of software and hardware improvements have been implemented on the 32-m Cassegrain radio telescope located near Toruń. The 19-bit angle encoders have been upgraded to 29-bit encoders in the azimuth and elevation axes. The control system has been substantially improved in order to account for a number of previously neglected astrometric effects that are relevant for milli-degree pointing. In the summer of 2015, as a result of maintenance works, the orientation of the secondary mirror was slightly altered, which degraded the pointing precision to well below the telescope's nominal capabilities. In preparation for observations at the highest available frequency of 30 GHz, we use the One Centimeter Receiver Array (OCRA) to take the most accurate pointing data ever collected with the telescope, and we analyze it in order to improve the pointing precision. We introduce a new generalized pointing model that, for the first time, accounts for the rail irregularities, and we show that the telescope can achieve root mean square pointing accuracy at the level of < 8″ in azimuth and < 12″ in elevation. Finally, we discuss the implemented pointing improvements in the light of effects that may influence their long-term stability.
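
    The usual first step of such pointing work, fitting a harmonic pointing model to measured offsets by linear least squares, can be sketched as below. The three-term basis and the synthetic offsets are illustrative only; the paper's generalized model additionally includes rail-irregularity terms:

```python
import numpy as np

def fit_pointing_terms(az_deg, daz_arcsec):
    """Fit offset = c0 + c1*cos(az) + c2*sin(az) by linear least squares."""
    az = np.radians(az_deg)
    A = np.column_stack([np.ones_like(az), np.cos(az), np.sin(az)])
    coef, *_ = np.linalg.lstsq(A, daz_arcsec, rcond=None)
    return coef

# synthetic offsets: a 10" zero-point error plus a 5" cos(az) collimation-like term
az = np.arange(0.0, 360.0, 5.0)
daz = 10.0 + 5.0 * np.cos(np.radians(az))
c = fit_pointing_terms(az, daz)
```

    Rail-irregularity terms would enter the same least-squares design matrix as additional columns, e.g. a tabulated or harmonic function of azimuth along the rail.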

  20. Monitoring urban subsidence based on SAR interferometric point target analysis

    USGS Publications Warehouse

    Zhang, Y.; Zhang, Jiahua; Gong, W.; Lu, Z.

    2009-01-01

    Interferometric point target analysis (IPTA) is one of the latest developments in radar interferometric processing. It is achieved by analyzing the interferometric phases of individual point targets, which are discrete and present temporally stable backscattering characteristics, in long temporal series of interferometric SAR images. This paper analyzes the interferometric phase model of point targets and then addresses two key issues within the IPTA process. First, a spatial searching method is proposed to unwrap the interferometric phase difference between two neighboring point targets. The height residual error and linear deformation rate of each point target can then be calculated when a global reference point with known height correction and deformation history is chosen. Second, a spatial-temporal filtering scheme is proposed to further separate the atmospheric phase and nonlinear deformation phase from the residual interferometric phase. Finally, an experiment with the developed IPTA methodology is conducted over the Suzhou urban area. In total, 38 ERS-1/2 SAR scenes are analyzed, and deformation information over 3,546 point targets in the time span 1992-2002 is generated. The IPTA-derived deformation shows very good agreement with the published result, which demonstrates that the IPTA technique can be developed into an operational tool to map ground subsidence over urban areas.
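
    A hedged sketch of the kind of two-parameter phase model used in such point-target analysis: each unwrapped interferometric phase is modeled as a topographic term proportional to the height residual dh (scaled by the perpendicular baseline) plus a linear-deformation term proportional to the rate v and the temporal baseline, so several interferograms give an overdetermined linear system. The geometry constants and data below are synthetic (roughly ERS-like), not from the Suzhou experiment:

```python
import numpy as np

# phi_i ≈ k * (b_perp_i / (r * sin(inc))) * dh + k * dt_i * v,  with k = 4*pi/lam
# (sign conventions vary; this is an illustrative model, not the paper's software)

def solve_height_and_rate(phi, b_perp, dt, lam=0.0566, r=850e3, inc=np.radians(23.0)):
    """Least-squares height residual dh [m] and linear rate v [m/yr]."""
    k = 4.0 * np.pi / lam
    A = np.column_stack([k * b_perp / (r * np.sin(inc)), k * dt])
    (dh, v), *_ = np.linalg.lstsq(A, phi, rcond=None)
    return dh, v

# synthetic check: 5 interferograms, true dh = 2.0 m, true v = 5 mm/yr
b_perp = np.array([-420.0, -150.0, 30.0, 210.0, 370.0])   # m
dt = np.array([0.1, 0.6, 1.2, 2.3, 3.1])                  # years
k = 4.0 * np.pi / 0.0566
phi = k * b_perp / (850e3 * np.sin(np.radians(23.0))) * 2.0 + k * dt * 0.005
dh_est, v_est = solve_height_and_rate(phi, b_perp, dt)
```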

  1. A definition and classification of status epilepticus--Report of the ILAE Task Force on Classification of Status Epilepticus.

    PubMed

    Trinka, Eugen; Cock, Hannah; Hesdorffer, Dale; Rossetti, Andrea O; Scheffer, Ingrid E; Shinnar, Shlomo; Shorvon, Simon; Lowenstein, Daniel H

    2015-10-01

    The Commission on Classification and Terminology and the Commission on Epidemiology of the International League Against Epilepsy (ILAE) have charged a Task Force to revise concepts, definition, and classification of status epilepticus (SE). The proposed new definition of SE is as follows: Status epilepticus is a condition resulting either from the failure of the mechanisms responsible for seizure termination or from the initiation of mechanisms which lead to abnormally prolonged seizures (after time point t1). It is a condition which can have long-term consequences (after time point t2), including neuronal death, neuronal injury, and alteration of neuronal networks, depending on the type and duration of seizures. This definition is conceptual, with two operational dimensions: the first is the length of the seizure and the time point (t1) beyond which the seizure should be regarded as "continuous seizure activity." The second time point (t2) is the time of ongoing seizure activity after which there is a risk of long-term consequences. In the case of convulsive (tonic-clonic) SE, both time points (t1 at 5 min and t2 at 30 min) are based on animal experiments and clinical research. This evidence is incomplete, and there is furthermore considerable variation, so these time points should be considered the best estimates currently available. Data are not yet available for other forms of SE, but as knowledge and understanding increase, time points can be defined for specific forms of SE based on scientific evidence and incorporated into the definition, without changing the underlying concepts. A new diagnostic classification system of SE is proposed, which will provide a framework for clinical diagnosis, investigation, and therapeutic approaches for each patient. There are four axes: (1) semiology; (2) etiology; (3) electroencephalography (EEG) correlates; and (4) age. 
Axis 1 (semiology) lists different forms of SE divided into those with prominent motor symptoms, those without prominent motor symptoms, and currently indeterminate conditions (such as acute confusional states with epileptiform EEG patterns). Axis 2 (etiology) is divided into subcategories of known and unknown causes. Axis 3 (EEG correlates) adopts the latest recommendations by consensus panels to use the following descriptors for the EEG: name of pattern, morphology, location, time-related features, modulation, and effect of intervention. Finally, axis 4 divides age groups into neonatal, infancy, childhood, adolescence and adulthood, and the elderly. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.

  2. Guidance, Navigation, and Control Performance for the GOES-R Spacecraft

    NASA Technical Reports Server (NTRS)

    Chapel, Jim; Stancliffe, Devin; Bevacqua, Tim; Winkler, Stephen; Clapp, Brian; Rood, Tim; Gaylor, David; Freesland, Doug; Krimchansky, Alexander

    2014-01-01

    The Geostationary Operational Environmental Satellite-R Series (GOES-R) is the first of the next generation geostationary weather satellites. The series represents a dramatic increase in Earth observation capabilities, with 4 times the resolution, 5 times the observation rate, and 3 times the number of spectral bands. GOES-R also provides unprecedented availability, with less than 120 minutes per year of lost observation time. This paper presents the Guidance Navigation & Control (GN&C) requirements necessary to realize the ambitious pointing, knowledge, and Image Navigation and Registration (INR) objectives of GOES-R. Because the suite of instruments is sensitive to disturbances over a broad spectral range, a high fidelity simulation of the vehicle has been created with modal content over 500 Hz to assess the pointing stability requirements. Simulation results are presented showing acceleration, shock response spectra (SRS), and line of sight (LOS) responses for various disturbances from 0 Hz to 512 Hz. Simulation results demonstrate excellent performance relative to the pointing and pointing stability requirements, with LOS jitter for the isolated instrument platform of approximately 1 micro-rad. Attitude and attitude rate knowledge are provided directly to the instrument with an accuracy defined by the Integrated Rate Error (IRE) requirements. The data are used internally for motion compensation. The final piece of the INR performance is orbit knowledge, which GOES-R achieves with GPS navigation. Performance results are shown demonstrating compliance with the 50 to 75 m orbit position accuracy requirements. As presented in this paper, the GN&C performance supports the challenging mission objectives of GOES-R.

  3. Injection Laryngoplasty Using Micronized Acellular Dermis for Vocal Fold Paralysis: Long-term Voice Outcomes.

    PubMed

    Hernandez, Stephen C; Sibley, Haley; Fink, Daniel S; Kunduk, Melda; Schexnaildre, Mell; Kakade, Anagha; McWhorter, Andrew J

    2016-05-01

    Micronized acellular dermis has been used for nearly 15 years to correct glottic insufficiency. With previous demonstration of safety and efficacy, this study aims to evaluate intermediate and long-term voice outcomes in those who underwent injection laryngoplasty for unilateral vocal fold paralysis. Technique and timing of injection were also reviewed to assess their impact on outcomes. Case series with chart review. Tertiary care center. Patients undergoing injection laryngoplasty from May 2007 to September 2012 were reviewed for possible inclusion. Pre- and postoperative Voice Handicap Index (VHI) scores, as well as senior speech-language pathologists' blinded assessment of voice, were collected for analysis. The final sample included patients who underwent injection laryngoplasty for unilateral vocal fold paralysis, 33 of whom had VHI results and 37 of whom had voice recordings. Additional data were obtained, including technique and timing of injection. Analysis was performed on those patients above with VHI and perceptual voice grades before and at least 6 months following injection. Mean VHI improved by 28.7 points at 6 to 12 months and 22.8 points at >12 months (P = .001). Mean perceptual voice grades improved by 17.6 points at 6 to 12 months and 16.3 points at >12 months (P < .001). No statistically significant difference was found with technique or time to injection. Micronized acellular dermis is a safe injectable that improved both patient-completed voice ratings and blinded reviewer voice gradings at intermediate and long-term follow-up. Further investigation may be warranted regarding technique and timing of injection. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.

  4. Genetic variation of growth dynamics in maize (Zea mays L.) revealed through automated non-invasive phenotyping.

    PubMed

    Muraya, Moses M; Chu, Jianting; Zhao, Yusheng; Junker, Astrid; Klukas, Christian; Reif, Jochen C; Altmann, Thomas

    2017-01-01

    Hitherto, most quantitative trait loci of maize growth and biomass yield have been identified for a single time point, usually the final harvest stage. Through this approach cumulative effects are detected, without considering genetic factors causing phase-specific differences in growth rates. To assess the genetics of growth dynamics, we employed automated non-invasive phenotyping to monitor the plant sizes of 252 diverse maize inbred lines at 11 different developmental time points; 50k SNP array genotype data were used for genome-wide association mapping and genomic selection. The heritability of biomass was estimated to be over 71%, and the average prediction accuracy amounted to 0.39. Using the individual time point data, 12 main effect marker-trait associations (MTAs) and six pairs of epistatic interactions were detected that displayed different patterns of expression at various developmental time points. A subset of them also showed significant effects on relative growth rates in different intervals. The detected MTAs jointly explained up to 12% of the total phenotypic variation, decreasing with developmental progression. Using non-parametric functional mapping and multivariate mapping approaches, four additional marker loci affecting growth dynamics were detected. Our results demonstrate that plant biomass accumulation is a complex trait governed by many small effect loci, most of which act at certain restricted developmental phases. This highlights the need for investigation of stage-specific growth affecting genes to elucidate important processes operating at different developmental phases. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.

  5. Distinct Neurochemical Adaptations Within the Nucleus Accumbens Produced by a History of Self-Administered vs Non-Contingently Administered Intravenous Methamphetamine

    PubMed Central

    Lominac, Kevin D; Sacramento, Arianne D; Szumlinski, Karen K; Kippin, Tod E

    2012-01-01

    Methamphetamine is a highly addictive psychomotor stimulant, yet the neurobiological consequences of methamphetamine self-administration remain under-characterized. Thus, we employed microdialysis in rats trained to self-administer intravenous (IV) infusions of methamphetamine (METH-SA) or saline (SAL) and a group of rats receiving non-contingent IV infusions of methamphetamine (METH-NC) at 1 or 21 days withdrawal to determine the dopamine and glutamate responses in the nucleus accumbens (NAC) to a 2 mg/kg methamphetamine intraperitoneal challenge. Furthermore, basal NAC extracellular glutamate content was assessed employing no net-flux procedures in these three groups at both time points. At both 1- and 21-day withdrawal points, methamphetamine elicited a rise in extracellular dopamine in SAL animals and this effect was sensitized in METH-NC rats. However, METH-SA animals showed a much greater sensitized dopamine response to the drug challenge compared with the other groups. Additionally, acute methamphetamine decreased extracellular glutamate in both SAL and METH-NC animals at both time points. In contrast, METH-SA rats exhibited a modest and delayed rise in glutamate at 1-day withdrawal and this rise was sensitized at 21 days withdrawal. Finally, no net-flux microdialysis revealed elevated basal glutamate and an increased extraction fraction at both withdrawal time points in METH-SA rats. Although METH-NC rats exhibited no change in the glutamate extraction fraction, they exhibited a time-dependent elevation in basal glutamate levels. These data illustrate for the first time that a history of methamphetamine self-administration produces enduring changes in NAC neurotransmission and that non-pharmacological factors have a critical role in the expression of these methamphetamine-induced neurochemical adaptations. PMID:22030712
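    The no net-flux analysis mentioned above reduces to a simple linear regression: plotting net flux (C_in - C_out) against the perfusate concentration C_in, the slope is the extraction fraction and the x-intercept is the basal extracellular concentration. A minimal sketch (the helper name and the illustrative numbers are hypothetical, not data from the study):

```python
def no_net_flux(c_in, c_out):
    """Fit net flux (C_in - C_out) vs C_in by ordinary least squares.
    Slope = extraction fraction Ed; x-intercept = basal concentration."""
    net = [i - o for i, o in zip(c_in, c_out)]
    n = len(c_in)
    mx = sum(c_in) / n
    my = sum(net) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(c_in, net)) / \
            sum((x - mx) ** 2 for x in c_in)
    intercept = my - slope * mx
    return slope, -intercept / slope   # (extraction fraction, basal level)
```

A rise in the fitted slope between groups corresponds to the "increased extraction fraction" reported above, independently of any change in the basal level.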

  6. Initial Validation of NDVI time series from AVHRR, VEGETATION, and MODIS

    NASA Technical Reports Server (NTRS)

    Morisette, Jeffrey T.; Pinzon, Jorge E.; Brown, Molly E.; Tucker, Jim; Justice, Christopher O.

    2004-01-01

    The paper will address Theme 7: Multi-sensor opportunities for VEGETATION. We present analysis of a long-term vegetation record derived from three moderate resolution sensors: AVHRR, VEGETATION, and MODIS. While empirically based manipulation can ensure agreement between the three data sets, there is a need to validate the series. This paper uses atmospherically corrected ETM+ data available over the EOS Land Validation Core Sites as an independent data set with which to compare the time series. We use ETM+ data from 15 globally distributed sites, 7 of which contain repeat coverage in time. These high-resolution data are compared to the values of each sensor by spatially aggregating the ETM+ to each specific sensor's spatial coverage. The aggregated ETM+ value provides a point estimate for a specific site on a specific date. The standard deviation of that point estimate is used to construct a confidence interval for that point estimate. The values from each moderate resolution sensor are then evaluated with respect to that confidence interval. Results show that AVHRR, VEGETATION, and MODIS data can be combined to assess temporal uncertainties and address data continuity issues and that the atmospherically corrected ETM+ data provide an independent source with which to compare that record. The final product is a consistent time series climate record that links historical observations to current and future measurements.
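    The validation step described above can be sketched as follows. The interval half-width of z times the pixel standard deviation is an assumed convention, and the function names are hypothetical, not from the paper:

```python
import math

def ci_from_aggregation(etm_pixels, z=1.96):
    """Aggregate fine-resolution NDVI pixels over one coarse footprint
    into a point estimate and a confidence interval (assumed z*sigma)."""
    n = len(etm_pixels)
    mean = sum(etm_pixels) / n
    var = sum((x - mean) ** 2 for x in etm_pixels) / (n - 1)
    half = z * math.sqrt(var)            # interval built from pixel spread
    return mean - half, mean + half

def sensor_agrees(sensor_ndvi, etm_pixels):
    """Does the moderate-resolution value fall inside the ETM+ interval?"""
    lo, hi = ci_from_aggregation(etm_pixels)
    return lo <= sensor_ndvi <= hi
```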

  7. Final report on APMP supplementary comparison of industrial platinum resistance thermometer for range -50 °C to 400 °C. (APMP.T-S6)

    NASA Astrophysics Data System (ADS)

    Ali, Nurulaini Md; Othman, Hafidzah; Ho, Mong-Kim; Yang, Inseok; Gabi, Victor; Zhang, Zhe; Sheng, Ying; Hu, Jing; Zhang, Jintao; Tsai, Shu-Fei; Liao, Su-Chuion; Shafiqul Alam, Md; Norranim, Uthai; Yaokulbodee, Charuayrat; Liedberg, Hans; Gamal Ahmed, Mohamed; Harba, Naser; Fuad Flaifel, Mustafa; Suherlan; Achmadi, Aditya; Haoyuan, Kho; Yan, Fan; Boon Kwee, Neoh; Thanh Binh, Pham; Ragay, Monalisa

    2017-01-01

    The National Metrology Laboratory, Malaysia (NML-SIRIM) coordinated the Supplementary Comparison of Industrial Platinum Resistance Thermometers (IPRTs), which ran from July 2009 to April 2011, together with the National Measurement Institute of Australia (NMIA) and the Korea Research Institute of Standards and Science (KRISS). The comparison was open to all APMP members, and 16 participants took part in order to consolidate or improve their calibration and measurement capabilities (CMCs). Three 100-ohm IPRT artefacts, belonging to NML-SIRIM, NMIA and KRISS, were used, and three measurement loops were run in parallel to shorten the circulation time. The stability of the artefacts was monitored through initial and final ice-point measurements at each laboratory. Participants performed measurements at the temperature points (0, -50, -30, 0, 100, 200, 300, 400, 0) °C. All submitted data were compiled and adjusted to the nominal temperatures, and by using the linkage through NMIA and KRISS as the reference value for all three loops, the temperature deviations of participants in different loops could be compared. The Birge ratio was used for the final exclusion of outlying results from the analysis of each loop, and the measurement performance of each participant was summarized by the En number. Main text: To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
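    The En number used to summarize each participant's performance is conventionally the deviation from the reference value divided by the combined expanded uncertainty. A minimal sketch (the illustrative values are hypothetical):

```python
import math

def en_number(x_lab, u_lab, x_ref, u_ref):
    """En value: deviation from the reference value divided by the
    combined expanded uncertainty (U assumed expanded, e.g. k = 2)."""
    return (x_lab - x_ref) / math.sqrt(u_lab ** 2 + u_ref ** 2)

# |En| <= 1 is conventionally read as satisfactory performance.
```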

  8. Hierarchical leak detection and localization method in natural gas pipeline monitoring sensor networks.

    PubMed

    Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning

    2012-01-01

    In light of the problems of low recognition efficiency, high false alarm rates and poor localization accuracy in traditional pipeline security detection technology, this paper proposes a hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, original monitoring signals are processed by wavelet transform technology to extract the single mode signals as well as characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed and characteristic parameters are sent as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate initial recognition results for final decisions. Furthermore, a weighted average localization algorithm based on time difference of arrival is introduced for determining the leak point's position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method could effectively improve the accuracy of the leak point localization and reduce the undetected rate as well as the false alarm rate.
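    The localization step can be sketched for the simplest one-dimensional case: a sensor pair bracketing the leak yields a position from the arrival-time difference, and several pairwise estimates are fused by a weighted average. The sign convention, propagation speed, and weights below are illustrative assumptions, not the paper's exact scheme:

```python
def tdoa_locate(length, speed, dt):
    """1-D TDOA: leak position between sensors at x = 0 and x = length.
    dt = arrival time at x = 0 minus arrival time at x = length;
    speed is the signal propagation speed along the pipe."""
    return (length + speed * dt) / 2.0

def weighted_average(estimates, weights):
    """Fuse several pairwise position estimates (weights could be,
    e.g., inverse variances; here they are arbitrary)."""
    return sum(x * w for x, w in zip(estimates, weights)) / sum(weights)
```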

  9. Instance-based learning: integrating sampling and repeated decisions from experience.

    PubMed

    Gonzalez, Cleotilde; Dutt, Varun

    2011-10-01

    In decisions from experience, there are 2 experimental paradigms: sampling and repeated-choice. In the sampling paradigm, participants sample between 2 options as many times as they want (i.e., the stopping point is variable), observe the outcome with no real consequences each time, and finally select 1 of the 2 options, which causes them to earn or lose money. In the repeated-choice paradigm, participants select 1 of the 2 options for a fixed number of times and receive immediate outcome feedback that affects their earnings. These 2 experimental paradigms have been studied independently, and different cognitive processes have often been assumed to take place in each, as represented in widely diverse computational models. We demonstrate that behavior in these 2 paradigms relies upon common cognitive processes proposed by the instance-based learning theory (IBLT; Gonzalez, Lerch, & Lebiere, 2003) and that the stopping point is the only difference between the 2 paradigms. A single cognitive model based on IBLT (with an added stopping point rule in the sampling paradigm) captures human choices and predicts the sequence of choice selections across both paradigms. We integrate the paradigms through quantitative model comparison, where IBLT outperforms the best models created for each paradigm separately. We discuss the implications for the psychology of decision making. © 2011 American Psychological Association

  10. Fickian dispersion is anomalous

    DOE PAGES

    Cushman, John H.; O’Malley, Dan

    2015-06-22

    The thesis put forward here is that the occurrence of Fickian dispersion in geophysical settings is a rare event and consequently should be labeled as anomalous. What people classically call anomalous is really the norm. In a Lagrangian setting, a process with mean square displacement which is proportional to time is generally labeled as Fickian dispersion. With a number of counterexamples we show why this definition is fraught with difficulty. In a related discussion, we show an infinite second moment does not necessarily imply the process is super dispersive. By employing a rigorous mathematical definition of Fickian dispersion we illustrate why it is so hard to find a Fickian process. We go on to employ a number of renormalization group approaches to classify non-Fickian dispersive behavior. Scaling laws for the probability density function for a dispersive process, the distribution for the first passage times, the mean first passage time, and the finite-size Lyapunov exponent are presented for fixed points of both deterministic and stochastic renormalization group operators. The fixed points of the renormalization group operators are p-self-similar processes. A generalized renormalization group operator is introduced whose fixed points form a set of generalized self-similar processes. Finally, power-law clocks are introduced to examine multi-scaling behavior. Several examples of these ideas are presented and discussed.
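    For reference, the Lagrangian labeling mentioned above is the mean-square-displacement scaling, stated here only to fix notation:

```latex
\big\langle \left\| \mathbf{X}(t) - \langle \mathbf{X}(t) \rangle \right\|^{2} \big\rangle \;\propto\; t
```

    Sub- and super-dispersive behavior correspond, by the usual convention, to replacing $t$ on the right-hand side by $t^{\alpha}$ with $\alpha < 1$ or $\alpha > 1$; the paper's point is that this scaling alone is an unreliable classifier.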

  11. Caesium-137 and strontium-90 temporal series in the Tagus River: experimental results and a modelling study.

    PubMed

    Miró, Conrado; Baeza, Antonio; Madruga, María J; Periañez, Raul

    2012-11-01

    The objective of this work consisted of analysing the spatial and temporal evolution of two radionuclide concentrations in the Tagus River. Time-series analysis techniques and numerical modelling have been used in this study. (137)Cs and (90)Sr concentrations have been measured from 1994 to 1999 at several sampling points in Spain and Portugal. These radionuclides have been introduced into the river by the liquid releases from several nuclear power plants in Spain, as well as from global fallout. Time-series analysis techniques have allowed the determination of radionuclide transit times along the river, and have also pointed out the existence of temporal cycles of radionuclide concentrations at some sampling points, which are attributed to water management in the reservoirs placed along the Tagus River. A stochastic dispersion model, in which transport with water, radioactive decay and water-sediment interactions are solved through Monte Carlo methods, has been developed. Model results are, in general, in reasonable agreement with measurements. The model has finally been applied to the calculation of mean ages of radioactive content in water and sediments in each reservoir. This kind of model can be a very useful tool to support the decision-making process after an eventual emergency situation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Low-cost standalone multi-sensor thermometer for long time measurements

    NASA Astrophysics Data System (ADS)

    Kumchaiseemak, Nakorn; Hormwantha, Tongchai; Wungmool, Piyachat; Suwanatus, Suchat; Kanjai, Supaporn; Lertkitthaworn, Thitima; Jutamanee, Kanapol; Luengviriya, Chaiya

    2017-09-01

    We present a portable device for long-time recording of the temperature at multiple measuring points. Thermocouple wires are utilized as the sensors attached to the objects. To minimize the production cost, the measured voltage signals are relayed via a multiplexer to a set of amplifiers and finally to a single microcontroller. The observed temperature and the corresponding date and time, obtained from a real-time clock circuit, are recorded in a memory card for further analysis. The device is powered by a rechargeable battery and placed in a rainproof container, so it can operate under outdoor conditions. A demonstration of the device usage in a mandarin orange cultivation field of the Royal Project, located in northern Thailand, is presented.

  13. The dynamics of behavior in modified dictator games

    PubMed Central

    2017-01-01

    We investigate the dynamics of individual pro-social behavior over time. The dynamics are tested by running the same experiment with the same subjects at several points in time. To exclude learning and reputation building, we employ non-strategic decision tasks and a sequential prisoner's dilemma as a control treatment. In the first wave, pro-social concerns explain a high share of individual decisions. Pro-social decisions decrease over time, however. In the final wave, most decisions can be accounted for by assuming pure selfishness. Stable behavior, in the sense that subjects stick to their decisions over time, is observed predominantly for purely selfish subjects. We offer two explanations for our results: diminishing experimenter demand effects and moral self-licensing. PMID:28448506

  14. Real-Time GNSS Positioning with JPL's new GIPSYx Software

    NASA Astrophysics Data System (ADS)

    Bar-Sever, Y. E.

    2016-12-01

    The JPL Global Differential GPS (GDGPS) System is now producing real-time orbit and clock solutions for GPS, GLONASS, BeiDou, and Galileo. The operations are based on JPL's next generation geodetic analysis and data processing software, GIPSYx (also known as RTGx). We will examine the impact of the nascent GNSS constellations on real-time kinematic positioning for earthquake monitoring, and assess the marginal benefits from each constellation. We will discuss the options for signal selection, inter-signal bias modeling, and estimation strategies in the context of real-time point positioning. We will provide a brief overview of the key features and attributes of GIPSYx. Finally, we will describe the current natural hazard monitoring services from the GDGPS System.

  15. Evaluation of cost-effectiveness from the funding body's point of view of ultrasound-guided central venous catheter insertion compared with the conventional technique.

    PubMed

    Noritomi, Danilo Teixeira; Zigaib, Rogério; Ranzani, Otavio T; Teich, Vanessa

    2016-01-01

    To evaluate the cost-effectiveness, from the funding body's point of view, of real-time ultrasound-guided central venous catheter insertion compared to the traditional method, which is based on the external anatomical landmark technique. A theoretical simulation based on international literature data was applied to the Brazilian context, i.e., the Unified Health System (Sistema Único de Saúde - SUS). A decision tree was constructed that showed the two central venous catheter insertion techniques: real-time ultrasonography versus external anatomical landmarks. The probabilities of failure and complications were extracted from a search on the PubMed and Embase databases, and values associated with the procedure and with complications were taken from market research and the Department of Information Technology of the Unified Health System (DATASUS). Each central venous catheter insertion alternative had a cost that could be calculated by following each of the possible paths on the decision tree. The incremental cost-effectiveness ratio was calculated by dividing the mean incremental cost of real-time ultrasound compared to the external anatomical landmark technique by the mean incremental benefit, in terms of avoided complications. When considering the incorporation of real-time ultrasound and the concomitant lower cost due to the reduced number of complications, the decision tree revealed a final mean cost for the external anatomical landmark technique of 262.27 Brazilian reals (R$) and for real-time ultrasound of R$187.94. The final incremental cost of the real-time ultrasound-guided technique was -R$74.33 per central venous catheter. The incremental cost-effectiveness ratio was -R$2,494.34 due to the pneumothorax avoided. Real-time ultrasound-guided central venous catheter insertion was associated with decreased failure and complication rates and hypothetically reduced costs from the view of the funding body, which in this case was the SUS.
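    The incremental cost-effectiveness ratio described above is simply the incremental cost divided by the incremental benefit. A sketch using the mean per-catheter costs quoted in the abstract; the effect difference (complications avoided per catheter) is a made-up placeholder, since the abstract does not report it directly:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: incremental cost per
    incremental unit of benefit (here, complications avoided)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Mean costs per catheter (R$) from the abstract; 0.03 avoided
# pneumothoraces per catheter is a hypothetical illustration only.
ratio = icer(187.94, 262.27, 0.03, 0.0)   # negative: cheaper AND better
```

A negative ratio with lower cost and greater benefit, as reported here, means the new technique dominates the comparator.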

  16. Automatic extraction of the mid-sagittal plane using an ICP variant

    NASA Astrophysics Data System (ADS)

    Fieten, Lorenz; Eschweiler, Jörg; de la Fuente, Matías; Gravius, Sascha; Radermacher, Klaus

    2008-03-01

    Precise knowledge of the mid-sagittal plane is important for the assessment and correction of several deformities. Furthermore, the mid-sagittal plane can be used for the definition of standardized coordinate systems such as pelvis or skull coordinate systems. A popular approach for mid-sagittal plane computation is based on the selection of anatomical landmarks located either directly on the plane or symmetrically to it. However, the manual selection of landmarks is a tedious, time-consuming and error-prone task, which requires great care. In order to overcome this drawback, previously it was suggested to use the iterative closest point (ICP) algorithm: After an initial mirroring of the data points on a default mirror plane, the mirrored data points should be registered iteratively to the model points using rigid transforms. Finally, a reflection transform approximating the cumulative transform could be extracted. In this work, we present an ICP variant for the iterative optimization of the reflection parameters. It is based on a closed-form solution to the least-squares problem of matching data points to model points using a reflection. In experiments on CT pelvis and skull datasets our method showed a better ability to match homologous areas.
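    The closed-form step can be sketched as a generic Kabsch-style construction with the determinant constrained to -1; this is not necessarily the authors' exact formulation, and `fit_reflection` assumes known point correspondences (in the full ICP variant these come from closest-point matching at each iteration):

```python
import numpy as np

def fit_reflection(data, model):
    """Closed-form least-squares reflection (R, t) mapping data points
    onto corresponding model points: SVD solution with det(R) forced
    to -1 so that R is a reflection rather than a rotation."""
    cd, cm = data.mean(axis=0), model.mean(axis=0)
    H = (data - cd).T @ (model - cm)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    V = Vt.T
    D = np.diag([1.0, 1.0, -np.linalg.det(V @ U.T)])
    R = V @ D @ U.T                         # orthogonal, det(R) = -1
    t = cm - R @ cd
    # The mirror plane's normal is the eigenvector of R with eigenvalue -1.
    return R, t
```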

  17. Nonuniform multiview color texture mapping of image sequence and three-dimensional model for faded cultural relics with sift feature points

    NASA Astrophysics Data System (ADS)

    Li, Na; Gong, Xingyu; Li, Hongan; Jia, Pengtao

    2018-01-01

    For faded relics, such as Terracotta Army, the 2D-3D registration between an optical camera and point cloud model is an important part for color texture reconstruction and further applications. This paper proposes a nonuniform multiview color texture mapping for the image sequence and the three-dimensional (3D) model of point cloud collected by Handyscan3D. We first introduce nonuniform multiview calibration, including the explanation of its algorithm principle and the analysis of its advantages. We then establish transformation equations based on sift feature points for the multiview image sequence. At the same time, the selection of nonuniform multiview sift feature points is introduced in detail. Finally, the solving process of the collinear equations based on multiview perspective projection is given with three steps and the flowchart. In the experiment, this method is applied to the color reconstruction of the kneeling figurine, Tangsancai lady, and general figurine. These results demonstrate that the proposed method provides an effective support for the color reconstruction of the faded cultural relics and be able to improve the accuracy of 2D-3D registration between the image sequence and the point cloud model.

  18. Dynamic performance of maximum power point tracking circuits using sinusoidal extremum seeking control for photovoltaic generation

    NASA Astrophysics Data System (ADS)

    Leyva, R.; Artillan, P.; Cabal, C.; Estibals, B.; Alonso, C.

    2011-04-01

    The article studies the dynamic performance of a family of maximum power point tracking circuits used for photovoltaic generation. It revisits the sinusoidal extremum seeking control (ESC) technique which can be considered as a particular subgroup of the Perturb and Observe algorithms. The sinusoidal ESC technique consists of adding a small sinusoidal disturbance to the input and processing the perturbed output to drive the operating point at its maximum. The output processing involves a synchronous multiplication and a filtering stage. The filter instance determines the dynamic performance of the MPPT based on sinusoidal ESC principle. The approach uses the well-known root-locus method to give insight about damping degree and settlement time of maximum-seeking waveforms. This article shows the transient waveforms in three different filter instances to illustrate the approach. Finally, an experimental prototype corroborates the dynamic analysis.
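    The loop described above (dither, synchronous multiplication, low-pass filtering, integration) can be sketched as a toy discrete-time simulation on a static power curve; the gains, frequencies, and the quadratic curve are arbitrary illustrative choices, not values from the article:

```python
import math

def esc_mppt(power, v0, a=0.1, gain=0.5, w=20 * math.pi, dt=0.005, steps=24000):
    """Toy sinusoidal extremum-seeking loop on a static power curve.
    The operating point v is dithered by a*sin(w*t); the demodulated
    product power*sin(w*t) is low-pass filtered into a signal
    proportional to dP/dv, which an integrator drives to zero."""
    v, grad = v0, 0.0
    for n in range(steps):
        t = n * dt
        p = power(v + a * math.sin(w * t))    # perturbed power measurement
        demod = p * math.sin(w * t)           # synchronous demodulation
        grad += 5.0 * dt * (demod - grad)     # first-order low-pass filter
        v += gain * dt * grad                 # integrate toward the maximum
    return v
```

The low-pass filter here plays the role of the "filter instance" whose design, as the article notes, determines the damping and settling time of the maximum-seeking waveform.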

  19. Regularity results for the minimum time function with Hörmander vector fields

    NASA Astrophysics Data System (ADS)

    Albano, Paolo; Cannarsa, Piermarco; Scarinci, Teresa

    2018-03-01

    In a bounded domain of Rn with boundary given by a smooth (n - 1)-dimensional manifold, we consider the homogeneous Dirichlet problem for the eikonal equation associated with a family of smooth vector fields {X1 , … ,XN } subject to Hörmander's bracket generating condition. We investigate the regularity of the viscosity solution T of such problem. Due to the presence of characteristic boundary points, singular trajectories may occur. First, we characterize these trajectories as the closed set of all points at which the solution loses point-wise Lipschitz continuity. Then, we prove that the local Lipschitz continuity of T, the local semiconcavity of T, and the absence of singular trajectories are equivalent properties. Finally, we show that the last condition is satisfied whenever the characteristic set of {X1 , … ,XN } is a symplectic manifold. We apply our results to several examples.
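    For orientation, the Dirichlet problem studied here is commonly written in the following form (a standard formulation of the eikonal equation associated with the vector fields, stated only to fix ideas):

```latex
\sum_{i=1}^{N} \left( X_i(x) \cdot \nabla T(x) \right)^{2} = 1 \quad \text{in } \Omega,
\qquad T = 0 \quad \text{on } \partial\Omega .
```

    Characteristic boundary points are those where all the $X_i$ are tangent to $\partial\Omega$, which is what permits the singular trajectories discussed above.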

  20. Fuel feasibility study for Red River Army Depot boiler plant. Final report. [Economic breakeven points for conversion to fossil fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ables, L.D.

    This paper establishes economic breakeven points for the conversion to various fossil fuels as a function of time and pollution constraints for the main boiler plant at Red River Army Depot in Texarkana, Texas. In carrying out the objectives of this paper, the author develops what he considers to be the basic conversion costs and operating costs for each fossil fuel under investigation. These costs are analyzed by the use of the present worth comparison method, and the minimum cost difference between the present fuel and the proposed fuel which would justify the conversion to the proposed fuel is calculated. These calculated breakeven points allow a fast and easy method of determining the feasibility of a fuel by merely knowing the relative price difference between the fuels under consideration. (GRA)
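    The present worth comparison referred to above discounts each fuel's cost stream to today's dollars and compares totals; conversion is justified when the converted plant's present worth, including the conversion cost, falls below that of the present fuel. A sketch with hypothetical figures:

```python
def present_worth(annual_cost, rate, years, initial=0.0):
    """Present worth of an up-front cost plus a uniform annual
    operating cost, discounted at the given rate."""
    return initial + sum(annual_cost / (1.0 + rate) ** t
                         for t in range(1, years + 1))

def conversion_justified(conv_cost, op_new, op_old, rate, years):
    """Breakeven test: convert when the proposed fuel's discounted
    total (conversion + operation) undercuts the present fuel's."""
    return (present_worth(op_new, rate, years, initial=conv_cost)
            < present_worth(op_old, rate, years))
```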

  1. A Multiphase Validation of Atlas-Based Automatic and Semiautomatic Segmentation Strategies for Prostate MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Spencer; Rodrigues, George, E-mail: george.rodrigues@lhsc.on.ca; Department of Epidemiology/Biostatistics, University of Western Ontario, London

    2013-01-01

    Purpose: To perform a rigorous technological assessment and statistical validation of a software technology for anatomic delineations of the prostate on MRI datasets. Methods and Materials: A 3-phase validation strategy was used. Phase I consisted of anatomic atlas building using 100 prostate cancer MRI data sets to provide training data sets for the segmentation algorithms. In phase II, 2 experts contoured 15 new MRI prostate cancer cases using 3 approaches (manual, N points, and region of interest). In phase III, 5 new physicians with variable MRI prostate contouring experience segmented the same 15 phase II datasets using 3 approaches: manual, N points with no editing, and full autosegmentation with user editing allowed. Statistical analyses for time and accuracy (using Dice similarity coefficient) endpoints used traditional descriptive statistics, analysis of variance, analysis of covariance, and pooled Student t test. Results: In phase I, average (SD) total and per slice contouring time for the 2 physicians was 228 (75), 17 (3.5), 209 (65), and 15 seconds (3.9), respectively. In phase II, statistically significant differences in physician contouring time were observed based on physician, type of contouring, and case sequence. The N points strategy resulted in superior segmentation accuracy when initial autosegmented contours were compared with final contours. In phase III, statistically significant differences in contouring time were observed based on physician, type of contouring, and case sequence again. The average relative timesavings for N points and autosegmentation were 49% and 27%, respectively, compared with manual contouring. The N points and autosegmentation strategies resulted in average Dice values of 0.89 and 0.88, respectively. Pre- and postedited autosegmented contours demonstrated a higher average Dice similarity coefficient of 0.94. Conclusion: The software provided robust contours with minimal editing required.
Time savings were observed for all physicians irrespective of experience level and baseline manual contouring speed.
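    The accuracy endpoint above, the Dice similarity coefficient, measures the overlap of two contours as twice the shared volume over the summed volumes. A minimal sketch over voxel label sets (the set-based representation is an illustrative choice):

```python
def dice(a, b):
    """Dice similarity coefficient between two collections of labeled
    voxels: 2|A ∩ B| / (|A| + |B|); 1.0 means identical contours."""
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b))
```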

  2. Surgical Pathology Resident Rotation Restructuring at a Tertiary Care Academic Center.

    PubMed

    Mehr, Chelsea R; Obstfeld, Amrom E; Barrett, Amanda C; Montone, Kathleen T; Schwartz, Lauren E

    2017-01-01

    Changes in the field of pathology and resident education necessitate ongoing evaluation of residency training. Evolutionary change is particularly important for surgical pathology rotations, which form the core of anatomic pathology training programs. In the past, we organized this rotation based on subjective insight. When faced with the recent need to restructure the rotation, we strove for a more evidence-based process. Our approach involved 2 primary sources of data. We quantified the number of cases and blocks submitted per case type to estimate workload and surveyed residents about the time required to gross specimens in all organ systems. A multidisciplinary committee including faculty, residents, and staff evaluated the results and used the data to model how various changes to the rotation would affect resident workload, turnaround time, and other variables. Finally, we identified rotation structures that equally distributed work and created a point-based system that capped grossing time for residents of different experience. Following implementation, we retrospectively compared turnaround time and duty hour violations before and after these changes and surveyed residents about their experiences with both systems. We evaluated the accuracy of the point-based system by examining grossing times and comparing them to the assigned point values. We found overall improvement in the rotation following the implementation. As there is essentially no literature on the subject of surgical pathology rotation organization, we hope that our experience will provide a road map to improve pathology resident education at other institutions.

  3. Endotracheal cuff pressure changes with change in position in neurosurgical patients.

    PubMed

    Athiraman, UmeshKumar; Gupta, Rohit; Singh, Georgene

    2015-01-01

    Placement of a cuffed endotracheal tube for the administration of general anesthesia is routine. The cuff of the endotracheal tube is inflated with air to achieve an adequate seal to prevent micro-aspiration. Overinflation of the cuff can decrease mucosal perfusion, leading to pressure necrosis and nerve palsies, while an inadequate seal can lead to micro-aspiration. The cuff pressure therefore has to be monitored and kept within the prescribed limits of 20-30 cm of water. To observe the effect of different positions on the endotracheal cuff pressure in patients undergoing neurosurgical procedures. This is an observational study conducted on 70 patients undergoing neurosurgical procedures in various positions. After intubation, the cuff pressure was checked with a cuff pressure manometer, Endotest (Teleflex Medical, Rush), and adjusted to be within the allowable pressure limits as is the routine practice. The cuff pressure was checked again at three time points: after achieving the final position with the head on pins, at the end of the procedure and before extubation. Various factors such as age, position and duration of surgery were studied. There were no major complications such as aspiration, stridor or hoarseness of voice post extubation in any of the patients. A significant decline in the cuff pressures was noted from the initial supine position to extubation (P < .001) in the supine group. A significant decline in the cuff pressures was also found in the prone group from the initial intubated supine position to all three corresponding time points, namely after final positioning (P < .001), at the end of the procedure (P < .001) and before extubation (P < .001). The cuff pressure has to be checked after achieving the final positioning of the patient and adjusted to the prescribed limits to prevent micro-aspiration.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez-Monroy, J.A., E-mail: antosan@gmail.com; Quimbay, C.J., E-mail: cjquimbayh@unal.edu.co; Centro Internacional de Fisica, Bogota D.C.

    In the context of a semiclassical approach where vectorial gauge fields can be considered as classical fields, we obtain exact static solutions of the SU(N) Yang-Mills equations in an (n+1)-dimensional curved space-time, for the cases n=1,2,3. As an application of the results obtained for the case n=3, we consider the solutions for the anti-de Sitter and Schwarzschild metrics. We show that these solutions have a confining behavior and can be considered as a first step in the study of the corrections of the spectra of quarkonia in a curved background. Since the solutions that we find in this work are valid also for the group U(1), the case n=2 is a description of the (2+1) electrodynamics in the presence of a point charge. For this case, the solution has a confining behavior and can be considered as an application of the planar electrodynamics in a curved space-time. Finally we find that the solution for the case n=1 is invariant under a parity transformation and has the form of a linear confining solution. Highlights: We study exact static confining solutions of the SU(N) Yang-Mills equations in an (n+1)-dimensional curved space-time. The solutions found are a first step in the study of the corrections of the spectra of quarkonia in a curved background. An expression for the confinement potential in low dimensionality is found.

  5. TIME-INTERVAL MEASURING DEVICE

    DOEpatents

    Gross, J.E.

    1958-04-15

    An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach for time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible error in counting caused by the operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses by a fraction of the pulse period so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval, whereby a plurality of totals is obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.
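    The averaging scheme can be sketched numerically: each extra pulse train, delayed by a further fraction of the period, contributes one count, and the averaged totals multiplied by the period recover the interval to within roughly period/n rather than a full period. The simulation below is an illustrative model, not the patented circuit:

```python
import math

def interval_by_delayed_counts(t_start, t_stop, period, n_delays=8):
    """Count oscillator pulses inside the gated window for several
    pulse trains, each delayed by a further period/n_delays; the
    averaged totals times the period estimate the interval."""
    counts = []
    for i in range(n_delays):
        offset = i * period / n_delays
        first = math.ceil((t_start - offset) / period)  # first pulse index in window
        last = math.floor((t_stop - offset) / period)   # last pulse index in window
        counts.append(last - first + 1)
    return period * sum(counts) / n_delays
```

With a single undelayed train the quantization error can approach one full period; averaging the delayed counts shrinks it to about one n-th of a period.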

  6. Early Diagnosis and Intervention Strategies for Post-Traumatic Heterotopic Ossification in Severely Injured Extremities

    DTIC Science & Technology

    2016-12-01

    Final report under Award Number W81XWH-12-2-0119, prepared for the U.S. Army Medical Research and Materiel Command, Fort Detrick, Maryland 21702-5012. Subjects were assessed for the presence or absence of ectopic bone formation at the indicated time points post injury (Table 1, incidence of HO).

  7. Limited energy study, West Point, NY. Executive summary and final report. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, C.T.

    1994-05-13

    In the Holleder Sports Complex at West Point Military Academy, there is an indoor ice skating rink. Due to perceived operational inefficiencies, it was anticipated that energy was being wasted. Furthermore, it was noted that during the normal operation of the ice making plant, heat was being rejected from the building. Questions were asked as to the possibility of recapturing this rejected heat and utilizing it to increase the operational efficiency and reduce the energy wasted. The existing ice making refrigerant plant was originally installed with a heat reclaiming subsystem to utilize waste heat to provide for the required underslab heating system and to melt waste ice scrapings (snow) from the ice resurfacing process. The underslab heating system is working properly, but there is not enough recovered waste heat left to totally melt the snow from resurfacing. This snow builds up over time and is melted by spraying domestic hot water at 140 deg F over the snow pile. This process is labor intensive, energy use intensive, and reduces the capacity of the domestic hot water system to satisfy hot water needs in other parts of the building. Actual compressor run times were obtained from the operator of the ice refrigerant plant and calculations showed that 2,122,100 MBH per year of energy was available for recovery.

  8. Point-to-point Commercial Space Transportation in the National Aviation System Final Report.

    DOT National Transportation Integrated Search

    2010-03-10

    The advent of suborbital transport brings promise of point-to-point (PTP) long distance transportation as a revolutionary mode of air transportation. In 2008, the International Space University (ISU) of Strasbourg, France, published a report documen...

  9. Synthetic Minor NSR Permit: BP America Production Company - Wolf Point Central Delivery Point

    EPA Pesticide Factsheets

    This page contains the response to public comments and the final synthetic minor NSR permit for the BP America Production Company, Wolf Point Central Delivery Point, located on the Southern Ute Indian Reservation in La Plata County, CO.

  10. Nondiscrimination on the basis of handicap--Office of the Secretary, HHS. Interim final rule.

    PubMed

    1983-03-07

    The interim final rule modifies existing regulations to meet the exigent needs that can arise when a handicapped infant is discriminatorily denied food or other medical care. Three current regulatory provisions are modified to allow timely reporting of violations, expeditious investigation, and immediate enforcement action when necessary to protect a handicapped infant whose life is endangered by discrimination in a program or activity receiving federal financial assistance. Recipients that provide health care to infants will be required to post a conspicuous notice in locations that provide such care. The notice will describe the protections under federal law against discrimination toward the handicapped, and will provide a contact point in the Department of HHS for reporting violations immediately by telephone.

  11. Surgical Practical Skills Learning Curriculum: Implementation and Interns' Confidence Perceptions.

    PubMed

    Acosta, Danilo; Castillo-Angeles, Manuel; Garces-Descovich, Alejandro; Watkins, Ammara A; Gupta, Alok; Critchlow, Jonathan F; Kent, Tara S

    To provide an overview of the practical skills learning curriculum and assess its effects over time on the surgical interns' perceptions of their technical skills, patient management, administrative tasks, and knowledge. An 84-hour practical skills curriculum composed of didactic, simulation, and practical sessions was implemented during the 2015 to 2016 academic year for general surgery interns. In total, 40% of the sessions were held during orientation; the remaining sessions were held throughout the academic year. Interns' perceptions of their technical skills, administrative tasks, patient management, and knowledge were assessed by the practical skills curriculum residents' perception survey at various time points during their intern year (baseline, midpoint, and final). Interns were also asked to fill out an evaluation survey at the completion of each session to obtain feedback on the curriculum. General Surgery Residency program at a tertiary care academic institution. 20 General Surgery categorical and preliminary interns. Significant differences were found over time in interns' perceptions of their technical skills, patient management, administrative tasks, and knowledge (p < 0.001 for all). The results were also statistically significant when accounting for a prior boot camp course in medical school, intern status (categorical or preliminary), and gender (p < 0.05 for all). Differences in interns' perceptions occurred both from baseline to midpoint, and from midpoint to final time point evaluations (p < 0.001 for all). Prior surgical boot camp in medical school status, intern status (categorical vs. preliminary), and gender did not differ in the interns' baseline perceptions of their technical skills, patient management, administrative tasks, and knowledge (p > 0.05 for all).
Implementation of a Practical Skills Curriculum in surgical internships can improve interns' confidence perception on their technical skills, patient management skills, administrative tasks, and knowledge. Copyright © 2018. Published by Elsevier Inc.

  12. Return to sporting activity after osteochondral autograft transplantation for Freiberg disease in young athletes.

    PubMed

    Ishimatsu, Tetsuro; Yoshimura, Ichiro; Kanazawa, Kazuki; Hagio, Tomonobu; Yamamoto, Takuaki

    2017-07-01

    Freiberg disease is defined as osteochondrosis of the metatarsal head and typically occurs in adolescents with sporting activity. This study aimed to evaluate the sporting activity of young athletes after osteochondral autograft transplantation (OAT) for Freiberg disease. OAT for Freiberg disease was conducted in 12 consecutive patients between August 2008 and November 2014. The present study evaluated 10 of these patients who both undertook sporting activity preoperatively and were teenagers at the time of surgery. Clinical evaluations were performed based on the Japanese Society for Surgery of the Foot lesser metatarsophalangeal-interphalangeal scale (JSSF scale) and range of motion (ROM) of the operated metatarsophalangeal joint preoperatively and at the final follow-up (mean 24.6 months). Whether patients were able to return to sporting activity and time until return to sporting activity were evaluated, including the Halasi score to reflect the level of sporting activity. Regarding symptoms at the donor knee, the Lysholm knee scale score was evaluated at the final follow-up. The mean JSSF scale showed a significant improvement at the final follow-up (p < 0.01). The mean ROM in extension and flexion improved at the final follow-up (p < 0.01, and p < 0.05, respectively). All patients were able to return to sporting activity at a mean time of 3.5 months postoperatively and the Halasi score showed no significant change. The mean Lysholm knee scale score was 97.9 (range 89-100) points at the final follow-up. All young athletes who underwent OAT for Freiberg disease achieved early return to almost equal sporting activity postoperatively and exhibited a significant improvement of the ROM of the metatarsophalangeal joint with almost no knee pain.

  13. Predict or classify: The deceptive role of time-locking in brain signal classification

    NASA Astrophysics Data System (ADS)

    Rusconi, Marco; Valleriani, Angelo

    2016-06-01

    Several experimental studies claim to be able to predict the outcome of simple decisions from brain signals measured before subjects are aware of their decision. Often, these studies use multivariate pattern recognition methods with the underlying assumption that the ability to classify the brain signal is equivalent to predicting the decision itself. Here we show instead that it is possible to correctly classify a signal even if it does not contain any predictive information about the decision. We first define a simple stochastic model that mimics the random decision process between two equivalent alternatives, and generate a large number of independent trials that contain no choice-predictive information. The trials are first time-locked to the time point of the final event and then classified using standard machine-learning techniques. The resulting classification accuracy is above chance level long before the time point of time-locking. We then analyze the same trials using information theory. We demonstrate that the high classification accuracy is a consequence of time-locking and that its time behavior is simply related to the large relaxation time of the process. We conclude that when time-locking is a crucial step in the analysis of neural activity patterns, both the emergence and the timing of the classification accuracy are affected by structural properties of the network that generates the signal.
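    The paper's core observation can be reproduced with a toy model (a sketch under stated assumptions: an unbiased ±1 random walk as the "decision" process and a naive sign classifier, not the authors' exact setup):

```python
import random

def trial(rng, bound=10):
    # unbiased +/-1 random walk until absorption at +bound or -bound; the
    # "decision" (sign at absorption) is unpredictable at trial onset
    x, path = 0, [0]
    while abs(x) < bound:
        x += rng.choice((-1, 1))
        path.append(x)
    return path, int(x > 0)

rng = random.Random(0)
trials = [trial(rng) for _ in range(2000)]

k = 15  # number of time steps *before* the final (time-locking) event
kept = [(p, y) for p, y in trials if len(p) > k]
# naive "classifier": sign of the signal k steps before the time-locked endpoint
acc = sum((p[-1 - k] > 0) == y for p, y in kept) / len(kept)
# acc lands well above the 0.5 chance level even though the trials carry no
# choice-predictive information at onset -- an artifact of the time-locking
```

    Time-locking aligns all trials on the absorption event, so samples shortly before that event are necessarily correlated with the outcome; the above-chance accuracy reflects this alignment, not predictive information available at trial onset.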

  14. Final report on COOMET.T-S1. Comparison of type S thermocouples at the freezing points of zinc, aluminium and copper 2014—2015

    NASA Astrophysics Data System (ADS)

    Pokhodun, A. I.; Ivanova, A. G.; Duysebayeva, K. K.; Ivanova, K. P.

    2015-01-01

    Regional comparison of type S thermocouples at the freezing points of zinc, aluminium and copper was initiated by COOMET TC1.1-10 (the technical committee of COOMET `Thermometry and thermal physics'). Three NMIs took part in the COOMET regional comparison: the D I Mendeleev Institute for Metrology (VNIIM) (Russian Federation), the National Scientific Centre (Institute of Metrology) (NSC IM, Ukraine) and the Republic State Enterprise (Kazakhstan Institute of Metrology) (KazInMetr, Republic of Kazakhstan). VNIIM (Russia) was chosen as the coordinator-pilot of the regional comparison. A star-type comparison was used. The participants KazInMetr and NSC IM constructed the type S thermocouples and calibrated them at three fixed points (the zinc, aluminium and copper points), using methods of ITS-90 fixed-point realization. The thermocouples were sent to VNIIM together with the results of the calibration at the three fixed points, the values of the inhomogeneity at a temperature of 200 °C and the uncertainty evaluations of the results. For the calibration of the thermocouples, the same VNIIM fixed-point cells were used. The participating laboratories repeated the calibration of the thermocouples after their return, at the zinc, aluminium and copper points, to determine the stability of the results. The result of the comparison was an evaluation of the equivalence of the NMIs' type S thermocouple calibrations at the fixed points, confirming the corresponding entries of the international database of NMIs' Calibration and Measurement Capabilities (CMCs). This paper is the final report of the comparison, including an analysis of the uncertainty of the measurement results. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCT WG-KC, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  15. Reverse arthroplasty for osteoarthritis and rotator cuff deficiency after previous surgery for recurrent anterior shoulder instability.

    PubMed

    Raiss, Patric; Zeifang, Felix; Pons-Villanueva, Juan; Smithers, Christopher J; Loew, Markus; Walch, Gilles

    2014-07-01

    Osteoarthritis in combination with rotator cuff deficiency following previous shoulder stabilisation surgery and after failed surgical treatment for chronic anterior shoulder dislocation is a challenging condition. The aim of this study was to analyse the results of reverse shoulder arthroplasty in such patients. Thirteen patients with a median follow-up of 3.5 (range two to eight) years and a median age of 70 (range 48-82) years were included. In all shoulders a tear of at least one rotator cuff tendon in combination with osteoarthritis was present at the time of arthroplasty. The Constant score, shoulder flexion and external and internal rotation with the elbow at the side were documented pre-operatively and at the final follow-up. Pre-operative, immediate post-operative and final follow-up radiographs were analysed. All complications and revisions were documented. Twelve patients were either satisfied or very satisfied with the procedure. The median Constant score increased from 26 points pre-operatively to 67 points at the final follow-up (p = 0.001). The median shoulder flexion increased significantly from 70° to 130° and internal rotation from two to four points (p = 0.002). External rotation did not change significantly (p = 0.55). Glenoid notching was present in five cases and was graded as mild in three cases and moderate in two. One complication occurred leading to revision surgery. Reverse arthroplasty leads to high satisfaction rates for patients with osteoarthritis and rotator cuff deficiency who had undergone previous shoulder stabilisation procedures. The improvements in clinical outcome as well as the radiographic results seem to be comparable with those of other studies reporting on the outcome of reverse shoulder arthroplasty for other conditions.

  16. 40 CFR Appendix B to Part 97 - Final Section 126 Rule: Non-EGU Allocations, 2004-2007

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Final Section 126 Rule: Non-EGU Allocations, 2004-2007 (Appendix B to Part 97, Protection of Environment, Environmental Protection Agency; 2011-07-01 edition). Excerpt from the allocation table: COASTAL EAGLE POINT OIL COMPAN 55004 001 3 NJ Gloucester; COASTAL EAGLE POINT OIL COMPAN 55004 038 11 NJ...

  17. Rheological properties of chocolate-flavored, reduced-calorie coating as a function of conching process.

    PubMed

    Medina-Torres, Luis; Sanchez-Olivares, Guadalupe; Nuñez-Ramirez, Diola Marina; Moreno, Leonardo; Calderas, Fausto

    2014-07-01

    Continuous flow and linear viscoelastic rheology of a chocolate coating are studied in this work using a fat-substitute gum (xanthan gum, GX). An alternative conching process, using a Rotor-Stator (RE) type impeller, is proposed. The objective is to obtain a chocolate coating material with improved flow properties. Characterization of the final material through particle size distribution (PSD), differential scanning calorimetry (DSC) and proximal analysis is reported. The particle size distribution of the final material showed less polydispersity and therefore greater homogeneity; fusion points were also generated at around 20 °C, assuming crystal types I (β'2) and II (α). Moreover, the final material exhibited crossover points (a more highly structured material), whereas the commercial brand chocolate used for comparison did not. The best conditions to produce the coating were maturing for 36 h at 35 °C, showing crossover points around 76 Pa and a 0.505 solids particle dispersion (average particle diameter of 0.364 μm), and a fusion point at 20.04 °C with a ΔHf of 1.40 J/g. The results indicate that xanthan gum is a good substitute for cocoa butter and provides stability to the final product.

  18. A point-based tool to predict conversion from mild cognitive impairment to probable Alzheimer's disease.

    PubMed

    Barnes, Deborah E; Cenzer, Irena S; Yaffe, Kristine; Ritchie, Christine S; Lee, Sei J

    2014-11-01

    Our objective in this study was to develop a point-based tool to predict conversion from amnestic mild cognitive impairment (MCI) to probable Alzheimer's disease (AD). Subjects were participants in the first part of the Alzheimer's Disease Neuroimaging Initiative. Cox proportional hazards models were used to identify factors associated with development of AD, and a point score was created from predictors in the final model. The final point score could range from 0 to 9 (mean 4.8) and included: the Functional Assessment Questionnaire (2‒3 points); magnetic resonance imaging (MRI) middle temporal cortical thinning (1 point); MRI hippocampal subcortical volume (1 point); the Alzheimer's Disease Assessment Scale-cognitive subscale (2‒3 points); and the Clock Test (1 point). Prognostic accuracy was good (Harrell's c = 0.78; 95% CI 0.75, 0.81); 3-year conversion rates were 6% (0‒3 points), 53% (4‒6 points), and 91% (7‒9 points). A point-based risk score combining functional dependence, cerebral MRI measures, and neuropsychological test scores provided good accuracy for prediction of conversion from amnestic MCI to AD. Copyright © 2014 The Alzheimer's Association. All rights reserved.
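    The scoring rule described in the abstract can be sketched as follows (a hypothetical coding: the band assignments for each predictor are assumptions; only the point values and the 3-year conversion rates are taken from the abstract):

```python
def mci_conversion_score(faq_pts, mt_thinning, small_hippocampus,
                         adas_cog_pts, clock_abnormal):
    # FAQ and ADAS-cog each contribute 0 or 2-3 points; the two MRI measures
    # and the Clock Test contribute 1 point each (total range 0-9)
    assert faq_pts in (0, 2, 3) and adas_cog_pts in (0, 2, 3)
    return (faq_pts + int(mt_thinning) + int(small_hippocampus)
            + adas_cog_pts + int(clock_abnormal))

def three_year_conversion_rate(score):
    # 3-year conversion rates reported for the three score bands
    if score <= 3:
        return 0.06
    if score <= 6:
        return 0.53
    return 0.91

score = mci_conversion_score(3, True, True, 2, True)  # -> 8
rate = three_year_conversion_rate(score)              # -> 0.91
```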

  19. Optimized stereo matching in binocular three-dimensional measurement system using structured light.

    PubMed

    Liu, Kun; Zhou, Changhe; Wei, Shengbin; Wang, Shaoqing; Fan, Xin; Ma, Jianyong

    2014-09-10

    In this paper, we develop an optimized stereo-matching method used in an active binocular three-dimensional measurement system. A traditional dense stereo-matching algorithm is time consuming due to a long search range and the high complexity of a similarity evaluation. We project a binary fringe pattern in combination with a series of N binary band limited patterns. In order to prune the search range, we execute an initial matching before exhaustive matching and evaluate a similarity measure using logical comparison instead of a complicated floating-point operation. Finally, an accurate point cloud can be obtained by triangulation methods and subpixel interpolation. The experiment results verify the computational efficiency and matching accuracy of the method.
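    The "logical comparison instead of a complicated floating-point operation" step can be illustrated like this (an illustrative sketch with hypothetical codes and disparities; the paper's actual pattern design is a binary fringe plus N band-limited binary patterns): each pixel's N binary pattern observations are packed into an N-bit code, and candidate matches are ranked by Hamming distance.

```python
def hamming(a, b):
    # similarity by logical comparison: XOR then popcount, no floating point
    return bin(a ^ b).count("1")

# 6 binary pattern observations at one left-image pixel, packed into a 6-bit code
left_code = 0b101101

# candidate right-image pixels inside the pruned search range (disparity -> code)
candidates = {30: 0b101001, 31: 0b101101, 32: 0b010010}

best_disparity = min(candidates, key=lambda d: hamming(left_code, candidates[d]))
# best_disparity == 31 (exact code match)
```

    The pruned search range keeps the candidate set small, and the XOR/popcount comparison replaces the floating-point similarity evaluation of a traditional dense matcher.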

  20. Projection matrix acquisition for cone-beam computed tomography iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Shi, Wenlong; Zhang, Caixin; Gao, Zongzhao

    2017-02-01

    Projection matrix computation is an essential and time-consuming part of computed tomography (CT) iterative reconstruction. In this article a novel calculation algorithm for the three-dimensional (3D) projection matrix is proposed to quickly acquire the matrix for cone-beam CT (CBCT). The CT volume to be reconstructed is considered as consisting of three orthogonal sets of equally spaced, parallel planes, rather than of individual voxels. After obtaining the intersections of the rays with the surfaces of the voxels, the coordinates of the intersection points are compared with the voxel vertices to obtain the index of the voxel that the ray traversed. Without considering the ray's slope with respect to each voxel, the method only needs to compare the positions of two points. Finally, computer simulation is used to verify the effectiveness of the algorithm.
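    The plane-set formulation resembles Siddon-style ray tracing. A minimal sketch under assumed unit voxel spacing (not the authors' exact algorithm): compute the parametric positions where a ray crosses the three orthogonal plane sets, then derive the per-voxel intersection lengths that populate one row of the projection matrix.

```python
import math

def ray_voxel_lengths(p0, p1, n):
    # parametric positions (in [0, 1]) where the ray p0 -> p1 crosses each of
    # the three orthogonal sets of unit-spaced planes of an n^3 voxel grid
    alphas = {0.0, 1.0}
    for axis in range(3):
        d = p1[axis] - p0[axis]
        if abs(d) < 1e-12:
            continue  # ray parallel to this plane set
        for i in range(n + 1):
            a = (i - p0[axis]) / d
            if 0.0 < a < 1.0:
                alphas.add(a)
    alphas = sorted(alphas)
    ray_len = math.dist(p0, p1)  # Python 3.8+
    rows = []
    for a, b in zip(alphas, alphas[1:]):
        # identify the traversed voxel from the segment midpoint
        mid = [p0[k] + 0.5 * (a + b) * (p1[k] - p0[k]) for k in range(3)]
        voxel = tuple(math.floor(c) for c in mid)
        rows.append((voxel, (b - a) * ray_len))  # one non-zero matrix entry
    return rows

# axis-aligned ray through a 4x4x4 grid: four unit-length intersections
entries = ray_voxel_lengths((0.0, 0.5, 0.5), (4.0, 0.5, 0.5), 4)
```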

  1. Assessing delivery practices of mothers over time and over space in Uganda, 2003-2012.

    PubMed

    Sprague, Daniel A; Jeffery, Caroline; Crossland, Nadine; House, Thomas; Roberts, Gareth O; Vargas, William; Ouma, Joseph; Lwanga, Stephen K; Valadez, Joseph J

    2016-01-01

    It is well known that safe delivery in a health facility reduces the risks of maternal and infant mortality resulting from perinatal complications. What is less understood are the factors associated with safe delivery practices. We investigate factors influencing health facility delivery practices while adjusting for multiple other factors simultaneously, spatial heterogeneity, and trends over time. We fitted a logistic regression model to Lot Quality Assurance Sampling (LQAS) data from Uganda in a framework that considered individual-level covariates, geographical features, and variations over five time points. We accounted for all two-covariate interactions and all three-covariate interactions for which two of the covariates already had a significant interaction, were able to quantify uncertainty in outputs using computationally intensive cluster bootstrap methods, and displayed outputs using a geographical information system. Finally, we investigated what information could be predicted about districts at future time-points, before the next LQAS survey is carried out. To do this, we applied the model to project a confidence interval for the district level coverage of health facility delivery at future time points, by using the lower and upper end values of known demographics to construct a confidence range for the prediction and define priority groups. We show that ease of access, maternal age and education are strongly associated with delivery in a health facility; after accounting for this, there remains a significant trend towards greater uptake over time. We use this model together with known demographics to formulate a nascent early warning system that identifies candidate districts expected to have low prevalence of facility-based delivery in the immediate future. 
Our results support the hypothesis that increased development, particularly related to education and access to health facilities, will act to increase facility-based deliveries, a factor associated with reducing perinatal associated mortality. We provide a statistical method for using inexpensive and routinely collected monitoring and evaluation data to answer complex epidemiology and public health questions in a resource-poor setting. We produced a model based on this data that explained the spatial distribution of facility-based delivery in Uganda. Finally, we used this model to make a prediction about the future priority of districts that was validated by monitoring and evaluation data collected in the next year.
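    The projection step can be sketched as evaluating a fitted logistic model at the lower- and upper-end values of a district's known demographics (all coefficient and covariate values below are hypothetical, for illustration only):

```python
import math

def coverage(education, access, age, beta):
    # fitted logistic model: predicted probability of facility-based delivery
    b0, b_edu, b_acc, b_age = beta
    return 1.0 / (1.0 + math.exp(-(b0 + b_edu * education
                                   + b_acc * access + b_age * age)))

beta = (-1.2, 1.5, 0.9, -0.4)  # hypothetical fitted coefficients

# plug in lower- and upper-end values of the district's known demographics
# (for a negative coefficient, the upper bound uses the covariate's lower end)
lo = coverage(education=0.3, access=0.4, age=0.6, beta=beta)
hi = coverage(education=0.5, access=0.7, age=0.4, beta=beta)

# (lo, hi) brackets the projected district-level coverage; districts whose
# projected range falls low are flagged as priority districts
```

    This mirrors the abstract's construction of a confidence range for the predicted district-level coverage at a future time point, before the next LQAS survey is carried out.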

  2. Percutaneous internal fixation of proximal fifth metatarsal jones fractures (Zones II and III) with Charlotte Carolina screw and bone marrow aspirate concentrate: an outcome study in athletes.

    PubMed

    Murawski, Christopher D; Kennedy, John G

    2011-06-01

    Internal fixation is a popular first-line treatment method for proximal fifth metatarsal Jones fractures in athletes; however, nonunions and screw breakage can occur, in part because of nonspecific fixation hardware and poor blood supply. To report the results from 26 patients who underwent percutaneous internal fixation with a specialized screw system of a proximal fifth metatarsal Jones fracture (zones II and III) and bone marrow aspirate concentrate. Case series; Level of evidence, 4. Percutaneous internal fixation for a proximal fifth metatarsal Jones fracture (zones II and III) was performed on 26 athletic patients (mean age, 27.47 years; range, 18-47). All patients were competing at some level of sport and were assessed preoperatively and postoperatively using the Foot and Ankle Outcome Score and SF-12 outcome scores. The mean follow-up time was 20.62 months (range, 12-28). Of the 26 fractures, 17 were traditional zone II Jones fractures, and the remaining 9 were zone III proximal diaphyseal fractures. The mean Foot and Ankle Outcome Score significantly increased, from 51.15 points preoperatively (range, 14-69) to 90.91 at final follow-up (range, 71-100; P < .01). The mean physical component of the SF-12 score significantly improved, from 25.69 points preoperatively (range, 6-39) to 54.62 at final follow-up (range, 32-62; P < .01). The mean mental component of the SF-12 score also significantly improved, from 28.20 points preoperatively (range, 14-45) to 58.41 at final follow-up (range, 36-67; P < .01). The mean time to fracture healing on standard radiographs was 5 weeks after surgery (range, 4-24). Two patients did not return to their previous levels of sporting activity. One patient experienced a delayed union, and 1 healed but later refractured. 
Percutaneous internal fixation of proximal fifth metatarsal Jones fractures, with a Charlotte Carolina screw and bone marrow aspirate concentrate, provides more predictable results while permitting athletes a return to sport at their previous levels of competition, with few complications.

  3. Study into Point Cloud Geometric Rigidity and Accuracy of TLS-Based Identification of Geometric Bodies

    NASA Astrophysics Data System (ADS)

    Klapa, Przemyslaw; Mitka, Bartosz; Zygmunt, Mariusz

    2017-12-01

    The capability of obtaining a multimillion point cloud in a very short time has made Terrestrial Laser Scanning (TLS) a widely used tool in many fields of science and technology. TLS accuracy matches traditional devices used in land surveying (tacheometry, GNSS - RTK), but like any measurement it is burdened with error which affects the precise identification of objects based on their image in the form of a point cloud. The point's coordinates are determined indirectly by means of measuring the angles and calculating the time of travel of the electromagnetic wave. Each such component has a measurement error which is translated into the final result. The XYZ coordinates of a measuring point are determined with some uncertainty, and the accuracy of determining these coordinates decreases as the distance to the instrument increases. The paper presents the results of an examination of the geometrical stability of a point cloud obtained by means of a terrestrial laser scanner and an accuracy evaluation of solids determined using the cloud. A Leica P40 scanner and two different configurations of measuring points were used in the tests. The first concept involved placing a few balls in the field and then scanning them from various sides at similar distances. The second part of the measurement involved placing balls and scanning them a few times from one side but at varying distances from the instrument to the object. Each measurement encompassed a scan of the object with automatic determination of its position and geometry. The desk studies involved a semiautomatic fitting of solids, measurement of their geometrical elements, and comparison of the parameters that determine their geometry and location in space. The differences in the measures of the balls' geometrical elements and in the translation vectors of the solids' centres indicate geometrical changes of the point cloud depending on the scanning distance and parameters.
The results indicate the changes in the geometry of scanned objects depending on the point cloud quality and distance from the measuring instrument. Varying geometrical dimensions of the same element suggest also that the point cloud does not keep a stable geometry of measured objects.
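    The indirect coordinate determination described above amounts to a polar-to-Cartesian conversion of two measured angles and a time-of-flight range. The sketch below also shows why a fixed angular error degrades coordinate accuracy with distance (the 8-arcsecond figure is a hypothetical value, not the Leica P40 specification):

```python
import math

def polar_to_xyz(dist, horiz, vert):
    # TLS point coordinates from the time-of-flight range and the two angles
    x = dist * math.cos(vert) * math.cos(horiz)
    y = dist * math.cos(vert) * math.sin(horiz)
    z = dist * math.sin(vert)
    return x, y, z

# small-angle propagation: a fixed angular error produces a lateral position
# error proportional to the range, so coordinate accuracy degrades with distance
angle_err = math.radians(8.0 / 3600.0)  # hypothetical 8-arcsecond accuracy
lateral_err = {d: d * angle_err for d in (10.0, 50.0, 120.0)}
```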

  4. Finite-time stability and synchronization of memristor-based fractional-order fuzzy cellular neural networks

    NASA Astrophysics Data System (ADS)

    Zheng, Mingwen; Li, Lixiang; Peng, Haipeng; Xiao, Jinghua; Yang, Yixian; Zhang, Yanping; Zhao, Hui

    2018-06-01

    This paper mainly studies the finite-time stability and synchronization problems of memristor-based fractional-order fuzzy cellular neural network (MFFCNN). Firstly, we discuss the existence and uniqueness of the Filippov solution of the MFFCNN according to the Banach fixed point theorem and give a sufficient condition for the existence and uniqueness of the solution. Secondly, a sufficient condition to ensure the finite-time stability of the MFFCNN is obtained based on the definition of finite-time stability of the MFFCNN and Gronwall-Bellman inequality. Thirdly, by designing a simple linear feedback controller, the finite-time synchronization criterion for drive-response MFFCNN systems is derived according to the definition of finite-time synchronization. These sufficient conditions are easy to verify. Finally, two examples are given to show the effectiveness of the proposed results.
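    The finite-time stability bound rests on the classical Gronwall-Bellman inequality, stated here for reference (the MFFCNN-specific constants are in the paper and not reproduced):

```latex
% Gronwall-Bellman inequality: for a constant a >= 0 and continuous b(s) >= 0,
u(t) \le a + \int_{t_0}^{t} b(s)\,u(s)\,\mathrm{d}s
\quad\Longrightarrow\quad
u(t) \le a \exp\!\left(\int_{t_0}^{t} b(s)\,\mathrm{d}s\right),
\qquad t \ge t_0 .
```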

  5. The impact of peer review on paediatric forensic reports.

    PubMed

    Kariyawasam, Uditha

    2016-10-01

    To retrospectively evaluate common grammar and spelling errors in the medico-legal reports written by doctors at the Victorian Forensic Paediatric Medical Service (VFPMS) in both the Royal Children's Hospital (RCH) and Monash Medical Centre. The reports were evaluated at two points in time: before and after peer review. The aim of the study was to ascertain whether peer review improved the grammar and spelling in VFPMS medico-legal reports. Draft VFPMS reports are sent to the VFPMS medical director for peer review. The current study sampled 50 reports that were sent consecutively to Dr. Anne Smith from 1st of May 2015. The 50 corresponding final reports were then retrieved from the VFPMS database. The 50 pairs of draft and final reports were scored using a 50-point scoring system. The scores of the draft reports were compared to the scores of the final reports to assess whether there was a change in quality, as measured using an explicit-criteria audit of report structure, simple grammar, jargon use and spelling. The audit did not include evaluation of the validity of forensic opinions. The overall scores were statistically analysed using descriptive statistics and a paired T-test. The scores of the reports improved by 2.24% when the final reports were compared to the draft reports (p < 0.001). The peer-review process resulted in a significantly higher quality of medico-legal reports. The report writing and peer-review process could be assisted by an abbreviated version of the checklist used for the audit. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  6. Non-reciprocity in nonlinear elastodynamics

    NASA Astrophysics Data System (ADS)

    Blanchard, Antoine; Sapsis, Themistoklis P.; Vakakis, Alexander F.

    2018-01-01

    Reciprocity is a fundamental property of linear time-invariant (LTI) acoustic waveguides governed by self-adjoint operators with symmetric Green's functions. The break of reciprocity in LTI elastodynamics is only possible through the break of time reversal symmetry on the micro-level, and this can be achieved by imposing external biases, adding nonlinearities or allowing for time-varying system properties. We present a Volterra-series based asymptotic analysis for studying spatial non-reciprocity in a class of one-dimensional (1D), time-invariant elastic systems with weak stiffness nonlinearities. We show that nonlinearity is neither necessary nor sufficient for breaking reciprocity in this class of systems; rather, it depends on the boundary conditions, the symmetries of the governing linear and nonlinear operators, and the choice of the spatial points where the non-reciprocity criterion is tested. Extension of the analysis to higher dimensions and time-varying systems is straightforward from a mathematical point of view (but not in terms of new non-reciprocal physical phenomena), whereas the connection of non-reciprocity and time irreversibility can be studied as well. Finally, we show that suitably defined non-reciprocity measures enable optimization, and can provide physical understanding of the nonlinear effects in the dynamics, enabling one to establish regimes of "maximum nonlinearity." We highlight the theoretical developments by means of a numerical example.

  7. Ultrasonic brain therapy: First trans-skull in vivo experiments on sheep using adaptive focusing

    NASA Astrophysics Data System (ADS)

    Pernot, Mathieu; Aubry, Jean-Francois; Tanter, Michael; Fink, Mathias; Boch, Anne-Laure; Kujas, Michèle

    2004-05-01

    A high-power prototype dedicated to trans-skull therapy has been tested in vivo on 20 sheep. The array is made of 200 high-power transducers working at a 1-MHz central frequency and is able to reach 260 bars at the focus in water. An echographic array connected to a Philips HDI 1000 system was inserted in the therapeutic array in order to perform real-time monitoring of the treatment. A complete craniotomy was performed on half of the treated animals in order to obtain a reference model. On the other animals, minimally invasive surgery was performed by means of a time-reversal experiment: a hydrophone was inserted at the target inside the brain through a 1-mm² craniotomy. A time-reversal experiment was then conducted through the skull bone with the therapeutic array to treat the targeted point. For all animals, a specified region around the target was treated using electronic beam steering. The animals were divided into three groups and sacrificed 0, 1, and 2 weeks after treatment, respectively. Histological examination confirmed tissue damage. These in vivo experiments highlight the strong potential of high-power time-reversal technology.
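
    The time-reversal step amounts to reversing the measured arrival times so that every element's contribution reaches the target simultaneously; a toy sketch with hypothetical per-element travel times (the full experiment re-emits the reversed waveforms themselves):

```python
def time_reversal_delays(travel_times):
    """Emission delays that refocus at the source of the recorded signals.

    Each element fires so that delay + travel time is the same for all
    elements, i.e. all contributions arrive at the target at once.
    """
    t_max = max(travel_times)
    return [t_max - t for t in travel_times]
```

    Electronic beam steering around the target then only requires adding small extra phase offsets on top of these delays.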

  8. COSMOS-e'-soft Higgsotic attractors

    NASA Astrophysics Data System (ADS)

    Choudhury, Sayantan

    2017-07-01

    In this work, we have developed an elegant algorithm to study the cosmological consequences of a huge class of quantum field theories (i.e. superstring theory, supergravity, extra-dimensional theory, modified gravity, etc.) which are equivalently described by soft attractors in the effective field theory framework. In this description we have restricted our analysis to two scalar fields - dilaton and Higgsotic fields minimally coupled with Einstein gravity - which can be generalized to any arbitrary number of scalar field contents with generalized non-canonical and non-minimal interactions. We have explicitly used R^2 gravity, from which we have studied the attractor and non-attractor phases by exactly computing two-point, three-point and four-point correlation functions from scalar fluctuations using the In-In (Schwinger-Keldysh) and the δ N formalisms. We have also presented theoretical bounds on the amplitude, tilt and running of the primordial power spectrum, and on the various shapes (equilateral, squeezed, folded kite or counter-collinear) of the amplitude as obtained from three- and four-point scalar functions, which are consistent with observed data. The results from two-point tensor fluctuations and the field excursion formula are also explicitly presented for the attractor and non-attractor phases. Further, reheating constraints, the scale-dependent behavior of the couplings and the dynamical solution for the dilaton and Higgsotic fields are presented. New sets of consistency relations between two-, three- and four-point observables are also presented, which show significant deviation from canonical slow-roll models. Additionally, three possible theoretical proposals have been presented to overcome the tachyonic instability at the time of late-time acceleration. Finally, we have also provided the bulk interpretation of the three- and four-point scalar correlation functions for completeness.

  9. 3D Surface Reconstruction and Volume Calculation of Rills

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.

    2015-04-01

    We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, which is implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18-meter-long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment, and 3) after a rill experiment. Compared to a photo camera, recording with a video camera not only yields a huge time advantage; the method also guarantees more than adequately overlapping, sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each 15-frame interval, estimating sharpness with a derivative-based metric. VisualSfM then detects feature points in each image, matches feature points across all image pairs, recovers the camera positions, and finally, by triangulation of camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and difference calculations between the pre and post models allow visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained via reference measurements. The outputs are three models at three different points in time. The results show that especially for images taken from suboptimal videos (bad lighting conditions, low surface contrast, strong motion blur), the sharpness algorithm leads to many more matching features. Hence the point densities of the 3D models are increased, which in turn improves the difference calculations.
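
    The paper describes the frame-selection metric only as derivative-based; one plausible sketch, using the mean squared gradient magnitude to pick the sharpest frame of each 15-frame interval (the exact metric used is an assumption):

```python
import numpy as np

def sharpness(frame):
    """Derivative-based sharpness score: mean squared gradient magnitude."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gx**2 + gy**2))

def sharpest_frame(frames):
    """Index of the sharpest frame in an interval (e.g. 15 video frames)."""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))
```

    Blurred or low-contrast frames have weak gradients and score low, so they are skipped in favor of the crispest frame of the interval.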

  10. [Evaluation of preliminary grades and credits in nurse training programs].

    PubMed

    Darmann-Finck, Ingrid; Duveneck, Nicole

    2016-01-01

    In the federal state of Bremen, preliminary grades were included, with a weight of 25%, in written, oral and practical final grades during the period 2009-2014. The evaluation focuses on the effects of preliminary grades on the scale of final grades and the performance of learners, as well as on the assessment of the appropriateness of final grades. A mixed-methods design was employed, consisting of a quasi-experimental study comprising surveys of students and teachers of comparative and model courses, as well as a qualitative study using group discussions. The results confirm that preliminary grades lead to a minimal improvement of the final grades of some exclusively low-achieving students. The assessment of appropriateness hardly changed. From both learners' and teachers' points of view there is still great dissatisfaction concerning the practical final grades. With regard to learning habits, an increased willingness to learn new skills on the one hand and a partly increased performance pressure on the other hand were demonstrated. On the basis of these research results, the authors recommend the regular introduction of preliminary grades into nursing training. Copyright © 2016. Published by Elsevier GmbH.

  11. Lifetime Prediction of IGBT in a STATCOM Using Modified-Graphical Rainflow Counting Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopi Reddy, Lakshmi Reddy; Tolbert, Leon M; Ozpineci, Burak

    Rainflow algorithms are one of the best counting methods used in fatigue and failure analysis [17]. There have been many approaches to the rainflow algorithm, some proposing modifications. The Graphical Rainflow Method (GRM) was proposed recently with a claim of faster execution times [10]. However, the steps of the graphical rainflow algorithm, when implemented, do not generate the same output as the four-point or ASTM standard algorithm. A modified graphical method is presented and discussed in this paper to overcome the shortcomings of the graphical rainflow algorithm. A fast rainflow algorithm based on the four-point algorithm, but using point comparison rather than range comparison, is also presented. A comparison between the performances of the common rainflow algorithms [6-10], including the proposed methods, in terms of execution time, memory use, efficiency, complexity, and load sequences is presented. Finally, the rainflow algorithm is applied to temperature data of an IGBT in assessing the lifetime of a STATCOM operating for power factor correction of the load. From the available 5-minute load profiles, the lifetime is estimated at 3.4 years.
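
    The four-point rule that the modified methods are checked against can be sketched compactly (stack-based; the uncounted residue holds the remaining half-cycles, which standards such as ASTM E1049 then pair off):

```python
def rainflow_four_point(turning_points):
    """Four-point rainflow counting over a sequence of turning points.

    Whenever the inner range of the last four points is enclosed by both
    neighboring ranges, a closed cycle (range, mean) is extracted and the
    two inner points are removed; everything left over is the residue.
    """
    stack, cycles = [], []
    for p in turning_points:
        stack.append(p)
        while len(stack) >= 4:
            s1, s2, s3, s4 = stack[-4:]
            inner = abs(s3 - s2)
            if inner <= abs(s2 - s1) and inner <= abs(s4 - s3):
                cycles.append((inner, (s2 + s3) / 2))
                del stack[-3:-1]   # drop the two inner points, re-check
            else:
                break
    return cycles, stack
```

    The cycle ranges (and means, for mean-stress corrections) feed directly into a damage model such as Miner's rule for the lifetime estimate.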

  12. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    NASA Astrophysics Data System (ADS)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer-generated holography (CGH) faces a great challenge in real-time holographic video display systems because a high space-bandwidth product (SBP) is required. This paper is based on the point-cloud method and takes advantage of the propagation reversibility of Fresnel diffraction along the propagating direction, together with the spatial symmetry of the fringe pattern of a point source, known as a Gabor zone plate, which can therefore serve as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: first, the principal fringe patterns (PFPs) at a virtual plane are pre-calculated by the acceleration algorithm and stored; second, the Fresnel diffraction fringe pattern at the dummy plane is obtained; finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on Liquid Crystal on Silicon (LCOS) were set up to demonstrate the validity of the proposed method. While preserving the quality of the 3D reconstruction, the proposed method shortens the computation time and improves computational efficiency.
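
    The spatial symmetry being exploited is that of the point-source fringe (the Gabor zone plate); a minimal numpy sketch under the Fresnel approximation, with assumed pixel pitch, wavelength and propagation distance:

```python
import numpy as np

def gabor_zone_plate(n, pitch, wavelength, z):
    """Real-valued Fresnel fringe of an on-axis point source at distance z."""
    # Pixel coordinates chosen symmetric about the optical axis.
    c = (np.arange(n) - (n - 1) / 2) * pitch
    x, y = np.meshgrid(c, c)
    phase = np.pi / (wavelength * z) * (x**2 + y**2)
    return np.cos(phase)
```

    Because the pattern is unchanged under reflection in either axis, only one quadrant needs to be computed and stored in the look-up table, which is the source of the memory and time savings.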

  13. Gravitational collapse of colloidal gels: Origins of the tipping point

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Poornima; Zia, Roseanna

    2016-11-01

    Reversible colloidal gels are soft viscoelastic solids in which durable but reversible bonds permit on-demand transition from solidlike to liquidlike behavior; these O(kT) bonds also lead to ongoing coarsening and age stiffening, making their rheology inherently time dependent. To wit, such gels may remain stable for an extended time, but then suddenly collapse, sedimenting to the bottom of the container (or creaming to the top) and eliminating any intended functionality of the material. Although this phenomenon has been studied extensively in the experimental literature, the microscopic mechanism underlying the collapse is not well understood. Effects of gel age, interparticle attraction strength, and wall effects all have been shown to affect collapse behavior, but the microstructural transformations underlying the 'tipping point' remain murky. To study this behavior, we conduct large-scale dynamic simulation to model the structural and rheological evolution of colloidal gels subjected to various gravitational stresses, examining the detailed micromechanics in three temporal regimes: slow sedimentation prior to collapse; the tipping point leading to the onset of rapid collapse; and the subsequent compaction of the material as it approaches its final bed height. Acknowledgment for funding and support from the Office of Naval Research; the National Science Foundation; and NSF XSEDE.

  14. Observation of CO 2 in Fourier transform infrared spectral measurements of living Acholeplasma laidlawii cells

    NASA Astrophysics Data System (ADS)

    Omura, Yoko; Okazaki, Norio

    2003-06-01

    In monitoring the time course of conformational disorder by Fourier transform infrared spectroscopy for intact Acholeplasma laidlawii cells grown at 37 °C on binary fatty acid mixtures containing oleic acid and for cells grown on pure palmitic acid, an absorption band at 2343 cm-1 was observed. The band intensity was found to increase with time. This band was not observed in the spectra for isolated membranes. It is suggested that the 2343 cm-1 band is due to CO2 dissolved in water, most likely produced at the final point of fermentation of amino acids by this microorganism.

  15. Real-time in situ nanoclustering during initial stages of artificial aging of Al-Cu alloys

    NASA Astrophysics Data System (ADS)

    Zatsepin, Nadia A.; Dilanian, Ruben A.; Nikulin, Andrei Y.; Gao, Xiang; Muddle, Barry C.; Matveev, Victor N.; Sakata, Osami

    2010-01-01

    We report an experimental demonstration of real-time in situ x-ray diffraction investigations of clustering and dynamic strain in early stages of nanoparticle growth in Al-Cu alloys. Simulations involving a simplified model of local strain are well correlated with the x-ray diffraction data, suggesting a redistribution of point defects and the formation of nanoscale clusters in the bulk material. A modal, representative nanoparticle size is determined subsequent to the final stage of artificial aging. Such investigations are imperative for the understanding, and ultimately the control, of nanoparticle nucleation and growth in this technologically important alloy.

  16. The MUSE project face to face with reality

    NASA Astrophysics Data System (ADS)

    Caillier, P.; Accardo, M.; Adjali, L.; Anwand, H.; Bacon, Roland; Boudon, D.; Brotons, L.; Capoani, L.; Daguisé, E.; Dupieux, M.; Dupuy, C.; François, M.; Glindemann, A.; Gojak, D.; Hansali, G.; Hahn, T.; Jarno, A.; Kelz, A.; Koehler, C.; Kosmalski, J.; Laurent, F.; Le Floch, M.; Lizon, J.-L.; Loupias, M.; Manescau, A.; Migniau, J. E.; Monstein, C.; Nicklas, H.; Parès, L.; Pécontal-Rousset, A.; Piqueras, L.; Reiss, R.; Remillieux, A.; Renault, E.; Rupprecht, G.; Streicher, O.; Stuik, R.; Valentin, H.; Vernet, J.; Weilbacher, P.; Zins, G.

    2012-09-01

    MUSE (Multi Unit Spectroscopic Explorer) is a second-generation instrument built for ESO (European Southern Observatory) to be installed in Chile on the VLT (Very Large Telescope). The MUSE project is supported by a European consortium of 7 institutes. After the critical turning point of shifting from the design to the manufacturing phase, the MUSE project has now completed the realization of its different sub-systems and should finalize its global integration and tests in Europe. To arrive at this point, many challenges had to be overcome: technical difficulties, non-compliances and procurement delays that seemed overwhelming at the time. Now is the time to face the results of our organization, of our strategy, of our choices. Now is the time to face the reality of the MUSE instrument. During the design phase, a plan was provided by the project management in order to achieve the realization of the MUSE instrument within specification, time and cost. This critical moment in the project's life, when the instrument takes shape and becomes real, is the opportunity to look not only at the outcome but also at how well we followed the original plan, what had to be changed or adapted, and what should have been.

  17. Dynamic route and departure time choice model based on self-adaptive reference point and reinforcement learning

    NASA Astrophysics Data System (ADS)

    Li, Xue-yan; Li, Xue-mei; Yang, Lingrun; Li, Jing

    2018-07-01

    Most previous studies on dynamic traffic assignment are based on a traditional analytical framework; for instance, the idea of Dynamic User Equilibrium has been widely used to depict both the route choice and the departure time choice. However, some recent studies have demonstrated that dynamic traffic flow assignment largely depends on travelers' degree of rationality, travelers' heterogeneity, and what traffic information the travelers have. In this paper, we develop a new self-adaptive multi-agent model to depict travelers' behavior in Dynamic Traffic Assignment. We use Cumulative Prospect Theory with heterogeneous reference points to model travelers' bounded rationality, and a reinforcement-learning model to depict travelers' route and departure time choices under imperfect information. We design the evolution rule for travelers' expected arrival time and the traffic flow assignment algorithm. Compared with the traditional model, the self-adaptive multi-agent model proposed in this paper can effectively help travelers avoid the rush hour. Finally, we report and analyze the effect of travelers' group behavior on the transportation system, and give some insights into the relation between travelers' group behavior and the performance of the transportation system.
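
    As a minimal, hypothetical illustration of the reinforcement-learning ingredient (not the authors' full model), tabular epsilon-greedy value updates over discrete departure-time slots might look like:

```python
import random

def q_learning_departure(rewards, episodes=2000, eps=0.1, alpha=0.1, seed=0):
    """Learn the value of each departure-time slot from experienced payoffs.

    `rewards[a]` is a callable returning the (possibly noisy) payoff of
    choosing slot a, e.g. minus the experienced travel plus schedule-delay cost.
    """
    rng = random.Random(seed)
    q = [0.0] * len(rewards)
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the best-known slot, sometimes explore.
        if rng.random() < eps:
            a = rng.randrange(len(q))
        else:
            a = max(range(len(q)), key=q.__getitem__)
        q[a] += alpha * (rewards[a]() - q[a])   # incremental value update
    return q
```

    In the full model each agent would run such updates while the experienced payoffs themselves evolve with the congestion produced by all agents' choices.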

  18. Integrated Intermodal Passenger Transportation System

    NASA Technical Reports Server (NTRS)

    Klock, Ryan; Owens, David; Schwartz, Henry; Plencner, Robert

    2012-01-01

    Modern transportation consists of many unique modes of travel. Each of these modes and their respective industries has evolved independently over time, forming a largely incoherent and inefficient overall transportation system. Travelers today are forced to spend unnecessary time and effort planning a trip through varying modes of travel, each with its own scheduling, pricing, and services, causing many travelers to simply rely on their relatively inefficient and expensive personal automobile. This paper presents a demonstration program system that not only collects and formats many different sources of trip planning information, but also combines these independent modes of travel in order to form optimal routes and itineraries. The results of this system show a mean decrease in inter-city travel time of 10 percent and a 25 percent reduction in carbon dioxide emissions over personal automobiles. Additionally, a 55 percent reduction in carbon dioxide emissions is observed for intra-city travel. A conclusion is that current resources are available, if somewhat hidden, to drastically improve point-to-point transportation in terms of time spent traveling, the cost of travel, and the ecological impact of a trip. Finally, future concepts are considered which could dramatically improve the interoperability and efficiency of the transportation infrastructure.
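
    Combining independent modes into a single optimal itinerary is, at heart, a shortest-path search over a multimodal graph; a minimal sketch with a hypothetical network (driving and rail legs, edge weights in minutes):

```python
import heapq

def shortest_time(graph, start, goal):
    """Dijkstra over a multimodal graph: {node: [(neighbor, minutes), ...]}."""
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue            # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None                 # goal unreachable
```

    A production planner would additionally respect timetables (edges only usable at scheduled departure times), fares and transfer penalties, but the search skeleton is the same.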

  19. KEY COMPARISON: Final report of APMP.T-K6 (original name APMP-IC-1-97): Comparison of humidity measurements using a dew point meter as a transfer standard

    NASA Astrophysics Data System (ADS)

    Li, Wang; Takahashi, C.; Hussain, F.; Hong, Yi; Nham, H. S.; Chan, K. H.; Lee, L. T.; Chahine, K.

    2007-01-01

    This APMP key comparison of humidity measurements using a dew point meter as a transfer standard was carried out among eight national metrology institutes from February 1999 to January 2001. NMC/SPRING, Singapore, was the pilot laboratory, and a chilled mirror dew point meter offered by NMIJ was used as a transfer standard. The transfer standard was calibrated by each participating institute against local humidity standards in terms of frost and dew point temperature. Each institute selected its frost/dew point temperature calibration points within the range from -70 °C to 20 °C frost/dew point in 5 °C steps. The majority of participating institutes measured from -60 °C to 20 °C frost/dew point, and a simple mean evaluation was performed over this range. The differences between the institute values and the simple means for all participating institutes are within two standard deviations from the mean values. Bilateral equivalence was analysed in terms of pair difference and single parameter Quantified Demonstrated Equivalence. The results are presented in the report. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
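
    The simple-mean evaluation amounts to computing each institute's deviation from the mean and checking it against two standard deviations; a minimal sketch with hypothetical frost-point corrections:

```python
import numpy as np

def compare_to_mean(values):
    """Simple-mean evaluation: deviation of each institute from the mean,
    flagged True when within two sample standard deviations."""
    v = np.asarray(values, dtype=float)
    diffs = v - v.mean()
    sd = v.std(ddof=1)
    return diffs, np.abs(diffs) <= 2 * sd
```

    The actual key-comparison analysis also propagates each institute's stated uncertainty, which this sketch omits.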

  20. Ensemble Pulsar Time Scale

    NASA Astrophysics Data System (ADS)

    Yin, Dong-shan; Gao, Yu-ping; Zhao, Shu-hong

    2017-07-01

    Millisecond pulsars can generate another type of time scale that is totally independent of the atomic time scale, because the physical mechanisms of the pulsar time scale and the atomic time scale are quite different from each other. Usually the pulsar timing observations are not evenly sampled, and the intervals between two data points range from several hours to more than half a month. Furthermore, these data sets are sparse. All this makes it difficult to generate an ensemble pulsar time scale. Hence, a new algorithm to calculate the ensemble pulsar time scale is proposed. Firstly, a cubic spline interpolation is used to densify the data set and make the intervals between data points uniform. Then, the Vondrak filter is employed to smooth the data set and remove high-frequency noise, and finally the weighted average method is adopted to generate the ensemble pulsar time scale. The newly released NANOGrav (North American Nanohertz Observatory for Gravitational Waves) 9-year data set is used to generate the ensemble pulsar time scale. This data set includes 9 years of observational data of 37 millisecond pulsars observed with the 100-meter Green Bank Telescope and the 305-meter Arecibo telescope. It is found that the algorithm used in this paper can effectively reduce the influence of noise in the pulsar timing residuals and improve the long-term stability of the ensemble pulsar time scale. Results indicate that the long-term (> 1 yr) stability of the ensemble pulsar time scale is better than 3.4 × 10-15.
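
    The three-step pipeline (densify, smooth, weighted average) can be sketched as follows; for brevity this sketch substitutes linear interpolation for the cubic spline and a moving average for the Vondrak filter, with hypothetical weights:

```python
import numpy as np

def ensemble_timescale(obs, grid, window=5):
    """Combine unevenly sampled pulsar timing residuals into one series.

    obs: list of (times, residuals, weight) per pulsar.
    grid: uniform time grid on which the ensemble is evaluated.
    """
    num = np.zeros_like(grid, dtype=float)
    den = 0.0
    kernel = np.ones(window) / window
    for t, r, w in obs:
        dense = np.interp(grid, t, r)                     # step 1: densify
        smooth = np.convolve(dense, kernel, mode="same")  # step 2: smooth
        num += w * smooth                                 # step 3: weighted sum
        den += w
    return num / den
```

    In practice the weights would follow each pulsar's timing stability, so that the noisiest pulsars contribute least to the ensemble.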

  1. Computational fluid dynamics: An engineering tool?

    NASA Astrophysics Data System (ADS)

    Anderson, J. D., Jr.

    1982-06-01

    Computational fluid dynamics in general, and time dependent finite difference techniques in particular, are examined from the point of view of direct engineering applications. Examples are given of the supersonic blunt body problem and gasdynamic laser calculations, where such techniques are clearly engineering tools. In addition, Navier-Stokes calculations of chemical laser flows are discussed as an example of a near engineering tool. Finally, calculations of the flowfield in a reciprocating internal combustion engine are offered as a promising future engineering application of computational fluid dynamics.

  2. Observations from Sarmizegetusa Sanctuary

    NASA Astrophysics Data System (ADS)

    Barbosu, M.

    2000 years ago, Sarmizegetusa Regia was the capital of ancient Dacia (today: Romania). It is known that the Dacian high priests used the Sanctuary of Sarmizegetusa not only for religious ceremonies, but also for astronomical observations. After having completed geodesic measurements, we analyzed the architecture of the sanctuary with its main points, directions and circles. We discuss here what kind of astronomical observations could have been made with the scientific knowledge of that time. The final section of this work is dedicated to the remarkable resemblance between Sarmizegetusa and Stonehenge.

  3. Retention in the Marine Corps: The Importance of Quality in the Career Enlisted Force

    DTIC Science & Technology

    1990-12-01

    study is on the issue of the quality of the labor supply at the first-term reenlistment point. Quality is assessed on the basis of an individual’s...Objective Memoranda (POM), are then reviewed by the Defense Resources Board. The Defense Resources Board makes the final decisions and issues its...the economy of the U.S. These striking events, that clearly affect the defense position of the country, were not considered at the time DoD, the

  4. Parametric motion control of robotic arms: A biologically based approach using neural networks

    NASA Technical Reports Server (NTRS)

    Bock, O.; D'Eleuterio, G. M. T.; Lipitkas, J.; Grodski, J. J.

    1993-01-01

    A neural network based system is presented which is able to generate point-to-point movements of robotic manipulators. The foundation of this approach is the use of prototypical control torque signals which are defined by a set of parameters. The parameter set is used for scaling and shaping of these prototypical torque signals to effect a desired outcome of the system. This approach is based on neurophysiological findings that the central nervous system stores generalized cognitive representations of movements called synergies, schemas, or motor programs. It has been proposed that these motor programs may be stored as torque-time functions in central pattern generators which can be scaled with appropriate time and magnitude parameters. The central pattern generators use these parameters to generate stereotypical torque-time profiles, which are then sent to the joint actuators. Hence, only a small number of parameters need to be determined for each point-to-point movement instead of the entire torque-time trajectory. This same principle is implemented for controlling the joint torques of robotic manipulators where a neural network is used to identify the relationship between the task requirements and the torque parameters. Movements are specified by the initial robot position in joint coordinates and the desired final end-effector position in Cartesian coordinates. This information is provided to the neural network which calculates six torque parameters for a two-link system. The prototypical torque profiles (one per joint) are then scaled by those parameters. After appropriate training of the network, our parametric control design allowed the reproduction of a trained set of movements with relatively high accuracy, and the production of previously untrained movements with comparable accuracy. We conclude that our approach was successful in discriminating between trained movements and in generalizing to untrained movements.
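
    The idea of scaling a stored prototypical torque-time profile by a small set of parameters can be sketched as follows (hypothetical prototype shape and parameter names; the paper's network chooses the parameters, which this sketch takes as given):

```python
import numpy as np

def scaled_torque(prototype, amplitude, duration, t):
    """Scale a prototypical torque profile in magnitude and time.

    `prototype` maps normalized time in [0, 1] to the torque shape;
    `amplitude` and `duration` are the per-movement parameters.
    """
    s = np.clip(t / duration, 0.0, 1.0)   # normalized movement time
    return amplitude * prototype(s)
```

    Only the few scaling parameters per joint need to be produced for each point-to-point movement, rather than the entire torque-time trajectory.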

  5. Backward-stochastic-differential-equation approach to modeling of gene expression

    NASA Astrophysics Data System (ADS)

    Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F.; Aguiar, Paulo

    2017-03-01

    In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).
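
    Gillespie's SSA, used here to generate the forward benchmark data, can be sketched for a minimal birth-death gene-expression model (assumed production rate k_prod and per-molecule degradation rate k_deg; not the specific networks used in the paper):

```python
import random

def gillespie_birth_death(k_prod, k_deg, n0, t_final, seed=0):
    """Gillespie SSA for a minimal gene-expression model:
    protein produced at rate k_prod and degraded at rate k_deg * n."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while True:
        a_total = k_prod + k_deg * n          # total propensity
        if a_total == 0.0:
            break
        t += rng.expovariate(a_total)         # exponential waiting time
        if t > t_final:
            break
        # Pick the reaction in proportion to its propensity.
        if rng.random() * a_total < k_prod:
            n += 1
        else:
            n -= 1
        times.append(t)
        counts.append(n)
    return times, counts
```

    Running many such trajectories forward and recording the copy number at the end point produces exactly the kind of final-condition data against which the backward (BSDE) inference is validated.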

  6. Horizontal alignment of 5' -> 3' intergene distance segment tropy with respect to the gene as the conserved basis for DNA transcription.

    PubMed

    Sarin, Hemant

    2017-03-01

    To study the conserved basis for gene expression in comparative cell types at opposite ends of the cell pressuromodulation spectrum, the lymphatic endothelial cell and the blood microvascular capillary endothelial cell. The mechanism for gene expression is studied in terms of the 5' -> 3' direction paired point tropy quotients (prpTQs) and the final 5' -> 3' direction episodic sub-episode block sums split-integrated weighted average-averaged gene overexpression tropy quotient (esebssiwaagoTQ). The final 5' -> 3' esebssiwaagoTQ classifies a lymphatic endothelial cell overexpressed gene as a supra-pressuromodulated gene (esebssiwaagoTQ ≥ 0.25 and < 0.75) every time, and classifies a blood microvascular capillary endothelial cell overexpressed gene every time as an infra-pressuromodulated gene (esebssiwaagoTQ < 0.25) (100% sensitivity; 100% specificity). Horizontal alignment of 5' -> 3' intergene distance segment tropy with respect to the gene is the basis for DNA transcription in the pressuromodulated state.

  7. Backward-stochastic-differential-equation approach to modeling of gene expression.

    PubMed

    Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F; Aguiar, Paulo

    2017-03-01

    In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).

  8. Using Global Invariant Manifolds to Understand Metastability in the Burgers Equation With Small Viscosity

    NASA Astrophysics Data System (ADS)

    Beck, Margaret; Wayne, C. Eugene

    2009-01-01

    The large-time behavior of solutions to the Burgers equation with small viscosity is described using invariant manifolds. In particular, a geometric explanation is provided for a phenomenon known as metastability, which in the present context means that solutions spend a very long time near the family of solutions known as diffusive N-waves before finally converging to a stable self-similar diffusion wave. More precisely, it is shown that in terms of similarity, or scaling, variables in an algebraically weighted L^2 space, the self-similar diffusion waves correspond to a one-dimensional global center manifold of stationary solutions. Through each of these fixed points there exists a one-dimensional, global, attractive, invariant manifold corresponding to the diffusive N-waves. Thus, metastability corresponds to a fast transient in which solutions approach this metastable manifold of diffusive N-waves, followed by a slow decay along this manifold, and, finally, convergence to the self-similar diffusion wave.
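
    The similarity (scaling) variables referred to can be written down explicitly; a standard formulation for Burgers' equation $u_t = \nu u_{xx} - u u_x$ is the following sketch of the usual convention (not quoted from the paper):

```latex
u(x,t) \;=\; \frac{1}{\sqrt{1+t}}\, w(\xi,\tau),
\qquad \xi = \frac{x}{\sqrt{1+t}},
\qquad \tau = \log(1+t).
```

    In these variables the self-similar diffusion waves become stationary solutions (fixed points), which is what allows them to be organized into the one-dimensional center manifold described above, with the diffusive N-waves forming the attached invariant manifolds.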

  9. Reproducibility of Ultrasound-Guided High Intensity Focused Ultrasound (HIFU) Thermal Lesions in Minimally-Invasive Brain Surgery

    NASA Astrophysics Data System (ADS)

    Zahedi, Sulmaz

    This study aims to prove the feasibility of using Ultrasound-Guided High Intensity Focused Ultrasound (USg-HIFU) to create thermal lesions in neurosurgical applications, allowing for precise ablation of brain tissue while simultaneously providing real-time imaging. To test the feasibility of the system, an optically transparent, HIFU-compatible tissue-mimicking phantom model was produced. USg-HIFU was then used for ablation of the phantom, with and without targets. Finally, ex vivo lamb brain tissue was imaged and ablated using the USg-HIFU system. Real-time ultrasound images and videos obtained throughout the ablation process showed clear lesion formation at the focal point of the HIFU transducer. Post-ablation gross and histopathology examinations were conducted to verify thermal and mechanical damage in the ex vivo lamb brain tissue. Finally, thermocouple readings were obtained, and HIFU field computer simulations were conducted to verify the findings. The study demonstrated the reproducibility of USg-HIFU thermal lesions for neurosurgical applications.

  10. Robustness of controllability and observability of linear time-varying systems with application to the emergency control of power systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sastry, S. S.; Desoer, C. A.

    1980-01-01

    Fixed point methods from nonlinear analysis are used to establish conditions under which the uniform complete controllability of linear time-varying systems is preserved under non-linear perturbations in the state dynamics, and the zero-input uniform complete observability of linear time-varying systems is preserved under non-linear perturbations in the state dynamics and output read-out map. Algorithms for computing the specific input to steer the perturbed systems from a given initial state to a given final state are also presented. As an application, a very specific emergency control of an interconnected power system is formulated as a steering problem, and it is shown that this emergency control is indeed possible in finite time.

  11. Comparison of bone tunnel and suture anchor techniques in the modified Broström procedure for chronic lateral ankle instability.

    PubMed

    Hu, Chang-Yong; Lee, Keun-Bae; Song, Eun-Kyoo; Kim, Myung-Sun; Park, Kyung-Soon

    2013-08-01

    The modified Broström procedure is frequently used to treat chronic lateral ankle instability. There are 2 common methods of the modified Broström procedure, which are the bone tunnel and suture anchor techniques. To compare the clinical outcomes of the modified Broström procedure using the bone tunnel and suture anchor techniques. Cohort study; Level of evidence, 2. Eighty-one patients (81 ankles) treated with the modified Broström procedure for chronic lateral ankle instability constituted the study cohort. The 81 ankles were divided into 2 groups, namely, a bone tunnel technique (BT group; 40 ankles) and a suture anchor technique (SA group; 41 ankles). The Karlsson score, American Orthopaedic Foot and Ankle Society (AOFAS) ankle-hindfoot score, anterior talar translation, and talar tilt angle were used to evaluate clinical and radiographic outcomes. The BT group consisted of 32 men and 8 women with a mean age of 34.8 years at surgery and a mean follow-up duration of 34.2 months. The SA group consisted of 33 men and 8 women with a mean age of 33.3 years at surgery and a mean follow-up duration of 32.8 months. Mean Karlsson scores improved significantly from 57.0 points preoperatively to 94.9 points at final follow-up in the BT group and from 59.9 points preoperatively to 96.4 points at final follow-up in the SA group. Mean AOFAS scores also improved from 64.2 points preoperatively to 97.8 points at final follow-up in the BT group and from 70.3 points preoperatively to 97.4 points at final follow-up in the SA group. Mean anterior talar translations in the BT group and SA group improved from 9.0 mm and 9.2 mm preoperatively to 6.5 mm and 6.8 mm at final follow-up, respectively. Mean talar tilt angles were 12.0° in the BT group and 12.5° in the SA group preoperatively and 8.8° at final follow-up for both groups. 
No significant differences were found between the 2 groups in terms of the Karlsson score, AOFAS score, anterior talar translation, and talar tilt angle. The bone tunnel and suture anchor techniques of the modified Broström procedure showed similar good functional and radiographic outcomes. Both techniques appear to be effective and reliable methods for the treatment of chronic lateral ankle instability.

  12. Optimal impulsive time-fixed orbital rendezvous and interception with path constraints

    NASA Technical Reports Server (NTRS)

    Taur, D.-R.; Prussing, J. E.; Coverstone-Carroll, V.

    1990-01-01

    Minimum-fuel, impulsive, time-fixed solutions are obtained for the problem of orbital rendezvous and interception with interior path constraints. Transfers between coplanar circular orbits in an inverse-square gravitational field are considered, subject to a circular path constraint representing a minimum or maximum permissible orbital radius. Primer vector theory is extended to incorporate path constraints. The optimal number of impulses, their times and positions, and the presence of initial or final coasting arcs are determined. The existence of constraint boundary arcs and boundary points is investigated as well as the optimality of a class of singular arc solutions. To illustrate the complexities introduced by path constraints, an analysis is made of optimal rendezvous in field-free space subject to a minimum radius constraint.

  13. Electrokinetic Analysis of Cell Translocation in Low-Cost Microfluidic Cytometry for Tumor Cell Detection and Enumeration.

    PubMed

    Guo, Jinhong; Pui, Tze Sian; Ban, Yong-Ling; Rahman, Abdur Rub Abdur; Kang, Yuejun

    2013-12-01

    Conventional Coulter counters have been an important tool in biological cell assays for several decades. Recently, emerging portable Coulter counters have demonstrated their merits in point-of-care diagnostics, such as on-chip detection and enumeration of circulating tumor cells (CTCs). The working principle is based on the cell translocation time and the amplitude of the electrical current change that the cell induces. In this paper, we provide an analysis of a Coulter counter that evaluates the hydrodynamic and electrokinetic properties of polystyrene microparticles in a microfluidic channel. The hydrodynamic and electrokinetic forces are concurrently analyzed to determine the translocation time and the electrical current pulses induced by the particles. Finally, we characterize the chip performance for CTC detection. The experimental results validate the numerical analysis of the microfluidic chip. The presented model can provide critical insight and guidance for developing micro-Coulter counters for point-of-care prognosis.
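    The working principle described above, a resistive pulse whose duration gives the translocation time and whose amplitude gives the current change, can be sketched with a few lines of signal processing. The trace below is synthetic and its parameters (baseline, dip depth, sampling rate) are invented for illustration; they are not values from the paper.

```python
import numpy as np

# Synthetic baseline current (arbitrary units) sampled at 100 kHz,
# with a rectangular dip while a particle transits the aperture.
fs = 100_000                        # sampling rate, Hz
current = np.full(1000, 10.0)       # 10 ms trace at baseline current
current[400:450] -= 1.5             # 0.5 ms translocation dip

# Threshold detection: a pulse is where the current drops below baseline.
baseline = np.median(current)
below = current < baseline - 0.5            # detection threshold
edges = np.diff(below.astype(int))
start = np.where(edges == 1)[0][0] + 1      # first sample inside the pulse
stop = np.where(edges == -1)[0][0] + 1      # first sample after the pulse

translocation_time = (stop - start) / fs            # seconds
amplitude = baseline - current[start:stop].min()    # current change

print(translocation_time, amplitude)
```

The same two quantities, pulse width and pulse depth, are what a real micro-Coulter counter extracts per translocation event.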

  14. Paint stripping with a XeCl laser: basic research and processing techniques

    NASA Astrophysics Data System (ADS)

    Raiber, Armin; Plege, Burkhard; Holbein, Reinhold; Callies, Gert; Dausinger, Friedrich; Huegel, Helmut

    1995-03-01

    This work investigates the possibility of ablating paint from aerospace materials with a XeCl laser. The main advantage of this type of laser is the low heat generation during the ablation process, which is important when stripping thermally sensitive materials such as polymer composites. The dependence of the ablation process on energy density, pulse frequency, and other laser parameters is presented. The results show the influence of chemical and UV artificial aging processes on ablation depth. Further, the behavior of the time-averaged transmission of the laser beam through the plasma is described as a function of the energy density. The time-varying temperature in the substrate at the point of ablation was measured during the process; an abrupt change in the temperature variation indicates the end of paint ablation. This measured temperature variation is compared with the calculated temperatures, which are derived from the 1D heat equations. Finally, first results on repaintability and ablation rates are presented.
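    The comparison with temperatures "derived from the 1D heat equations" can be illustrated with a generic explicit finite-difference solver for 1D heat conduction. This is a sketch only: the diffusivity, domain, and boundary conditions below are arbitrary illustrative choices, not material data from the paper.

```python
import numpy as np

# Explicit (FTCS) solver for the 1D heat equation u_t = alpha * u_xx
# on [0, 1] with u = 0 at both ends; stable when alpha*dt/dx**2 <= 0.5.
alpha = 1.0                      # thermal diffusivity (illustrative value)
nx = 51
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha         # ratio 0.4, within the stability limit
x = np.linspace(0.0, 1.0, nx)

u = np.sin(np.pi * x)            # initial temperature profile
steps = int(round(0.05 / dt))
for _ in range(steps):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

# The exact solution for this mode is sin(pi*x) * exp(-alpha*pi**2*t),
# so the numerical error can be checked directly.
exact = np.sin(np.pi * x) * np.exp(-alpha * np.pi**2 * steps * dt)
print(np.max(np.abs(u - exact)))
```

In a real comparison the measured thermocouple trace would play the role of `exact`, and the abrupt change in slope would mark the end of paint ablation.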

  15. Hierarchical and symmetric infant image registration by robust longitudinal-example-guided correspondence detection

    PubMed Central

    Wu, Yao; Wu, Guorong; Wang, Li; Munsell, Brent C.; Wang, Qian; Lin, Weili; Feng, Qianjin; Chen, Wufan; Shen, Dinggang

    2015-01-01

    Purpose: To investigate anatomical differences across individual subjects, or longitudinal changes in early brain development, it is important to perform accurate image registration. However, due to fast brain development and dynamic tissue appearance changes, it is very difficult to align infant brain images acquired from birth to 1 year of age. Methods: To solve this challenging problem, a novel image registration method is proposed to align two infant brain images, regardless of age at acquisition. The main idea is to utilize the growth trajectories, or spatial-temporal correspondences, learned from a set of longitudinal training images, to guide the registration of two different time-point images with different image appearances. Specifically, in the training stage, an intrinsic growth trajectory is first estimated for each training subject using the longitudinal images. To register two new infant images with a potentially large age gap, the corresponding image patches between each new image and its respective training images of similar age are identified. Finally, the registration between the two new images can be assisted by the learned growth trajectories from one time point to another that were established in the training stage. To further improve registration accuracy, the proposed method is combined with a hierarchical and symmetric registration framework that can iteratively add new key points in both images to steer the estimation of the deformation between the two infant brain images under registration. Results: To evaluate image registration accuracy, the proposed method was used to align 24 infant subjects at five different time points (2 weeks, 3 months, 6 months, 9 months, and 12 months of age). Compared to the state-of-the-art methods, the proposed method demonstrated superior registration performance.
Conclusions: The proposed method addresses the difficulties of infant brain registration and produces better results than existing state-of-the-art registration methods. PMID:26133617
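    The patch-correspondence step described above can be illustrated with normalized cross-correlation, a common similarity measure for matching image patches; the abstract does not specify the paper's actual matching criterion, so this is a generic stand-in on synthetic data.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

# Toy "image" and a patch taken from it; find the best-matching location
# by exhaustively scoring every 8x8 window against the patch.
rng = np.random.default_rng(0)
image = rng.random((32, 32))
patch = image[10:18, 14:22]          # ground-truth location (10, 14)

best_score, best_pos = -2.0, None
for i in range(32 - 8 + 1):
    for j in range(32 - 8 + 1):
        s = ncc(image[i:i + 8, j:j + 8], patch)
        if s > best_score:
            best_score, best_pos = s, (i, j)

print(best_pos, best_score)
```

The true location scores 1.0, so the scan recovers it exactly; in the registration method this kind of correspondence search is what links a new image to the training images of similar age.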

  16. The Effects of a Behavioral Metacognitive Task in High School Biology Students

    NASA Astrophysics Data System (ADS)

    Sussan, Danielle

    Three studies were conducted to examine the effects of a behavioral metacognitive technique on lessening students' illusions of learning. It was proposed that students' study time strategies, and consequently their final test performance, in a classroom setting could be influenced positively by having students engage in metacognitive processing via making wagers regarding their learning. A novel metacognitive paradigm was implemented in three studies during which high school Biology students made prospective (during study, prior to test) metacognitive judgments using a "betting" paradigm. This behavioral betting paradigm asked students to select either "high confidence" or "low confidence" based on how confident they felt that they would get a Biology concept correct if they were tested later. If a student chose "high confidence" and got the answer right on a later test, then he would gain 3 points. If he chose "high confidence" and got the answer wrong, he would lose 3 points. If a student chose "low confidence," he would gain one point, regardless of accuracy. Students then made study time allocation decisions by choosing whether they needed to study a particular concept "a lot more," "a little more," or "not at all." Afterwards, students had three minutes to study whichever terms they had selected, for any duration during those three minutes. Finally, a performance test was administered. The results showed that people are generally good at monitoring their own knowledge, in that students performed better on items judged with high confidence bets than on items judged with low confidence bets. Data analyses compared study time intentions, actual study time, and accuracy at final test for students who were required to bet versus those who were not. Results showed that students for whom bets were required tended to select relatively longer study times than those for whom no bets were required.
That is, the intentions of those who bet were less overconfident than those who did not bet. However, there were no differences in actual study time or, as one would subsequently expect, in final test performance between the two conditions. The data provide partial evidence of the beneficial effects of directly implementing a non-intrusive metacognitive activity in a classroom setting. Students who completed this prospective bet judgment exhibited, at least, a greater willingness to study. That is, enforcing a betting strategy can increase the deliberative processes of the learner, which in turn can lessen people's illusions of knowing. By encouraging students to deliberate about their own learning, by making prospective bets, students' study time intentions were increased. Thus, it may be helpful to encourage students explicitly to use metacognitive strategies. It was unfortunate that students did not follow through on their intentions sufficiently during actual study, however, and a variety of reasons for this breakdown are discussed. The method used in the current study could potentially benefit students in any classroom setting. Using this non-verbal, behavioral betting paradigm, students are required to engage in metacognitive processes without having to take part in an invasive intervention. The betting paradigm would be easy for teachers to incorporate into their classrooms as it can be incorporated into class work, homework, or even tests and assessments. By asking students to make confidence bets, students may engage in metacognitive processing which they may not have done spontaneously.

  17. Chaotic scattering in an open vase-shaped cavity: Topological, numerical, and experimental results

    NASA Astrophysics Data System (ADS)

    Novick, Jaison Allen

    We present a study of trajectories in a two-dimensional, open, vase-shaped cavity in the absence of forces. The classical trajectories freely propagate between elastic collisions. Bound trajectories, regular scattering trajectories, and chaotic scattering trajectories are present in the vase. Most importantly, we find that classical trajectories passing through the vase's mouth escape without return. In our simulations, we propagate bursts of trajectories from point sources located along the vase walls. We record the time for escaping trajectories to pass through the vase's neck. Constructing a plot of escape time versus the initial launch angle for the chaotic trajectories reveals a vastly complicated recursive structure: a fractal. This fractal structure can be understood by a suitable coordinate transform. Reducing the dynamics to two dimensions reveals that the chaotic dynamics are organized by a homoclinic tangle, which is formed by the union of infinitely long, intersecting stable and unstable manifolds. This study is broken down into three major components. We first present a topological theory that extracts the essential topological information from a finite subset of the tangle and encodes this information in a set of symbolic dynamical equations. These equations can be used to predict a topologically forced minimal subset of the recursive structure seen in numerically computed escape time plots. We present three applications of the theory and compare these predictions to our simulations. The second component is a presentation of an experiment in which the vase was constructed from Teflon walls, with an ultrasound transducer used as a point source. We compare the escaping signal to a classical simulation and find agreement between the two. Finally, we present an approximate solution to the time-independent Schrodinger equation for escaping waves.
We choose a set of points at which to evaluate the wave function and interpolate trajectories connecting the source point to each "detector point". We then construct the wave function directly from these classical trajectories using the two-dimensional WKB approximation. The wave function is Fourier Transformed using a Fast Fourier Transform algorithm resulting in a spectrum in which each peak corresponds to an interpolated trajectory. Our predictions are based on an imagined experiment that uses microwave propagation within an electromagnetic waveguide. Such an experiment exploits the fact that under suitable conditions both Maxwell's Equations and the Schrodinger Equation can be reduced to the Helmholtz Equation. Therefore, our predictions, while compared to the electromagnetic experiment, contain information about the quantum system. Identifying peaks in the transmission spectrum with chaotic trajectories will allow for an additional experimental verification of the intermediate recursive structure. Finally, we summarize our results and discuss possible extensions of this project.
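    The final step above, identifying each peak of a Fourier-transformed signal with an individual trajectory, can be sketched generically. The signal below is synthetic (two sinusoids standing in for the contributions of two trajectories); it is not data from the experiment.

```python
import numpy as np

# Synthetic "transmission signal": two spectral components, plus the
# FFT peak search that would associate each peak with a trajectory.
fs = 1024                           # samples over one unit of "time"
t = np.arange(fs) / fs
signal = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(fs, d=1 / fs)

# The two largest peaks sit at the two component frequencies.
top2 = freqs[np.argsort(spectrum)[-2:]]
print(sorted(top2.tolist()))        # -> [50.0, 120.0]
```

In the proposed experiment the peak positions would be compared against the interpolated classical trajectories rather than against known input frequencies.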

  18. Study on Low Illumination Simultaneous Polarization Image Registration Based on Improved SURF Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Wanjun; Yang, Xu

    2017-12-01

    Registration of simultaneous polarization images is a prerequisite for subsequent image fusion operations. However, for all-weather operation the polarized camera's exposure time must be kept fixed, so polarization images captured under low illumination can be too dark for the SURF algorithm to extract feature points, making registration impossible; this paper therefore proposes an improved SURF algorithm. First, a luminance operator is used to raise the overall brightness of the low-illumination image. An integral image is then created, the Hessian matrix is used to extract interest points and determine their dominant orientations, and Haar wavelet responses in the X and Y directions are computed to build the SURF descriptors. The RANSAC function is then used for precise matching; it eliminates wrong matching points and improves the accuracy rate. Finally, the brightness of the registered polarization image is restored, so the polarization content of the image is unaffected. Results show that the improved SURF algorithm performs well under low illumination conditions.
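    The brighten-register-restore idea can be sketched in a few lines. The abstract does not specify the luminance operator, so a gamma correction is used here purely as a stand-in; the key property it illustrates is invertibility, which is what lets the original brightness be restored after registration.

```python
import numpy as np

def brighten(img, gamma=0.5):
    """Gamma correction: gamma < 1 brightens an image scaled to [0, 1]."""
    return np.power(img, gamma)

def restore(img, gamma=0.5):
    """Invert the gamma correction after registration."""
    return np.power(img, 1.0 / gamma)

# Dark synthetic polarization image with values in [0, 0.1].
rng = np.random.default_rng(1)
dark = rng.random((8, 8)) * 0.1

bright = brighten(dark)          # feature extraction would run on this
round_trip = restore(bright)     # brightness restored after registration

print(np.allclose(round_trip, dark))
```

Because the transform is invertible, the restored image matches the original to floating-point precision, so the polarization information is preserved.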

  19. Collocation and Galerkin Time-Stepping Methods

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2011-01-01

    We study the numerical solution of ordinary differential equations by one-step methods in which the solution at t_n is known and that at t_(n+1) is to be calculated. The approaches employed are collocation, continuous Galerkin (CG), and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using the quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t_(n+1)), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of the integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
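    The Radau IIA connection can be made concrete with the two-stage (order-3) member of the family, whose coefficients use the right-Radau points c = (1/3, 1). The coefficients below are the standard published ones; applying them to the linear test problem y' = -y is an illustration, not code from the paper.

```python
import numpy as np

# Two-stage Radau IIA (order 3), written as an implicit Runge-Kutta method.
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])

def radau_step(y, h, lam=-1.0):
    """One step for y' = lam*y: the stage equations k = lam*(y + h*A@k)
    are linear here, so solve (I - h*lam*A) k = lam*y*1 directly."""
    k = np.linalg.solve(np.eye(2) - h * lam * A, lam * y * np.ones(2))
    return y + h * b @ k

y = 1.0
h = 0.1
for _ in range(10):              # integrate y' = -y from t=0 to t=1
    y = radau_step(y, h)

print(y, np.exp(-1.0))           # order-3 accuracy: the two agree closely
```

For a nonlinear right-hand side the stage equations would require a Newton iteration instead of a single linear solve, but the structure is the same.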

  20. Analysis of 2D Torus and Hub Topologies of 100Mb/s Ethernet for the Whitney Commodity Computing Testbed

    NASA Technical Reports Server (NTRS)

    Pedretti, Kevin T.; Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)

    1997-01-01

    A variety of different network technologies and topologies are currently being evaluated as part of the Whitney Project. This paper reports on the implementation and performance of a Fast Ethernet network configured in a 4x4 2D torus topology in a testbed cluster of 'commodity' Pentium Pro PCs. Several benchmarks were used for performance evaluation: an MPI point to point message passing benchmark, an MPI collective communication benchmark, and the NAS Parallel Benchmarks version 2.2 (NPB2). Our results show that for point to point communication on an unloaded network, the hub and 1 hop routes on the torus have about the same bandwidth and latency. However, the bandwidth decreases and the latency increases on the torus for each additional route hop. Collective communication benchmarks show that the torus provides roughly four times more aggregate bandwidth and eight times faster MPI barrier synchronizations than a hub based network for 16 processor systems. Finally, the SOAPBOX benchmarks, which simulate real-world CFD applications, generally demonstrated substantially better performance on the torus than on the hub. In the few cases the hub was faster, the difference was negligible. In total, our experimental results lead to the conclusion that for Fast Ethernet networks, the torus topology has better performance and scales better than a hub based network.
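    The per-hop latency penalty reported above is a direct function of hop distance in the torus, which is easy to compute. The sketch below (an illustration, not code from the paper) gives minimum hop counts for a 4x4 2D torus with wraparound links.

```python
def torus_hops(a, b, n=4):
    """Minimum hop count between nodes a and b in an n x n 2D torus,
    taking the shorter of the direct and wraparound routes per dimension."""
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    return min(dx, n - dx) + min(dy, n - dy)

# Worst case (network diameter) in a 4x4 torus: 2 hops in each dimension.
diameter = max(torus_hops((0, 0), (i, j))
               for i in range(4) for j in range(4))
print(diameter)   # -> 4
```

So point-to-point routes on the 4x4 torus span 1 to 4 hops, while every route through the hub is a single shared segment; this is the trade-off the benchmarks quantify.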

  1. Post-Flight EDL Entry Guidance Performance of the 2011 Mars Science Laboratory Mission

    NASA Technical Reports Server (NTRS)

    Mendeck, Gavin F.; McGrew, Lynn Craig

    2013-01-01

    The 2011 Mars Science Laboratory was the first Mars guided entry which safely delivered the rover to a landing within a touchdown ellipse of 19.1 km x 6.9 km. The Entry Terminal Point Controller guidance algorithm is derived from the final phase Apollo Command Module guidance and, like Apollo, modulates the bank angle to control the range flown. The guided entry performed as designed without any significant exceptions. The Curiosity rover was delivered about 2.2 km from the expected touchdown. This miss distance is attributed to little time to correct the downrange drift from the final bank reversal and a suspected tailwind during heading alignment. The successful guided entry for the Mars Science Laboratory lays the foundation for future Mars missions to improve upon.

  2. An Exploration Of Fuel Optimal Two-impulse Transfers To Cyclers in the Earth-Moon System

    NASA Astrophysics Data System (ADS)

    Hosseinisianaki, Saghar

    2011-12-01

    This research explores optimal two-impulse transfers between a low Earth orbit and cycler orbits in the Earth-Moon circular restricted three-body framework, emphasizing the optimization strategy. Cyclers are periodic orbits that encounter both the Earth and the Moon periodically. A spacecraft on such a trajectory is under the influence of both the Earth's and the Moon's gravitational fields. Cyclers have gained recent interest as baseline orbits for several Earth-Moon mission concepts, notably in relation to human exploration. In this thesis it is shown that a direct optimization starting from the classic Lambert initial guess may not be adequate for these problems, and a three-step optimization solver is proposed to improve the domain of convergence toward an optimal solution. The first step consists of finding feasible trajectories with a given transfer time. I employ Lambert's problem to provide an initial guess and optimize the error in arrival position; this includes an analysis of the suitability of Lambert's solution as an initial guess. Once a feasible trajectory is found, the velocity impulse is only a function of the transfer time and the phases of the departure and arrival points. The second step is the optimization of the impulse over the transfer time, which yields the minimum-impulse transfer for fixed end points. Finally, the third step maps the optimal solutions as the end points are varied.
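    The second step, minimizing total impulse over a transfer parameter, can be illustrated in the much simpler two-body limit, where the optimum between coplanar circular orbits is the classical Hohmann transfer. This is a sketch of the optimization quantity only (vis-viva delta-v in normalized units), not the three-body computation of the thesis.

```python
import math

def two_impulse_dv(r1, r2, mu=1.0):
    """Total delta-v of the two-impulse Hohmann transfer between
    coplanar circular orbits of radii r1 and r2 (vis-viva equation)."""
    a = 0.5 * (r1 + r2)                    # transfer ellipse semi-major axis
    v1 = math.sqrt(mu / r1)                # circular speed at r1
    v2 = math.sqrt(mu / r2)                # circular speed at r2
    vp = math.sqrt(mu * (2 / r1 - 1 / a))  # transfer speed at r1
    va = math.sqrt(mu * (2 / r2 - 1 / a))  # transfer speed at r2
    return abs(vp - v1) + abs(v2 - va)

print(two_impulse_dv(1.0, 1.5))
```

In the three-body setting this closed form is unavailable, which is why the thesis sweeps the impulse over transfer time numerically before mapping the optima over end points.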

  4. A systematic review of near real-time and point-of-care clinical decision support in anesthesia information management systems.

    PubMed

    Simpao, Allan F; Tan, Jonathan M; Lingappan, Arul M; Gálvez, Jorge A; Morgan, Sherry E; Krall, Michael A

    2017-10-01

    Anesthesia information management systems (AIMS) are sophisticated hardware and software technology solutions that can provide electronic feedback to anesthesia providers. This feedback can be tailored to provide clinical decision support (CDS) to aid clinicians with patient care processes, documentation compliance, and resource utilization. We conducted a systematic review of peer-reviewed articles on near real-time and point-of-care CDS within AIMS using the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols. Studies were identified by searches of the electronic databases Medline and EMBASE. Two reviewers screened studies based on title, abstract, and full text. Studies that were similar in intervention and desired outcome were grouped into CDS categories. Three reviewers graded the evidence within each category. The final analysis included 25 articles on CDS as implemented within AIMS. CDS categories included perioperative antibiotic prophylaxis, post-operative nausea and vomiting prophylaxis, vital sign monitors and alarms, glucose management, blood pressure management, ventilator management, clinical documentation, and resource utilization. Of these categories, the reviewers graded perioperative antibiotic prophylaxis and clinical documentation as having strong evidence per the peer reviewed literature. There is strong evidence for the inclusion of near real-time and point-of-care CDS in AIMS to enhance compliance with perioperative antibiotic prophylaxis and clinical documentation. Additional research is needed in many other areas of AIMS-based CDS.

  5. The study of infrared target recognition at sea background based on visual attention computational model

    NASA Astrophysics Data System (ADS)

    Wang, Deng-wei; Zhang, Tian-xu; Shi, Wen-jun; Wei, Long-sheng; Wang, Xiao-ping; Ao, Guo-qing

    2009-07-01

    Infrared images of sea backgrounds are notorious for their low signal-to-noise ratio, so target recognition in such images with traditional methods is very difficult. In this paper, we present a novel target recognition method based on the integration of a visual attention computational model and a conventional approach (selective filtering and segmentation). The two distinct image processing techniques are combined so as to exploit the strengths of both. The visual attention algorithm searches for salient regions automatically, represents them by a set of winner points, and displays the salient regions as circles centered at these winner points. This provides a priori knowledge for the filtering and segmentation process. Based on each winner point, we construct a rectangular region to facilitate filtering and segmentation; a labeling operation is then applied selectively as required. Using the labeled information, we obtain the position of the region of interest from the final segmentation result, mark its centroid on the corresponding original image, and complete the target localization. The processing time depends not on the size of the image but on the salient regions, so the time consumed is greatly reduced. The method is applied to the recognition of several kinds of real infrared images, and the experimental results demonstrate the effectiveness of the proposed algorithm.
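    The winner-point and rectangular-region step can be sketched as follows; a plain argmax on a synthetic saliency map stands in for the full visual attention model, and the map and window size are invented for illustration.

```python
import numpy as np

# Synthetic saliency map with one bright (salient) location.
saliency = np.zeros((64, 64))
saliency[40, 25] = 1.0

# Winner-take-all: the most salient pixel defines the winner point.
winner = np.unravel_index(np.argmax(saliency), saliency.shape)

# Rectangular region of interest around the winner point, clipped to the
# image bounds, on which filtering and segmentation would then operate.
half = 8
top = max(winner[0] - half, 0)
left = max(winner[1] - half, 0)
roi = saliency[top:top + 2 * half, left:left + 2 * half]

print(winner, roi.shape)
```

Because only the ROI is processed further, the downstream cost scales with the number and size of salient regions rather than with the full image, which is the speed-up the abstract describes.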

  6. Projecting Event-Based Analysis Dates in Clinical Trials: An Illustration Based on the International Duration Evaluation of Adjuvant Chemotherapy (IDEA) Collaboration. Projecting analysis dates for the IDEA collaboration.

    PubMed

    Renfro, Lindsay A; Grothey, Axel M; Paul, James; Floriani, Irene; Bonnetain, Franck; Niedzwiecki, Donna; Yamanaka, Takeharu; Souglakos, Ioannis; Yothers, Greg; Sargent, Daniel J

    2014-12-01

    Clinical trials are expensive and lengthy, and the success of a given trial depends on observing a prospectively defined number of patient events required to answer the clinical question. The time at which this analysis occurs depends on both patient accrual and primary event rates, which typically vary throughout the trial's duration. We demonstrate real-time analysis date projections using data from a collection of six clinical trials that are part of the IDEA collaboration, an international preplanned pooling of data from six trials testing the duration of adjuvant chemotherapy in stage III colon cancer, and we additionally consider the hypothetical impact of one trial's early termination of follow-up. In the absence of outcome data from IDEA, monthly accrual rates for each of the six IDEA trials were used to project subsequent trial-specific accrual, while historical data from similar Adjuvant Colon Cancer Endpoints (ACCENT) Group trials were used to construct a parametric model for IDEA's primary endpoint, disease-free survival, under the same treatment regimen. With this information, and using the planned total accrual from each IDEA trial protocol, individual patient accrual and event dates were simulated and the overall IDEA interim and final analysis times projected. Projections were then compared with actual (previously undisclosed) trial-specific event totals at a recent census time for validation. The change in the projected final analysis date assuming early termination of follow-up for one IDEA trial was also calculated. Trial-specific predicted event totals were close to the actual number of events per trial at the recent census date, with the overall IDEA projected number of events off by only eight. Potential early termination of follow-up by one IDEA trial was estimated to postpone the overall IDEA final analysis date by 9 months.
Real-time projection of the final analysis time during a trial, or the overall analysis time during a trial collaborative such as IDEA, has practical implications for trial feasibility when these projections are translated into additional time and resources required.
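    The projection logic, combining an accrual model with an event-time model and solving for the calendar time at which the target event count is reached, can be sketched deterministically. Uniform accrual and exponential event times are assumptions made here for illustration; all the rates below are invented, and the IDEA analysis simulated patient-level dates rather than using expected counts.

```python
import math

def expected_events(T, accrual_rate=10.0, accrual_end=24.0, lam=0.02):
    """Expected event count at calendar time T (months): patients accrue
    uniformly until accrual_end; event times are exponential with rate lam."""
    end = min(T, accrual_end)
    if end <= 0:
        return 0.0
    # integral of accrual_rate * (1 - exp(-lam*(T - s))) over s in [0, end]
    return accrual_rate * (end - (math.exp(-lam * (T - end))
                                  - math.exp(-lam * T)) / lam)

def projected_analysis_time(target, hi=600.0):
    """Bisection for the calendar time at which `target` events are expected
    (expected_events is monotone increasing in T)."""
    lo = 0.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if expected_events(mid) < target:
            lo = mid
        else:
            hi = mid
    return hi

t_final = projected_analysis_time(120.0)
print(t_final, expected_events(t_final))
```

Re-running such a projection as observed accrual and event rates update is what makes the analysis date a real-time, rather than one-off, estimate.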

  7. A method for automatic feature points extraction of human vertebrae three-dimensional model

    NASA Astrophysics Data System (ADS)

    Wu, Zhen; Wu, Junsheng

    2017-05-01

    A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.

  8. Biological signatures of dynamic river networks from a coupled landscape evolution and neutral community model

    NASA Astrophysics Data System (ADS)

    Stokes, M.; Perron, J. T.

    2017-12-01

    Freshwater systems host exceptionally species-rich communities whose spatial structure is dictated by the topology of the river networks they inhabit. Over geologic time, river networks are dynamic; drainage basins shrink and grow, and river capture establishes new connections between previously separated regions. It has been hypothesized that these changes in river network structure influence the evolution of life by exchanging and isolating species, perhaps boosting biodiversity in the process. However, no general model exists to predict the evolutionary consequences of landscape change. We couple a neutral community model of freshwater organisms to a landscape evolution model in which the river network undergoes drainage divide migration and repeated river capture. Neutral community models are macro-ecological models that include stochastic speciation and dispersal to produce realistic patterns of biodiversity. We explore the consequences of three modes of speciation - point mutation, time-protracted, and vicariant (geographic) speciation - by tracking patterns of diversity in time and comparing the final result to an equilibrium solution of the neutral model on the final landscape. Under point mutation, a simple model of stochastic and instantaneous speciation, the results are identical to the equilibrium solution and indicate the dominance of the species-area relationship in forming patterns of diversity. The number of species in a basin is proportional to its area, and regional species richness reaches its maximum when drainage area is evenly distributed among sub-basins. Time-protracted speciation is also modeled as a stochastic process, but in order to produce more realistic rates of diversification, speciation is not assumed to be instantaneous. Rather, each new species must persist for a certain amount of time before it is considered to be established. 
When vicariance (geographic speciation) is included, there is a transient signature of increased regional diversity after river capture. The results indicate that the mode of speciation and the rate of speciation relative to the rate of divide migration determine the evolutionary signature of river capture.
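
The point-mutation mode can be conveyed with a minimal Hubbell-style neutral model. This sketch is a single well-mixed community with no river-network structure, so it only illustrates the species-area effect the abstract describes: larger communities (basins) equilibrate to higher richness.

```python
import random

def neutral_point_mutation(J, nu, steps, seed=0):
    """Minimal neutral community of J individuals: each step one
    individual dies and is replaced either by a brand-new species
    (probability nu, 'point mutation') or by the offspring of a
    randomly chosen survivor.  Returns the final species richness."""
    rng = random.Random(seed)
    community = [0] * J           # start as a single species
    next_species = 1
    for _ in range(steps):
        i = rng.randrange(J)      # death
        if rng.random() < nu:     # stochastic, instantaneous speciation
            community[i] = next_species
            next_species += 1
        else:                     # reproduction / dispersal
            community[i] = community[rng.randrange(J)]
    return len(set(community))
```

With the same speciation rate, the larger community supports proportionally more species at equilibrium, mirroring the species-area relationship found under point mutation.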

  9. Crystallization of supercooled liquids

    NASA Astrophysics Data System (ADS)

    Odagaki, Takashi; Shikuya, Yuuna

    2014-03-01

    We investigate the crystallization process on the basis of the free energy landscape (FEL) approach to non-equilibrium systems. In this approach, the crystallization time is given by the first passage time of the representative point arriving at the crystalline basin in the FEL. We devise an efficient method to obtain the first passage time exploiting a specific boundary condition. Applying this formalism to a model system, we show that the first passage time is determined by two competing effects: one is the difference in the free energy of the initial and final basins, and the other is the slow relaxation. As the temperature is reduced, the former accelerates the crystallization and the latter retards it. We show that these competing effects give rise to the typical nose-shaped form of the time-temperature transformation curve and that the retardation of the crystallization is related to the mean waiting time of the jump motion.
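
The first-passage idea can be illustrated on a toy one-dimensional landscape: a nearest-neighbour walk from a liquid-like state 0 (reflecting boundary) to a crystalline state N (absorbing boundary). The rightward bias `p_right` plays the role of the free-energy drop between basins; this is a didactic stand-in, not the paper's FEL calculation.

```python
def mean_first_passage_steps(p_right, N):
    """Mean first-passage time from state 0 (reflecting) to the absorbing
    state N, stepping right with probability p_right at interior states.
    Uses the difference recurrence for d_k = t_k - t_{k+1}:
        d_0 = 1,   d_k = (1 + (1 - p_right) * d_{k-1}) / p_right,
    with t_0 = sum of the d_k."""
    q = 1.0 - p_right
    d = 1.0
    total = d
    for _ in range(1, N):
        d = (1.0 + q * d) / p_right
        total += d
    return total
```

For the unbiased case this recovers the classic t_0 = N² result; biasing the walk toward the crystal (p_right > 1/2) shortens the passage, mimicking a larger free-energy difference between the initial and final basins.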

  10. Visualization of time-varying MRI data for MS lesion analysis

    NASA Astrophysics Data System (ADS)

    Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella

    2001-05-01

    Conventional methods to diagnose and follow treatment of Multiple Sclerosis require radiologists and technicians to compare current images with older images of a particular patient, on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color), or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.
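
The second approach (voxel-wise differences between time steps) reduces to subtracting co-registered volumes. A minimal sketch, using nested lists in place of a real image array:

```python
def change_map(vol_t0, vol_t1):
    """Voxel-wise differences between two co-registered volumes
    (nested lists indexed [slice][row][col]); positive values mark
    lesion growth, negative values shrinkage."""
    return [[[b - a for a, b in zip(r0, r1)]
             for r0, r1 in zip(s0, s1)]
            for s0, s1 in zip(vol_t0, vol_t1)]

def count_changes(diff, tol=0):
    """Count voxels that grew (> tol) and shrank (< -tol)."""
    grew = shrank = 0
    for sl in diff:
        for row in sl:
            for v in row:
                if v > tol:
                    grew += 1
                elif v < -tol:
                    shrank += 1
    return grew, shrank
```

In the paper the resulting difference volume is rendered statically with ray casting; here the change counts simply summarize it.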

  11. Research on the range side lobe suppression method for modulated stepped frequency radar signals

    NASA Astrophysics Data System (ADS)

    Liu, Yinkai; Shan, Tao; Feng, Yuan

    2018-05-01

    The magnitude of the time-domain range sidelobes of modulated stepped frequency radar affects the imaging quality of inverse synthetic aperture radar (ISAR). In this paper, the cause of high sidelobes in modulated stepped frequency radar imaging is first analyzed in a real environment. Then, chaos particle swarm optimization (CPSO) is used to select the amplitude and phase compensation factors according to the minimum sidelobe criterion. Finally, the compensated one-dimensional range images are obtained. Experimental results show that the amplitude-phase compensation method based on the CPSO algorithm can effectively reduce the sidelobe peak value of one-dimensional range images, outperforming common sidelobe suppression methods and avoiding the coverage of weak scattering points by strong scattering points due to high sidelobes.
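
A plain particle swarm optimizer conveys the kind of search used for the compensation factors. This sketch omits the chaotic initialization and perturbation that distinguish CPSO, and minimizes a generic objective standing in for the minimum-sidelobe criterion; all parameters are illustrative.

```python
import random

def pso_minimize(objective, dim, bounds, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm optimizer (no chaotic map), a stand-in for
    the CPSO search over amplitude/phase compensation factors."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # personal bests
    pval = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]        # global best
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia / cognitive / social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            f = objective(X[i])
            if f < pval[i]:
                pbest[i], pval[i] = X[i][:], f
                if f < gval:
                    gbest, gval = X[i][:], f
    return gbest, gval
```

In the paper the objective would evaluate the peak sidelobe of the compensated one-dimensional range image for a candidate set of amplitude/phase factors.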

  12. The modified distal horizontal metatarsal osteotomy for correction of bunionette deformity.

    PubMed

    Radl, Roman; Leithner, Andreas; Koehler, Wolfgang; Scheipl, Susanne; Windhager, Reinhard

    2005-06-01

    Bunionette is a common deformity for which a number of operative procedures have been described. The objective of this study was to evaluate the results of a modified distal horizontal metatarsal osteotomy in the correction of symptomatic bunionette. Metatarsal osteotomies were done in 21 feet in 14 patients (11 females, three males) with an average age of 44 (range 20 to 67) years at the time of operation. The average followup was 32 (range 12 to 52) months. The average Lesser Toe Metatarsophalangeal-Interphalangeal Score of the American Orthopaedic Foot and Ankle Society increased from 42 points (range 24 to 50) preoperatively to 87 points (range 60 to 100) at the last followup. The fifth metatarsophalangeal angle averaged 18 degrees (5 to 38 degrees) preoperatively and 5 degrees (-5 to 26 degrees) at final followup. The 4-5 intermetatarsal angle averaged 14 degrees (10 to 20 degrees) preoperatively and 9 degrees (5 to 12 degrees) at final followup. Hardware was removed from two feet and scheduled for a third foot because of symptomatic skin irritation. The modified distal horizontal metatarsal osteotomy is a stable and reliable method for correction of bunionette. Unsatisfactory results in our patients were related to prominent hardware.

  13. Economic Feasibility of Wireless Sensor Network-Based Service Provision in a Duopoly Setting with a Monopolist Operator.

    PubMed

    Sanchis-Cano, Angel; Romero, Julián; Sacoto-Cabrera, Erwin J; Guijarro, Luis

    2017-11-25

    We analyze the feasibility of providing Wireless Sensor Network-data-based services in an Internet of Things scenario from an economic point of view. The scenario has two competing service providers with their own private sensor networks, a network operator and final users. The scenario is analyzed as two games using game theory. In the first game, sensors decide to subscribe or not to the network operator to upload the collected sensing-data, based on a utility function related to the mean service time and the price charged by the operator. In the second game, users decide to subscribe or not to the sensor-data-based service of the service providers, based on a Logit discrete choice model related to the quality of the data collected and the subscription price. The sink and user subscription stages are analyzed using population games and discrete choice models, while the network operator and service provider pricing stages are analyzed using optimization and Nash equilibrium concepts, respectively. The model is shown to be feasible from an economic point of view for all the actors if there are enough interested final users, and it opens the possibility of developing more efficient models with different types of services.
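
The users' subscription stage relies on a Logit discrete choice model, whose choice probabilities have the standard multinomial-logit form. A minimal sketch; the utility specification below (data quality minus price-weighted cost, plus an outside no-subscription option) is an assumed illustration, not the paper's exact model:

```python
import math

def logit_shares(utilities):
    """Multinomial-logit choice probabilities:
    P(i) = exp(u_i) / sum_j exp(u_j)."""
    m = max(utilities)                 # subtract max for numerical stability
    e = [math.exp(u - m) for u in utilities]
    s = sum(e)
    return [x / s for x in e]

# Illustrative utilities: quality - price_sensitivity * price for two
# providers, and a zero-utility outside option (no subscription).
u = [2.0 - 0.5 * 1.5, 1.6 - 0.5 * 1.0, 0.0]
shares = logit_shares(u)
```

The providers' pricing stage would then optimize price against these market shares, with the Nash equilibrium found at mutually best responses.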

  14. Uavs to Assess the Evolution of Embryo Dunes

    NASA Astrophysics Data System (ADS)

    Taddia, Y.; Corbau, C.; Zambello, E.; Russo, V.; Simeoni, U.; Russo, P.; Pellegrinelli, A.

    2017-08-01

    The balance of a coastal environment is particularly complex: the continuous formation of dunes, their destruction as a result of violent storms, the growth of vegetation and the consequent growth of the dunes themselves are phenomena that significantly affect this balance. This work presents an approach to the long-term monitoring of a complex dune system by means of Unmanned Aerial Vehicles (UAVs). Four different surveys were carried out between November 2015 and November 2016. Aerial photogrammetric data were acquired during flights by a DJI Phantom 2 and a DJI Phantom 3 with cameras in a nadiral arrangement. GNSS receivers in Network Real Time Kinematic (NRTK) mode were used to frame models in the European Terrestrial Reference System. Processing of the captured images consisted of reconstructing a three-dimensional model using the principles of Structure from Motion (SfM). Particular care was necessary due to the vegetation: filtering of the dense cloud, mainly based on slope detection, was performed to minimize this issue. Final products of the SfM approach were represented by Digital Elevation Models (DEMs) of the sandy coastal environment. Each model was validated by comparison with specially surveyed check points. Other analyses were also performed, such as cross-sections and the computation of elevation variations over time. The use of digital photogrammetry by UAVs proved particularly reliable, offering fast image acquisition, reconstruction of high-density point clouds, high-resolution final elevation models, as well as flexibility, low cost and accuracy comparable with other available techniques.
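
Validation of each DEM against surveyed check points amounts to computing the bias and root-mean-square error of the elevation differences. A minimal sketch (the sampling of DEM heights at check-point locations is assumed to have been done already):

```python
def dem_errors(dem_heights, check_heights):
    """Mean error (bias) and RMSE between DEM elevations sampled at the
    check-point locations and the GNSS-surveyed check-point heights."""
    n = len(check_heights)
    diffs = [d - c for d, c in zip(dem_heights, check_heights)]
    bias = sum(diffs) / n
    rmse = (sum(e * e for e in diffs) / n) ** 0.5
    return bias, rmse
```

The same differencing, applied cell-by-cell between DEMs of successive surveys, yields the elevation-variation maps used to track dune growth over time.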

  15. Operando identification of the point of [Mn2]O4 spinel formation during γ-MnO2 discharge within batteries

    DOE PAGES

    Gallaway, Joshua W.; Hertzberg, Benjamin J.; Zhong, Zhong; ...

    2016-05-07

    The rechargeability of γ-MnO2 cathodes in alkaline batteries is limited by the formation of the [Mn2]O4 spinels ZnMn2O4 (hetaerolite) and Mn3O4 (hausmannite). However, the timing and formation mechanisms of these spinels are not well understood. Here we directly observe γ-MnO2 discharge at a range of reaction extents distributed across a thick porous electrode. Coupled with a battery model, this reveals that spinel formation occurs at a precise and predictable point in the reaction, regardless of reaction rate. Observation is accomplished by energy dispersive X-ray diffraction (EDXRD) using photons of high energy and high flux, which penetrate the cell and provide diffraction data as a function of location and time. After insertion of 0.79 protons per γ-MnO2, the α-MnOOH phase forms rapidly. α-MnOOH is the precursor to spinel, which closely follows. ZnMn2O4 and Mn3O4 form at the same discharge depth, by the same mechanism. The results show the final discharge product, Mn3O4 or Mn(OH)2, is not an intrinsic property of γ-MnO2. While several studies have identified Mn(OH)2 as the final γ-MnO2 discharge product, we observe direct conversion to Mn3O4 with no Mn(OH)2.

  16. Gender differences in the long-term associations between posttraumatic stress disorder and depression symptoms: findings from the Detroit Neighborhood Health Study.

    PubMed

    Horesh, Danny; Lowe, Sarah R; Galea, Sandro; Uddin, Monica; Koenen, Karestan C

    2015-01-01

    Posttraumatic stress disorder (PTSD) and depression are known to be highly comorbid. However, previous findings regarding the nature of this comorbidity have been inconclusive. This study prospectively examined whether PTSD and depression are distinct constructs in an epidemiologic sample, as well as assessed the directionality of the PTSD-depression association across time. Nine hundred and forty-two Detroit residents (males: n = 387; females: n = 555) were interviewed by phone at three time points, 1 year apart. At each time point, they were assessed for PTSD (using the PCL-C), depression (PHQ-9), trauma exposure, and stressful life events. First, a confirmatory factor analysis showed PTSD and depression to be two distinct factors at all three waves of assessments (W1, W2, and W3). Second, chi-square analysis detected significant differences between observed and expected rates of comorbidity at each time point, with significantly more no-disorder and comorbid cases, and significantly fewer PTSD only and depression only cases, than would be expected by chance alone. Finally, a cross-lagged analysis revealed a bidirectional association between PTSD and depression symptoms across time for the entire sample, as well as for women separately, wherein PTSD symptoms at an early wave predicted later depression symptoms, and vice versa. For men, however, only the paths from PTSD symptoms to subsequent depression symptoms were significant. Across time, PTSD and depression are distinct, but correlated, constructs among a highly-exposed epidemiologic sample. Women and men differ in both the risk of these conditions, and the nature of the long-term associations between them. © 2014 Wiley Periodicals, Inc.
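
The observed-versus-expected comparison at each time point is a standard chi-square test of independence on the 2×2 PTSD-by-depression table. A minimal sketch of the statistic (one degree of freedom; the cell counts in the test are illustrative, not the study's data):

```python
def chi_square_2x2(n_both, n_ptsd_only, n_dep_only, n_neither):
    """Chi-square statistic for a 2x2 PTSD-by-depression table,
    comparing observed cell counts with those expected under
    independence (expected = row total * column total / grand total)."""
    obs = [[n_both, n_ptsd_only],
           [n_dep_only, n_neither]]
    total = n_both + n_ptsd_only + n_dep_only + n_neither
    row = [sum(r) for r in obs]
    col = [obs[0][j] + obs[1][j] for j in range(2)]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            stat += (obs[i][j] - expected) ** 2 / expected
    return stat
```

An excess of comorbid and no-disorder cases over the independence expectation, as the study found, drives this statistic upward.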

  17. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of the LiveWire algorithm is proposed in this paper. Firstly, the Haar wavelet transform is carried out on the input image, and the boundary is extracted on the low-resolution image obtained by the wavelet transform. Secondly, the LiveWire shortest path is calculated based on a direction search over the control point set, utilizing the spatial relationship between the two control points the user provides in real time. Thirdly, the search order of the adjacent points of the starting node is set in advance, and an ordinary queue instead of a priority queue is taken as the storage pool of the points when optimizing their shortest path values, thus reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantage of the Haar wavelet transform, which offers fast image decomposition and reconstruction and is consistent with the texture features of the image, with the advantage of the optimal path search based on the control point set direction search, which reduces the time complexity of the original algorithm. The algorithm thus improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively, improving both the execution efficiency and the robustness of the original method.
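
Replacing the priority queue with ordinary FIFO queues works because LiveWire edge costs are small non-negative integers: Dijkstra's search then degenerates into a bucket-queue (Dial-style) scan with O(1) queue operations per point. A generic graph sketch of that idea, not the paper's exact implementation:

```python
def bucket_shortest_paths(n, adj, source, max_cost):
    """Dijkstra with an array of plain FIFO buckets instead of a priority
    queue (Dial's algorithm).  Valid when edge costs are small
    non-negative integers, as in LiveWire gradient-cost maps.
    adj: adjacency lists of (neighbour, cost); max_cost: largest edge cost."""
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    buckets = [[] for _ in range(max_cost * n + 1)]
    buckets[0].append(source)
    for d in range(len(buckets)):
        for u in buckets[d]:
            if dist[u] != d:
                continue                  # stale entry, already improved
            for v, c in adj[u]:
                if d + c < dist[v]:
                    dist[v] = d + c
                    buckets[d + c].append(v)
    return dist
```

Each node is appended to and popped from plain lists, so the per-point overhead is constant rather than the O(log n) of a heap.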

  18. Film characteristics pertinent to coherent optical data processing systems.

    PubMed

    Thomas, C E

    1972-08-01

    Photographic film is studied quantitatively as the input mechanism for coherent optical data recording and processing systems. The two important film characteristics are the amplitude transmission vs exposure (T(A) - E) curve and the film noise power spectral density. Both functions are measured as a function of the type of film, the type of developer, developer time and temperature, and the exposing and readout light wavelengths. A detailed analysis of a coherent optical spatial frequency analyzer reveals that the optimum dc bias point for 649-F film is an amplitude transmission of about 70%. This operating point yields minimum harmonic and intermodulation distortion, whereas the 50% amplitude transmission bias point recommended by holographers yields maximum diffraction efficiency. It is also shown that the effective ac gain or contrast of the film is nearly independent of the development conditions for a given film. Finally, the linear dynamic range of one particular coherent optical spatial frequency analyzer is shown to be about 40-50 dB.

  19. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    A CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded pinhole images, such as the Wiener, Lucy-Richardson and blind techniques, this approach is brand new. In this method, the coded aperture processing is, for the first time, independent of the point spread function of the image diagnostic system. In this way, the technical obstacle in traditional coded pinhole image processing caused by the uncertainty of the point spread function of the image diagnostic system was overcome. Based on this theoretical study, a simulation of penumbral imaging and image reconstruction was carried out, providing fairly good results. In a visible light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and the penumbral image was acquired with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction, providing a fairly good reconstruction result.

  20. "Photographing money" task pricing

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiang

    2018-05-01

    "Photographing money" [1]is a self-service model under the mobile Internet. The task pricing is reasonable, related to the success of the commodity inspection. First of all, we analyzed the position of the mission and the membership, and introduced the factor of membership density, considering the influence of the number of members around the mission on the pricing. Multivariate regression of task location and membership density using MATLAB to establish the mathematical model of task pricing. At the same time, we can see from the life experience that membership reputation and the intensity of the task will also affect the pricing, and the data of the task success point is more reliable. Therefore, the successful point of the task is selected, and its reputation, task density, membership density and Multiple regression of task positions, according to which a nhew task pricing program. Finally, an objective evaluation is given of the advantages and disadvantages of the established model and solution method, and the improved method is pointed out.

  1. Point-of-Care Test Equipment for Flexible Laboratory Automation.

    PubMed

    You, Won Suk; Park, Jae Jun; Jin, Sung Moon; Ryew, Sung Moo; Choi, Hyouk Ryeol

    2014-08-01

    Blood tests are some of the core clinical laboratory tests for diagnosing patients. In hospitals, an automated process called total laboratory automation, which relies on a set of sophisticated equipment, is normally adopted for blood tests. Noting that the total laboratory automation system typically requires a large footprint and significant amount of power, slim and easy-to-move blood test equipment is necessary for specific demands such as emergency departments or small-size local clinics. In this article, we present a point-of-care test system that can provide flexibility and portability with low cost. First, the system components, including a reagent tray, dispensing module, microfluidic disk rotor, and photometry scanner, and their functions are explained. Then, a scheduler algorithm to provide a point-of-care test platform with an efficient test schedule to reduce test time is introduced. Finally, the results of diagnostic tests are presented to evaluate the system. © 2014 Society for Laboratory Automation and Screening.
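
The abstract does not give the scheduler's details; as an assumption-laden sketch, a shortest-processing-time-first (SPT) rule on a single shared instrument minimizes mean completion time and illustrates how ordering alone can reduce overall test time. All names and durations below are hypothetical.

```python
def schedule_tests(tests):
    """Order assay jobs by shortest processing time first (SPT), which
    minimizes mean completion time on a single shared resource.
    tests: list of (name, duration) tuples.
    Returns the execution order and each job's completion time."""
    order = sorted(tests, key=lambda t: t[1])
    t, completions = 0.0, {}
    for name, duration in order:
        t += duration
        completions[name] = t
    return order, completions
```

A real point-of-care scheduler would also interleave the dispensing, spinning and scanning stages of different samples, which SPT alone does not capture.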

  2. Role of Möbius constants and scattering functions in Cachazo-He-Yuan scalar amplitudes

    NASA Astrophysics Data System (ADS)

    Lam, C. S.; Yao, York-Peng

    2016-05-01

    The integration over the Möbius variables leading to the Cachazo-He-Yuan double-color n-point massless scalar amplitude is carried out one integral at a time. Möbius invariance dictates that the final amplitude be independent of the three Möbius constants σr, σs, σt, but their choice affects the integrations and the intermediate results. The effect of the Möbius constants, which will be held finite but otherwise arbitrary, the two sets of colors, and the scattering functions on each integration is investigated. A general systematic way to carry out the n-3 integrations is explained, each exposing one of the n-3 propagators of a single Feynman diagram. Two detailed examples are shown to illustrate the procedure, one a five-point amplitude, and the other a nine-point amplitude. Our procedure does not generate intermediate spurious poles, in contrast to what is common when choosing the Möbius constants at 0, 1, and ∞.

  3. Control advances for achieving the ITER baseline scenario on KSTAR

    NASA Astrophysics Data System (ADS)

    Eidietis, N. W.; Barr, J.; Hahn, S. H.; Humphreys, D. A.; in, Y. K.; Jeon, Y. M.; Lanctot, M. J.; Mueller, D.; Walker, M. L.

    2017-10-01

    Control methodologies developed to enable successful production of ITER baseline scenario (IBS) plasmas on the superconducting KSTAR tokamak are presented: decoupled vertical control (DVC), real-time feedforward (rtFF) calculation, and multi-input multi-output (MIMO) X-point control. DVC provides fast vertical control with the in-vessel control coils (IVCC) while sharing slow vertical control with the poloidal field (PF) coils to avoid IVCC saturation. rtFF compensates for inaccuracies in offline PF current feedforward programming, allowing reduction or removal of integral gain (and its detrimental phase lag) from the shape controller. Finally, MIMO X-point control provides accurate positioning of the X-point despite low controllability due to the large distance between coils and plasma. Combined, these techniques enabled achievement of IBS parameters (q95 = 3.2, βN = 2) with a scaled ITER shape on KSTAR. n =2 RMP response displays a strong dependence upon this shaping. Work supported by the US DOE under Award DE-SC0010685 and the KSTAR project.

  4. Effects of extensor synovectomy and excision of the distal ulna in rheumatoid arthritis on long-term function.

    PubMed

    Jain, Abhilash; Ball, Cathy; Freidin, Andrew J; Nanchahal, Jagdeep

    2010-09-01

    Objective outcomes data after excision of the distal ulna in rheumatoid arthritis are lacking. The aim of this study was to evaluate the functional results of this surgery in the long term. We prospectively collected data on range of motion (22 wrists), visual analog pain scores (14 wrists), and grip strength measured using a Jamar dynamometer (20 hands) in a group of 23 patients (26 wrists) preoperatively and at 3 months, 12 months, and a minimum of 5 years postoperatively (range, 5.3-10.4 y). The Jebsen-Taylor hand function test was administered to 9 patients at the same time points. A subgroup of patients also underwent extensor carpi radialis longus to extensor carpi ulnaris tendon transfer (11 wrists). At one year, there were improvements in wrist pronation and supination, which were maintained at final follow-up. Active radial deviation decreased significantly at 3 months (p = .01) and one year (p = .02); this remained reduced at final follow-up (not significant). Wrist extension and active ulnar deviation showed slight improvements by one year, but reduced to levels below that measured preoperatively by final follow-up. Wrist flexion was significantly reduced at all time points postoperatively. Grip strength showed improvement from 10.0 kg (standard deviation [SD] 4.1 kg) preoperatively to 12.5 kg (SD 4.6 kg) 1 year after surgery and returned to preoperative levels (9.5 kg, SD 5.6 kg) by final follow-up. Wrist pain was significantly reduced from a mean score of 5 (SD 4) preoperatively to 2 (SD 2) postoperatively (p = .01). The Jebsen-Taylor hand function test showed improvements in writing and card turning. In the long term, excision of the distal ulna in rheumatoid patients results in an improvement in some aspects of hand function. There is a significant (p = .01) reduction in wrist pain but a reduction of wrist flexion. Therapeutic IV. Copyright 2010 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  5. The Role of Astro-Geodetic in Precise Guidance of Long Tunnels

    NASA Astrophysics Data System (ADS)

    Mirghasempour, M.; Jafari, A. Y.

    2015-12-01

    One of the prime aspects of surveying projects is the guidance of the headings of a long tunnel from different directions so that all paths finally meet at a specified place. This kind of underground surveying, because of its particular conditions, differs from surface surveying in several respects, including improper geometry of underground traverses; low-precision measurement of direction and length due to conditions such as refraction; and differences in gravity (both in value and in direction) between an underground point and its corresponding point on the ground. To solve these problems, astro-geodesy, a branch of geodesy, can help surveying engineers. In this article, the role of astronomy is defined in two subjects: 1- Azimuth determination of directions from the entrance and exit nets of the tunnel, and calibration of gyro-theodolites for use in underground traverses: by astronomical methods, the azimuth of a direction can be determined with an accuracy of 0.5 arcsecond, whereas no gyroscope today can measure azimuth to this accuracy; for instance, the accuracy of the most precise gyroscope (Gyromat 5000) is 1.2 cm over a distance of one kilometre (2.4 arcseconds). Furthermore, the calibration methods mentioned in this article have significant effects on the underground traverse. 2- Height transfer between the entrance and exit points, which is problematic and time-consuming; for example, in a 3 km long tunnel (on the Arak-Khoram Abad freeway), relating the entrance point to the exit point required about 90 km of levelling. Another example of this laborious and time-consuming levelling is the Kerman tunnel: the tunnel is 36 km long, but transferring the entrance point height to the exit point needed 150 km of levelling. According to this paper, the solution to this difficulty is the application of astro-geodesy and the determination of vertical deflections with the digital zenith camera system TZK2-D.
These two elements make it possible to define the geoid profile along the tunnel azimuth between the entrance and exit; by doing so, surveying engineers are able to transfer the entrance point height to the exit point in the easiest way.

  6. 12 CFR 1815.105 - Major decision points.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... ENVIRONMENTAL QUALITY § 1815.105 Major decision points. (a) The possible environmental effects of an Application... decisionmaking process: (1) Preliminary approval stage, at which point applications are selected for funding; and (2) Final approval and funding stage. (b) Environmental review shall be integrated into the...

  7. 12 CFR 1815.105 - Major decision points.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... ENVIRONMENTAL QUALITY § 1815.105 Major decision points. (a) The possible environmental effects of an Application... decisionmaking process: (1) Preliminary approval stage, at which point applications are selected for funding; and (2) Final approval and funding stage. (b) Environmental review shall be integrated into the...

  8. 12 CFR 1815.105 - Major decision points.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... ENVIRONMENTAL QUALITY § 1815.105 Major decision points. (a) The possible environmental effects of an Application... decisionmaking process: (1) Preliminary approval stage, at which point applications are selected for funding; and (2) Final approval and funding stage. (b) Environmental review shall be integrated into the...

  9. Time-varying SMART design and data analysis methods for evaluating adaptive intervention effects.

    PubMed

    Dai, Tianjiao; Shete, Sanjay

    2016-08-30

    In a standard two-stage SMART design, the intermediate response to the first-stage intervention is measured at a fixed time point for all participants. Subsequently, responders and non-responders are re-randomized and the final outcome of interest is measured at the end of the study. To reduce the side effects and costs associated with first-stage interventions in a SMART design, we proposed a novel time-varying SMART design in which individuals are re-randomized to the second-stage interventions as soon as a pre-fixed intermediate response is observed. With this strategy, the duration of the first-stage intervention will vary. We developed a time-varying mixed effects model and a joint model that allows for modeling the outcomes of interest (intermediate and final) and the random durations of the first-stage interventions simultaneously. The joint model borrows strength from the survival sub-model in which the duration of the first-stage intervention (i.e., time to response to the first-stage intervention) is modeled. We performed a simulation study to evaluate the statistical properties of these models. Our simulation results showed that the two modeling approaches were both able to provide good estimations of the means of the final outcomes of all the embedded interventions in a SMART. However, the joint modeling approach was more accurate for estimating the coefficients of first-stage interventions and time of the intervention. We conclude that the joint modeling approach provides more accurate parameter estimates and a higher estimated coverage probability than the single time-varying mixed effects model, and we recommend the joint model for analyzing data generated from time-varying SMART designs. In addition, we showed that the proposed time-varying SMART design is cost-efficient and equally effective in selecting the optimal embedded adaptive intervention as the standard SMART design.

  10. Effectiveness of a step-by-step oral recount before a practical simulation of fracture fixation.

    PubMed

    Abagge, Marcelo; Uliana, Christiano Saliba; Fischer, Sergei Taggesell; Kojima, Kodi Edson

    2017-10-01

    To evaluate the effectiveness of a step-by-step oral recount by residents before the final execution of a practical exercise simulating surgical fixation of a radial diaphyseal fracture. The study included 10 residents of orthopaedics and traumatology (four second-year and six first-year residents) divided into two groups of five residents each. All participants initially gathered in a room in which a video was presented demonstrating the practical exercise to be performed. One group (Group A) was referred directly to the practical exercise room. The other group (Group B) attended an extra session before the practical exercise, in which they were invited by instructors to recount all the steps that they would perform during the practical exercise. During this session, the instructors corrected the residents if any errors in the step-by-step recount were identified, and clarified their questions. After this session, both Groups A and B gathered in a room in which they proceeded to the practical exercise, while being video recorded and evaluated using a 20-point checklist. Group A achieved 57% accuracy, with results ranging from 7 to 15 points out of a possible 20. Group B achieved 89% accuracy, with results ranging from 15 to 20 points out of 20. An oral step-by-step recount by the residents before the final execution of a practical simulation exercise of surgical fixation of a diaphyseal radial fracture improved the technique and reduced the execution time of the exercise. © 2017 Elsevier Ltd. All rights reserved.

  11. Health-related quality of life in patients with metastatic renal cell carcinoma treated with sunitinib vs interferon-alpha in a phase III trial: final results and geographical analysis.

    PubMed

    Cella, D; Michaelson, M D; Bushmakin, A G; Cappelleri, J C; Charbonneau, C; Kim, S T; Li, J Z; Motzer, R J

    2010-02-16

    In a randomised phase III trial, sunitinib significantly improved efficacy over interferon-alpha (IFN-alpha) as first-line therapy for metastatic renal cell carcinoma (mRCC). We report the final health-related quality of life (HRQoL) results. Patients (n=750) received oral sunitinib 50 mg per day in 6-week cycles (4 weeks on, 2 weeks off treatment) or subcutaneous IFN-alpha 9 million units three times weekly. Health-related quality of life was assessed with nine end points: the Functional Assessment of Cancer Therapy-General and its four subscales, FACT-Kidney Symptom Index (FKSI-15) and its Disease-Related Symptoms subscale (FKSI-DRS), and EQ-5D questionnaire's EQ-5D Index and visual analogue scale. Data were analysed using a mixed-effects model (MM), supplemented with pattern-mixture models (PMM), for the total sample and the US and European Union (EU) subgroups. Patients receiving sunitinib reported better scores in the primary end point, FKSI-DRS, across all patient populations (P<0.05), and in nine, five, and six end points in the total sample, US, and EU groups, respectively (P<0.05). There were no significant differences between the US and EU groups for all end points with the exception of the FKSI item 'I am bothered by side effects of treatment' (P=0.02). In general, MM and PMM results were similar. Patients treated with sunitinib in this study had improved HRQoL, compared with patients treated with IFN-alpha. Treatment differences within the US cohort did not differ from those within the EU cohort.

  12. Prediction of final error level in learning and repetitive control

    NASA Astrophysics Data System (ADS)

    Levoci, Peter A.

    Repetitive control (RC) is a field that creates controllers to eliminate the effects of periodic disturbances on a feedback control system. The methods have applications in spacecraft problems, to isolate fine-pointing equipment from periodic vibration disturbances such as slight imbalances in momentum wheels or cryogenic pumps. A closely related field of control design is iterative learning control (ILC), which aims to eliminate tracking error in a task that repeats, each time starting from the same initial condition. Experiments done on a robot at NASA Langley Research Center showed that the final error levels produced by different candidate repetitive and learning controllers can be very different, even when each controller is analytically proven to converge to zero error in the deterministic case. Real-world plant and measurement noise and quantization noise (from analog-to-digital and digital-to-analog converters) are acted on by these control methods as if they were error sources that will repeat and should be cancelled, which implies that the algorithms amplify such errors. Methods are developed that predict the final error levels of general first-order ILC, of higher-order ILC including current cycle learning, and of general RC, in the presence of noise, using frequency response methods. The method involves much less computation than the corresponding time domain approach that involves large matrices. The time domain approach was previously developed for ILC and handles a certain class of ILC methods. Here methods are created to include zero-phase filtering, which is very important in creating practical designs. Also, time domain methods are developed for higher-order ILC and for repetitive control. Since RC and ILC must be implemented digitally, all of these methods predict final error levels at the sample times.
It is shown here that RC can easily converge to small error levels between sample times, but that ILC in most applications will have large and diverging intersample error if in fact zero error is reached at the sample times. This is independent of the ILC law used, and is purely a property of the physical system. Methods are developed to address this issue.
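The noise-amplification effect can be made concrete with a toy scalar first-order ILC simulation (a drastic simplification of the frequency-domain prediction methods above; the plant gain p, learning gain l, and noise level sigma are hypothetical). For this scalar case the stationary error variance has the closed form 2·sigma²/(2 − a), where a = p·l is the loop gain, so the Monte-Carlo estimate can be checked against it.

```python
import random

def ilc_final_error_variance(p=1.0, l=0.8, sigma=0.1, n_iter=200,
                             n_trials=2000, seed=0):
    """Monte-Carlo estimate of the steady-state error variance of the scalar
    first-order ILC law u_{k+1} = u_k + l*e_k with measurement noise; the
    learning update treats the noise as a repeating error and amplifies it."""
    rng = random.Random(seed)
    finals, r = [], 1.0
    for _ in range(n_trials):
        u = 0.0
        for _ in range(n_iter):
            v = rng.gauss(0.0, sigma)
            e = r - p * u - v      # measured tracking error
            u = u + l * e          # first-order ILC update
        finals.append(e)
    mean = sum(finals) / len(finals)
    return sum((x - mean) ** 2 for x in finals) / (len(finals) - 1)

a, sigma = 0.8, 0.1
predicted = 2 * sigma ** 2 / (2 - a)   # closed-form final error level
simulated = ilc_final_error_variance()
```

The deterministic error converges to zero, but the simulated variance settles at the predicted nonzero floor, which is the "final error level" the abstract refers to.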

  13. Commercially sterilized mussel meats (Mytilus chilensis): a study on process yield.

    PubMed

    Almonacid, S; Bustamante, J; Simpson, R; Urtubia, A; Pinto, M; Teixeira, A

    2012-06-01

    The processing steps most responsible for yield loss in the manufacture of canned mussel meats are the thermal treatments of precooking to remove meats from shells, and thermal processing (retorting) to render the final canned product commercially sterile for long-term shelf stability. The objective of this study was to investigate and evaluate the impact of different combinations of process variables on the ultimate drained weight in the final mussel product (Mytilus chilensis), while verifying that any differences found were statistically and economically significant. The process variables selected for this study were precooking time, brine salt concentration, and retort temperature. Results indicated 2 combinations of process variables producing the widest difference in final drained weight, designated best combination and worst combination, with 35% and 29% yield, respectively. Significance of this difference was determined by employing a Bootstrap methodology, which assumes an empirical distribution of statistical error. A difference of nearly 6 percentage points in total yield was found. This represents a 20% increase in annual sales from the same quantity of raw material. In addition to the increase in yield, the conditions for the best process included a retort process time 65% shorter than that for the worst process. This difference in yield could have a significant economic impact on the mussel canning industry. © 2012 Institute of Food Technologists®
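The Bootstrap significance check can be sketched with a percentile bootstrap on the difference in mean yield (the per-batch yield values below are hypothetical stand-ins for the study's drained-weight data):

```python
import random

def bootstrap_diff_ci(a, b, n_boot=5000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for the difference in mean
    yield between two process conditions; resamples each group with
    replacement and takes the empirical alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        ra = [rng.choice(a) for _ in a]
        rb = [rng.choice(b) for _ in b]
        diffs.append(sum(ra) / len(ra) - sum(rb) / len(rb))
    diffs.sort()
    return diffs[int(alpha / 2 * n_boot)], diffs[int((1 - alpha / 2) * n_boot) - 1]

# hypothetical per-batch drained-weight yields (%) for the two conditions
best = [34.1, 35.6, 35.2, 34.8, 35.9, 34.5, 35.3, 34.9]
worst = [28.7, 29.4, 28.9, 29.8, 29.1, 28.5, 29.6, 29.0]
lo, hi = bootstrap_diff_ci(best, worst)
```

If the interval excludes zero, the roughly 6-point yield difference is significant under the bootstrap's empirical error distribution.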

  14. Development of a Novel System to Measure a Clearance of a Passenger Platform

    NASA Astrophysics Data System (ADS)

    Shimizu, M.; Oizumi, J.; Matsuoka, R.; Takeda, H.; Okukura, H.; Ooya, A.; Koike, A.

    2016-06-01

    Clearances of a passenger platform at a railway station should be appropriately maintained for the safety of both trains and passengers. In most Japanese railways, clearances between a platform and a train car are measured precisely once or twice a year. Because current measurement systems operate on a track, the closure of the track is unavoidable. Since the procedure of closing a track is time-consuming and bothersome, we decided to develop a new system to measure clearances without the closure of a track. The new system is required to work on a platform, and the required measurement accuracy is less than several millimetres. We have adopted a 3D laser scanner and stop-and-go operation for the new system. The current systems on a track measure clearances continuously at walking speed, while our system on a platform measures clearances at approximately ten-metre intervals. The scanner, controlled by a PC, acquires a set of point data at each measuring station. Edge points of the platform and top and side points of the two rails are detected from the acquired point data. Finally, clearances of the platform are calculated by using the detected feature points of the platform and the rails. The results of an experiment using a prototype of our system show that the measurement accuracy of our system would be satisfactory, but our system would take more time than the current systems. Since our system requires no closure of a track, we conclude that our system would be convenient and effective.

  15. Estimating snow depth in real time using unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Niedzielski, Tomasz; Mizinski, Bartlomiej; Witek, Matylda; Spallek, Waldemar; Szymanowski, Mariusz

    2016-04-01

    Within the framework of project no. LIDER/012/223/L-5/13/NCBR/2014, financed by the National Centre for Research and Development of Poland, we elaborated a fully automated approach for estimating snow depth in real time in the field. The procedure uses oblique aerial photographs taken by an unmanned aerial vehicle (UAV). The geotagged images of snow-covered terrain are processed by the Structure-from-Motion (SfM) method, which is used to produce a non-georeferenced dense point cloud. The workflow includes the enhanced RunSFM procedure (keypoint detection using the scale-invariant feature transform known as SIFT, image matching, bundling using the Bundler, executing the multi-view stereo PMVS and CMVS2 software), which is preceded by multicore image resizing. The dense point cloud is subsequently automatically georeferenced using the GRASS software, with the ground control points borrowed from the positions of image centres acquired from the UAV-mounted GPS receiver. Finally, the digital surface model (DSM) is produced, which, to improve the accuracy of georeferencing, is shifted using a vector obtained through precise geodetic GPS observation of a single ground control point (GCP) placed on the Laboratory for Unmanned Observations of Earth (a mobile lab established at the University of Wroclaw, Poland). The DSM includes snow cover, and its difference from the corresponding snow-free DSM or digital terrain model (DTM), following the concept of the digital elevation model of differences (DOD), produces a map of snow depth. Since the final result depends on the snow-free model, two experiments are carried out. Firstly, we show the performance of the entire procedure when the snow-free model has a very high resolution (3 cm/px) and is produced using the UAV-taken photographs and the precise GCPs measured by the geodetic GPS receiver.
Secondly, we perform a similar exercise but the 1-metre resolution light detection and ranging (LIDAR) DSM or DTM serves as the snow-free model. Thus, the main objective of the paper is to present the performance of the new procedure for estimating snow depth and to compare the two experiments.
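The DoD step, differencing the snow-covered DSM against the snow-free model after the GCP-derived vertical shift, can be sketched as follows (the 3x3 grids and values are illustrative only):

```python
import numpy as np

def snow_depth_dod(dsm_snow, dsm_bare, vertical_shift=0.0):
    """DEM of differences (DoD): subtract the snow-free surface model from
    the (vertically corrected) snow-covered DSM to obtain a snow-depth map;
    small negative values caused by noise are clipped to zero."""
    depth = (dsm_snow + vertical_shift) - dsm_bare
    return np.clip(depth, 0.0, None)

# toy 3x3 surface models in metres
bare = np.array([[100.0, 100.2, 100.4],
                 [100.1, 100.3, 100.5],
                 [100.2, 100.4, 100.6]])
snow = bare + np.array([[0.30, 0.25, 0.00],
                        [0.40, 0.35, -0.02],   # -0.02 mimics measurement noise
                        [0.45, 0.50, 0.10]])
depth = snow_depth_dod(snow, bare)
```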

  16. The electrobrachistochrone

    NASA Astrophysics Data System (ADS)

    Lipscombe, Trevor C.; Mungan, Carl E.

    2018-05-01

    The brachistochrone problem consists of finding the track of shortest travel time between given initial and final points for a particle sliding frictionlessly along it under the influence of a given external force field. Solvable variations of the standard example of a uniform gravitational field would be suitable for homework and computer projects by undergraduate physics students studying intermediate mechanics and electromagnetism. An electrobrachistochrone problem is here proposed, in which a charged particle moves along a frictionless track under the influence of its electrostatic force of attraction to an image charge in a grounded conducting plane below the track. The path of least time is found to be a foreshortened cycloid and its properties are investigated analytically and graphically.
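The time comparison at the heart of any brachistochrone problem can be checked numerically for the standard uniform-gravity case (the electrostatic image-charge variant changes the force law, but the comparison is analogous): along a cycloid x = R(t - sin t), y = R(1 - cos t) descended from rest, ds = 2R sin(t/2) dt and v = 2 sqrt(gR) sin(t/2), so the travel time is T = theta1 * sqrt(R/g), which beats the straight chord.

```python
import math

def cycloid_time(theta1, R, g=9.81):
    """Descent time from rest along the cycloid x=R(t-sin t), y=R(1-cos t);
    ds/v reduces to sqrt(R/g) dt, so T = theta1 * sqrt(R/g)."""
    return theta1 * math.sqrt(R / g)

def chord_time(x1, y1, g=9.81):
    """Descent time from rest along the straight chord from (0,0) to (x1,y1),
    with uniform acceleration g*sin(incline) along the chord."""
    L = math.hypot(x1, y1)
    return math.sqrt(2 * L * L / (g * y1))

R, theta1 = 1.0, math.pi
x1, y1 = R * (theta1 - math.sin(theta1)), R * (1 - math.cos(theta1))
t_cycloid, t_chord = cycloid_time(theta1, R), chord_time(x1, y1)
```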

  17. Optimal symmetric flight with an intermediate vehicle model

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.

    1983-01-01

    Optimal flight in the vertical plane with a vehicle model intermediate in complexity between the point-mass and energy models is studied. Flight-path angle takes on the role of a control variable. Range-open problems feature subarcs of vertical flight and singular subarcs. The class of altitude-speed-range-time optimization problems with fuel expenditure unspecified is investigated and some interesting phenomena uncovered. The maximum-lift-to-drag glide appears as part of the family, final-time-open, with appropriate initial and terminal transient exceeding level-flight drag, some members exhibiting oscillations. Oscillatory paths generally fail the Jacobi test for durations exceeding a period and furnish a minimum only for short-duration problems.

  18. Periodicity and stability for variable-time impulsive neural networks.

    PubMed

    Li, Hongfei; Li, Chuandong; Huang, Tingwen

    2017-10-01

    The paper considers a general neural network model with variable-time impulses. It is shown that each solution of the system intersects every discontinuity surface exactly once, under several newly proposed assumptions. Moreover, based on the comparison principle, this paper shows that neural networks with variable-time impulses can be reduced to the corresponding neural networks with fixed-time impulses under well-selected conditions. Meanwhile, the fixed-time impulsive systems can be regarded as the comparison system of the variable-time impulsive neural networks. Furthermore, a series of sufficient criteria are derived to ensure the existence and global exponential stability of periodic solutions of variable-time impulsive neural networks, and to illustrate that variable-time impulsive neural networks and the fixed-time ones share the same stability properties. The new criteria are established by applying Schaefer's fixed point theorem combined with the use of inequality techniques. Finally, a numerical example is presented to show the effectiveness of the proposed results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Prosociality: the contribution of traits, values, and self-efficacy beliefs.

    PubMed

    Caprara, Gian Vittorio; Alessandri, Guido; Eisenberg, Nancy

    2012-06-01

    The present study examined how agreeableness, self-transcendence values, and empathic self-efficacy beliefs predict individuals' tendencies to engage in prosocial behavior (i.e., prosociality) across time. Participants were 340 young adults, 190 women and 150 men, age approximately 21 years at Time 1 and 25 years at Time 2. Measures of agreeableness, self-transcendence, empathic self-efficacy beliefs, and prosociality were collected at 2 time points. The findings corroborated the posited paths of relations, with agreeableness directly predicting self-transcendence and indirectly predicting empathic self-efficacy beliefs and prosociality. Self-transcendence mediated the relation between agreeableness and empathic self-efficacy beliefs. Empathic self-efficacy beliefs mediated the relation of agreeableness and self-transcendence to prosociality. Finally, earlier prosociality predicted agreeableness and empathic self-efficacy beliefs assessed at Time 2. The posited conceptual model accounted for a significant portion of variance in prosociality and provides guidance to interventions aimed at promoting prosociality. 2012 APA, all rights reserved

  20. Spectral reconstruction analysis for enhancing signal-to-noise in time-resolved spectroscopies

    NASA Astrophysics Data System (ADS)

    Wilhelm, Michael J.; Smith, Jonathan M.; Dai, Hai-Lung

    2015-09-01

    We demonstrate a new spectral analysis for the enhancement of the signal-to-noise ratio (SNR) in time-resolved spectroscopies. Unlike the simple linear average which produces a single representative spectrum with enhanced SNR, this Spectral Reconstruction analysis (SRa) improves the SNR (by a factor of ca. 0.6√n) for all n experimentally recorded time-resolved spectra. SRa operates by eliminating noise in the temporal domain, thereby attenuating noise in the spectral domain, as follows: Temporal profiles at each measured frequency are fit to a generic mathematical function that best represents the temporal evolution; spectra at each time are then reconstructed with data points from the fitted profiles. The SRa method is validated with simulated control spectral data sets. Finally, we apply SRa to two distinct experimentally measured sets of time-resolved IR emission spectra: (1) UV photolysis of carbonyl cyanide and (2) UV photolysis of vinyl cyanide.
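A minimal sketch of the SRa idea, assuming an exponential-decay temporal model A*exp(-t/tau) fitted by a grid search over tau with a least-squares amplitude (the paper's "generic mathematical function" is chosen to suit the dataset; the data here are synthetic):

```python
import numpy as np

def sra_denoise(spectra, t, taus=np.linspace(0.5, 20.0, 200)):
    """Spectral Reconstruction analysis sketch: at each frequency, fit the
    temporal profile to A*exp(-t/tau) (grid search over tau, least-squares
    amplitude A), then rebuild every spectrum from the fitted profiles."""
    n_t, n_f = spectra.shape
    recon = np.empty_like(spectra)
    for j in range(n_f):
        y = spectra[:, j]
        best_resid, best_fit = np.inf, y
        for tau in taus:
            basis = np.exp(-t / tau)
            A = float(np.dot(basis, y) / np.dot(basis, basis))
            resid = float(np.sum((y - A * basis) ** 2))
            if resid < best_resid:
                best_resid, best_fit = resid, A * basis
        recon[:, j] = best_fit
    return recon

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
true = np.outer(np.exp(-t / 3.0), [1.0, 0.5, 2.0])  # 50 times x 3 frequencies
noisy = true + rng.normal(0.0, 0.1, true.shape)
clean = sra_denoise(noisy, t)
```

Because the noise is removed along the temporal axis, every one of the n reconstructed spectra benefits, not just a single average.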

  1. Autocorrelation and cross-correlation in time series of homicide and attempted homicide

    NASA Astrophysics Data System (ADS)

    Machado Filho, A.; da Silva, M. F.; Zebende, G. F.

    2014-04-01

    We propose in this paper to establish the relationship between homicides and attempted homicides by a non-stationary time-series analysis. This analysis will be carried out by Detrended Fluctuation Analysis (DFA), Detrended Cross-Correlation Analysis (DCCA), and the DCCA cross-correlation coefficient, ρ(n). Through this analysis we can identify a positive cross-correlation between homicides and attempted homicides. At the same time, looked at from the point of view of autocorrelation (DFA), this analysis can be more informative depending on the time scale. For short scales (days), we cannot identify autocorrelations; on the scale of weeks, DFA presents anti-persistent behavior; and for long time scales (n>90 days), DFA presents persistent behavior. Finally, the application of this new type of statistical analysis proved to be efficient and, in this sense, this paper can contribute to a more accurate descriptive statistics of crime.
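The coefficient ρ(n) is the detrended covariance F²_DCCA normalized by the two DFA fluctuation functions. A compact sketch on synthetic series that share a common component (the series, box size, and noise levels are illustrative, not the homicide data):

```python
import numpy as np

def _profile(x):
    """Integrated (cumulative-sum) profile of a mean-centred series."""
    return np.cumsum(x - np.mean(x))

def _detrended_cov(a, b, n):
    """Average covariance of linearly detrended profiles over boxes of n+1 points."""
    t = np.arange(n + 1)
    covs = []
    for s in range(0, len(a) - n, n):
        seg_a, seg_b = a[s:s + n + 1], b[s:s + n + 1]
        res_a = seg_a - np.polyval(np.polyfit(t, seg_a, 1), t)
        res_b = seg_b - np.polyval(np.polyfit(t, seg_b, 1), t)
        covs.append(np.mean(res_a * res_b))
    return float(np.mean(covs))

def rho_dcca(x, y, n):
    """DCCA cross-correlation coefficient: F2_DCCA / (F_DFA(x) * F_DFA(y))."""
    a, b = _profile(x), _profile(y)
    return _detrended_cov(a, b, n) / np.sqrt(
        _detrended_cov(a, a, n) * _detrended_cov(b, b, n))

rng = np.random.default_rng(2)
common = rng.normal(size=2000)           # shared component in both series
x = common + 0.5 * rng.normal(size=2000)
y = common + 0.5 * rng.normal(size=2000)
r = float(rho_dcca(x, y, n=16))
```

By the Cauchy-Schwarz inequality across boxes, ρ(n) is bounded in [-1, 1], and a clearly positive value flags cross-correlation at that scale.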

  2. Retrieving fear memories, as time goes by…

    PubMed Central

    Do Monte, Fabricio H.; Quirk, Gregory J.; Li, Bo; Penzo, Mario A.

    2016-01-01

    Fear conditioning research has led to a comprehensive picture of the neuronal circuit underlying the formation of fear memories. In contrast, knowledge about the retrieval of fear memories is much more limited. This disparity may stem from the fact that fear memories are not rigid, but reorganize over time. To bring clarity and raise awareness on the time-dependent dynamics of retrieval circuits, we review current evidence on the neuronal circuitry participating in fear memory retrieval at both early and late time points after conditioning. We focus on the temporal recruitment of the paraventricular nucleus of the thalamus, and its BDNFergic efferents to the central nucleus of the amygdala, for the retrieval and maintenance of fear memories. Finally, we speculate as to why retrieval circuits change across time, and the functional benefits of recruiting structures such as the paraventricular nucleus into the retrieval circuit. PMID:27217148

  3. Parry-Romberg reconstruction: optimal timing for hard and soft tissue procedures.

    PubMed

    Slack, Ginger C; Tabit, Christina J; Allam, Karam A; Kawamoto, Henry K; Bradley, James P

    2012-11-01

    For the treatment of Parry-Romberg syndrome or progressive hemifacial atrophy, we studied 3 controversial issues: (1) optimal timing, (2) need for skeletal reconstruction, and (3) need for soft tissue (medial canthus/lacrimal duct) reconstruction. Patients with Parry-Romberg syndrome (>5 y follow-up) were divided into 2 groups: (1) younger than 14 years and (2) 14 years or older (n = 43). Sex, age, severity of deformity, number of procedures, operative times, and augmentation fat volumes were recorded. Physician and patient satisfaction surveys (5-point scale) were obtained, preoperative and postoperative three-dimensional computed tomographic scans were reviewed, and a digital three-dimensional photogrammetry system was used to determine volume retention. Our results indicate that the younger patient group required more procedures compared with the older patient group (4.3 versus 2.8); however, the younger group had higher patient/family satisfaction scores (3.8 versus 3.0). Skeletal and soft tissue reconstruction resulted in improved symmetry score (60% preoperatively to 93% final) and satisfaction scores (3.4 preoperatively to 3.8 final). Patients with Parry-Romberg syndrome required multiple corrective surgeries but showed improvements even when beginning before puberty. Soft and hard tissue reconstruction was beneficial.

  4. A new template matching method based on contour information

    NASA Astrophysics Data System (ADS)

    Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong

    2014-11-01

    Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be used in many real-time applications. The closed contour matching method is a popular kind of template matching method. This paper presents a new closed contour template matching method which is suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve the matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the process of model construction, triples and a distance image are obtained from the template image. A certain number of triples, each composed of three points, are created from the contour information that is extracted from the template image. The rule for selecting the three points is that the template contour is divided equally into three parts by these points. The distance image is obtained by distance transform. Each point on the distance image represents the nearest distance between the current point and the points on the template contour. During the process of matching, triples of the searching image are created with the same rule as the triples of the model. Through the similarity between triangles, which is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model are found. Then we can obtain the initial RST (rotation, translation and scaling) parameters mapping the searching contour to the template contour. In order to speed up the searching process, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected into the distance image, and the mean distance can be computed rapidly by simple operations of addition and multiplication. In the fine searching process, the initial RST parameters are discretized to obtain the final accurate pose of the object. Experimental results show that the proposed method is reasonable and efficient, and can be used in many real-time applications.
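The distance-image verification step can be sketched as follows; for brevity this uses a brute-force distance transform and only the four corner points of a square contour, both simplifications of the method described above:

```python
import numpy as np

def distance_image(shape, contour_pts):
    """Brute-force distance transform: each pixel stores the distance to the
    nearest point on the template contour."""
    ys, xs = np.indices(shape)
    d = np.full(shape, np.inf)
    for (cy, cx) in contour_pts:
        d = np.minimum(d, np.hypot(ys - cy, xs - cx))
    return d

def mean_match_distance(dist_img, candidate_pts):
    """Verification score for an RST hypothesis: project the candidate
    contour into the distance image and average the looked-up values
    (lower means a better match)."""
    pts = np.round(candidate_pts).astype(int)
    return float(np.mean(dist_img[pts[:, 0], pts[:, 1]]))

# hypothetical square template contour, represented here by its corners only
template = np.array([(10, 10), (10, 20), (20, 20), (20, 10)], dtype=float)
dimg = distance_image((32, 32), template)
exact = mean_match_distance(dimg, template)           # perfect alignment
shifted = mean_match_distance(dimg, template + 3.0)   # mis-registered by 3 px
```

Because the distance image is precomputed offline, each online verification costs only the lookups and one average, which is what makes the scheme fast.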

  5. Changes in the Game Characteristics of a Badminton Match: A Longitudinal Study through the Olympic Game Finals Analysis in Men's Singles.

    PubMed

    Laffaye, Guillaume; Phomsoupha, Michael; Dor, Frédéric

    2015-09-01

    The goal of this study was to analyze, through a longitudinal study, the Olympic Badminton Men's singles finals from the Barcelona Games (1992) to the London Games (2012) to assess changes in the game's characteristics. Six Olympic finals were analyzed based on the official video of the Olympic Games (OG), through the temporal structure and with a notational approach. In total, 537 rallies and 5537 strokes were analyzed. The results show a change in the game's temporal structure: a significant difference in the rally time, rest time and number of shots per rally (all p<0.0001; 0.09 < η(2) < 0.16). Moreover, the shot frequency shows a 34.0% increase (p<0.000001; η(2) = 0.17), whereas the work density revealed a 58.2% decrease (from 78% to 30.8%), as did the effective playing time (-34.5%, from 34.7±1.4% to 22.7±1.4%). This argues for an increase in the intensity of the game and a necessity for the player to use a longer resting time to recover. Lastly, the strokes distribution and the percentage of unforced and forced mistakes did not show any differences throughout the OG analysis, except for the use of the clear. These results impact the way the training of badminton players should be designed, especially in the temporal structure and intensity. Key points: the badminton game has become faster, with an important increase in the shot frequency (+34%); the effective playing time has decreased between the first and last Olympic Games (-34.5%); the strokes distribution and the percentage of unforced and forced errors show no differences through the OG analysis, except for the use of the clear.

  6. Quality-by-Design approach to monitor the operation of a batch bioreactor in an industrial avian vaccine manufacturing process.

    PubMed

    Largoni, Martina; Facco, Pierantonio; Bernini, Donatella; Bezzo, Fabrizio; Barolo, Massimiliano

    2015-10-10

    Monitoring batch bioreactors is a complex task, due to the fact that several sources of variability can affect a running batch and impact the final product quality. Additionally, the product quality itself may not be measurable on line, but requires sampling and lab analysis taking several days to complete. In this study we show that, by using appropriate process analytical technology tools, the operation of an industrial batch bioreactor used in avian vaccine manufacturing can be effectively monitored as the batch progresses. Multivariate statistical models are built from historical databases of batches already completed, and they are used to enable the real-time identification of the variability sources, to reliably predict the final product quality, and to improve process understanding, paving the way to a reduction of final product rejections as well as of the product cycle time. It is also shown that the product quality "builds up" mainly during the first half of a batch, suggesting on the one side that reducing the variability during this period is crucial, and on the other side that the batch length can possibly be shortened. Overall, the study demonstrates that, by using a Quality-by-Design approach centered on the appropriate use of mathematical modeling, quality can indeed be built "by design" into the final product, whereas the role of end-point product testing can be progressively reduced in product manufacturing. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Predicting changes in hypertension control using electronic health records from a chronic disease management program

    PubMed Central

    Sun, Jimeng; McNaughton, Candace D; Zhang, Ping; Perer, Adam; Gkoulalas-Divanis, Aris; Denny, Joshua C; Kirby, Jacqueline; Lasko, Thomas; Saip, Alexander; Malin, Bradley A

    2014-01-01

    Objective Common chronic diseases such as hypertension are costly and difficult to manage. Our ultimate goal is to use data from electronic health records to predict the risk and timing of deterioration in hypertension control. Towards this goal, this work predicts the transition points at which hypertension is brought into, as well as pushed out of, control. Method In a cohort of 1294 patients with hypertension enrolled in a chronic disease management program at the Vanderbilt University Medical Center, patients are modeled as an array of features derived from the clinical domain over time, which are distilled into a core set using an information gain criterion regarding their predictive performance. A model for transition point prediction was then computed using a random forest classifier. Results The most predictive features for transitions in hypertension control status included hypertension assessment patterns, comorbid diagnoses, procedures and medication history. The final random forest model achieved a c-statistic of 0.836 (95% CI 0.830 to 0.842) and an accuracy of 0.773 (95% CI 0.766 to 0.780). Conclusions This study achieved accurate prediction of transition points of hypertension control status, an important first step in the long-term goal of developing personalized hypertension management plans. PMID:24045907
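The information-gain distillation step can be sketched in isolation (the toy binary features and outcomes below are hypothetical, not cohort data):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of discrete labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Reduction in label entropy from splitting on a discrete feature; a
    criterion of this kind can rank features for the core predictive set."""
    n = len(labels)
    split_entropy = 0.0
    for v in set(feature):
        subset = [lab for f, lab in zip(feature, labels) if f == v]
        split_entropy += len(subset) / n * entropy(subset)
    return entropy(labels) - split_entropy

# toy cohort: 1 = a control-status transition occurred, 0 = it did not
transition = [1, 1, 1, 0, 0, 0, 1, 0]
informative = [1, 1, 1, 0, 0, 0, 1, 0]  # hypothetical feature mirroring outcome
noise_feat = [0, 1, 0, 1, 0, 1, 0, 1]   # hypothetical uninformative feature
```

A feature that mirrors the outcome earns the full one bit of gain, while an unrelated feature earns much less, which is how the distillation discards it.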

  8. Asymmetric author-topic model for knowledge discovering of big data in toxicogenomics.

    PubMed

    Chung, Ming-Hua; Wang, Yuping; Tang, Hailin; Zou, Wen; Basinger, John; Xu, Xiaowei; Tong, Weida

    2015-01-01

    The advancement of high-throughput screening technologies facilitates the generation of massive amounts of biological data, a big-data phenomenon in biomedical science. Yet, researchers still heavily rely on keyword search and/or literature review to navigate the databases, and analyses are often done at rather small scale. As a result, the rich information of a database has not been fully utilized, particularly the information embedded in the interactive nature between data points, which is largely ignored and buried. For the past 10 years, probabilistic topic modeling has been recognized as an effective machine learning algorithm to annotate the hidden thematic structure of massive collections of documents. The analogy between a text corpus and large-scale genomic data enables the application of text mining tools, like probabilistic topic models, to explore hidden patterns of genomic data and, by extension, altered biological functions. In this paper, we developed a generalized probabilistic topic model to analyze a toxicogenomics dataset that consists of a large number of gene expression data from the livers of rats treated with drugs at multiple doses and time points. We discovered the hidden patterns in gene expression associated with the effect of doses and time points of treatment. Finally, we illustrated the ability of our model to identify the evidence of potential reduction of animal use.

  9. Predicting changes in hypertension control using electronic health records from a chronic disease management program.

    PubMed

    Sun, Jimeng; McNaughton, Candace D; Zhang, Ping; Perer, Adam; Gkoulalas-Divanis, Aris; Denny, Joshua C; Kirby, Jacqueline; Lasko, Thomas; Saip, Alexander; Malin, Bradley A

    2014-01-01

    Common chronic diseases such as hypertension are costly and difficult to manage. Our ultimate goal is to use data from electronic health records to predict the risk and timing of deterioration in hypertension control. Towards this goal, this work predicts the transition points at which hypertension is brought into, as well as pushed out of, control. In a cohort of 1294 patients with hypertension enrolled in a chronic disease management program at the Vanderbilt University Medical Center, patients are modeled as an array of features derived from the clinical domain over time, which are distilled into a core set using an information gain criterion regarding their predictive performance. A model for transition point prediction was then computed using a random forest classifier. The most predictive features for transitions in hypertension control status included hypertension assessment patterns, comorbid diagnoses, procedures and medication history. The final random forest model achieved a c-statistic of 0.836 (95% CI 0.830 to 0.842) and an accuracy of 0.773 (95% CI 0.766 to 0.780). This study achieved accurate prediction of transition points of hypertension control status, an important first step in the long-term goal of developing personalized hypertension management plans.

  10. Synthesis of capillary pressure curves from post-stack seismic data with the use of intelligent estimators: A case study from the Iranian part of the South Pars gas field, Persian Gulf Basin

    NASA Astrophysics Data System (ADS)

    Golsanami, Naser; Kadkhodaie-Ilkhchi, Ali; Erfani, Amir

    2015-01-01

    Capillary pressure curves are important data for reservoir rock typing, analyzing pore throat distribution, determining height above free water level, and reservoir simulation. Laboratory experiments provide accurate data; however, they are expensive, time-consuming and discontinuous through the reservoir intervals. The current study focuses on synthesizing artificial capillary pressure (Pc) curves from seismic attributes with the use of artificial intelligent systems including Artificial Neural Networks (ANNs), Fuzzy logic (FL) and Adaptive Neuro-Fuzzy Inference Systems (ANFISs). The synthetic capillary pressure curves were achieved by estimating pressure values at six mercury saturation points. These points correspond to mercury-filled pore volumes of core samples (Hg-saturation) at 5%, 20%, 35%, 65%, 80%, and 90% saturations. To predict the synthetic Pc curve at each saturation point, various FL, ANFIS and ANN models were constructed; the neural network models differ in their training algorithms. Based on the performance function, the most accurate models were selected as the final solvers for the prediction at each of the above-mentioned mercury saturation points. The constructed models were then tested at six depth points of the studied well which were previously unseen by the models. The results show that the Fuzzy logic and neuro-fuzzy models were not capable of making reliable estimations, while the predictions from the ANN models were satisfactorily reliable. The obtained results showed a good agreement between the laboratory-derived and synthetic capillary pressure curves. Finally, a 3D seismic cube was captured for which the required attributes were extracted, and the capillary pressure cube was estimated by using the developed models. In the next step, the synthesized Pc cube was compared with the seismic cube and an acceptable correspondence was observed.

  11. Construction of Gene Regulatory Networks Using Recurrent Neural Networks and Swarm Intelligence.

    PubMed

    Khan, Abhinandan; Mandal, Sudip; Pal, Rajat Kumar; Saha, Goutam

    2016-01-01

    We have proposed a methodology for the reverse engineering of biologically plausible gene regulatory networks from temporal gene expression data, building on established information and fundamental mathematical theory. We have employed the Recurrent Neural Network formalism to accurately extract the underlying dynamics present in the time series expression data. We have introduced a new hybrid swarm intelligence framework for the accurate training of the model parameters. The proposed methodology was first applied to a small artificial network, and the results obtained suggest that it produces the best results available in the contemporary literature, to the best of our knowledge. Subsequently, we implemented our proposed framework on experimental (in vivo) datasets. Finally, we investigated two medium-sized genetic networks (in silico) extracted from GeneNetWeaver to understand how the proposed algorithm scales with network size. Additionally, we ran our proposed algorithm with half the number of time points. The results indicate that a 50% reduction in the number of time points does not significantly affect the accuracy of the proposed methodology, with a deterioration of just over 15% in the worst case.
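
    The RNN formalism commonly used for gene regulatory networks models each gene's expression as a decaying quantity driven by a sigmoidal combination of regulator levels. A minimal forward-simulation sketch, with purely illustrative parameters (not the paper's learned values), looks like this:

    ```python
    import numpy as np

    def simulate_grn(w, beta, delta, bias, x0, steps, dt=0.1):
        """Euler-integrate the standard RNN model of gene regulation:
            dx_i/dt = beta_i * sigmoid(sum_j w_ij x_j + bias_i) - delta_i * x_i
        w[i, j] > 0 means gene j activates gene i; w[i, j] < 0 means repression.
        """
        x = np.asarray(x0, dtype=float)
        traj = [x.copy()]
        for _ in range(steps):
            act = 1.0 / (1.0 + np.exp(-(w @ x + bias)))   # regulatory input
            x = x + dt * (beta * act - delta * x)          # synthesis - decay
            traj.append(x.copy())
        return np.array(traj)

    # Two-gene toy network: gene 0 activates gene 1, gene 1 represses gene 0.
    w = np.array([[0.0, -2.0], [2.0, 0.0]])
    traj = simulate_grn(w, beta=np.array([1.0, 1.0]), delta=np.array([0.5, 0.5]),
                        bias=np.zeros(2), x0=np.array([0.1, 0.1]), steps=200)
    print(traj.shape)  # (201, 2)
    ```

    Training, in the paper's setting, would mean searching the (w, beta, delta, bias) space with the hybrid swarm-intelligence framework so that simulated trajectories match the measured time series.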

  12. Cosmology of a covariant Galilean field.

    PubMed

    De Felice, Antonio; Tsujikawa, Shinji

    2010-09-10

    We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which allows a possibility to observationally distinguish the Galileon gravity from the cold dark matter model with a cosmological constant.

  13. Low authority-threshold control for large flexible structures

    NASA Technical Reports Server (NTRS)

    Zimmerman, D. C.; Inman, D. J.; Juang, J.-N.

    1988-01-01

    An improved active control strategy for the vibration control of large flexible structures is presented. A minimum force, low authority-threshold controller is developed to bring a system with or without known external disturbances back into an 'allowable' state manifold over a finite time interval. The concept of a constrained, or allowable feedback form of the controller is introduced that reflects practical hardware implementation concerns. The robustness properties of the control strategy are then assessed. Finally, examples are presented which highlight the key points made within the paper.

  14. Learning State Space Dynamics in Recurrent Networks

    NASA Astrophysics Data System (ADS)

    Simard, Patrice Yvon

    Fully recurrent (asymmetrical) networks can be used to learn temporal trajectories. The network is unfolded in time, and backpropagation is used to train the weights. The presence of recurrent connections creates internal states in the system which vary as a function of time. The resulting dynamics can provide interesting additional computing power, but learning is made more difficult by the existence of internal memories. This study first exhibits the properties of recurrent networks in terms of convergence when the internal states of the system are unknown. A new energy functional is provided to change the weights of the units in order to control the stability of the fixed points of the network's dynamics. The power of the resulting algorithm is illustrated with the simulation of a content addressable memory. Next, the more general case of time trajectories on a recurrent network is studied. An application is proposed in which trajectories are generated to draw letters as a function of an input. In another application of recurrent systems, a neural network models certain temporal properties observed in human callosally sectioned brains. Finally, the proposed algorithm for stabilizing dynamics around fixed points is extended to one for stabilizing dynamics around time trajectories. Its effects are illustrated on a network which generates Lissajous curves.
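
    The content-addressable-memory idea — stored patterns acting as attracting fixed points of a recurrent network's dynamics — can be illustrated with a classical Hebbian (Hopfield-style) network. This is a generic stand-in, not the thesis's energy-functional learning algorithm:

    ```python
    import numpy as np

    def hebbian_weights(patterns):
        """Hebbian storage rule for binary (+/-1) patterns; the stored
        patterns become fixed points of the recall dynamics."""
        p = np.asarray(patterns, dtype=float)
        w = p.T @ p / p.shape[1]
        np.fill_diagonal(w, 0.0)          # no self-connections
        return w

    def recall(w, state, steps=10):
        """Iterate the network synchronously; a state near a stored
        pattern is pulled onto that fixed point."""
        s = np.asarray(state, dtype=float)
        for _ in range(steps):
            s = np.sign(w @ s)
            s[s == 0] = 1.0               # break ties deterministically
        return s

    pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
    w = hebbian_weights([pattern])
    noisy = pattern.copy()
    noisy[0] = -1                         # corrupt one bit
    print(np.array_equal(recall(w, noisy), pattern))  # True
    ```

    The thesis's contribution is learning such stable fixed points (and, later, stable trajectories) in asymmetric networks via gradient descent on an energy functional, rather than by the fixed Hebbian rule above.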

  15. 78 FR 2438 - Agency Information Collection Activities; Existing Collection, Comments Requested: The National...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-11

    ... Collection, Comments Requested: The National Instant Criminal Background Check System (NICS) Point-of-Contact... Background Check System (NICS) Point-of-Contact (POC) State Final Determination Electronic Submission. (3... required to respond, as well as a brief abstract: Primary: Full Point-of-Contact (POC) States; Partial POC...

  16. Reasons for Distress Among Burn Survivors at 6, 12, and 24 Months Postdischarge: A Burn Injury Model System Investigation.

    PubMed

    Wiechman, Shelley A; McMullen, Kara; Carrougher, Gretchen J; Fauerbach, Jame A; Ryan, Colleen M; Herndon, David N; Holavanahalli, Radha; Gibran, Nicole S; Roaten, Kimberly

    2017-12-16

    To identify important sources of distress among burn survivors at discharge and at 6, 12, and 24 months postinjury, and to examine whether the distress related to these sources changed over time. Exploratory. Outpatient burn clinics in 4 sites across the country. Participants who met preestablished criteria for having a major burn injury (N=1009) were enrolled in this multisite study. Participants were given a previously developed list of 12 sources of distress among burn survivors and asked to rate on a 10-point Likert-type scale (0=no distress to 10=high distress) how much distress each of the 12 issues was causing them at the time of each follow-up. The Medical Outcomes Study 12-Item Short-Form Health Survey was administered at each time point as a measure of health-related quality of life. The Satisfaction With Appearance Scale was used to understand the relation between sources of distress and body image. Finally, whether a person returned to work was used to determine the effect of sources of distress on returning to employment. It was encouraging that no symptoms were worsening at 2 years. However, financial concerns and long recovery time had 2 of the highest mean distress ratings at all time points. Pain and sleep disturbance had the biggest effect on the ability to return to work. These findings can be used to inform burn-specific interventions and to give survivors an understanding of the temporal trajectory of various causes of distress. In particular, it appears that interventions targeting sleep disturbance and high pain levels can potentially reduce distress over financial concerns by allowing a person to return to work more quickly. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  17. Identification of biogeochemical hot spots using time-lapse hydrogeophysics

    NASA Astrophysics Data System (ADS)

    Franz, T. E.; Loecke, T.; Burgin, A.

    2016-12-01

    The identification and monitoring of biogeochemical hot spots and hot moments is difficult using point-based sampling techniques and sensors. Without proper monitoring and accounting of water, energy, and trace gas fluxes it is difficult to assess the environmental footprint of land management practices. One key limitation is the optimal placement of sensors/chambers so that they adequately capture the point-scale fluxes and thus permit a reasonable integration to a landscape-scale flux. In this work we present time-lapse hydrogeophysical imaging at an old agricultural field converted into a wetland mitigation bank near Dayton, Ohio. While the wetland was previously instrumented with a network of soil sensors and surface chambers to capture a suite of state variables and fluxes, we hypothesize that time-lapse hydrogeophysical imaging is an underutilized and critical reconnaissance tool for effective network design and landscape scaling. Here we combine the time-lapse hydrogeophysical imagery with the multivariate statistical technique of Empirical Orthogonal Functions (EOF) in order to isolate the spatial and temporal components of the imagery. Comparisons of soil core data (e.g., soil texture, soil carbon) from around the study site, grouped by like spatial zones, reveal statistically different mean values of soil properties. Moreover, the like spatial zones can be used to identify a finite number of future sampling locations, to evaluate the placement of existing sensors/chambers, and to upscale or downscale observations, all of which are desirable capabilities for commercial use in precision agriculture. Finally, we note that combining the EOF analysis with continuous monitoring from point sensors or remote sensing products may provide a robust statistical framework for scaling observations through time as well as provide appropriate datasets for use in landscape biogeochemical models.
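
    EOF analysis of a time-lapse image stack is, computationally, an SVD of the demeaned (time x space) data matrix: the right singular vectors are the spatial patterns, the scaled left singular vectors the temporal amplitudes. A minimal sketch with synthetic data (not the Dayton imagery):

    ```python
    import numpy as np

    def eof_decompose(data):
        """EOF analysis of a (time x space) data matrix via SVD.
        Returns spatial patterns (EOFs), temporal coefficients (PCs),
        and the fraction of variance explained by each mode."""
        anomalies = data - data.mean(axis=0)          # remove temporal mean
        u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
        pcs = u * s                                   # temporal amplitudes
        eofs = vt                                     # spatial patterns
        variance_fraction = s**2 / np.sum(s**2)
        return eofs, pcs, variance_fraction

    # Toy field: one dominant spatial mode plus weak noise (illustrative only).
    rng = np.random.default_rng(0)
    t = np.linspace(0, 4 * np.pi, 50)
    space_pattern = np.sin(np.linspace(0, np.pi, 20))
    data = np.outer(np.sin(t), space_pattern) + 0.01 * rng.standard_normal((50, 20))
    eofs, pcs, var = eof_decompose(data)
    print(var[0] > 0.95)  # first mode captures nearly all the variance: True
    ```

    In the paper's use, contiguous regions with similar loadings on the leading EOFs define the "like spatial zones" used to place sensors and group soil cores.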

  18. Entanglement spreading after a geometric quench in quantum spin chains

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo; Heidrich-Meisner, Fabian

    2014-08-01

    We investigate the entanglement spreading in the anisotropic spin-1/2 Heisenberg (XXZ) chain after a geometric quench. This corresponds to a sudden change of the geometry of the chain or, in the equivalent language of interacting fermions confined in a box trap, to a sudden increase of the trap size. The entanglement dynamics after the quench is associated with the ballistic propagation of a magnetization wave front. At the free fermion point (XX chain), the von Neumann entropy S_A exhibits several intriguing dynamical regimes. Specifically, at short times a logarithmic increase is observed, similar to local quenches. This is accurately described by an analytic formula that we derive from heuristic arguments. At intermediate times partial revivals of the short-time dynamics are superposed with a power-law increase S_A ~ t^α, with α < 1. Finally, at very long times a steady state develops with constant entanglement entropy, apart from oscillations. As expected, since the model is integrable, we find that the steady state is nonthermal, although it exhibits extensive entanglement entropy. We also investigate the entanglement dynamics after the quench from a finite to the infinite chain (sudden expansion). While at long times the entanglement vanishes, we demonstrate that its relaxation dynamics exhibits a number of scaling properties. Finally, we discuss the short-time entanglement dynamics in the XXZ chain in the gapless phase. The same formula that describes the time dependence for the XX chain remains valid in the whole gapless phase.

  19. Evolution and End Point of the Black String Instability: Large D Solution.

    PubMed

    Emparan, Roberto; Suzuki, Ryotaku; Tanabe, Kentaro

    2015-08-28

    We derive a simple set of nonlinear, (1+1)-dimensional partial differential equations that describe the dynamical evolution of black strings and branes to leading order in the expansion in the inverse of the number of dimensions D. These equations are easily solved numerically. Their solution shows that thin enough black strings are unstable to developing inhomogeneities along their length, and at late times they asymptote to stable nonuniform black strings. This proves an earlier conjecture about the end point of the instability of black strings in a large enough number of dimensions. If the initial black string is very thin, the final configuration is highly nonuniform and resembles a periodic array of localized black holes joined by short necks. We also present the equations that describe the nonlinear dynamics of anti-de Sitter black branes at large D.

  20. Coalbed-methane pilots - timing, design, and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roadifer, R.D.; Moore, T.R.

    2009-10-15

    Four distinct sequential phases form a recommended process for coalbed-methane (CBM) prospect assessment: initial screening, reconnaissance, pilot testing, and final appraisal. Stepping through these four phases provides a program of progressively ramping work and cost, while creating a series of discrete decision points at which results and risks can be analyzed. While discussing each of these phases to some degree, this paper focuses on the third, the critically important pilot-testing phase. This paper contains roughly 30 specific recommendations and the fundamental rationale behind each, to help ensure that a CBM pilot will fulfill its primary objectives of (1) demonstrating whether the subject coal reservoir will desorb and produce consequential gas and (2) gathering the data critical to evaluate and risk the prospect at the next, often most critical, decision point.

  1. Numerical investigation of a laser gun injector at CEBAF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byung Yunn; Charles Sinclair; David Neuffer

    1993-08-23

    A laser gun injector is being developed based on the superconducting rf technologies established at CEBAF. This injector will serve as a high charge cw source for a high power free electron laser. It consists of a dc laser gun, a buncher, a cryounit and a chicane. Its space-charge-dominated performance has been thoroughly investigated using the time-consuming but more appropriate point-by-point space charge calculation method in PARMELA. The notion of "conditioning for final bunching" will be introduced. This concept has been built into the code and has greatly facilitated the optimization of the whole system to achieve the highest possible peak current while maintaining low emittance and low energy spread. Extensive parameter variation studies have shown that the design will perform better than the specifications.

  2. Quantum Quench Dynamics

    NASA Astrophysics Data System (ADS)

    Mitra, Aditi

    2018-03-01

    Quench dynamics is an active area of study encompassing condensed matter physics and quantum information, with applications to cold-atomic gases and pump-probe spectroscopy of materials. Recent theoretical progress in studying quantum quenches is reviewed. Quenches in interacting one-dimensional systems as well as systems in higher spatial dimensions are covered. The appearance of nontrivial steady states following a quench in exactly solvable models is discussed, and the stability of these states to perturbations is described. Proper conserving approximations needed to capture the onset of thermalization at long times are outlined. The appearance of universal scaling for quenches near critical points and the role of the renormalization group in capturing the transient regime are reviewed. Finally, the effect of quenches near critical points on the dynamics of entanglement entropy and entanglement statistics is discussed. The extraction of critical exponents from the entanglement statistics is outlined.

  3. Detection of small surface defects using DCT based enhancement approach in machine vision systems

    NASA Astrophysics Data System (ADS)

    He, Fuqiang; Wang, Wen; Chen, Zichen

    2005-12-01

    Utilizing a DCT-based enhancement approach, an improved small-defect detection algorithm for real-time leather surface inspection was developed. A two-stage decomposition procedure was proposed to extract an odd-odd frequency matrix after the digital image has been transformed to the DCT domain. Then, the reverse cumulative sum algorithm was proposed to detect the transition points of the gentle curves plotted from the odd-odd frequency matrix. The best radius of the cutting sector was computed in terms of the transition points, and the high-pass filtering operation was implemented. The filtered image was then inverse-transformed back to the spatial domain. Finally, the restored image was segmented by an entropy method and some defect features were calculated. Experimental results show that the proposed small-defect detection method achieves a detection rate of 94%.
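
    The core filtering step — transform to the DCT domain, suppress low frequencies inside a cutting radius, invert — can be sketched as follows. The radius here is chosen by hand rather than by the paper's transition-point analysis, and a square image is assumed for brevity:

    ```python
    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II transform matrix (so the inverse is the transpose)."""
        k = np.arange(n)[:, None]
        m = np.arange(n)[None, :]
        c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
        c[0, :] = np.sqrt(1.0 / n)
        return c

    def dct_highpass(image, radius):
        """Zero DCT coefficients within `radius` of the origin, then invert:
        a simplified stand-in for the paper's sector-cutting filter."""
        img = np.asarray(image, dtype=float)
        c = dct_matrix(img.shape[0])                 # square image assumed
        coeffs = c @ img @ c.T                       # forward 2D DCT
        u, v = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]),
                           indexing="ij")
        coeffs[np.hypot(u, v) < radius] = 0.0        # cut low frequencies
        return c.T @ coeffs @ c                      # inverse 2D DCT

    flat = np.full((8, 8), 5.0)    # a uniform background has only the DC term
    print(np.allclose(dct_highpass(flat, radius=1), 0.0))  # True
    ```

    On a real leather image, removing the low-frequency terms flattens the slowly varying background so that small defects stand out for the entropy-based segmentation.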

  4. Challenges in early clinical development of adjuvanted vaccines.

    PubMed

    Della Cioppa, Giovanni; Jonsdottir, Ingileif; Lewis, David

    2015-06-08

    A three-step approach to the early development of adjuvanted vaccine candidates is proposed, the goal of which is to allow ample space for exploratory and hypothesis-generating human experiments and to select dose(s) and dosing schedule(s) to bring into full development. Although the proposed approach is more extensive than the traditional early development program, the authors suggest that by addressing key questions upfront the overall time, size and cost of development will be reduced and the probability of public health advancement enhanced. The immunogenicity end-points for early development should be critically selected: an established immunological parameter with a well characterized assay should be selected as the primary end-point for dose and schedule finding; exploratory information-rich end-points should be limited in number and based on pre-defined hypothesis-generating plans, including system biology and pathway analyses. Building a pharmacodynamic profile is an important aspect of early development: to this end, multiple early (within 24 h) and late (up to one year) samples are necessary, which can be accomplished by sampling subgroups of subjects at different time points. In most cases the final target population, even if vulnerable, should be considered for inclusion in early development. In order to obtain the multiple formulations necessary for the dose and schedule finding, "bed-side mixing" of various components of the vaccine is often necessary: this is a complex and underestimated area that deserves serious research and logistical support. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Triana Safehold: A New Gyroless, Sun-Pointing Attitude Controller

    NASA Technical Reports Server (NTRS)

    Chen, J.; Morgenstern, Wendy; Garrick, Joseph

    2001-01-01

    Triana is a single-string spacecraft to be placed in a halo orbit about the Sun-Earth L1 Lagrangian point. The Attitude Control Subsystem (ACS) hardware includes four reaction wheels, ten thrusters, six coarse sun sensors, a star tracker, and a three-axis Inertial Measuring Unit (IMU). The ACS Safehold design features a gyroless sun-pointing control scheme using only sun sensors and wheels. With this minimum hardware approach, Safehold increases mission reliability in the event of a gyroscope anomaly. In place of the gyroscope rate measurements, Triana Safehold uses wheel tachometers to help provide a scaled estimation of the spacecraft body rate about the sun vector. Since Triana nominally performs momentum management every three months, its accumulated system momentum can reach a significant fraction of the wheel capacity. It is therefore a requirement for Safehold to maintain a sun-pointing attitude even when the spacecraft system momentum is reasonably large. The tachometer sun-line rate estimation enables the controller to bring the spacecraft close to its desired sun-pointing attitude even with reasonably high system momentum and wheel drags. This paper presents the design rationale behind this gyroless controller, stability analysis, and some time-domain simulation results showing performance with various initial conditions. Finally, suggestions for future improvements are briefly discussed.
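
    The tachometer-based rate estimate rests on momentum bookkeeping: total system angular momentum equals body momentum plus wheel momentum, so with the total tracked through momentum management and the wheel part read from tachometers, the body rate can be inferred. A generic sketch with purely illustrative numbers (not Triana parameters or its actual sun-line estimator):

    ```python
    import numpy as np

    def body_rate_estimate(inertia, total_momentum_body, wheel_momentum_body):
        """Gyroless body-rate estimate from the momentum balance
            H_total = I * omega + h_wheels,
        solved for omega. All quantities are in the body frame."""
        return np.linalg.solve(inertia, total_momentum_body - wheel_momentum_body)

    inertia = np.diag([120.0, 110.0, 90.0])      # kg m^2, hypothetical
    h_total = np.array([2.0, 0.0, 1.0])          # N m s, assumed known
    h_wheels = np.array([1.4, -0.5, 0.1])        # from wheel tachometers
    omega = body_rate_estimate(inertia, h_total, h_wheels)
    print(omega)                                  # rad/s
    ```

    In practice the estimate is only as good as the knowledge of the total momentum and wheel drag, which is why the paper describes it as a scaled estimation about the sun vector rather than a full three-axis rate measurement.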

  6. An analytical model for the calculation of the change in transmembrane potential produced by an ultrawideband electromagnetic pulse.

    PubMed

    Hart, Francis X; Easterly, Clay E

    2004-05-01

    The electric field pulse shape and change in transmembrane potential produced at various points within a sphere by an intense, ultrawideband pulse are calculated in a four-stage analytical procedure. Spheres of two sizes are used to represent the head of a human and the head of a rat. In the first stage, the pulse is decomposed into its Fourier components. In the second stage, Mie scattering analysis (MSA) is performed for a particular point in the sphere on each of the Fourier components, and the resulting electric field pulse shape is obtained for that point. In the third stage, the long wavelength approximation (LWA) is used to obtain the change in transmembrane potential in a cell at that point. In the final stage, an energy analysis is performed. These calculations are performed at 45 points within each sphere. Large electric fields and transmembrane potential changes on the order of a millivolt are produced within the brain, but on a time scale on the order of nanoseconds. The pulse shape within the brain differs considerably from that of the incident pulse. Comparison of the results for spheres of different sizes indicates that scaling of such pulses across species is complicated. Published 2004 Wiley-Liss, Inc.
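
    The first stage — decomposing the incident pulse into Fourier components, each of which is then propagated separately — can be sketched with a discrete Fourier transform. The double-exponential pulse shape and its time constants below are illustrative assumptions, not the paper's waveform:

    ```python
    import numpy as np

    fs = 100e9                                  # 100 GHz sampling rate (assumed)
    t = np.arange(0, 10e-9, 1 / fs)             # 10 ns window
    pulse = np.exp(-t / 1e-9) - np.exp(-t / 0.2e-9)   # double-exponential UWB pulse

    spectrum = np.fft.rfft(pulse)               # complex Fourier components
    freqs = np.fft.rfftfreq(len(pulse), 1 / fs) # their frequencies in Hz

    # In the paper, each component would be passed through Mie scattering
    # analysis before resynthesis; here we only verify the decomposition
    # is invertible, so summing the (unmodified) components recovers the pulse.
    reconstructed = np.fft.irfft(spectrum, n=len(pulse))
    print(np.allclose(pulse, reconstructed))    # True
    ```

    Because scattering is applied per frequency, the resynthesized pulse inside the sphere generally has a very different shape from the incident one, as the abstract notes.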

  7. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    PubMed Central

    Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu

    2017-01-01

    The PET and CT fusion image, combining the anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point cloud for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slices extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closest Point is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method with lower negative normalization correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
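
    Each Iterative Closest Point iteration solves a least-squares alignment of matched point pairs. The paper drives an affine transform; the sketch below shows the simpler rigid (Kabsch) inner step on synthetic points, as a generic illustration rather than the paper's implementation:

    ```python
    import numpy as np

    def best_rigid_transform(src, dst):
        """Least-squares rigid transform (Kabsch algorithm) aligning matched
        3D point pairs: returns R, t minimizing ||R @ src_i + t - dst_i||."""
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        u, _, vt = np.linalg.svd(src_c.T @ dst_c)
        d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflections
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        t = dst.mean(axis=0) - r @ src.mean(axis=0)
        return r, t

    rng = np.random.default_rng(1)
    cloud = rng.standard_normal((30, 3))           # synthetic contour points
    angle = np.deg2rad(30)
    r_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    moved = cloud @ r_true.T + np.array([1.0, -2.0, 0.5])
    r, t = best_rigid_transform(cloud, moved)
    print(np.allclose(cloud @ r.T + t, moved))     # True
    ```

    A full ICP loop alternates this solve with nearest-neighbor matching between the PET and CT contour clouds; the multithreading in the paper parallelizes that correspondence search.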

  8. Behavioral momentum and resurgence: Effects of time in extinction and repeated resurgence tests

    PubMed Central

    Shahan, Timothy A.

    2014-01-01

    Resurgence is an increase in a previously extinguished operant response that occurs if an alternative reinforcement introduced during extinction is removed. Shahan and Sweeney (2011) developed a quantitative model of resurgence based on behavioral momentum theory that captures existing data well and predicts that resurgence should decrease as time in extinction and exposure to the alternative reinforcement increases. Two experiments tested this prediction. The data from Experiment 1 suggested that without a return to baseline, resurgence decreases with increased exposure to alternative reinforcement and to extinction of the target response. Experiment 2 tested the predictions of the model across two conditions, one with constant alternative reinforcement for five sessions, and the other with alternative reinforcement removed three times. In both conditions, the alternative reinforcement was removed for the final test session. Experiment 2 again demonstrated a decrease in relapse across repeated resurgence tests. Furthermore, comparably little resurgence was observed at the same time point in extinction in the final test, despite dissimilar previous exposures to alternative reinforcement removal. The quantitative model provided a good description of the observed data in both experiments. More broadly, these data suggest that increased exposure to extinction may be a successful strategy to reduce resurgence. The relationship between these data and existing tests of the effect of time in extinction on resurgence is discussed. PMID:23982985

  9. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations.

    PubMed

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-04-11

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10-0.20 m, and vertical accuracy was approximately 0.01-0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed.
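
    Mapping a laser point into a panoramic image ultimately reduces to a spherical projection in the camera frame. The sketch below shows the generic equirectangular case; the paper's collinearity model additionally folds in the calibrated lever-arm and boresight between the GPS antenna, scanners, and camera lenses, which are omitted here:

    ```python
    import numpy as np

    def point_to_panorama_pixel(point, width, height):
        """Map a 3D point (already in the panoramic camera frame) to
        equirectangular pixel coordinates (col, row)."""
        x, y, z = point
        azimuth = np.arctan2(y, x)                      # [-pi, pi]
        elevation = np.arcsin(z / np.linalg.norm(point))
        col = (azimuth + np.pi) / (2 * np.pi) * width   # wrap azimuth to columns
        row = (np.pi / 2 - elevation) / np.pi * height  # zenith at row 0
        return col, row

    col, row = point_to_panorama_pixel((1.0, 0.0, 0.0), width=4096, height=2048)
    print(col, row)  # 2048.0 1024.0  (a point on the forward axis maps to center)
    ```

    The paper's block-wise search strategy limits which scanner points are tested against which lens, so this per-point projection is only evaluated where a correspondence is plausible.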

  10. The Recovery of TOMS-EP

    NASA Technical Reports Server (NTRS)

    Robertson, Brent; Sabelhaus, Phil; Mendenhall, Todd; Fesq, Lorraine

    1998-01-01

    On December 13th 1998, the Total Ozone Mapping Spectrometer - Earth Probe (TOMS-EP) spacecraft experienced a Single Event Upset which caused the system to reconfigure and enter a Safe Mode. This incident occurred two and a half years after the launch of the spacecraft which was designed for a two year life. A combination of factors, including changes in component behavior due to age and extended use, very unfortunate initial conditions and the safe mode processing logic prevented the spacecraft from entering its nominal long term storage mode. The spacecraft remained in a high fuel consumption mode designed for temporary use. By the time the onboard fuel was exhausted, the spacecraft was Sun pointing in a high rate flat spin. Although the uncontrolled spacecraft was initially in a power and thermal safe orientation, it would not stay in this state indefinitely due to a slow precession of its momentum vector. A recovery team was immediately assembled to determine if there was time to develop a method of despinning the vehicle and return it to normal science data collection. A three stage plan was developed that used the onboard magnetic torque rods as actuators. The first stage was designed to reduce the high spin rate to within the linear range of the gyros. The second stage transitioned the spacecraft from sun pointing to orbit reference pointing. The final stage returned the spacecraft to normal science operation. The entire recovery scenario was simulated with a wide range of initial conditions to establish the expected behavior. The recovery sequence was started on December 28th 1998 and completed by December 31st. TOMS-EP was successfully returned to science operations by the beginning of 1999. This paper describes the TOMS-EP Safe Mode design and the factors which led to the spacecraft anomaly and loss of fuel. The recovery and simulation efforts are described. Flight data are presented which show the performance of the spacecraft during its return to science. Finally, lessons learned are presented.

  11. Liver maximum capacity (LiMAx) test as a helpful prognostic tool in acute liver failure with sepsis: a case report.

    PubMed

    Buechter, Matthias; Gerken, Guido; Hoyer, Dieter P; Bertram, Stefanie; Theysohn, Jens M; Thodou, Viktoria; Kahraman, Alisan

    2018-06-20

    Acute liver failure (ALF) is a life-threatening entity, particularly when infectious complications worsen the clinical course. Urgent liver transplantation (LT) is frequently the only curative treatment. However, in some cases, recovery is observed under conservative treatment. Therefore, prognostic tools for estimating the course of the disease are of great clinical interest. Since laboratory parameters sometimes lack sensitivity and specificity, enzymatic liver function measured by the liver maximum capacity (LiMAx) test may offer novel and valuable additional information in this setting. We here report the case of a formerly healthy 20-year-old male Caucasian patient who was admitted to our clinic for ALF of unknown origin in December 2017. Laboratory parameters confirmed the diagnosis with an initial MELD score of 28 points. Likewise, enzymatic liver function was significantly impaired, with a value of 147 [> 315] μg/h/kg. Clinical and biochemical analyses for viral, autoimmune, or drug-induced hepatitis were negative. Liver synthesis parameters further deteriorated, reaching a MELD score of 40 points, whilst the clinical course was complicated by septic pneumonia leading to severe hepatic encephalopathy grade III-IV and finally resulting in mechanical ventilation of the patient. Interestingly, although the clinical course and laboratory data suggested a poor outcome, serial LiMAx testing revealed improvement of enzymatic liver function at this time point, increasing to 169 μg/h/kg. The clinical condition and laboratory data slowly improved likewise, however with a significant time delay of 11 days. Finally, the patient could be discharged from our clinic after 37 days. Estimating prognosis in patients with ALF using the established scores is challenging. In our case, improvement of enzymatic liver function measured by the LiMAx test was the first parameter predicting a beneficial outcome in a patient with ALF complicated by sepsis.
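
    For reference, the classic (pre-2016) MELD score cited in the case is a fixed formula over bilirubin, INR, and creatinine. The sketch below implements the standard convention (values below 1.0 floored, creatinine capped at 4.0, score capped at 40); the scores in the case report came from the clinical laboratory, not this code:

    ```python
    import math

    def meld_score(bilirubin, inr, creatinine):
        """Classic MELD score; bilirubin and creatinine in mg/dL, INR unitless.
        MELD = 3.78*ln(bili) + 11.2*ln(INR) + 9.57*ln(creat) + 6.43,
        with inputs floored at 1.0, creatinine capped at 4.0, score capped at 40."""
        bilirubin = max(bilirubin, 1.0)
        inr = max(inr, 1.0)
        creatinine = min(max(creatinine, 1.0), 4.0)
        score = (3.78 * math.log(bilirubin)
                 + 11.2 * math.log(inr)
                 + 9.57 * math.log(creatinine)
                 + 6.43)
        return min(round(score), 40)

    print(meld_score(bilirubin=1.0, inr=1.0, creatinine=1.0))  # 6 (scale floor)
    ```

    A jump such as the patient's 28 → 40 therefore reflects steep rises in these synthesis and clearance markers, which is precisely the context in which the LiMAx trend diverged from the score.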

  12. Mixed quantum/classical investigation of the photodissociation of NH3(Ã) and a practical method for maintaining zero-point energy in classical trajectories

    NASA Astrophysics Data System (ADS)

    Bonhommeau, David; Truhlar, Donald G.

    2008-07-01

    The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2=0,…,6 quanta of vibration) in the à electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU/SD + trajectory projection onto ZPE orbit (TRAPZ) and FSTU/SD + minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2=0 and n2>1, as observed in experiments. Distributions obtained for n2=1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2=0 and n2=6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.

  14. Stairs and Doors Recognition as Natural Landmarks Based on Clouds of 3D Edge-Points from RGB-D Sensors for Mobile Robot Localization.

    PubMed

    Souto, Leonardo A V; Castro, André; Gonçalves, Luiz Marcos Garcia; Nascimento, Tiago P

    2017-08-08

Natural landmarks are the main features in the next step of research on the localization of mobile robot platforms. The identification and recognition of these landmarks are crucial to better localize a robot. To help solve this problem, this work proposes an approach for the identification and recognition of natural marks included in the environment using images from RGB-D (Red, Green, Blue, Depth) sensors. In the identification step, a structural analysis of the natural landmarks present in the environment is performed. The extraction of edge points of these landmarks is done using the 3D point cloud obtained from the RGB-D sensor. These edge points are smoothed through the Sl0 algorithm, which minimizes the standard deviation of the normals at each point. Then, the second step of the proposed algorithm begins, which is the proper recognition of the natural landmarks. This recognition step is a real-time algorithm that extracts the points referring to the filtered edges and determines to which structure they belong in the current scenario: stairs or doors. Finally, the geometrical characteristics that are intrinsic to doors and stairs are identified. The approach proposed here has been validated with real robot experiments, and the tests performed verify its efficacy.
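One geometric cue the recognition step can use is that stairs produce several horizontal edge lines at roughly regular height intervals, while other structures do not. The sketch below illustrates that idea only; the thresholds and the exact decision rule are assumptions for illustration, not the paper's values.

```python
# Hypothetical classification of a detected edge structure from the heights
# (in meters) of its horizontal edge lines: regular spacing suggests stairs.
def classify_structure(edge_heights, step_tol=0.05, min_steps=3):
    """edge_heights: sorted heights of detected horizontal edge lines."""
    gaps = [b - a for a, b in zip(edge_heights, edge_heights[1:])]
    if len(gaps) + 1 >= min_steps and max(gaps) - min(gaps) <= step_tol:
        return "stairs"
    return "other"

# four edges spaced ~17 cm apart look like a staircase
label = classify_structure([0.0, 0.17, 0.34, 0.51])
```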

  16. Fast generation of complex modulation video holograms using temporal redundancy compression and hybrid point-source/wave-field approaches

    NASA Astrophysics Data System (ADS)

    Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce

    2015-09-01

    The hybrid point-source/wave-field method is a newly proposed approach for Computer-Generated Hologram (CGH) calculation, based on the slicing of the scene into several depth layers parallel to the hologram plane. The complex wave scattered by each depth layer is then computed using either a wave-field or a point-source approach according to a threshold criterion on the number of points within the layer. Finally, the complex waves scattered by all the depth layers are summed up in order to obtain the final CGH. Although outperforming both point-source and wave-field methods without producing any visible artifact, this approach has not yet been used for animated holograms, and the possible exploitation of temporal redundancies has not been studied. In this paper, we propose a fast computation of video holograms by taking into account those redundancies. Our algorithm consists of three steps. First, intensity and depth data of the current 3D video frame are extracted and compared with those of the previous frame in order to remove temporally redundant data. Then the CGH pattern for this compressed frame is generated using the hybrid point-source/wave-field approach. The resulting CGH pattern is finally transmitted to the video output and stored in the previous frame buffer. Experimental results reveal that our proposed method is able to produce video holograms at interactive rates without producing any visible artifact.
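The three-step pipeline above can be sketched as: drop scene points unchanged since the previous frame, then pick the per-layer propagation method by a point-count threshold. The data layout, threshold value, and function names below are illustrative assumptions; the actual diffraction computations are omitted.

```python
# Hedged sketch of the temporal-redundancy + hybrid-dispatch steps.
def remove_redundant(current, previous):
    """Keep only scene points whose intensity/depth changed since the last
    frame. Frames are dicts mapping (x, y) pixel -> (intensity, depth)."""
    return {p: v for p, v in current.items() if previous.get(p) != v}

def choose_method(points_in_layer, threshold=100):
    """Per depth layer: point-source for sparse layers, wave-field otherwise.
    The threshold is an arbitrary placeholder."""
    return "point-source" if points_in_layer < threshold else "wave-field"

prev = {(0, 0): (0.5, 1.0), (1, 0): (0.2, 1.5)}
curr = {(0, 0): (0.5, 1.0), (1, 0): (0.3, 1.5), (2, 2): (0.9, 2.0)}
changed = remove_redundant(curr, prev)  # only modified and new points remain
```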

  17. Validation of an optical encoder during free weight resistance movements and analysis of bench press sticking point power during fatigue.

    PubMed

    Drinkwater, Eric J; Galna, Brook; McKenna, Michael J; Hunt, Patrick H; Pyne, David B

    2007-05-01

During the concentric movement of the bench press, there is an initial high-power push after chest contact, immediately followed by a characteristic area of low power, the so-called "sticking region." During high-intensity lifting, a decline in power can result in a failed lift attempt. The purpose of this study was to determine the validity of an optical encoder to measure power and then employ this device to determine power changes during the initial acceleration and sticking region during fatiguing repeated bench press training. Twelve subjects performed a free weight bench press, a Smith Machine back squat, and a Smith Machine 40-kg bench press throw for power validation measures. All barbell movements were simultaneously monitored using cinematography and an optical encoder. Eccentric and concentric mean and peak power were calculated using time and position data derived from each method. Validity of the power measures between the video (criterion) and optical encoder scores was evaluated by the standard error of the estimate (SEE) and coefficient of variation (CV). Seven subjects then performed 4 sets of 6 free weight bench press repetitions progressively increasing from 85 to 95% of their 6 repetition maximum, with each repetition continually monitored by an optical encoder. The SEE for power ranged from 3.6 to 14.4 W (CV, 1.0-3.0%; correlation, 0.97-1.00). During the free weight bench press training, peak power declined by approximately 55% (p < 0.01) during the initial acceleration phase of the final 2 repetitions of the final set. Decreases in power at the sticking point were significant (p < 0.01) as early as repetition 5 (-40%) and reached critically low levels in the final 2 repetitions (>-95%). In conclusion, the optical encoder provided valid measures of kinetics during free weight resistance training movements. The decline in power during the initial acceleration phase appears to be a factor in a failed lift attempt at the sticking point.
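Barbell power from encoder position-time samples is typically derived by differentiating position to get velocity and acceleration, then applying P = m(g + a)v. This is a minimal sketch of that standard computation, not the study's processing code; the sampling interval and load are illustrative.

```python
# Mean concentric power from encoder displacement samples (central
# differences). Assumes a vertical lift of mass `mass` in kg.
G = 9.81  # gravitational acceleration, m/s^2

def mean_concentric_power(positions, dt, mass):
    """positions: barbell heights (m) at uniform spacing dt (s)."""
    powers = []
    for i in range(1, len(positions) - 1):
        v = (positions[i + 1] - positions[i - 1]) / (2 * dt)
        a = (positions[i + 1] - 2 * positions[i] + positions[i - 1]) / dt ** 2
        powers.append(mass * (G + a) * v)  # P = m*(g + a)*v
    return sum(powers) / len(powers)

# constant 1 m/s lift of a 100 kg load -> P = m*g*v = 981 W
p = mean_concentric_power([0.0, 0.1, 0.2, 0.3, 0.4], dt=0.1, mass=100.0)
```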

  18. Reproducibility of UAV-based earth surface topography based on structure-from-motion algorithms.

    NASA Astrophysics Data System (ADS)

    Clapuyt, François; Vanacker, Veerle; Van Oost, Kristof

    2014-05-01

A representation of the earth surface at very high spatial resolution is crucial to accurately map small geomorphic landforms with high precision. Very high resolution digital surface models (DSM) can then be used to quantify changes in earth surface topography over time, based on differencing of DSMs taken at various moments in time. However, it is compulsory to have both high accuracy for each topographic representation and consistency between measurements over time, as DSM differencing automatically leads to error propagation. This study investigates the reproducibility of reconstructions of earth surface topography based on structure-from-motion (SFM) algorithms. To this end, we equipped an eight-propeller drone with a standard reflex camera. This equipment can easily be deployed in the field, as it is a lightweight, low-cost system in comparison with classic aerial photo surveys and terrestrial or airborne LiDAR scanning. Four sets of aerial photographs were created for one test field. The sets of airphotos differ in focal length and viewing angle, i.e., nadir view versus ground-level view. In addition, the importance of the accuracy of ground control points for the construction of a georeferenced point cloud was assessed using two different GPS devices with horizontal accuracies at the sub-meter and sub-decimeter level, respectively. The airphoto datasets were processed with the SFM algorithm and the resulting point clouds were georeferenced. Then, the surface representations were compared with each other to assess the reproducibility of the earth surface topography. Finally, consistency between independent datasets is discussed.
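DSM differencing, as described above, reduces to cell-wise subtraction of two co-registered elevation rasters followed by summary error statistics. The sketch below assumes simple nested-list grids and is illustrative only; real DSMs would be georeferenced rasters.

```python
# Cell-wise DSM difference (B - A) with mean bias and RMS, the usual
# first-pass reproducibility measures between two reconstructions.
def dsm_difference_stats(dsm_a, dsm_b):
    """dsm_a, dsm_b: equally sized 2D grids of elevations in meters."""
    diffs = [b - a for ra, rb in zip(dsm_a, dsm_b) for a, b in zip(ra, rb)]
    n = len(diffs)
    mean = sum(diffs) / n                       # systematic offset
    rms = (sum(d * d for d in diffs) / n) ** 0.5  # overall discrepancy
    return mean, rms

mean, rms = dsm_difference_stats([[1.0, 1.0], [1.0, 1.0]],
                                 [[1.1, 0.9], [1.2, 0.8]])
```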

  19. Stability of Alprostadil in 0.9% Sodium Chloride Stored in Polyvinyl Chloride Containers.

    PubMed

    McCluskey, Susan V; Kirkham, Kylian; Munson, Jessica M

    2017-01-01

    The stability of alprostadil diluted in 0.9% sodium chloride stored in polyvinyl chloride (VIAFLEX) containers at refrigerated temperature, protected from light, is reported. Five solutions of alprostadil 11 mcg/mL were prepared in 250 mL 0.9% sodium chloride polyvinyl chloride (PL146) containers. The final concentration of alcohol was 2%. Samples were stored under refrigeration (2°C to 8°C) with protection from light. Two containers were submitted for potency testing and analyzed in duplicate with the stability-indicating high-performance liquid chromatography assay at specific time points over 14 days. Three containers were submitted for pH and visual testing at specific time points over 14 days. Stability was defined as retention of 90% to 110% of initial alprostadil concentration, with maintenance of the original clear, colorless, and visually particulate-free solution. Study results reported retention of 90% to 110% initial alprostadil concentration at all time points through day 10. One sample exceeded 110% potency at day 14. pH values did not change appreciably over the 14 days. There were no color changes or particle formation detected in the solutions over the study period. This study concluded that during refrigerated, light-protected storage in polyvinyl chloride (VIAFLEX) containers, a commercial alcohol-containing alprostadil formulation diluted to 11 mcg/mL with 0.9% sodium chloride 250 mL was stable for 10 days. Copyright© by International Journal of Pharmaceutical Compounding, Inc.

  20. Memory for the September 11, 2001, terrorist attacks one year later in patients with Alzheimer's disease, patients with mild cognitive impairment, and healthy older adults.

    PubMed

    Budson, Andrew E; Simons, Jon S; Waring, Jill D; Sullivan, Alison L; Hussoin, Trisha; Schacter, Daniel L

    2007-10-01

    Although there are many opportunities to study memory in patients with Alzheimer's disease (AD) in the laboratory, there are few opportunities to study memory for real world events in these patients. The September 11, 2001 terrorist attacks provided one such opportunity. Patients with AD, patients with mild cognitive impairment (MCI), and healthy older adults were given a telephone questionnaire in the initial weeks after the event, again three to four months later, and finally one year afterwards to evaluate their memory for the September 11, 2001 terrorist attacks. We were particularly interested in using the attacks as an opportunity to examine the decline of episodic memory in patients with AD, patients with MCI, and older adult controls over a period of months. We found that compared to healthy older adults, patients with AD and MCI showed impaired memory at the initial time point, more rapid forgetting from the initial to the three-month time point, and very similar changes in memory from the three-month to the one-year time point. We speculated that these findings were consistent with patients with AD and MCI showing initial impaired encoding and a more rapid rate of forgetting compared with healthy older adults, but that once the memories had been consolidated, their decay rate became similar to that of healthy older adults. Lastly, although memory distortions were common among all groups, they were greatest in the patients with AD.

  1. Perceived Barriers by University Students in the Practice of Physical Activities

    PubMed Central

    Gómez-López, Manuel; Gallegos, Antonio Granero; Extremera, Antonio Baena

    2010-01-01

The main goal of this research is to study in detail the main characteristics of university students in order to find out the reasons why they have adopted an inactive lifestyle. In order to do so, a questionnaire on the analysis of sports habits and lifestyle was given to 323 students, drawn from a representative sample of 1834 students, who had reported at the time of the fieldwork not having practiced any sport in their spare time. Our findings point out that there are diverse reasons for this: on the one hand, external barriers such as lack of time; on the other hand, internal barriers such as not liking physical activity, not seeing its practicality or usefulness, feeling lazy or apathetic, or thinking that they are not competent in this type of activity. Other reasons, such as the lack of social support, are grouped within the external barriers. Finally, it is important to stress that there are also differences based on gender with respect to motivation. Key points: External barriers prevail in university students; the lack of time is among the most highlighted ones. Statistically significant results have been found regarding the gender variable. The results are very important since they provide valuable information for university institutions when guiding and diversifying their offer of physical and sport activities, and serve as a guide in the design of support policies and national sport management guidelines. PMID:24149629

  3. Kinetics of Static Strain Aging in Polycrystalline NiAl-based Alloys

    NASA Technical Reports Server (NTRS)

    Weaver, M. L.; Kaufman, M. J.; Noebe, R. D.

    1996-01-01

The kinetics of yield point return have been studied in two NiAl-based alloys as a function of aging time at temperatures between 300 and 700 K. The results indicate that the upper yield stress increment, Δσ_u (i.e., the stress difference between the upper yield point and the final flow stress achieved during prestraining), in conventional purity (CP-NiAl) and in high purity carbon-doped (NiAl-C) material first increased with a t^(2/3) relationship before reaching a plateau. This behavior suggests that a Cottrell locking mechanism is the cause of yield points in NiAl. In addition, positive y-axis intercepts were observed in plots of Δσ_u versus t^(2/3), suggesting the operation of a Snoek mechanism. Analysis according to the Cottrell-Bilby model of atmosphere formation around dislocations yields an activation energy for yield point return in the range 70 to 76 kJ/mol, which is comparable to the activation energy for diffusion of interstitial impurities in bcc metals. It is, thus, concluded that the kinetics of static strain aging in NiAl are controlled by the locking of dislocations by Cottrell atmospheres of carbon atoms.
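The analysis described above amounts to a linear fit of the yield stress increment against t^(2/3), where the slope reflects Cottrell-type atmosphere formation and a positive intercept is read as a Snoek contribution. A minimal sketch with synthetic data points (not the paper's measurements):

```python
# Ordinary least squares fit of Delta-sigma_u versus t^(2/3).
def fit_linear(xs, ys):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

times = [100.0, 400.0, 900.0]            # aging times, s (synthetic)
t23 = [t ** (2.0 / 3.0) for t in times]
dsigma = [2.0 + 0.5 * x for x in t23]    # synthetic Delta-sigma_u, MPa
slope, intercept = fit_linear(t23, dsigma)
# a positive intercept would be read as evidence for a Snoek mechanism
```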

  4. Robust curb detection with fusion of 3D-Lidar and camera data.

    PubMed

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-05-21

Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method that exploits 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points, which utilizes the continuous property of the curb, so that the optimal curb path linking the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
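The dynamic-programming step above, chaining per-row curb candidates under a continuity constraint, is Viterbi-like. The sketch below illustrates that structure only; the scores, the linear jump penalty, and the data layout are assumptions for illustration, not the paper's model.

```python
# Best path through per-row curb candidates, maximizing total detection
# score minus a smoothness penalty on column jumps between adjacent rows.
def best_curb_path(rows, jump_penalty=1.0):
    """rows: list of {column: score} dicts, one per image row."""
    # best[c] = (total score of best path ending at column c, that path)
    best = {c: (s, [c]) for c, s in rows[0].items()}
    for row in rows[1:]:
        new = {}
        for c, s in row.items():
            prev = max(best,
                       key=lambda p: best[p][0] - jump_penalty * abs(p - c))
            total, path = best[prev]
            new[c] = (total + s - jump_penalty * abs(prev - c), path + [c])
        best = new
    return max(best.values())[1]

rows = [{0: 1.0, 5: 1.1}, {1: 1.0, 5: 0.2}, {1: 1.0, 6: 0.5}]
path = best_curb_path(rows)  # smooth low-jump path beats the greedy start
```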

  5. Quantum group spin nets: Refinement limit and relation to spin foams

    NASA Astrophysics Data System (ADS)

    Dittrich, Bianca; Martin-Benito, Mercedes; Steinhaus, Sebastian

    2014-07-01

So far spin foam models are hardly understood beyond a few of their basic building blocks. To make progress on this question, we define analogue spin foam models, so-called "spin nets," for quantum groups SU(2)k and examine their effective continuum dynamics via tensor network renormalization. In the refinement limit of this coarse-graining procedure, we find a vast nontrivial fixed-point structure beyond the degenerate and the BF phase. In comparison to previous work, we use fixed-point intertwiners, inspired by Reisenberger's construction principle [M. P. Reisenberger, J. Math. Phys. (N.Y.) 40, 2046 (1999)] and the recent work [B. Dittrich and W. Kaminski, arXiv:1311.1798], as the initial parametrization. In this new parametrization fine-tuning is not required in order to flow to these new fixed points. Encouragingly, each fixed point has an associated extended phase, which allows for the study of phase transitions in the future. Finally we also present an interpretation of spin nets in terms of melonic spin foams. The coarse-graining flow of spin nets can thus be interpreted as describing the effective coupling between two spin foam vertices or spacetime atoms.

  6. Final report on CCT-K6: Comparison of local realisations of dew-point temperature scales in the range -50 °C to +20 °C

    NASA Astrophysics Data System (ADS)

    Bell, S.; Stevens, M.; Abe, H.; Benyon, R.; Bosma, R.; Fernicola, V.; Heinonen, M.; Huang, P.; Kitano, H.; Li, Z.; Nielsen, J.; Ochi, N.; Podmurnaya, O. A.; Scace, G.; Smorgon, D.; Vicente, T.; Vinge, A. F.; Wang, L.; Yi, H.

    2015-01-01

A key comparison in dew-point temperature was carried out among the national standards held by NPL (pilot), NMIJ, INTA, VSL, INRIM, MIKES, NIST, NIM, VNIIFTRI-ESB and NMC. A pair of condensation-principle dew-point hygrometers was circulated and used to compare the local realisations of dew point for participant humidity generators in the range -50 °C to +20 °C. The duration of the comparison was prolonged by numerous problems with the hygrometers, requiring some repairs, and several additional check measurements by the pilot. Despite the problems and the extended timescale, the comparison was effective in providing evidence of equivalence. Agreement with the key comparison reference value was achieved in the majority of cases, and bilateral degrees of equivalence are also reported. The main text of this report appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
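Degrees of equivalence in key comparisons are conventionally formed as the deviation from the reference value with an expanded (k=2) uncertainty, and a result is taken as consistent when the deviation lies within that uncertainty. The sketch below shows that generic convention with made-up numbers; it assumes uncorrelated estimates and is not the report's actual data or analysis.

```python
# Generic degree-of-equivalence computation: d_i = x_i - x_ref with
# expanded uncertainty U = k * sqrt(u_i^2 + u_ref^2).
def degree_of_equivalence(x_i, u_i, x_ref, u_ref, k=2.0):
    """All values in the same units (here, degrees C of dew point)."""
    d = x_i - x_ref
    u_d = (u_i ** 2 + u_ref ** 2) ** 0.5  # assumes uncorrelated estimates
    return d, k * u_d

# illustrative participant result near the -50 degC point
d, U = degree_of_equivalence(-49.98, 0.02, -50.00, 0.01)
consistent = abs(d) <= U
```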

  7. PSFGAN: a generative adversarial network system for separating quasar point sources and host galaxy light

    NASA Astrophysics Data System (ADS)

    Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.

    2018-06-01

The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines with separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ±50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easier to use than parametric methods as it requires no input parameters.

  8. Shock interaction with deformable particles using a constrained interface reinitialization scheme

    NASA Astrophysics Data System (ADS)

    Sridharan, P.; Jackson, T. L.; Zhang, J.; Balachandar, S.; Thakur, S.

    2016-02-01

    In this paper, we present axisymmetric numerical simulations of shock propagation in nitromethane over an aluminum particle for post-shock pressures up to 10 GPa. We use the Mie-Gruneisen equation of state to describe both the medium and the particle. The numerical method is a finite-volume based solver on a Cartesian grid, that allows for multi-material interfaces and shocks, and uses a novel constrained reinitialization scheme to precisely preserve particle mass and volume. We compute the unsteady inviscid drag coefficient as a function of time, and show that when normalized by post-shock conditions, the maximum drag coefficient decreases with increasing post-shock pressure. We also compute the mass-averaged particle pressure and show that the observed oscillations inside the particle are on the particle-acoustic time scale. Finally, we present simplified point-particle models that can be used for macroscale simulations. In the Appendix, we extend the isothermal or isentropic assumption concerning the point-force models to non-ideal equations of state, thus justifying their use for the current problem.
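The normalization mentioned above forms a drag coefficient from the instantaneous force and a reference dynamic pressure, C_D = F / (0.5 ρ u² A). A minimal sketch with illustrative values, not the simulation's data:

```python
import math

# Drag coefficient normalized by (post-shock) reference conditions.
def drag_coefficient(force, rho, u, diameter):
    """force in N, rho in kg/m^3, u in m/s, diameter in m.
    A is the particle's frontal (cross-sectional) area."""
    area = math.pi * (diameter / 2.0) ** 2
    return force / (0.5 * rho * u ** 2 * area)

# illustrative: 1 kN on a 10 mm particle in a 10 m/s, 1000 kg/m^3 stream
cd = drag_coefficient(1000.0, 1000.0, 10.0, 0.01)
```

Normalizing the same force history by different post-shock states is what makes the maximum C_D comparison across pressures meaningful.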

  9. Design of set-point weighting PIλ + Dμ controller for vertical magnetic flux controller in Damavand tokamak.

    PubMed

    Rasouli, H; Fatehi, A

    2014-12-01

In this paper, a simple method is presented for tuning the parameters of a set-point-weighted PI(λ) + D(μ) controller based on pole placement for pseudo-second-order fractional systems. One of the advantages of this controller is its capability to reduce disturbance effects and improve the response to the input simultaneously. In the following sections, the performance of this controller is evaluated experimentally by controlling the vertical magnetic flux in the Damavand tokamak. For this work, a fractional-order model is first identified using the output-error technique in the time domain. Achieving the desired time responses for the magnetic flux in the Damavand tokamak is vital for various practical experiments. To this end, the desired closed-loop reference models are first obtained based on the generalized characteristic ratio assignment method for fractional-order systems. After that, a set-point-weighted PI(λ) + D(μ) controller is designed and simulated for the identified model. Finally, this controller is implemented on the digital signal processor control system of the plant for fast/slow control of the magnetic flux. The practical results show appropriate performance of this controller.
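Set-point weighting applies the reference with reduced weight in the proportional (and derivative) terms while the integral term still acts on the full error, softening the set-point kick without changing disturbance rejection. The sketch below shows this structure in the integer-order limit (λ = μ = 1); it does not capture the fractional-order dynamics of the controller above, and the gains are arbitrary.

```python
# One discrete step of a set-point-weighted PID (integer-order
# simplification): u = kp*(b*r - y) + integral of ki*(r - y) + kd*d(c*r - y)/dt.
def spw_pid_step(r, y, state, kp, ki, kd, b, c, dt):
    """state = (integral accumulator, previous derivative error)."""
    integ, prev_e_d = state
    integ += ki * (r - y) * dt        # integral acts on the full error
    e_d = c * r - y                   # weighted error for the derivative
    deriv = kd * (e_d - prev_e_d) / dt
    u = kp * (b * r - y) + integ + deriv
    return u, (integ, e_d)

# unit set-point step from rest, proportional weight b = 0.5
u, state = spw_pid_step(1.0, 0.0, (0.0, 0.0),
                        kp=2.0, ki=1.0, kd=0.5, b=0.5, c=0.0, dt=0.1)
```

With b < 1 the initial control move after a set-point change is smaller than for a conventional PID, while the response to output disturbances (which enter through y) is unchanged.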

  10. Global Identification of MicroRNAs and Their Targets in Barley under Salinity Stress

    PubMed Central

    Cui, Licao; Feng, Kewei; Liu, Fuyan; Du, Xianghong; Tong, Wei; Nie, Xiaojun; Ji, Wanquan; Weining, Song

    2015-01-01

Salinity is a major limiting factor for agricultural production worldwide. A better understanding of the mechanisms of the salinity stress response will aid efforts to improve plant salt tolerance. In this study, a combination of small RNA and mRNA degradome sequencing was used to identify salinity-responsive miRNAs and their targets in barley. A total of 152 miRNAs belonging to 126 families were identified, of which 44 were found to be salinity-responsive, with 30 up-regulated and 25 down-regulated. The majority of the salinity-responsive miRNAs were up-regulated at the 8 h time point and down-regulated at the 3 h and 27 h time points. The targets of these miRNAs were further detected by degradome sequencing coupled with bioinformatics prediction. Finally, qRT-PCR was used to validate the identified miRNAs and their targets. Our study systematically investigated the expression profiles of miRNAs and their targets in barley during the salinity stress phase, which can contribute to understanding how miRNAs respond to salinity stress in barley and other cereal crops. PMID:26372557
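Up- and down-regulation calls like those above are commonly made from the log2 fold change between stress and control expression levels. A generic sketch of that convention; the threshold is an assumption for illustration, not the study's criterion.

```python
import math

# Classify a transcript by log2 fold change (stress vs. control counts).
def classify(control, stress, threshold=1.0):
    """threshold = 1.0 corresponds to a two-fold change."""
    lfc = math.log2(stress / control)
    if lfc >= threshold:
        return "up"
    if lfc <= -threshold:
        return "down"
    return "unchanged"
```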

  11. Non-musicians also have a piano in the head: evidence for spatial-musical associations from line bisection tracking.

    PubMed

    Hartmann, Matthias

    2017-02-01

    The spatial representation of ordinal sequences (numbers, time, tones) seems to be a fundamental cognitive property. While an automatic association between horizontal space and pitch height (left-low pitch, right-high pitch) is constantly reported in musicians, the evidence for such an association in non-musicians is mixed. In this study, 20 non-musicians performed a line bisection task while listening to irrelevant high- and low-pitched tones and white noise (control condition). While pitch height had no influence on the final bisection point, participants' movement trajectories showed systematic biases: When approaching the line and touching the line for the first time (initial bisection point), the mouse cursor was directed more rightward for high-pitched tones compared to low-pitched tones and noise. These results show that non-musicians also have a subtle but nevertheless automatic association between pitch height and the horizontal space. This suggests that spatial-musical associations do not necessarily depend on constant sensorimotor experiences (as it is the case for musicians) but rather reflect the seemingly inescapable tendency to represent ordinal information on a horizontal line.

  12. Offline motion planning and simulation of two-robot welding coordination

    NASA Astrophysics Data System (ADS)

    Zhang, Tie; Ouyang, Fan

    2012-03-01

This paper focuses on two-robot welding coordination for complex curved seams, in which one robot grasps the workpiece, the other holds the torch, and the two robots work on the same workpiece simultaneously. The dual-robot coordinate system is built first, and a three-point calibration method for the two robots' relative base coordinate systems is presented. After that, a non-master/slave scheme is chosen for the motion planning: it sets the pose-versus-time function of a point u on the workpiece and automatically calculates the two robots' end-effector trajectories through the constrained relationship matrix. Moreover, downhand welding is employed, which guarantees that the torch and the seam stay in good contact throughout the welding. Finally, a SolidWorks-SimMechanics simulation platform is established, and a simulation of curved steel pipe welding is conducted. The simulation results illustrate that the welding process can meet the requirements of downhand welding, that the joint displacement curves are smooth and continuous, and that no joint velocities exceed their working ranges.
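A three-point calibration between two base frames can be done by building an orthonormal frame from three non-collinear points measured in each base and composing the rotation and translation between them. The sketch below is a generic version of that idea under those assumptions; the paper's exact procedure may differ, and the point values are illustrative.

```python
# Generic three-point rigid calibration: find R, t with p_b = R @ p_a + t.
def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def norm(a):
    n = sum(x * x for x in a) ** 0.5
    return [x / n for x in a]

def frame(p0, p1, p2):
    """Orthonormal axes (as rows) from three non-collinear points:
    x along p0->p1, z normal to the plane, y = z cross x."""
    x = norm(sub(p1, p0))
    z = norm(cross(x, sub(p2, p0)))
    y = cross(z, x)
    return [x, y, z]

def calibrate(pts_a, pts_b):
    """pts_a, pts_b: the same three points measured in each base frame."""
    fa, fb = frame(*pts_a), frame(*pts_b)
    # R = C_b @ C_a^T, where C has the frame axes as columns; with the
    # axes stored as rows this is fb^T @ fa.
    R = [[sum(fb[k][i] * fa[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    Rp0 = [sum(R[i][j] * pts_a[0][j] for j in range(3)) for i in range(3)]
    t = sub(pts_b[0], Rp0)
    return R, t

# points seen in base A, and the same points in base B (rotated 90 deg
# about z, then translated by (1, 2, 3))
R, t = calibrate([[0, 0, 0], [1, 0, 0], [0, 1, 0]],
                 [[1, 2, 3], [1, 3, 3], [0, 2, 3]])
```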

  13. Quality of social interaction in foster dyads at child age 2 and 3 years.

    PubMed

    Jacobsen, Heidi; Vang, Kristin Alvestad; Lindahl, Karoline Mentzoni; Wentzel-Larsen, Tore; Smith, Lars; Moe, Vibeke

    2018-06-30

    The main aim of this study was to investigate the quality of social interaction between 60 foster parents and their foster children, compared to a group of 55 non-foster families, at 2 (T1) and again at 3 (T2) years of age. Video observations were used to investigate child-parent interaction at both time points. The "This is My Baby" interview was administered to investigate foster parents' commitment at T1. The main results revealed significant group differences at T1 on all child-parent social interaction measures, although not at T2. Further, a significant group-by-time interaction was identified for parental sensitivity, revealing a positive development over time in the foster group. Finally, a significant positive relation was found between commitment at T1 and parental sensitivity. The results convey an optimistic view of the possibilities for foster dyads to develop positive patterns of social interaction over time.

  14. Heavy point frog performance under passenger vehicles : final report.

    DOT National Transportation Integrated Search

    2016-06-01

    The Federal Railroad Administration contracted with the Transportation Technology Center, Inc., Pueblo, Colorado, to conduct an investigation of passenger vehicle performance running through a heavy point frog (HPF) at speeds up to 110 mph. A NUCARS ...

  15. In-Flight Guidance, Navigation, and Control Performance Results for the GOES-16 Spacecraft

    NASA Technical Reports Server (NTRS)

    Chapel, Jim; Stancliffe, Devin; Bevacqua, Tim; Winkler, Stephen; Clapp, Brian; Rood, Tim; Freesland, Doug; Reth, Alan; Early, Derrick; Walsh, Tim

    2017-01-01

    The Geostationary Operational Environmental Satellite-R Series (GOES-R), which launched in November 2016, is the first of the next-generation geostationary weather satellites. GOES-R provides 4 times the resolution, 5 times the observation rate, and 3 times the number of spectral bands for Earth observations compared with its predecessor spacecraft. Additionally, Earth-relative and Sun-relative pointing and pointing stability requirements are maintained throughout reaction wheel desaturation events and station-keeping activities, allowing GOES-R to provide continuous Earth and Sun observations. This paper reviews the pointing control, pointing stability, attitude knowledge, and orbit knowledge requirements necessary to realize the ambitious Image Navigation and Registration (INR) objectives of GOES-R. This paper presents a comparison between low-frequency on-orbit pointing results and simulation predictions for both the Earth Pointed Platform (EPP) and Sun Pointed Platform (SPP). Results indicate excellent agreement between simulation predictions and observed on-orbit performance, and compliance with pointing performance requirements. The EPP instrument suite includes 6 seismic accelerometers sampled at 2 kHz, allowing in-flight verification of jitter responses and comparison back to simulation predictions. This paper presents flight results of acceleration, shock response spectrum (SRS), and instrument line-of-sight responses for various operational scenarios and instrument observation modes. The results demonstrate the effectiveness of the dual-isolation approach employed on GOES-R. The spacecraft provides attitude and rate data to the primary Earth-observing instrument at 100 Hz, which are used to adjust instrument scanning. The data must meet accuracy and latency numbers defined by the Integrated Rate Error (IRE) requirements. This paper discusses the on-orbit IRE results, showing compliance to these requirements with margin.
During the spacecraft checkout period, IRE disturbances were observed and subsequently attributed to thermal control of the Inertial Measurement Unit (IMU) mounting interface. Adjustments of IMU thermal control and the resulting improvements in IRE are presented. Orbit knowledge represents the final element of INR performance. Extremely accurate orbital position is achieved by GPS navigation at Geosynchronous Earth Orbit (GEO). On-orbit performance results are shown demonstrating compliance with the stringent orbit position accuracy requirements of GOES-R, including during station keeping activities and momentum desaturation events. As we show in this paper, the on-orbit performance of the GNC design provides the necessary capabilities to achieve GOES-R mission objectives.

  16. The severe acute respiratory syndrome: impact on travel and tourism.

    PubMed

    Wilder-Smith, Annelies

    2006-03-01

    SARS and travel are intricately interlinked. Travelers were among those primarily affected in the early stages of the outbreak, travelers became vectors of the disease, and finally, travel and tourism themselves became victims. The outbreak of SARS created international anxiety because of its novelty, its ease of transmission in certain settings, and the speed of its spread through jet travel, combined with extensive media coverage. The psychological impacts of SARS, coupled with travel restrictions imposed by various national and international authorities, diminished international travel in 2003 far beyond the areas truly hit by SARS. Governments and the press, especially in non-SARS-affected areas, were slow to strike the right balance between timely, frequent risk communication and placing risk in its proper context. Screening at airport entry points is costly, has a low yield, and is not sufficient in itself. The low yield in detecting SARS is most likely due to a combination of factors, such as travel advisories that reduced travel to and from SARS-affected areas, implementation of effective pre-departure screening at airports in SARS-hit countries, and a rapid decline in new cases by the time screening was finally introduced. Rather than investing in airport screening measures to detect rare infectious diseases, investments should be used to strengthen screening and infection control capacities at points of entry into the healthcare system. If SARS reoccurs, the subsequent outbreak will be smaller and more easily contained if the lessons learnt from the recent epidemic are applied. Lessons learnt during the outbreak in relation to international travel are discussed.

  17. A novel technique for reference point generation to aid in intraoral scan alignment.

    PubMed

    Renne, Walter G; Evans, Zachary P; Mennito, Anthony; Ludlow, Mark

    2017-11-12

    When using a completely digital workflow on larger prosthetic cases, it is often difficult to communicate the provisional prosthetic information to the laboratory or to the chairside computer-aided design and computer-aided manufacturing system. The problem arises when common hard tissue data points are limited or non-existent, as in complete-arch cases in which the 3D model of the complete-arch provisional restorations must be aligned perfectly with the 3D model of the complete-arch preparations. In these instances, soft tissue alone is not enough to ensure an accurate automatic or manual alignment, owing to a lack of well-defined reference points. A new technique is proposed for the proper digital alignment of the 3D virtual model of the provisional prosthesis to the 3D virtual model of the prepared teeth in cases where common and coincident hard tissue data points are limited. Clinical considerations: A technique is described in which fiducial composite resin dots are temporarily placed on the intraoral keratinized tissue in strategic locations prior to final impressions. These fiducial dots provide coincident and clear 3D data points that, when scanned into a digital impression, allow superimposition of the 3D models. Composite resin dots on keratinized tissue were successful at allowing accurate merging of provisional restoration and post-preparation 3D models for the purpose of using the provisional restorations as a guide for the final restorations. CLINICAL SIGNIFICANCE: Composite resin dots placed temporarily on attached tissue were successful at allowing accurate merging of the provisional restoration 3D models to the preparation 3D models for the purposes of using the provisional restorations as a guide for final restoration design and manufacturing. In this case, they allowed precise superimposition of the 3D models in the absence of any other hard tissue reference points, resulting in the fabrication of ideal final restorations. © 2017 Wiley Periodicals, Inc.
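    The superimposition that the fiducial dots enable is, mathematically, rigid point-set registration on matched reference points. A 2D closed-form Python sketch conveys the idea; the clinical software performs the 3D equivalent, and the points below are purely illustrative:

    ```python
    import math

    def rigid_align_2d(src, dst):
        """Closed-form 2D rigid registration (rotation + translation) of
        matched fiducial points; returns (theta, (tx, ty)) mapping src onto dst."""
        n = len(src)
        csx = sum(p[0] for p in src) / n
        csy = sum(p[1] for p in src) / n
        cdx = sum(p[0] for p in dst) / n
        cdy = sum(p[1] for p in dst) / n
        dot = cross = 0.0
        for (x, y), (u, v) in zip(src, dst):
            ax, ay = x - csx, y - csy       # centered source point
            bx, by = u - cdx, v - cdy       # centered destination point
            dot += ax * bx + ay * by        # cosine accumulator
            cross += ax * by - ay * bx      # sine accumulator
        theta = math.atan2(cross, dot)
        c, s = math.cos(theta), math.sin(theta)
        tx = cdx - (c * csx - s * csy)
        ty = cdy - (s * csx + c * csy)
        return theta, (tx, ty)

    # Three fiducial dots, rotated 90 degrees and shifted by (2, 3).
    src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    dst = [(2.0, 3.0), (2.0, 4.0), (1.0, 3.0)]
    theta, t = rigid_align_2d(src, dst)
    ```

    With three or more non-collinear fiducials the transform is over-determined, which is why well-separated dot placement makes the merge robust.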

  18. 2D first break tomographic processing of data measured for celebration profiles: CEL01, CEL04, CEL05, CEL06, CEL09, CEL11

    NASA Astrophysics Data System (ADS)

    Bielik, M.; Vozar, J.; Hegedus, E.; Celebration Working Group

    2003-04-01

    This contribution reports preliminary results of the first-arrival P-wave seismic tomographic processing of data measured along the profiles CEL01, CEL04, CEL05, CEL06, CEL09 and CEL11. These profiles were measured in the framework of the seismic project CELEBRATION 2000. Data acquisition and geometric parameters of the processed profiles, the principle of the tomographic processing, the particular processing steps and the program parameters are described. Characteristic data of the observation profiles (shot points, geophone points, total profile lengths, sampling, sensors and record lengths) are given. The FAST program package developed by C. Zelt was applied for tomographic velocity inversion. This process consists of several steps. The first step is the creation of a starting velocity field, for which the calculated arrival times are modelled by the method of finite differences. The next step is minimization of the differences between the measured and modelled arrival times until the deviation is small. The equivalence problem was reduced by including a priori information in the starting velocity field, such as the depth to the pre-Tertiary basement and estimates of the overlying sedimentary velocities from well logging and/or other seismic velocity data. After checking the reciprocal times, the picks were corrected. The final result of the processing is a reliable travel-time curve set consistent with the reciprocal times. Picking of the travel-time curves and enhancement of the signal-to-noise ratio on the seismograms were carried out using the ProMAX program system. Tomographic inversion was carried out by a so-called 3D/2D procedure that takes 3D wave propagation into account: a corridor along the profile, containing the off-line shot points and geophone points, was defined, and 3D processing was carried out within this corridor.
The preliminary results indicate seismically anomalous zones within the crust and the uppermost part of the upper mantle in the area comprising the Western Carpathians, the North European platform, the Pannonian basin and the Bohemian Massif.
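The inversion step described above, iteratively minimizing the difference between measured and modelled arrival times, can be illustrated with the simplest algebraic reconstruction scheme (Kaczmarz/ART sweeps over straight rays). The actual processing models travel times by finite differences, so this is only a toy analogue with illustrative numbers:

```python
def art_update(slowness, rays, t_obs, relax=0.5, n_sweeps=50):
    """Kaczmarz/ART sweeps: rays[i] lists the path length of ray i in each
    cell, t_obs[i] is the picked first-arrival time. Each ray's travel-time
    residual is projected back onto the cells it crosses."""
    s = list(slowness)
    for _ in range(n_sweeps):
        for lengths, t in zip(rays, t_obs):
            t_pred = sum(l * sj for l, sj in zip(lengths, s))
            norm = sum(l * l for l in lengths)
            if norm == 0.0:
                continue
            corr = relax * (t - t_pred) / norm
            s = [sj + corr * l for sj, l in zip(s, lengths)]
    return s

# Two cells with true slownesses 0.5 and 0.25 s/km (2 and 4 km/s),
# observed by three straight rays with the given in-cell path lengths.
rays = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
t_obs = [0.5, 0.25, 0.75]
model = art_update([0.4, 0.4], rays, t_obs)
```

Checking reciprocal times, as the authors do, is what guarantees such a system of ray equations is internally consistent before inversion.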

  19. Real-time viability and apoptosis kinetic detection method of 3D multicellular tumor spheroids using the Celigo Image Cytometer.

    PubMed

    Kessel, Sarah; Cribbes, Scott; Bonasu, Surekha; Rice, William; Qiu, Jean; Chan, Leo Li-Ying

    2017-09-01

    The development of three-dimensional (3D) multicellular tumor spheroid models for cancer drug discovery research has increased in recent years. 3D tumor spheroid models may be more representative of the complex in vivo tumor microenvironment than two-dimensional (2D) assays. Currently, the viability of 3D multicellular tumor spheroids is commonly measured on standard plate readers using metabolic reagents such as CellTiter-Glo® for end-point analysis. Alternatively, high-content image cytometers have been used to measure drug effects on spheroid size and viability. Previously, we demonstrated a novel end-point drug screening method for 3D multicellular tumor spheroids using the Celigo Image Cytometer. To better characterize cancer drug effects, it is important to also measure the kinetic cytotoxic and apoptotic effects on 3D multicellular tumor spheroids. In this work, we demonstrate the use of propidium iodide (PI) and caspase 3/7 stains to measure viability and apoptosis of 3D multicellular tumor spheroids in real time. The method was first validated by staining different types of tumor spheroids with PI and caspase 3/7 and monitoring the fluorescent intensities for 16 and 21 days. Next, PI-stained and nonstained control tumor spheroids were digested into single-cell suspension to directly measure viability in a 2D assay and determine the potential toxicity of PI. Finally, extensive data analysis was performed correlating the time-dependent PI and caspase 3/7 fluorescent intensities with spheroid size and necrotic core formation to determine an optimal starting time point for cancer drug testing. The ability to measure real-time viability and apoptosis is highly important for developing a proper 3D model for screening tumor spheroids, allowing researchers to determine time-dependent drug effects that are usually not captured by end-point assays.
This would improve the current tumor spheroid analysis method to potentially better identify more qualified cancer drug candidates for drug discovery research. © 2017 International Society for Advancement of Cytometry.
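Determining a starting time point from a kinetic intensity curve can, in the simplest reading, be reduced to a threshold-crossing search. A hypothetical Python sketch; the fraction-of-plateau criterion and the readings are illustrative assumptions, not the authors' published analysis:

```python
def optimal_start_point(times, intensity, frac=0.1):
    """Earliest time at which a kinetic signal (e.g. spheroid PI
    fluorescence) first reaches `frac` of its final plateau -- a simple
    proxy for the onset of necrotic-core formation."""
    threshold = frac * intensity[-1]
    for t, y in zip(times, intensity):
        if y >= threshold:
            return t
    return None

# Illustrative daily readings over 8 days.
days = list(range(8))
pi_signal = [0, 1, 2, 8, 40, 80, 95, 100]
start_day = optimal_start_point(days, pi_signal)
```

Drug dosing would then be scheduled relative to this detected onset rather than to a fixed calendar day.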

  20. THEMIS Global Mosaics

    NASA Astrophysics Data System (ADS)

    Gorelick, N. S.; Christensen, P. R.

    2005-12-01

    We have developed techniques to make seamless, controlled global mosaics from the more than 50,000 multi-spectral infrared images of Mars returned by the THEMIS instrument aboard the Mars Odyssey spacecraft. These images cover more than 95% of the surface at 100 m/pixel resolution at both day and night local times. Uncertainties in the position and pointing of the spacecraft, varying local time, and imaging artifacts make creating well-registered mosaics from these datasets a challenging task. In preparation for making global mosaics, many full-resolution regional mosaics have been made. These mosaics typically cover an area 10x10 degrees or smaller, and are constructed from only a few hundred images. To make regional mosaics, individual images are geo-rectified using the USGS ISIS software. This dead reckoning is sufficient to approximate position to within 400 m in cases where the SPICE information was downlinked. Further coregistration of images is handled in two ways: minimization of grayscale differences in overlapping regions through integer pixel shifting, or automatic tie-point generation using a radial symmetry transformation (RST). The RST identifies points within an image that exhibit 4-way symmetry. Martian craters tend to be very radially symmetric, and the RST can pinpoint a crater center to sub-pixel accuracy in both daytime and nighttime images, independent of lighting, time of day, or seasonal effects. Additionally, the RST works well on visible-light images, and in a 1D application, on MOLA tracks, to provide precision tie-points across multiple datasets. The RST often finds many points of symmetry that are not related to surface features. These "false hits" are managed using a clustering algorithm that identifies constellations of points that occur in multiple images, independent of scaling or other affine transformations.
This technique is able to make use of data in which the "good" tie-points comprise even less than 1% of the total candidate tie-points. Once tie-points have been identified, the individual images are warped into their final shape and position, and then mosaicked and blended. To make seamless mosaics, each image can be level-adjusted to normalize its values using histogram fitting, but in most cases a linear contrast stretch to a fixed standard deviation is sufficient, although it destroys the absolute radiometry of the mosaic. For very large mosaics, separating each image into high-pass and low-pass components and blending the two pieces separately before recombining them has also given good results.
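The linear contrast stretch to a fixed standard deviation mentioned above is straightforward to sketch. A minimal version, assuming pixel values arrive as a flat list (target mean and standard deviation are illustrative):

```python
import statistics

def stretch_to_std(pixels, target_mean=128.0, target_std=20.0):
    """Linearly rescale a tile so its values have a fixed mean and standard
    deviation (this destroys absolute radiometry, as noted above)."""
    mu = statistics.fmean(pixels)
    sd = statistics.pstdev(pixels)
    if sd == 0.0:                      # flat tile: map everything to the mean
        return [target_mean] * len(pixels)
    return [target_mean + (p - mu) * target_std / sd for p in pixels]

tile = stretch_to_std([10.0, 20.0, 30.0])
```

Applying the same target statistics to every image is what hides seams: neighboring tiles end up with matching brightness distributions regardless of their original radiometry.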

  1. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser scanning (TLS) is becoming a common tool in the geosciences, with applications ranging from the generation of high-resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, several critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied in point cloud data treatment, from alignment to monitoring. To this end, we built, in the MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step, we characterized the single-pulse error by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze shifting and angular error effects. Other parameters, such as point spacing and acquisition window, were added to the point cloud simulator in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and of a varying point of view on Iterative Closest Point (ICP) alignment, and also on a deformation-tracking algorithm with the same point cloud geometry, in order to determine alignment and deformation-detection thresholds.
We also generated a series of high-resolution point clouds in order to model small changes in different environments (erosion, landslide monitoring, etc.), and we then tested filtering techniques using 3D moving windows in space and time, which considerably reduce data scattering thanks to data redundancy. In conclusion, the simulator allowed us to improve our different algorithms and to understand how instrumental error affects the final results. It also helped improve the scan acquisition methodology, finding the best compromise between point density, positioning and acquisition time while keeping the best accuracy possible to characterize topographic change.
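A single-pulse error model of the kind such a simulator implements, with noise growing with range and incidence angle, might be sketched as follows. The functional form and coefficients are illustrative assumptions, not the study's calibrated values:

```python
import math
import random

def simulate_range(true_range, incidence_deg,
                   sigma0=0.005, k_range=1e-5, k_angle=0.02):
    """One noisy range measurement (meters): Gaussian noise whose sigma
    grows with range and with incidence angle (coefficients illustrative)."""
    sigma = (sigma0 + k_range * true_range
             + k_angle * (1.0 - math.cos(math.radians(incidence_deg))))
    return true_range + random.gauss(0.0, sigma)

random.seed(42)  # reproducible synthetic scan
samples = [simulate_range(100.0, 60.0) for _ in range(2000)]
```

Averaging many simulated returns recovers the true range, which mirrors the paper's observation that data redundancy (3D moving windows in space and time) suppresses scatter.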

  2. Prosodic structure shapes the temporal realization of intonation and manual gesture movements.

    PubMed

    Esteve-Gibert, Núria; Prieto, Pilar

    2013-06-01

    Previous work on the temporal coordination between gesture and speech found that the prominence in gesture coordinates with speech prominence. In this study, the authors investigated the anchoring regions in speech and pointing gesture that align with each other. The authors hypothesized that (a) in contrastive focus conditions, the gesture apex is anchored in the intonation peak and (b) the upcoming prosodic boundary influences the timing of gesture and intonation movements. Fifteen Catalan speakers pointed at a screen while pronouncing a target word with different metrical patterns in a contrastive focus condition and followed by a phrase boundary. A total of 702 co-speech deictic gestures were acoustically and gesturally analyzed. Intonation peaks and gesture apexes showed parallel behavior with respect to their position within the accented syllable: They occurred at the end of the accented syllable in non-phrase-final position, whereas they occurred well before the end of the accented syllable in phrase-final position. Crucially, the position of intonation peaks and gesture apexes was correlated and was bound by prosodic structure. The results refine the phonological synchronization rule (McNeill, 1992), showing that gesture apexes are anchored in intonation peaks and that gesture and prosodic movements are bound by prosodic phrasing.

  3. Practical Implementation of Semi-Automated As-Built Bim Creation for Complex Indoor Environments

    NASA Astrophysics Data System (ADS)

    Yoon, S.; Jung, J.; Heo, J.

    2015-05-01

    In recent years, for the efficient management and operation of existing buildings, the importance of as-built BIM has been emphasized in the AEC/FM domain. However, fully automated as-built BIM creation is a tough problem, since newly constructed buildings are becoming more complex. To manage this problem, our research group has developed a semi-automated approach focused on productive 3D as-built BIM creation for complex indoor environments. In order to test its feasibility for a variety of complex indoor environments, we applied the developed approach to model the `Charlotte stairs' in Lotte World Mall, Korea. The approach includes four main phases: data acquisition, data pre-processing, geometric drawing, and as-built BIM creation. In the data acquisition phase, owing to the complex structure of the site, we moved the scanner several times to obtain the entire point cloud of the test site. A data pre-processing phase entailing point-cloud registration, noise removal, and coordinate transformation followed. The 3D geometric drawing was created using RANSAC-based plane detection and boundary tracing methods. Finally, in order to create a semantically rich BIM, the geometric drawing was imported into commercial BIM software. The final as-built BIM confirmed the feasibility of the proposed approach in a complex indoor environment.
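    RANSAC-based plane detection, the core of the geometric drawing phase described above, can be sketched in a few lines: repeatedly fit a plane to three random points and keep the hypothesis supported by the most inliers. A minimal Python sketch; the threshold, iteration count and toy cloud are illustrative, not the study's settings:

    ```python
    import random

    def plane_from_points(p, q, r):
        """Plane through three points: normal n and offset d with n.x = d."""
        u = [q[i] - p[i] for i in range(3)]
        v = [r[i] - p[i] for i in range(3)]
        n = (u[1] * v[2] - u[2] * v[1],      # cross product u x v
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
        return n, sum(n[i] * p[i] for i in range(3))

    def ransac_plane(points, dist_thresh=0.05, n_iter=200, seed=1):
        """Keep the 3-point plane hypothesis supported by the most inliers."""
        rng = random.Random(seed)
        best_inliers, best_plane = [], None
        for _ in range(n_iter):
            n, d = plane_from_points(*rng.sample(points, 3))
            norm = sum(c * c for c in n) ** 0.5
            if norm == 0.0:                  # degenerate (collinear) sample
                continue
            inliers = [pt for pt in points
                       if abs(sum(n[i] * pt[i] for i in range(3)) - d) / norm
                       <= dist_thresh]
            if len(inliers) > len(best_inliers):
                best_inliers, best_plane = inliers, (n, d)
        return best_inliers, best_plane

    # 20 points on the z = 0 plane plus 3 gross outliers.
    cloud = [(float(x), float(y), 0.0) for x in range(5) for y in range(4)]
    cloud += [(0.0, 0.0, 5.0), (1.0, 2.0, 7.0), (3.0, 1.0, -4.0)]
    inliers, plane = ransac_plane(cloud)
    ```

    In practice each detected plane's inliers are removed and the search repeats, after which boundary tracing turns the inlier sets into drawing elements.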

  4. Economic Feasibility of Wireless Sensor Network-Based Service Provision in a Duopoly Setting with a Monopolist Operator

    PubMed Central

    Romero, Julián; Sacoto-Cabrera, Erwin J.

    2017-01-01

    We analyze, from an economic point of view, the feasibility of providing Wireless Sensor Network-data-based services in an Internet of Things scenario. The scenario has two competing service providers with their own private sensor networks, a network operator and final users, and is analyzed as two games using game theory. In the first game, sensors decide whether to subscribe to the network operator to upload the collected sensing data, based on a utility function related to the mean service time and the price charged by the operator. In the second game, users decide whether to subscribe to the sensor-data-based service of the service providers, based on a Logit discrete choice model related to the quality of the collected data and the subscription price. The sink and user subscription stages are analyzed using population games and discrete choice models, while the network operator's and service providers' pricing stages are analyzed using optimization and Nash equilibrium concepts, respectively. The model is shown to be feasible from an economic point of view for all the actors, provided there are enough interested final users, and it opens the possibility of developing more efficient models with different types of services. PMID:29186847
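    The Logit discrete choice model used for the user subscription stage assigns each alternative a probability proportional to the exponential of its utility. A minimal sketch; the utilities (data quality minus price) and the opt-out alternative are illustrative assumptions:

    ```python
    import math

    def logit_shares(utilities, mu=1.0):
        """Multinomial Logit choice probabilities:
        P_i = exp(u_i / mu) / sum_j exp(u_j / mu),
        where mu scales the randomness of user tastes."""
        exps = [math.exp(u / mu) for u in utilities]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical utilities: perceived data quality minus subscription
    # price for two providers, plus a no-subscription alternative at 0.
    shares = logit_shares([1.2 - 0.8, 1.0 - 0.5, 0.0])
    ```

    In the pricing game, each provider would choose its price to maximize price times its resulting Logit share, which is where the Nash equilibrium analysis enters.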

  5. Short-term Low-strain Vibration Enhances Chemo-transport Yet Does Not Stimulate Osteogenic Gene Expression or Cortical Bone Formation in Adult Mice

    PubMed Central

    Kotiya, Akhilesh A.; Bayly, Philip V.; Silva, Matthew J.

    2010-01-01

    Development of low-magnitude mechanical stimulation (LMMS) based treatment strategies for a variety of orthopaedic issues requires better understanding of mechano-transduction and bone adaptation. Our overall goal was to study the tissue- and molecular-level changes in cortical bone in response to low-strain vibration (LSV: 70 Hz, 0.5 g, 300 με) and compare these to changes in response to a known anabolic stimulus: high-strain compression (HSC: rest-inserted loading, 1000 με). Adult (6–7 month) C57BL/6 mice were used for the study, and non-invasive axial compression of the tibia was used as the loading model. We first studied bone adaptation at the tibial mid-diaphysis, using dynamic histomorphometry, in response to daily loading of 15 min LSV or 60 cycles HSC for 5 consecutive days. We found that bone formation rate and mineral apposition rate were significantly increased in response to HSC but not LSV. The second aim was to compare chemo-transport in response to 5 min of LSV versus 5 min (30 cycles) of HSC. Chemo-transport increased significantly in response to both loading stimuli, particularly in the medial and lateral quadrants of the cross section. Finally, we evaluated the expression of genes related to mechano-responsiveness, osteoblast differentiation, and matrix mineralization in tibias subjected to 15 min LSV or 60 cycles HSC for 1 day (4-hour time point) or 4 consecutive days (4-day time point). The expression level of most of the genes remained unchanged in response to LSV at both time points. In contrast, the expression level of all the genes changed significantly in response to HSC at the 4-hour time point. We conclude that short-term, low-strain vibration increases chemo-transport yet does not stimulate mechano-responsive or osteogenic gene expression or cortical bone formation in the tibias of adult mice. PMID:20937421

  6. Game Related Statistics Which Discriminate Between Winning and Losing Under-16 Male Basketball Games

    PubMed Central

    Lorenzo, Alberto; Gómez, Miguel Ángel; Ortega, Enrique; Ibáñez, Sergio José; Sampaio, Jaime

    2010-01-01

    The aim of the present study was to identify the game-related statistics which discriminate between winning and losing teams in under-16 male basketball games. The sample gathered all 122 games in the 2004 and 2005 Under-16 European Championships. The game-related statistics analysed were free throws (both successful and unsuccessful), 2- and 3-point field goals (both successful and unsuccessful), offensive and defensive rebounds, blocks, assists, fouls, turnovers and steals. The winning teams exhibited fewer ball possessions per game and better offensive and defensive efficacy coefficients than the losing teams. Results from the discriminant analysis were statistically significant and emphasized several structure coefficients (SC). In close games (final score differences below 9 points), the discriminant variables were turnovers (SC = -0.47) and assists (SC = 0.33). In balanced games (final score differences between 10 and 29 points), the variables that discriminated between the groups were successful 2-point field goals (SC = -0.34) and defensive rebounds (SC = -0.36); and in unbalanced games (final score differences above 30 points) the variable that best discriminated both groups was successful 2-point field goals (SC = 0.37). These results show that these players' specific characteristics produce a distinct game-related statistical profile, and they point out the importance of the perceptive and decision-making process in practice and in competition. Key points: The players' game-related statistical profile varied according to game type, game outcome and formative category in basketball. The results of this work help point out the differences in player performance in U-16 men's basketball teams compared with senior and professional men's teams. The results obtained enhance the importance of the perceptive and decision-making process in practice and in competition.
PMID:24149794

  7. Foodservice yield and fabrication times for beef as influenced by purchasing options and merchandising styles.

    PubMed

    Weatherly, B H; Griffin, D B; Johnson, H K; Walter, J P; De La Zerda, M J; Tipton, N C; Savell, J W

    2001-12-01

    Selected beef subprimals were obtained from fabrication lines of three foodservice purveyors to assist in the development of a software support program for the beef foodservice industry. Subprimals were fabricated into bone-in or boneless foodservice ready-to-cook portion-sized cuts and associated components by professional meat cutters. Each subprimal was cut to generate mean foodservice cutting yields and labor requirements, which were calculated from observed weights (kilograms) and processing times (seconds). Once fabrication was completed, data were analyzed to determine means and standard errors of percentage yields and processing times for each subprimal. Subprimals cut to only one end point were evaluated for mean foodservice yields and processing times, but no comparisons were made within subprimal. However, those traditionally cut into various end points were additionally compared by cutting style. Subprimals cut by a single cutting style included rib, roast-ready; ribeye roll, lip-on, bone-in; brisket, deckle-off, boneless; top (inside) round; and bottom sirloin butt, flap, boneless. Subprimals cut into multiple end points or styles included ribeye, lip-on; top sirloin, cap; tenderloin butt, defatted; shortloin, short-cut; strip loin, boneless; top sirloin butt, boneless; and tenderloin, full, side muscle on, defatted. Mean yields of portion cuts, and mean fabrication times required to manufacture these cuts differed (P < 0.05) by cutting specification of the final product. In general, as the target portion size of fabricated steaks decreased, the mean number of steaks derived from any given subprimal cut increased, causing total foodservice yield to decrease and total processing time to increase. Therefore, an inverse relationship tended to exist between processing times and foodservice yields. 
With a method of accurately evaluating various beef purchase options, such as traditional commodity subprimals, closely trimmed subprimals, and pre-cut portion steaks in terms of yield and labor cost, foodservice operators will be better equipped to decide what option is more viable for their operation.
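The inverse relationship between processing time and yield noted above can be illustrated with a toy model in which each additional cut costs a fixed trim loss and a fixed amount of labor. All parameters below are hypothetical, not the study's measured values:

```python
def portioning_summary(subprimal_kg, portion_kg,
                       loss_per_cut_kg=0.05, sec_per_cut=25.0):
    """Portions obtained, total yield (%) and processing time (s) when each
    cut costs a fixed trim loss and a fixed amount of labor."""
    n = int(subprimal_kg // (portion_kg + loss_per_cut_kg))
    yield_pct = 100.0 * n * portion_kg / subprimal_kg
    return n, yield_pct, n * sec_per_cut

small = portioning_summary(5.0, 0.25)   # smaller target portions
large = portioning_summary(5.0, 0.5)    # larger target portions
```

Smaller portions mean more cuts from the same subprimal, so total trim loss and labor both grow, reproducing the reported trend of falling yield and rising processing time.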

  8. Space, Time, Ether, and Kant

    NASA Astrophysics Data System (ADS)

    Wong, Wing-Chun Godwin

    This dissertation focused on Kant's conception of physical matter in the Opus postumum. In this work, Kant postulates the existence of an ether which fills the whole of space and time with its moving forces. Kant's arguments for the existence of an ether in the so-called Ubergang have been acutely criticized by commentators. Guyer, for instance, thinks that Kant pushes the technique of transcendental deduction too far in trying to deduce the empirical ether. In defense of Kant, I held that it is not the actual existence of the empirical ether, but the concept of the ether as a space-time filler that is subject to a transcendental deduction. I suggested that Kant is doing three things in the Ubergang: First, he deduces the pure concept of a space-time filler as a conceptual hybrid of the transcendental object and permanent substance to replace the category of substance in the Critique. Then he tries to prove the existence of such a space-time filler as a reworking of the First Analogy. Finally, he takes into consideration the empirical determinations of the ether by adding the concept of moving forces to the space-time filler. In reconstructing Kant's proofs, I pointed out that Kant is absolutely committed to the impossibility of action-at-a-distance. If we add this new principle of no-action-at-a-distance to the Third Analogy, the existence of a space-time filler follows. I argued with textual evidence that Kant's conception of ether satisfies the basic structure of a field: (1) the ether is a material continuum; (2) a physical quantity is definable on each point in the continuum; and (3) the ether provides a medium to support the continuous transmission of action. The thrust of Kant's conception of ether is to provide a holistic ontology for the transition to physics, which can best be understood from a field-theoretical point of view. This is the main thesis I attempted to establish in this dissertation.

  9. Toward an integrated ice core chronology using relative and orbital tie-points

    NASA Astrophysics Data System (ADS)

    Bazin, L.; Landais, A.; Lemieux-Dudon, B.; Toyé Mahamadou Kele, H.; Blunier, T.; Capron, E.; Chappellaz, J.; Fischer, H.; Leuenberger, M.; Lipenkov, V.; Loutre, M.-F.; Martinerie, P.; Parrenin, F.; Prié, F.; Raynaud, D.; Veres, D.; Wolff, E.

    2012-04-01

Precise ice core chronologies are essential to better understand the mechanisms linking climate change to orbital and greenhouse gas concentration forcing. A tool for ice core dating, DATICE (developed by Lemieux-Dudon et al., 2010), makes it possible to generate a common time-scale integrating relative and absolute dating constraints on different ice cores, using an inverse method. Nevertheless, this method has only been applied to a four-ice-core scenario and to the 0-50 kyr time period. Here, we present the bases for an extension of this work back to 800 ka using (1) a compilation of published and new relative and orbital tie-points obtained from measurements of air trapped in ice cores and (2) an adaptation of the DATICE inputs to 5 ice cores for the last 800 ka. We first present new measurements of δ18Oatm and δO2/N2 on the Talos Dome and EPICA Dome C (EDC) ice cores with a particular focus on Marine Isotopic Stages (MIS) 5 and 11. Then, we show two tie-point compilations. The first is based on new and published CH4 and δ18Oatm measurements on 5 ice cores (NorthGRIP, EPICA Dronning Maud Land, EDC, Talos Dome and Vostok) in order to produce a table of relative gas tie-points over the last 400 ka. The second is based on new and published records of δO2/N2, δ18Oatm and air content to provide a table of orbital tie-points over the last 800 ka. Finally, we integrate the different dating constraints presented above in the DATICE tool adapted to 5 ice cores to cover the last 800 ka and show how these constraints compare with the established gas chronologies of each ice core.

  10. POLARBEAR constraints on cosmic birefringence and primordial magnetic fields

    DOE PAGES

    Ade, Peter A. R.; Arnold, Kam; Atlas, Matt; ...

    2015-12-08

Here, we constrain anisotropic cosmic birefringence using four-point correlations of even-parity E-mode and odd-parity B-mode polarization in the cosmic microwave background measurements made by the POLARization of the Background Radiation (POLARBEAR) experiment in its first season of observations. We find that the anisotropic cosmic birefringence signal from any parity-violating processes is consistent with zero. The Faraday rotation from anisotropic cosmic birefringence can be compared with the equivalent quantity generated by primordial magnetic fields if they existed. The POLARBEAR nondetection translates into a 95% confidence level (C.L.) upper limit of 93 nanogauss (nG) on the amplitude of an equivalent primordial magnetic field inclusive of systematic uncertainties. This four-point correlation constraint on Faraday rotation is about 15 times tighter than the upper limit of 1380 nG inferred from constraining the contribution of Faraday rotation to two-point correlations of B-modes measured by Planck in 2015. Metric perturbations sourced by primordial magnetic fields would also contribute to the B-mode power spectrum. Using the POLARBEAR measurements of the B-mode power spectrum (two-point correlation), we set a 95% C.L. upper limit of 3.9 nG on primordial magnetic fields assuming a flat prior on the field amplitude. This limit is comparable to what was found in the Planck 2015 two-point correlation analysis with both temperature and polarization. Finally, we perform a set of systematic error tests and find no evidence for contamination. This work marks the first time that anisotropic cosmic birefringence or primordial magnetic fields have been constrained from the ground at subdegree scales.

  11. Projecting Event-Based Analysis Dates in Clinical Trials: An Illustration Based on the International Duration Evaluation of Adjuvant Chemotherapy (IDEA) Collaboration. Projecting analysis dates for the IDEA collaboration

    PubMed Central

    Renfro, Lindsay A.; Grothey, Axel M.; Paul, James; Floriani, Irene; Bonnetain, Franck; Niedzwiecki, Donna; Yamanaka, Takeharu; Souglakos, Ioannis; Yothers, Greg; Sargent, Daniel J.

    2015-01-01

Purpose: Clinical trials are expensive and lengthy; the success of a given trial depends on observing a prospectively defined number of patient events required to answer the clinical question. The point at which this analysis time occurs depends on both patient accrual and primary event rates, which typically vary throughout the trial's duration. We demonstrate real-time analysis date projections using data from a collection of six clinical trials that are part of the IDEA collaboration, an international preplanned pooling of data from six trials testing the duration of adjuvant chemotherapy in stage III colon cancer, and we additionally consider the hypothetical impact of one trial's early termination of follow-up. Patients and Methods: In the absence of outcome data from IDEA, monthly accrual rates for each of the six IDEA trials were used to project subsequent trial-specific accrual, while historical data from similar Adjuvant Colon Cancer Endpoints (ACCENT) Group trials were used to construct a parametric model for IDEA's primary endpoint, disease-free survival, under the same treatment regimen. With this information and using the planned total accrual from each IDEA trial protocol, individual patient accrual and event dates were simulated and the overall IDEA interim and final analysis times projected. Projections were then compared with actual (previously undisclosed) trial-specific event totals at a recent census time for validation. The change in projected final analysis date assuming early termination of follow-up for one IDEA trial was also calculated. Results: Trial-specific predicted event totals were close to the actual number of events per trial for the recent census date at which the number of events per trial was known, with the overall IDEA projected number of events off by only eight patients. Potential early termination of follow-up by one IDEA trial was estimated to postpone the overall IDEA final analysis date by 9 months.
Conclusions: Real-time projection of the final analysis time during a trial, or of the overall analysis time during a trial collaborative such as IDEA, has practical implications for trial feasibility when these projections are translated into additional time and resources required. PMID:26989447
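The projection approach described above (project accrual, simulate event dates from a parametric survival model, read off when the target event count is reached) can be sketched in miniature as follows. This is an illustrative Monte Carlo toy, not the IDEA methodology: the uniform-accrual and exponential disease-free-survival assumptions, the rates, and the function name are all invented for this example.

```python
import random

def project_analysis_time(accrual_rate, total_n, median_dfs,
                          target_events, nsim=200, seed=1):
    """Median projected calendar time (months) at which target_events
    events have occurred, under uniform accrual (accrual_rate patients
    per month) and exponential event times with the given median."""
    rng = random.Random(seed)
    lam = 0.6931471805599453 / median_dfs  # ln 2 / median survival
    results = []
    for _ in range(nsim):
        # calendar entry time of each patient under uniform accrual
        entry = [i / accrual_rate for i in range(total_n)]
        # calendar time of each event = entry + exponential event time
        event_cal = sorted(e + rng.expovariate(lam) for e in entry)
        results.append(event_cal[target_events - 1])
    results.sort()
    return results[len(results) // 2]  # median over simulations
```

A faster accrual rate or a shorter median survival pulls the projected analysis date earlier, which is exactly the sensitivity the abstract exploits when evaluating one trial's early termination of follow-up.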

  12. Real-Time Precise Point Positioning (RTPPP) with raw observations and its application in real-time regional ionospheric VTEC modeling

    NASA Astrophysics Data System (ADS)

    Liu, Teng; Zhang, Baocheng; Yuan, Yunbin; Li, Min

    2018-01-01

Precise Point Positioning (PPP) is an absolute positioning technology mainly used in post-processing. With the continuously increasing demand for real-time high-precision applications in positioning, timing, retrieval of atmospheric parameters, etc., Real-Time PPP (RTPPP) and its applications have drawn more and more research attention in recent years. This study focuses on the models, algorithms and ionospheric applications of RTPPP on the basis of raw observations, in which high-precision slant ionospheric delays are estimated among others in real time. For this purpose, a robust processing strategy for multi-station RTPPP with raw observations has been proposed and realized, in which real-time data streams and State Space Representation (SSR) satellite orbit and clock corrections are used. With the RTPPP-derived slant ionospheric delays from a regional network, a real-time regional ionospheric Vertical Total Electron Content (VTEC) modeling method is proposed based on Adjusted Spherical Harmonic Functions and a Moving-Window Filter. SSR satellite orbit and clock corrections from different IGS analysis centers are evaluated. Ten globally distributed real-time stations are used to evaluate the positioning performance of the proposed RTPPP algorithms in both static and kinematic modes. RMS values of positioning errors in static/kinematic mode are 5.2/15.5, 4.7/17.4 and 12.8/46.6 mm for the north, east and up components, respectively. Real-time slant ionospheric delays from RTPPP are compared with those from the traditional Carrier-to-Code Leveling (CCL) method, in terms of function model, formal precision and between-receiver differences over a short baseline. Results show that slant ionospheric delays from RTPPP are more precise and have a much better convergence performance than those from the CCL method in real-time processing.
Thirty real-time stations from the Asia-Pacific Reference Frame network are used to model the ionospheric VTECs over Australia in real time, with slant ionospheric delays from both the RTPPP and CCL methods for comparison. The RMS of the VTEC differences between the RTPPP/CCL method and CODE final products is 0.91/1.09 TECU, and the RMS of the VTEC differences between the RTPPP and CCL methods is 0.67 TECU. Slant Total Electron Contents retrieved from the different VTEC models are also validated with epoch-differenced Geometry-Free combinations of dual-frequency phase observations, and mean RMS values are 2.14, 2.33 and 2.07 TECU for the RTPPP method, the CCL method and CODE final products, respectively. This shows the superiority of RTPPP-derived slant ionospheric delays in real-time ionospheric VTEC modeling.
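The RMS-of-differences figures quoted above are straightforward to reproduce for any pair of VTEC series; a minimal sketch follows (the sample values are made-up numbers, not CODE or RTPPP data):

```python
def rms_diff(a, b):
    """Root-mean-square of pairwise differences between two equal-length
    series, e.g. VTEC values in TECU from two ionospheric models."""
    assert len(a) == len(b)
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
```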

  13. A systematic FPGA acceleration design for applications based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Dong, Hao; Jiang, Li; Li, Tianjian; Liang, Xiaoyao

    2018-04-01

Most FPGA accelerators for convolutional neural networks are designed to optimize the inner accelerator and ignore the optimization of the data path between the inner accelerator and the outer system. This can lead to poor performance in applications such as real-time video object detection. We propose a new systematic FPGA acceleration design to solve this problem. This design takes the data path between the inner accelerator and the outer system into consideration and optimizes it using techniques such as hardware format transformation and frame compression. It also applies fixed-point arithmetic and a new pipeline technique to optimize the inner accelerator. Together these yield very good final system performance, reaching about 10 times the performance of the original system.

  14. Critical quench dynamics in confined systems.

    PubMed

    Collura, Mario; Karevski, Dragi

    2010-05-21

We analyze the coherent quantum evolution of a many-particle system after slowly sweeping a power-law confining potential. The amplitude of the confining potential is varied in time along a power-law ramp such that the many-particle system finally reaches or crosses a critical point. Under this protocol we derive general scaling laws for the density of excitations created during the nonadiabatic sweep of the confining potential. It is found that the mean excitation density follows an algebraic law as a function of the sweeping rate, with an exponent that depends on the space-time properties of the potential. We confirm our scaling laws by a first-order adiabatic calculation and by exact results on the Ising quantum chain with a varying transverse field.

  15. Modal control of an unstable periodic orbit

    NASA Astrophysics Data System (ADS)

    Wiesel, W.; Shelton, W.

    1983-03-01

Floquet theory is applied to the problem of designing a control system for a satellite in an unstable periodic orbit. Expansion about a periodic orbit produces a time-periodic linear system, which is augmented by a time-periodic control term. It is shown that this can be done such that (1) the application of control produces only inertial accelerations, (2) positive real Poincaré exponents are shifted into the left half-plane, and (3) the shift of the exponent is linear with control gain. These developments are applied to an unstable orbit near the earth-moon L(3) point perturbed by the sun. Finally, it is shown that the control theory can be extended to include first order perturbations about the periodic orbit without increase in control cost.

  16. Modal control of an unstable periodic orbit

    NASA Technical Reports Server (NTRS)

    Wiesel, W.; Shelton, W.

    1983-01-01

Floquet theory is applied to the problem of designing a control system for a satellite in an unstable periodic orbit. Expansion about a periodic orbit produces a time-periodic linear system, which is augmented by a time-periodic control term. It is shown that this can be done such that (1) the application of control produces only inertial accelerations, (2) positive real Poincaré exponents are shifted into the left half-plane, and (3) the shift of the exponent is linear with control gain. These developments are applied to an unstable orbit near the earth-moon L(3) point perturbed by the sun. Finally, it is shown that the control theory can be extended to include first order perturbations about the periodic orbit without increase in control cost.

  17. Quantum decay model with exact explicit analytical solution

    NASA Astrophysics Data System (ADS)

    Marchewka, Avi; Granot, Er'El

    2009-01-01

    A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, as classical dynamics predicts. Moreover, at short times the decay has a fractional power law, which differs from perturbation quantum method predictions. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.

  18. Space-time models based on random fields with local interactions

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios T.; Tsantili, Ivi C.

    2016-08-01

    The analysis of space-time data from complex, real-life phenomena requires the use of flexible and physically motivated covariance functions. In most cases, it is not possible to explicitly solve the equations of motion for the fields or the respective covariance functions. In the statistical literature, covariance functions are often based on mathematical constructions. In this paper, we propose deriving space-time covariance functions by solving “effective equations of motion”, which can be used as statistical representations of systems with diffusive behavior. In particular, we propose to formulate space-time covariance functions based on an equilibrium effective Hamiltonian using the linear response theory. The effective space-time dynamics is then generated by a stochastic perturbation around the equilibrium point of the classical field Hamiltonian leading to an associated Langevin equation. We employ a Hamiltonian which extends the classical Gaussian field theory by including a curvature term and leads to a diffusive Langevin equation. Finally, we derive new forms of space-time covariance functions.

  19. Power analysis on the time effect for the longitudinal Rasch model.

    PubMed

    Feddag, M L; Blanchin, M; Hardouin, J B; Sebille, V

    2014-01-01

Statistics literature in the social, behavioral, and biomedical sciences typically stresses the importance of power analysis. Patient Reported Outcomes (PRO) such as quality of life and other perceived health measures (pain, fatigue, stress, etc.) are increasingly used as important health outcomes in clinical trials and epidemiological studies. They cannot be directly observed or measured like other clinical or biological data, and they are often collected through questionnaires with binary or polytomous items. The Rasch model is a well-known item response theory (IRT) model for binary data. The article proposes an approach to evaluate the statistical power of the time effect for the longitudinal Rasch model with two time points. The performance of this method is compared to that obtained by a simulation study. Finally, the proposed approach is illustrated on one subscale of the SF-36 questionnaire.

  20. High Points of Human Genetics

    ERIC Educational Resources Information Center

    Stern, Curt

    1975-01-01

    Discusses such high points of human genetics as the study of chromosomes, somatic cell hybrids, the population formula: the Hardy-Weinberg Law, biochemical genetics, the single-active X Theory, behavioral genetics and finally how genetics can serve humanity. (BR)
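The "population formula" named above, the Hardy-Weinberg law, is simple enough to state as a one-function sketch: for a biallelic locus with allele frequencies p and q = 1 - p, the expected genotype frequencies at equilibrium are p², 2pq and q² (the allele labels and the example frequency are illustrative).

```python
def hardy_weinberg(p):
    """Expected genotype frequencies under Hardy-Weinberg equilibrium
    for a biallelic locus with allele frequencies p and q = 1 - p."""
    q = 1.0 - p
    return {"AA": p * p, "Aa": 2.0 * p * q, "aa": q * q}
```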

  1. Synthetic Minor NSR Permit: Red Cedar Gathering Company - South Ignacio Central Delivery Point

    EPA Pesticide Factsheets

    This page contains the final synthetic minor NSR permit for the Red Cedar Gathering Company, South Ignacio Central Delivery Point, located on the Southern Ute Indian Reservation in La Plata County, CO.

  2. SPY: A new scission point model based on microscopic ingredients to predict fission fragments properties

    NASA Astrophysics Data System (ADS)

    Lemaître, J.-F.; Dubray, N.; Hilaire, S.; Panebianco, S.; Sida, J.-L.

    2013-12-01

Our purpose is to determine fission fragment characteristics in the framework of a scission point model named SPY, for Scission Point Yields. This approach can be considered a theoretical laboratory for studying the fission mechanism, since it gives access to the correlation between the fragments' properties and their nuclear structure, such as shell corrections, pairing, collective degrees of freedom, and odd-even effects. Which ones are dominant in the final state? What is the impact of the compound nucleus structure? The SPY model consists of a statistical description of the fission process at the scission point, where the fragments are completely formed and well separated with fixed properties. The most important property of the model is that the nuclear structure of the fragments is derived from full quantum microscopic calculations. This approach allows computing the fission final state of extremely exotic nuclei, which are inaccessible to most of the available fission models.

  3. Belgian methodological guidelines for pharmacoeconomic evaluations: toward standardization of drug reimbursement requests.

    PubMed

    Cleemput, Irina; van Wilder, Philippe; Huybrechts, Michel; Vrijens, France

    2009-06-01

To develop methodological guidelines for pharmacoeconomic evaluations (PE) submitted to the Belgian Drug Reimbursement Committee as part of a drug reimbursement request. In 2006, preliminary pharmacoeconomic guidelines were developed by a multidisciplinary research team. Their feasibility was tested and discussed with all stakeholders. The guidelines were adapted and finalized in 2008. The literature review should be transparent and reproducible. PE should be performed from the perspective of the health-care payer, including the governmental payer and the patient. The target population should reflect the population identified for routine use. The comparator to be considered in the evaluation is the treatment most likely to be replaced. Cost-effectiveness and cost-utility analyses are accepted as reference case techniques, under specific conditions. A final end point, as opposed to a surrogate end point, should be used in the incremental cost-effectiveness ratio (ICER). For the calculation of quality-adjusted life-years (QALYs), a generic quality-of-life measure should be used. PE should in principle apply a lifetime horizon. Application of shorter time horizons requires appropriate justification. Uncertainty around the ICER should always be assessed. Costs and outcomes should be discounted at 3% and 1.5%, respectively. The current guidelines are the result of a constructive collaboration between the Belgian Health Care Knowledge Centre, the National Institute for Health and Disability Insurance and the pharmaceutical industry. A point of special attention is the accessibility of existing Belgian resource use data for PE. As PE should serve Belgian health-care policy, they should preferably be based on the best available data.

  4. Understanding survival analysis: Kaplan-Meier estimate.

    PubMed

    Goel, Manish Kumar; Khanna, Pardeep; Kishore, Jugal

    2010-10-01

The Kaplan-Meier estimate is one of the best options for measuring the fraction of subjects living for a certain amount of time after treatment. In clinical trials or community trials, the effect of an intervention is assessed by measuring the number of subjects who survived or were saved by that intervention over a period of time. The time starting from a defined point to the occurrence of a given event, for example death, is called survival time, and the analysis of such group data is called survival analysis. The analysis can be affected by subjects who are uncooperative and refuse to remain in the study, by subjects who do not experience the event or die before the end of the study although they would have if observation had continued, or by subjects we lose touch with midway through the study. We label these situations censored observations. The Kaplan-Meier estimate is the simplest way of computing survival over time in spite of all these difficulties associated with subjects or situations. The survival curve can be created assuming various situations. It involves computing the probability of occurrence of the event at a certain point of time and multiplying these successive probabilities by any earlier computed probabilities to get the final estimate. This can be calculated for two groups of subjects, and the statistical difference in their survival can also be tested. This can be used in Ayurveda research when comparing two drugs and looking at the survival of subjects.
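The product-limit computation described above (the probability of surviving each event time, multiplied by all earlier such probabilities) can be sketched directly; censored observations contribute to the at-risk counts but never trigger a drop. The data and function name here are illustrative.

```python
def kaplan_meier(times, events):
    """Product-limit estimate of the survival function S(t).
    times:  observation time of each subject
    events: 1 if the event (e.g. death) was observed, 0 if censored
    Returns [(t, S(t))] at each distinct observation time."""
    s = 1.0
    curve = []
    for t in sorted(set(times)):
        # d = events at time t; n = subjects still at risk just before t
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)
        if d:
            s *= 1.0 - d / n  # multiply by conditional survival probability
        curve.append((t, s))
    return curve
```

With times [1, 2, 3] and events [1, 0, 1], the subject censored at t=2 leaves only one subject at risk at t=3, so the curve steps from 2/3 straight to 0.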

  5. Boolean network identification from perturbation time series data combining dynamics abstraction and logic programming.

    PubMed

    Ostrowski, M; Paulevé, L; Schaub, T; Siegel, A; Guziolowski, C

    2016-11-01

Boolean networks (and more general logic models) are useful frameworks to study signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time-points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that must be satisfied by the dynamics of a Boolean network to be consistent with a discretized time series trace. Based on this condition, we use Answer Set Programming to compute an over-approximation of the set of Boolean networks which fit best with experimental data, and we provide the corresponding encodings. Combined with model-checking approaches, we end up with a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7 min of computation. We quantified the gain in prediction precision of our method compared with learning approaches based on static data. Finally, as an application, our method proposes erroneous time-points in the time series data with respect to the optimal learned logic models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
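The core idea of checking a Boolean network's dynamics against a discretized time series can be illustrated with a toy sketch. The two-node network and the trace below are invented for illustration, and the check uses simple synchronous updates; the paper's actual necessary condition and its Answer Set Programming encoding are more general than this.

```python
# Hypothetical two-node network: A is inhibited by B, B is activated by A.
rules = {
    "A": lambda s: not s["B"],
    "B": lambda s: s["A"],
}

def step(state):
    """One synchronous update of every node."""
    return {node: f(state) for node, f in rules.items()}

def consistent(trace):
    """Toy consistency check: each observed (discretized) state must
    follow from its predecessor under the network's update rules."""
    return all(step(prev) == nxt for prev, nxt in zip(trace, trace[1:]))
```

A learning procedure can use such a check to discard candidate networks whose transient dynamics cannot reproduce the observed trace.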

  6. Risk for emerging bipolar disorder, variants, and symptoms in children with attention deficit hyperactivity disorder, now grown up

    PubMed Central

    Elmaadawi, Ahmed Z; Jensen, Peter S; Arnold, L Eugene; Molina, Brooke SG; Hechtman, Lily; Abikoff, Howard B; Hinshaw, Stephen P; Newcorn, Jeffrey H; Greenhill, Laurence Lee; Swanson, James M; Galanter, Cathryn A

    2015-01-01

AIM: To determine the prevalence of bipolar disorder (BD) and sub-threshold symptoms in children with attention deficit hyperactivity disorder (ADHD) through 14 years’ follow-up, when participants were between 21-24 years old. METHODS: First, we examined rates of BD type I and II diagnoses in youth participating in the NIMH-funded Multimodal Treatment Study of ADHD (MTA). We used the diagnostic interview schedule for children (DISC), administered to both parents (DISC-P) and youth (DISC-Y). We compared the MTA study subjects with ADHD (n = 579) to a local normative comparison group (LNCG, n = 289) at 4 different assessment points: 6, 8, 12, and 14 years of follow-up. To evaluate the bipolar variants, we compared total symptom counts (TSC) of DSM manic and hypomanic symptoms that were generated by DISC in ADHD and LNCG subjects. Then we sub-divided the TSC into pathognomonic manic (PM) and non-specific manic (NSM) symptoms. We compared the PM and NSM in ADHD and LNCG at each assessment point and over time. We also evaluated irritability as a category A2 manic symptom in both groups and over time. Finally, we studied the irritability symptom in correlation with PM and NSM in ADHD and LNCG subjects. RESULTS: DISC-generated BD diagnosis did not differ significantly in rates between ADHD (1.89%) and LNCG (1.38%). Interestingly, no participant met BD diagnosis more than once in the 4 assessment points in 14 years. However, on the symptom level, ADHD subjects reported significantly higher mean TSC scores: ADHD 3.0; LNCG 1.7; P < 0.001. ADHD status was associated with higher mean NSM: ADHD 2.0 vs LNCG 1.1; P < 0.0001. ADHD subjects also had higher PM symptoms than LNCG, with PM means over all time points of 1.3 for ADHD and 0.9 for LNCG; P = 0.0001. Examining both NSM and PM, ADHD status was associated with greater NSM than PM. However, over 14 years, the NSM symptoms declined and changed to PM over time (df 3, 2523; F = 20.1; P < 0.0001).
Finally, irritability (BD DSM criterion A2) rates were significantly higher in ADHD than LNCG (χ2 = 122.2, P < 0.0001), but irritability was associated more strongly with NSM than PM (df 3, 2538; F = 43.2; P < 0.0001). CONCLUSION: Individuals with ADHD do not appear to be at significantly greater risk for developing BD, but do show higher rates of BD symptoms, especially NSM. The greater linkage of irritability to NSM than to PM suggests caution when making BD diagnoses based on irritability alone as one of 2 (A-level) symptoms for BD diagnosis, particularly in view of its frequent presentation with other psychopathologies. PMID:26740933

  7. On future's doorstep: RNA interference and the pharmacopeia of tomorrow.

    PubMed

    Gewirtz, Alan M

    2007-12-01

Small molecules and antibodies have revolutionized the treatment of malignant diseases and appear promising for the treatment of many others. Nonetheless, there are many candidate therapeutic targets that are not amenable to attack by the current generation of targeted therapies, and in a small but growing number of patients, resistance to initially successful treatments evolves. This Review Series on the medicinal promise of posttranscriptional gene silencing with small interfering RNA and other molecules capable of inducing RNA interference (RNAi) is motivated by the hypothesis that effectors of RNAi can be developed into effective drugs for treating malignancies as well as many other types of disease. As this Review Series points out, there is still much to do, but many in the field now hope that the time has arrived when "antisense" therapies will finally come of age and fulfill their promise as the magic bullets of the 21st century.

  8. An Anomalous External Force on the MAP Spacecraft

    NASA Technical Reports Server (NTRS)

    Starin, Scott R.; Bay, P. Michael; Wollack, Edward J.; Fink, Dale R.; Ward, David K.; ODonnell, James R., Jr.; Bauer, Frank H. (Technical Monitor)

    2002-01-01

A common theme in discussions of the Microwave Anisotropy Probe (MAP) is the attainment of mission goals for minimal cost. One area of cost savings was a reduction in the fuel budget required. To reach orbit around the L2 libration point of the Earth-Sun system, the MAP spacecraft was guided very close to the Moon, allowing a gravity-assisted trajectory out to L2. In order to properly time the lunar swing-by, MAP followed a trajectory of three-and-a-half highly elliptical phasing loops. At each perigee of this trajectory MAP executed a thruster maneuver to increase orbit velocity; maneuvers were required at one or both of the first two perigees (called P1 and P2) and at the third and final perigee (P-final). The preference was for successful maneuvers at all three perigees because this scheme provided a small, additional fuel savings.

  9. Final muon cooling for a muon collider

    NASA Astrophysics Data System (ADS)

    Acosta Castillo, John Gabriel

To explore the new energy frontier, a new generation of particle accelerators is needed. Muon colliders are a promising alternative if muon cooling can be made to work. Muons are 200 times heavier than electrons, so they produce less synchrotron radiation, and they behave like point particles. However, they have a short lifetime of 2.2 μs, and the beam is more difficult to cool than an electron beam. The Muon Accelerator Program (MAP) was created to develop the concepts and technologies required by a muon collider. An important effort has been made in the program to design and optimize a muon beam cooling system. The goal is to achieve the small beam emittance required by a muon collider. This work explores a final ionization cooling system using magnetic quadrupole lattices with a low enough beta* region to cool the beam to the required limit with available low-Z absorbers.

  10. BRANCHING PATTERNS OF INDIVIDUAL SYMPATHETIC NEURONS IN CULTURE

    PubMed Central

    Bray, D.

    1973-01-01

The growth of single sympathetic neurons in tissue culture was examined with particular regard to the way in which the patterns of axonal or dendritic processes (here called nerve fibers) were formed. The tips of the fibers were seen to advance in straight lines and to grow at rates that did not vary appreciably with time, with their position in the cell outgrowth, or with the fiber diameter. Most of the branch points were formed by the bifurcation of a fiber tip (growth cone), apparently at random, and thereafter remained at about the same distance from the cell body. It seemed that the final shape of a neuron was the result of the reiterated and largely autonomous activities of the growth cones. The other parts of the cell played a supportive role but, apart from this, had no obvious influence on the final pattern of branches formed. PMID:4687915

  11. Friedrich Nietzsche's mental illness--general paralysis of the insane vs. frontotemporal dementia.

    PubMed

    Orth, M; Trimble, M R

    2006-12-01

    For a long time it was thought that Nietzsche suffered from general paralysis of the insane (GPI). However, this diagnosis has been questioned recently, and alternative diagnoses have been proposed. We have charted Friedrich Nietzsche's final fatal illness, and viewed the differential diagnosis in the light of recent neurological understandings of dementia syndromes. It is unclear that Nietzsche ever had syphilis. He lacked progressive motor and other neurological features of a progressive syphilitic central nervous system (CNS) infection and lived at least 12 years following the onset of his CNS signs, which would be extremely rare for patients with untreated GPI. Finally, his flourish of productivity in 1888 would be quite uncharacteristic of GPI, but in keeping with reports of burgeoning creativity at some point in the progression of frontotemporal dementia (FTD). We suggest that Nietzsche did not have GPI, but died from a chronic dementia, namely FTD.

  12. Outcomes of operative treatment of unstable ankle fractures: a comparison of metallic and biodegradable implants.

    PubMed

    Noh, Jung Ho; Roh, Young Hak; Yang, Bo Gyu; Kim, Seong Wan; Lee, Jun Suk; Oh, Moo Kyung

    2012-11-21

    Biodegradable implants for internal fixation of ankle fractures may overcome some disadvantages of metallic implants, such as imaging interference and the potential need for additional surgery to remove the implants. The purpose of this study was to evaluate the outcomes after fixation of ankle fractures with biodegradable implants compared with metallic implants. In this prospectively randomized study, 109 subjects with an ankle fracture underwent surgery with metallic (Group I) or biodegradable implants (Group II). Radiographic results were assessed by the criteria of the Klossner classification system and time to bone union. Clinical results were assessed with use of the American Orthopaedic Foot & Ankle Society (AOFAS) ankle-hindfoot scale, Short Musculoskeletal Function Assessment (SMFA) dysfunction index, and the SMFA bother index at three, six, and twelve months after surgery. One hundred and two subjects completed the study. At a mean follow-up of 19.7 months, there were no differences in reduction quality between the groups. The mean operative time was 30.2 minutes in Group I and 56.4 minutes in Group II (p < 0.001). The mean time to bone union was 15.8 weeks in Group I and 17.6 weeks in Group II (p = 0.002). The mean AOFAS score was 87.5 points in Group I and 84.3 points in Group II at twelve months after surgery (p = 0.004). The mean SMFA dysfunction index was 8.7 points in Group I and 10.5 points in Group II at twelve months after surgery (p = 0.060). The SMFA bother index averaged 3.3 points in Group I and 4.6 points in Group II at twelve months after surgery (p = 0.052). No difference existed between the groups with regard to clinical outcomes for the subjects with an isolated lateral malleolar fracture. The outcomes after fixation of bimalleolar ankle fractures with biodegradable implants were inferior to those after fixation with metallic implants in terms of the score on the AOFAS scale and time to bone union.
However, the difference in the final AOFAS score between the groups may not be clinically important. The outcomes associated with the use of biodegradable implants for the fixation of isolated lateral malleolar fractures were comparable with those for metallic implants.

  13. Processing UAV and LIDAR Point Clouds in GRASS GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion (SfM) technique, and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.
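
    Decimation as described above can take many forms; the following is a minimal grid-based sketch (a common approach, not the exact algorithm of the GRASS GIS tools), assuming points are simple (x, y, z) tuples:

```python
# Grid-based point cloud decimation: keep one representative point per XY
# grid cell. Illustrative sketch only, not the GRASS GIS implementation.

def decimate_grid(points, cell_size):
    """points: iterable of (x, y, z); cell_size: edge length of a grid cell."""
    kept = {}
    for x, y, z in points:
        cell = (int(x // cell_size), int(y // cell_size))
        if cell not in kept:  # the first point seen in a cell is kept
            kept[cell] = (x, y, z)
    return list(kept.values())

pts = [(0.1, 0.1, 5.0), (0.2, 0.3, 5.1), (1.5, 0.2, 4.9)]
print(decimate_grid(pts, 1.0))  # two occupied cells -> two points survive
```

    Other per-cell strategies (keeping the lowest or the mean point) trade density reduction against DSM fidelity in different ways.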

  14. Participation in regular leisure-time physical activity among individuals with type 2 diabetes not meeting Canadian guidelines: the influence of intention, perceived behavioral control, and moral norm.

    PubMed

    Boudreau, François; Godin, Gaston

    2014-12-01

    Most people with type 2 diabetes do not engage in regular leisure-time physical activity. The theory of planned behavior and the moral norm construct can enhance our understanding of physical activity intention and behavior among this population. This study aims to identify the determinants of both the intention and the behavior of participating in regular leisure-time physical activity among individuals with type 2 diabetes who do not meet Canada's physical activity guidelines. Using secondary data analysis of a randomized computer-tailored print-based intervention, participants (n = 200) from the province of Quebec (Canada) completed and returned a baseline questionnaire measuring their attitude, perceived behavioral control, and moral norm. One month later, they self-reported their level of leisure-time physical activity. A hierarchical regression equation showed that attitude (beta = 0.10, P < 0.05), perceived behavioral control (beta = 0.37, P < 0.001), and moral norm (beta = 0.45, P < 0.001) were significant determinants of intention, with the final model explaining 63% of the variance. In terms of behavioral prediction, intention (beta = 0.34, P < 0.001) and perceived behavioral control (beta = 0.16, P < 0.05) added 17% to the variance, after controlling for the effects of the experimental condition (R(2) = 0.04, P < 0.05) and past participation in leisure-time physical activity (R(2) = 0.22, P < 0.001). The final model explained 43% of the behavioral variance. Finally, a bootstrapping procedure indicated that the influence of moral norm on behavior was mediated by intention and perceived behavioral control. The determinants investigated offer an excellent starting point for designing appropriate counseling messages to promote leisure-time physical activity among individuals with type 2 diabetes.

  15. An intermittency route to global instability in low-density jets

    NASA Astrophysics Data System (ADS)

    Murugesan, Meenatchidevi; Zhu, Yuanhang; Li, Larry K. B.

    2017-11-01

    Above a critical Reynolds number (Re), a low-density jet can become globally unstable, transitioning from a steady state (i.e. a fixed point) to a self-excited oscillatory state (i.e. a limit cycle) via a Hopf bifurcation. In this experimental study, we show that this transition can sometimes involve intermittency. When Re is just slightly above the critical point, intermittent bursts of high-amplitude periodic oscillations emerge amidst a background of low-amplitude aperiodic fluctuations. As Re increases further, these intermittent bursts persist longer in time until they dominate the overall dynamics, causing the jet to transition fully to a periodic limit cycle. We identify this as Type-II Pomeau-Manneville intermittency by quantifying the statistical distribution of the duration of the aperiodic fluctuations at the onset of intermittency. This study shows that the transition to global instability in low-density jets is not always abrupt but can involve an intermediate state with characteristics of both the initial fixed point and the final limit cycle. This work was supported by the Research Grants Council of Hong Kong (Project No. 16235716 and 26202815).

  16. Is that your final answer? Relationship of changed answers to overall performance on a computer-based medical school course examination.

    PubMed

    Ferguson, Kristi J; Kreiter, Clarence D; Peterson, Michael W; Rowat, Jane A; Elliott, Scott T

    2002-01-01

    Whether examinees benefit from the opportunity to change answers to examination questions has been discussed widely. This study was undertaken to document the impact of answer changing on exam performance on a computer-based course examination in a second-year medical school course. This study analyzed data from a 2-hour, 80-item computer-delivered multiple-choice exam administered to 190 students (166 second-year medical students and 24 physician's assistant students). There was a small but significant net improvement in overall score when answers were changed: one student's score increased by 7 points, 93 increased by 1 to 4 points, and 38 decreased by 1 to 3 points. On average, lower-performing students benefited slightly less than higher-performing students. Students spent more time on questions for which they changed the answers and were more likely to change items that were more difficult. Students should not be discouraged from changing answers, especially to difficult questions that require careful consideration, although the net effect is quite small.

  17. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Relaxation Property and Stability Analysis of the Quasispecies Models

    NASA Astrophysics Data System (ADS)

    Feng, Xiao-Li; Li, Yu-Xiao; Gu, Jian-Zhong; Zhuo, Yi-Zhong

    2009-10-01

    The relaxation property of both the Eigen model and the Crow-Kimura model with a single peak fitness landscape is studied from a phase transition point of view. We first analyze the eigenvalue spectra of the replication mutation matrices. For sufficiently long sequences, the almost crossing point between the largest and second-largest eigenvalues locates the error threshold at which critical slowing down behavior appears. We calculate the critical exponent in the limit of infinite sequence lengths and compare it with the result from numerical curve fittings at sufficiently long sequences. We find that for both models the relaxation time diverges with exponent 1 at the error (mutation) threshold point. Results obtained from both methods agree quite well. From the unlimited correlation length feature, the first-order phase transition is further confirmed. Finally, with linear stability theory, we show that the two model systems are stable for all ranges of mutation rate. The Eigen model is asymptotically stable in terms of mutant classes, and the Crow-Kimura model is completely stable.
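
    The critical slowing down described above can be written compactly (notation assumed for illustration: mutation rate q, critical value q_c, largest eigenvalues λ₁ ≥ λ₂ of the replication mutation matrix):

```latex
% Relaxation time divergence at the error threshold, with critical exponent 1:
\tau \propto |q - q_c|^{-1}
% Equivalently, the inverse relaxation time is set by the closing gap between
% the two largest eigenvalues of the replication--mutation matrix:
\tau^{-1} \sim \lambda_1 - \lambda_2 \rightarrow 0 \quad (q \rightarrow q_c)
```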

  18. Development of a predictive limited sampling strategy for estimation of mycophenolic acid area under the concentration time curve in patients receiving concomitant sirolimus or cyclosporine.

    PubMed

    Figurski, Michal J; Nawrocki, Artur; Pescovitz, Mark D; Bouw, Rene; Shaw, Leslie M

    2008-08-01

    Limited sampling strategies for estimation of the area under the concentration time curve (AUC) for mycophenolic acid (MPA) co-administered with sirolimus (SRL) have not been previously evaluated. The authors developed and validated 68 regression models for estimation of MPA AUC for two groups of patients, one with concomitant SRL (n = 24) and the second with concomitant cyclosporine (n = 14), using various combinations of time points between 0 and 4 hours after drug administration. To provide as robust a model as possible, a dataset-splitting method similar to a bootstrap was used. In this method, the dataset was randomly split in half 100 times. Each time, one half of the data was used to estimate the equation coefficients, and the other half was used to test and validate the models. Final models were obtained by calculating the median values of the coefficients. Substantial differences were found in the pharmacokinetics of MPA between these groups. The mean MPA AUC as well as the standard deviation was much greater in the SRL group, 56.4 +/- 23.5 mg.h/L, compared with 30.4 +/- 11.0 mg.h/L in the cyclosporine group (P < 0.001). Mean maximum concentration was also greater in the SRL group: 16.4 +/- 7.7 mg/L versus 11.7 +/- 7.1 mg/L (P < 0.005). The second absorption peak in the pharmacokinetic profile, presumed to result from enterohepatic recycling of glucuronide MPA, was observed in 70% of the profiles in the SRL group and in 35% of profiles from the cyclosporine group. Substantial differences in the predictive performance of the regression models, based on the same time points, were observed between the two groups. The best model for the SRL group was based on the 0 (trough), 40-minute, and 4-hour time points, with R2, root mean squared error (RMSE), and predictive performance values of 0.82, 10.0, and 78%, respectively.
In the cyclosporine group, the best model was based on the 0, 40-minute, and 2-hour time points, with R2, RMSE, and predictive performance values of 0.86, 4.1, and 83%, respectively. The model with 2 hours as the last time point is also recommended for the SRL group for practical reasons, with corresponding values of 0.77, 11.3, and 69%, respectively.
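
    In general form, a limited sampling strategy estimates AUC as a linear combination of a few timed concentrations. The sketch below illustrates the idea only; the coefficients are hypothetical placeholders, not the published model coefficients:

```python
# Limited sampling estimate of MPA AUC from three timed concentrations:
# AUC ~ b0 + b1*C(0) + b2*C(40 min) + b3*C(2 h). All coefficients here are
# made-up illustrative values, NOT the validated model from the study.

def estimate_auc(c0, c40min, c2h, coef=(1.0, 2.5, 1.8, 4.0)):
    """Concentrations in mg/L; returns an AUC estimate in mg.h/L."""
    b0, b1, b2, b3 = coef
    return b0 + b1 * c0 + b2 * c40min + b3 * c2h

# Trough 2.0 mg/L, 8.0 mg/L at 40 min, 4.0 mg/L at 2 h:
print(estimate_auc(2.0, 8.0, 4.0))  # 1.0 + 5.0 + 14.4 + 16.0 = 36.4
```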

  19. Improving "lab-on-a-chip" techniques using biomedical nanotechnology: a review.

    PubMed

    Gorjikhah, Fatemeh; Davaran, Soodabeh; Salehi, Roya; Bakhtiari, Mohsen; Hasanzadeh, Arash; Panahi, Yunes; Emamverdy, Masumeh; Akbarzadeh, Abolfazl

    2016-11-01

    The application of nanotechnology to the biomedical sciences, principally in molecular diagnostics, is known as nanomolecular diagnostics, which provides new options for clinical diagnostic techniques. Molecular nanodiagnostics play a critical role in the development of personalized medicine, which features point-of-care performance of diagnostic procedures. This makes it possible to test patients at point-of-care facilities or in remote or resource-poor locations, thereby reducing testing time from days to minutes. In this review, applications of nanotechnology suited to biomedicine are discussed in two main classes: biomedical applications for use inside the body (such as drugs, diagnostic techniques, prostheses, and implants) and outside the body (such as "lab-on-a-chip" techniques). A lab-on-a-chip (LOC) is a tool that incorporates numerous laboratory tasks onto a small device, usually only millimeters or centimeters in size. Finally, the applications of biomedical nanotechnology in improving "lab-on-a-chip" techniques are discussed.

  20. Turbulence and deterministic chaos. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Deissler, Robert G.

    1992-01-01

    Several turbulent and nonturbulent solutions of the Navier-Stokes equations are obtained. The unaveraged equations are used numerically in conjunction with tools and concepts from nonlinear dynamics, including time series, phase portraits, Poincare sections, largest Liapunov exponents, power spectra, and strange attractors. Initially neighboring solutions for a low Reynolds number fully developed turbulence are compared. Several flows are noted: fully chaotic, complex periodic, weakly chaotic, simple periodic, and fixed-point. Of these, only fully chaotic is classified as turbulent. Besides the sustained flows, a flow which decays as it becomes turbulent is examined. For the finest grid, 128^3 points, the spatial resolution appears to be quite good. As a final note, the variation of the velocity derivatives skewness of a Navier-Stokes flow as the Reynolds number goes to zero is calculated numerically. The value of the skewness is shown to become small at low Reynolds numbers, in agreement with intuitive arguments that nonlinear terms should be negligible.

  1. Subjective Accounts of the Turning Points that Facilitate Desistance From Intimate Partner Violence.

    PubMed

    Walker, Kate; Bowen, Erica; Brown, Sarah; Sleath, Emma

    2017-03-01

    The transition from persistence to desistance in male perpetrators of intimate partner violence (IPV) is an understudied phenomenon. This article examines the factors that initiate and facilitate primary desistance from IPV. The narratives of 22 male perpetrators of IPV (13 desisters and 9 persisters), 7 female survivors, and 9 programme (IPV interventions) facilitators, in England, were analysed using thematic analysis. In their accounts, the participants described how the change from persister to desister did not happen as a result of discrete unique incidents but instead occurred through a number of catalysts or stimuli of change. These triggers were experienced gradually and accumulated over time in number and in type. In particular, Negative consequences of violence and Negative emotional responses needed to accumulate so that the Point of resolve: Autonomous decision to change was finally realised. This process facilitated and initiated the path of change and thus primary desistance from IPV.

  2. Retinal biometrics based on Iterative Closest Point algorithm.

    PubMed

    Hatanaka, Yuji; Tajima, Mikiya; Kawasaki, Ryo; Saito, Koko; Ogohara, Kazunori; Muramatsu, Chisako; Sunayama, Wataru; Fujita, Hiroshi

    2017-07-01

    The pattern of blood vessels in the eye is unique to each person because it rarely changes over time. Therefore, it is well known that retinal blood vessels are useful for biometrics. This paper describes a biometrics method using the Jaccard similarity coefficient (JSC) based on blood vessel regions in retinal image pairs. The retinal image pairs were roughly matched by the centers of their optic discs. Moreover, the image pairs were aligned using the Iterative Closest Point algorithm based on detailed blood vessel skeletons. For registration, a perspective transform was applied to the retinal images. Finally, the pairs were classified as either correct or incorrect using the JSC of the blood vessel region in the image pairs. The proposed method was applied to temporal retinal images, which were obtained in 2009 (695 images) and 2013 (87 images). The 87 images acquired in 2013 were all from persons already examined in 2009. The accuracy of the proposed method reached 100%.
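
    The verification step above reduces to a set comparison once the vessel masks are aligned; a minimal sketch, assuming (as a simplification) that each mask is a set of pixel coordinates:

```python
# Jaccard similarity coefficient (JSC) of two binary vessel masks, represented
# here as sets of (row, col) pixel coordinates after ICP alignment.

def jaccard(mask_a, mask_b):
    """JSC = |A intersect B| / |A union B|; 0.0 for two empty masks."""
    union = len(mask_a | mask_b)
    return len(mask_a & mask_b) / union if union else 0.0

a = {(0, 0), (0, 1), (1, 1)}
b = {(0, 1), (1, 1), (2, 2)}
print(jaccard(a, b))  # 2 shared pixels / 4 distinct pixels = 0.5
```

    A pair is then accepted as the same person when the JSC exceeds a chosen threshold.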

  3. 3D Printing and Assay Development for Point-of-Care Applications

    NASA Astrophysics Data System (ADS)

    Jagadeesh, Shreesha

    Existing centralized labs do not serve patients adequately in remote areas. To enable universal timely healthcare, there is a need to develop low cost, portable systems that can diagnose multiple diseases (point-of-care (POC) devices). Future POC diagnostics can be more multi-functional if medical device vendors can develop interoperability standards. This thesis developed the following medical diagnostic modules: Plasma from 25 microliters of blood was extracted through a filter membrane to demonstrate a 3D printed sample preparation module. The sepsis biomarker C-reactive protein was quantified through adsorption on nylon beads to demonstrate a bead-based assay suitable for a 3D printed disposable cartridge module. Finally, a modular fluorescent detection kit was built using 3D printed parts to detect CD4 cells in a disposable cartridge from ChipCare Corp. Due to the modularity enabled by the 3D printing technique, the developed units can be easily adapted to detect other diseases.

  4. Investigation of merging/reconnection heating during solenoid-free startup of plasmas in the MAST Spherical Tokamak

    NASA Astrophysics Data System (ADS)

    Tanabe, H.; Yamada, T.; Watanabe, T.; Gi, K.; Inomoto, M.; Imazawa, R.; Gryaznevich, M.; Scannell, R.; Conway, N. J.; Michael, C.; Crowley, B.; Fitzgerald, I.; Meakins, A.; Hawkes, N.; McClements, K. G.; Harrison, J.; O'Gorman, T.; Cheng, C. Z.; Ono, Y.; The MAST Team

    2017-05-01

    We present results of recent studies of merging/reconnection heating during central solenoid (CS)-free plasma startup in the Mega Amp Spherical Tokamak (MAST). During this process, ions are heated globally in the downstream region of an outflow jet, and electrons locally around the X-point produced by the magnetic field of two internal P3 coils and of two plasma rings formed around these coils, the final temperature being proportional to the reconnecting field energy. There is an effective confinement of the downstream thermal energy, due to a thick layer of reconnected flux. The characteristic structure is sustained for longer than an ion-electron energy relaxation time, and the energy exchange between ions and electrons contributes to the bulk electron heating in the downstream region. The peak electron temperature around the X-point increases with toroidal field, but the downstream electron and ion temperatures do not change.

  5. Distance-based microfluidic quantitative detection methods for point-of-care testing.

    PubMed

    Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James

    2016-04-07

    Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.

  6. Health care sensor-based systems for point-of-care monitoring and diagnostic applications: a brief survey.

    PubMed

    Tsakalakis, Michail; Bourbakis, Nicolaos G

    2014-01-01

    Continuous, real-time remote monitoring through medical point-of-care (POC) systems has drawn the interest of the scientific community for healthcare monitoring and diagnostic applications over the last decades, driven largely by advancements in several scientific fields. Portable, wearable and implantable apparatus may contribute to the betterment of today's healthcare system, which suffers from fundamental hindrances. The number and heterogeneity of such devices and systems, regarding both software and hardware components (i.e., sensors, antennas, acquisition circuits) as well as the medical applications they are designed for, is impressive. The objective of the current study is to present the major technological advancements that are considered to be the driving forces in the design of such systems, to briefly state the new aspects they can deliver in healthcare, and finally to identify, categorize and provide a first-level evaluation of them.

  7. CAD system of design and engineering provision of die forming of compressor blades for aircraft engines

    NASA Astrophysics Data System (ADS)

    Khaimovich, I. N.

    2017-10-01

    The article provides calculation algorithms for blank design and die-forming fitting to produce compressor blades for aircraft engines. The design system proposed in the article allows generating drafts of trimming and reducing dies automatically, leading to a significant reduction of work preparation time. A detailed analysis of the features of the blade structural elements was carried out; the adopted limitations and technological solutions allowed forming generalized algorithms for shaping the parting stamp face over the entire contour of the engraving for different configurations of die forgings. The author worked out the algorithms and programs to calculate the locations of the three-dimensional points describing the configuration of the die cavity. As a result, the author obtained a generic mathematical model of the final die block in the form of a three-dimensional array of base points. This model is the basis for the creation of engineering documentation for the technological equipment and the means of its control.

  8. Holodiagram: elliptic visualizing interferometry, relativity, and light-in-flight.

    PubMed

    Abramson, Nils H

    2014-04-10

    In holographic interferometry, there is usually a static distance separating the point of illumination and the point of observation. In Special Relativity, this separation is dynamic and is caused by the velocity of the observer. The corrections needed to compensate for these separations are similar in the two fields. We use the ellipsoids of the holodiagram for measurement and in a graphic way to explain and evaluate optical resolution, gated viewing, radar, holography, three-dimensional interferometry, Special Relativity, and light-in-flight recordings. Lorentz contraction together with time dilation is explained as the result of the eccentricity of the measuring ellipsoid, caused by its velocity. The extremely thin ellipsoid of the very first light appears as a beam aimed directly at the observer, which might explain the wave or ray duality of light and entanglement. Finally, we introduce the concept of ellipsoids of observation.

  9. Synthetic Minor NSR Permit: BP America Production Company - Salvador I/II Central Delivery Point

    EPA Pesticide Factsheets

    This page contains the final synthetic minor NSR permit for the BP America Production Company, Salvador I/II Central Delivery Point, located on the Southern Ute Indian Reservation in La Plata County, CO.

  10. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model, at several time points within the study period, using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one or two weeks out-of-sample predictions for most prediction points, associated with low volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables.
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
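
    The mean absolute percentage error used above has a standard definition; a short sketch with made-up weekly counts (not the study's data):

```python
# Mean absolute percentage error (MAPE) between observed and predicted counts,
# skipping zero observations to avoid division by zero.

def mape(actual, predicted):
    """Returns MAPE in percent over pairs with a nonzero observed value."""
    pairs = [(y, p) for y, p in zip(actual, predicted) if y != 0]
    return 100.0 * sum(abs(y - p) / abs(y) for y, p in pairs) / len(pairs)

weekly_cases = [40, 50, 80]      # hypothetical observed dengue counts
one_week_ahead = [44, 45, 76]    # hypothetical model predictions
print(round(mape(weekly_cases, one_week_ahead), 1))  # (10 + 10 + 5) / 3 = 8.3
```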

  11. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations

    PubMed Central

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-01-01

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10–0.20 m, and vertical accuracy was approximately 0.01–0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed. PMID:28398256

  12. Competency-based learning in an ambulatory care setting: Implementation of simulation training in the Ambulatory Care Rotation during the final year of the MaReCuM model curriculum.

    PubMed

    Dusch, Martin; Narciß, Elisabeth; Strohmer, Renate; Schüttpelz-Brauns, Katrin

    2018-01-01

    Aim: As part of the MaReCuM model curriculum at Medical Faculty Mannheim Heidelberg University, a final year rotation in ambulatory care was implemented and augmented to include ambulatory care simulation. In this paper we describe this ambulatory care simulation, the designated competency-based learning objectives, and evaluate the educational effect of the ambulatory care simulation training. Method: Seventy-five final year medical students participated in the survey (response rate: 83%). The control group completed the ambulatory rotation prior to the implementation of the ambulatory care simulation. The experimental group was required to participate in the simulation at the beginning of the final year rotation in ambulatory care. A survey of both groups was conducted at the beginning and at the end of the rotation. The learning objectives were taken from the National Competency-based Catalogue of Learning Objectives for Undergraduate Medical Education (NKLM). Results: The ambulatory care simulation had no measurable influence on students' subjectively perceived learning progress, the evaluation of the ambulatory care rotation, or working in an ambulatory care setting. At the end of the rotation participants in both groups reported having gained better insight into treating outpatients. At the beginning of the rotation members of both groups assessed their competencies to be at the same level. The simulated ambulatory scenarios were evaluated by the participating students as being well structured and easy to understand. The scenarios successfully created a sense of time pressure for those confronted with them. The ability to correctly fill out a narcotic prescription form as required was rated significantly higher by those who participated in the simulation. Participation in the ambulatory care simulation had no effect on the other competencies covered by the survey. 
Discussion: The effect of the four instructional units comprising the ambulatory care simulation was not measurable, whether because of the simulation's current form or because of the measurement point at the end of the 12-week rotation. Possible reasons include the many, statistically elusive factors relevant to the individual, the wide variety of final-year rotation placements, the late timing of the final survey, and the selection of simulated scenarios. The course is slated to undergo targeted further development and should be supplemented with additional learning opportunities to ensure that the main learning objectives are covered. The description of the teaching format is meant to contribute to the ongoing development of competency-oriented medical education in the areas of ambulatory care, communication, prevention and health promotion.

  13. Development of a methodology for electronic waste estimation: A material flow analysis-based SYE-Waste Model.

    PubMed

    Yedla, Sudhakar

    2016-01-01

With improved living standards and a growing services-sector share of the economy in Asia, the use of electronic equipment is on the rise, resulting in increased electronic waste generation. A peculiarity of electronic waste is that it retains 'significant' value even after its lifetime and, to complicate matters, even in its 'dump' stage after an extended life. Thus, in India, once its lifetime is over, the e-material changes hands more than once and finally ends up either with informal recyclers or in the storerooms of urban dwellings. This character makes it extremely difficult to estimate electronic waste generation. The present study develops a functional model based on a material flow analysis approach that considers all possible end uses of the material and its transformed goods, finally arriving at disposal. It covers the various degrees of use derived from the e-goods: primary use (lifetime), secondary use (first-degree extension of life), third-hand use (second-degree extension of life), donation, retention in place (without discarding), the fraction shifted to scrap vendors, and the components reaching the final dump site from the various end points of use. This 'generic functional model', named the SYE-Waste Model and developed on a material flow analysis approach, can be used to derive 'obsolescence factors' for the various degrees of usage of e-goods and to make a comprehensive estimate of electronic waste in any city or country. © The Author(s) 2015.
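The staged end-of-life accounting described above can be illustrated with a minimal sketch. This is not the published SYE-Waste Model; the function name, the stage fractions and all numbers are hypothetical, chosen only to show how a sales cohort might be split across successive use stages before reaching scrap vendors or the dump site.

```python
# Hypothetical sketch (not the SYE-Waste Model): propagate a cohort of
# e-goods through successive use stages. Each stage fraction plays the
# role of an "obsolescence factor": the share of the remaining cohort
# diverted into that stage rather than discarded.

def estimate_ewaste(units_sold, stage_fractions, scrap_fraction):
    """Split a sales cohort across end-of-life routes.

    units_sold      -- number of devices entering primary use
    stage_fractions -- fraction of the remaining cohort passed on to each
                       further stage (secondary use, third-hand use,
                       donation, storage), applied in order
    scrap_fraction  -- of the final remainder, the fraction reaching scrap
                       vendors; the rest is assumed to reach the dump site
    """
    remaining = units_sold
    flows = {}
    for stage, frac in stage_fractions.items():
        flows[stage] = remaining * frac
        remaining -= flows[stage]
    flows["scrap_vendor"] = remaining * scrap_fraction
    flows["dump_site"] = remaining - flows["scrap_vendor"]
    return flows

# Illustrative numbers only.
flows = estimate_ewaste(
    1000,
    {"secondary_use": 0.4, "third_hand_use": 0.2,
     "donation": 0.1, "storage": 0.1},
    scrap_fraction=0.6,
)
```

Summing the resulting flows recovers the original cohort, which is the basic mass-balance check of any material flow analysis.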

  14. Final report on VMI-NMISA bilateral comparison APMP.T-K3.1: Realizations of the ITS-90 over the range -38.8344 °C to 419.527 °C

    NASA Astrophysics Data System (ADS)

    Liedberg, Hans

    2012-01-01

The Comité Consultatif de Thermométrie (CCT) has organized several key comparisons to compare realizations of the ITS-90 in different National Metrology Institutes. To keep the organization, time scale and data processing of such a comparison manageable, the number of participants in a CCT key comparison (CCT KC) is limited to a few laboratories in each major economic region. Subsequent regional key comparisons are linked to the applicable CCT KC by two or more linking laboratories. For the temperature range from 83.8058 K (triple point of Ar) to 933.473 K (freezing point of Al), a key comparison, CCT-K3, was carried out from 1997 to 2001 among representative laboratories in North America, Europe and Asia. Following CCT-K3, the Asia-Pacific Metrology Program Key Comparison 3 (APMP.T-K3) was organized for National Metrology Institutes in the Asia/Pacific region. NMIA (Australia) and KRISS (South Korea) provided the link between CCT-K3 and APMP.T-K3. APMP.T-K3, which took place from February 2000 to June 2003, covered the temperature range from -38.8344 °C (triple point of Hg) to 419.527 °C (freezing point of Zn), using a standard platinum resistance thermometer (SPRT) as the artefact. In June 2007 the Vietnam Metrology Institute (VMI) requested a bilateral comparison to link their SPRT calibration capabilities to APMP.T-K3, and in October 2007 the National Metrology Institute of South Africa (NMISA) agreed to provide the link to APMP.T-K3. Like APMP.T-K3, the comparison was restricted to the Hg to Zn temperature range to reduce the chance of drift in the SPRT artefact. The comparison was carried out in a participant-pilot-participant topology (with NMISA as the pilot and VMI as the participant). VMI's results in the comparison were linked to the Average Reference Values of CCT-K3 via NMISA's results in APMP.T-K3.
The resistance ratios measured by VMI and NMISA at Zn, Sn, Ga and Hg fixed points agree within their combined uncertainties, and VMI's results also agree with the CCT-K3 reference values at these fixed points. This text appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).

  15. Global exponential stability analysis on impulsive BAM neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Li, Yao-Tang; Yang, Chang-Bo

    2006-12-01

Using the M-matrix and topological degree tools, and by constructing a suitable Lyapunov functional, sufficient conditions are obtained for the existence, uniqueness and global exponential stability of the equilibrium point of bidirectional associative memory (BAM) neural networks with distributed delays subjected to impulsive state displacements at fixed instants of time. The results remove the usual assumptions of boundedness, monotonicity, and differentiability of the activation functions. It is shown that in some cases the stability criteria can be easily checked. Finally, an illustrative example is given to show the effectiveness of the presented criteria.
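For reference, a common form of the impulsive BAM system with distributed delays studied in this literature can be written as follows. The notation here is generic and assumed (not taken from the paper): $f_j, g_i$ are activation functions, $K_{ji}, N_{ij}$ are delay kernels, and the last line gives the impulsive state displacements at the fixed instants $t_k$:

```latex
\begin{aligned}
\dot{x}_i(t) &= -a_i\,x_i(t) + \sum_{j=1}^{m} p_{ji}\, f_j\!\Big(\int_0^{\infty} K_{ji}(s)\, y_j(t-s)\,\mathrm{d}s\Big) + I_i, && t \ne t_k,\\
\dot{y}_j(t) &= -b_j\,y_j(t) + \sum_{i=1}^{n} q_{ij}\, g_i\!\Big(\int_0^{\infty} N_{ij}(s)\, x_i(t-s)\,\mathrm{d}s\Big) + J_j, && t \ne t_k,\\
\Delta x_i(t_k) &= x_i(t_k^{+}) - x_i(t_k^{-}), \qquad
\Delta y_j(t_k) = y_j(t_k^{+}) - y_j(t_k^{-}).
\end{aligned}
```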

  16. The method of projected characteristics for the evolution of magnetic arches

    NASA Technical Reports Server (NTRS)

    Nakagawa, Y.; Hu, Y. Q.; Wu, S. T.

    1987-01-01

A numerical method for solving the fully nonlinear MHD equations is described. In particular, a formulation based on the newly developed method of projected characteristics (Nakagawa, 1981), suitable for studying the evolution of magnetic arches due to motions of their foot-points, is presented. The final formulation is given in the form of difference equations; therefore, an analysis of numerical stability is also presented. Further, the most important derivation, that of physically self-consistent, time-dependent boundary conditions (i.e. the evolving boundary equations), is given in detail, and some results obtained with such boundary equations are reported.

  17. Application of Neural Network Optimized by Mind Evolutionary Computation in Building Energy Prediction

    NASA Astrophysics Data System (ADS)

    Song, Chen; Zhong-Cheng, Wu; Hong, Lv

    2018-03-01

Building energy forecasting plays an important role in energy management and planning. Using the mind evolutionary algorithm to find optimal network weights and thresholds for a BP neural network can overcome the BP network's tendency to fall into local minima. The optimized network is used both for time-series prediction and for same-month forecasting, yielding two predicted values; these two values are then fed into a neural network to obtain the final forecast. The effectiveness of the method was verified by experiments with the energy data of three buildings in Hefei.
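The two-stage idea (a population search for initial weights, followed by local refinement of a BP network) can be sketched roughly as below. This is only a structural illustration, not the paper's method: the mind evolutionary algorithm is replaced by a plain best-of-population random search, and the data are synthetic.

```python
import numpy as np

# Hypothetical sketch: stage 1 selects initial weights for a tiny
# one-hidden-layer network by population search (stand-in for the mind
# evolutionary algorithm); stage 2 refines them by gradient descent.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))          # two predicted values per sample
y = 0.6 * X[:, 0] + 0.4 * X[:, 1]         # synthetic "final" target

def forward(w, X):
    W1, b1, W2, b2 = w
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

def mse(w):
    return float(np.mean((forward(w, X)[1] - y) ** 2))

def random_weights():
    return [0.5 * rng.standard_normal((2, 4)), np.zeros(4),
            0.5 * rng.standard_normal(4), 0.0]

# Stage 1: keep the best individual of a random population.
best = min((random_weights() for _ in range(300)), key=mse)

# Stage 2: backpropagation refines the selected weights.
lr = 0.1
for _ in range(1000):
    W1, b1, W2, b2 = best
    h, pred = forward(best, X)
    d = 2 * (pred - y) / len(y)           # dL/dpred
    dW2, db2 = h.T @ d, d.sum()
    dh = np.outer(d, W2) * (1 - h ** 2)   # backprop through tanh
    best = [W1 - lr * X.T @ dh, b1 - lr * dh.sum(0),
            W2 - lr * dW2, b2 - lr * db2]

final_mse = mse(best)                     # small residual error remains
```

Seeding gradient descent from the best population member is what lets the combined scheme escape poor local minima that a randomly initialized BP network might fall into.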

  18. Zeta Function Regularization in Casimir Effect Calculations and J. S. DOWKER's Contribution

    NASA Astrophysics Data System (ADS)

    Elizalde, Emilio

    2012-06-01

    A summary of relevant contributions, ordered in time, to the subject of operator zeta functions and their application to physical issues is provided. The description ends with the seminal contributions of Stephen Hawking and Stuart Dowker and collaborators, considered by many authors as the actual starting point of the introduction of zeta function regularization methods in theoretical physics, in particular, for quantum vacuum fluctuation and Casimir effect calculations. After recalling a number of the strengths of this powerful and elegant method, some of its limitations are discussed. Finally, recent results of the so-called operator regularization procedure are presented.

  19. Zeta Function Regularization in Casimir Effect Calculations and J. S. Dowker's Contribution

    NASA Astrophysics Data System (ADS)

    Elizalde, Emilio

    2012-07-01

A summary of relevant contributions, ordered in time, to the subject of operator zeta functions and their application to physical issues is provided. The description ends with the seminal contributions of Stephen Hawking and Stuart Dowker and collaborators, considered by many authors as the actual starting point of the introduction of zeta function regularization methods in theoretical physics, in particular, for quantum vacuum fluctuation and Casimir effect calculations. After recalling a number of the strengths of this powerful and elegant method, some of its limitations are discussed. Finally, recent results of the so-called operator regularization procedure are presented.

  20. Obesity and Postmenopausal Breast Cancer Risk: Determining the Role of Growth Factor-Induced Aromatase Expression

    DTIC Science & Technology

    2014-03-01

(Abstract not available; the retrieved text consists of fragmented report excerpts and figure captions concerning mice on a diet-induced obesity (DIO) diet versus a low-fat control diet at the study's time points, tumor latency between the DIO and control groups (Figure 1c), and total ER expression in MCF-7 and ZR75 cells exposed to Ob versus control sera (Figure 5).)

  1. The Search for Neutrinos from Gamma Ray Bursts with AMANDA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuehn, Kyler

    2006-05-19

We report on the combined analysis of over 400 GRB time periods that occurred during seven years of AMANDA observations. AMANDA has seen no neutrinos correlated with these bursts, thus we report a neutrino flux limit that is the most stringent observational limit to date. In light of the new observational opportunities afforded by Swift, we also discuss the future potential for GRB neutrino detection with AMANDA's successor, IceCube. Finally, we briefly discuss the expansion of AMANDA's transient point-source search to other phenomena, such as jet-driven supernovae and gamma-ray dark bursts.

  2. GPU-accelerated Modeling and Element-free Reverse-time Migration with Gauss Points Partition

    NASA Astrophysics Data System (ADS)

    Zhen, Z.; Jia, X.

    2014-12-01

Element-free method (EFM) has been applied to seismic modeling and migration. Compared with the finite element method (FEM) and the finite difference method (FDM), it is much cheaper and more flexible because only the information of the nodes and the boundary of the study area is required in computation. In the EFM, the number of Gauss points should be consistent with the number of model nodes; otherwise the accuracy of the intermediate coefficient matrices would be harmed. Thus when we increase the nodes of the velocity model in order to obtain higher resolution, the size of the computer's memory becomes a bottleneck. The original EFM can deal with at most 81×81 nodes with 2 GB of memory, as tested by Jia and Hu (2006). In order to solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition (GPP), and utilize GPUs to improve the computation efficiency. Considering the characteristics of the Gauss points, the GPP method doesn't influence the propagation of seismic waves in the velocity model. To overcome the time-consuming computation of the stiffness matrix (K) and the mass matrix (M), we also use GPUs in our computation program. We employ the compressed sparse row (CSR) format to compress the intermediate sparse matrices and simplify the operations by solving the linear equations with the CULA Sparse Conjugate Gradient (CG) solver instead of the linear sparse solver 'PARDISO'. It is observed that our strategy can significantly reduce the computational time of K and M compared with the algorithm based on the CPU. The model tested is the Marmousi model. The length of the model is 7425 m and the depth is 2990 m. We discretize the model with 595×298 nodes, 300×300 Gauss cells and 3×3 Gauss points in each cell. In contrast to the computational time of the conventional EFM, the GPUs-GPP approach can substantially improve the efficiency.
The speedup ratio for computing K and M is 120, and the speedup ratio for the RTM is 11.5. At the same time, the accuracy of imaging is not harmed. Another advantage of the GPUs-GPP method is that it is easily applied in other numerical methods such as the FEM. Finally, in the GPUs-GPP method the arrays require quite limited memory storage, which makes the method promising for large-scale 3D problems.
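The two numerical ingredients named above, CSR storage and a conjugate gradient solve, can be sketched in a few lines. This toy version is CPU-only NumPy on a tiny symmetric positive-definite matrix, standing in for the GPU-based CULA Sparse CG solve of the large EFM matrices; all names are illustrative.

```python
import numpy as np

# Minimal sketch of a CSR matrix-vector product and a conjugate gradient
# (CG) solver. The 3x3 SPD matrix is a stand-in, not an EFM stiffness matrix.

def csr_matvec(data, indices, indptr, x):
    """y = A @ x with A stored in CSR arrays (data, indices, indptr)."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

def cg(matvec, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for symmetric positive-definite systems."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SPD tridiagonal matrix [[4,-1,0],[-1,4,-1],[0,-1,4]] in CSR form.
data = np.array([4., -1., -1., 4., -1., -1., 4.])
indices = np.array([0, 1, 0, 1, 2, 1, 2])
indptr = np.array([0, 2, 5, 7])
b = np.array([1., 2., 3.])
x = cg(lambda v: csr_matvec(data, indices, indptr, v), b)
```

On a GPU, the same CSR matvec is what gets parallelized across rows, which is where the reported speedups come from.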

  3. Guidance, Navigation, and Control Performance for the GOES-R Spacecraft

    NASA Technical Reports Server (NTRS)

    Chapel, Jim D.; Stancliffe, Devin; Bevacqua, Tim; Winkler, Stephen; Clapp, Brian; Rood, Tim; Gaylor, David; Freesland, Douglas C.; Krimchansky, Alexander

    2014-01-01

    The Geostationary Operational Environmental Satellite-R Series (GOES-R) is the first of the next generation geostationary weather satellites, scheduled for delivery in late 2015 and launch in early 2016. Relative to the current generation of GOES satellites, GOES-R represents a dramatic increase in Earth and solar weather observation capabilities, with 4 times the resolution, 5 times the observation rate, and 3 times the number of spectral bands for Earth observations. GOES-R will also provide unprecedented availability, with less than 120 minutes per year of lost observation time. The Guidance Navigation & Control (GN&C) design requirements to achieve these expanded capabilities are extremely demanding. This paper first presents the pointing control, pointing stability, attitude knowledge, and orbit knowledge requirements necessary to realize the ambitious Image Navigation and Registration (INR) objectives of GOES-R. Because the GOES-R suite of instruments is sensitive to disturbances over a broad spectral range, a high fidelity simulation of the vehicle has been created with modal content over 500 Hz to assess the pointing stability requirements. Simulation results are presented showing acceleration, shock response spectrum (SRS), and line of sight responses for various disturbances from 0 Hz to 512 Hz. These disturbances include gimbal motion, reaction wheel disturbances, thruster firings for station keeping and momentum management, and internal instrument disturbances. Simulation results demonstrate excellent performance relative to the pointing and pointing stability requirements, with line of sight jitter of the isolated instrument platform of approximately 1 micro-rad. Low frequency motion of the isolated instrument platform is internally compensated within the primary instrument. Attitude knowledge and rate are provided directly to the instrument with an accuracy defined by the Integrated Rate Error (IRE) requirements. 
The allowable IRE ranges from 1 to 18.5 micro-rad, depending upon the time window of interest. The final piece of the INR performance is orbit knowledge. Extremely accurate orbital position is achieved by GPS navigation at Geosynchronous Earth Orbit (GEO). Performance results are shown demonstrating compliance with the 50 to 75 m orbit position accuracy requirements of GOES-R, including during station-keeping and momentum management maneuvers. As shown in this paper, the GN&C performance for the GOES-R series of spacecraft supports the challenging mission objectives of the next generation GEO Earth-observation satellites.

  4. Evaluating Continuous-Time Slam Using a Predefined Trajectory Provided by a Robotic Arm

    NASA Astrophysics Data System (ADS)

    Koch, B.; Leblebici, R.; Martell, A.; Jörissen, S.; Schilling, K.; Nüchter, A.

    2017-09-01

Recently published approaches to SLAM algorithms process laser sensor measurements and output a map as a point cloud of the environment. Often the actual precision of the map remains unclear, since SLAM algorithms apply local improvements to the resulting map. Unfortunately, it is not trivial to compare the performance of SLAM algorithms objectively, especially without an accurate ground truth. This paper presents a novel benchmarking technique that makes it possible to compare a precise map generated with an accurate ground truth trajectory to a map with a manipulated trajectory distorted by different forms of noise. The accurate ground truth is acquired by mounting a laser scanner on an industrial robotic arm. The robotic arm is moved on a predefined path while the position and orientation of the end-effector tool are monitored. During this process the 2D profile measurements of the laser scanner are recorded in six degrees of freedom and afterwards used to generate a precise point cloud of the test environment. For benchmarking, an offline continuous-time SLAM algorithm is subsequently applied to remove the inserted distortions. Finally, it is shown that the manipulated point cloud can be restored to its previous state and is even slightly improved compared to the original version, since small errors introduced by imprecise assumptions, sensor noise and calibration errors are removed as well.

  5. Differential expression of genes and proteins associated with wool follicle cycling.

    PubMed

    Liu, Nan; Li, Hegang; Liu, Kaidong; Yu, Juanjuan; Cheng, Ming; De, Wei; Liu, Jifeng; Shi, Shuyan; He, Yanghua; Zhao, Jinshan

    2014-08-01

Sheep are valuable resources for the wool industry. Wool growth in Aohan fine-wool sheep cycles with the seasons over the year. Therefore, identifying genes that control wool growth cycling might lead to ways of improving the quality and yield of fine wool. In this study, we employed the Agilent sheep gene expression microarray and proteomic technology to compare the gene expression patterns of the body side skins at the August and December time points in Aohan fine-wool sheep (a Chinese indigenous breed). The microarray study revealed that 2,223 transcripts were differentially expressed, including 1,162 up-regulated and 1,061 down-regulated transcripts, comparing body side skin at the August time point to the December one (A/D) in Aohan fine-wool sheep. Seven differentially expressed genes were then selected to validate the reliability of the gene chip data. The majority of the genes possibly related to follicle development and wool growth could be assigned to the categories of regulation of receptor binding, extracellular region, protein binding and extracellular space. The proteomic study revealed that 84 protein spots showed significant differences in expression levels. Of the 84, 63 protein spots were upregulated and 21 were downregulated in A/D. Finally, 55 protein spots were identified through MALDI-TOF/MS analyses. Furthermore, the regulatory mechanism of the hair follicle might resemble that of fetation.

  6. An improved ASIFT algorithm for indoor panorama image matching

    NASA Astrophysics Data System (ADS)

    Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong

    2017-07-01

The generation of 3D models for indoor objects and scenes is an attractive tool for digital city, virtual reality and SLAM purposes. Panoramic images are becoming increasingly common in such applications due to their ability to capture the complete environment in one single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm to implement these functions. Compared with the SIFT algorithm, the ASIFT algorithm generates more feature points and matches them more accurately, even for panoramic images with obvious distortions. However, the algorithm is time-consuming because of its complex operations, and it does not perform well for some indoor scenes with poor light or few textures. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images. Secondly, the original ASIFT algorithm is simplified from the affine transformation of tilt and rotation of the images to the tilt affine transformation only. Finally, the results are re-projected into the panoramic image space. Experiments in different environments show that this method can not only ensure the precision of feature point extraction and matching, but also greatly reduce the computing time.
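The first step, projecting the panorama into normal perspective views, amounts to mapping each perspective pixel to a viewing ray and then to panorama coordinates. A minimal NumPy sketch under assumed conventions (equirectangular panorama, pinhole perspective model; function and parameter names are hypothetical) is:

```python
import numpy as np

# Hypothetical sketch: for a virtual perspective camera with a given field
# of view and viewing direction, compute the (longitude, latitude) in an
# equirectangular panorama that each perspective pixel samples from.

def perspective_grid(width, height, fov_deg, yaw_deg=0.0, pitch_deg=0.0):
    f = 0.5 * width / np.tan(np.radians(fov_deg) / 2)   # focal length in px
    u, v = np.meshgrid(np.arange(width) - width / 2 + 0.5,
                       np.arange(height) - height / 2 + 0.5)
    rays = np.stack([u, -v, np.full_like(u, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    # Rotate rays by the camera's yaw and pitch.
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    d = rays @ (Ry @ Rx).T
    lon = np.arctan2(d[..., 0], d[..., 2])       # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))   # [-pi/2, pi/2]
    return lon, lat

lon, lat = perspective_grid(64, 48, fov_deg=90)
```

Scaling `lon`/`lat` to panorama pixel coordinates and interpolating yields the perspective view; feature matches found there can then be re-projected back through the same mapping.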

  7. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of image triplets. Self-calibration methods are applied to obtain the exterior orientation of the cameras, the parameters of internal orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence; thus the 3-D trajectory is determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed.
The advantage of this tracking process is twofold: it can track natural points, without using markers; and it can track local surfaces on the human body. In the last case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean value of the displacement of all the trajectories inside its region. The tracked key points lead to a final result comparable to the conventional motion capture systems: 3-D trajectories of key points which can be afterwards analyzed and used for animation or medical purposes.
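The forward ray intersection used above can be written as a small linear least-squares problem: find the 3-D point minimizing the summed squared distance to all viewing rays. A self-contained sketch (synthetic cameras, hypothetical names; not the authors' implementation) is:

```python
import numpy as np

# Minimal sketch of forward ray intersection: given camera centers c_i and
# unit viewing directions d_i toward the same scene point, solve for the
# point p minimizing sum_i || (I - d_i d_i^T)(p - c_i) ||^2.

def intersect_rays(centers, directions):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Three synthetic cameras all looking at the point (1, 2, 3).
target = np.array([1.0, 2.0, 3.0])
centers = [np.array([0.0, 0.0, 0.0]),
           np.array([5.0, 0.0, 0.0]),
           np.array([0.0, 5.0, 1.0])]
dirs = [target - c for c in centers]
p = intersect_rays(centers, dirs)
```

With noisy image measurements the rays no longer meet exactly, and the same least-squares solution returns the point closest to all of them, which is what makes the triplet geometry robust.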

  8. Toward a Global Bundle Adjustment of SPOT 5 - HRS Images

    NASA Astrophysics Data System (ADS)

    Massera, S.; Favé, P.; Gachet, R.; Orsoni, A.

    2012-07-01

The HRS (High Resolution Stereoscopic) instrument carried on SPOT 5 enables quasi-simultaneous acquisition of stereoscopic images on wide segments - 120 km wide - with two forward and backward-looking telescopes observing the Earth with an angle of 20° ahead and behind the vertical. For 8 years IGN (Institut Géographique National) has been developing techniques to achieve spatiotriangulation of these images. During this time the capacity for bundle adjustment of SPOT 5 - HRS spatial images has improved considerably. Today a global single block composed of about 20,000 images can be computed in reasonable calculation time. The progression was achieved step by step: the first computed blocks were composed of only 40 images, then bigger blocks were computed. Finally only one global block is now computed. At the same time, calculation tools have improved: for example, the adjustment of 2,000 images of North Africa takes about 2 minutes, whereas 8 hours were needed two years ago. To reach such a result, new independent software was developed to compute fast and efficient bundle adjustments. Over the same period, equipment - GCPs (Ground Control Points) and tie points - and techniques have also evolved. Studies were carried out to derive recommendations about the equipment needed to make an accurate single block. Tie points can now be quickly and automatically computed with SURF (Speeded Up Robust Features) techniques. Today the updated equipment is composed of about 500 GCPs, and studies show that the ideal configuration is around 100 tie points per square degree. With such equipment, the location of the global HRS block becomes accurate to a few meters, whereas non-adjusted images are only accurate to about 15 m. This paper describes the methods used in IGN Espace to compute a global single block composed of almost 20,000 HRS images, 500 GCPs and several million tie points in reasonable calculation time. Such a block has many advantages. 
Because the global block is unique, it becomes easier to manage its history and the successive evolutions of the computations (new images, new GCPs or tie points). The location is now unique and consequently coherent all around the world, avoiding steps and artifacts on the borders of DSMs (Digital Surface Models) and OrthoImages historically calculated from different blocks. No extrapolation far from the GCPs at the image limits is needed anymore. Using the global block as a reference will allow new images from other sources to be easily located on this reference.

  9. Anesthesiology Point of Care project.

    PubMed

    McDonald, John S; Noback, Carl R; Cheng, Drew; Lee, T K; Nenov, Val

    2002-01-01

We are developing a dynamic prototype visual communication system for the operating room environs. This has classically been viewed as an isolated and impenetrable workplace. All medical experiences and all teaching remain in a one-to-one closed loop with no recall or subsequent sharing for the training and education of other colleagues. The "Anesthesia Point of Care" (APOC) concept embraces the sharing, recording and presentation of various physiological and pharmacological events so that the memory of real-time events can be shared at a later time for the edification of colleagues who were not present at the primary learning event. In addition, it provides a remarkably rapid tool for fellow faculty to respond to stress and crisis events, which can be broadcast instantly as they happen. Finally, it serves as an efficient and effective means of paging and general communication throughout the daily routines of the various healthcare providers in anesthesiology who work as a team unit, including the staff, residents, CRNAs, physician assistants, and technicians. This system offers a unique opportunity to develop future advanced capabilities, including training exercises, presurgical evaluations, surgical scheduling, efficiency improvements based on earlier or later than expected case completion, and even a unique window into improved billing itemization and coordination.

  10. Study of Automatic Image Rectification and Registration of Scanned Historical Aerial Photographs

    NASA Astrophysics Data System (ADS)

    Chen, H. R.; Tseng, Y. H.

    2016-06-01

Historical aerial photographs directly provide good evidence of past times. The Research Center for Humanities and Social Sciences (RCHSS) of Taiwan's Academia Sinica has collected and scanned numerous historical maps and aerial images of Taiwan and China. Some maps or images have been geo-referenced manually, but most of the historical aerial images have not been registered, since no GPS or IMU data were available to assist orientation in the past. In our research, we developed an automatic process for matching historical aerial images by SIFT (Scale Invariant Feature Transform), using computer vision to handle the great quantity of images. SIFT is one of the most popular methods for image feature extraction and matching. This algorithm extracts extreme values in scale space into invariant image features, which are robust to changes in rotation, scale, noise, and illumination. We also use RANSAC (Random Sample Consensus) to remove outliers and obtain good conjugate points between photographs. Finally, we manually add control points for registration through least-squares adjustment based on the collinearity equations. In the future, we can use the image feature points of more photographs to build a control image database. Every new image will be treated as a query image. If the feature points of a query image match features in the database, the query image probably overlaps with the control images. As the database is updated, more and more query images can be matched and aligned automatically. Other research on multi-temporal environmental changes can then be investigated with those geo-referenced temporal-spatial data.
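The feature matching plus RANSAC step can be illustrated with a toy RANSAC that estimates a 2-D similarity transform (scale, rotation, translation) from putative point matches and rejects outliers. Feature extraction itself is omitted; the correspondences are synthetic and all names are hypothetical, so this only sketches the outlier-rejection idea, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_similarity(src, dst):
    """Least-squares 2-D similarity transform from 2 or more matches.

    Points are treated as complex numbers z, so dst = a * src + t,
    where a encodes scale and rotation and t the translation.
    """
    cs, cd = src.mean(0), dst.mean(0)
    s, d = src - cs, dst - cd
    zs, zd = s[:, 0] + 1j * s[:, 1], d[:, 0] + 1j * d[:, 1]
    a = (np.conj(zs) @ zd) / (np.conj(zs) @ zs)
    t = (cd[0] + 1j * cd[1]) - a * (cs[0] + 1j * cs[1])
    return a, t

def apply(a, t, pts):
    z = pts[:, 0] + 1j * pts[:, 1]
    w = a * z + t
    return np.c_[w.real, w.imag]

def ransac(src, dst, iters=200, thresh=1.0):
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 2, replace=False)  # minimal sample
        a, t = fit_similarity(src[idx], dst[idx])
        err = np.linalg.norm(apply(a, t, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set.
    return fit_similarity(src[best_inliers], dst[best_inliers]), best_inliers

# Synthetic matches: rotation by 30 deg, scale 1.2, plus 10 gross outliers.
true_a = 1.2 * np.exp(1j * np.radians(30))
src = rng.uniform(0, 100, (50, 2))
dst = apply(true_a, 10 + 5j, src)
dst[:10] += rng.uniform(50, 100, (10, 2))   # corrupt 10 correspondences
(a, t), inliers = ransac(src, dst)
```

In the real pipeline the model would typically be a homography or the collinearity-based mapping rather than a similarity transform, but the sample-score-refit loop is the same.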

  11. On the upper part load vortex rope in Francis turbine: Experimental investigation

    NASA Astrophysics Data System (ADS)

    Nicolet, C.; Zobeiri, A.; Maruzewski, P.; Avellan, F.

    2010-08-01

The swirling flow developing in a Francis turbine draft tube under part load operation leads to pressure fluctuations, usually in the range of 0.2 to 0.4 times the runner rotational frequency, resulting from the so-called vortex breakdown. For low cavitation number, the flow features a cavitation vortex rope animated by a precessing motion. Under given conditions, these pressure fluctuations may lead to undesirable pressure fluctuations in the entire hydraulic system and also produce active power oscillations. For the upper part load range, between 0.7 and 0.85 times the best efficiency discharge, pressure fluctuations may appear in a higher frequency range of 2 to 4 times the runner rotational speed and feature modulations with the vortex rope precession. It has been pointed out that for this particular operating point, the vortex rope features an elliptical cross-section and is animated by a self-rotation. This paper presents an experimental investigation focusing on this peculiar phenomenon, defined as the upper part load vortex rope. The experimental investigation is carried out on a high specific speed Francis turbine scale model installed on a test rig of the EPFL Laboratory for Hydraulic Machines. The selected operating point corresponds to a discharge of 0.83 times the best efficiency discharge. Observations of the cavitation vortex carried out with a high-speed camera have been recorded and synchronized with pressure fluctuation measurements at the draft tube cone. First, the vortex rope self-rotation is evidenced and its frequency is deduced. Then, the influence of the sigma cavitation number on vortex rope shape and pressure fluctuations is presented. The waterfall diagram of the pressure fluctuations evidences resonance effects with the hydraulic circuit. The time evolution of the vortex rope volume is compared with the time evolution of the pressure fluctuations using image processing. 
Finally, the influence of the Froude number on the vortex rope shape and the associated pressure fluctuations is analyzed by varying the rotational speed.

  12. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the low-expense, high-efficiency image collection process and the rich 3D and texture information present in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has promising applications in areas such as urban planning and first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. The point clouds are then refined by noise removal and surface smoothing. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation and scale differences exist between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix contains the parameters describing the required translation, rotation and scale. The methodology presented in the thesis has been shown to behave well for test data. The robustness of the method is assessed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough result that contains a larger offset than for the test data because of the lower quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration is evaluated by comparing it to the result obtained from manual selection of matched points. 
Using the method introduced, point clouds extracted from different image groups could be combined with each other to build a more complete point cloud, or be used as a complement to existing point clouds extracted from other sources. This research will both improve the state of the art of 3D city modeling and inspire new ideas in related fields.
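The pairwise registration step described above estimates a translation, rotation and scale between two point clouds. As an illustrative sketch only (not the thesis's actual implementation), a closed-form Umeyama-style estimate recovers such a similarity transform from matched 3D keypoints; the function name is hypothetical:

```python
import numpy as np

def estimate_similarity(src, dst):
    # Closed-form (Umeyama-style) similarity estimate from matched keypoints:
    # finds s, R, t minimizing ||dst - (s * R @ src + t)|| over all pairs.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))   # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))             # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_s = (A ** 2).sum() / len(src)              # source variance
    s = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - s * (R @ mu_s)
    return s, R, t
```

Given noise-free correspondences, this recovers the exact transform; with noisy keypoint matches it gives the least-squares fit that an iterative refinement (as in the thesis) would then polish.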

  13. Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data

    NASA Astrophysics Data System (ADS)

    Du, L.; Zhong, R.; Sun, H.; Wu, Q.

    2017-09-01

    An automated method for tunnel deformation monitoring using high density point cloud data is presented. First, the 3D point cloud data are projected onto the XOY plane, and the projection of the central axis on that plane, named Uxoy, is calculated by combining the Alpha Shape algorithm with the RANSAC (Random Sample Consensus) algorithm. The projection of the central axis on the YOZ plane, named Uyoz, is then obtained from the highest and lowest points, which are extracted by intersecting the tunnel point cloud with straight lines that pass through each point of Uxoy perpendicular to the two-dimensional surface; together, Uxoy and Uyoz form the 3D central axis. Second, the buffer of each cross section is calculated by the K-nearest neighbor algorithm, and the initial cross-sectional point set is quickly constructed by projection. Finally, the cross sections are denoised and the section lines are fitted using iterative ellipse fitting. In order to improve the accuracy of the cross sections, a fine adjustment method is proposed that rotates the initial sectional plane around the intercept point in the horizontal and vertical directions within the buffer. The proposed method is applied to a Shanghai subway tunnel, and the deformation of each section in the direction of 0 to 360 degrees is calculated. The results show that the cross sections have deformed from regular circles into flattened circles owing to the high pressure at the top of the tunnel.
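The section-line fitting step can be pictured with a minimal algebraic conic fit. This is a simplified stand-in, not the paper's iterative ellipse-fitting procedure, and the function names are hypothetical:

```python
import numpy as np

def fit_conic(x, y):
    # Algebraic least-squares fit of A x^2 + B xy + C y^2 + D x + E y + F = 0:
    # the coefficient vector is the smallest right singular vector of the
    # design matrix (exact for noise-free points, up to overall scale).
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.linalg.svd(M)[2][-1]

def conic_center(c):
    # Center of the fitted conic: the point where both partial derivatives
    # of the conic polynomial vanish.
    A, B, C, D, E, _ = c
    return np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
```

Comparing the fitted center and axes across epochs is one way a cross section's flattening, as reported for the Shanghai tunnel, could be quantified.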

  14. The coordination of boundary tones and its interaction with prominence.

    PubMed

    Katsika, Argyro; Krivokapić, Jelena; Mooshammer, Christine; Tiede, Mark; Goldstein, Louis

    2014-05-01

    This study investigates the coordination of boundary tones as a function of stress and pitch accent. Boundary tone coordination has not been experimentally investigated previously, and the effect of prominence on this coordination, and whether it is lexical (stress-driven) or phrasal (pitch accent-driven) in nature is unclear. We assess these issues using a variety of syntactic constructions to elicit different boundary tones in an Electromagnetic Articulography (EMA) study of Greek. The results indicate that the onset of boundary tones co-occurs with the articulatory target of the final vowel. This timing is further modified by stress, but not by pitch accent: boundary tones are initiated earlier in words with non-final stress than in words with final stress regardless of accentual status. Visual data inspection reveals that phrase-final words are followed by acoustic pauses during which specific articulatory postures occur. Additional analyses show that these postures reach their achievement point at a stable temporal distance from boundary tone onsets regardless of stress position. Based on these results and parallel findings on boundary lengthening reported elsewhere, a novel approach to prosody is proposed within the context of Articulatory Phonology: rather than seeing prosodic (lexical and phrasal) events as independent entities, a set of coordination relations between them is suggested. The implications of this account for prosodic architecture are discussed.

  15. 77 FR 23740 - Sears Point Wetland and Watershed Restoration Project, Sonoma County, CA; Final Environmental...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-20

    ...We, the U.S. Fish and Wildlife Service (Service) and the California Department of Fish and Game (CDFG), in cooperation with the Sonoma Land Trust (SLT), announce that a final environmental impact report and environmental impact statement (EIR/EIS) for the Sears Point Wetland and Watershed Restoration Project is now available. The final EIR/EIS, which we prepared and now announce in accordance with the National Environmental Policy Act of 1969 (NEPA), describes the restoration of approximately 2,300 acres (ac) of former farmland located in Sonoma County, California, near the San Pablo Bay. The final EIR/EIS responds to all comments we received on the draft document. The restoration project, which would be implemented by the SLT, would restore natural estuarine ecosystems on diked baylands, while providing public access and recreational and educational opportunities compatible with ecological and cultural resources protection. The U.S. Army Corps of Engineers, San Francisco District, and the National Marine Fisheries Service of the National Oceanic and Atmospheric Administration are cooperating agencies on the final EIR/EIS.

  16. The point of entry contributes to the organization of exploratory behavior of rats on an open field: an example of spontaneous episodic memory.

    PubMed

    Nemati, Farshad; Whishaw, Ian Q

    2007-08-22

    The exploratory behavior of rats on an open field is organized in that animals spend disproportionate amounts of time at certain locations, termed home bases, which serve as centers for excursions. Although home bases are preferentially formed near distinctive cues, including visual cues, animals also visit and pause and move slowly, or linger, at many other locations in a test environment. In order to further examine the organization of exploratory behavior, the present study examined the influence of the point of entry on animals placed on an open field table that was illuminated either by room light or infrared light (a wavelength in which they cannot see) and near which, or on which, distinctive cues were placed. The main findings were that in both room light and infrared light tests, rats visited and lingered at the point of entry significantly more often than comparative control locations. Although the rats also visited and lingered in the vicinity of salient visual cues, the point of entry still remained a focus of visits. Finally, the preference for the point of entry increased as a function of salience of the cues marking that location. That the point of entry influences the organization of exploratory behavior is discussed in relation to the idea that the exploratory behavior of the rat is directed toward optimizing security as well as forming a spatial representation of the environment.

  17. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are more and more used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera which is used to enrich the accurate geometric information with the colors of the scanned objects. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from arbitrary viewpoints onto point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera that took the digital image. Finally, once the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is demonstrated in simulated and real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
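The final color-projection step — projecting the registered model into the image frame and assigning colors to points — can be sketched with a basic pinhole camera model. This is a simplification of the paper's VVS pipeline (visibility/occlusion handling is omitted), and the function names are hypothetical:

```python
import numpy as np

def project_points(pts, K, R, t):
    # Pinhole projection: world points (N, 3) -> pixel coordinates (N, 2)
    cam = pts @ R.T + t          # world -> camera frame
    uvw = cam @ K.T              # apply intrinsic matrix K
    return uvw[:, :2] / uvw[:, 2:3]

def colorize(pts, image, K, R, t):
    # Assign each 3D point the color of the pixel it projects to
    # (nearest-neighbour lookup; points outside the frame stay black).
    uv = np.round(project_points(pts, K, R, t)).astype(int)
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((len(pts), 3), dtype=image.dtype)
    colors[ok] = image[uv[ok, 1], uv[ok, 0]]
    return colors, ok
```

The quality of the resulting colors depends entirely on how accurately K, R and t were estimated, which is why the registration step is the crux of the paper.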

  18. Automatic Road Sign Inventory Using Mobile Mapping Systems

    NASA Astrophysics Data System (ADS)

    Soilán, M.; Riveiro, B.; Martínez-Sánchez, J.; Arias, P.

    2016-06-01

    The periodic inspection of certain infrastructure features plays a key role in road network safety and preservation, and in developing optimal maintenance planning that minimizes the life-cycle cost of the inspected features. Mobile Mapping Systems (MMS) use laser scanner technology to collect dense and precise three-dimensional point clouds that gather both geometric and radiometric information of the road network. Furthermore, time-stamped RGB imagery that is synchronized with the MMS trajectory is also available. In this paper a methodology for the automatic detection and classification of road signs from the point cloud and imagery data provided by a LYNX Mobile Mapper System is presented. First, road signs are detected in the point cloud. Subsequently, the inventory is enriched with geometric and contextual data such as orientation or distance to the trajectory. Finally, semantic content is given to the detected road signs. As the point cloud resolution is insufficient for this task, RGB imagery is used: the 3D points are projected into the corresponding images and the RGB data are analysed within the bounding box defined by the projected points. The methodology was tested in urban and road environments in Spain, obtaining global recall results greater than 95% and F-scores greater than 90%. In this way, inventory data are obtained in a fast, reliable manner and can be applied to improve the maintenance planning of the road network, or to feed a Spatial Information System (SIS), so that road sign information is available for use in a Smart City context.

  19. Studies of ZVS soft switching of dual-active-bridge isolated bidirectional DC-DC converters

    NASA Astrophysics Data System (ADS)

    Xu, Fei; Zhao, Feng; Shi, Qibiao; Wen, Xuhui

    2018-05-01

    To operate a dual-active-bridge isolated bidirectional dc-dc converter (DAB) at high efficiency, the switches of the two bridges must operate with Zero-Voltage Switching (ZVS) over as wide an operating range as possible. This paper proposes a new perspective on realizing ZVS during the dead-time. An exact theoretical analysis and mathematical model are built to explain the process of ZVS switching during the dead-time under the Single Phase Shift (SPS) control strategy. In order to ensure that the switches of both bridges operate with soft switching, every SPS switching point is analyzed. Generally, the dead-time is determined once the power electronic devices are selected. The key factor in realizing ZVS is how the end time of the resonance compares to the dead-time. Through detailed analysis, the conditions under which all switches achieve ZVS turn-on and turn-off can be obtained. Finally, simulation validates the theoretical analysis, and some advice is given on realizing ZVS soft switching.

  20. Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research

    PubMed Central

    Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi

    2016-01-01

    Traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods. However, all of them have some shortcomings. This paper analyzes the existing algorithms for traffic flow prediction and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. This method first analyzes the transfer probabilities of the roads upstream of the target road and then predicts the traffic flow at the next time step by using the traffic flow equation. The Newton interior-point method is used to obtain the optimal values of the parameters. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model has proven to have good performance: it obtains the optimal parameter values faster and has higher prediction accuracy, which makes it suitable for real-time traffic flow prediction. PMID:27872637
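The transfer-probability idea can be pictured as follows: the target road's next-interval flow is modeled as the sum of upstream flows, each weighted by the probability that traffic transfers onto the target road. The sketch below is a hedged illustration only — a plain least-squares fit stands in for the paper's Newton interior-point optimization, and the function names are hypothetical:

```python
import numpy as np

def estimate_transfer_probs(upstream_hist, target_next):
    # upstream_hist: (T, k) flows observed on k upstream roads at times t
    # target_next:   (T,)   flow observed on the target road at times t+1
    # Plain least squares stands in for the Newton interior-point step.
    p, *_ = np.linalg.lstsq(upstream_hist, target_next, rcond=None)
    return np.clip(p, 0.0, 1.0)   # keep the weights probability-like

def predict_flow(current_upstream, p):
    # One-step forecast: expected inflow transferred from the upstream roads
    return float(current_upstream @ p)
```

With noise-free synthetic data the weights are recovered exactly; on real counts the constrained optimization the paper uses would handle noise and the [0, 1] bounds more carefully than the clip above.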

  1. Geovisualisation of relief in a virtual reality system on the basis of low-level aerial imagery

    NASA Astrophysics Data System (ADS)

    Halik, Łukasz; Smaczyński, Maciej

    2017-12-01

    The aim of the following paper is to present the geomatic process of transforming low-level aerial imagery obtained with unmanned aerial vehicles (UAVs) into a digital terrain model (DTM) and implementing the model in a virtual reality (VR) system. The object of the study was a natural aggregate heap of irregular shape with elevation differences of up to 11 m. Based on the obtained photos, three point clouds (varying in level of detail) were generated for the 20,000 m² area. For further analyses, the researchers selected the point cloud with the best ratio of accuracy to output file size. This choice was made based on seven control points on the heap surveyed in the field and the corresponding points in the generated 3D model. The obtained differences of several centimetres between the control points in the field and those from the model might testify to the usefulness of the described algorithm for creating large-scale DTMs for engineering purposes. Finally, the chosen model was implemented in the VR system, which enables lifelike exploration of the 3D terrain relief in real time thanks to the first-person view (FPV) mode. In this mode, the user observes an object with the aid of a head-mounted display (HMD), experiencing the geovisualisation from the inside and virtually analysing the terrain as a direct animator of the observations.

  2. The relationship between students' counseling center contact and long-term educational outcomes.

    PubMed

    Scofield, Brett E; Stauffer, Ashley L; Locke, Benjamin D; Hayes, Jeffrey A; Hung, Ya-Chi; Nyce, Megan L; Christensen, Adam E; Yin, Alexander C

    2017-11-01

    Numerous studies have demonstrated that counseling centers deliver a positive impact on the emotional and social development of college students who receive services. These healthy outcomes, in turn, can lead to increased academic success, such as improved performance, retention, and persistence. While these short-term academic outcomes have been widely investigated, very few studies have explored the relationship between counseling center services and longer-term educational outcomes, such as final grade point average (GPA), time spent at the university, and degree completion. In the current study, counseling center usage, including appointments that were attended, cancelled, and no showed, as well as distal educational variables were examined within 2 cohorts of first-time full-time students over a 6-year period. Findings revealed that both users and nonusers of counseling center services spent a similar amount of time to degree completion and achieved comparable final semester GPAs as well. However, students who utilized counseling services graduated at a significantly lower rate (79.8%) than those who did not use services (86.2%) across the 6-year time span. Post hoc analyses indicated that among students who used counseling services, those who did not graduate scheduled significantly more services than those who graduated, suggesting that students who use the counseling center, and have more chronic and severe mental health problems, may be graduating at a lower rate. Implications are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Design and Implementation of a C++ Multithreaded Operational Tool for the Generation of Detection Time Grids in 2D for P- and S-waves taking into Consideration Seismic Network Topology and Data Latency

    NASA Astrophysics Data System (ADS)

    Sardina, V.

    2017-12-01

    The Pacific Tsunami Warning Center's round-the-clock operations rely on the rapid determination of the source parameters of earthquakes occurring around the world. To rapidly estimate source parameters such as earthquake location and magnitude, the PTWC analyzes data streams ingested in near-real time from a global network of more than 700 seismic stations. Both the density of this network and the data latency of its member stations at any given time have a direct impact on the speed at which the PTWC scientists on duty can locate an earthquake and estimate its magnitude. In this context, it is operationally advantageous to be able to assess how quickly the PTWC operational system can reasonably detect and locate an earthquake, estimate its magnitude, and send the corresponding tsunami message whenever appropriate. For this purpose, we designed and implemented a multithreaded C++ software package to generate detection time grids for both P- and S-waves after taking into consideration the seismic network topology and the data latency of its member stations. We first encapsulate all the parameters of interest at a given geographic point, such as geographic coordinates, P- and S-wave detection times at at least a minimum number of stations, and maximum allowed azimuth gap, into a DetectionTimePoint class. We then apply composition and inheritance to define a DetectionTimeLine class that handles a vector of DetectionTimePoint objects along a given latitude. A DetectionTimesGrid class in turn handles the dynamic allocation of new TravelTimeLine objects and assigns the calculation of the corresponding P- and S-wave detection times to new threads. Finally, we added a GUI that allows the user to interactively set all initial calculation parameters and output options. 
Initial testing on an eight-core system shows that generating a global 2D grid at 1 degree resolution, with detection set at a minimum of 5 stations and no azimuth gap restriction, takes under 25 seconds. Under the same initial conditions, generating a 2D grid at 0.1 degree resolution (2.6 million grid points) takes no more than 22 minutes. These preliminary results show a significant gain in grid generation speed compared to other implementations via either scripts or previous versions of the C++ code that did not use multithreading.
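The per-latitude threading strategy can be mimicked in a few lines of Python. This is a sketch only, with a constant P-wave speed, flat-earth distances and no azimuth-gap check — not the PTWC implementation, whose class and function names differ:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def detection_time(lat, lon, stations, v_kms=6.0, min_stations=5):
    # stations: (N, 3) array of [lat, lon, data latency in seconds].
    # Flat-earth distance and a constant wave speed are simplifications.
    d_km = np.hypot(stations[:, 0] - lat, stations[:, 1] - lon) * 111.0
    arrivals = d_km / v_kms + stations[:, 2]           # travel time + latency
    return float(np.sort(arrivals)[min_stations - 1])  # k-th station to report

def grid_row(lat, lons, stations, **kw):
    # One latitude line of the grid (the DetectionTimeLine analogue)
    return [detection_time(lat, lon, stations, **kw) for lon in lons]

def detection_grid(lats, lons, stations, workers=8, **kw):
    # Each latitude row is handed to a worker thread, mirroring the
    # row-per-thread scheme described in the abstract.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        rows = pool.map(lambda la: grid_row(la, lons, stations, **kw), lats)
    return np.array(list(rows))
```

In CPython, NumPy-light loops like this gain less from threads than the C++ original would; the point of the sketch is the decomposition, not the speed-up.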

  4. The utility of the KJOC score in professional baseball in the United States.

    PubMed

    Franz, Justin O; McCulloch, Patrick C; Kneip, Chris J; Noble, Philip C; Lintner, David M

    2013-09-01

    The Kerlan-Jobe Orthopaedic Clinic (KJOC) Shoulder and Elbow questionnaire has been shown by previous studies to be more sensitive than other validated subjective measurement tools in the detection of upper extremity dysfunction in overhead-throwing athletes. The primary objective was to establish normative data for KJOC scores in professional baseball players in the United States. The secondary objectives were to evaluate the effect of player age, playing position, professional competition level, history of injury, history of surgery, and time point of administration on the KJOC score. Cross-sectional study; Level of evidence, 3. From 2011 to 2012, a total of 203 major league and minor league baseball players within the Houston Astros professional baseball organization completed the KJOC questionnaire. The questionnaire was administered at 3 time points: spring training 2011, end of season 2011, and spring training 2012. The KJOC scores were analyzed for significant differences based on player age, injury history, surgery history, fielding position, competition level, self-reported playing status, and time point of KJOC administration. The average KJOC score among healthy players with no history of injury was 97.1 for major league players and 96.8 for minor league players. The time point of administration did not significantly affect the final KJOC score (P = .224), and KJOC outcomes did not vary with player age (r = -0.012; P = .867). Significantly lower average KJOC scores were reported by players with a history of upper extremity injury (86.7; P < .001) and upper extremity surgery (75.4; P < .0001). The KJOC results did vary with playing position (P = .0313), with the lowest average scores being reported by pitchers (90.9) and infielders (91.3). This study establishes a quantitative baseline for the future evaluation of professional baseball players with the KJOC score. 
Age and time of administration had no significant effect on the outcome of the KJOC score. Missed practices or games within the previous year because of injury were the most significant demographic predictors of lower KJOC scores. The KJOC score was shown to be a sensitive measurement tool for detecting subtle changes in the upper extremity performance of the professional baseball population studied.

  5. In utero programming alters adult response to chronic mild stress: part 3 of a longitudinal study.

    PubMed

    Baker, Stephanie L; Mileva, Guergana; Huta, Veronika; Bielajew, Catherine

    2014-11-07

    Exposure to stress before birth may lay the foundation for the development of sensitivities to, or protection from, psychiatric disorders, while later stress exposure may trigger either their expression or suppression. This report, part three of a longitudinal study conducted in our laboratory, aimed to examine the interaction between early and adult stress and their effects on measures of anxiety and depression. In parts one and two, we reported the effects of gestational stress (GS) in Long Evans rat dams and their juvenile and young adult offspring. In this third and final installment, we evaluated the effects of GS and chronic mild stress (CMS) in the adult female offspring at 6 month and 12 month time-points. The two-by-two design included a combination of GS and CMS and the appropriate control groups. Using hierarchical linear modeling, a main effect of GS on corticosterone level at the 12 month time-point was found, while main effects of CMS were seen in body weight, sucrose preference, and corticosterone, along with significant interactions between groups at the 6 and 12 month time-points. The GS group had the lowest sucrose preference during CMS at 6 months, supporting a cumulative effect of early and later life stress. The GS/CMS group showed lower corticosterone at 12 months than the GS/noCMS group, indicating a possible mismatch between prenatal programming and later life stress. These results highlight the importance of early life factors in exerting potentially protective effects in models involving later life stress. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. 34 CFR 685.202 - Charges for which Direct Loan Program borrowers are responsible.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... held prior to that June 1 plus 3.1 percentage points, but does not exceed 8.25 percent. (ii) Loans... plus 2.5 percentage points, but does not exceed 8.25 percent. (B) During all other periods. The...-day Treasury bills auctioned at the final auction held prior to that June 1 plus 3.1 percentage points...

  7. Reporting the cost-effectiveness of interventions with nonsignificant effect differences: example from a trial of secondary prevention of coronary heart disease.

    PubMed

    Johnston, Katharine; Gray, Alastair; Moher, Michael; Yudkin, Patricia; Wright, Lucy; Mant, David

    2003-01-01

    This study reports the cost-effectiveness of interventions with nonsignificant differences in effect, and considers reporting of cost-effectiveness in situations where nonsignificant differences arise in some but not all end points. Data on costs and effects associated with three end points (adequate assessment, risk factors, and life-years) were derived from a trial of methods to promote secondary prevention of coronary heart disease. Incremental cost per life-year gained figures were calculated, and the uncertainty around these was displayed on cost-effectiveness planes in the form of ellipses. There was a significant difference in one of the intermediate end points (adequate assessment) but nonsignificant differences in the other intermediate end point (risk factors) and the final end point (life-years). Estimation of cost per life-year figures revealed the cost-effectiveness of the interventions to be unfavorable. Cost-effectiveness ratios based on final end points should be calculated even in situations where nonsignificant differences in life-years arise, to avoid publication bias and to provide decision makers with useful information. Uncertainty in the incremental cost-effectiveness ratios should be estimated and presented graphically.

  8. On the Coupling Time of the Heat-Bath Process for the Fortuin-Kasteleyn Random-Cluster Model

    NASA Astrophysics Data System (ADS)

    Collevecchio, Andrea; Elçi, Eren Metin; Garoni, Timothy M.; Weigel, Martin

    2018-01-01

    We consider the coupling from the past implementation of the random-cluster heat-bath process, and study its random running time, or coupling time. We focus on hypercubic lattices embedded on tori, in dimensions one to three, with cluster fugacity at least one. We make a number of conjectures regarding the asymptotic behaviour of the coupling time, motivated by rigorous results in one dimension and Monte Carlo simulations in dimensions two and three. Amongst our findings, we observe that, for generic parameter values, the distribution of the appropriately standardized coupling time converges to a Gumbel distribution, and that the standard deviation of the coupling time is asymptotic to an explicit universal constant multiple of the relaxation time. Perhaps surprisingly, we observe these results to hold both off criticality, where the coupling time closely mimics the coupon collector's problem, and also at the critical point, provided the cluster fugacity is below the value at which the transition becomes discontinuous. Finally, we consider analogous questions for the single-spin Ising heat-bath process.
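The coupon-collector analogy mentioned above can be checked empirically: for n coupon types, the collection time T_n, standardized as (T_n - n ln n)/n, converges in distribution to a standard Gumbel law, paralleling the Gumbel limit the authors observe for the coupling time. A small simulation sketch (function name hypothetical):

```python
import numpy as np

def coupon_collector_time(n, rng):
    # Number of uniform draws needed to see all n coupon types at least once.
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(int(rng.integers(n)))
        draws += 1
    return draws

# The exact mean is n * H_n (H_n the n-th harmonic number), and the
# standardized time (T_n - n*log(n)) / n tends to a Gumbel distribution.
```

Averaging many runs reproduces the n·H_n mean closely; a histogram of the standardized times visibly matches the Gumbel density.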

  9. Classical and quantum Reissner-Nordström black hole thermodynamics and first order phase transition

    NASA Astrophysics Data System (ADS)

    Ghaffarnejad, Hossein

    2016-01-01

    First, we consider the classical Reissner-Nordström black hole (CRNBH) metric, which is obtained by solving the Einstein-Maxwell metric equation for a point electric charge e inside a spherical static body with mass M. It has two horizons, interior and exterior. Using the Bekenstein-Hawking entropy theorem, we calculate the interior and exterior entropy, temperature, Gibbs free energy and heat capacity at constant electric charge. We calculate the first derivative of the Gibbs free energy with respect to temperature, which becomes a singular function with a singularity at the critical point M_c = 2|e|/√3, with corresponding temperature T_c = 1/(24π√(3|e|)). Hence we claim that a first order phase transition happens there. The temperature, like the Gibbs free energy, takes strictly positive (negative) values on the exterior (interior) horizon. The Gibbs free energy takes two different positive values simultaneously for 0 < T < T_c but not for negative values, which means the system is made of two subsystems. For negative temperatures the entropy reaches zero as T → -∞, corresponding to a Bose-Einstein-condensate-like single state. The entropy increases monotonically for 0 < T < T_c. Following the results of Wang and Huang (Phys. Rev. D 63:124014, 2001), we recalculate the above thermodynamical variables for the remnant stable final state of an evaporating quantum Reissner-Nordström black hole (QRNBH) and obtain results similar to those of the CRNBH. Finally, we solve the mass loss equation of the QRNBH against the advanced Eddington-Finkelstein time coordinate and derive the luminosity function. We find that QRNBH evaporation switches off before the mass completely vanishes: it reaches a cold, lukewarm type of RN black hole whose final remnant mass is m_final = |e| in geometrical units. Its temperature and luminosity vanish, unlike in the Schwarzschild case of evaporation. Our calculations permit some acceptable statements about the information loss paradox (ILP).

  10. The Geodetic Monitoring of the Engineering Structure - A Practical Solution of the Problem in 3D Space

    NASA Astrophysics Data System (ADS)

    Filipiak-Kowszyk, Daria; Janowski, Artur; Kamiński, Waldemar; Makowska, Karolina; Szulwic, Jakub; Wilde, Krzysztof

    2016-12-01

    The study raises issues concerning an automatic system designed for monitoring the movement of controlled points located on the roof covering of the Forest Opera in Sopot, and presents the calculation algorithm proposed by the authors. The algorithm takes into account the specific design and location of the test object. The high forest stand makes it difficult to use distant reference points. Hence the reference points used to verify the stability of the measuring station are located on the ground elements of the six-meter-deep concrete foundations, from which the steel arches rise to support the roof covering (membrane) of the Forest Opera. The tacheometer used in the measurements is located in a glass enclosure placed on a special platform attached to the steel arches. Measurements of horizontal directions, vertical angles and distances can additionally be subject to errors caused by the laser beam passing through the glass. Dynamic changes in weather conditions, including temperature and pressure, also have a significant impact on the magnitude of the measurement errors, and thus on the accuracy of the final determinations, represented by the relevant covariance matrices. The estimated coordinates of the reference points, controlled points and tacheometer, along with the corresponding covariance matrices obtained from the calculations in the various epochs, are used to determine the significance of the detected movements. If the reference points are stable, the algorithm also allows changes in the position of the tacheometer over time to be studied on the basis of measurements performed on these points.

  11. Study of a hydraulic dicalcium phosphate dihydrate/calcium oxide-based cement for dental applications.

    PubMed

    el-Briak, Hasna; Durand, Denis; Nurit, Josiane; Munier, Sylvie; Pauvert, Bernard; Boudeville, Phillipe

    2002-01-01

    By mixing CaHPO(4) x 2H(2)O (DCPD) and CaO with water or sodium phosphate buffers as the liquid phase, a calcium phosphate cement was obtained. Its physical and mechanical properties, such as compressive strength, initial and final setting times, cohesion time, dough time, swelling time, dimensional and thermal behavior, and injectability, were investigated by varying different parameters such as the liquid-to-powder (L/P) ratio (0.35-0.7 ml g(-1)), the molar calcium-to-phosphate (Ca/P) ratio (1.67-2.5), and the pH (4, 7 and 9) and concentration (0-1 M) of the sodium phosphate buffer. The best results were obtained with the pH 7 sodium phosphate buffer at a concentration of 0.75 M. With this liquid phase, the physical and mechanical properties depended on the Ca/P and L/P ratios, varying from 3 to 11 MPa (compressive strength), 6 to 10 min (initial setting time), 11 to 15 min (final setting time), 15 to 30 min (swelling time), and 7 to 20 min (time of 100% injectability). The dough or working time was over 16 min. This cement expanded during setting (1.2-5% depending on the Ca/P and L/P ratios); this would allow a tight filling. Given the mechanical and rheological properties of this new DCPD/CaO-based cement, its use as a root canal sealing material, like classical calcium hydroxide or ZnO/eugenol-based pastes, with or without a gutta-percha point, can be considered. Copyright 2002 Wiley Periodicals, Inc. J Biomed Mater Res (Appl Biomater) 63: 447-453, 2002

  12. Automatic Tie Pointer for In-Situ Pointing Correction

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.

    2011-01-01

    The MARSAUTOTIE program generates tie points for use with the Mars pointing correction software "In-Situ Pointing Correction and Rover Microlocalization" (NPO-46696), Software Tech Briefs, Vol. 34, No. 9 (September 2010), page 18, in a completely automated manner, with no operator intervention. It takes the place of MARSTIE, although MARSTIE can still be used to interactively edit the tie points afterwards. These tie points are used to create a mosaic whose seams (boundaries of input images) have been geometrically corrected to reduce or eliminate errors and mis-registrations. The methods used to find appropriate tie points for creating a mosaic are unique, having been designed to work in concert with the "MARSNAV" program to be most effective in reducing or eliminating geometric seams in a mosaic. The program takes the input images and finds overlaps according to the nominal pointing. It then finds the most interesting areas using a scene-activity metric; points with higher scene activity are more likely to correlate successfully in the next step. It then uses correlation techniques to find matching points in the overlapping image. Finally, it performs a series of steps to reduce the number of tie points to a manageable level. These steps incorporate a number of heuristics devised from experience gathered by tie-pointing mosaics manually during MER operations. The software makes use of the PIG library as described in "Planetary Image Geometry Library" (NPO-46658), NASA Tech Briefs, Vol. 34, No. 12 (December 2010), page 30, so it is multi-mission, applicable without change to any in-situ mission supported by PIG. The MARSAUTOTIE algorithm is automated, so it requires no user intervention. Although at the time of this reporting it has not been done, this program should be suitable for integration into a fully automated mosaic production pipeline.
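
The two middle steps, ranking candidate points by a scene-activity metric and matching them by correlation, can be illustrated generically. The functions below are a simplified sketch (local intensity variance as the activity metric, normalized cross-correlation for matching), not the actual MARSAUTOTIE code.

```python
def activity(img, r, c, w=2):
    """Scene-activity metric: intensity variance in a (2w+1)x(2w+1)
    window around (r, c). Higher variance marks 'interesting' areas
    that are more likely to correlate successfully."""
    vals = [img[r + i][c + j]
            for i in range(-w, w + 1) for j in range(-w, w + 1)]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patch vectors;
    1.0 means a perfect match up to brightness/contrast."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

flat = [[5] * 7 for _ in range(7)]                            # featureless area
texture = [[(r * 3 + c * 5) % 11 for c in range(7)] for r in range(7)]
print(activity(texture, 3, 3) > activity(flat, 3, 3))  # True
patch = [1, 4, 2, 8, 5]
print(round(ncc(patch, patch), 6))                     # 1.0
```

A pipeline in this spirit would evaluate `activity` over the overlap region predicted by the nominal pointing, keep the highest-scoring points, then search the other image for the patch maximizing `ncc`.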

  13. Final report: Bilateral key comparison SIM.T-K6.3 on humidity standards in the dew/frost-point temperature range from -30°C to 20°C

    NASA Astrophysics Data System (ADS)

    Huang, Peter; Meyer, Christopher; Brionizio, Julio D.

    2015-01-01

    A Regional Metrology Organization (RMO) Key Comparison of dew/frost-point temperatures was carried out by the National Institute of Standards and Technology (NIST, USA) and the Instituto Nacional de Metrologia, Qualidade e Tecnologia (INMETRO, Brazil) between October 2009 and March 2010. The results of this comparison are reported here, along with descriptions of the humidity laboratory standards of NIST and INMETRO and the uncertainty budgets for these standards. This report also describes the protocol for the comparison and presents the data acquired. The results are analyzed to determine the degree of equivalence between the dew/frost-point standards of NIST and INMETRO. This text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
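
In a bilateral comparison, the degree of equivalence at each dew/frost-point temperature is conventionally the difference of the two laboratory results together with its expanded uncertainty. A minimal sketch with illustrative numbers (not values from this report), assuming uncorrelated standard uncertainties and a coverage factor k = 2:

```python
import math

def degree_of_equivalence(x_a, u_a, x_b, u_b, k=2.0):
    """Degree of equivalence between two laboratory results.

    Returns (d, U) where d = x_a - x_b and U = k * sqrt(u_a^2 + u_b^2)
    is the expanded uncertainty of the difference (k=2 corresponds to
    roughly a 95 % level for uncorrelated standard uncertainties).
    """
    d = x_a - x_b
    U = k * math.sqrt(u_a ** 2 + u_b ** 2)
    return d, U

# Hypothetical dew-point results near -30 degC (degC values and
# standard uncertainties are illustrative only).
d, U = degree_of_equivalence(-30.012, 0.015, -30.025, 0.020)
print(abs(d) <= U)  # True: equivalence supported when |d| <= U
```

When |d| ≤ U the two standards are considered equivalent within the claimed uncertainties at that comparison point.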

  14. Utilizing the Iterative Closest Point (ICP) algorithm for enhanced registration of high resolution surface models - more than a simple black-box application

    NASA Astrophysics Data System (ADS)

    Stöcker, Claudia; Eltner, Anette

    2016-04-01

    Advances in computer vision and digital photogrammetry (i.e. structure from motion) allow for fast and flexible high-resolution data supply. Within geoscience applications, and especially in the field of small-scale surface topography, high-resolution digital terrain models and dense 3D point clouds are valuable data sources for capturing current states as well as for multi-temporal studies. However, there are still some limitations regarding robust registration and accuracy demands (e.g. systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Therefore, post-processing of 3D point clouds can greatly enhance data quality. Here the Iterative Closest Point (ICP) algorithm represents an alignment tool which iteratively minimizes the distances between corresponding points in two datasets. Even though the tool is widely used, it is often applied as a black-box application within 3D data post-processing for surface reconstruction. Aiming for a precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm, including the sub-steps of point selection, point matching, weighting, rejection, error metric and minimization. To this end, an agriculturally utilized field was surveyed simultaneously by terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors at two times (once covered with sparse vegetation and once bare soil). Due to the different perspectives, the two data sets differ in their coverage, with shadowed areas and thus gaps, so that merging the data would yield a more consistent surface reconstruction. Although the photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset relative to the TLS point cloud. In order to obtain the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested.
Statistical analyses of the results show that the final success of the registration, and therefore the data quality, depends particularly on the parameterization and the choice of error metric, especially for erroneous data sets as in the case of sparse vegetation cover. Here, the point-to-point metric is more sensitive to data "noise" than the point-to-plane metric, which results in considerably higher cloud-to-cloud distances. In conclusion, given the accuracy demands of high-resolution surface reconstruction, and the fact that ground control surveys can reach their limits in both time expenditure and terrain accessibility, the ICP algorithm represents a great tool for refining a rough initial alignment. The different variants of the registration modules allow for individual application according to the quality of the input data.
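
The sub-steps listed above (matching, error metric, minimization) can be made concrete with a deliberately minimal 2D point-to-point ICP sketch. Real implementations add point selection, weighting and rejection stages, work in 3D, and often use the point-to-plane metric discussed above; this toy version uses brute-force nearest-neighbour matching and a closed-form rigid (Procrustes) update per iteration.

```python
import math

def icp_2d(src, dst, iters=20):
    """Minimal 2D point-to-point ICP: align src onto dst."""
    src = [list(p) for p in src]
    n = float(len(src))
    for _ in range(iters):
        # Matching: nearest dst point for every src point (brute force).
        pairs = [(p, min(dst, key=lambda q: (q[0] - p[0]) ** 2 +
                                            (q[1] - p[1]) ** 2))
                 for p in src]
        # Minimization: closed-form rigid transform for these pairs.
        cx = sum(p[0] for p, _ in pairs) / n
        cy = sum(p[1] for p, _ in pairs) / n
        qx = sum(q[0] for _, q in pairs) / n
        qy = sum(q[1] for _, q in pairs) / n
        s_sin = sum((p[0] - cx) * (q[1] - qy) - (p[1] - cy) * (q[0] - qx)
                    for p, q in pairs)
        s_cos = sum((p[0] - cx) * (q[0] - qx) + (p[1] - cy) * (q[1] - qy)
                    for p, q in pairs)
        th = math.atan2(s_sin, s_cos)  # optimal rotation angle
        c, s = math.cos(th), math.sin(th)
        for p in src:  # apply rotation about src centroid, then translate
            x, y = p[0] - cx, p[1] - cy
            p[0], p[1] = c * x - s * y + qx, s * x + c * y + qy
    return src

dst = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
src = [(x + 0.3, y - 0.2) for x, y in dst]   # offset copy, like the UAV cloud
aligned = icp_2d(src, dst)
print(all(abs(p[0] - q[0]) + abs(p[1] - q[1]) < 1e-9
          for p, q in zip(aligned, dst)))  # True
```

As the study stresses, convergence of such a scheme depends on the initial alignment and the error metric; with noisy or sparse-vegetation data, weighting and rejection of bad pairs become essential.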

  15. Evaluation of Rocky Point Viaduct Concrete Beam : Final Report

    DOT National Transportation Integrated Search

    2000-06-01

    This study was intended to determine why it was necessary to replace the Rocky Point Viaduct after a period of service that was much shorter than that of many other reinforced concrete bridges on the Oregon coast; to identify construction practices t...

  16. 77 FR 38495 - Safety Zone; Village of Sodus Point Fireworks Display, Sodus Bay, Sodus Point, NY

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-28

    .... Regulatory History and Information The Coast Guard is issuing this temporary final rule without prior notice... of any grant or loan recipients, and will not raise any novel legal or policy issues. The safety zone...

  17. 77 FR 8855 - Final Reissuance of the NPDES General Permit for Facilities Related to Oil and Gas Extraction in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-15

    ... Protection Agency. ACTION: Notice of Final NPDES General Permit. SUMMARY: The Director of the Water Quality... Extraction Point Source Category as authorized by section 402 of the Clean Water Act, 33 U.S.C. 1342 (CWA... change to the proposed permit. A copy of the Region's responses to comments and the final permit may be...

  18. Trend analysis of a tropical urban river water quality in Malaysia.

    PubMed

    Othman, Faridah; M E, Alaa Eldin; Mohamed, Ibrahim

    2012-12-01

    Rivers play a significant role in providing water resources for human and ecosystem survival and health. Hence, river water quality is an important parameter that must be preserved and monitored. As the state of Selangor and the city of Kuala Lumpur, Malaysia, are undergoing tremendous development, the river is subjected to pollution from point and non-point sources. The water quality of the Klang River basin, one of the most densely populated areas within the region, is significantly degraded due to human activities as well as urbanization. The overall river water quality status is normally represented by a water quality index (WQI), which consists of six parameters, namely dissolved oxygen, biochemical oxygen demand, chemical oxygen demand, suspended solids, ammoniacal nitrogen and pH. The objectives of this study are to assess the water quality status of this tropical, urban river and to establish the WQI trend. Using monthly WQI data from 1997 to 2007, time series were plotted and trend analysis was performed by fitting a first-order autocorrelated trend model to the moving average values for every station. The initial and final values of either the moving average or the trend model were used as estimates of the initial and final WQI at the stations. It was found that Klang River water quality showed some improvement between 1997 and 2007. Water quality remains good in the upstream area, which provides vital water sources for water treatment plants in the Klang valley. Meanwhile, the water quality has also improved at other stations.
Results of the current study suggest that the present policy on managing river quality in the Klang River has produced encouraging results; the policy should, however, be further improved alongside more vigorous monitoring of pollution discharge from various point sources such as industrial wastewater, municipal sewers, wet markets, sand mining and landfills, as well as non-point sources such as agricultural or urban runoff and commercial activity.
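
The smoothing-then-trend procedure can be sketched as follows. For brevity this uses a plain OLS line on the moving averages rather than the study's first-order autocorrelated trend model; the endpoint fits play the role of the initial and final WQI estimates, and the WQI series here is synthetic, not the Klang River data.

```python
def moving_average(x, w=12):
    """Trailing moving average over a window of w samples."""
    return [sum(x[i - w + 1:i + 1]) / w for i in range(w - 1, len(x))]

def linear_trend(y):
    """OLS line y_t = a + b*t; returns (intercept a, slope b)."""
    n = len(y)
    t = range(n)
    mt, my = sum(t) / n, sum(y) / n
    b = (sum((ti - mt) * (yi - my) for ti, yi in zip(t, y)) /
         sum((ti - mt) ** 2 for ti in t))
    return my - b * mt, b

wqi = [60.0 + 0.1 * i for i in range(24)]    # synthetic monthly WQI series
ma = moving_average(wqi)                     # 12-month smoothing
a, b = linear_trend(ma)
initial, final = a, a + b * (len(ma) - 1)    # endpoint estimates of the WQI
print(final > initial)  # True: an improving trend
```

A positive slope on the smoothed series corresponds to the kind of improvement reported between 1997 and 2007; an AR(1) error term would additionally absorb month-to-month autocorrelation.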

  19. Investigation of Strain Aging in the Ordered Intermetallic Compound beta-NiAl. Ph.D. Thesis Final Contractor Report

    NASA Technical Reports Server (NTRS)

    Weaver, Mark Lovell

    1995-01-01

    The phenomenon of strain aging has been investigated in polycrystalline and single crystal NiAl alloys at temperatures between 300 and 1200 K. Static strain aging studies revealed that after annealing at 1100 K for 7200 s (i.e., 2 h) followed by furnace cooling, high-purity, nitrogen-doped and titanium-doped polycrystalline alloys exhibited continuous yielding, while conventional-purity and carbon-doped alloys exhibited distinct yield points and Luders strains. Prestraining by hydrostatic pressurization removed the yield points, but they could be reintroduced by further annealing treatments. Yield points could be reintroduced more rapidly if the specimens were prestrained uniaxially rather than hydrostatically, owing to the arrangement of dislocations into cell structures during uniaxial deformation. The time dependence of the strain aging events followed a t^(2/3) relationship, suggesting that the yield points observed in polycrystalline NiAl were the result of the pinning of mobile dislocations by interstitials, specifically carbon. Between 700 and 800 K, yield stress plateaus, yield stress transients upon a ten-fold increase in strain rate, work hardening peaks, and dips in the strain rate sensitivity (SRS) have been observed in conventional-purity and carbon-doped polycrystals. In single crystals, similar behavior was observed; in conventional-purity single crystals, however, the strain rate sensitivity became negative, resulting in serrated yielding, whereas the strain rate sensitivity remained positive in high-purity and molybdenum-doped NiAl. These observations are indicative of dynamic strain aging (DSA) and are discussed in terms of conventional strain aging theories. The impact of these phenomena on composition-structure-property relations is discussed. Finally, a good correlation has been demonstrated between the properties of NiAl alloys and a model for strain aging in metals and alloys recently developed by Reed-Hill et al.
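
The t^(2/3) kinetics mentioned above (the classic Cottrell-Bilby form for interstitial pinning of dislocations) amount to fitting Δσ = A·t^(2/3) to the measured yield-point return as a function of aging time. A small sketch with synthetic data (not the thesis measurements): the fit is a linear regression through the origin in the variable x = t^(2/3).

```python
def fit_t23(times, dsigma):
    """Least-squares fit of the aging law dsigma = A * t^(2/3):
    linear regression through the origin in x = t^(2/3)."""
    xs = [t ** (2.0 / 3.0) for t in times]
    return sum(x * y for x, y in zip(xs, dsigma)) / sum(x * x for x in xs)

times = [100.0, 400.0, 900.0, 1600.0]              # aging times in seconds
dsigma = [0.5 * t ** (2.0 / 3.0) for t in times]   # synthetic yield-point return
print(abs(fit_t23(times, dsigma) - 0.5) < 1e-9)    # True: recovers A = 0.5
```

On real data, linearity of Δσ against t^(2/3) is the signature that solute pinning, rather than some other relaxation process, controls the return of the yield point.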

  20. On the reliability of the holographic method for measurement of soft tissue modifications during periodontal therapy

    NASA Astrophysics Data System (ADS)

    Stratul, Stefan-Ioan; Sinescu, Cosmin; Negrutiu, Meda; de Sabata, Aldo; Rominu, Mihai; Ogodescu, Alexandru; Rusu, Darian

    2014-01-01

    Holographic evaluations count among recent measurement tools in orthodontics and prosthodontics. This research introduces holography as a method for assessing 3D variations in gingival retractions. The retraction of the gingiva in the frontal regions of 5 patients with periodontitis was measured at six points and evaluated by holographic methods using a He-Ne laser device (1 mW, Superlum, Carrigtwohill, Ireland) inside a 200 × 100 cm holographic bench. Impressions were taken during the first visit and cast models were manufactured. Six months after the end of periodontal treatment, the clinical measurements were repeated and the hologram of the first model was superimposed on the final model cast, using reference points, while maintaining the optical geometric parameters. The retractions were evaluated in 3D at every point using dedicated software (SigmaScan Pro, Systat Software, San Jose, CA, USA). The Wilcoxon test was used to compare the mean recession changes between baseline and six months after treatment, and between the in vivo values and the values on the hologram. No statistically significant differences between the in vivo values and those on the hologram were found. In conclusion, holography provides a valuable tool for assessing gingival retractions on virtual models. The data can be stored, reproduced, transmitted and compared at a later time point with accuracy.