The Woodworker's Website: A Project Management Case Study
ERIC Educational Resources Information Center
Jance, Marsha
2014-01-01
A case study that focuses on building a website for a woodworking business is discussed. Project management and linear programming techniques can be used to determine the time required to complete the website project discussed in the case. This case can be assigned to students in an undergraduate or graduate decision modeling or management science…
2007-08-01
Infinite plate with a hole: sequence of meshes produced by h-refinement. The geometry of the coarsest mesh...recalled with an emphasis on k-refinement. In Section 3, the use of high-order NURBS within a projection technique is studied in the geometrically linear...case with a B̄ method to investigate the choice of approximation and projection spaces with NURBS.
Anomaly General Circulation Models.
NASA Astrophysics Data System (ADS)
Navarra, Antonio
The feasibility of the anomaly model is assessed using barotropic and baroclinic models. In the barotropic case, both a stationary and a time-dependent model have been formulated and constructed, whereas only the stationary, linear case is considered for the baroclinic model. Results from the barotropic model indicate that a relation exists between the stationary solution and the time-averaged non-linear solution. The stationary linear baroclinic solution can therefore be considered with some confidence. The linear baroclinic anomaly model poses a formidable mathematical problem because it is necessary to solve a gigantic linear system to obtain the solution. A new method for finding the solution of large linear systems, based on a projection onto the Krylov subspace, is shown to be successful when applied to the linearized baroclinic anomaly model. The scheme consists of projecting the original linear system onto the Krylov subspace, thereby reducing the dimensionality of the matrix to be inverted to obtain the solution. With an appropriate setting of the damping parameters, the iterative Krylov method reaches a solution even using a Krylov subspace ten times smaller than the original space of the problem. This generality allows the treatment of the important problem of linear waves in the atmosphere. A larger class (nonzonally symmetric) of basic states can now be treated for the baroclinic primitive equations. These problems lead to large unsymmetric linear systems of order 10,000 and more, which can now be successfully tackled by the Krylov method. The (R7) linear anomaly model is used to investigate extensively the linear response to equatorial and mid-latitude prescribed heating. The results indicate that the solution is deeply affected by the presence of the stationary waves in the basic state. The instability of the asymmetric flows, first pointed out by Simmons et al. (1983), is also active in the baroclinic case. However, the presence of baroclinic processes modifies the dominant response. The most sensitive areas are identified; they correspond to northern Japan, the Pole, and the Greenland regions. A limited set of higher-resolution (R15) experiments indicates that this situation persists and is enhanced at higher resolution. The linear anomaly model is also applied to a realistic case. (Abstract shortened with permission of author.)
Projective formulation of Maggi's method for nonholonomic systems analysis
NASA Astrophysics Data System (ADS)
Blajer, Wojciech
1992-04-01
A projective interpretation of Maggi's approach to the dynamic analysis of nonholonomic systems is presented. Both linear and nonlinear constraint cases are treated in a unified fashion, using the language of vector spaces and tensor algebra.
Local projection stabilization for linearized Brinkman-Forchheimer-Darcy equation
NASA Astrophysics Data System (ADS)
Skrzypacz, Piotr
2017-09-01
The Local Projection Stabilization (LPS) is presented for the linearized Brinkman-Forchheimer-Darcy equation at high Reynolds numbers. The considered equation can be used to model porous medium flows in chemical reactors of packed bed type. A detailed finite element analysis is presented for the case of nonconstant porosity. The enriched variant of LPS is based on equal-order interpolation for the velocity and pressure. Optimal error bounds for the velocity and pressure are justified numerically.
On a new iterative method for solving linear systems and comparison results
NASA Astrophysics Data System (ADS)
Jing, Yan-Fei; Huang, Ting-Zhu
2008-10-01
In Ujevic [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered as a modification of the Gauss-Seidel method. In this paper, we show that this method is a special case from the point of view of projection techniques. A different approach is then established, which is proven, both theoretically and numerically, to be at least as good as, and generally better than, Ujevic's. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujevic's method.
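As a rough illustration of the projection viewpoint invoked above (a hedged sketch, not Ujevic's scheme or the authors' modification), a one-dimensional projection method updates the iterate along a search direction so that the new residual is orthogonal to that direction; cycling the direction through the coordinate vectors reproduces a Gauss-Seidel sweep.

```python
import numpy as np

def projection_iteration(A, b, x0=None, sweeps=50):
    """One-dimensional projection method: update x along a direction d so
    that the new residual r = b - A x is orthogonal to d. Cycling d through
    the coordinate vectors e_i reproduces the Gauss-Seidel sweep."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(n):
            r = b - A @ x                      # current residual
            d = np.zeros(n)
            d[i] = 1.0                         # coordinate direction e_i
            x += (r @ d) / (d @ (A @ d)) * d   # step length r_i / a_ii
    return x

# tiny usage example on a diagonally dominant system
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(projection_iteration(A, b), np.linalg.solve(A, b))
```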
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, J; Qi, H; Wu, S
Purpose: In transmitted X-ray tomography imaging, projections are sometimes incomplete due to a variety of reasons, such as geometry inaccuracy, defective detector cells, etc. To address this issue, we have derived a direct consistency condition based on John's equation, and proposed a method to effectively restore incomplete projections based on this consistency condition. Methods: Through parameter substitutions, we have derived a direct consistency condition equation from John's equation, in which the left side is only the projection derivative with respect to view and the right side contains projection derivatives with respect to the other geometrical parameters. Based on this consistency condition, a projection restoration method is proposed, which includes five steps: 1) Forward projecting the reconstructed image and using linear interpolation to estimate the incomplete projections as the initial result; 2) Performing a Fourier transform on the projections; 3) Restoring the incomplete frequency data using the consistency condition equation; 4) Performing an inverse Fourier transform; 5) Repeating steps 2)-4) until our criterion is met to terminate the iteration. Results: A beam-blocking-based scatter correction case and a bad-pixel correction case were used to demonstrate the efficacy and robustness of our restoration method. The mean absolute error (MAE), signal-to-noise ratio (SNR) and mean square error (MSE) were employed as our evaluation metrics of the reconstructed images. For the scatter correction case, the MAE is reduced from 63.3% to 71.7% with 4 iterations. Compared with the existing Patch's method, the MAE of our method is further reduced by 8.72%. For the bad-pixel case, the SNR of the reconstructed image by our method is increased from 13.49% to 21.48%, with the MSE being decreased by 45.95%, compared with the linear interpolation method. Conclusion: Our studies have demonstrated that our restoration method based on the new consistency condition can effectively restore incomplete projections, especially their high-frequency components.
ERIC Educational Resources Information Center
Shi, Yixun
2009-01-01
Based on a sequence of points and a particular linear transformation generalized from this sequence, two recent papers (E. Mauch and Y. Shi, "Using a sequence of number pairs as an example in teaching mathematics". Math. Comput. Educ., 39 (2005), pp. 198-205; Y. Shi, "Case study projects for college mathematics courses based on a particular…
Method of Conjugate Radii for Solving Linear and Nonlinear Systems
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.
1999-01-01
This paper describes a method to solve a system of N linear equations in N steps. A quadratic form is developed involving the sum of the squares of the residuals of the equations. Equating the quadratic form to a constant yields a surface which is an ellipsoid. For different constants, a family of similar ellipsoids can be generated. Starting at an arbitrary point, an orthogonal basis is constructed and the center of the family of similar ellipsoids is found in this basis by a sequence of projections. The coordinates of the center in this basis are the solution of the linear system of equations. A quadratic form in N variables requires N projections; that is, the current method is an exact method. It is shown that the sequence of projections is equivalent to a special case of the Gram-Schmidt orthogonalization process. The current method enjoys an advantage not shared by the classic Method of Conjugate Gradients: it can be extended to nonlinear systems without modification. For nonlinear equations, the Method of Conjugate Gradients has to be augmented with a line-search procedure. Results for linear and nonlinear problems are presented.
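For orientation, the sketch below shows a classical conjugate-directions solve that reaches the exact solution of a symmetric positive definite system in N projections, with the A-conjugate directions built from the standard basis by a Gram-Schmidt process. It is a hedged illustration of the same family of ideas, not the paper's Method of Conjugate Radii itself.

```python
import numpy as np

def conjugate_directions_solve(A, b):
    """Solve A x = b for symmetric positive definite A in N steps:
    build A-conjugate directions from the standard basis by Gram-Schmidt
    (in the A-inner product), then perform one exact projection per
    direction. Illustrative sketch, not the Method of Conjugate Radii."""
    n = len(b)
    dirs = []
    for i in range(n):
        d = np.zeros(n)
        d[i] = 1.0
        for p in dirs:                          # A-orthogonalize against earlier directions
            d -= (p @ (A @ d)) / (p @ (A @ p)) * p
        dirs.append(d)
    x = np.zeros(n)
    for d in dirs:                              # one projection per direction
        r = b - A @ x
        x += (d @ r) / (d @ (A @ d)) * d
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_directions_solve(A, b), np.linalg.solve(A, b))
```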
SU-F-I-08: CT Image Ring Artifact Reduction Based On Prior Image
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, C; Qi, H; Chen, Z
Purpose: In a computed tomography (CT) system, CT images with ring artifacts are reconstructed when some adjacent detector bins do not work. The ring artifacts severely degrade CT image quality. We present a CT ring artifact reduction method based on projection data correction, aiming at accurately estimating the missing projection data and thus removing the ring artifacts from CT images. Methods: The method consists of ten steps: 1) Identification of the abnormal pixel line in the projection sinogram; 2) Linear interpolation within the pixel line of the projection sinogram; 3) FBP reconstruction using the interpolated projection data; 4) Filtering the FBP image using a mean filter; 5) Forward projecting the filtered FBP image; 6) Subtracting the forward projection from the original projection; 7) Linear interpolation of the abnormal pixel line area in the subtraction projection; 8) Adding the interpolated subtraction projection to the forward projection; 9) FBP reconstruction using the corrected projection data; 10) Return to step 4 until the pre-set iteration number is reached. The method is validated on simulated and real data to restore missing projection data and reconstruct ring artifact-free CT images. Results: We have studied the impact of the number of dead detector bins on the accuracy of missing data estimation in the projection sinogram. For the simulated case with a 256 by 256 Shepp-Logan phantom, three iterations are sufficient to restore the projection data and reconstruct ring artifact-free images when the fraction of dead bins is under 30%. The dead-bin-induced artifacts are substantially reduced. More iterations are needed to reconstruct satisfactory images as the fraction of dead bins increases. Similar results were found for a real head phantom case. Conclusion: A practical CT image ring artifact correction scheme based on projection data is developed. This method can produce ring artifact-free CT images feasibly and effectively.
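A minimal sketch of the first three steps (locating dead detector bins in the sinogram, linearly interpolating across them, and reconstructing with FBP) is given below, assuming scikit-image for the reconstruction; the reprojection and mean-filter correction loop of steps 4-10 is not reproduced, and the variable names are hypothetical.

```python
import numpy as np
from skimage.transform import iradon

def repair_dead_bins(sinogram, dead_bins):
    """Linearly interpolate dead detector bins (rows of the sinogram)
    from the nearest working bins, view by view. Sketch of steps 1-3
    only; the full method iterates with a reprojection/mean-filter
    correction loop before the final FBP reconstruction."""
    sino = sinogram.astype(float).copy()          # shape: (n_bins, n_views)
    good = np.setdiff1d(np.arange(sino.shape[0]), dead_bins)
    for view in range(sino.shape[1]):
        sino[dead_bins, view] = np.interp(dead_bins, good, sino[good, view])
    return sino

# hypothetical usage: 180 views, detector bins 100-102 assumed dead
# theta = np.linspace(0.0, 180.0, sinogram.shape[1], endpoint=False)
# repaired = repair_dead_bins(sinogram, dead_bins=np.array([100, 101, 102]))
# image = iradon(repaired, theta=theta, filter_name='ramp')
```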
Park, Bum-Sik; Hong, In-Seok; Jang, Ji-Ho; Jin, Hyunchang; Choi, Sukjin; Kim, Yonghwan
2016-02-01
A 28 GHz electron cyclotron resonance (ECR) ion source is being developed for use as an injector for the superconducting linear accelerator of the Rare Isotope Science Project. Beam extraction from the ECR ion source has been simulated using the KOBRA3-INP software. The simulation software can calculate charged particle trajectories in three dimensional complex magnetic field structures, which in this case are formed by the arrangement of five superconducting magnets. In this study, the beam emittance is simulated to understand the effects of plasma potential, mass-to-charge ratio, and spatial distribution. The results of these simulations and their comparison to experimental results are presented in this paper.
Prediction of cancer incidence and mortality in Korea, 2014.
Jung, Kyu-Won; Won, Young-Joo; Kong, Hyun-Joo; Oh, Chang-Mo; Lee, Duk Hyoung; Lee, Jin Soo
2014-04-01
We studied and reported on cancer incidence and mortality rates as projected for the year 2014 in order to estimate Korea's current cancer burden. Cancer incidence data from 1999 to 2011 were obtained from the Korea National Cancer Incidence Database, and cancer mortality data from 1993 to 2012 were acquired from Statistics Korea. Cancer incidence in 2014 was projected by fitting a linear regression model to observed age-specific cancer incidence rates against observed years, then multiplying the projected age-specific rates by the age-specific population. For cancer mortality, a similar procedure was employed, except that a Joinpoint regression model was used to determine at which year the linear trend changed significantly. A total of 265,813 new cancer cases and 74,981 cancer deaths are expected to occur in Korea in 2014. Further, the crude incidence rate per 100,000 of all sites combined will likely reach 524.7 and the age-standardized incidence rate, 338.5. Meanwhile, the crude mortality rate of all sites combined and age-standardized rate are projected to be 148.0 and 84.6, respectively. Given the rapid rise in prostate cancer cases, it is anticipated to be the fourth most frequently occurring cancer site in men for the first time. Cancer has become the most prominent public health concern in Korea, and as the population ages, the nation's cancer burden will continue to increase.
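The projection idea described above (fit a linear trend to observed age-specific rates and multiply the extrapolated rates by the age-specific population) can be sketched as follows; the age groups, rates and population figures are hypothetical placeholders, not the registry data, and the Joinpoint step used for mortality is not reproduced.

```python
import numpy as np

def project_cases(years, rates_by_age, target_year, population_by_age):
    """Fit a linear trend to each age-specific incidence rate (per 100,000)
    over the observed years, extrapolate to the target year, and multiply
    by the age-specific population to obtain projected case counts."""
    projected = 0.0
    for age_group, rates in rates_by_age.items():
        slope, intercept = np.polyfit(years, rates, 1)
        rate_target = slope * target_year + intercept
        projected += rate_target / 1e5 * population_by_age[age_group]
    return projected

# hypothetical illustration only
years = np.arange(1999, 2012)
rates_by_age = {"50-59": 300 + 5.0 * (years - 1999),
                "60-69": 700 + 8.0 * (years - 1999)}
population_by_age = {"50-59": 7.2e6, "60-69": 4.1e6}
print(project_cases(years, rates_by_age, 2014, population_by_age))
```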
Nikazad, T; Davidi, R; Herman, G. T.
2013-01-01
We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a more than order-of-magnitude speed-up, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data. PMID:23440911
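For context, a basic (unaccelerated, unperturbed) block-iterative projection step of Kaczmarz/Cimmino type is sketched below: within each block the iterate is moved by the average of its orthogonal projections onto the hyperplanes of that block. This is a hedged illustration of the method family, not the authors' accelerated perturbation-resilient algorithm.

```python
import numpy as np

def block_projection_solve(A, b, blocks, iters=200, relax=1.0):
    """Block-iterative projection (Cimmino-type within each block):
    average the projections of x onto the hyperplanes a_i^T x = b_i of a
    block, apply the averaged step, then move to the next block."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for block in blocks:
            step = np.zeros_like(x)
            for i in block:
                a = A[i]
                step += (b[i] - a @ x) / (a @ a) * a
            x += relax * step / len(block)
    return x

# consistent toy system with known solution (0.5, -1.0)
A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, -1.0], [2.0, 2.0]])
b = A @ np.array([0.5, -1.0])
print(block_projection_solve(A, b, blocks=[[0, 1], [2, 3]]))
```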
Interior point techniques for LP and NLP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evtushenko, Y.
By using a surjective mapping, the initial constrained optimization problem is transformed to a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After an inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm and a generalized primal-dual interior point linear programming algorithm.
Project SQUID. A Program of Fundamental Research on Liquid Rocket and Pulse Jet Propulsion
1947-01-01
However, while the acoustical case can very well be represented by a corresponding linear electrical system, no way has been found to represent...a carbon tube containing the gas to be decomposed thermally will be heated and its temperature determined by an optical pyrometer; by the other...
Stochastic Feshbach Projection for the Dynamics of Open Quantum Systems
NASA Astrophysics Data System (ADS)
Link, Valentin; Strunz, Walter T.
2017-11-01
We present a stochastic projection formalism for the description of quantum dynamics in bosonic or spin environments. The Schrödinger equation in the coherent state representation with respect to the environmental degrees of freedom can be reformulated by employing the Feshbach partitioning technique for open quantum systems based on the introduction of suitable non-Hermitian projection operators. In this picture the reduced state of the system can be obtained as a stochastic average over pure state trajectories, for any temperature of the bath. The corresponding non-Markovian stochastic Schrödinger equations include a memory integral over the past states. In the case of harmonic environments and linear coupling the approach gives a new form of the established non-Markovian quantum state diffusion stochastic Schrödinger equation without functional derivatives. Utilizing spin coherent states, the evolution equation for spin environments resembles the bosonic case with, however, a non-Gaussian average for the reduced density operator.
On the rate of convergence of the alternating projection method in finite dimensional spaces
NASA Astrophysics Data System (ADS)
Galántai, A.
2005-10-01
Using the results of Smith, Solmon, and Wagner [K. Smith, D. Solmon, S. Wagner, Practical and mathematical aspects of the problem of reconstructing objects from radiographs, Bull. Amer. Math. Soc. 83 (1977) 1227-1270] and Nelson and Neumann [S. Nelson, M. Neumann, Generalizations of the projection method with application to SOR theory for Hermitian positive semidefinite linear systems, Numer. Math. 51 (1987) 123-141], we derive new estimates for the speed of the alternating projection method and its relaxed version in finite dimensional spaces. These estimates can be computed in at most O(m^3) arithmetic operations, unlike the estimates in the papers mentioned above, which require spectral information. The new and old estimates are equivalent in many practical cases. In cases where the new estimates are weaker, numerical testing indicates that they approximate the original bounds quite well.
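The method whose rate is being bounded can be illustrated by the classical von Neumann alternating projections between two subspaces; the sketch below, under the assumption of plain orthogonal projectors, converges to the projection onto the intersection at a linear rate governed by the principal angle between the subspaces.

```python
import numpy as np

def orth_projector(B):
    """Orthogonal projector onto the column space of B."""
    Q, _ = np.linalg.qr(B)
    return Q @ Q.T

def alternating_projections(P1, P2, x0, iters=200):
    """von Neumann alternating projections between two subspaces; the
    iterates converge to the projection of x0 onto their intersection,
    at a linear rate set by the cosine of the principal angle."""
    x = x0.copy()
    for _ in range(iters):
        x = P2 @ (P1 @ x)
    return x

# two planes in R^3 intersecting along the x-axis
P_xy = orth_projector(np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]))
P_xz = orth_projector(np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]]))
print(alternating_projections(P_xy, P_xz, np.array([1.0, 2.0, 3.0])))  # -> [1, 0, 0]
```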
Stewart, Barclay; Wong, Evan; Papillon-Smith, Jessica; Trelles Centurion, Miguel Antonio; Dominguez, Lynette; Ao, Supongmeren; Jean-Paul, Basimuoneye Kahutsi; Kamal, Mustafa; Helmand, Rahmatullah; Naseer, Aamer; Kushner, Adam L.
2015-01-01
Background: Surgical capacity assessments in low-income countries have demonstrated critical deficiencies. Though vital for planning capacity improvements, these assessments are resource intensive and impractical during the planning phase of a humanitarian crisis. This study aimed to determine cesarean sections to total operations performed (CSR) and emergency herniorrhaphies to all herniorrhaphies performed (EHR) ratios from Médecins Sans Frontières Operations Centre Brussels (MSF-OCB) projects and examine if these established metrics are useful proxies for surgical capacity in low-income countries affected by crisis. Methods: All procedures performed in MSF-OCB operating theatres from July 2008 through June 2014 were reviewed. Projects providing only specialty care, not fully operational or not offering elective surgeries were excluded. Annual CSRs and EHRs were calculated for each project. Their relationship was assessed with linear regression. Results: After applying the exclusion criteria, there were 47,472 cases performed at 13 sites in 8 countries. There were 13,939 CS performed (29% of total cases). Of the 4,632 herniorrhaphies performed (10% of total cases), 30% were emergency procedures. CSRs ranged from 0.06 to 0.65 and EHRs ranged from 0.03 to 1.0. Linear regression of annual ratios at each project did not demonstrate statistical evidence for the CSR to predict EHR [F(2,30)=2.34, p=0.11, R2=0.11]. The regression equation was: EHR = 0.25 + 0.52(CSR) + 0.10(reason for MSF-OCB assistance). Conclusion: Surgical humanitarian assistance projects operate in areas with critical surgical capacity deficiencies that are further disrupted by crisis. Rapid, accurate assessments of surgical capacity are necessary to plan cost- and clinically-effective humanitarian responses to baseline and acute unmet surgical needs in LICs affected by crisis. Though CSR and EHR may meet these criteria in ‘steady-state’ healthcare systems, they may not be useful during humanitarian emergencies. Further study of the relationship between direct surgical capacity improvements and these ratios is necessary to document their role in humanitarian settings. PMID:25905025
Abdurahman, Abdujelil; Jiang, Haijun; Rahman, Kaysar
2015-12-01
This paper deals with the problem of function projective synchronization for a class of memristor-based Cohen-Grossberg neural networks with time-varying delays. Based on the theory of differential equations with discontinuous right-hand side, some novel criteria are obtained to realize the function projective synchronization of addressed networks by combining open loop control and linear feedback control. As some special cases, several control strategies are given to ensure the realization of complete synchronization, anti-synchronization and the stabilization of the considered memristor-based Cohen-Grossberg neural network. Finally, a numerical example and its simulations are provided to demonstrate the effectiveness of the obtained results.
NASA Technical Reports Server (NTRS)
Berthoz, A.; Pavard, B.; Young, L. R.
1975-01-01
The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec, and short-term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential although not independent role of vision in self motion perception.
Concentrating Solar Power Projects - Puerto Errado 2 Thermosolar Power
Puerto Errado 2 Thermosolar Power Plant is a linear Fresnel reflector system. Project overview (status date: April 26, 2013): Project name: Puerto Errado 2 (Novatec Biosol AG, 15%); Technology: linear Fresnel reflector; Turbine capacity: net 30.0 MW, gross 30.0; Status: operational; Country: Spain; City: Calasparra; Region: ...
NASA Astrophysics Data System (ADS)
Pradhan, Moumita; Pradhan, Dinesh; Bandyopadhyay, G.
2010-10-01
Fuzzy systems have demonstrated their ability to solve many kinds of problems in various application domains, and there is increasing interest in applying fuzzy concepts to improve the scheduling of tasks. Here a case study of a thermal power plant is considered. The existing time estimates represent the time required to complete each task; applying a fuzzy linear approach shows that, at each confidence level, less time is required to complete the tasks, and a shorter schedule in turn reduces cost. The objective of this paper is to show how a system becomes more efficient when a fuzzy linear approach is applied, by optimizing the time estimates needed to perform all tasks within appropriate schedules. For the case study, the optimistic time (to), pessimistic time (tp), and most likely time (tm) are taken from data collected at the thermal power plant. These estimates are used to calculate the expected time (te), which represents the time to complete a particular task accounting for all contingencies. Using the project evaluation and review technique (PERT) and the critical path method (CPM), the critical path duration (CPD) of the project is calculated; it corresponds to a fifty percent probability that the total set of tasks can be completed in fifty days. Using the critical path duration and the standard deviation of the critical path, the probability of completing the whole project by a given date can be obtained from the normal distribution. Using the trapezoidal rule on the four time estimates (to, tm, tp, te), a defuzzified value of the time estimate is calculated. For the fuzzy ranges, four confidence levels are considered: 0.4, 0.6, 0.8 and 1. The study shows that time estimates at confidence levels between 0.4 and 0.8 give better results than the other confidence levels.
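The underlying PERT calculation that the fuzzy confidence levels are layered on can be sketched as follows; the task estimates in the example are hypothetical stand-ins for the plant data, and the fuzzy (trapezoidal) defuzzification step is not reproduced.

```python
import math

def pert_task(t_o, t_m, t_p):
    """PERT expected duration and standard deviation of one task from
    optimistic, most-likely and pessimistic estimates."""
    t_e = (t_o + 4.0 * t_m + t_p) / 6.0
    sigma = (t_p - t_o) / 6.0
    return t_e, sigma

def completion_probability(critical_tasks, deadline):
    """Probability of finishing the critical path by the deadline,
    using the usual normal approximation of the path duration."""
    mean = sum(t for t, _ in critical_tasks)
    std = math.sqrt(sum(s ** 2 for _, s in critical_tasks))
    z = (deadline - mean) / std
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# hypothetical critical-path tasks: (optimistic, most likely, pessimistic) in days
estimates = [(8, 10, 14), (15, 20, 27), (12, 18, 22)]
tasks = [pert_task(*est) for est in estimates]
print(completion_probability(tasks, deadline=50))
```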
NASA Astrophysics Data System (ADS)
Ranger, N.; Millner, A.; Niehoerster, F.
2010-12-01
Traditionally, climate change risk assessments have taken a roughly four-stage linear ‘chain’ of moving from socioeconomic projections, to climate projections, to primary impacts and then finally on to economic and social impact assessment. Adaptation decisions are then made on the basis of these outputs. The escalation of uncertainty through this chain is well known, resulting in an ‘explosion’ of uncertainties in the final risk and adaptation assessment. The space of plausible future risk scenarios is growing ever wider with the application of new techniques which aim to explore uncertainty ever more deeply, such as those used in the recent ‘probabilistic’ UK Climate Projections 2009 and the stochastic integrated assessment models, for example PAGE2002. This explosion of uncertainty can make decision-making problematic, particularly given that the uncertainty information communicated cannot be treated as strictly probabilistic and is therefore not an easy fit with standard approaches to decision-making under uncertainty. Additional problems can arise from the fact that the uncertainty estimated for different components of the ‘chain’ is rarely directly comparable or combinable. Here, we explore the challenges and limitations of using current projections for adaptation decision-making. We present the findings of a recent report completed for the UK Adaptation Sub-Committee on approaches to deal with these challenges and make robust adaptation decisions today. To illustrate these approaches, we take a number of illustrative case studies, including a case of adaptation to hurricane risk on the US Gulf Coast. This is a particularly interesting case as it involves urgent adaptation of long-lived infrastructure but requires interpreting highly uncertain climate change science and modelling, i.e. projections of Atlantic basin hurricane activity. An approach we outline is reversing the linear chain of assessments to put the economics and decision-making first. Such an approach forces one to focus on the information of greatest value for the specific decision. We suggest that such an approach will help to accommodate the uncertainties in the chain and facilitate robust decision-making. Initial findings of these case studies will be presented with the aim of raising open questions and promoting discussion of the methodology. Finally, we reflect on the implications for the design of climate model experiments.
The Next Linear Collider Program-News
The Next Linear Collider at SLAC. In the press: a Linear Collider is a high-priority goal of this plan (http://www.sc.doe.gov/Sub/Facilities_for_future/20...); among ...-term projects in conceptual stages, the Linear Collider is the highest priority project.
Redefining the lower statistical limit in x-ray phase-contrast imaging
NASA Astrophysics Data System (ADS)
Marschner, M.; Birnbacher, L.; Willner, M.; Chabior, M.; Fehringer, A.; Herzen, J.; Noël, P. B.; Pfeiffer, F.
2015-03-01
Phase-contrast x-ray computed tomography (PCCT) is currently investigated and developed as a potentially very interesting extension of conventional CT, because it promises to provide high soft-tissue contrast for weakly absorbing samples. For data acquisition several images at different grating positions are combined to obtain a phase-contrast projection. For short exposure times, which are necessary for lower radiation dose, the photon counts in a single stepping position are very low. In this case, the currently used phase-retrieval does not provide reliable results for some pixels. This uncertainty results in statistical phase wrapping, which leads to a higher standard deviation in the phase-contrast projections than theoretically expected. For even lower statistics, the phase retrieval breaks down completely and the phase information is lost. New measurement procedures rely on a linear approximation of the sinusoidal phase stepping curve around the zero crossings. In this case only two images are acquired to obtain the phase-contrast projection. The approximation is only valid for small phase values. However, typically nearly all pixels are within this regime due to the differential nature of the signal. We examine the statistical properties of a linear approximation method and illustrate by simulation and experiment that the lower statistical limit can be redefined using this method. That means that the phase signal can be retrieved even with very low photon counts and statistical phase wrapping can be avoided. This is an important step towards enhanced image quality in PCCT with very low photon counts.
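For orientation, the conventional phase-stepping retrieval that the two-image linear approximation is designed to replace can be sketched as follows: the differential phase per pixel is the argument of the first Fourier component of the intensity curve sampled at N equidistant grating positions. The detector size and stepping curve in the example are synthetic placeholders.

```python
import numpy as np

def phase_stepping_retrieval(images):
    """Conventional grating-interferometry phase retrieval: per pixel,
    take the argument of the first Fourier component of the intensity
    over N equidistant grating positions (images has shape (N, ny, nx))."""
    images = np.asarray(images, dtype=float)
    n = images.shape[0]
    factors = np.exp(-2j * np.pi * np.arange(n) / n)
    first_harmonic = np.tensordot(factors, images, axes=(0, 0))
    return np.angle(first_harmonic)

# synthetic 8-step scan of a 64x64 detector with a small horizontal phase ramp
steps = 8
phi = np.linspace(0.0, 0.3, 64)
stack = [(1.0 + 0.5 * np.cos(2 * np.pi * k / steps + phi))[None, :] * np.ones((64, 1))
         for k in range(steps)]
print(phase_stepping_retrieval(stack)[0, :5])   # recovers phi[:5]
```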
What Will the Neighbors Think? Building Large-Scale Science Projects Around the World
Jones, Craig; Mrotzek, Christian; Toge, Nobu; Sarno, Doug
2017-12-22
Public participation is an essential ingredient for turning the International Linear Collider into a reality. Wherever the proposed particle accelerator is sited in the world, its neighbors -- in any country -- will have something to say about hosting a 35-kilometer-long collider in their backyards. When it comes to building large-scale physics projects, almost every laboratory has a story to tell. Three case studies from Japan, Germany and the US will be presented to examine how community relations are handled in different parts of the world. How do particle physics laboratories interact with their local communities? How do neighbors react to building large-scale projects in each region? How can the lessons learned from past experiences help in building the next big project? These and other questions will be discussed to engage the audience in an active dialogue about how a large-scale project like the ILC can be a good neighbor.
Ikonnikova, Svetlana A; Male, Frank; Scanlon, Bridget R; Reedy, Robert C; McDaid, Guinevere
2017-12-19
Production of oil from shale and tight reservoirs accounted for almost 50% of 2016 total U.S. production and is projected to continue growing. The objective of our analysis was to quantify the water outlook for future shale oil development using the Eagle Ford Shale as a case study. We developed a water outlook model that projects water use for hydraulic fracturing (HF) and flowback and produced water (FP) volumes based on expected energy prices; historical oil, natural gas, and water-production decline data per well; projected well spacing; and well economics. The number of wells projected to be drilled in the Eagle Ford through 2045 is almost linearly related to oil price, ranging from 20 000 wells at $30/barrel (bbl) oil to 97 000 wells at $100/bbl oil. Projected FP water volumes range from 20% to 40% of HF across the play. Our base reference oil price of $50/bbl would result in 40 000 additional wells and related HF of 265 × 10⁹ gal and FP of 85 × 10⁹ gal. The presented water outlooks for HF and FP water volumes can be used to assess future water sourcing and wastewater disposal or reuse, and to inform policy discussions.
NASA Astrophysics Data System (ADS)
Li, Jun-jun; Yang, Xiao-jun; Xiao, Ying-jie; Xu, Bo-wei; Wu, Hua-feng
2018-03-01
The immersed tunnel is an important part of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project. In immersed tunnel floating, translation, which includes straight and transverse movements, is the main working mode. To decide the magnitude and direction of the towing force for each tug, a particle swarm-based translation control method is presented for the non-powered immersed tunnel element. A linearly weighted logarithmic function is exploited to avoid weak subgoals. In simulation, the particle swarm-based control method is evaluated and compared with the traditional empirical method in the case of the HZMB project. Simulation results show that the presented method delivers a performance improvement in terms of the enhanced surplus towing force.
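To illustrate the kind of search engine involved, a generic global-best particle swarm optimizer is sketched below, minimizing a toy tug-force allocation objective; the objective, bounds and parameters are hypothetical, and the paper's weighted logarithmic formulation and constraints are not reproduced.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic global-best particle swarm optimizer over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# toy objective: two tug forces (kN) should meet a 100 kN demand with small effort
demand = 100.0
objective = lambda f: (f[0] + f[1] - demand) ** 2 + 0.01 * (f[0] ** 2 + f[1] ** 2)
print(pso_minimize(objective, bounds=np.array([[0.0, 80.0], [0.0, 80.0]])))
```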
Concentrating Solar Power Projects - Linear Fresnel Reflector Projects |
Kimberlina solar thermal power plant, a linear Fresnel reflector system located near Bakersfield, California. Listed linear Fresnel reflector projects: ... Solar Thermal Project; eLLO Solar Thermal Project (Llo); IRESEN 1 MWe CSP-ORC pilot project; Kimberlina Solar Thermal Power Plant (Kimberlina); Liddell Power Station; Puerto Errado 1 Thermosolar Power Plant
Reduced Order Modeling for Prediction and Control of Large-Scale Systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalashnikova, Irina; Arunajatesan, Srinivasan; Barone, Matthew Franklin
2014-05-01
This report describes work performed from June 2012 through May 2014 as a part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for on-the-spot analysis, control and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the “symmetry inner product”. Attention is then turned to nonlinear conservation laws. A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier-Stokes equations is derived, and it is demonstrated that if a Galerkin ROM is constructed in this inner product, the ROM system energy will be bounded in a way that is consistent with the behavior of the exact solution to these PDEs, i.e., the ROM will be energy-stable. The viability of the linear as well as nonlinear continuous projection model reduction approaches developed as a part of this project is evaluated on several test cases, including the cavity configuration of interest in the targeted application area. In the second part of this report, some POD/Galerkin approaches for building stable ROMs using discrete projection are explored. It is shown that, for generic linear time-invariant (LTI) systems, a discrete counterpart of the continuous symmetry inner product is a weighted L2 inner product obtained by solving a Lyapunov equation. This inner product was first proposed by Rowley et al., and is termed herein the “Lyapunov inner product”. Comparisons between the symmetry inner product and the Lyapunov inner product are made, and the performance of ROMs constructed using these inner products is evaluated on several benchmark test cases. Also in the second part of this report, a new ROM stabilization approach, termed “ROM stabilization via optimization-based eigenvalue reassignment”, is developed for generic LTI systems. At the heart of this method is a constrained nonlinear least-squares optimization problem that is formulated and solved numerically to ensure accuracy of the stabilized ROM.
Numerical studies reveal that the optimization problem is computationally inexpensive to solve, and that the new stabilization approach delivers ROMs that are stable as well as accurate. Summaries of “lessons learned” and perspectives for future work motivated by this LDRD project are provided at the end of each of the two main chapters.
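A minimal POD/Galerkin sketch for a linear system dx/dt = Ax is given below, assuming the plain L2 inner product; the report's energy-stable symmetry and Lyapunov inner products, and the stabilization via eigenvalue reassignment, are not implemented here.

```python
import numpy as np
from scipy.linalg import expm

def pod_galerkin_rom(A, snapshots, r):
    """Build a POD/Galerkin reduced operator for dx/dt = A x:
    the POD basis is the leading left singular vectors of the snapshot
    matrix, and the reduced operator is the Galerkin projection of A
    onto that basis (plain L2 inner product)."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    Phi = U[:, :r]              # POD basis, shape (n, r)
    A_r = Phi.T @ A @ Phi       # reduced operator, shape (r, r)
    return Phi, A_r

# toy full-order model: a stable random linear system and its trajectory snapshots
rng = np.random.default_rng(1)
n = 50
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
x0 = rng.standard_normal(n)
snapshots = np.column_stack([expm(A * t) @ x0 for t in np.linspace(0.0, 5.0, 40)])
Phi, A_r = pod_galerkin_rom(A, snapshots, r=10)
print(A_r.shape, np.allclose(Phi.T @ Phi, np.eye(10)))
```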
Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saad, Yousef
2014-01-16
The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in a specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available. It was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver, called pARMS [the new version is version 3]. As part of this we have tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the problem of evaluating f(A)v, which arises in statistical sampling. 11. As an application of the methods we developed, we tackled the problem of computing the diagonal of the inverse of a matrix. This arises in statistical applications as well as in many applications in physics. We explored probing methods as well as domain-decomposition type methods. 12. A collaboration with researchers from Toulouse, France, considered the important problem of computing the Schur complement in a domain-decomposition approach. 13. We explored new ways of preconditioning linear systems, based on low-rank approximations.
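As a minimal illustration of the ILU-plus-Krylov combination at the center of this work, the sketch below preconditions GMRES with SciPy's incomplete LU factorization on a toy convection-diffusion matrix; it is a hedged example using standard SciPy calls, not the project's pARMS or GPU libraries.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def ilu_gmres(A, b, drop_tol=1e-4, fill_factor=10):
    """Solve A x = b with GMRES, preconditioned by an incomplete LU
    factorization wrapped in a LinearOperator that applies ilu.solve."""
    ilu = spla.spilu(sp.csc_matrix(A), drop_tol=drop_tol, fill_factor=fill_factor)
    M = spla.LinearOperator(A.shape, matvec=ilu.solve)
    x, info = spla.gmres(A, b, M=M)
    return x, info

# toy 1-D convection-diffusion matrix (tridiagonal, mildly nonsymmetric)
n = 200
A = sp.diags([-1.05 * np.ones(n - 1), 2.0 * np.ones(n), -0.95 * np.ones(n - 1)],
             [-1, 0, 1], format="csc")
b = np.ones(n)
x, info = ilu_gmres(A, b)
print(info, np.linalg.norm(A @ x - b))
```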
Klein–Gordon equation in curved space-time
NASA Astrophysics Data System (ADS)
Lehn, R. D.; Chabysheva, S. S.; Hiller, J. R.
2018-07-01
We report the methods and results of a computational physics project on the solution of the relativistic Klein–Gordon equation for a light particle gravitationally bound to a heavy central mass. The gravitational interaction is prescribed by the metric of a spherically symmetric space-time. Metrics are considered for an impenetrable sphere, a soft sphere of uniform density, and a soft sphere with a linear transition from constant to zero density; in each case the radius of the central mass is chosen to be sufficient to avoid any event horizon. The solutions are obtained numerically and compared with nonrelativistic Coulomb-type solutions, both directly and in perturbation theory, to study the general-relativistic corrections to the quantum solutions for a 1/r potential. The density profile with a linear transition is chosen to avoid singularities in the wave equation that can be caused by a discontinuous derivative of the density. This project should be of interest to instructors and students of computational physics at the graduate and advanced undergraduate levels.
Developing Novel Reservoir Rule Curves Using Seasonal Inflow Projections
NASA Astrophysics Data System (ADS)
Tseng, Hsin-yi; Tung, Ching-pin
2015-04-01
Due to significant seasonal rainfall variations, reservoirs and their flexible operational rules are indispensable to Taiwan. Furthermore, with the intensifying impacts of climate change on extreme climate, the frequency of droughts in Taiwan has been increasing in recent years. Drought is a creeping phenomenon; its slow onset makes it difficult to detect at an early stage and causes delays in making the best decisions about allocating water. For these reasons, novel reservoir rule curves using projected seasonal streamflow are proposed in this study, which can potentially reduce the adverse effects of drought. This study is dedicated to establishing new rule curves that consider both currently available storage and anticipated monthly inflows with a lead time of two months, to reduce the risk of water shortage. The monthly inflows are projected based on the seasonal climate forecasts from the Central Weather Bureau (CWB), where a weather generation model is used to produce daily weather data for the hydrological component of the GWLF model. To incorporate future monthly inflow projections into rule curves, this study designs a decision flow index, which is a linear combination of currently available storage and inflow projections with a lead time of two months. By optimizing the linear coefficients of the decision flow index, the shape of the rule curves, and the percentage of water supplied in each zone, the best rule curves to decrease water shortage risk and impacts can be developed. The Shimen Reservoir in northern Taiwan is used as a case study to demonstrate the proposed method. Existing rule curves (M5 curves) of the Shimen Reservoir are compared with two cases of new rule curves, including hindcast simulations and historic seasonal forecasts. The results show that the new rule curves can decrease the total water shortage ratio and can also allocate shortage to preceding months to avoid extreme shortage events. Even though some uncertainties in the historic forecasts would result in unnecessary reductions of water supply, the new rule curves still perform better than the M5 curves during droughts.
Implementation of projective measurements with linear optics and continuous photon counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeoka, Masahiro; Sasaki, Masahide; Loock, Peter van
2005-02-01
We investigate the possibility of implementing a given projection measurement using linear optics and arbitrarily fast feedforward based on the continuous detection of photons. In particular, we systematically derive the so-called Dolinar scheme that achieves the minimum-error discrimination of binary coherent states. Moreover, we show that the Dolinar-type approach can also be applied to projection measurements in the regime of photonic-qubit signals. Our results demonstrate that for implementing a projection measurement with linear optics, in principle, unit success probability may be approached even without the use of expensive entangled auxiliary states, as they are needed in all known (near-)deterministic linear-optics proposals.
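The benchmark that the Dolinar scheme attains, the Helstrom (minimum-error) bound for discriminating the coherent states |α⟩ and |−α⟩, can be evaluated directly; the sketch below computes only this bound and does not simulate the feedforward receiver itself.

```python
import numpy as np

def helstrom_error(alpha, p0=0.5):
    """Minimum error probability (Helstrom bound) for discriminating the
    binary coherent states |alpha> and |-alpha>, with prior p0 for |alpha>.
    Uses the overlap |<alpha|-alpha>|^2 = exp(-4 |alpha|^2)."""
    overlap_sq = np.exp(-4.0 * np.abs(alpha) ** 2)
    return 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * p0 * (1.0 - p0) * overlap_sq))

# error probabilities for a few signal amplitudes (equal priors)
for amplitude in [0.2, 0.5, 1.0]:
    print(amplitude, helstrom_error(amplitude))
```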
Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem*
Katsevich, E.; Katsevich, A.; Singer, A.
2015-01-01
In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise. PMID:25699132
Arthroplasty Utilization in the United States is Predicted by Age-Specific Population Groups.
Bashinskaya, Bronislava; Zimmerman, Ryan M; Walcott, Brian P; Antoci, Valentin
2012-01-01
Osteoarthritis is a common indication for hip and knee arthroplasty. An accurate assessment of current trends in healthcare utilization as they relate to arthroplasty may predict the needs of a growing elderly population in the United States. First, incidence data were queried from the United States Nationwide Inpatient Sample from 1993 to 2009, and patients undergoing total knee and hip arthroplasty were identified. Then, the United States Census Bureau was queried for population data from the same study period as well as for future projections. Arthroplasty utilization followed linear regression models driven by the population group >64 years in both the hip and knee groups. Projections for procedure incidence in the year 2050 based on these models were calculated to be 1,859,553 cases (hip) and 4,174,554 cases (knee). The need for hip and knee arthroplasty is expected to grow significantly in the upcoming years, given population growth predictions.
Spotting the difference in molecular dynamics simulations of biomolecules
NASA Astrophysics Data System (ADS)
Sakuraba, Shun; Kono, Hidetoshi
2016-08-01
Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
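The plain two-class Fisher LDA step that LDA-ITER builds on can be sketched as follows, under the assumption that each trajectory is represented as a samples-by-features matrix; the iterative overlap-removal of LDA-ITER itself is not reproduced, and the example data are synthetic.

```python
import numpy as np

def fisher_lda_direction(X1, X2, reg=1e-6):
    """Two-class Fisher LDA: projection vector w proportional to
    Sw^{-1} (m1 - m2), maximizing between-class over within-class variance."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1, rowvar=False) * (len(X1) - 1) \
       + np.cov(X2, rowvar=False) * (len(X2) - 1)
    Sw += reg * np.eye(Sw.shape[0])      # regularize an ill-conditioned scatter matrix
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)

# synthetic "trajectories": two Gaussian clouds shifted along coordinate 3
rng = np.random.default_rng(0)
X1 = rng.standard_normal((500, 10))
X1[:, 3] += 2.0
X2 = rng.standard_normal((500, 10))
print(np.round(fisher_lda_direction(X1, X2), 2))   # dominated by coordinate 3
```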
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Leahy, R.M.
A new method for source localization is described that is based on a modification of the well known multiple signal classification (MUSIC) algorithm. In classical MUSIC, the array manifold vector is projected onto an estimate of the signal subspace, but errors in the estimate can make location of multiple sources difficult. Recursively applied and projected (RAP) MUSIC uses each successively located source to form an intermediate array gain matrix, and projects both the array manifold and the signal subspace estimate into its orthogonal complement. The MUSIC projection is then performed in this reduced subspace. Using the metric of principal angles, the authors describe a general form of the RAP-MUSIC algorithm for the case of diversely polarized sources. Through a uniform linear array simulation, the authors demonstrate the improved Monte Carlo performance of RAP-MUSIC relative to MUSIC and two other sequential subspace methods, S and IES-MUSIC.
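For reference, classical (non-recursive) MUSIC for a uniform linear array is sketched below: the sample covariance is eigendecomposed, the noise subspace is taken, and the pseudospectrum scans the projection of the steering vector onto that subspace. The recursive deflation that defines RAP-MUSIC, and the diversely polarized case, are not implemented; the simulated scenario is hypothetical.

```python
import numpy as np

def music_spectrum(R, n_sources, n_sensors, spacing=0.5, angles=None):
    """Classical MUSIC pseudospectrum 1 / ||En^H a(theta)||^2 for a
    uniform linear array with half-wavelength spacing by default."""
    if angles is None:
        angles = np.linspace(-90.0, 90.0, 361)
    _, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, : n_sensors - n_sources]      # noise subspace (smallest eigenvalues)
    spectrum = []
    for theta in np.deg2rad(angles):
        a = np.exp(2j * np.pi * spacing * np.arange(n_sensors) * np.sin(theta))
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles, np.array(spectrum)

# hypothetical scenario: two sources at -20 and 35 degrees, 8-element array
rng = np.random.default_rng(0)
m, snapshots = 8, 400
doas = np.deg2rad([-20.0, 35.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(doas)))
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = 0.1 * (rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + N
R = X @ X.conj().T / snapshots
angles, P = music_spectrum(R, n_sources=2, n_sensors=m)
print(angles[np.argmax(P)])   # lies near one of the true directions
```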
Study Of Pre-Shaped Membrane Mirrors And Electrostatic Mirrors With Nonlinear-Optical Correction
2002-01-01
Mirrors have been manufactured from the glass-like material Zerodur, which has a very low coefficient of linear expansion. They have a lighter cellular construction...primary and flat secondary mirrors are both segmented. In the case of the primary mirror made of traditional materials such as Zerodur or fused...Final report of ISTC Project #2103p, “Study of Pre-Shaped Membrane Mirrors and Electrostatic Mirrors with Nonlinear-Optical Correction”.
WE-G-18A-02: Calibration-Free Combined KV/MV Short Scan CBCT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, M; Loo, B; Bazalova, M
Purpose: To combine orthogonal kilo-voltage (kV) and mega-voltage (MV) projection data for short-scan cone-beam CT to reduce imaging time on current radiation treatment systems, using a calibration-free gain correction method. Methods: Combining two orthogonal projection data sets from the kV and MV imaging hardware can reduce the scan angle to as small as 110° (90° + fan) such that the total scan time is ∼18 seconds, or within a breath hold. To obtain an accurate reconstruction, the MV projection data are first linearly corrected by linear regression on the redundant data from the start and end of the sinogram, and then the combined data are reconstructed using the FDK method. To correct for the different changes of attenuation coefficients between kV and MV for soft tissue and bone, the forward projections of the segmented bone and soft tissue from the first reconstruction in the redundant region are added to the linear regression model. The MV data are corrected again using the additional information from the segmented image, and combined with kV for a second FDK reconstruction. We simulated polychromatic 120 kVp (conventional a-Si EPID with CsI) and 2.5 MVp (prototype high-DQE MV detector) projection data with Poisson noise using the XCAT phantom. The gain correction and combined kV/MV short scan reconstructions were tested with head and thorax cases, and simple contrast-to-noise ratio measurements were made in a low-contrast pattern in the head. Results: The FDK reconstruction using the proposed gain correction method can effectively reduce artifacts caused by the differences of attenuation coefficients in the kV/MV data. The CNRs of the short scans for kV, MV, and kV/MV are 5.0, 2.6 and 3.4, respectively. The proposed gain correction method also works with truncated projections. Conclusion: A novel gain correction and reconstruction method was developed to generate short-scan CBCT from orthogonal kV/MV projections. This work is supported by NIH Grant 5R01CA138426-05.
Aldridge, Robert W.; Hayward, Andrew C.; Field, Nigel; Warren-Gash, Charlotte; Smith, Colette; Pebody, Richard; Fleming, Declan; McCracken, Shane
2016-01-01
Background School aged children are a key link in the transmission of influenza. Most cases have little or no interaction with health services and are therefore missed by the majority of existing surveillance systems. As part of a public engagement with science project, this study aimed to establish a web-based system for the collection of routine school absence data and determine if school absence prevalence was correlated with established surveillance measures for circulating influenza. Methods We collected data for two influenza seasons (2011/12 and 2012/13). The primary outcome was daily school absence prevalence (weighted to make it nationally representative) for children aged 11 to 16. School absence prevalence was triangulated graphically and through univariable linear regression to Royal College of General Practitioners (RCGP) influenza like illness (ILI) episode incidence rate, national microbiological surveillance data on the proportion of samples positive for influenza (A+B) and with Rhinovirus, RSV and laboratory confirmed cases of Norovirus. Results 27 schools submitted data over two respiratory seasons. During the first season, levels of influenza measured by school absence prevalence and established surveillance were low. In the 2012/13 season, a peak of school absence prevalence occurred in week 51, and week 1 in RCGP ILI surveillance data. Linear regression showed a strong association between the school absence prevalence and RCGP ILI (All ages, and 5–14 year olds), laboratory confirmed cases of influenza A & B, and weak evidence for a linear association with Rhinovirus and Norovirus. Interpretation This study provides initial evidence for using routine school illness absence prevalence as a novel tool for influenza surveillance. The network of web-based data collection platforms we established through active engagement provides an innovative model of conducting scientific research and could be used for a wide range of infectious disease studies in the future. PMID:26933880
Aldridge, Robert W; Hayward, Andrew C; Field, Nigel; Warren-Gash, Charlotte; Smith, Colette; Pebody, Richard; Fleming, Declan; McCracken, Shane
2016-01-01
School aged children are a key link in the transmission of influenza. Most cases have little or no interaction with health services and are therefore missed by the majority of existing surveillance systems. As part of a public engagement with science project, this study aimed to establish a web-based system for the collection of routine school absence data and determine if school absence prevalence was correlated with established surveillance measures for circulating influenza. We collected data for two influenza seasons (2011/12 and 2012/13). The primary outcome was daily school absence prevalence (weighted to make it nationally representative) for children aged 11 to 16. School absence prevalence was triangulated graphically and through univariable linear regression to Royal College of General Practitioners (RCGP) influenza like illness (ILI) episode incidence rate, national microbiological surveillance data on the proportion of samples positive for influenza (A+B) and with Rhinovirus, RSV and laboratory confirmed cases of Norovirus. 27 schools submitted data over two respiratory seasons. During the first season, levels of influenza measured by school absence prevalence and established surveillance were low. In the 2012/13 season, a peak of school absence prevalence occurred in week 51, and week 1 in RCGP ILI surveillance data. Linear regression showed a strong association between the school absence prevalence and RCGP ILI (All ages, and 5-14 year olds), laboratory confirmed cases of influenza A & B, and weak evidence for a linear association with Rhinovirus and Norovirus. This study provides initial evidence for using routine school illness absence prevalence as a novel tool for influenza surveillance. The network of web-based data collection platforms we established through active engagement provides an innovative model of conducting scientific research and could be used for a wide range of infectious disease studies in the future.
Analysis of energy use and CO2 emissions in the U.S. refining sector, with projections for 2025.
Hirshfeld, David S; Kolb, Jeffrey A
2012-04-03
This analysis uses linear programming modeling of the U.S. refining sector to estimate total annual energy consumption and CO2 emissions in 2025, for four projected U.S. crude oil slates. The baseline is similar to the current U.S. crude slate; the other three contain larger proportions of higher density, higher sulfur crudes than the current or any previous U.S. crude slates. The latter cases reflect aggressive assumptions regarding the volumes of Canadian crudes in the U.S. crude slate in 2025. The analysis projects U.S. refinery energy use 3.7%-6.3% (≈ 0.13-0.22 quads/year) higher and refinery CO2 emissions 5.4%-9.3% (≈ 0.014-0.024 gigatons/year) higher in the study cases than in the baseline. Refining heavier crude slates would require significant investments in new refinery processing capability, especially coking and hydrotreating units. These findings differ substantially from a recent estimate asserting that processing heavy oil or bitumen blends could increase industry CO2 emissions by 1.6-3.7 gigatons/year.
NASA Astrophysics Data System (ADS)
Zaripov, D. I.; Renfu, Li
2018-05-01
The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera is associated with big data processing and is often time-consuming. In order to speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique uses projections of the interrogation window instead of its two-dimensional field of luminous intensity. This simplification accelerates ZNCC computation by up to 28.8 times compared with directly calculated ZNCC, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases (a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow) are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
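A minimal sketch of the projection idea follows: instead of correlating full 2D interrogation windows, one correlates their row and column sums (projections), which reduces the cost of each ZNCC evaluation. This is an illustrative reduction only, not the authors' exact implementation.

```python
import numpy as np

def zncc_1d(a, b):
    """Zero-normalized cross-correlation of two 1D signals of equal length."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def zncc_projection(win_a, win_b):
    """Approximate 2D ZNCC of two interrogation windows using their
    horizontal and vertical projections (row and column sums)."""
    score_x = zncc_1d(win_a.sum(axis=0), win_b.sum(axis=0))
    score_y = zncc_1d(win_a.sum(axis=1), win_b.sum(axis=1))
    return 0.5 * (score_x + score_y)
```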
Caicedo, Alexander; Varon, Carolina; Hunyadi, Borbala; Papademetriou, Maria; Tachtsidis, Ilias; Van Huffel, Sabine
2016-01-01
Clinical data comprise a large number of synchronously collected biomedical signals measured at different locations. Deciphering the interrelationships of these signals can yield important information about their dependence, providing useful clinical diagnostic data. For instance, by computing the coupling between near-infrared spectroscopy (NIRS) signals and systemic variables, the status of the hemodynamic regulation mechanisms can be assessed. In this paper we introduce an algorithm for the decomposition of NIRS signals into additive components. The algorithm, SIgnal DEcomposition based on Oblique Subspace Projections (SIDE-ObSP), assumes that the measured NIRS signal is a linear combination of the systemic measurements, following the linear regression model y = Ax + ϵ. SIDE-ObSP decomposes the output such that each component in the decomposition represents the sole linear influence of one corresponding regressor variable. This decomposition scheme aims at providing a better understanding of the relation between NIRS and systemic variables, and at providing a framework for the clinical interpretation of regression algorithms, thereby facilitating their introduction into clinical practice. SIDE-ObSP combines oblique subspace projections (ObSP) with the structure of a mean average system in order to define adequate signal subspaces. To guarantee smoothness in the estimated regression parameters, as observed in normal physiological processes, we impose a Tikhonov regularization using a matrix differential operator. We evaluate the performance of SIDE-ObSP by using a synthetic dataset, and present two case studies in the field of cerebral hemodynamics monitoring using NIRS. In addition, we compare the performance of this method with other system identification techniques. In the first case study, data from 20 neonates during the first 3 days of life were used; here SIDE-ObSP decoupled the influence of changes in arterial oxygen saturation from the NIRS measurements, facilitating the use of NIRS as a surrogate measure for cerebral blood flow (CBF). The second case study used data from a 3-year-old infant under extracorporeal membrane oxygenation (ECMO); here SIDE-ObSP decomposed cerebral/peripheral tissue oxygenation as a sum of the partial contributions from different systemic variables, facilitating the comparison between the effects of each systemic variable on the cerebral/peripheral hemodynamics.
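The regression backbone of such a decomposition can be sketched as a Tikhonov-regularized least-squares fit of the NIRS signal on the systemic regressors, with a second-difference operator enforcing smooth coefficients. The sketch below omits the oblique-subspace machinery of SIDE-ObSP itself; all names are illustrative.

```python
import numpy as np

def tikhonov_regression(A, y, lam=1.0):
    """Solve min_x ||y - A x||^2 + lam * ||D x||^2, where D is a
    second-difference matrix that penalizes rough coefficient profiles."""
    n = A.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)      # (n-2) x n second differences
    lhs = A.T @ A + lam * D.T @ D
    rhs = A.T @ y
    return np.linalg.solve(lhs, rhs)

def additive_components(A, x):
    """Additive decomposition: the contribution of each regressor is its
    column of A scaled by the corresponding fitted coefficient."""
    return A * x                              # broadcasting, one column per regressor
```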
Nonlinear optimization with linear constraints using a projection method
NASA Technical Reports Server (NTRS)
Fox, T.
1982-01-01
Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear constraints is developed, and a program that uses this method is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
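A small sketch of the underlying operation, assuming linear equality constraints C x = d: the gradient is projected onto the null space of C so that steps along the projected direction keep the constraints satisfied. Here the orthonormal basis is built with a QR factorization (a stabilized Gram-Schmidt), not the report's specific program.

```python
import numpy as np

def project_gradient(grad, C):
    """Project a gradient onto the null space of the constraint matrix C,
    so that moving along the projected direction preserves C x = const."""
    # Orthonormal basis for the row space of C.
    Q, _ = np.linalg.qr(C.T)
    # Remove the component of the gradient lying in the row space of C.
    return grad - Q @ (Q.T @ grad)

# Example: one constraint x1 + x2 + x3 = const.
C = np.array([[1.0, 1.0, 1.0]])
g = np.array([3.0, 1.0, -1.0])
g_proj = project_gradient(g, C)   # now C @ g_proj is (numerically) zero
```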
NASA Astrophysics Data System (ADS)
Pagnoni, Gianluca; Tinti, Stefano
2016-04-01
The eastern coast of Sicily has been hit by many historical tsunamis of local and remote origin. This zone, and in particular Siracusa as a test site, was selected in the FP7 European project ASTARTE (Assessment, Strategy And Risk Reduction for Tsunamis in Europe - FP7-ENV2013 6.4-3, Grant 603839). According to the project goals, in this work the oscillation modes of the Siracusa harbour were analysed, with a focus on the typical range of tsunami periods and on the protecting effects of breakwaters, by using linear and non-linear simulation models. The city of Siracusa is located north of the homonymous gulf and has two harbours, called "Piccolo" (small) and "Grande" (grand), which are connected through a narrow channel. The harbour "Piccolo" is the object of this work. It is located at the end of a bay facing east and bordered on the south by the peninsula of Ortigia and on the north by the mainland. The basin has an area of approximately 100,000 m2 and is very shallow, with an average depth of 2.5 m. It is protected by two breakwaters that reduce its mouth to a width of only 40 m. This study was carried out using the numerical code UBO-TSUFD, which solves linear and non-linear shallow-water equations on a high-resolution 2 m x 2 m regular grid. Resonant modes were searched for by sinusoidal forcing on the open boundary, with periods ranging from about 60 s to 1600 s to cover the typical tsunami spectrum. The work was divided into three phases. First, we studied the natural resonance frequencies, and in particular the Helmholtz resonance mode, by using a linear fixed-geometry model and assuming that the connecting channel between the two Siracusa ports is closed. Second, we repeated the analysis using a non-linear simulation model accounting for flooding and for an open connecting channel. Finally, we forced the harbour by means of synthetic signals with the amplitude, period and duration of the main historical tsunamis that attacked Siracusa, namely the AD 365, 1693 and 1908 tsunami events. In this last case our attention was also focused on quantifying the role of the existing breakwaters in mitigating the incoming tsunami.
Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements
NASA Astrophysics Data System (ADS)
Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga
The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to the early requirements-based effort estimation, based on Non-Functional Requirements ontology. It complementarily uses one standard functional size measurement model and a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experiences in using it.
Dahmen, Tim; Kohr, Holger; de Jonge, Niels; Slusallek, Philipp
2015-06-01
Combined tilt- and focal series scanning transmission electron microscopy is a recently developed method to obtain nanoscale three-dimensional (3D) information of thin specimens. In this study, we formulate the forward projection in this acquisition scheme as a linear operator and prove that it is a generalization of the Ray transform for parallel illumination. We analytically derive the corresponding backprojection operator as the adjoint of the forward projection. We further demonstrate that the matched backprojection operator drastically improves the convergence rate of iterative 3D reconstruction compared to the case where a backprojection based on heuristic weighting is used. In addition, we show that the 3D reconstruction is of better quality.
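The key property claimed for the matched backprojection, namely that it is the adjoint of the forward projection, can be checked numerically for any discrete linear operator pair by comparing inner products. The sketch below does this for a small dense matrix standing in for the projector; it is an illustration of the adjoint relation, not the authors' operator.

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((40, 25))     # stand-in for a discrete forward projector
forward = lambda x: P @ x
backproject = lambda y: P.T @ y       # matched backprojection = adjoint

x = rng.standard_normal(25)
y = rng.standard_normal(40)
# Adjoint test: <P x, y> must equal <x, P^T y> up to round-off.
lhs = np.dot(forward(x), y)
rhs = np.dot(x, backproject(y))
assert np.isclose(lhs, rhs)
```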
Introduction of the 2nd Phase of the Integrated Hydrologic Model Intercomparison Project
NASA Astrophysics Data System (ADS)
Kollet, Stefan; Maxwell, Reed; Dages, Cecile; Mouche, Emmanuel; Mugler, Claude; Paniconi, Claudio; Park, Young-Jin; Putti, Mario; Shen, Chaopeng; Stisen, Simon; Sudicky, Edward; Sulis, Mauro; Ji, Xinye
2015-04-01
The 2nd Phase of the Integrated Hydrologic Model Intercomparison Project commenced in June 2013 with a workshop at Bonn University funded by the German Science Foundation and the US National Science Foundation. Three test cases, available online at www.hpsc-terrsys.de, were defined and compared: a tilted V-catchment case; a case called "superslab", based on multiple slab heterogeneities in the hydraulic conductivity along a hillslope; and the Borden site case, based on a published field experiment. The goal of this phase is to further interrogate the coupling of surface-subsurface flow implemented in various integrated hydrologic models, and to understand and quantify the impact of differences in the conceptual and technical implementations on the simulation results, which may constitute an additional source of uncertainty. The focus has been broadened considerably to include, e.g., saturated and unsaturated subsurface storages, saturated surface area, and ponded surface storage in addition to discharge, as well as pressure/saturation profiles and cross-sections. Here, first results are presented and discussed, demonstrating the conceptual and technical challenges in implementing essentially the same governing equations describing highly non-linear moisture redistribution processes and surface-groundwater interactions.
Investigating the unification of LOFAR-detected powerful AGN in the Boötes field
NASA Astrophysics Data System (ADS)
Morabito, Leah K.; Williams, W. L.; Duncan, Kenneth J.; Röttgering, H. J. A.; Miley, George; Saxena, Aayush; Barthel, Peter; Best, P. N.; Bruggen, M.; Brunetti, G.; Chyży, K. T.; Engels, D.; Hardcastle, M. J.; Harwood, J. J.; Jarvis, Matt J.; Mahony, E. K.; Prandoni, I.; Shimwell, T. W.; Shulevski, A.; Tasse, C.
2017-08-01
Low radio frequency surveys are important for testing unified models of radio-loud quasars and radio galaxies. Intrinsically similar sources that are randomly oriented on the sky will have different projected linear sizes. Measuring the projected linear sizes of these sources provides an indication of their orientation. Steep-spectrum isotropic radio emission allows for orientation-free sample selection at low radio frequencies. We use a new radio survey of the Boötes field at 150 MHz made with the Low-Frequency Array (LOFAR) to select a sample of radio sources. We identify 60 radio sources with powers P > 10^25.5 W Hz^-1 at 150 MHz using cross-matched multiwavelength information from the AGN and Galaxy Evolution Survey, which provides spectroscopic redshifts and photometric identification of 16 quasars and 44 radio galaxies. When considering the radio spectral slope only, we find that radio sources with steep spectra have projected linear sizes that are on average 4.4 ± 1.4 times larger than those with flat spectra. The projected linear sizes of radio galaxies are on average 3.1 ± 1.0 times larger than those of quasars (2.0 ± 0.3 after correcting for redshift evolution). Combining these results with three previous surveys, we find that the projected linear sizes of radio galaxies and quasars depend on redshift but not on power. The projected linear size ratio does not correlate with either parameter. The LOFAR data are consistent within the uncertainties with theoretical predictions of the correlation between the quasar fraction and linear size ratio, based on an orientation-based unification scheme.
Projections of number of cancer cases in India (2010-2020) by cancer groups.
Takiar, Ramnath; Nadayil, Deenu; Nandakumar, A
2010-01-01
Recently, the NCRP (ICMR), Bangalore, published a report on Time Trends in Cancer Incidence Rates. The report also provided projected numbers of cancer cases at the India country level for selected leading sites. In the present paper, an attempt has been made to project cancer cases for India by sex, year and cancer group. The incidence data generated by the population-based cancer registries (PBCRs) at Bangalore, Barshi, Bhopal, Chennai, Delhi and Mumbai for the years 2001-2005 formed the sources of data. In addition, the latest incidence data of the North Eastern Registries for the year 2005-06 were utilized. The crude incidence rate (CR) was considered suitable for assessing the future load of cancer cases in the country. The linear regression method (IARC 1991) was used to assess the time trend and to project rates for the period 2010-2020. For sites where trends were not found to be significant, the latest rates were taken and assumed to remain the same for the period 2010-2020. The total number of cancer cases is likely to go up from 979,786 in the year 2010 to 1,148,757 in the year 2020. Tobacco-related cancers in males are estimated to go up from 190,244 in the year 2010 to 225,241 in the year 2020; similarly, the female cases will go up from 75,289 in 2010 to 93,563 in 2020. For the year 2010, the numbers of cancer cases related to the digestive system, for males and females, are estimated to be 107,030 and 86,606 respectively. For head and neck cancers, the estimates are 122,643 and 53,148 cases respectively, and for the lymphoid and hematopoietic system (LHS), for the year 2010, they are 62,648 for males and 41,591 for females. Gynecological cancers are estimated to go up from 153,850 in 2010 to 182,602 in 2020. Among males and females, cancer of the breast alone is expected to cross the figure of 100,000 by the year 2020.
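The projection machinery described above is essentially an ordinary linear trend fitted to registry rates and extrapolated only where the trend is significant. A toy version of that step, with entirely made-up rates and a crude significance cut-off standing in for a proper test, might look like this:

```python
import numpy as np

def project_rate(years, rates, target_year):
    """Fit a linear trend to crude incidence rates and project it forward.
    If the slope is not significant, return the latest observed rate
    (mirroring the assumption that the rate stays constant)."""
    slope, intercept = np.polyfit(years, rates, deg=1)
    r = np.corrcoef(years, rates)[0, 1]
    n = len(years)
    t = r * np.sqrt((n - 2) / max(1e-12, 1.0 - r**2))
    significant = abs(t) > 2.0          # rough cut-off in place of a t-test
    return slope * target_year + intercept if significant else rates[-1]

# Hypothetical crude rates per 100,000 for 2001-2005.
years = np.array([2001, 2002, 2003, 2004, 2005])
rates = np.array([10.2, 10.6, 11.1, 11.5, 11.8])
rate_2020 = project_rate(years, rates, 2020)
```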
NASA Astrophysics Data System (ADS)
Bindschadler, Robert
2013-04-01
The SeaRISE (Sea-level Response to Ice Sheet Evolution) project achieved ice-sheet model ensemble responses to a variety of prescribed changes to surface mass balance, basal sliding and ocean boundary melting. Greenland ice sheet models are more sensitive than Antarctic ice sheet models to likely atmospheric changes in surface mass balance, while Antarctic models are most sensitive to basal melting of its ice shelves. An experiment approximating the IPCC's RCP8.5 scenario produces first century contributions to sea level of 22.3 and 7.3 cm from Greenland and Antarctica, respectively, with a range among models of 62 and 17 cm, respectively. By 200 years, these projections increase to 53.2 and 23.4 cm, respectively, with ranges of 79 and 57 cm. The considerable range among models was not only in the magnitude of ice lost, but also in the spatial pattern of response to identical forcing. Despite this variation, the response of any single model to a large range in the forcing intensity was remarkably linear in most cases. Additionally, the results of sensitivity experiments to single types of forcing (i.e., only one of the surface mass balance, or basal sliding, or ocean boundary melting) could be summed to accurately predict any model's result for an experiment when multiple forcings were applied simultaneously. This suggests a limited amount of feedback through the ice sheet's internal dynamics between these types of forcing over the time scale of a few centuries (SeaRISE experiments lasted 500 years).
Secret Message Decryption: Group Consulting Projects Using Matrices and Linear Programming
ERIC Educational Resources Information Center
Gurski, Katharine F.
2009-01-01
We describe two short group projects for finite mathematics students that incorporate matrices and linear programming into fictional consulting requests presented as a letter to the students. The students are required to use mathematics to decrypt secret messages in one project involving matrix multiplication and inversion. The second project…
Mat-Rix-Toe: Improving Writing through a Game-Based Project in Linear Algebra
ERIC Educational Resources Information Center
Graham-Squire, Adam; Farnell, Elin; Stockton, Julianna Connelly
2014-01-01
The Mat-Rix-Toe project utilizes a matrix-based game to deepen students' understanding of linear algebra concepts and strengthen students' ability to express themselves mathematically. The project was administered in three classes using slightly different approaches, each of which included some editing component to encourage the…
SU-F-I-41: Calibration-Free Material Decomposition for Dual-Energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, W; Xing, L; Zhang, Q
2016-06-15
Purpose: To eliminate the tedious phantom calibration or manual region-of-interest (ROI) selection required in dual-energy CT material decomposition, we establish a new projection-domain material decomposition framework with incorporation of the energy spectrum. Methods: Similar to the case of dual-energy CT, the integral of the basis material image in our model is expressed as a linear combination of basis functions, which are polynomials of the high- and low-energy raw projection data. To yield the unknown coefficients of the linear combination, the proposed algorithm minimizes the quadratic error between the high- and low-energy raw projection data and the projection calculated using material images. We evaluate the algorithm with an iodine concentration numerical phantom at different dose and iodine concentration levels. The x-ray energy spectra of the high and low energies are estimated using an indirect transmission method. The derived monochromatic images are compared with the high- and low-energy CT images to demonstrate beam hardening artifact reduction. Quantitative results were measured and compared to the true values. Results: The differences between the true density values used for simulation and those obtained from the monochromatic images are 1.8%, 1.3%, 2.3%, and 2.9% for the dose levels from standard dose to 1/8 dose, and 0.4%, 0.7%, 1.5%, and 1.8% for the four iodine concentration levels from 6 mg/mL to 24 mg/mL. For all of the cases, beam hardening artifacts, especially streaks between dense inserts, are almost completely removed in the monochromatic images. Conclusion: The proposed algorithm provides an effective way to yield material images and artifact-free monochromatic images at different dose levels without the need for phantom calibration or ROI selection. Furthermore, the approach also yields accurate results when the concentration of the iodine insert is very low, suggesting the algorithm is robust in the low-contrast scenario.
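In the spirit of the model described above, the line integral of each basis material can be written as a low-order polynomial in the high- and low-energy raw projections, with coefficients fitted by least squares against projections computed through a spectral model. The sketch below shows only that fitting step; the training pairs are simulated stand-ins, not the authors' spectrum estimation.

```python
import numpy as np

def poly_basis(p_hi, p_lo, order=2):
    """Polynomial basis functions of the high/low-energy projections."""
    cols = [p_hi**i * p_lo**j
            for i in range(order + 1) for j in range(order + 1 - i)]
    return np.column_stack(cols)

def fit_decomposition(p_hi, p_lo, material_line_integral, order=2):
    """Least-squares fit of the coefficients mapping (p_hi, p_lo) to the
    line integral of one basis material (e.g. iodine or water)."""
    B = poly_basis(p_hi, p_lo, order)
    coeffs, *_ = np.linalg.lstsq(B, material_line_integral, rcond=None)
    return coeffs

# Hypothetical training data standing in for the spectrum-based calculation.
rng = np.random.default_rng(2)
p_hi, p_lo = rng.random(500), rng.random(500)
iodine_integral = 0.3 * p_lo - 0.1 * p_hi + 0.05 * p_lo**2   # toy ground truth
c = fit_decomposition(p_hi, p_lo, iodine_integral)
```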
Construction of energy-stable projection-based reduced order models
Kalashnikova, Irina; Barone, Matthew F.; Arunajatesan, Srinivasan; ...
2014-12-15
Our paper aims to unify and extend several approaches for building stable projection-based reduced order models (ROMs) using the energy method and the concept of "energy-stability". Attention is focused on linear time-invariant (LTI) systems. First, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is proposed. The key idea is to apply to the system a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The result of this procedure is a ROM that is energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special inner product, termed the "symmetry inner product". Next, attention is turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, termed the "Lyapunov inner product", is derived. Moreover, it is shown that the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. Projection in this inner product guarantees a ROM that is energy-stable, again for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made, and comparisons are made between the symmetry inner product and the Lyapunov inner product. The performance of ROMs constructed using these inner products is evaluated on several benchmark test cases.
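For a stable LTI system dx/dt = A x, a weighting matrix of the kind used for the Lyapunov inner product can be obtained in a black-box fashion by solving a Lyapunov equation; projecting in the resulting weighted inner product is what yields the energy stability discussed above. A minimal sketch with an arbitrary stable test matrix (not one of the paper's benchmarks):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stable LTI test system dx/dt = A x (eigenvalues in the left half-plane).
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])
Q = np.eye(2)
# Solve A^T P + P A = -Q; P defines the weighted (Lyapunov-type) inner product.
P = solve_continuous_lyapunov(A.T, -Q)

def lyap_inner(u, v, P=P):
    """Weighted inner product <u, v>_P = u^T P v used for stable projection."""
    return float(u @ P @ v)

# The "energy" u^T P u is non-increasing along trajectories of the system.
```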
The spin-partitioned total position-spread tensor: An application to Heisenberg spin chains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fertitta, Edoardo; Paulus, Beate; El Khatib, Muammar
2015-12-28
The spin partition of the Total Position-Spread (TPS) tensor has been performed for one-dimensional Heisenberg chains with open boundary conditions. Both the case of a ferromagnetic (high-spin) and that of an anti-ferromagnetic (low-spin) ground state have been considered. In the case of a low-spin ground state, the use of alternating magnetic couplings allowed us to investigate the effect of spin pairing. The behavior of the spin-partitioned TPS (SP-TPS) tensor as a function of the number of sites turned out to be closely related to the presence of an energy gap between the ground state and the first excited state at the thermodynamic limit. Indeed, a gapped energy spectrum is associated with a linear growth of the SP-TPS tensor with the number of sites. On the other hand, in gapless situations, the spread presents a faster-than-linear growth, resulting in the divergence of its per-site value. Finally, for the case of a high-spin wave function, an analytical expression of the dependence of the SP-TPS on the number of sites n and the total spin projection S_z has been derived.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreiner, S.; Paschal, C.B.; Galloway, R.L.
Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing. The interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered on how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from the artifacts associated with the nearest neighbor algorithm, and a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both the nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage of nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on the information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections will in general differ from that contained in the original 3D data volume. 27 refs., 16 figs., 2 tabs.
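The comparison in this study comes down to how samples along each ray are interpolated before the maximum is taken. A compact sketch of a MIP along one ray with different interpolation orders, using a hypothetical volume array, is shown below; the cubic spline option is only a stand-in for the cubic-convolution kernel of the study.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def mip_along_ray(volume, start, direction, n_samples=200, order=1):
    """Maximum intensity along one ray through a 3D volume.
    order=0 -> nearest neighbor, order=1 -> linear, order=3 -> cubic spline."""
    t = np.linspace(0.0, 1.0, n_samples)
    pts = np.asarray(start, float)[:, None] + np.asarray(direction, float)[:, None] * t
    samples = map_coordinates(volume, pts, order=order, mode='nearest')
    return samples.max()

def mip_voxel_projection(volume, axis=0):
    """Voxel projection along an axis needs no interpolation at all."""
    return volume.max(axis=axis)
```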
Visual exploration of high-dimensional data through subspace analysis and dynamic projections
Liu, S.; Wang, B.; Thiagarajan, J. J.; ...
2015-06-01
Here, we introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
Visual Exploration of High-Dimensional Data through Subspace Analysis and Dynamic Projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, S.; Wang, B.; Thiagarajan, Jayaraman J.
2015-06-01
We introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
Yang, Jian; Zhang, David; Yang, Jing-Yu; Niu, Ben
2007-04-01
This paper develops an unsupervised discriminant projection (UDP) technique for dimensionality reduction of high-dimensional data in small sample size cases. UDP can be seen as a linear approximation of a multimanifolds-based learning framework which takes into account both the local and nonlocal quantities. UDP characterizes the local scatter as well as the nonlocal scatter, seeking to find a projection that simultaneously maximizes the nonlocal scatter and minimizes the local scatter. This characteristic makes UDP more intuitive and more powerful than the most up-to-date method, Locality Preserving Projection (LPP), which considers only the local scatter for clustering or classification tasks. The proposed method is applied to face and palm biometrics and is examined using the Yale, FERET, and AR face image databases and the PolyU palmprint database. The experimental results show that UDP consistently outperforms LPP and PCA and outperforms LDA when the training sample size per class is small. This demonstrates that UDP is a good choice for real-world biometrics applications.
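The optimization described above, maximizing nonlocal scatter while minimizing local scatter, reduces to a generalized eigenvalue problem between two scatter matrices. A schematic version, with a k-nearest-neighbor adjacency defining "local", is sketched below; it illustrates the idea without claiming to reproduce the exact UDP formulation.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def udp_like_projection(X, n_components=2, k=5):
    """Directions maximizing nonlocal scatter over local scatter.
    X : (n_samples, n_features) data matrix."""
    n, d = X.shape
    D = cdist(X, X)
    # Adjacency: 1 for each sample's k nearest neighbours (excluding itself).
    idx = np.argsort(D, axis=1)[:, 1:k + 1]
    H = np.zeros((n, n))
    H[np.repeat(np.arange(n), k), idx.ravel()] = 1.0
    H = np.maximum(H, H.T)                              # symmetrize

    diffs = X[:, None, :] - X[None, :, :]               # pairwise differences
    outer = np.einsum('ijk,ijl->ijkl', diffs, diffs)
    S_local = np.einsum('ij,ijkl->kl', H, outer) / (2.0 * n * n)
    S_total = np.einsum('ijkl->kl', outer) / (2.0 * n * n)
    S_nonlocal = S_total - S_local

    # Generalized eigenproblem: maximize w^T S_nonlocal w / w^T S_local w.
    vals, vecs = eigh(S_nonlocal, S_local + 1e-6 * np.eye(d))
    return vecs[:, ::-1][:, :n_components]              # top directions
```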
Optimal and adaptive methods of processing hydroacoustic signals (review)
NASA Astrophysics Data System (ADS)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed; it estimates the background via median filtering or the method of bilateral spatial contrast.
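As an illustration of the classical adaptive algorithms mentioned, the Capon (minimum-variance distortionless response) spatial spectrum for an equidistant linear array can be written directly from the sample covariance matrix of the array snapshots. The sketch below assumes hypothetical complex array data and half-wavelength sensor spacing.

```python
import numpy as np

def capon_spectrum(snapshots, angles_deg, d_over_lambda=0.5, loading=1e-3):
    """Capon (MVDR) spatial spectrum for a uniform linear array.
    snapshots : (n_sensors, n_snapshots) complex array data."""
    n_sensors, n_snap = snapshots.shape
    R = snapshots @ snapshots.conj().T / n_snap
    R = R + loading * np.trace(R).real / n_sensors * np.eye(n_sensors)  # diagonal loading
    R_inv = np.linalg.inv(R)
    k = np.arange(n_sensors)
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta))      # steering vector
        spectrum.append(1.0 / np.real(a.conj() @ R_inv @ a))
    return np.array(spectrum)
```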
Constructive Learning in Undergraduate Linear Algebra
ERIC Educational Resources Information Center
Chandler, Farrah Jackson; Taylor, Dewey T.
2008-01-01
In this article we describe a project that we used in our undergraduate linear algebra courses to help our students successfully master fundamental concepts and definitions and to generate interest in the course. We describe our philosophy and discuss the project's overall success.
Yao, Jia-wen; Jia, Tie-wu; Zhou, Xiao-nong
2013-08-01
To investigate the activity of scientific research and international collaboration in the National Institute of Parasitic Diseases (NIPD), Chinese Center for Disease Control and Prevention (China CDC) from 2002 to 2012, and to assess the relationship between international collaboration and academic influence at an individual level. Non-bibliometric indicators, including the number and structure of scientific research personnel, the number of projects and funds, visiting frequency, etc., were used to assess the activity of scientific research and international collaboration, and bibliometric indicators, including publications and the h index, were employed to estimate the academic influence of senior professionals in NIPD, China CDC. The relationship between international collaboration and international academic influence in the control and research of parasitic diseases was evaluated using analysis of covariance and generalized linear models. The number of projects, amount of funds and visiting frequency in NIPD, China CDC showed an increasing tendency since the foundation of the institute in 2002, notably after 2011. The h2 index of NIPD, China CDC was 7. Analysis of covariance and generalized linear model analysis revealed that the number of international partners (F = 81.75, P < 0.0001), the number of international projects (F = 22.81, P < 0.0001), the number of national projects (F = 7.30, P = 0.0110), and academic degree (F = 3.80, P = 0.0330) contributed greatly to individual academic influence, while visiting frequency, professional title and length of service had no significant association with the h index. Increasing international collaboration projects and developing long-term, stable international partnerships may enhance the institutional and individual international academic influence in the field of parasitic diseases.
Projection of angular momentum via linear algebra
Johnson, Calvin W.; O'Mara, Kevin D.
2017-12-01
Projection of many-body states with good angular momentum from an initial state is usually accomplished by a three-dimensional integral. Here, we show how projection can instead be done by solving a straightforward system of linear equations. We demonstrate the method and give sample applications to $^{48}$Cr and $^{60}$Fe in the $pf$ shell. This new projection scheme, which is competitive against the standard numerical quadrature, should also be applicable to other quantum numbers such as isospin and particle number.
Projection of angular momentum via linear algebra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Calvin W.; O'Mara, Kevin D.
Projection of many-body states with good angular momentum from an initial state is usually accomplished by a three-dimensional integral. Here, we show how projection can instead be done by solving a straightforward system of linear equations. We demonstrate the method and give sample applications to $^{48}$Cr and $^{60}$Fe in the $pf$ shell. This new projection scheme, which is competitive against the standard numerical quadrature, should also be applicable to other quantum numbers such as isospin and particle number.
Projection of angular momentum via linear algebra
NASA Astrophysics Data System (ADS)
Johnson, Calvin W.; O'Mara, Kevin D.
2017-12-01
Projection of many-body states with good angular momentum from an initial state is usually accomplished by a three-dimensional integral. We show how projection can instead be done by solving a straightforward system of linear equations. We demonstrate the method and give sample applications to 48Cr and 60Fe in the pf shell. This new projection scheme, which is competitive against the standard numerical quadrature, should also be applicable to other quantum numbers such as isospin and particle number.
DOT National Transportation Integrated Search
2009-06-01
The primary objective of this study was to design and test a first flush-based stormwater treatment device for elevated linear transportation projects/roadways that is capable of complying with MS4 regulations. The innovative idea behind the device i...
Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables
NASA Astrophysics Data System (ADS)
Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.
2018-02-01
In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solution set is sought. The dual of this problem—the problem of unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The problem of unconstrained optimization dual to the regularized problem of finding the projection onto the solution set of the system is considered. A connection of duality theory and Newton's method with some known algorithms for projecting onto a standard simplex is shown. Using the example of the transport linear programming problem, it is demonstrated that taking into account the specifics of its constraints can increase the efficiency of calculating the generalized Hessian matrix. Some examples of numerical calculations using MATLAB are presented.
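One of the known simplex-projection algorithms referenced above is the classical sort-and-threshold method for the Euclidean projection onto the standard simplex; a compact version is shown below as context for the duality connection, not as the authors' projective-dual algorithm itself.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}
    via the classical sort-and-threshold algorithm."""
    u = np.sort(v)[::-1]                        # sort in descending order
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

x = project_to_simplex(np.array([0.6, 1.2, -0.3]))   # nonnegative, sums to 1
```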
The global monopole spacetime and its topological charge
NASA Astrophysics Data System (ADS)
Tan, Hongwei; Yang, Jinbo; Zhang, Jingyi; He, Tangmei
2018-03-01
We show that the global monopole spacetime is one of the exact solutions of the Einstein equations by treating the matter field as a non-linear sigma model, without the weak field approximation applied in the original derivation by Barriola and Vilenkin. Furthermore, we find the physical origin of the topological charge in the global monopole spacetime. Finally, we generalize the proposal which generates spacetime from thermodynamical laws to the case of spacetime with global monopole charge. Project supported by the National Natural Science Foundation of China (Grant Nos. 11273009 and 11303006).
Twistor Geometry of Null Foliations in Complex Euclidean Space
NASA Astrophysics Data System (ADS)
Taghavi-Chabert, Arman
2017-01-01
We give a detailed account of the geometric correspondence between a smooth complex projective quadric hypersurface Q^n of dimension n ≥ 3, and its twistor space PT, defined to be the space of all linear subspaces of maximal dimension of Q^n. Viewing complex Euclidean space CE^n as a dense open subset of Q^n, we show how local foliations tangent to certain integrable holomorphic totally null distributions of maximal rank on CE^n can be constructed in terms of complex submanifolds of PT. The construction is illustrated by means of two examples, one involving conformal Killing spinors, the other, conformal Killing-Yano 2-forms. We focus on the odd-dimensional case, and we treat the even-dimensional case only tangentially for comparison.
University of Chicago School Mathematics Project (UCSMP) Algebra. WWC Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2009
2009-01-01
University of Chicago School Mathematics Project (UCSMP) Algebra is a one-year course covering three primary topics: (1) linear and quadratic expressions, sentences, and functions; (2) exponential expressions and functions; and (3) linear systems. Topics from geometry, probability, and statistics are integrated with the appropriate algebra.…
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
A comparative study of linear and nonlinear anomaly detectors for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Goldberg, Hirsh; Nasrabadi, Nasser M.
2007-04-01
In this paper we implement various linear and nonlinear subspace-based anomaly detectors for hyperspectral imagery. First, a dual window technique is used to separate the local area around each pixel into two regions - an inner-window region (IWR) and an outer-window region (OWR). Pixel spectra from each region are projected onto a subspace which is defined by projection bases that can be generated in several ways. Here we use three common pattern classification techniques (Principal Component Analysis (PCA), Fisher Linear Discriminant (FLD) Analysis, and the Eigenspace Separation Transform (EST)) to generate projection vectors. In addition to these three algorithms, the well-known Reed-Xiaoli (RX) anomaly detector is also implemented. Each of the four linear methods is then implicitly defined in a high- (possibly infinite-) dimensional feature space by using a nonlinear mapping associated with a kernel function. Using a common machine-learning technique known as the kernel trick, all dot products in the feature space are replaced with a Mercer kernel function defined in terms of the original input data space. To determine how anomalous a given pixel is, we then project the current test pixel spectrum and the spectral mean vector of the OWR onto the linear and nonlinear projection vectors in order to exploit the statistical differences between the IWR and OWR pixels. Anomalies are detected if the separation between the projection of the current test pixel spectrum and that of the OWR mean spectrum is greater than a certain threshold. Comparisons are made using receiver operating characteristic (ROC) curves.
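For reference, the baseline RX detector used in such comparisons scores a test pixel by its Mahalanobis distance to background statistics estimated from the surrounding (outer-window) pixels. A minimal version, with hypothetical array shapes, is sketched below.

```python
import numpy as np

def rx_score(test_pixel, background):
    """RX anomaly score: Mahalanobis distance of a pixel spectrum to the
    background estimated from outer-window pixels.
    test_pixel : (n_bands,)    background : (n_pixels, n_bands)"""
    mu = background.mean(axis=0)
    cov = np.cov(background, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])          # regularize for stability
    diff = test_pixel - mu
    return float(diff @ np.linalg.solve(cov, diff))

# A pixel is declared anomalous if rx_score exceeds a chosen threshold.
```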
Liu, S.; Bremer, P. -T; Jayaraman, J. J.; ...
2016-06-04
Linear projections are one of the most common approaches to visualize high-dimensional data. Since the space of possible projections is large, existing systems usually select a small set of interesting projections by ranking a large set of candidate projections based on a chosen quality measure. However, while highly ranked projections can be informative, some lower ranked ones could offer important complementary information. Therefore, selection based on ranking may miss projections that are important to provide a global picture of the data. Here, the proposed work fills this gap by presenting the Grassmannian Atlas, a framework that captures the global structures of quality measures in the space of all projections, which enables a systematic exploration of many complementary projections and provides new insights into the properties of existing quality measures.
NASA Astrophysics Data System (ADS)
Good, Peter; Andrews, Timothy; Chadwick, Robin; Dufresne, Jean-Louis; Gregory, Jonathan M.; Lowe, Jason A.; Schaller, Nathalie; Shiogama, Hideo
2016-11-01
nonlinMIP provides experiments that account for state-dependent regional and global climate responses. The experiments have two main applications: (1) to focus understanding of responses to CO2 forcing on states relevant to specific policy or scientific questions (e.g. change under low-forcing scenarios, the benefits of mitigation, or from past cold climates to the present day), or (2) to understand the state dependence (non-linearity) of climate change - i.e. why doubling the forcing may not double the response. State dependence (non-linearity) of responses can be large at regional scales, with important implications for understanding mechanisms and for general circulation model (GCM) emulation techniques (e.g. energy balance models and pattern-scaling methods). However, these processes are hard to explore using traditional experiments, which explains why they have had so little attention in previous studies. Some single model studies have established novel analysis principles and some physical mechanisms. There is now a need to explore robustness and uncertainty in such mechanisms across a range of models (point 2 above), and, more broadly, to focus work on understanding the response to CO2 on climate states relevant to specific policy/science questions (point 1). nonlinMIP addresses this using a simple, small set of CO2-forced experiments that are able to separate linear and non-linear mechanisms cleanly, with a good signal-to-noise ratio - while being demonstrably traceable to realistic transient scenarios. The design builds on the CMIP5 (Coupled Model Intercomparison Project Phase 5) and CMIP6 DECK (Diagnostic, Evaluation and Characterization of Klima) protocols, and is centred around a suite of instantaneous atmospheric CO2 change experiments, with a ramp-up-ramp-down experiment to test traceability to gradual forcing scenarios. In all cases the models are intended to be used with CO2 concentrations rather than CO2 emissions as the input. The understanding gained will help interpret the spread in policy-relevant scenario projections. Here we outline the basic physical principles behind nonlinMIP, and the method of establishing traceability from abruptCO2 to gradual forcing experiments, before detailing the experimental design, and finally some analysis principles. The test of traceability from abruptCO2 to transient experiments is recommended as a standard analysis within the CMIP5 and CMIP6 DECK protocols.
Optimization of subcutaneous vein contrast enhancement
NASA Astrophysics Data System (ADS)
Zeman, Herbert D.; Lovhoiden, Gunnar; Deshmukh, Harshal
2000-05-01
A technique for enhancing the contrast of subcutaneous veins has been demonstrated. This technique uses a near-IR light source and one or more IR-sensitive CCD TV cameras to produce a contrast-enhanced image of the subcutaneous veins. This video image of the veins is projected back onto the patient's skin using an LCD video projector. The use of an IR-transmitting filter in front of the video cameras prevents the visible light from the video projector from creating positive feedback and causing instabilities in the projected image. The demonstration contrast-enhancing illuminator has been tested on adults and children, both Caucasian and African-American, and it enhances veins quite well in all cases. The most difficult cases are those where significant deposits of subcutaneous fat are present, which make the veins invisible under normal room illumination. Recent attempts to see through fat using different IR wavelength bands and both linearly and circularly polarized light were unsuccessful. The key to seeing through fat turns out to be a very diffuse source of IR light. Results on adult and pediatric subjects are shown with this new IR light source.
Palazuelos, Daniel; DaEun Im, Dana; Peckarsky, Matthew; Schwarz, Dan; Farmer, Didi Bertrand; Dhillon, Ranu; Johnson, Ari; Orihuela, Claudia; Hackett, Jill; Bazile, Junior; Berman, Leslie; Ballard, Madeleine; Panjabi, Raj; Ternier, Ralph; Slavin, Sam; Lee, Scott; Selinsky, Steve; Mitnick, Carole Diane
2013-01-01
Introduction Despite decades of experience with community health workers (CHWs) in a wide variety of global health projects, there is no established conceptual framework that structures how implementers and researchers can understand, study and improve their respective programs based on lessons learned by other CHW programs. Objective To apply an original, non-linear framework and case study method, 5-SPICE, to multiple sister projects of a large, international non-governmental organization (NGO), and to other CHW projects. Design Engaging a large group of implementers, researchers and the best available literature, the 5-SPICE framework was refined and then applied to a selection of CHW programs. Insights gleaned from the case study method were summarized in a tabular format named the '5×5-SPICE chart'. This format graphically lists the ways in which essential CHW program elements interact, both positively and negatively, in the implementation field. Results The 5×5-SPICE charts reveal a variety of insights that come from a more complex understanding of how essential CHW project elements interact and influence each other in their unique context. Some have been well described in the literature previously, while others are exclusive to this article. An analysis of how best to compensate CHWs is also offered as an example of the type of insights that this method may yield. Conclusions The 5-SPICE framework is a novel instrument that can be used to guide discussions about CHW projects. Insights from this process can help guide quality improvement efforts, or be used as hypotheses that form the basis of a program's research agenda. Recent experience with research protocols embedded into successfully implemented projects demonstrates how such hypotheses can be rigorously tested. PMID:23561023
ESEA Title I Linking Project. Final Report.
ERIC Educational Resources Information Center
Holmes, Susan E.
The Rasch model for test score equating was compared with three other equating procedures as methods for implementing the norm referenced method (RMC Model A) of evaluating ESEA Title I projects. The Rasch model and its theoretical limitations were described. The three other equating methods used were: linear observed score equating, linear true…
ERIC Educational Resources Information Center
Williams-Candek, Maryellen
2016-01-01
How better to begin the study of linear equations in an algebra class than to determine what students already know about the subject? A seventh-grade algebra class in a suburban school undertook a project early in the school year that was completed before they began studying linear relations and functions. The project, which might have been…
Construction of energy-stable Galerkin reduced order models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalashnikova, Irina; Barone, Matthew Franklin; Arunajatesan, Srinivasan
2013-05-01
This report aims to unify several approaches for building stable projection-based reduced order models (ROMs). Attention is focused on linear time-invariant (LTI) systems. The model reduction procedure consists of two steps: the computation of a reduced basis, and the projection of the governing partial differential equations (PDEs) onto this reduced basis. Two kinds of reduced bases are considered: the proper orthogonal decomposition (POD) basis and the balanced truncation basis. The projection step of the model reduction can be done in two ways: via continuous projection or via discrete projection. First, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of PDEs using continuous projection is proposed. The idea is to apply to the set of PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The resulting ROM will be energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special weighted L2 inner product, termed the "symmetry inner product". Attention is then turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, a weighted L2 inner product termed the "Lyapunov inner product", is derived. The weighting matrix that defines the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. It is shown that a ROM constructed via discrete projection using the Lyapunov inner product will be energy-stable for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made. Comparisons are also made between the symmetry inner product and the Lyapunov inner product. The performance of ROMs constructed using these inner products is evaluated on several benchmark test cases.
The NASA High Speed ASE Project: Computational Analyses of a Low-Boom Supersonic Configuration
NASA Technical Reports Server (NTRS)
Silva, Walter A.; DeLaGarza, Antonio; Zink, Scott; Bounajem, Elias G.; Johnson, Christopher; Buonanno, Michael; Sanetrik, Mark D.; Yoo, Seung Y.; Kopasakis, George; Christhilf, David M.;
2014-01-01
A summary of NASA's High Speed Aeroservoelasticity (ASE) project is provided with a focus on a low-boom supersonic configuration developed by Lockheed-Martin and referred to as the N+2 configuration. The summary includes details of the computational models developed to date including a linear finite element model (FEM), linear unsteady aerodynamic models, structured and unstructured CFD grids, and discussion of the FEM development including sizing and structural constraints applied to the N+2 configuration. Linear results obtained to date include linear mode shapes and linear flutter boundaries. In addition to the tasks associated with the N+2 configuration, a summary of the work involving the development of AeroPropulsoServoElasticity (APSE) models is also discussed.
An enormous hepatitis B virus-related liver disease burden projected in Vietnam by 2025.
Nguyen, Van Thi Thuy; Law, Matthew G; Dore, Gregory J
2008-04-01
Hepatitis B virus (HBV) is the major cause of chronic liver disease in Vietnam. This study aimed to estimate and project chronic HBV prevalence and HBV-related liver cirrhosis (LC) and hepatocellular carcinoma (HCC) for the period 1990-2025. The Vietnamese population for the period 1990-1999 was derived from census data to 1999 and from 2000 to 2025 based on projection data from the United States Census Bureau. Population chronic HBV prevalence for males and females was estimated based on age-specific HBV prevalence from Vietnamese community-based studies. Universal infant HBV vaccination from 2003 was assumed to reduce HBV infection by 90% in subsequent birth cohorts. Incidences of HBV-related LC and HCC by HBV DNA levels from the Taiwanese REVEAL studies were applied to the chronic HBV population to estimate and project HBV-related liver disease burden. Estimated chronic HBV prevalence increased from 6.4 million cases in 1990 to around 8.4 million cases in 2005 and was projected to decrease to 8.0 million by 2025. Estimated HBV-related LC and HCC incidence increased linearly from 21,900 and 9400 in 1990 to 58,650 and 25,000 in 2025. Estimated HBV-related mortality increased from 12,600 in 1990 to 40,000 in 2025. Over the next two decades, universal infant HBV vaccination will reduce chronic HBV prevalence in Vietnam but HBV-related liver disease burden will continue to rise. A national HBV strategy is required to address this expanding burden of liver disease.
Next Linear Collider Home Page
Welcome to the Next Linear Collider (NLC) home page, which provides information about linear colliders in general and about this next-generation linear collider project's mission and design ideas.
NASA Astrophysics Data System (ADS)
Mishra, Aanand Kumar; Singh, Ajay; Bahadur Singh, Akal
2018-06-01
High-rise arch dams are widely used in the development of storage-type hydropower projects because of their economic advantage. Among the different phases considered during the lifetime of a dam, control of the dam's safety and performance is of particular concern. This paper proposes a 3-D finite element method (FEM) for stress and deformation analysis of a double-curvature arch dam, considering the non-linearity of the foundation rock following the Hoek-Brown criterion. The proposed methodology is implemented through the MATLAB scripting language and applied to the double-curvature arch dam proposed for the Budhi Gandaki hydropower project. The stress developed in the foundation rock and the compressive and tensile stresses acting on the dam are investigated and analysed for variations in reservoir level. Deformation at the top of the dam and in the foundation rock is also investigated. In addition, the variation of stress and deformation in the foundation rock is analysed for various rock properties.
Projection Operator: A Step Towards Certification of Adaptive Controllers
NASA Technical Reports Server (NTRS)
Larchev, Gregory V.; Campbell, Stefan F.; Kaneshige, John T.
2010-01-01
One of the major barriers to wider use of adaptive controllers in commercial aviation is the lack of appropriate certification procedures. In order to be certified by the Federal Aviation Administration (FAA), an aircraft controller is expected to meet a set of guidelines on functionality and reliability while not negatively impacting other systems or safety of aircraft operations. Due to their inherent time-variant and non-linear behavior, adaptive controllers cannot be certified via the metrics used for linear conventional controllers, such as gain and phase margin. The Projection Operator is a robustness augmentation technique that bounds the output of a non-linear adaptive controller while conforming to the Lyapunov stability rules. It can also be used to limit the control authority of the adaptive component so that the said control authority can be arbitrarily close to that of a linear controller. In this paper we will present the results of applying the Projection Operator to a Model-Reference Adaptive Controller (MRAC), varying the amount of control authority, and comparing the controller's performance and stability characteristics with those of a linear controller. We will also show how adjusting Projection Operator parameters can make it easier for the controller to satisfy the certification guidelines by enabling a tradeoff between the controller's performance and robustness.
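For readers unfamiliar with the technique, one common textbook form of the projection operator used in model-reference adaptive control is sketched below; the quadratic boundary function, the parameter bound theta_max, and the tolerance eps are assumptions made for illustration and are not claimed to be the exact formulation used in the paper.

```python
import numpy as np

def proj(theta, y, theta_max=10.0, eps=0.1):
    """One common form of the projection operator: leaves the adaptation rate y
    unchanged inside the allowed parameter set and smoothly removes its outward
    component near the boundary, keeping the parameter estimate theta bounded."""
    f = (theta @ theta - theta_max**2) / (eps * theta_max**2)   # convex boundary function
    grad_f = 2.0 * theta / (eps * theta_max**2)
    if f > 0.0 and y @ grad_f > 0.0:
        # subtract the component of y along grad_f, scaled by how far past the boundary we are
        return y - grad_f * (grad_f @ y) * f / (grad_f @ grad_f)
    return y

theta = np.array([9.8, 1.0])          # estimate near the bound
y = np.array([5.0, 0.0])              # raw adaptation rate pushing outward
print(proj(theta, y))                 # projected (reduced) adaptation rate
```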
Linearized motion estimation for articulated planes.
Datta, Ankur; Sheikh, Yaser; Kanade, Takeo
2011-04-01
In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
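As a minimal illustration of the kind of solve described above, an equality-constrained linear least-squares problem handled through a Karush-Kuhn-Tucker system, the sketch below solves a generic constrained least-squares problem; the toy matrices stand in for the paper's homography parameterization and articulation constraints.

```python
import numpy as np

def constrained_lls(A, b, C, d):
    """Solve min ||A x - b||^2 subject to C x = d via the KKT system."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]            # x; sol[n:] holds the Lagrange multipliers

# Toy example: fit parameters whose first two entries are constrained to be equal.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 4)); b = rng.normal(size=20)
C = np.array([[1.0, -1.0, 0.0, 0.0]]); d = np.array([0.0])
x = constrained_lls(A, b, C, d)
print(x[0] - x[1])            # ~0 up to round-off
```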
Shear-Flow Instability Saturation by Stable Modes: Hydrodynamics and Gyrokinetics
NASA Astrophysics Data System (ADS)
Fraser, Adrian; Pueschel, M. J.; Terry, P. W.; Zweibel, E. G.
2017-10-01
We present simulations of shear-driven instabilities, focusing on the impact of nonlinearly excited, large-scale, linearly stable modes on the nonlinear cascade, momentum transport, and secondary instabilities. Stable modes, which have previously been shown to significantly affect instability saturation [Fraser et al. PoP 2017], are investigated in a collisionless, gyrokinetic, periodic zonal flow using the
DEMNUni: ISW, Rees-Sciama, and weak-lensing in the presence of massive neutrinos
NASA Astrophysics Data System (ADS)
Carbone, Carmelita; Petkova, Margarita; Dolag, Klaus
2016-07-01
We present, for the first time in the literature, a full reconstruction of the total (linear and non-linear) ISW/Rees-Sciama effect in the presence of massive neutrinos, together with its cross-correlations with CMB-lensing and weak-lensing signals. The present analyses make use of all-sky maps extracted via ray-tracing across the gravitational potential distribution provided by the ``Dark Energy and Massive Neutrino Universe'' (DEMNUni) project, a set of large-volume, high-resolution cosmological N-body simulations, where neutrinos are treated as separate collisionless particles. We correctly recover, at 1-2% accuracy, the linear predictions from CAMB. Concerning the CMB-lensing and weak-lensing signals, we also recover, with similar accuracy, the signal predicted by Boltzmann codes, once non-linear neutrino corrections to HALOFIT are accounted for. Interestingly, in the ISW/Rees-Sciama signal, and its cross correlation with lensing, we find an excess of power with respect to the massless case, due to free streaming neutrinos, roughly at the transition scale between the linear and non-linear regimes. The excess is ~ 5 - 10% at l ~ 100 for the ISW/Rees-Sciama auto power spectrum, depending on the total neutrino mass Mν, and becomes a factor of ~ 4 for Mν = 0.3 eV, at l ~ 600, for the ISW/Rees-Sciama cross power with CMB-lensing. This effect should be taken into account for the correct estimation of the CMB temperature bispectrum in the presence of massive neutrinos.
NASA Technical Reports Server (NTRS)
Howard, Joseph M.; Ha, Kong Q.
2004-01-01
This is part two of a series on the optical modeling activities for JWST. Starting with the linear optical model discussed in part one, we develop centroid and wavefront error sensitivities for the special case of a segmented optical system such as JWST, where the primary mirror consists of 18 individual segments. Our approach extends standard sensitivity matrix methods used for systems consisting of monolithic optics, where the image motion is approximated by averaging ray coordinates at the image and residual wavefront error is determined with global tip/tilt removed. We develop an exact formulation using the linear optical model, and extend it to cover multiple field points for performance prediction at each instrument aboard JWST. This optical model is then driven by thermal and dynamic structural perturbations in an integrated modeling environment. Results are presented.
Single link flexible beam testbed project. Thesis
NASA Technical Reports Server (NTRS)
Hughes, Declan
1992-01-01
This thesis describes the single-link flexible beam testbed at the CLaMS laboratory in terms of its hardware, software, and linear model, and presents two controllers: each includes a hub-angle proportional-derivative (PD) feedback compensator, and one is augmented by a second static-gain full-state feedback loop based upon a synthesized strictly positive real (SPR) output, which increases the damping ratios of specific flexible-mode poles relative to the PD-only case and hence reduces unwanted residual oscillation. Restricting the full-state feedback gains so as to produce an SPR open-loop transfer function ensures that the associated compensator has an infinite gain margin and that its phase remains within (-90, 90) degrees. Both experimental and simulation data are evaluated in order to compare observer performance when applied to the real testbed and to the linear model when uncompensated flexible modes are included.
Study on Resources Assessment of Coal Seams covered by Long-Distance Oil & Gas Pipelines
NASA Astrophysics Data System (ADS)
Han, Bing; Fu, Qiang; Pan, Wei; Hou, Hanfang
2018-01-01
The assessment of mineral resources covered by construction projects plays an important role in reducing the overlaying of important mineral resources and ensuring the smooth implementation of construction projects. Taking a planned long-distance gas pipeline as an example, the assessment method and principles for coal resources covered by linear projects are introduced. The areas covered by multiple coal seams are determined according to the linear projection method, and the resources covered directly and indirectly by the pipeline are estimated using an area segmentation method on the basis of the original blocks. The research results can provide a reference for route optimization of such projects and for compensation of mining rights.
NASA Astrophysics Data System (ADS)
Kuleshov, Alexander S.; Katasonova, Vera A.
2018-05-01
The problem of rolling without slipping of a rotationally symmetric rigid body on a sphere is considered. The rolling body is assumed to be subjected to forces whose resultant is directed from the center of mass G of the body to the center O of the sphere and depends only on the distance between G and O. In this case the solution of the problem reduces to solving a second-order linear differential equation for the projection of the angular velocity of the body onto its axis of symmetry. Using the Kovacic algorithm we search for Liouvillian solutions of the corresponding second-order differential equation in the case when the rolling body is a dynamically symmetric ball.
Cho, HyunGi; Yeon, Suyong; Choi, Hyunga; Doh, Nakju
2018-01-01
In a group of general geometric primitives, plane-based features are widely used for indoor localization because of their robustness against noise. However, a lack of linearly independent planes may lead to a non-trivial estimation problem. This in turn can cause a degenerate state in which not all states can be estimated. To solve this problem, this paper first proposed a degeneracy detection method. A compensation method that could fix orientations by projecting an inertial measurement unit's (IMU) information was then explained. Experiments were conducted using an IMU-Kinect v2 integrated sensor system prone to falling into degenerate cases owing to its narrow field-of-view. Results showed that the proposed framework could enhance map accuracy by successful detection and compensation of degenerated orientations. PMID:29565287
Total variation-based neutron computed tomography
NASA Astrophysics Data System (ADS)
Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick
2018-05-01
We perform the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We present the effectiveness of the algorithm in the significantly low-angular sampling case using synthetic test problems as well as data obtained from a high flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles are used.
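A minimal 1-D sketch of a split Bregman iteration for a TV-penalized least-squares reconstruction is shown below, with a deliberately truncated conjugate-gradient inner solve to mimic the inexact linear solves mentioned above; the random measurement matrix and the penalty weights mu and lam are assumptions, and the real problem uses a tomographic projection operator and 2-D/3-D gradients rather than this toy setup.

```python
import numpy as np
from scipy.sparse.linalg import cg

def shrink(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def tv_split_bregman(A, b, mu=50.0, lam=1.0, n_iter=50, cg_iters=5):
    """Sketch of split Bregman for min_x mu/2 ||A x - b||^2 + ||D x||_1 (1-D TV),
    using a deliberately inexact CG solve for the x-update."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)                     # forward-difference operator
    M = mu * A.T @ A + lam * D.T @ D
    x = np.zeros(n); d = np.zeros(n - 1); v = np.zeros(n - 1)
    for _ in range(n_iter):
        rhs = mu * A.T @ b + lam * D.T @ (d - v)
        x, _ = cg(M, rhs, x0=x, maxiter=cg_iters)      # few inner iterations suffice
        Dx = D @ x
        d = shrink(Dx + v, 1.0 / lam)                  # soft-thresholding step
        v = v + Dx - d                                 # Bregman (error-forgetting) update
    return x

rng = np.random.default_rng(0)
x_true = np.repeat([0.0, 1.0, 0.3], 40)                # piecewise-constant "phantom"
A = rng.normal(size=(60, x_true.size))                 # under-determined toy measurement matrix
b = A @ x_true
x_rec = tv_split_bregman(A, b)                         # reconstruction (quality depends on mu/lam)
print(np.round(np.abs(x_rec - x_true).mean(), 3))
```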
NASA Astrophysics Data System (ADS)
Pueyo, Laurent
2016-01-01
A new class of high-contrast image analysis algorithms that empirically fit and subtract systematic noise has led to recent discoveries of faint exoplanet/substellar companions and scattered-light images of circumstellar disks. The consensus emerging in the community is that these methods are extremely efficient at enhancing the detectability of faint astrophysical signals, but they generally create systematic biases in the observed properties. This poster provides a solution to this outstanding problem. We present an analytical derivation of a linear expansion that captures the impact of astrophysical over/self-subtraction in current image analysis techniques. We examine the general case for which the reference images of the astrophysical scene move azimuthally and/or radially across the field of view as a result of the observation strategy. Our new method is based on perturbing the covariance matrix underlying any least-squares speckle problem and propagating this perturbation through the data analysis algorithm. This work is presented in the framework of Karhunen-Loeve Image Processing (KLIP) but it can be easily generalized to methods relying on linear combinations of images (instead of eigen-modes). Based on this linear expansion, obtained in the most general case, we then demonstrate practical applications of this new algorithm. We first consider the case of the spectral extraction of faint point sources in IFS data and illustrate, using public Gemini Planet Imager commissioning data, that our novel perturbation-based Forward Modeling (which we named KLIP-FM) can indeed alleviate algorithmic biases. We then apply KLIP-FM to the detection of point sources and show how it decreases the rate of false negatives while keeping the rate of false positives unchanged when compared to classical KLIP. This can potentially have important consequences for the design of follow-up strategies of ongoing direct imaging surveys.
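The basic KLIP step that the forward model perturbs is a projection of the science frame onto the Karhunen-Loeve modes of a reference stack. A minimal sketch of that projection (not of the KLIP-FM perturbation itself) might look like the following, with hypothetical array shapes and synthetic data.

```python
import numpy as np

def klip_subtract(science, references, k_modes=5):
    """Minimal KLIP-style speckle subtraction: project the science frame onto
    the first k Karhunen-Loeve modes of the reference stack and subtract."""
    R = references - references.mean(axis=1, keepdims=True)   # mean-subtract each frame
    s = science - science.mean()
    C = R @ R.T                                                # reference covariance matrix
    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1][:k_modes]
    Z = (evecs[:, order].T @ R) / np.sqrt(evals[order])[:, None]  # KL modes in pixel space
    return s - Z.T @ (Z @ s)                                   # residual after projection

rng = np.random.default_rng(0)
refs = rng.normal(size=(30, 400))                  # 30 reference frames, 400 pixels each
sci = refs[0] + 0.05 * rng.normal(size=400)        # science frame sharing the speckle pattern
residual = klip_subtract(sci, refs, k_modes=5)
```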
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seong W. Lee
During this reporting period, the literature survey, including the gasifier temperature measurement literature, the background of ultrasonic cleaning applications, and the spray coating process, was completed. The gasifier simulator (cold model) testing was successfully conducted. Four factors (blower voltage, ultrasonic application, injection time intervals, particle weight) were considered as significant factors that affect the temperature measurement. Analysis of Variance (ANOVA) was applied to analyze the test data; the analysis shows that all four factors are significant to the temperature measurements in the gasifier simulator (cold model). The regression analysis for the case with the normalized room temperature shows that a linear model fits the temperature data with 82% accuracy (18% error), while the case without the normalized room temperature shows 72.5% accuracy (27.5% error). The nonlinear regression analysis indicates a better fit than the linear regression: its accuracy is 88.7% (11.3% error) for the normalized room temperature case. The hot model thermocouple sleeve design and fabrication are completed, as are the design and fabrication of the gasifier simulator (hot model). System tests of the gasifier simulator (hot model) have been conducted and some modifications have been made. Based on the system tests and results analysis, the gasifier simulator (hot model) has met the proposed design requirements and is ready for system testing. The ultrasonic cleaning method is under evaluation and will be further studied for the gasifier simulator (hot model) application. The progress of this project has been on schedule.
Simic, Vladimir; Dimitrijevic, Branka
2015-02-01
An interval linear programming approach is used to formulate and comprehensively test a model for optimal long-term planning of vehicle recycling in the Republic of Serbia. The proposed model is applied to a numerical case study: a 4-year planning horizon (2013-2016) is considered, three legislative cases and three scrap metal price trends are analysed, availability of final destinations for sorted waste flows is explored. Potential and applicability of the developed model are fully illustrated. Detailed insights on profitability and eco-efficiency of the projected contemporary equipped vehicle recycling factory are presented. The influences of the ordinance on the management of end-of-life vehicles in the Republic of Serbia on the vehicle hulks procuring, sorting generated material fractions, sorted waste allocation and sorted metals allocation decisions are thoroughly examined. The validity of the waste management strategy for the period 2010-2019 is tested. The formulated model can create optimal plans for procuring vehicle hulks, sorting generated material fractions, allocating sorted waste flows and allocating sorted metals. Obtained results are valuable for supporting the construction and/or modernisation process of a vehicle recycling system in the Republic of Serbia. © The Author(s) 2015.
The Magsat precision vector magnetometer
NASA Technical Reports Server (NTRS)
Acuna, M. H.
1980-01-01
This paper examines the Magsat precision vector magnetometer which is designed to measure projections of the ambient field in three orthogonal directions. The system contains a highly stable and linear triaxial fluxgate magnetometer with a dynamic range of ±2000 nT (1 nT = 10^-9 Wb/m^2). The magnetometer electronics, analog-to-digital converter, and digitally controlled current sources are implemented with redundant designs to avoid a loss of data in case of failures. Measurements are carried out with an accuracy of ±1 part in 64,000 in magnitude and 5 arcsec in orientation (1 arcsec = 0.00028 deg).
A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester
2010-01-01
A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides A x = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
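The Galerkin projection preprocessing step can be pictured as follows: a basis retained from earlier solves is used to build an improved initial guess for the next right-hand side before (re)starting GMRES. The sketch below uses a toy two-vector basis; the actual algorithm enriches the subspace with approximate eigenvectors and employs more elaborate reuse strategies.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def galerkin_warm_start(A, b, V):
    """Galerkin-projection preprocessing: project the new right-hand side onto a
    stored subspace span(V) to obtain an improved initial guess for GMRES."""
    AV = A @ V
    y = np.linalg.solve(V.T @ AV, V.T @ b)     # small reduced system
    return V @ y

rng = np.random.default_rng(1)
n = 200
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))
b1, b2 = rng.normal(size=n), rng.normal(size=n)

x1, _ = gmres(A, b1, atol=1e-10)                          # first solve
V, _ = np.linalg.qr(np.column_stack([x1, b1]))            # toy reuse basis from the first solve
x0 = galerkin_warm_start(A, b2, V)                        # warm start for the second right-hand side
x2, info = gmres(A, b2, x0=x0, atol=1e-10)
```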
Linear Collider Physics Resource Book Snowmass 2001
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronan
The American particle physics community can look forward to a well-conceived and vital program of experimentation for the next ten years, using both colliders and fixed target beams to study a wide variety of pressing questions. Beyond 2010, these programs will be reaching the end of their expected lives. The CERN LHC will provide an experimental program of the first importance. But beyond the LHC, the American community needs a coherent plan. The Snowmass 2001 Workshop and the deliberations of the HEPAP subpanel offer a rare opportunity to engage the full community in planning our future for the next decade or more. A major accelerator project requires a decade from the beginning of an engineering design to the receipt of the first data. So it is now time to decide whether to begin a new accelerator project that will operate in the years soon after 2010. We believe that the world high-energy physics community needs such a project. With the great promise of discovery in physics at the next energy scale, and with the opportunity for the uncovering of profound insights, we cannot allow our field to contract to a single experimental program at a single laboratory in the world. We believe that an e+e- linear collider is an excellent choice for the next major project in high-energy physics. Applying experimental techniques very different from those used at hadron colliders, an e+e- linear collider will allow us to build on the discoveries made at the Tevatron and the LHC, and to add a level of precision and clarity that will be necessary to understand the physics of the next energy scale. It is not necessary to anticipate specific results from the hadron collider programs to argue for constructing an e+e- linear collider; in any scenario that is now discussed, physics will benefit from the new information that e+e- experiments can provide. This last point merits further emphasis. If a new accelerator could be designed and built in a few years, it would make sense to wait for the results of each accelerator before planning the next one. Thus, we would wait for the results from the Tevatron before planning the LHC experiments, and wait for the LHC before planning any later stage. In reality accelerators require a long time to construct, and they require such specialized resources and human talent that delay can cripple what would be promising opportunities. In any event, we believe that the case for the linear collider is so compelling and robust that we can justify this facility on the basis of our current knowledge, even before the Tevatron and LHC experiments are done. The physics prospects for the linear collider have been studied intensively for more than a decade, and arguments for the importance of its experimental program have been developed from many different points of view. This book provides an introduction and a guide to this literature. We hope that it will allow physicists new to the consideration of linear collider physics to start from their own personal perspectives and develop their own assessments of the opportunities afforded by a linear collider.
Feng, Cun-Fang; Xu, Xin-Jian; Wang, Sheng-Jun; Wang, Ying-Hai
2008-06-01
We study projective-anticipating, projective, and projective-lag synchronization of time-delayed chaotic systems on random networks. We relax some limitations of previous work, where projective-anticipating and projective-lag synchronization can be achieved only on two coupled chaotic systems. In this paper, we realize projective-anticipating and projective-lag synchronization on complex dynamical networks composed of a large number of interconnected components. At the same time, although previous work studied projective synchronization on complex dynamical networks, the dynamics of the nodes are coupled partially linear chaotic systems. In this paper, the dynamics of the nodes of the complex networks are time-delayed chaotic systems without the limitation of the partial linearity. Based on the Lyapunov stability theory, we suggest a generic method to achieve the projective-anticipating, projective, and projective-lag synchronization of time-delayed chaotic systems on random dynamical networks, and we find both its existence and sufficient stability conditions. The validity of the proposed method is demonstrated and verified by examining specific examples using Ikeda and Mackey-Glass systems on Erdos-Renyi networks.
Linear response theory for annealing of radiation damage in semiconductor devices
NASA Technical Reports Server (NTRS)
Litovchenko, Vitaly
1988-01-01
A theoretical study of the radiation/annealing response of MOS ICs is described. Although many experiments have been performed in this field, no comprehensive theory dealing with radiation/annealing response has been proposed. Many attempts have been made to apply linear response theory, but no theoretical foundation has been presented. The linear response theory outlined here is capable of describing a broad range of radiation/annealing response phenomena in MOS ICs, in particular, both simultaneous irradiation and annealing, as well as short- and long-term annealing, including the case when annealing is nearing completion. For the first time, a simple procedure is devised to determine the response function from experimental radiation/annealing data. In addition, this procedure enables us to study the effect of variable temperature and dose rate, effects which are of interest in spaceflight. In the past, the shift in threshold potential due to radiation/annealing has usually been assumed to depend on one variable: the time lapse between an impulse dose and the time of observation. While such a suggestion of uniformity in time is certainly true for a broad range of radiation/annealing phenomena, it may not hold for some ranges of the variables of interest (temperature, dose rate, etc.). A response function is proposed which depends on two variables: the time of observation and the time of the impulse dose. This dependence on two variables allows us to extend the theory to the treatment of a variable dose rate. Finally, the linear theory is generalized to the case in which the response is nonlinear with impulse dose, but is proportional to some impulse function of dose. A method to determine both the impulse and response functions is presented.
CALiPER Report 21.3: Cost-Effectiveness of Linear (T8) LED Lamps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Naomi J.; Perrin, Tess E.; Royer, Michael P.
2014-05-27
Meeting performance expectations is important for driving adoption of linear LED lamps, but cost-effectiveness may be an overriding factor in many cases. Linear LED lamps cost more initially than fluorescent lamps, but energy and maintenance savings may mean that the life-cycle cost is lower. This report details a series of life-cycle cost simulations that compared a two-lamp troffer using LED lamps (38 W total power draw) or fluorescent lamps (51 W total power draw) over a 10-year study period. Variables included LED system cost ($40, $80, or $120), annual operating hours (2,000 hours or 4,000 hours), LED installation time (15 minutes or 30 minutes), and melded electricity rate ($0.06/kWh, $0.12/kWh, $0.18/kWh, or $0.24/kWh). A full factorial of simulations allows users to interpolate between these values to aid in making rough estimates of economic feasibility for their own projects. In general, while their initial cost premium remains high, linear LED lamps are more likely to be cost-effective when electric utility rates are higher than average and hours of operation are long, and if their installation time is shorter.
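A back-of-the-envelope version of such a life-cycle cost comparison is sketched below; the labor rate and the fluorescent hardware cost are assumptions not taken from the report, and maintenance/relamping and discounting are omitted for brevity.

```python
def life_cycle_cost(system_cost, watts, hours_per_year, rate_per_kwh,
                    install_minutes, labor_per_hour=50.0, years=10):
    """Rough life-cycle cost of one troffer over the study period:
    initial hardware + installation labor + energy. Maintenance/relamping
    and discounting are omitted to keep the sketch short."""
    energy = watts / 1000.0 * hours_per_year * years * rate_per_kwh
    labor = install_minutes / 60.0 * labor_per_hour
    return system_cost + labor + energy

# One scenario from the report's grid of variables (labor and fluorescent costs are assumptions).
led = life_cycle_cost(80.0, 38.0, 4000, 0.12, 30)
fluo = life_cycle_cost(10.0, 51.0, 4000, 0.12, 15)
print(f"LED: ${led:.0f}   Fluorescent: ${fluo:.0f}")
```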
Automated Interval velocity picking for Atlantic Multi-Channel Seismic Data
NASA Astrophysics Data System (ADS)
Singh, Vishwajit
2016-04-01
This paper describes the challenges in developing and testing a fully automated routine for measuring interval velocities from multi-channel seismic data. Several approaches are employed to build an interactive algorithm that picks interval velocities from 1000-5000 continuous normal moveout (NMO) corrected gathers, replacing the interpreter's effort of manually picking coherent reflections. The detailed steps and pitfalls of picking interval velocities from seismic reflection time measurements are described for these approaches. The key ingredients used at the velocity analysis stage are the semblance grid and a starting model of interval velocity. Basin-Hopping optimization is employed to drive the misfit function toward local minima. A SLiding-Overlapping Window (SLOW) algorithm is designed to mitigate the non-linearity and ill-posedness of the root-mean-square velocity. Synthetic data case studies address the performance of the velocity picker, generating models that fit the semblance peaks. A similar linear relationship between average depth and reflection time for the synthetic model and the estimated models suggests using the picked interval velocities as the starting model for full waveform inversion to obtain a more accurate velocity structure of the subsurface. The challenges can be categorized as (1) building an accurate starting model for projecting a more accurate velocity structure of the subsurface, and (2) improving the computational cost of the algorithm by pre-calculating the semblance grid to make auto-picking more feasible.
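The ill-posedness referred to above is the classical difficulty of converting stacking (RMS) velocities to interval velocities, illustrated by the Dix relation below; this is a standard textbook relation used here only to show the error amplification, not necessarily the exact transform inside the SLOW algorithm, and the times and velocities are illustrative.

```python
import numpy as np

def dix_interval_velocity(v_rms, t0):
    """Classical Dix conversion from stacking (RMS) velocities to interval
    velocities; small errors in v_rms are strongly amplified, which is the
    ill-posedness an automatic picker has to cope with."""
    num = v_rms[1:] ** 2 * t0[1:] - v_rms[:-1] ** 2 * t0[:-1]
    den = t0[1:] - t0[:-1]
    return np.sqrt(num / den)

t0 = np.array([0.5, 1.0, 1.5, 2.0])              # two-way times, s (illustrative)
v_rms = np.array([1500., 1800., 2100., 2400.])   # stacking velocities, m/s
print(dix_interval_velocity(v_rms, t0))          # interval velocities between picks
```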
Prediction of Cancer Incidence and Mortality in Korea, 2018.
Jung, Kyu-Won; Won, Young-Joo; Kong, Hyun-Joo; Lee, Eun Sook
2018-04-01
This study aimed to report on cancer incidence and mortality for the year 2018 to estimate Korea's current cancer burden. Cancer incidence data from 1999 to 2015 were obtained from the Korea National Cancer Incidence Database, and cancer mortality data from 1993 to 2016 were acquired from Statistics Korea. Cancer incidence and mortality were projected by fitting a linear regression model to observed age-specific cancer rates against observed years, then multiplying the projected age-specific rates by the age-specific population. The Joinpoint regression model was used to determine the year at which the linear trend changed significantly, and only data from the latest trend were used. A total of 204,909 new cancer cases and 82,155 cancer deaths are expected to occur in Korea in 2018. The most common cancer sites were lung, followed by stomach, colorectal, breast and liver. These five cancers represent half of the overall burden of cancer in Korea. For mortality, the most common sites were lung cancer, followed by liver, colorectal, stomach and pancreas. The incidence rates of all cancers in Korea are estimated to decrease gradually, mainly due to the decrease in thyroid cancer. These up-to-date estimates of the cancer burden in Korea could be an important resource for planning and evaluation of cancer-control programs.
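The projection procedure described above (fit a straight line to each age-specific rate series, extrapolate to the target year, and multiply by the projected population) can be sketched as follows; the age groups, rates, and populations are hypothetical illustrations, not the Korean registry data.

```python
import numpy as np

def project_cases(years, rates_by_age, target_year, target_population):
    """Fit a straight line to each age-specific rate series (per 100,000),
    extrapolate to the target year, and multiply by the age-specific population."""
    cases = 0.0
    for age_group, rates in rates_by_age.items():
        slope, intercept = np.polyfit(years, rates, 1)
        projected_rate = max(slope * target_year + intercept, 0.0)
        cases += projected_rate / 1e5 * target_population[age_group]
    return cases

# Hypothetical two-age-group illustration (not the registry data).
years = np.arange(2006, 2016)
rates_by_age = {"50-59": 300 + 2.0 * (years - 2006),
                "60-69": 700 + 5.0 * (years - 2006)}
population_2018 = {"50-59": 8.3e6, "60-69": 5.1e6}
print(round(project_cases(years, rates_by_age, 2018, population_2018)))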
Time-frequency filtering and synthesis from convex projections
NASA Astrophysics Data System (ADS)
White, Langford B.
1990-11-01
This paper describes the application of the theory of projections onto convex sets to time-frequency filtering and synthesis problems. We show that the class of Wigner-Ville Distributions (WVD) of L2 signals forms the boundary of a closed convex subset of L2(R2). This result is obtained by considering the convex set of states on the Heisenberg group, of which the ambiguity functions form the extreme points. The form of the projection onto the set of WVDs is deduced. Various linear and non-linear filtering operations are incorporated by formulating them as convex projections. An example algorithm for simultaneous time-frequency filtering and synthesis is suggested.
An efficient variable projection formulation for separable nonlinear least squares problems.
Gan, Min; Li, Han-Xiong
2014-05-01
We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving the nonlinear least squares problems involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than those of previous ones. The Levenberg-Marquardt algorithm using finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves significant reduction in computing time.
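A generic variable projection residual for a two-exponential model is sketched below to illustrate the idea of eliminating the linear coefficients inside the nonlinear fit; the paper's actual contribution, a more efficient projection functional based on matrix decomposition, is not reproduced here, and the model and data are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Separable model: y(t) = c1*exp(-alpha1*t) + c2*exp(-alpha2*t)
def basis(alpha, t):
    return np.exp(-np.outer(t, alpha))          # columns are the nonlinear basis functions

def varpro_residual(alpha, t, y):
    """Variable projection: eliminate the linear coefficients by a least-squares
    solve, leaving a residual that depends only on the nonlinear parameters."""
    Phi = basis(alpha, t)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return y - Phi @ c

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 100)
y = 2.0 * np.exp(-0.7 * t) + 1.0 * np.exp(-3.0 * t) + 0.01 * rng.normal(size=t.size)
fit = least_squares(varpro_residual, x0=[0.5, 2.0], args=(t, y))
print(fit.x)        # recovered nonlinear decay rates
```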
NASA Astrophysics Data System (ADS)
Walker, Lisa Jean
The implementation process is critical to the success of educational innovations. Project-based science is an innovation designed to support students' science learning. Science fair is a pervasive school practice in which students exhibit science projects. Little is known about how science fair may affect the implementation of reform efforts in science education. This study explores the relationship of science fair and project-based science in the classrooms of three science teachers. Two theories are used to understand science fair as an instructional practice. Cultural historical activity theory supports an analysis of the origins and development of science fair. The idea of communities of practice supports a focus on why and how educational practitioners participate in science fair and what meanings the activity holds for them. The study identifies five historically-based design themes that have shaped science fair: general science, project method, scientific method, extra-curricular activity, and laboratory science. The themes provide a new framework for describing teachers' classroom practices for science fair activities and support analysis of the ways their practices incorporate aspects of project-based science. Three case studies in Chicago present ethnographic descriptions of science fair practices within the context of school communities. One focuses on the scientific method as a linear process for doing science, another on knowledge generation through laboratory experiments, and the third on student ability to engage in open-ended inquiry. One teacher reinvents a project-based science curriculum to strengthen students' laboratory-based science fair projects, while another reinvents science fair to teach science as inquiry. In each case, science fair is part of the school's efforts to improve science instruction. The cases suggest that reform efforts help to perpetuate science fair practice. To support systemic improvements in science education, this study recommends that science fair be recognized as a classroom instructional activity---rather than an extra-curricular event---and part of the system of science education in this country. If science fair is to reflect new ideas in science education, direct intervention in the practice is necessary. This study---including both the history and examples of current practice---provides valuable insights for reconsidering science fair's design.
An efficient inverse radiotherapy planning method for VMAT using quadratic programming optimization.
Hoegele, W; Loeschel, R; Merkle, N; Zygmanski, P
2012-01-01
The purpose of this study is to investigate the feasibility of an inverse planning optimization approach for the Volumetric Modulated Arc Therapy (VMAT) based on quadratic programming and the projection method. The performance of this method is evaluated against a reference commercial planning system (eclipse(TM) for rapidarc(TM)) for clinically relevant cases. The inverse problem is posed in terms of a linear combination of basis functions representing arclet dose contributions and their respective linear coefficients as degrees of freedom. MLC motion is decomposed into basic motion patterns in an intuitive manner leading to a system of equations with a relatively small number of equations and unknowns. These equations are solved using quadratic programming under certain limiting physical conditions for the solution, such as the avoidance of negative dose during optimization and Monitor Unit reduction. The modeling by the projection method assures a unique treatment plan with beneficial properties, such as the explicit relation between organ weightings and the final dose distribution. Clinical cases studied include prostate and spine treatments. The optimized plans are evaluated by comparing isodose lines, DVH profiles for target and normal organs, and Monitor Units to those obtained by the clinical treatment planning system eclipse(TM). The resulting dose distributions for a prostate (with rectum and bladder as organs at risk), and for a spine case (with kidneys, liver, lung and heart as organs at risk) are presented. Overall, the results indicate that similar plan qualities for quadratic programming (QP) and rapidarc(TM) could be achieved at significantly more efficient computational and planning effort using QP. Additionally, results for the quasimodo phantom [Bohsung et al., "IMRT treatment planning: A comparative inter-system and inter-centre planning exercise of the estro quasimodo group," Radiother. Oncol. 76(3), 354-361 (2005)] are presented as an example for an extreme concave case. Quadratic programming is an alternative approach for inverse planning which generates clinically satisfying plans in comparison to the clinical system and constitutes an efficient optimization process characterized by uniqueness and reproducibility of the solution.
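The essential ingredient, a quadratic objective minimized over nonnegative beam/arclet weights, can be illustrated with a toy nonnegative least-squares problem as below; the dose-influence matrix and prescription are hypothetical, and this is not the paper's full projection-method formulation with organ weightings and Monitor Unit constraints.

```python
import numpy as np
from scipy.optimize import nnls

# Toy fluence/arclet optimization: dose = D @ w with nonnegative weights w.
rng = np.random.default_rng(3)
n_voxels, n_arclets = 60, 12
D = rng.uniform(0.0, 1.0, size=(n_voxels, n_arclets))    # hypothetical dose-influence matrix
d_target = np.zeros(n_voxels)
d_target[:20] = 2.0                                      # prescribed dose to the "target" voxels

w, residual = nnls(D, d_target)    # quadratic objective with w >= 0 (no negative dose weights)
print(w.round(2), residual)
```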
SU-F-J-54: Towards Real-Time Volumetric Imaging Using the Treatment Beam and KV Beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, M; Rozario, T; Liu, A
Purpose: Existing real-time imaging uses dual (orthogonal) kV beam fluoroscopies and may result in a significant amount of extra radiation to patients, especially for prolonged treatment cases. In addition, kV projections only provide 2D information, which is insufficient for in vivo dose reconstruction. We propose real-time volumetric imaging using prior knowledge of pre-treatment 4D images and real-time 2D transit data of the treatment beam and kV beam. Methods: The pre-treatment multi-snapshot volumetric images are used to simulate 2D projections of both the treatment beam and kV beam, respectively, for each treatment field defined by the control point. During radiation delivery, the transit signals acquired by the electronic portal image device (EPID) are processed for every projection and compared with the pre-calculation by cross-correlation for phase matching and thus 3D snapshot identification or real-time volumetric imaging. The data processing involves taking logarithmic ratios of EPID signals with respect to the air scan to reduce modeling uncertainties in head scatter fluence and EPID response. Simulated 2D projections are also used to pre-calculate confidence levels in phase matching. Treatment beam projections that have a low confidence level either in pre-calculation or real-time acquisition will trigger kV beams so that complementary information can be exploited. In case both the treatment beam and kV beam return low confidence in phase matching, a predicted phase based on linear regression will be generated. Results: Simulation studies indicated treatment beams provide sufficient confidence in phase matching for most cases. At times of low confidence from treatment beams, kV imaging provides sufficient confidence in phase matching due to its complementary configuration. Conclusion: The proposed real-time volumetric imaging utilizes the treatment beam and triggers kV beams for complementary information when the treatment beam alone does not provide sufficient confidence for phase matching. This strategy minimizes the use of extra radiation to patients. This project is partially supported by a Varian MRA grant.
NASA Astrophysics Data System (ADS)
Miao, Di; Borden, Michael J.; Scott, Michael A.; Thomas, Derek C.
2018-06-01
In this paper we demonstrate the use of Bézier projection to alleviate locking phenomena in structural mechanics applications of isogeometric analysis. Interpreting the well-known B̄ projection in two different ways, we develop two formulations for locking problems in beams and nearly incompressible elastic solids. One formulation leads to a sparse symmetric system and the other leads to a sparse non-symmetric system. To demonstrate the utility of Bézier projection for both geometric and material locking phenomena we focus on transverse shear locking in Timoshenko beams and volumetric locking in nearly incompressible linear elasticity, although the approach can be applied generally to other types of locking phenomena as well. Bézier projection is a local projection technique with optimal approximation properties, which in many cases produces solutions that are comparable to global L2 projection. In the context of B̄ methods, the use of Bézier projection produces sparse stiffness matrices with only a slight increase in bandwidth when compared to standard displacement-based methods. Of particular importance is that the approach is applicable to any spline representation that can be written in Bézier form, such as NURBS, T-splines, LR-splines, etc. We discuss in detail how to integrate this approach into an existing finite element framework with minimal disruption through the use of Bézier extraction operators and a newly introduced dual basis for the Bézier projection operator. We then demonstrate the behavior of the two proposed formulations through several challenging benchmark problems.
Piecewise polynomial representations of genomic tracks.
Tarabichi, Maxime; Detours, Vincent; Konopka, Tomasz
2012-01-01
Genomic data from micro-array and sequencing projects consist of associations of measured values to chromosomal coordinates. These associations can be thought of as functions in one dimension and can thus be stored, analyzed, and interpreted as piecewise-polynomial curves. We present a general framework for building piecewise polynomial representations of genome-scale signals and illustrate some of its applications via examples. We show that piecewise constant segmentation, a typical step in copy-number analyses, can be carried out within this framework for both array and (DNA) sequencing data offering advantages over existing methods in each case. Higher-order polynomial curves can be used, for example, to detect trends and/or discontinuities in transcription levels from RNA-seq data. We give a concrete application of piecewise linear functions to diagnose and quantify alignment quality at exon borders (splice sites). Our software (source and object code) for building piecewise polynomial models is available at http://sourceforge.net/projects/locsmoc/.
Capacitive touch sensing : signal and image processing algorithms
NASA Astrophysics Data System (ADS)
Baharav, Zachi; Kakarala, Ramakrishna
2011-03-01
Capacitive touch sensors have been in use for many years, and recently gained center stage with their ubiquitous use in smart-phones. In this work we analyze the most common method of projected capacitive sensing, that of absolute capacitive sensing, together with the most common sensing pattern, that of diamond-shaped sensors. After a brief introduction to the problem, and the reasons behind its popularity, we formulate the problem as a reconstruction from projections. We derive analytic solutions for two simple cases: a circular finger on a wire grid, and a square finger on a square grid. The solutions give insight into the ambiguities of finding finger location from sensor readings. The main contribution of our paper is the discussion of interpolation algorithms, including simple linear interpolation, curve fitting (parabolic and Gaussian), filtering, general look-up tables, and combinations thereof. We conclude with observations on the limits of the present algorithmic methods, and point to possible future research.
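As an example of the parabolic curve fitting mentioned above, the finger position can be refined around the strongest electrode with a three-point parabola fit; the readings and electrode pitch below are illustrative values, not taken from the paper.

```python
def parabolic_peak(s, i, pitch=1.0):
    """Refine the finger position around the strongest electrode i by fitting a
    parabola through the three neighbouring sensor readings s[i-1], s[i], s[i+1]."""
    denom = s[i - 1] - 2.0 * s[i] + s[i + 1]
    if denom == 0.0:
        return i * pitch
    offset = 0.5 * (s[i - 1] - s[i + 1]) / denom       # sub-electrode vertex offset in [-0.5, 0.5]
    return (i + offset) * pitch

readings = [0.1, 0.4, 0.9, 0.7, 0.2]                    # illustrative capacitance deltas per electrode
print(parabolic_peak(readings, readings.index(max(readings)), pitch=5.0))  # position in mm
```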
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fike, Jeffrey A.
2013-08-01
The construction of stable reduced order models using Galerkin projection for the Euler or Navier-Stokes equations requires a suitable choice for the inner product. The standard L2 inner product is expected to produce unstable ROMs. For the non-linear Navier-Stokes equations this means the use of an energy inner product. In this report, Galerkin projection for the non-linear Navier-Stokes equations using the L2 inner product is implemented as a first step toward constructing stable ROMs for this set of physics.
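For reference, Galerkin projection with the standard L2 inner product reduces a full-order linear system as in the sketch below; a random orthonormal basis stands in for snapshot-derived (POD) modes, and the report's point is precisely that for the non-linear Navier-Stokes equations this L2 choice is expected to be unstable and an energy inner product is preferred.

```python
import numpy as np

def galerkin_rom(A, b, V):
    """Galerkin projection of a linear full-order model A x = b onto the columns
    of V (e.g. POD modes): solve the small reduced system and lift back."""
    A_r = V.T @ A @ V            # reduced operator under the standard L2 inner product
    b_r = V.T @ b
    return V @ np.linalg.solve(A_r, b_r)

rng = np.random.default_rng(4)
n, r = 500, 10
A = np.eye(n) + 0.01 * rng.normal(size=(n, n))
b = rng.normal(size=n)
V, _ = np.linalg.qr(rng.normal(size=(n, r)))     # stand-in for snapshot-derived modes
x_rom = galerkin_rom(A, b, V)                    # reduced-order approximation of the solution
```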
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caravelli, Francesco
The dynamics of purely memristive circuits has been shown to depend on a projection operator which expresses the Kirchhoff constraints, is naturally non-local in nature, and represents the interaction between memristors. In the present paper we show that for the case of planar circuits, for which a meaningful Hamming distance can be defined, the elements of such a projector can be bounded by exponentially decreasing functions of the distance. We provide a geometrical interpretation of the projector elements in terms of determinants of the Dirichlet Laplacian of the dual circuit. For the case of linearized dynamics of the circuit, for which a solution is known, this can be shown to provide a light-cone bound for the interaction between memristors. Furthermore, this result establishes a finite speed of propagation of signals across the network, despite the non-local nature of the system.
Operator pencil passing through a given operator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biggs, A., E-mail: khudian@manchester.ac.uk, E-mail: adam.biggs@student.manchester.ac.uk; Khudaverdian, H. M., E-mail: khudian@manchester.ac.uk, E-mail: adam.biggs@student.manchester.ac.uk
Let Δ be a linear differential operator acting on the space of densities of a given weight λ_0 on a manifold M. One can consider a pencil of operators Π̂(Δ) = (Δ_λ) passing through the operator Δ such that any Δ_λ is a linear differential operator acting on densities of weight λ. This pencil can be identified with a linear differential operator Δ̂ acting on the algebra of densities of all weights. The existence of an invariant scalar product in the algebra of densities implies a natural decomposition of operators, i.e., pencils of self-adjoint and anti-self-adjoint operators. We study lifting maps that are on one hand equivariant with respect to divergenceless vector fields, and, on the other hand, with values in self-adjoint or anti-self-adjoint operators. In particular, we analyze the relation between these two concepts, and apply it to the study of diff(M)-equivariant liftings. Finally, we briefly consider the case of liftings equivariant with respect to the algebra of projective transformations and describe all regular self-adjoint and anti-self-adjoint liftings. Our constructions can be considered as a generalisation of equivariant quantisation.
Y-cell receptive field and collicular projection of parasol ganglion cells in macaque monkey retina
Crook, Joanna D.; Peterson, Beth B.; Packer, Orin S.; Robinson, Farrel R.; Troy, John B.; Dacey, Dennis M.
2009-01-01
The distinctive parasol ganglion cell of the primate retina transmits a transient, spectrally non-opponent signal to the magnocellular layers of the lateral geniculate nucleus (LGN). Parasol cells show well-recognized parallels with the alpha-Y cell of other mammals, yet two key alpha-Y cell properties, a collateral projection to the superior colliculus and nonlinear spatial summation, have not been clearly established for parasol cells. Here we show by retrograde photodynamic staining that parasol cells project to the superior colliculus. Photostained dendritic trees formed characteristic spatial mosaics and afforded unequivocal identification of the parasol cells among diverse collicular-projecting cell types. Loose-patch recordings were used to demonstrate for all parasol cells a distinct Y-cell receptive field ‘signature’ marked by a non-linear mechanism that responded to contrast-reversing gratings at twice the stimulus temporal frequency (second Fourier harmonic, F2) independent of stimulus spatial phase. The F2 component showed high contrast gain and temporal sensitivity and appeared to originate from a region coextensive with that of the linear receptive field center. The F2 spatial frequency response peaked well beyond the resolution limit of the linear receptive field center, showing a Gaussian center radius of ~15 μm. Blocking inner retinal inhibition elevated the F2 response, suggesting that amacrine circuitry does not generate this non-linearity. Our data are consistent with a pooled-subunit model of the parasol-Y cell receptive field in which summation from an array of transient, partially rectifying cone bipolar cells accounts for both linear and non-linear components of the receptive field. PMID:18971470
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, J; Chao, M
2016-06-15
Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections using a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancies, the enabling phenomenology in video compression and encoding techniques, are inherent in the dynamic properties of diaphragm motion; these were integrated with the geometrical shape of the diaphragm boundary and the associated algebraic constraints, which significantly reduced the search space of viable parabolic parameters, so that the contour could be effectively optimized by a constrained linear regression approach on the subsequent projections. The algebraic constraints, stipulating the kinetic range of the motion, and the spatial constraint, preventing unphysical deviations, yielded the optimal contour of the diaphragm with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired at a fixed anterior-posterior direction and with kilovoltage CBCT projection image sets from four lung and two liver patients. Automatic tracking by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with a standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with an SD of 0.64 mm over all enrolled patients. The submillimeter accuracy demonstrates the promise of the proposed constrained linear regression approach for tracking diaphragm motion on rotational projection images. Conclusion: The new algorithm provides a potential solution to rendering diaphragm motion and ultimately improving tumor motion management for radiation therapy of cancer patients.
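A simplified stand-in for the constrained parabola fit is shown below, using box constraints on the parabola coefficients to mimic the kinematic/spatial constraints described above; the bounds and synthetic detector data are illustrative assumptions, not the clinical setup.

```python
import numpy as np
from scipy.optimize import lsq_linear

def fit_parabola_constrained(u, v, lower, upper):
    """Least-squares fit of v = a*u^2 + b*u + c with box constraints on (a, b, c),
    a stand-in for the constraints that keep the diaphragm contour physically
    plausible from one projection to the next."""
    A = np.column_stack([u**2, u, np.ones_like(u)])
    return lsq_linear(A, v, bounds=(lower, upper)).x

u = np.linspace(-40, 40, 30)                                   # detector column (pixels)
v = 0.012 * u**2 + 0.1 * u + 250 + np.random.default_rng(5).normal(0, 1.0, u.size)
params = fit_parabola_constrained(u, v, [0.0, -1.0, 240.0], [0.05, 1.0, 260.0])
print(params)    # fitted (a, b, c) within the allowed ranges
```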
Han, Sungmin; Chu, Jun-Uk; Park, Jong Woong; Youn, Inchan
2018-05-15
Proprioceptive afferent activities recorded by a multichannel microelectrode have been used to decode limb movements to provide sensory feedback signals for closed-loop control in a functional electrical stimulation (FES) system. However, analyzing the high dimensionality of neural activity is one of the major challenges in real-time applications. This paper proposes a linear feature projection method for the real-time decoding of ankle and knee joint angles. Single-unit activity was extracted as a feature vector from proprioceptive afferent signals that were recorded from the L7 dorsal root ganglion during passive movements of ankle and knee joints. The dimensionality of this feature vector was then reduced using a linear feature projection composed of projection pursuit and negentropy maximization (PP/NEM). Finally, a time-delayed Kalman filter was used to estimate the ankle and knee joint angles. The PP/NEM approach had a better decoding performance than did other feature projection methods, and all processes were completed within the real-time constraints. These results suggested that the proposed method could be a useful decoding method to provide real-time feedback signals in closed-loop FES systems.
Musuku, Adrien; Tan, Aimin; Awaiye, Kayode; Trabelsi, Fethi
2013-09-01
Linear calibration is usually performed using eight to ten calibration concentration levels in regulated LC-MS bioanalysis because a minimum of six are specified in regulatory guidelines. However, we have previously reported that two-concentration linear calibration is as reliable as or even better than using multiple concentrations. The purpose of this research is to compare two-concentration with multiple-concentration linear calibration through retrospective data analysis of multiple bioanalytical projects that were conducted in an independent regulated bioanalytical laboratory. A total of 12 bioanalytical projects were randomly selected: two validations and two studies for each of the three most commonly used types of sample extraction methods (protein precipitation, liquid-liquid extraction, solid-phase extraction). When the existing data were retrospectively linearly regressed using only the lowest and the highest concentration levels, no extra batch failure/QC rejection was observed and the differences in accuracy and precision between the original multi-concentration regression and the new two-concentration linear regression are negligible. Specifically, the differences in overall mean apparent bias (square root of mean individual bias squares) are within the ranges of -0.3% to 0.7% and 0.1-0.7% for the validations and studies, respectively. The differences in mean QC concentrations are within the ranges of -0.6% to 1.8% and -0.8% to 2.5% for the validations and studies, respectively. The differences in %CV are within the ranges of -0.7% to 0.9% and -0.3% to 0.6% for the validations and studies, respectively. The average differences in study sample concentrations are within the range of -0.8% to 2.3%. With two-concentration linear regression, an average of 13% of time and cost could have been saved for each batch together with 53% of saving in the lead-in for each project (the preparation of working standard solutions, spiking, and aliquoting). Furthermore, examples are given as how to evaluate the linearity over the entire concentration range when only two concentration levels are used for linear regression. To conclude, two-concentration linear regression is accurate and robust enough for routine use in regulated LC-MS bioanalysis and it significantly saves time and cost as well. Copyright © 2013 Elsevier B.V. All rights reserved.
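The retrospective comparison amounts to refitting the calibration line using only the lowest and highest standards and recomputing the back-calculated bias, roughly as in the hypothetical sketch below (synthetic calibration data, not values from the cited projects).

```python
import numpy as np

def calibrate(conc, resp):
    """Ordinary least-squares line through the calibration points; with only the
    lowest and highest standards this reduces to a two-point line."""
    slope, intercept = np.polyfit(conc, resp, 1)
    return slope, intercept

# Hypothetical calibration data (peak-area ratio vs concentration, ng/mL).
conc = np.array([1, 2, 5, 10, 50, 100, 500, 1000], dtype=float)
resp = 0.02 * conc + 0.001 + np.random.default_rng(6).normal(0, 0.002, conc.size)

full = calibrate(conc, resp)                  # eight-point calibration
two = calibrate(conc[[0, -1]], resp[[0, -1]]) # two-point calibration (lowest and highest)
qc = 250.0                                    # nominal QC concentration
for slope, intercept in (full, two):
    back_calc = (0.02 * qc + 0.001 - intercept) / slope
    print(f"bias = {100 * (back_calc - qc) / qc:+.2f}%")
```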
Projector primary-based optimization for superimposed projection mappings
NASA Astrophysics Data System (ADS)
Ahmed, Bilal; Lee, Jong Hun; Lee, Yong Yi; Lee, Kwan H.
2018-01-01
Recently, many researchers have focused on fully overlapping projections for three-dimensional (3-D) projection mapping systems, but reproducing a high-quality appearance with this technology remains a challenge. On top of existing color compensation-based methods, much effort is still required to faithfully reproduce an appearance that is free from artifacts, colorimetric inconsistencies, and inappropriate illuminance over the 3-D projection surface. According to our observations, this is because overlapping projections are usually treated as an additive-linear mixture of color, which our elaborated observations show is not the case. We propose a method that uses high-quality appearance data measured from original objects and regenerates the same appearance by projecting optimized images using multiple projectors, ensuring that the projection-rendered results look visually close to the real object. We prepare our target appearances by photographing original objects. Then, using calibrated projector-camera pairs, we compensate for missing geometric correspondences to make our method robust against noise. The heart of our method is a target appearance-driven adaptive sampling of the projection surface, followed by a representation of overlapping projections in terms of the projector-primary response. This yields projector-primary weights that facilitate blending, subject to constraints. These samples are used to populate a light transport-based system, which is then solved in an error-minimizing, noise-robust manner by utilizing intersample overlaps to obtain the projection images. We ensure that we make the best use of available hardware resources to recreate projection-mapped appearances that look as close to the original object as possible. Our experimental results show compelling performance in terms of visual similarity and colorimetric error.
NASA Astrophysics Data System (ADS)
Preaux, S. A.; Crump, B.; Damiani, T.
2015-12-01
The Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project of NOAA's National Geodetic Survey has been collecting airborne gravity data since 2008 using 3 TAGS gravimeters, S-137, S-160 and S-161 (Table 1). The 38 surveys contain 1697 gravimeter calibration readings taken when the aircraft is parked on the ground before and after each flight, called still readings. This dataset is uniquely suited to examine the drift characteristics of these instruments. This study is broken into 3 parts: re-computation of individual still reading values; examination of drift rates during flights and surveys; and examination of long term drift rates. Re-computation of still readings was accomplished by isolating the least-noisy 10-minute segment of gravity data while the aircraft was parked and the beam unclamped. This automated method worked in most cases, but a small number of readings required further examination. This method improved the consistency of pre- and post-flight still readings as compared to those recorded in the field. Preliminary results indicate that the drift rate for these 3 instruments during a typical survey period is both small (95% smaller than 0.35 mGal/day) and linear. The average drift rate during a survey is -0.11 mGal/day with a standard deviation of 0.12 mGal/day (Figure 1). Still readings for most surveys were well represented by a linear trend, but a small number have curvature or discontinuities. The nature and cause of this non-linearity will be investigated. Early results show a long term linear drift rate for these 3 gravimeters between 0.01 and 0.04 mGal/day. There also appears to be significant non-linear variability. Comparing the 1.5-2 year time series of still readings from S-160 and S-161 with the 7.5 year time series for S-137 indicates that data from more than two years are needed to accurately characterize the long-term behavior. Instrumentation and processing causes for this non-linearity will be explored.

Table 1. Number of surveys, still readings, and duration of use for each gravimeter:
Meter    # of surveys    # still readings    Duration of use
S-137    27              1205                7.5 years
S-160    4               288                 1.5 years
S-161    7               204                 2 years
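A minimal sketch of the drift-rate step, assuming the still readings have already been reduced to single values per reading; the times and gravity values below are invented and are simply fitted with a least-squares line whose slope is the drift rate in mGal/day.

```python
import numpy as np

# Hypothetical still readings (mGal) taken before/after flights over a survey;
# times are in days since the first reading. Values are illustrative only.
t_days = np.array([0.0, 0.3, 1.0, 1.4, 2.0, 2.3, 3.1, 3.5])
still = np.array([4980.12, 4980.10, 4980.02, 4980.00, 4979.93, 4979.90, 4979.81, 4979.78])

# Least-squares linear trend: the slope is the drift rate in mGal/day.
slope, intercept = np.polyfit(t_days, still, 1)
residuals = still - (slope * t_days + intercept)

print(f"drift rate: {slope:+.3f} mGal/day")
print(f"RMS residual about the linear trend: {residuals.std(ddof=2):.3f} mGal")
```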
Wu, Yabei; Lu, Huanzhang; Zhao, Fei; Zhang, Zhiyong
2016-01-01
Shape serves as an important additional feature for space target classification, complementary to the features already available. Since different shapes lead to different projection functions, the projection property can be regarded as one kind of shape feature. In this work, the problem of estimating the projection function from the infrared signature of the object is addressed. We show that the projection function of any rotationally symmetric object can be approximately represented as a linear combination of some base functions. Based on this fact, the signal model of the emissivity-area product sequence is constructed, which is a particular mathematical function of the linear coefficients and micro-motion parameters. Then, the least square estimator is proposed to estimate the projection function and micro-motion parameters jointly. Experiments validate the effectiveness of the proposed method. PMID:27763500
Effective Perron-Frobenius eigenvalue for a correlated random map
NASA Astrophysics Data System (ADS)
Pool, Roman R.; Cáceres, Manuel O.
2010-09-01
We investigate the evolution of random positive linear maps with various types of disorder by analytic perturbation and direct simulation. Our theoretical result indicates that the statistics of a random linear map can be successfully described for long times by the mean-value vector state. The growth rate can be characterized by an effective Perron-Frobenius eigenvalue that strongly depends on the type of correlation between the elements of the projection matrix. We apply this approach to an age-structured population dynamics model. We show that the asymptotic mean-value vector state characterizes the population growth rate when the age-structured model has random vital parameters. In this case our approach reveals the nontrivial dependence of the effective growth rate on cross correlations. The problem was reduced to the calculation of the smallest positive root of a secular polynomial, which can be obtained by perturbations in terms of a Green’s function diagrammatic technique built with noncommutative cumulants for arbitrary n-point correlations.
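For orientation only, a small Monte Carlo check of the baseline fact the abstract builds on: when the matrix elements are uncorrelated in time, the mean-value vector state grows at the Perron-Frobenius eigenvalue of the mean matrix. The Leslie-type matrix and vital-rate ranges below are invented; the paper's actual subject, how cross correlations modify the effective growth rate, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_leslie():
    # 2-age-class Leslie matrix with independently fluctuating vital rates (illustrative values)
    f1, f2 = rng.uniform(0.4, 0.6), rng.uniform(1.4, 1.6)   # fecundities
    s = rng.uniform(0.7, 0.9)                                # juvenile survival
    return np.array([[f1, f2], [s, 0.0]])

# Ensemble-averaged (mean-value) population vector after `steps` applications of the map.
steps, realizations = 30, 5000
totals = np.empty(realizations)
for r in range(realizations):
    n = np.array([1.0, 1.0])
    for _ in range(steps):
        n = random_leslie() @ n
    totals[r] = n.sum()
mean_growth = np.log(totals.mean()) / steps

# Perron-Frobenius eigenvalue of the mean matrix; for temporally uncorrelated matrices
# the mean-value state grows at exactly this rate.
mean_A = np.mean([random_leslie() for _ in range(50000)], axis=0)
lam = np.max(np.linalg.eigvals(mean_A).real)
print(f"growth of mean-value state: {mean_growth:.4f}  vs  log(lambda_PF): {np.log(lam):.4f}")
```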
IGA-ADS: Isogeometric analysis FEM using ADS solver
NASA Astrophysics Data System (ADS)
Łoś, Marcin M.; Woźniak, Maciej; Paszyński, Maciej; Lenharth, Andrew; Hassaan, Muhamm Amber; Pingali, Keshav
2017-08-01
In this paper we present a fast explicit solver for the solution of non-stationary problems using L2 projections with the isogeometric finite element method. The solver has been implemented within the GALOIS framework. It enables parallel multi-core simulations of different time-dependent problems, in 1D, 2D, or 3D. We have prepared the solver framework in a way that enables direct implementation of the selected PDE and corresponding boundary conditions. In this paper we describe the installation, the implementation of three exemplary PDEs, and the execution of the simulations on multi-core Linux cluster nodes. We consider three case studies, including heat transfer, linear elasticity, as well as non-linear flow in heterogeneous media. The presented package generates output suitable for interfacing with Gnuplot and ParaView visualization software. The exemplary simulations show near-perfect scalability on a Gilbert shared-memory node with four Intel® Xeon® CPU E7-4860 processors, each possessing 10 physical cores (for a total of 40 cores).
Explicit methods in extended phase space for inseparable Hamiltonian problems
NASA Astrophysics Data System (ADS)
Pihajoki, Pauli
2015-03-01
We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to a general purpose differential equation solver LSODE, and the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
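A minimal sketch of the extended-phase-space leapfrog idea, assuming a toy one-degree-of-freedom inseparable Hamiltonian; the coordinate-mixing maps and the projection study from the paper are not reproduced, and the averaging projection used at the end is just one simple choice.

```python
import numpy as np

# Toy inseparable Hamiltonian (illustrative choice, not from the paper):
# H(q, p) = 0.5 * p**2 * (1 + q**2) + 0.5 * q**2
H = lambda q, p: 0.5 * p**2 * (1.0 + q**2) + 0.5 * q**2
dHdq = lambda q, p: q * p**2 + q
dHdp = lambda q, p: p * (1.0 + q**2)

def extended_leapfrog_step(q, p, x, y, h):
    """One leapfrog step for the extended Hamiltonian H(q, y) + H(x, p).

    Each sub-flow is exactly solvable: under H(q, y) the pair (q, y) stays fixed
    while (x, p) are advanced; under H(x, p) the pair (x, p) stays fixed while
    (q, y) are advanced.
    """
    p -= 0.5 * h * dHdq(q, y)   # half step of the H(q, y) flow
    x += 0.5 * h * dHdp(q, y)
    q += h * dHdp(x, p)         # full step of the H(x, p) flow
    y -= h * dHdq(x, p)
    p -= 0.5 * h * dHdq(q, y)   # half step of the H(q, y) flow
    x += 0.5 * h * dHdp(q, y)
    return q, p, x, y

q = x = 1.0
p = y = 0.0
h, H0 = 0.01, H(1.0, 0.0)
for _ in range(10_000):
    q, p, x, y = extended_leapfrog_step(q, p, x, y, h)

# project the extended state back to the original dimension (simple averaging of the copies)
q_out, p_out = 0.5 * (q + x), 0.5 * (p + y)
print(f"relative energy error after projection: {(H(q_out, p_out) - H0) / H0:.2e}")
```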
Neuromuscular Control of Rapid Linear Accelerations in Fish
2016-06-22
Final Report: Neuromuscular Control of Rapid Linear Accelerations in Fish (Tufts University; reporting period ending 30-Apr-2015; approved for public release, distribution unlimited). In this project, we measured muscle activity, body movements, and flow patterns during linear accelerations in fish.
NASA Astrophysics Data System (ADS)
Kumer, J. B.; Rairden, R. L.; Polonsky, I. N.; O'Brien, D. M.
2014-12-01
The Tropospheric Infrared Mapping Spectrometer (TIMS) unit rebuilt to operate in a narrow spectral region, approximately 1603 to 1615 nm, of the weak CO2 band as described by Kumer et al. (2013, Proc. SPIE 8867, doi:10.1117/12.2022668) was used to conduct the demonstration. An integrating sphere (IS), linear polarizers and a quarter wave plate were used to confirm that the instrument's spectral responses to unpolarized light, to 45° linearly polarized light and to circularly polarized light are identical. In all these cases the intensity components Ip = Is, where Ip is the component parallel to the object space projected slit and Is is perpendicular to the slit. In the circularly polarized case Ip = Is in the time averaged sense. The polarizer and IS were used to characterize the ratio Rθ of the instrument response to linearly polarized light at the angle θ relative to parallel from the slit, for increments of θ from 0 to 90°, to that of the unpolarized case. Spectra of diffusely reflected sunlight passed through the polarizer in increments of θ, and divided by the respective Rθ, showed identical results, within the noise limit, for the solar spectrum multiplied by the atmospheric transmission and convolved by the Instrument Line Shape (ILS). These measurements demonstrate that unknown polarization in the diffusely reflected sunlight on this small spectral range affects only the slow change across the narrow band in spectral response relative to that of unpolarized light and NOT the finely structured / high contrast spectral structure of the CO2 atmospheric absorption that is used to retrieve the atmospheric content of CO2. The latter is one of the geoCARB mission objectives (Kumer et al., 2013). The situation is similar for the other three narrow geoCARB bands: O2 A band 757.9 to 768.6 nm; strong CO2 band 2045.0 to 2085.0 nm; CH4 and CO region 2300.6 to 2345.6 nm. Polonsky et al. have repeated the mission simulation study doi:10.5194/amt-7-959-2014 assuming no use of a geoCARB depolarizer or polarizer. Enabled by measurement of the geoCARB grating efficiencies, the simulated intensities Ism include the slow polarization-induced spectral change across the band. These Ism are input to the retrieval software that was used in the original study. There is no significant change to the very positive previous results for the mission objective of gas column retrieval.
Development of a Nonlinear Probability of Collision Tool for the Earth Observing System
NASA Technical Reports Server (NTRS)
McKinley, David P.
2006-01-01
The Earth Observing System (EOS) spacecraft Terra, Aqua, and Aura fly in constellation with several other spacecraft in 705-kilometer mean altitude sun-synchronous orbits. All three spacecraft are operated by the Earth Science Mission Operations (ESMO) Project at Goddard Space Flight Center (GSFC). In 2004, the ESMO project began assessing the probability of collision of the EOS spacecraft with other space objects. In addition to conjunctions with high relative velocities, the collision assessment method for the EOS spacecraft must address conjunctions with low relative velocities during potential collisions between constellation members. Probability of Collision algorithms that are based on assumptions of high relative velocities and linear relative trajectories are not suitable for these situations; therefore an algorithm for handling the nonlinear relative trajectories was developed. This paper describes this algorithm and presents results from its validation for operational use. The probability of collision is typically calculated by integrating a Gaussian probability distribution over the volume swept out by a sphere representing the size of the space objects involved in the conjunction. This sphere is defined as the Hard Body Radius. With the assumption of linear relative trajectories, this volume is a cylinder, which translates into simple limits of integration for the probability calculation. For the case of nonlinear relative trajectories, the volume becomes a complex geometry. However, with an appropriate choice of coordinate systems, the new algorithm breaks down the complex geometry into a series of simple cylinders that have simple limits of integration. This nonlinear algorithm will be discussed in detail in the paper. The nonlinear Probability of Collision algorithm was first verified by showing that, when used in high relative velocity cases, it yields similar answers to existing high relative velocity linear relative trajectory algorithms. The comparison with the existing high velocity/linear theory will also be used to determine at what relative velocity the analysis should use the new nonlinear theory in place of the existing linear theory. The nonlinear algorithm was also compared to a known exact solution for the probability of collision between two objects when the relative motion is strictly circular and the error covariance is spherically symmetric. Figure I shows preliminary results from this comparison by plotting the probabilities calculated from the new algorithm and those from the exact solution versus the Hard Body Radius to Covariance ratio. These results show about 5% error when the Hard Body Radius is equal to one half the spherical covariance magnitude. The algorithm was then combined with a high fidelity orbit state and error covariance propagator into a useful tool for analyzing low relative velocity nonlinear relative trajectories. The high fidelity propagator is capable of using atmospheric drag, central body gravitational, solar radiation, and third body forces to provide accurate prediction of the relative trajectories and covariance evolution. The covariance propagator also includes a process noise model to ensure realistic evolutions of the error covariance. This paper will describe the integration of the nonlinear probability algorithm and the propagators into a useful collision assessment tool. 
Finally, a hypothetical case study involving a low relative velocity conjunction between members of the Earth Observing System constellation will be presented.
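The nonlinear algorithm itself is not spelled out in the abstract; as a baseline, the sketch below shows the standard linear-relative-trajectory calculation it generalizes, integrating the combined position-error Gaussian over the hard-body circle in the 2-D encounter plane. The miss distance, covariance and hard-body radius are illustrative numbers only.

```python
import numpy as np
from scipy import integrate

def collision_probability_2d(miss, cov, hbr):
    """Probability that the relative position falls inside the hard-body circle.

    miss : 2-vector, projected miss distance in the encounter plane [m]
    cov  : 2x2 combined position covariance projected onto the encounter plane [m^2]
    hbr  : hard-body radius [m]
    """
    cov_inv = np.linalg.inv(cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))

    def density(y, x):
        d = np.array([x, y]) - miss
        return norm * np.exp(-0.5 * d @ cov_inv @ d)

    # integrate the Gaussian over the disk of radius hbr centred at the origin
    prob, _ = integrate.dblquad(density, -hbr, hbr,
                                lambda x: -np.sqrt(hbr**2 - x**2),
                                lambda x: np.sqrt(hbr**2 - x**2))
    return prob

# illustrative numbers only
p = collision_probability_2d(miss=np.array([120.0, 40.0]),
                             cov=np.array([[250.0**2, 0.0], [0.0, 80.0**2]]),
                             hbr=20.0)
print(f"P(collision) = {p:.3e}")
```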
Prediction of Cancer Incidence and Mortality in Korea, 2018
Jung, Kyu-Won; Won, Young-Joo; Kong, Hyun-Joo; Lee, Eun Sook
2018-01-01
Purpose: This study aimed to report on cancer incidence and mortality for the year 2018 to estimate Korea’s current cancer burden. Materials and Methods: Cancer incidence data from 1999 to 2015 were obtained from the Korea National Cancer Incidence Database, and cancer mortality data from 1993 to 2016 were acquired from Statistics Korea. Cancer incidence and mortality were projected by fitting a linear regression model to observed age-specific cancer rates against observed years, then multiplying the projected age-specific rates by the age-specific population. The Joinpoint regression model was used to determine the year in which the linear trend changed significantly, and only the data from the most recent trend segment were used. Results: A total of 204,909 new cancer cases and 82,155 cancer deaths are expected to occur in Korea in 2018. The most common cancer sites were lung, followed by stomach, colorectal, breast and liver. These five cancers represent half of the overall burden of cancer in Korea. For mortality, the most common sites were lung cancer, followed by liver, colorectal, stomach and pancreas. Conclusion: The incidence rates of all cancers in Korea are estimated to decrease gradually, mainly due to the decrease in thyroid cancer. These up-to-date estimates of the cancer burden in Korea could be an important resource for planning and evaluation of cancer-control programs. PMID:29566480
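A minimal sketch of the projection step, assuming a single age group: fit a linear trend to the observed age-specific rates from the latest joinpoint segment, extrapolate to 2018, and multiply by the age-specific population. All rates and population figures below are invented.

```python
import numpy as np

# Hypothetical age-specific incidence rates (per 100,000) for one age group over
# observed years, restricted to the latest linear (joinpoint) segment.
years = np.arange(2009, 2016)
rates = np.array([52.1, 53.0, 54.2, 54.9, 56.1, 57.0, 57.8])   # illustrative only

# Fit a linear trend to the observed rates and extrapolate to the target year.
slope, intercept = np.polyfit(years, rates, 1)
rate_2018 = slope * 2018 + intercept

# Multiply the projected age-specific rate by the projected population of that age group.
population_2018 = 3_400_000                                     # illustrative population
expected_cases = rate_2018 / 100_000 * population_2018
print(f"projected 2018 rate: {rate_2018:.1f} per 100,000 -> {expected_cases:.0f} cases")
```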
GPU-accelerated iterative reconstruction for limited-data tomography in CBCT systems.
de Molina, Claudia; Serrano, Estefania; Garcia-Blas, Javier; Carretero, Jesus; Desco, Manuel; Abella, Monica
2018-05-15
Standard cone-beam computed tomography (CBCT) involves the acquisition of at least 360 projections rotating through 360 degrees. Nevertheless, there are cases in which only a few projections can be taken in a limited angular span, such as during surgery, where rotation of the source-detector pair is limited to less than 180 degrees. Reconstruction of limited data with the conventional method proposed by Feldkamp, Davis and Kress (FDK) results in severe artifacts. Iterative methods may compensate for the lack of data by including additional prior information, although they imply a high computational burden and memory consumption. We present an accelerated implementation of an iterative method for CBCT following the Split Bregman formulation, which reduces computational time through GPU-accelerated kernels. The implementation enables the reconstruction of large volumes (>1024³ pixels) using partitioning strategies in forward- and back-projection operations. We evaluated the algorithm on small-animal data for different scenarios with different numbers of projections, angular span, and projection size. Reconstruction time varied linearly with the number of projections and quadratically with projection size but remained almost unchanged with angular span. Forward- and back-projection operations represent 60% of the total computational burden. Efficient implementation using parallel processing and large-memory management strategies together with GPU kernels enables the use of advanced reconstruction approaches which are needed in limited-data scenarios. Our GPU implementation showed a significant time reduction (up to 48×) compared to a CPU-only implementation, reducing the total reconstruction time from several hours to a few minutes.
Concentrating Solar Power Projects | Concentrating Solar Power | NREL
Listing of concentrating solar power (CSP) projects that are under construction or under development. CSP technologies include parabolic trough, linear Fresnel reflector, power tower, and dish/engine systems; the listing can be browsed by technology and project status.
Locality of interactions for planar memristive circuits
Caravelli, Francesco
2017-11-08
The dynamics of purely memristive circuits has been shown to depend on a projection operator which expresses the Kirchhoff constraints, is naturally non-local in nature, and does represent the interaction between memristors. In the present paper we show that for the case of planar circuits, for which a meaningful Hamming distance can be defined, the elements of such projector can be bounded by exponentially decreasing functions of the distance. We provide a geometrical interpretation of the projector elements in terms of determinants of Dirichlet Laplacian of the dual circuit. For the case of linearized dynamics of the circuit for which a solution is known, this can be shown to provide a light cone bound for the interaction between memristors. Furthermore, this result establishes a finite speed of propagation of signals across the network, despite the non-local nature of the system.
Concentrating Solar Power Projects - Dacheng Dunhuang 50MW Molten Salt
Status Date: September 29, 2016. Project Overview. Project Name: Dacheng Dunhuang 50MW Molten Salt; Technology: Linear Fresnel reflector; Turbine Capacity: Net 50.0 MW, Gross 50.0 MW; Status: Under development; Country: China; City: Dunhuang; Region: Gansu Province.
Mathematical modelling in engineering: an alternative way to teach Linear Algebra
NASA Astrophysics Data System (ADS)
Domínguez-García, S.; García-Planas, M. I.; Taberna, J.
2016-10-01
Technological advances require that basic science courses for engineering, including Linear Algebra, emphasize the development of mathematical strengths associated with modelling and interpretation of results, which are not limited only to calculus abilities. Based on this consideration, we have proposed a project-based learning approach, giving a dynamic classroom experience in which students model real-world problems and in turn gain a deeper knowledge of the Linear Algebra subject. Considering that most students are digital natives, we use the e-portfolio as a tool of communication between students and teachers, besides being a good place to make the work visible. In this article, we present an overview of the design and implementation of project-based learning for a Linear Algebra course taught during 2014-2015 at the 'ETSEIB' of Universitat Politècnica de Catalunya (UPC).
Treatment planning systems dosimetry auditing project in Portugal.
Lopes, M C; Cavaco, A; Jacob, K; Madureira, L; Germano, S; Faustino, S; Lencart, J; Trindade, M; Vale, J; Batel, V; Sousa, M; Bernardo, A; Brás, S; Macedo, S; Pimparel, D; Ponte, F; Diaz, E; Martins, A; Pinheiro, A; Marques, F; Batista, C; Silva, L; Rodrigues, M; Carita, L; Gershkevitsh, E; Izewska, J
2014-02-01
The Medical Physics Division of the Portuguese Physics Society (DFM_SPF), in collaboration with the IAEA, carried out a national auditing project in radiotherapy between September 2011 and April 2012. The objective of this audit was to ensure the optimal usage of treatment planning systems. The national results are presented in this paper. The audit methodology simulated all steps of the external beam radiotherapy workflow, from image acquisition to treatment planning and dose delivery. A thorax CIRS phantom lent by the IAEA was used in 8 planning test-cases for photon beams corresponding to 15 measuring points (33 point dose results, including individual fields in multi-field test cases and 5 sum results) in different phantom materials covering a set of typical clinical delivery techniques in 3D Conformal Radiotherapy. All 24 radiotherapy centres in Portugal participated. 50 photon beams with energies 4-18 MV have been audited using 25 linear accelerators and 32 calculation algorithms. In general a very good consistency was observed for the same type of algorithm in all centres and for each beam quality. The overall results confirmed that the national status of TPS calculations and dose delivery for 3D conformal radiotherapy is generally acceptable with no major causes for concern. This project contributed to the strengthening of the cooperation between the centres and professionals, paving the way to further national collaborations. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Yang-Kyun; Sharp, Gregory C.; Gierga, David P.
2015-06-15
Purpose: Real-time kV projection streaming capability has recently become available for Elekta XVI version 5.0. This study aims to investigate the feasibility and accuracy of real-time fiducial marker tracking during CBCT acquisition with or without simultaneous VMAT delivery using a conventional Elekta linear accelerator. Methods: A client computer was connected to an on-board kV imaging system computer, which receives and processes projection images immediately after image acquisition. In-house marker tracking software based on FFT normalized cross-correlation was developed and installed in the client computer. Three gold fiducial markers with 3 mm length were implanted in a pelvis-shaped phantom with 36 cm width. The phantom was placed on a programmable motion platform oscillating in anterior-posterior and superior-inferior directions simultaneously. The marker motion was tracked in real-time for (1) a kV-only CBCT scan with treatment beam off and (2) a kV CBCT scan during a 6-MV VMAT delivery. The exposure parameters per projection were 120 kVp and 1.6 mAs. Tracking accuracy was assessed by comparing superior-inferior positions between the programmed and tracked trajectories. Results: The projection images were successfully transferred to the client computer at a frequency of about 5 Hz. In the kV-only scan, highly accurate marker tracking was achieved over the entire range of cone-beam projection angles (detection rate / tracking error were 100.0% / 0.6±0.5 mm). In the kV-VMAT scan, MV-scatter degraded image quality, particularly for lateral projections passing through the thickest part of the phantom (kV source angle ranging 70°-110° and 250°-290°), resulting in a reduced detection rate (90.5%). If the lateral projections are excluded, tracking performance was comparable to the kV-only case (detection rate / tracking error were 100.0% / 0.8±0.5 mm). Conclusion: Our phantom study demonstrated a promising result for real-time motion tracking using a conventional Elekta linear accelerator. MV-scatter suppression is needed to improve tracking accuracy during MV delivery. This research is funded by a Motion Management Research Grant from Elekta.
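The in-house tracking software is not available, so the sketch below only illustrates the generic template-matching step with normalized cross-correlation (here via scikit-image's FFT-based match_template) on a synthetic projection containing a single marker-like blob.

```python
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(0)

# Synthetic kV projection: noisy background with a small dark "marker" blob.
proj = rng.normal(1000.0, 20.0, size=(256, 256))
yy, xx = np.mgrid[:256, :256]
marker_pos = (140, 90)
proj -= 300.0 * np.exp(-(((yy - marker_pos[0])**2 + (xx - marker_pos[1])**2) / 8.0))

# Template: the expected marker appearance (here an ideal Gaussian blob).
t = 7
ty, tx = np.mgrid[:2 * t + 1, :2 * t + 1]
template = -300.0 * np.exp(-(((ty - t)**2 + (tx - t)**2) / 8.0))

# Fast normalized cross-correlation; the peak of the correlation map gives the
# detected marker position (pad_input=True keeps image coordinates).
ncc = match_template(proj, template, pad_input=True)
det = np.unravel_index(np.argmax(ncc), ncc.shape)
print("true:", marker_pos, "detected:", det, "peak NCC:", round(float(ncc.max()), 3))
```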
ERIC Educational Resources Information Center
Dolan, Thomas G.
2003-01-01
Describes project delivery methods that are replacing the traditional Design/Bid/Build linear approach to the management, design, and construction of new facilities. These variations can enhance construction management and teamwork. (SLD)
Spectral-Spatial Shared Linear Regression for Hyperspectral Image Classification.
Haoliang Yuan; Yuan Yan Tang
2017-04-01
Classification of the pixels in a hyperspectral image (HSI) is an important task and has been popularly applied in many practical applications. Its major challenge is the high dimensionality of the data combined with the small number of labeled samples. To deal with this problem, many subspace learning (SL) methods have been developed to reduce the dimension of the pixels while preserving the important discriminant information. Motivated by the ridge linear regression (RLR) framework for SL, we propose a spectral-spatial shared linear regression method (SSSLR) for extracting the feature representation. Compared with RLR, our proposed SSSLR has the following two advantages. First, we utilize a convex set to explore the spatial structure for computing the linear projection matrix. Second, we utilize a shared structure learning model, formed by the original data space and a hidden feature space, to learn a more discriminant linear projection matrix for classification. To optimize our proposed method, an efficient iterative algorithm is proposed. Experimental results on two popular HSI data sets, i.e., Indian Pines and Salinas, demonstrate that our proposed method outperforms many SL methods.
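SSSLR itself is not reproduced here; as a point of reference, the sketch below shows the ridge linear regression (RLR) baseline it builds on, learning a linear projection from one-hot class targets on synthetic stand-in data and classifying by the largest projected score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spectral pixels": n samples, d bands, c classes (synthetic stand-in for an HSI).
n, d, c = 300, 50, 3
labels = rng.integers(0, c, n)
class_means = rng.normal(size=(c, d))
X = rng.normal(size=(n, d)) + 2.0 * class_means[labels]

# One-hot target matrix and ridge linear regression projection W = (X'X + lam*I)^-1 X'Y.
Y = np.eye(c)[labels]
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Project and classify by the largest regression output.
scores = X @ W
pred = scores.argmax(axis=1)
print(f"training accuracy of the RLR baseline: {(pred == labels).mean():.3f}")
```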
Robertson, Amy N.; Wendt, Fabian; Jonkman, Jason M.; ...
2017-10-01
This paper summarizes the findings from Phase II of the Offshore Code Comparison, Collaboration, Continued, with Correlation project. The project is run under the International Energy Agency Wind Research Task 30, and is focused on validating the tools used for modeling offshore wind systems through the comparison of simulated responses of select system designs to physical test data. Validation activities such as these lead to improvement of offshore wind modeling tools, which will enable the development of more innovative and cost-effective offshore wind designs. For Phase II of the project, numerical models of the DeepCwind floating semisubmersible wind system were validated using measurement data from a 1/50th-scale validation campaign performed at the Maritime Research Institute Netherlands offshore wave basin. Validation of the models was performed by comparing the calculated ultimate and fatigue loads for eight different wave-only and combined wind/wave test cases against the measured data, after calibration was performed using free-decay, wind-only, and wave-only tests. The results show a decent estimation of both the ultimate and fatigue loads for the simulated results, but with a fairly consistent underestimation in the tower and upwind mooring line loads that can be attributed to an underestimation of wave-excitation forces outside the linear wave-excitation region, and the presence of broadband frequency excitation in the experimental measurements from wind. Participant results showed varied agreement with the experimental measurements based on the modeling approach used. Modeling attributes that enabled better agreement included: the use of a dynamic mooring model; wave stretching, or some other hydrodynamic modeling approach that excites frequencies outside the linear wave region; nonlinear wave kinematics models; and unsteady aerodynamics models. Also, it was observed that a Morison-only hydrodynamic modeling approach could create excessive pitch excitation and resulting tower loads in some frequency bands.
Bayesian reconstruction of projection reconstruction NMR (PR-NMR).
Yoon, Ji Won
2014-11-01
Projection reconstruction nuclear magnetic resonance (PR-NMR) is a technique for generating multidimensional NMR spectra. A small number of projections from lower-dimensional NMR spectra are used to reconstruct the multidimensional NMR spectra. In our previous work, it was shown that multidimensional NMR spectra are efficiently reconstructed using a peak-by-peak based reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. We propose an extended and generalized RJMCMC algorithm that replaces a simple linear model with a linear mixed model to reconstruct closely spaced NMR spectra into true spectra. This statistical method generates samples in a Bayesian scheme. Our proposed algorithm is tested on a set of six projections derived from the three-dimensional 700 MHz HNCO spectrum of the protein HasA. Copyright © 2014 Elsevier Ltd. All rights reserved.
Regional projection of climate impact indices over the Mediterranean region
NASA Astrophysics Data System (ADS)
Casanueva, Ana; Frías, M. Dolores; Herrera, Sixto; Bedia, Joaquín; San Martín, Daniel; Gutiérrez, José Manuel; Zaninovic, Ksenija
2014-05-01
Climate Impact Indices (CIIs) are being increasingly used in different socioeconomic sectors to transfer information about climate change impacts and risks to stakeholders. CIIs are typically based on different weather variables such as temperature, wind speed, precipitation or humidity and comprise, in a single index, the relevant meteorological information for the particular impact sector (in this study wildfires and tourism). This dependence on several climate variables poses important limitations to the application of statistical downscaling techniques, since physical consistency among variables is required in most cases to obtain reliable local projections. The present study assesses the suitability of the "direct" downscaling approach, in which the downscaling method is directly applied to the CII. In particular, for illustrative purposes, we consider two popular indices used in the wildfire and tourism sectors, the Fire Weather Index (FWI) and the Physiological Equivalent Temperature (PET), respectively. As an example, two case studies are analysed over two representative Mediterranean regions of interest for the EU CLIM-RUN project: continental Spain for the FWI and Croatia for the PET. Results obtained with this "direct" downscaling approach are similar to those found from the application of the statistical downscaling to the individual meteorological drivers prior to the index calculation ("component" downscaling) thus, a wider range of statistical downscaling methods could be used. As an illustration, future changes in both indices are projected by applying two direct statistical downscaling methods, analogs and linear regression, to the ECHAM5 model. Larger differences were found between the two direct statistical downscaling approaches than between the direct and the component approaches with a single downscaling method. While these examples focus on particular indices and Mediterranean regions of interest for CLIM-RUN stakeholders, the same study could be extended to other indices and regions.
NASA Astrophysics Data System (ADS)
Kim, Juhye; Nam, Haewon; Lee, Rena
2015-07-01
In CT (computed tomography) images, metal materials such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge-preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of 6 steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and new image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added into the Shepp-Logan phantom to create metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. After these data had been acquired, the proposed algorithm was applied, and the results were compared with the original image (with metal artifact without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were done. Compared with the original image with metal artifacts and with the image corrected by using linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifact. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
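For context, a sketch of the classical linear-interpolation MAR that the proposed method is compared against, using scikit-image's radon/iradon as stand-ins for the reconstruction steps; the edge-preserving smoothing step of the proposed algorithm is not reproduced, and the phantom, thresholds and insert values are illustrative.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# Phantom with a small, highly attenuating "metal" insert (illustrative).
img = resize(shepp_logan_phantom(), (128, 128))
img[60:66, 60:66] = 5.0

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(img, theta=theta)

# 1) reconstruct, 2) segment metal by thresholding, 3) forward-project the metal mask
recon0 = iradon(sino, theta=theta, filter_name='ramp')
metal_mask = recon0 > 2.0
metal_trace = radon(metal_mask.astype(float), theta=theta) > 0.5

# 4) replace metal-affected sinogram bins by linear interpolation along the detector axis
sino_mar = sino.copy()
rows = np.arange(sino.shape[0])
for j in range(sino.shape[1]):
    bad = metal_trace[:, j]
    if bad.any():
        sino_mar[bad, j] = np.interp(rows[bad], rows[~bad], sino[~bad, j])

# 5) reconstruct the corrected sinogram (the proposed method would additionally apply
#    an edge-preserving smoothing filter before the final reconstruction)
recon_mar = iradon(sino_mar, theta=theta, filter_name='ramp')
print("max value near the insert before/after MAR:",
      float(recon0[55:71, 55:71].max()), float(recon_mar[55:71, 55:71].max()))
```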
A minimal approach to the scattering of physical massless bosons
NASA Astrophysics Data System (ADS)
Boels, Rutger H.; Luo, Hui
2018-05-01
Tree and loop level scattering amplitudes which involve physical massless bosons are derived directly from physical constraints such as locality, symmetry and unitarity, bypassing path integral constructions. Amplitudes can be projected onto a minimal basis of kinematic factors through linear algebra, by employing four dimensional spinor helicity methods or at its most general using projection techniques. The linear algebra analysis is closely related to amplitude relations, especially the Bern-Carrasco-Johansson relations for gluon amplitudes and the Kawai-Lewellen-Tye relations between gluons and graviton amplitudes. Projection techniques are known to reduce the computation of loop amplitudes with spinning particles to scalar integrals. Unitarity, locality and integration-by-parts identities can then be used to fix complete tree and loop amplitudes efficiently. The loop amplitudes follow algorithmically from the trees. A number of proof-of-concept examples are presented. These include the planar four point two-loop amplitude in pure Yang-Mills theory as well as a range of one loop amplitudes with internal and external scalars, gluons and gravitons. Several interesting features of the results are highlighted, such as the vanishing of certain basis coefficients for gluon and graviton amplitudes. Effective field theories are naturally and efficiently included into the framework. Dimensional regularisation is employed throughout; different regularisation schemes are worked out explicitly. The presented methods appear most powerful in non-supersymmetric theories in cases with relatively few legs, but with potentially many loops. For instance, in the introduced approach iterated unitarity cuts of four point amplitudes for non-supersymmetric gauge and gravity theories can be computed by matrix multiplication, generalising the so-called rung-rule of maximally supersymmetric theories. The philosophy of the approach to kinematics also leads to a technique to control colour quantum numbers of scattering amplitudes with matter, especially efficient in the adjoint and fundamental representations.
Scanning linear estimation: improvements over region of interest (ROI) methods
NASA Astrophysics Data System (ADS)
Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.
2013-03-01
In tomographic medical imaging, a signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affect standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal’s size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.
Kolstad, Erik W; Johansson, Kjell Arne
2011-03-01
Climate change is expected to have large impacts on health at low latitudes where droughts and malnutrition, diarrhea, and malaria are projected to increase. The main objective of this study was to indicate a method to assess a range of plausible health impacts of climate change while handling uncertainties in an unambiguous manner. We illustrate this method by quantifying the impacts of projected regional warming on diarrhea in this century. We combined a range of linear regression coefficients to compute projections of future climate change-induced increases in diarrhea using the results from five empirical studies and a 19-member climate model ensemble for which future greenhouse gas emissions were prescribed. Six geographical regions were analyzed. The model ensemble projected temperature increases of up to 4°C over land in the tropics and subtropics by the end of this century. The associated mean projected increases of relative risk of diarrhea in the six study regions were 8-11% (with SDs of 3-5%) by 2010-2039 and 22-29% (SDs of 9-12%) by 2070-2099. Even our most conservative estimates indicate substantial impacts from climate change on the incidence of diarrhea. Nevertheless, our main conclusion is that large uncertainties are associated with future projections of diarrhea and climate change. We believe that these uncertainties can be attributed primarily to the sparsity of empirical climate-health data. Our results therefore highlight the need for empirical data in the cross section between climate and human health.
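A minimal sketch of how regression coefficients and an ensemble of warming projections can be combined into a spread of relative-risk increases; the coefficient range, warming distribution and region are invented, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: per-degree increase in diarrhea relative risk from empirical
# studies and end-of-century warming from a climate-model ensemble for one region.
# Numbers are illustrative only.
beta_per_degC = rng.uniform(0.03, 0.08, size=5)      # 5 exposure-response estimates
warming_degC = rng.normal(3.5, 0.6, size=19)          # 19-member model ensemble

# Combine every regression coefficient with every model's warming projection.
rr_increase = 100 * np.outer(beta_per_degC, warming_degC)   # percent increase in relative risk
print(f"mean projected RR increase: {rr_increase.mean():.0f}%  (SD {rr_increase.std():.0f}%)")
```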
Comparison of linear synchronous and induction motors
DOT National Transportation Integrated Search
2004-06-01
A propulsion trade study was conducted as part of the Colorado Maglev Project of FTA's Urban Maglev Technology Development Program to identify and evaluate prospective linear motor designs that could potentially meet the system performance requiremen...
Concentrating Solar Power Projects - Liddell Power Station | Concentrating
Technology: Linear Fresnel reflector; Turbine Capacity: Net 3.0 MW, Gross 3.0 MW; Status: Currently Non-Operational; Start Year: 2012. Background Technology: Linear Fresnel reflector.
[Total laryngectomy using a linear stapler for laryngeal cancer].
Tomifuji, Masayuki; Araki, Koji; Kamide, Daisuke; Tanaka, Shingo; Tanaka, Yuya; Fukumori, Takayuki; Shiotani, Akihiro
2014-06-01
Total laryngectomy is a well-established method for the treatment of laryngeal cancer. In some cases, such as elderly patients or patients with severe complications, a shorter surgical time is preferred. Total laryngectomy using a linear stapler is reportedly advantageous for shortening the surgical time and for lowering the rate of pharyngeal fistula formation. We applied this surgical technique in three laryngeal cancer cases. After skeletonization of the larynx, the linear stapler is inserted between the larynx and the pharyngeal mucosa. Excision of the larynx and suturing of the pharyngeal mucosa are performed simultaneously. Although the number of cases is too small for statistical analysis, the surgical time was shortened by about 30 minutes compared to laryngectomy with manual suturing. Total laryngectomy with a linear stapler cannot be applied in all cases of advanced laryngeal cancer. However, if the tumor is confined to the endolarynx, it is a useful tool for cases that require a shorter surgical time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venezian, G.; Bretschneider, C.L.
1980-08-01
This volume details a new methodology to analyze statistically the forces experienced by a structure at sea. Conventionally a wave climate is defined using a spectral function. The wave climate is described using a joint distribution of wave heights and periods (wave lengths), characterizing actual sea conditions through some measured or estimated parameters like the significant wave height, maximum spectral density, etc. Random wave heights and periods satisfying the joint distribution are then generated. Wave kinematics are obtained using linear or non-linear theory. In the case of currents, the linear wave-current interaction theory of Venezian (1979) is used. The peak force experienced by the structure for each individual wave is identified. Finally, the probability of exceedance of any given peak force on the structure may be obtained. A three-parameter Longuet-Higgins type joint distribution of wave heights and periods is discussed in detail. This joint distribution was used to model sea conditions at four potential OTEC locations. A uniform cylindrical pipe of 3 m diameter, extending to a depth of 550 m, was used as a sample structure. Wave-current interactions were included and forces computed using Morison's equation. The drag and virtual mass coefficients were interpolated from published data. A Fortran program CUFOR was written to execute the above procedure. Tabulated and graphic results of peak forces experienced by the structure, for each location, are presented. A listing of CUFOR is included. Considerable flexibility of structural definition has been incorporated. The program can easily be modified in the case of an alternative joint distribution or for inclusion of effects like non-linearity of waves, transverse forces and diffraction.
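A simplified sketch of the per-wave peak-force step, assuming deep-water linear (Airy) kinematics at the surface, no current, and independent height and period draws in place of the three-parameter Longuet-Higgins joint distribution; the cylinder diameter and force coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 1025.0                     # sea water density [kg/m^3]
D, Cd, Cm = 3.0, 1.0, 2.0        # cylinder diameter [m], drag and inertia coefficients

def peak_morison_force(H, T, n=200):
    """Peak inline Morison force per unit length at the surface for one linear wave."""
    w = 2.0 * np.pi / T
    t = np.linspace(0.0, T, n)
    u = (np.pi * H / T) * np.cos(w * t)                # deep-water horizontal velocity at z = 0
    a = -(2.0 * np.pi**2 * H / T**2) * np.sin(w * t)   # horizontal acceleration
    f = 0.5 * rho * Cd * D * u * np.abs(u) + rho * Cm * (np.pi * D**2 / 4.0) * a
    return np.abs(f).max()

# Illustrative random sea state: independent H and T draws stand in for the
# joint height-period distribution used in the report.
H = rng.rayleigh(scale=2.0, size=5000)
T = rng.normal(9.0, 1.5, size=5000).clip(4.0, 16.0)
peaks = np.array([peak_morison_force(h, t) for h, t in zip(H, T)])

# Probability of exceedance of a given peak-force level
level = np.quantile(peaks, 0.99)
print(f"99th-percentile peak force: {level / 1e3:.1f} kN/m; "
      f"P(exceedance) = {(peaks > level).mean():.3f}")
```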
Concentrating Solar Power Projects - Puerto Errado 1 Thermosolar Power
Puerto Errado 1 is a linear Fresnel reflector system. Status Date: September 7, 2011. Novatec Solar España S.L. (100%); Technology: Linear Fresnel reflector; Turbine Capacity: Gross 1.4 MW; Status: Operational; Country: Spain; City: Calasparra; Region: Murcia.
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1987-01-01
Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval and in the case where it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.
The Seismic Tool-Kit (STK): an open source software for seismology and signal processing.
NASA Astrophysics Data System (ADS)
Reymond, Dominique
2016-04-01
We present an open source software project (GNU public license), named STK: Seismic ToolKit, that is dedicated mainly to seismology and signal processing. The STK project, which started in 2007, is hosted by SourceForge.net and counts more than 19,500 downloads at the time of writing. The STK project is composed of two main branches. First, a graphical interface dedicated to signal processing in the SAC format (SAC_ASCII and SAC_BIN), where the signal can be plotted, zoomed, filtered, integrated, differentiated, etc. (a large variety of IIR and FIR filters is provided). The estimation of the spectral density of the signal is performed via the Fourier transform, with visualization of the Power Spectral Density (PSD) in linear or log scale, and also the evolutive time-frequency representation (or sonagram). The 3-component signals can also be processed for estimating their polarization properties, either for a given window or for evolutive windows along the time. This polarization analysis is useful for extracting polarized noise, differentiating P waves, Rayleigh waves, Love waves, etc. Secondly, a panel of utility programs is provided for working in a terminal mode, with basic programs for computing azimuth and distance in spherical geometry, inter/auto-correlation, spectral density, time-frequency for an entire directory of signals, focal planes and main component axes, radiation pattern of P waves, polarization analysis of different waves (including noise), under/over-sampling of the signals, cubic-spline smoothing, and linear/non-linear regression analysis of data sets. A MINimum library of Linear AlGebra (MIN-LINAG) is also provided for computing the main matrix operations: QR/QL decomposition, Cholesky solution of linear systems, finding eigenvalues/eigenvectors, QR-solve/Eigen-solve of linear equation systems, etc. STK is developed in C/C++, mainly under Linux OS, and it has also been partially implemented under MS-Windows. Useful links: http://sourceforge.net/projects/seismic-toolkit/ http://sourceforge.net/p/seismic-toolkit/wiki/browse_pages/
Extraction and Propagation of an Intense Rotating Electron Beam,
1982-10-01
radiochromic foils positioned at z = 25 cm. The equal transmission density contours are ranked in linear order of increasing exposure (increasing current...flux encircled by the cathode Φ = πr_c²B_c. Linearizing the equation of motion around the equilibrium, we can find the wavelength of small radial...the beam rotation. The mask which precedes the scintillator is a linear array of dots while the projection is made up of two disjoint linear arrays
Phase space flows for non-Hamiltonian systems with constraints
NASA Astrophysics Data System (ADS)
Sergi, Alessandro
2005-09-01
In this paper, non-Hamiltonian systems with holonomic constraints are treated by a generalization of Dirac’s formalism. Non-Hamiltonian phase space flows can be described by generalized antisymmetric brackets or by general Liouville operators which cannot be derived from brackets. Both situations are treated. In the first case, a Nosé-Dirac bracket is introduced as an example. In the second one, Dirac’s recipe for projecting out constrained variables from time translation operators is generalized and then applied to non-Hamiltonian linear response. Dirac’s formalism avoids spurious terms in the response function of constrained systems. However, corrections coming from phase space measure must be considered for general perturbations.
Elzoghby, Mostafa; Li, Fu; Arafa, Ibrahim I.; Arif, Usman
2017-01-01
Information fusion from multiple sensors ensures the accuracy and robustness of a navigation system, especially in the absence of global positioning system (GPS) data which gets degraded in many cases. A way to deal with multi-mode estimation for a small fixed wing unmanned aerial vehicle (UAV) localization framework is proposed, which depends on utilizing a Luenberger observer-based linear matrix inequality (LMI) approach. The proposed estimation technique relies on the interaction between multiple measurement modes and a continuous observer. The state estimation is performed in a switching environment between multiple active sensors to exploit the available information as much as possible, especially in GPS-denied environments. Luenberger observer-based projection is implemented as a continuous observer to optimize the estimation performance. The observer gain might be chosen by solving a Lyapunov equation by means of a LMI algorithm. Convergence is achieved by utilizing the linear matrix inequality (LMI), based on Lyapunov stability which keeps the dynamic estimation error bounded by selecting the observer gain matrix (L). Simulation results are presented for a small UAV fixed wing localization problem. The results obtained using the proposed approach are compared with a single mode Extended Kalman Filter (EKF). Simulation results are presented to demonstrate the viability of the proposed strategy. PMID:28420214
Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L
2014-10-01
Three competing mathematical fitting models (a point-by-point estimation method, a linear fit method, and an isoconversion method) of chemical stability (related substance growth) when using high temperature data to predict room temperature shelf-life were employed in a detailed comparison. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Calculated data with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
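A minimal sketch of the isoconversion-plus-Arrhenius route, assuming the specification crossing time at each accelerated temperature is obtained by simple interpolation between bracketing time points; the degradant data, temperatures and specification limit are invented.

```python
import numpy as np

# Illustrative accelerated stability data: degradant level (%) vs time (days) at three
# temperatures, and a 0.5% specification limit. Values are hypothetical.
spec = 0.5
data = {
    323.15: ([0, 7, 14, 28], [0.00, 0.12, 0.26, 0.55]),   # 50 C
    333.15: ([0, 7, 14, 28], [0.00, 0.30, 0.62, 1.30]),   # 60 C
    343.15: ([0, 3, 7, 14],  [0.00, 0.35, 0.80, 1.70]),   # 70 C
}

# Isoconversion: interpolate the time at which each condition crosses the spec limit.
t_iso, inv_T = [], []
for T, (t, y) in data.items():
    t_iso.append(np.interp(spec, y, t))     # y is monotonically increasing here
    inv_T.append(1.0 / T)

# Arrhenius fit: ln(t_iso) is linear in 1/T; extrapolate to 25 C (298.15 K).
slope, intercept = np.polyfit(inv_T, np.log(t_iso), 1)
shelf_life_days = np.exp(intercept + slope / 298.15)
print(f"projected time to reach {spec}% degradant at 25 C: {shelf_life_days:.0f} days")
```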
Use of the Hotelling observer to optimize image reconstruction in digital breast tomosynthesis
Sánchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan
2015-01-01
We propose an implementation of the Hotelling observer that can be applied to the optimization of linear image reconstruction algorithms in digital breast tomosynthesis. The method is based on considering information within a specific region of interest, and it is applied to the optimization of algorithms for detectability of microcalcifications. Several linear algorithms are considered: simple back-projection, filtered back-projection, back-projection filtration, and Λ-tomography. The optimized algorithms are then evaluated through the reconstruction of phantom data. The method appears robust across algorithms and parameters and leads to the generation of algorithm implementations which subjectively appear optimized for the task of interest. PMID:26702408
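The reconstruction-algorithm optimization itself is not reproduced; the sketch below only shows the generic region-of-interest Hotelling observer computation (template and detectability SNR from the mean signal difference and the average class covariance) on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ROI data vectors: n realizations each of signal-absent and signal-present cases.
n, d = 2000, 64
band_scale = np.linspace(0.5, 1.5, d)                           # correlated-noise stand-in
signal = np.exp(-0.5 * ((np.arange(d) - d / 2) / 3.0) ** 2)     # microcalcification-like bump
absent = rng.normal(0.0, 1.0, size=(n, d)) * band_scale
present = rng.normal(0.0, 1.0, size=(n, d)) * band_scale + 0.4 * signal

# Hotelling observer: template w = K^-1 * delta_s and detectability SNR^2 = delta_s' K^-1 delta_s,
# where K is the average class covariance and delta_s the mean signal difference.
delta_s = present.mean(axis=0) - absent.mean(axis=0)
K = 0.5 * (np.cov(absent, rowvar=False) + np.cov(present, rowvar=False))
w = np.linalg.solve(K, delta_s)
snr2 = delta_s @ w
print(f"Hotelling SNR: {np.sqrt(snr2):.2f}")
```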
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed based on which a feasible point method to continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
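The paper's new projection matrix is not given in the abstract; for orientation, the sketch below shows the classical tangent-space projection it generalizes, which requires exactly the full-rank (regularity) assumption the new construction is designed to avoid. The objective, constraint and step size are illustrative, and the forward-Euler discretization introduces a small constraint drift of the kind the paper also discusses.

```python
import numpy as np

# Minimize f(x) = ||x - c||^2 subject to the equality constraint g(x) = x0^2 + x1^2 - 1 = 0.
c = np.array([2.0, 1.0])
f_grad = lambda x: 2.0 * (x - c)
g = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0])
g_jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]]])

def tangent_projection(J):
    """Classical projector P = I - J'(JJ')^-1 J; requires J to have full row rank."""
    return np.eye(J.shape[1]) - J.T @ np.linalg.solve(J @ J.T, J)

# Continuous-time flow dx/dt = -P(x) grad f(x), integrated with forward Euler.
x = np.array([0.0, 1.0])          # feasible starting point
dt = 1e-3
for _ in range(20_000):
    P = tangent_projection(g_jac(x))
    x = x - dt * P @ f_grad(x)

print("solution:", np.round(x, 4), " constraint residual:", float(g(x)[0]))
```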
NASA Astrophysics Data System (ADS)
Degenfeld-Schonburg, Peter; Navarrete-Benlloch, Carlos; Hartmann, Michael J.
2015-05-01
Nonlinear quantum optical systems are of paramount relevance for modern quantum technologies, as well as for the study of dissipative phase transitions. Their nonlinear nature makes their theoretical study very challenging and hence they have always served as great motivation to develop new techniques for the analysis of open quantum systems. We apply the recently developed self-consistent projection operator theory to the degenerate optical parametric oscillator to exemplify its general applicability to quantum optical systems. We show that this theory provides an efficient method to calculate the full quantum state of each mode with a high degree of accuracy, even at the critical point. It is equally successful in describing both the stationary limit and the dynamics, including regions of the parameter space where the numerical integration of the full problem is significantly less efficient. We further develop a Gaussian approach consistent with our theory, which yields sensibly better results than the previous Gaussian methods developed for this system, most notably standard linearization techniques.
NASA Astrophysics Data System (ADS)
Beli, D.; Mencik, J.-M.; Silva, P. B.; Arruda, J. R. F.
2018-05-01
The wave finite element method has proved to be an efficient and accurate numerical tool to perform the free and forced vibration analysis of linear reciprocal periodic structures, i.e. those conforming to symmetrical wave fields. In this paper, its use is extended to the analysis of rotating periodic structures, which, due to the gyroscopic effect, exhibit asymmetric wave propagation. A projection-based strategy which uses reduced symplectic wave basis is employed, which provides a well-conditioned eigenproblem for computing waves in rotating periodic structures. The proposed formulation is applied to the free and forced response analysis of homogeneous, multi-layered and phononic ring structures. In all test cases, the following features are highlighted: well-conditioned dispersion diagrams, good accuracy, and low computational time. The proposed strategy is particularly convenient in the simulation of rotating structures when parametric analysis for several rotational speeds is usually required, e.g. for calculating Campbell diagrams. This provides an efficient and flexible framework for the analysis of rotordynamic problems.
Determining Dissolved Oxygen Levels
ERIC Educational Resources Information Center
Boucher, Randy
2010-01-01
This project was used in a mathematical modeling and introduction to differential equations course for first-year college students. The students worked in two-person groups and were given three weeks to complete the project. Students were given this project three weeks into the course, after basic first order linear differential equation and…
SparseBeads data: benchmarking sparsity-regularized computed tomography
NASA Astrophysics Data System (ADS)
Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.
2017-12-01
Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity but does not cover CT; empirical results, however, suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, numbers of projections, and noise levels to allow the systematic assessment of parameters affecting the performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of numbers of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level, as an aid in planning dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
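To make the central quantities concrete, the sketch below computes a simple gradient-sparsity measure for a test image and turns it into a projection-count guideline via a linear rule. The coefficients of that rule are placeholders for illustration only; they are not the values fitted to the SparseBeads data.

```python
import numpy as np

def gradient_sparsity(image, tol=1e-6):
    """Count pixels with non-negligible gradient magnitude -- the image
    sparsity measure used when relating TV reconstruction quality to the
    number of projections."""
    gx = np.diff(image, axis=0, append=image[-1:, :])
    gy = np.diff(image, axis=1, append=image[:, -1:])
    return int(np.count_nonzero(np.hypot(gx, gy) > tol))

# Hypothetical near-linear guideline: projections needed ~ a * sparsity + b.
# The coefficients below are placeholders, not fitted values from SparseBeads.
def projections_needed(sparsity, a=2.5e-3, b=50):
    return int(a * sparsity + b)

phantom = np.zeros((256, 256))
phantom[64:192, 64:192] = 1.0          # a simple piecewise-constant test image
s = gradient_sparsity(phantom)
print(s, projections_needed(s))
```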
Hybrid-Wing-Body Vehicle Composite Fuselage Analysis and Case Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
2014-01-01
Recent progress in the structural analysis of a Hybrid Wing-Body (HWB) fuselage concept is presented with the objective of structural weight reduction under a set of critical design loads. This pressurized efficient HWB fuselage design is presently being investigated by the NASA Environmentally Responsible Aviation (ERA) project in collaboration with the Boeing Company, Huntington Beach. The Pultruded Rod-Stiffened Efficient Unitized Structure (PRSEUS) composite concept, developed at the Boeing Company, is approximately modeled for an analytical study and finite element analysis. Stiffened plate linear theories are employed for a parametric case study. Maximum deflection and stress levels are obtained with appropriate assumptions for a set of feasible stiffened panel configurations. An analytical parametric case study is presented to examine the effects of discrete stiffener spacing and skin thickness on structural weight, deflection and stress. A finite-element model (FEM) of an integrated fuselage section with bulkhead is developed for an independent assessment. Stress analysis and scenario-based case studies are conducted for design improvement. The specific weight of the improved fuselage concept from the FEM is computed and compared to previous studies, in order to assess the relative weight/strength advantages of this advanced composite airframe technology.
Rescia, Alejandro J; Astrada, Elizabeth N; Bono, Julieta; Blasco, Carlos A; Meli, Paula; Adámoli, Jorge M
2006-08-01
A linear engineering project--i.e. a pipeline--has a potential long- and short-term impact on the environment and on the inhabitants therein. We must find better, less expensive, and less time-consuming ways to obtain information on the environment and on any modifications resulting from anthropic activity. We need scientifically sound, rapid and affordable assessment and monitoring methods. Construction companies, industries and the regulating government bodies lack the resources needed to conduct long-term basic studies of the environment. Thus there is a need to make the necessary adjustments and improvements in the environmental data considered useful for this development project. More effective and less costly methods are generally needed. We characterized the landscape of the study area, situated in the center and north-east of Argentina. Little is known of the ecology of this region and substantial research is required in order to develop sustainable uses and, at the same time, to develop methods for reducing impacts, both primary and secondary, resulting from anthropic activity in this area. Furthermore, we made an assessment of the environmental impact of the planned linear project, applying an ad hoc impact index, and we analyzed the different alternatives for a corridor, each one of these involving different sections of the territory. Among the alternative corridors considered, this study locates the most suitable ones in accordance with a selection criterion based on different environmental and conservation aspects. We selected the corridor that we considered to be the most compatible--i.e. with the least potential environmental impact--for the possible construction and operation of the linear project. This information, along with suitable measures for mitigating possible impacts, should be the basis of an environmental management plan for the design process and location of the project. We pointed out the objectivity and efficiency of this methodological approach, along with the possibility of integrating the information so that it can be applied in this type of study.
Some comparisons of complexity in dictionary-based and linear computational models.
Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello
2011-03-01
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.
Development of semiconductor tracking: The future linear collider case
NASA Astrophysics Data System (ADS)
Savoy-Navarro, Aurore
2011-04-01
An active R&D effort on silicon tracking for the linear collider, SiLC, has been pursued for several years to develop the new generation of large-area silicon trackers for the future linear collider(s). The R&D objectives on new sensors, new front-end processing of the signal, and the related mechanical and integration challenges of building such large detectors within the proposed detector concepts are described. Synergies and differences with the LHC construction and upgrades are explained. The differences between the linear collider projects, namely the International Linear Collider, ILC, and the Compact Linear Collider, CLIC, are discussed as well. Two final objectives are presented for the construction of this important sub-detector for the future linear collider experiments: a relatively short-term design based on micro-strips, combined or not with a gaseous central tracker, and a longer-term design based on an all-pixel tracker. The R&D objectives on sensors include single-sided micro-strips as the baseline for the shorter term, with strips from large wafers (at least 6 in), 200 μm thick, 50 μm pitch, and edgeless and alignment-friendly options. This work is conducted by SiLC in collaboration with three technical research centers in Italy, Finland, and Spain, and with HPK. SiLC also studies advanced Si sensor technologies for higher-granularity trackers, especially short strips and pixels, all based on 3D technology. New deep sub-micron CMOS mixed-mode (analog and digital) front-end and readout electronics are developed to fully process the detector signals, currently adapted to the ILC cycle. The result is a highly fault-tolerant, fully programmable ASIC performing high-level signal processing. Its latest version, handling 128 channels, will equip larger silicon tracking prototypes at test beams in the coming years. Connection of the FEE chip to the silicon detector, especially in the strip case, is a major issue. Very preliminary results with an inline pitch adapter based on wiring were recently achieved. Bump-bonding or 3D vertical interconnection is the other SiLC R&D objective, with the goal of simplifying the overall architecture and decreasing the material budget of these devices. Three tracking concepts are briefly discussed, two of which are part of the ILC Letters of Intent of the ILD and SiD detector concepts. In recent years, SiLC has successfully performed beam tests to exercise and validate these R&D lines.
GIS Tools to Estimate Average Annual Daily Traffic
DOT National Transportation Integrated Search
2012-06-01
This project presents five tools that were created for a geographical information system to estimate Annual Average Daily : Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...
Application of laser speckle to randomized numerical linear algebra
NASA Astrophysics Data System (ADS)
Valley, George C.; Shaw, Thomas J.; Stapleton, Andrew D.; Scofield, Adam C.; Sefler, George A.; Johannson, Leif
2018-02-01
We propose and simulate integrated optical devices for accelerating numerical linear algebra (NLA) calculations. Data is modulated on chirped optical pulses and these propagate through a multimode waveguide where speckle provides the random projections needed for NLA dimensionality reduction.
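The numerical analogue of the speckle-based projection is a random sketching matrix applied to the data. A minimal sketch under that assumption (a Gaussian sketch, which approximately preserves pairwise distances in the Johnson-Lindenstrauss sense) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(data, k):
    """Project rows of `data` (n x d) into k dimensions with a Gaussian
    sketching matrix -- a software stand-in for the speckle-based random
    projections used for NLA dimensionality reduction."""
    n, d = data.shape
    R = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))
    return data @ R

X = rng.normal(size=(1000, 512))
Y = random_projection(X, k=64)

# Pairwise distances are approximately preserved (Johnson-Lindenstrauss).
i, j = 3, 17
print(np.linalg.norm(X[i] - X[j]), np.linalg.norm(Y[i] - Y[j]))
```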
The Aggregation of Single-Case Results Using Hierarchical Linear Models
ERIC Educational Resources Information Center
Van den Noortgate, Wim; Onghena, Patrick
2007-01-01
To investigate the generalizability of the results of single-case experimental studies, evaluating the effect of one or more treatments, in applied research various simultaneous and sequential replication strategies are used. We discuss one approach for aggregating the results for single-cases: the use of hierarchical linear models. This approach…
A Fourier transform method for Vsin i estimations under nonlinear Limb-Darkening laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levenhagen, R. S., E-mail: ronaldo.levenhagen@gmail.com
Star rotation offers us a large horizon for the study of many important physical issues pertaining to stellar evolution. Currently, four methods are widely used to infer rotation velocities, namely those based on line width calibrations, on the fitting of synthetic spectra, on interferometry, and on Fourier transforms (FTs) of line profiles. Almost all of the estimations of stellar projected rotation velocities using the Fourier method in the literature have been addressed with the use of linear limb-darkening (LD) approximations during the evaluation of rotation profiles and their cosine FTs, which in certain cases lead to discrepant velocity estimates. In this work, we introduce new mathematical expressions of rotation profiles and their Fourier cosine transforms assuming three nonlinear LD laws—quadratic, square-root, and logarithmic—and study their applications with and without gravity-darkening (GD) and geometrical flattening (GF) effects. Through an analysis of He I models in the visible range accounting for both limb and GD, we find out that, for classical models without rotationally driven effects, all the Vsin i values are too close to each other. On the other hand, taking into account GD and GF, the Vsin i values obtained with the linear law are systematically smaller than those obtained with the other laws. As a rule of thumb, we apply these expressions to the FT method to evaluate the projected rotation velocity of the emission B-type star Achernar (α Eri).
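As a concrete illustration of the FT method in the linear-LD baseline case, the sketch below builds the classical rotational broadening kernel (Gray's linear limb-darkening form, assumed here only as a reference profile), takes its cosine Fourier transform numerically, and locates the first zero, whose position calibrates Vsin i. The grid sizes and the limb-darkening coefficient are arbitrary illustration choices.

```python
import numpy as np

def rotation_profile(dlam, dlam_L, eps=0.6):
    """Classical rotational broadening kernel with a linear limb-darkening
    coefficient eps (Gray's formulation, used only as an illustration; the
    paper extends this to quadratic, square-root and logarithmic laws)."""
    x = dlam / dlam_L
    g = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    u = 1.0 - x[inside] ** 2
    g[inside] = (2.0 * (1.0 - eps) * np.sqrt(u) + 0.5 * np.pi * eps * u) / (
        np.pi * dlam_L * (1.0 - eps / 3.0))
    return g

dlam_L = 1.0                           # v sin i expressed in wavelength units
dlam = np.linspace(-5, 5, 4096)
g = rotation_profile(dlam, dlam_L)

# Cosine Fourier transform of the (even) profile; its first zero encodes v sin i.
sigma = np.linspace(0.01, 3.0, 3000)   # frequency in cycles per wavelength unit
ft = np.array([np.trapz(g * np.cos(2 * np.pi * s * dlam), dlam) for s in sigma])
first_zero = sigma[np.where(np.diff(np.sign(ft)))[0][0]]
print(first_zero * dlam_L)             # ~0.66 for eps = 0.6 (linear law)
```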
Standardisation of DNA quantitation by image analysis: quality control of instrumentation.
Puech, M; Giroud, F
1999-05-01
DNA image analysis is frequently performed in clinical practice as a prognostic tool and to improve diagnosis. The precision of prognosis and diagnosis depends on the accuracy of analysis and particularly on the quality of image analysis systems. It has been reported that image analysis systems used for DNA quantification differ widely in their characteristics (Thunissen et al.: Cytometry 27: 21-25, 1997). This induces inter-laboratory variations when the same sample is analysed in different laboratories. In microscopic image analysis, the principal instrumentation errors arise from the optical and electronic parts of systems. They bring about problems of instability, non-linearity, and shading and glare phenomena. The aim of this study is to establish tools and standardised quality control procedures for microscopic image analysis systems. Specific reference standard slides have been developed to control instability, non-linearity, shading and glare phenomena and segmentation efficiency. Some systems have been controlled with these tools and these quality control procedures. Interpretation criteria and accuracy limits of these quality control procedures are proposed according to the conclusions of a European project called PRESS project (Prototype Reference Standard Slide). Beyond these limits, tested image analysis systems are not qualified to realise precise DNA analysis. The different procedures presented in this work determine if an image analysis system is qualified to deliver sufficiently precise DNA measurements for cancer case analysis. If the controlled systems are beyond the defined limits, some recommendations are given to find a solution to the problem.
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne E.
2013-01-01
We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.
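A minimal sketch of the coarse-grid projection idea in one dimension, under simplifying assumptions (homogeneous Dirichlet boundaries, factor-two coarsening, linear interpolation as the prolongation operator): the Poisson problem is solved on the coarsened grid and the result is interpolated back to the fine grid, where it would then feed the subsequent time stepping.

```python
import numpy as np

def poisson_1d_solve(f, h):
    """Direct solve of -u'' = f on a uniform grid with homogeneous
    Dirichlet boundaries (interior unknowns only)."""
    n = f.size
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2
    return np.linalg.solve(A, f)

# Fine grid and a smooth source term.
N = 129                                # fine nodes including boundaries
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.sin(np.pi * x)

# CGP-style step: restrict the source, solve on the coarse grid,
# then interpolate the potential back to the fine grid.
xc = x[::2]                            # factor-two coarsening
fc = f[::2]
uc = np.zeros_like(xc)
uc[1:-1] = poisson_1d_solve(fc[1:-1], xc[1] - xc[0])
u_cgp = np.interp(x, xc, uc)           # prolongation by linear interpolation

u_fine = np.zeros_like(x)
u_fine[1:-1] = poisson_1d_solve(f[1:-1], h)
print(np.max(np.abs(u_cgp - u_fine)))  # small compared to the solution scale ~0.1
```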
Regional Climate Sensitivity- and Historical-Based Projections to 2100
NASA Astrophysics Data System (ADS)
Hébert, Raphaël.; Lovejoy, Shaun
2018-05-01
Reliable climate projections at the regional scale are needed in order to evaluate climate change impacts and inform policy. We develop an alternative method for projections based on the transient climate sensitivity (TCS), which relies on a linear relationship between the forced temperature response and the strongly increasing anthropogenic forcing. The TCS is evaluated at the regional scale (5° by 5°), and projections are made accordingly to 2100 using the high and low Representative Concentration Pathways emission scenarios. We find that there are large spatial discrepancies between the regional TCS from 5 historical data sets and 32 global climate model (GCM) historical runs and furthermore that the global mean GCM TCS is about 15% too high. Given that the GCM Representative Concentration Pathway scenario runs are mostly linear with respect to their (inadequate) TCS, we conclude that historical methods of regional projection are better suited given that they are directly calibrated on the real world (historical) climate.
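The core of the historical-based approach is a linear fit of the temperature response to the anthropogenic forcing, followed by extrapolation under a scenario forcing pathway. The sketch below shows that regression-and-project step with synthetic placeholder series; real gridded observations and RCP forcings would replace them.

```python
import numpy as np

# Synthetic placeholders for a historical forcing series (W m^-2) and the
# observed regional temperature anomaly (K); real data would replace these.
years_hist = np.arange(1880, 2021)
forcing_hist = 2.5 * (years_hist - 1880) / 140.0
temp_hist = 0.6 * forcing_hist + 0.1 * np.random.default_rng(1).normal(size=years_hist.size)

# Transient climate sensitivity as the slope of a least-squares fit T = tcs * F.
tcs = np.sum(forcing_hist * temp_hist) / np.sum(forcing_hist ** 2)

# Projection to 2100 under a scenario forcing pathway (again a placeholder).
years_proj = np.arange(2021, 2101)
forcing_rcp = 2.5 + 4.0 * (years_proj - 2021) / 80.0
temp_proj = tcs * forcing_rcp
print(round(tcs, 2), round(temp_proj[-1], 2))
```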
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heroux, Michael Allen; Marker, Bryan
This report summarizes the progress made as part of a one year lab-directed research and development (LDRD) project to fund the research efforts of Bryan Marker at the University of Texas at Austin. The goal of the project was to develop new techniques for automatically tuning the performance of dense linear algebra kernels. These kernels often represent the majority of computational time in an application. The primary outcome from this work is a demonstration of the value of model driven engineering as an approach to accurately predict and study performance trade-offs for dense linear algebra computations.
Trends in incidence and survival for anal cancer in New South Wales, Australia, 1972-2009.
Soeberg, Matthew J; Rogers, Kris; Currow, David C; Young, Jane M
2015-12-01
Little is known about the incidence and survival of anal cancer in New South Wales (NSW), Australia, as anal cancer cases are often grouped together with other colorectal cancers in descriptive epidemiological analyses. We studied patterns and trends in the incidence and survival of people diagnosed with anal cancer in NSW, Australia, 1972-2009 (n=2724). We also predicted anal cancer incidence in NSW during 2010-2032. Given the human papilloma virus-associated aetiology for most anal cancers, we quantified these changes over time in incidence and survival by histological subtype: anal squamous cell carcinoma (ASCC); and anal adenocarcinoma (AAC). There was a linear increase in incident anal cancer cases in NSW with an average annual percentage change (AAPC) of 1.6 (95% CI 1.1-2.0) such that, in combination with age-period-cohort modelling, we predict there will be 198 cases of anal cancer in the 2032 calendar year (95% CI 169-236). Almost all of these anal cancer cases are projected to be ASCC (94%). Survival improved over time regardless of histological subtype. However, five-year relative survival was substantially higher for people with ASCC (70% (95% CI 66-74%)) compared to AAC (51% (95% CI 43-59%)), a 37% difference. Survival was also greater for women (69% (95% CI 64-73%)) with ASCC compared to men (55% (95% CI 50-60%)). It was not possible to estimate survival by stage at diagnosis particularly given that 8% of all cases were recorded as having distant stage and 22% had missing stage data. Aetiological explanations, namely exposure to oncogenic types of human papillomavirus, along with demographic changes most likely explain the actual and projected increase in ASCC case numbers. Survival differences by gender and histological subtype point to areas where further research is warranted to improve treatment and outcomes for all anal cancer patients. Copyright © 2015 Elsevier Ltd. All rights reserved.
Orthogonal Projection in Teaching Regression and Financial Mathematics
ERIC Educational Resources Information Center
Kachapova, Farida; Kachapov, Ilias
2010-01-01
Two improvements in teaching linear regression are suggested. The first is to include the population regression model at the beginning of the topic. The second is to use a geometric approach: to interpret the regression estimate as an orthogonal projection and the estimation error as the distance (which is minimized by the projection). Linear…
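The geometric view described here is easy to verify numerically: the fitted values are the orthogonal projection of the response onto the column space of the design matrix, and the estimation error (residual) is orthogonal to that space. A minimal sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one regressor
y = 2.0 + 3.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

# The regression estimate is the orthogonal projection of y onto col(X).
H = X @ np.linalg.inv(X.T @ X) @ X.T     # projection ("hat") matrix
y_hat = H @ y
residual = y - y_hat

# The residual is orthogonal to the column space, and H is idempotent.
print(np.allclose(X.T @ residual, 0.0, atol=1e-8), np.allclose(H @ H, H))
```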
Concentrating Solar Power Projects - Urat 50MW Fresnel CSP project | Concentrating Solar Power | NREL
Status Date: September 29, 2016. Turbine Capacity: Net 50.0 MW, Gross 50.0 MW. Technology: Linear Fresnel reflector. Status: Under development.
ERIC Educational Resources Information Center
Wilson, Jason; Lawman, Joshua; Murphy, Rachael; Nelson, Marissa
2011-01-01
This article describes a probability project used in an upper division, one-semester probability course with third-semester calculus and linear algebra prerequisites. The student learning outcome focused on developing the skills necessary for approaching project-sized math/stat application problems. These skills include appropriately defining…
THe Case Method of Instruction (CMI) Project. Final Report.
ERIC Educational Resources Information Center
McWilliam, P. J.; And Others
This final report describes the Case Method of Instruction (CMI) Project, a project to develop, field test, and disseminate training materials to facilitate the use of the Case Method of Instruction by inservice and preservice instructors in developmental disabilities. CMI project activities focused on developing a collection of case stories and…
NASA Technical Reports Server (NTRS)
Callier, Frank M.; Desoer, Charles A.
1991-01-01
The aim of this book is to provide systematic and rigorous access to the main topics of linear state-space system theory, in both the continuous-time and the discrete-time cases, as well as to the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivation of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Y; Rottmann, J; Myronakis, M
2016-06-15
Purpose: The purpose of this study was to validate the use of a cascaded linear system model for MV cone-beam CT (CBCT) using a multi-layer (MLI) electronic portal imaging device (EPID) and provide experimental insight into image formation. A validated 3D model provides insight into salient factors affecting reconstructed image quality, allowing potential for optimizing detector design for CBCT applications. Methods: A cascaded linear system model was developed to investigate the potential improvement in reconstructed image quality for MV CBCT using an MLI EPID. Inputs to the three-dimensional (3D) model include projection space MTF and NPS. Experimental validation was performed on a prototype MLI detector installed on the portal imaging arm of a Varian TrueBeam radiotherapy system. CBCT scans of up to 898 projections over 360 degrees were acquired at exposures of 16 and 64 MU. Image volumes were reconstructed using a Feldkamp-type (FDK) filtered backprojection (FBP) algorithm. Flat field images and scans of a Catphan model 604 phantom were acquired. The effect of 2×2 and 4×4 detector binning was also examined. Results: Using projection flat fields as an input, examination of the modeled and measured NPS in the axial plane exhibits good agreement. Binning projection images was shown to improve axial slice SDNR by a factor of approximately 1.4. This improvement is largely driven by a decrease in image noise of roughly 20%. However, this effect is accompanied by a subsequent loss in image resolution. Conclusion: The measured axial NPS shows good agreement with the theoretical calculation using a linear system model. Binning of projection images improves SNR of large objects on the Catphan phantom by decreasing noise. Specific imaging tasks will dictate whether image binning is applied to the two-dimensional projection images. The project was partially supported by a grant from Varian Medical Systems, Inc. and grant No. R01CA188446-01 from the National Cancer Institute.
Neuropsychologic assessment of a population-based sample of Gulf War veterans.
Wallin, Mitchell T; Wilken, Jeffrey; Alfaro, Mercedes H; Rogers, Catherine; Mahan, Clare; Chapman, Julie C; Fratto, Timothy; Sullivan, Cynthia; Kang, Han; Kane, Robert
2009-09-01
The objective of this project was to compare neuropsychologic performance and quality of life in a population-based sample of deployed Gulf War (GW) veterans with and without multisymptom complaints. The study participants were obtained from the 30,000 member population-based National Health Survey of GW-era veterans conducted in 1995. Cases (N=25) were deployed to the 1990-1991 GW and met Centers for Disease Control and Prevention criteria for multisymptom GW illness (GWI). Controls (N=16) were deployed to the 1990-1991 GW but did not meet Centers for Disease Control and Prevention criteria for GWI. There were no significant differences in composite scores on the traditional and computerized neuropsychologic battery (automated neuropsychologic assessment metrics) between GW cases and controls using bivariate techniques. Multiple linear regression analyses controlling for demographic and clinical variables revealed composite automated neuropsychologic assessment metrics scores were associated with age (b=-7.8; P=0.084), and education (b=22.9; P=0.0012), but not GW case or control status (b=-63.9; P=0.22). Compared with controls, GW cases had significantly more impairment on the Personality Assessment Inventory and the short form-36. Compared with GW controls, GW cases meeting criteria for GWI had preserved cognitive function but had significant psychiatric symptoms and lower quality of life.
Global and local curvature in density functional theory.
Zhao, Qing; Ioannidis, Efthymios I; Kulik, Heather J
2016-08-07
Piecewise linearity of the energy with respect to fractional electron removal or addition is a requirement of an electronic structure method that necessitates the presence of a derivative discontinuity at integer electron occupation. Semi-local exchange-correlation (xc) approximations within density functional theory (DFT) fail to reproduce this behavior, giving rise to deviations from linearity with a convex global curvature that is evidence of many-electron, self-interaction error and electron delocalization. Popular functional tuning strategies focus on reproducing piecewise linearity, especially to improve predictions of optical properties. In a divergent approach, Hubbard U-augmented DFT (i.e., DFT+U) treats self-interaction errors by reducing the local curvature of the energy with respect to electron removal or addition from one localized subshell to the surrounding system. Although it has been suggested that DFT+U should simultaneously alleviate global and local curvature in the atomic limit, no detailed study on real systems has been carried out to probe the validity of this statement. In this work, we show when DFT+U should minimize deviations from linearity and demonstrate that a "+U" correction will never worsen the deviation from linearity of the underlying xc approximation. However, we explain varying degrees of efficiency of the approach over 27 octahedral transition metal complexes with respect to transition metal (Sc-Cu) and ligand strength (CO, NH3, and H2O) and investigate select pathological cases where the delocalization error is invisible to DFT+U within an atomic projection framework. Finally, we demonstrate that the global and local curvatures represent different quantities that show opposing behavior with increasing ligand field strength, and we identify where these two may still coincide.
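For reference, the piecewise-linearity condition and the curvature measure invoked here can be written compactly as below; this is the standard textbook statement under the usual conventions, and the paper's operational definitions of global versus local curvature may differ in detail.

```latex
% Exact piecewise linearity of the energy for fractional electron number
% N = N_0 + \omega with 0 \le \omega \le 1:
E(N_0 + \omega) = (1 - \omega)\,E(N_0) + \omega\,E(N_0 + 1),
% so the (global) curvature, which vanishes for the exact functional,
% measures the deviation from linearity:
C_{\mathrm{glob}} = \frac{\partial^2 E}{\partial N^2}
\begin{cases}
> 0 & \text{convex: delocalization (self-interaction) error,}\\
= 0 & \text{exact piecewise-linear behavior.}
\end{cases}
```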
Non-rigid alignment in electron tomography in materials science.
Printemps, Tony; Bernier, Nicolas; Bleuet, Pierre; Mula, Guido; Hervé, Lionel
2016-09-01
Electron tomography is a key technique that enables the visualization of an object in three dimensions with a resolution of about a nanometre. High-quality 3D reconstruction is possible thanks to the latest compressed sensing algorithms and/or better alignment and preprocessing of the 2D projections. Rigid alignment of 2D projections is routine in electron tomography. However, it cannot correct misalignments induced by (i) deformations of the sample due to radiation damage or (ii) drifting of the sample during the acquisition of an image in scanning transmission electron microscope mode. In both cases, those misalignments can give rise to artefacts in the reconstruction. We propose a simple-to-implement non-rigid alignment technique to correct those artefacts. This technique is particularly suited for needle-shaped samples in materials science. It is initiated by a rigid alignment of the projections and it is then followed by several rigid alignments of different parts of the projections. Piecewise linear deformations are applied to each projection to force them to simultaneously satisfy the rigid alignments of the different parts. The efficiency of this technique is demonstrated on three samples, an intermetallic sample with deformation misalignments due to a high electron dose typical to spectroscopic electron tomography, a porous silicon sample with an extremely thin end particularly sensitive to electron beam and another porous silicon sample that was drifting during image acquisitions. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Linear Collider project database
R&D projects circa 2005: a list of who is thinking of working on what. At present this includes non-SLAC, FNAL, and Cornell meetings.
Concentrating Solar Power Projects - IRESEN 1 MWe CSP-ORC pilot project
Technology: Linear. 1,700 MWh/yr. Break Ground: 2015. Start Production: September 2016. Start Year: 2016.
Spatial and temporal variation in the association between temperature and salmonellosis in NZ.
Lal, Aparna; Hales, Simon; Kirk, Martyn; Baker, Michael G; French, Nigel P
2016-04-01
Modelling the relationship between weather, climate and infectious diseases can help identify high-risk periods and provide understanding of the determinants of longer-term trends. We provide a detailed examination of the non-linear and delayed association between temperature and salmonellosis in three New Zealand cities (Auckland, Wellington and Christchurch). Salmonella notifications were geocoded to the city of residence for the reported case. City-specific associations between weekly maximum temperature and the onset date for reported salmonella infections (1997-2007) were modelled using non-linear distributed lag models, while controlling for season and long-term trends. Relatively high temperatures were positively associated with infection risk in Auckland (n=3,073) and Christchurch (n=880), although the former showed evidence of a more immediate relationship with exposure to high temperatures. There was no significant association between temperature and salmonellosis risk in Wellington. Projected increases in temperature with climate change may have localised health impacts, suggesting that preventative measures will need to be region-specific. This evidence contributes to the increasing concern over the public health impacts of climate change. © 2015 Public Health Association of Australia.
Fast computation of an optimal controller for large-scale adaptive optics.
Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc
2011-11-01
The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the methods for both off- and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
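The standard-solver baseline that the paper seeks to accelerate looks, in miniature, like the sketch below: the steady-state a priori covariance is obtained from a discrete algebraic Riccati equation and then converted into the Kalman gain. The small random system is a placeholder; for an extremely large telescope the state dimension makes this direct route impractical, which is the paper's motivation.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def kalman_gain(A, C, Q, R):
    """Steady-state Kalman gain for x_{k+1} = A x_k + w, y_k = C x_k + v,
    with process noise covariance Q and measurement noise covariance R.
    Solving the Riccati equation directly is what becomes prohibitive for
    very large AO systems."""
    P = solve_discrete_are(A.T, C.T, Q, R)           # a priori error covariance
    return P @ C.T @ np.linalg.inv(C @ P @ C.T + R)

# Small toy system standing in for a (vastly larger) turbulence phase model.
rng = np.random.default_rng(0)
n, m = 6, 3
A = 0.95 * np.eye(n)
C = rng.normal(size=(m, n))
Q = 0.1 * np.eye(n)
R = 0.01 * np.eye(m)
K = kalman_gain(A, C, Q, R)
print(K.shape)                                        # (6, 3)
```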
Using Spin Correlations to Distinguish Zh from ZA at the International Linear Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahlon, Gregory; /Penn State U., Mont Alto; Parke, Stephen J.
2006-06-01
We investigate how to exploit the spin information imparted to the Z boson in associated Higgs production at a future linear collider as an aid in distinguishing between CP-even and CP-odd Higgs bosons. We apply a generalized spin-basis analysis which allows us to study the possibilities offered by non-traditional choices of spin projection axis. In particular, we find that the Z bosons produced in association with a CP-even Higgs via polarized collisions are in a single transverse spin-state (> 90% purity) when we use the Zh-transverse basis, provided that the Z bosons are not ultra-relativistic (speed < 0.9c). This same basis applied to the associated production of a CP-odd Higgs yields Z's that are an approximately equal mixture of longitudinal and transverse polarizations. We present a decay angular distribution which could be used to distinguish between the CP-even and CP-odd cases. Finally, we make a few brief remarks about how this distribution would be affected if the Higgs boson turns out to not be a CP-eigenstate.
Kolstad, Erik W.; Johansson, Kjell Arne
2011-01-01
Background Climate change is expected to have large impacts on health at low latitudes where droughts and malnutrition, diarrhea, and malaria are projected to increase. Objectives The main objective of this study was to indicate a method to assess a range of plausible health impacts of climate change while handling uncertainties in an unambiguous manner. We illustrate this method by quantifying the impacts of projected regional warming on diarrhea in this century. Methods We combined a range of linear regression coefficients to compute projections of future climate change-induced increases in diarrhea using the results from five empirical studies and a 19-member climate model ensemble for which future greenhouse gas emissions were prescribed. Six geographical regions were analyzed. Results The model ensemble projected temperature increases of up to 4°C over land in the tropics and subtropics by the end of this century. The associated mean projected increases of relative risk of diarrhea in the six study regions were 8–11% (with SDs of 3–5%) by 2010–2039 and 22–29% (SDs of 9–12%) by 2070–2099. Conclusions Even our most conservative estimates indicate substantial impacts from climate change on the incidence of diarrhea. Nevertheless, our main conclusion is that large uncertainties are associated with future projections of diarrhea and climate change. We believe that these uncertainties can be attributed primarily to the sparsity of empirical climate–health data. Our results therefore highlight the need for empirical data in the cross section between climate and human health. PMID:20929684
2D instabilities of surface gravity waves on a linear shear current
NASA Astrophysics Data System (ADS)
Francius, Marc; Kharif, Christian
2016-04-01
Periodic 2D surface water waves propagating steadily on a rotational current have been studied by many authors (see [1] and references therein). Although the recent important theoretical developments have confirmed that periodic waves can exist over flows with arbitrary vorticity, their stability and their nonlinear evolution have not been studied extensively so far. In fact, even in the rather simple case of uniform vorticity (linear shear), few papers have been published on the effect of a vertical shear current on the side-band instability of a uniform wave train over finite depth. In most of these studies [2-5], asymptotic expansions and the method of multiple scales have been used to obtain envelope evolution equations, which eventually allow one to formulate a condition of (linear) instability to long modulational perturbations. It is noted here that this instability is often referred to in the literature as the Benjamin-Feir or modulational instability. In the present study, we consider the linear stability of finite amplitude two-dimensional, periodic water waves propagating steadily on the free surface of a fluid with constant vorticity and finite depth. First, the steadily propagating surface waves are computed with steepness up to very close to the highest, using Fourier series expansions and a collocation method, which constitutes a simple extension of Fenton's method [6] to the cases with a linear shear current. Then, the linear stability of these permanent waves to infinitesimal 2D perturbations is developed from the fully nonlinear equations in the framework of normal-mode analysis. This linear stability analysis is an extension of [7] to the case of waves in the presence of a linear shear current and permits the determination of the dominant instability as a function of depth and vorticity for a given steepness. The numerical results are used to assess the accuracy of the vor-NLS equation derived in [5] for the characteristics of modulational instabilities due to resonant four-wave interactions, as well as to study the influence of vorticity and nonlinearity on the characteristics of linear instabilities due to resonant five-wave and six-wave interactions. Depending on the dimensionless depth, superharmonic instabilities due to five-wave interactions can become dominant with increasing positive vorticity.
Acknowledgments: This work was supported by the Direction Générale de l'Armement and funded by the ANR project n°. ANR-13-ASTR-0007.
References
[1] A. Constantin, Two-dimensionality of gravity water flows of constant non-zero vorticity beneath a surface wave train, Eur. J. Mech. B/Fluids, 2011, 30, 12-16.
[2] R. S. Johnson, On the modulation of water waves on shear flows, Proc. Royal Soc. Lond. A, 1976, 347, 537-546.
[3] M. Oikawa, K. Chow, D. J. Benney, The propagation of nonlinear wave packets in a shear flow with a free surface, Stud. Appl. Math., 1987, 76, 69-92.
[4] A. I. Baumstein, Modulation of gravity waves with shear in water, Stud. Appl. Math., 1998, 100, 365-390.
[5] R. Thomas, C. Kharif, M. Manna, A nonlinear Schrödinger equation for water waves on finite depth with constant vorticity, Phys. Fluids, 2012, 24, 127102.
[6] M. M. Rienecker, J. D. Fenton, A Fourier approximation method for steady water waves, J. Fluid Mech., 1981, 104, 119-137.
[7] M. Francius, C. Kharif, Three-dimensional instabilities of periodic gravity waves in shallow water, J. Fluid Mech., 2006, 561, 417-437.
Ishii, Naohiro; Ando, Jiro; Harao, Michiko; Takemae, Masaru; Kishi, Kazuo
2018-05-07
In nipple reconstruction, the width, length, and thickness of modified star flaps are concerns for long-term reconstructed nipple projection. However, the flap's projection has not been analyzed based on its thickness. The aim of the present study was to investigate how flap thickness in a modified star flap influences the resulting reconstructed nipple and achieves an appropriate flap width in design. Sixty-three patients who underwent nipple reconstruction using a modified star flap following implant-based breast reconstruction between August 2014 and July 2016 were included in this case-control study. The length of laterally diverging flaps was 1.5 times their width. The thickness of each flap was measured using ultrasonography, and the average thickness was defined as the flap thickness. We investigated the correlation between the resulting reconstructed nipple and flap thickness, and the difference in the change of reconstructed nipple projection after using a thin or thick flap. The average flap thickness was 3.8 ± 1.7 (range 2.5-6.0) mm. There was a significant, linear correlation between the flap thickness and resulting reconstructed nipple projection (β = 0.853, p < 0.01). Furthermore, the difference between the thin and thick flaps in the resulting reconstructed nipple projection was significant (p < 0.01). Measuring the flap thickness preoperatively may allow surgeons to achieve an appropriate flap width; otherwise, alternative methods for higher projection might be used.
NASA Technical Reports Server (NTRS)
McIlraith, Sheila; Biswas, Gautam; Clancy, Dan; Gupta, Vineet
2005-01-01
This paper reports on an ongoing project to investigate techniques to diagnose complex dynamical systems that are modeled as hybrid systems. In particular, we examine continuous systems with embedded supervisory controllers that experience abrupt, partial or full failure of component devices. We cast the diagnosis problem as a model selection problem. To reduce the space of potential models under consideration, we exploit techniques from qualitative reasoning to conjecture an initial set of qualitative candidate diagnoses, which induce a smaller set of models. We refine these diagnoses using parameter estimation and model fitting techniques. As a motivating case study, we have examined the problem of diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.
Donald McKenzie; John T. Abatzoglou; E. Natasha Stavros; Narasimhan K. Larkin
2014-01-01
Seasonal changes in the climatic potential for very large wildfires (VLWF >= 50,000 ac~20,234 ha) across the western contiguous United States are projected over the 21st century using generalized linear models and downscaled climate projections for two representative concentration pathways (RCPs). Significant (p
The Next Linear Collider Program
Snowmass 2001: http://snowmass2001.org/
Electrical Systems - Modulators: http://www-project.slac.stanford.edu/lc/local/electrical/e_home.htm
DC Magnet Power: http://www-project.slac.stanford.edu/lc/local/electrical/e_home.htm
Global Systems: http://www-project.slac.stanford.edu/lc/local/electrical/e_home.htm
The Challenge of Separating Effects of Simultaneous Education Projects on Student Achievement
ERIC Educational Resources Information Center
Ma, Xin; Ma, Lingling
2009-01-01
When multiple education projects operate in an overlapping or rear-ended manner, it is always a challenge to separate unique project effects on schooling outcomes. Our analysis represents a first attempt to address this challenge. A three-level hierarchical linear model (HLM) was presented as a general analytical framework to separate program…
ERIC Educational Resources Information Center
Dounas-Frazer, Dimitri R.; Stanley, Jacob T.; Lewandowski, H. J.
2017-01-01
We investigate students' sense of ownership of multiweek final projects in an upper-division optics lab course. Using a multiple case study approach, we describe three student projects in detail. Within-case analyses focused on identifying key issues in each project, and constructing chronological descriptions of those events. Cross-case analysis…
Linear Quantum Systems: Non-Classical States and Robust Stability
2016-06-29
quantum linear systems subject to non-classical quantum fields. The major outcomes of this project are (i) derivation of quantum filtering equations for systems with non-classical input states, including single photon states, and (ii) determination of how linear... history going back some 50 years, to the birth of modern control theory with Kalman's foundational work on filtering and LQG optimal control
NASA Astrophysics Data System (ADS)
Kajiwara, Yoshiyuki; Shiraishi, Junya; Kobayashi, Shoei; Yamagami, Tamotsu
2009-03-01
A digital phase-locked loop (PLL) with a linearly constrained adaptive filter (LCAF) has been studied for higher-linear-density optical discs. LCAF has been implemented before an interpolated timing recovery (ITR) PLL unit in order to improve the quality of phase error calculation by using an adaptively equalized partial response (PR) signal. Coefficient update of an asynchronous sampled adaptive FIR filter with a least-mean-square (LMS) algorithm has been constrained by a projection matrix in order to suppress the phase shift of the tap coefficients of the adaptive filter. We have developed projection matrices that are suitable for Blu-ray disc (BD) drive systems by numerical simulation. Results have shown the properties of the projection matrices. Then, we have designed the read channel system of the ITR PLL with an LCAF model on the FPGA board for experiments. Results have shown that the LCAF improves the tilt margins of 30 gigabytes (GB) recordable BD (BD-R) and 33 GB BD read-only memory (BD-ROM) with a sufficient LMS adaptation stability.
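The constrained coefficient update can be sketched with the classic linearly constrained LMS recursion: the unconstrained LMS step is taken first and then mapped back onto the constraint set by a projection matrix. In this illustration the constraint simply pins the filter's DC gain (a stand-in; the projection matrices developed in the paper instead suppress the phase shift of the tap coefficients), and the signals are synthetic placeholders rather than an optical-disc readback channel.

```python
import numpy as np

rng = np.random.default_rng(0)
n_taps = 9

# Example linear constraint C^T w = f: here it pins the DC gain of the filter
# to 1; the read-channel application uses a different constraint matrix.
C = np.ones((n_taps, 1))
f = np.array([1.0])
P = np.eye(n_taps) - C @ np.linalg.inv(C.T @ C) @ C.T   # projection matrix
F = C @ np.linalg.inv(C.T @ C) @ f                      # constraint offset

w = F.copy()                         # start from a feasible weight vector
mu = 0.01
x_sig = rng.normal(size=5000)        # stand-in for the sampled readback signal
d_sig = np.convolve(x_sig, [0.2, 0.6, 0.2])[:x_sig.size]  # causal PR-like target

for k in range(n_taps - 1, x_sig.size):
    x = x_sig[k - n_taps + 1:k + 1][::-1]   # [x_k, x_{k-1}, ..., x_{k-n+1}]
    e = d_sig[k] - w @ x                    # a priori error
    w = P @ (w + mu * e * x) + F            # constrained (projected) LMS update

print(abs(C.T @ w - f))              # constraint satisfied to machine precision
```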
Prosthetic Leg Control in the Nullspace of Human Interaction.
Gregg, Robert D; Martin, Anne E
2016-07-01
Recent work has extended the control method of virtual constraints, originally developed for autonomous walking robots, to powered prosthetic legs for lower-limb amputees. Virtual constraints define desired joint patterns as functions of a mechanical phasing variable, which are typically enforced by torque control laws that linearize the output dynamics associated with the virtual constraints. However, the output dynamics of a powered prosthetic leg generally depend on the human interaction forces, which must be measured and canceled by the feedback linearizing control law. This feedback requires expensive multi-axis load cells, and actively canceling the interaction forces may minimize the human's influence over the prosthesis. To address these limitations, this paper proposes a method for projecting virtual constraints into the nullspace of the human interaction terms in the output dynamics. The projected virtual constraints naturally render the output dynamics invariant with respect to the human interaction forces, which instead enter into the internal dynamics of the partially linearized prosthetic system. This method is illustrated with simulations of a transfemoral amputee model walking with a powered knee-ankle prosthesis that is controlled via virtual constraints with and without the proposed projection.
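The underlying linear-algebra step, building a projector onto the nullspace of the interaction map so that projected commands cannot excite the interaction terms, can be sketched as follows; the matrix J here is a random placeholder, not the amputee-prosthesis model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# J maps prosthesis outputs to the human-interaction terms (placeholder values);
# any command component in the nullspace of J leaves those terms untouched.
J = rng.normal(size=(2, 5))                      # 2 interaction channels, 5 outputs
N = np.eye(5) - np.linalg.pinv(J) @ J            # nullspace projector

v = rng.normal(size=5)                           # an unconstrained output direction
v_null = N @ v                                   # projected (interaction-invariant) part
print(np.allclose(J @ v_null, 0.0, atol=1e-12))  # True: no effect on interaction terms
```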
Thin silicon layer SOI power device with linearly-distanced fixed charge islands
NASA Astrophysics Data System (ADS)
Yuan, Zuo; Haiou, Li; Jianghui, Zhai; Ning, Tang; Shuxiang, Song; Qi, Li
2015-05-01
A new high-voltage LDMOS with linearly-distanced fixed charge islands (LFI LDMOS) is proposed. Many linearly-distanced fixed charge islands are introduced by implanting Cs or I ions into the buried oxide layer, and dynamic holes are attracted and accumulated, which is crucial for enhancing the electric field of the buried oxide and the vertical breakdown voltage. The surface electric field is improved by increasing the distance between two adjacent fixed charge islands from source to drain, which leads to a higher concentration of the drift region and a lower on-resistance. The numerical results indicate that a breakdown voltage of 500 V with Ld = 45 μm is obtained in the proposed device, compared to 209 V for the conventional LDMOS, while maintaining low on-resistance. Project supported by the Guangxi Natural Science Foundation of China (No. 2013GXNSFAA019335), the Guangxi Department of Education Project (No.201202ZD041), the China Postdoctoral Science Foundation Project (Nos. 2012M521127, 2013T60566), and the National Natural Science Foundation of China (Nos. 61361011, 61274077, 61464003).
Lessons from Fraxinus, a crowd-sourced citizen science game in genomics
Rallapalli, Ghanasyam; Saunders, Diane GO; Yoshida, Kentaro; Edwards, Anne; Lugo, Carlos A; Collin, Steve; Clavijo, Bernardo; Corpas, Manuel; Swarbreck, David; Clark, Matthew; Downie, J Allan; Kamoun, Sophien
2015-01-01
In 2013, in response to an epidemic of ash dieback disease in England the previous year, we launched a Facebook-based game called Fraxinus to enable non-scientists to contribute to genomics studies of the pathogen that causes the disease and the ash trees that are devastated by it. Over a period of 51 weeks players were able to match computational alignments of genetic sequences in 78% of cases, and to improve them in 15% of cases. We also found that most players were only transiently interested in the game, and that the majority of the work done was performed by a small group of dedicated players. Based on our experiences we have built a linear model for the length of time that contributors are likely to donate to a crowd-sourced citizen science project. This model could serve as a guide for the design and implementation of future crowd-sourced citizen science initiatives. DOI: http://dx.doi.org/10.7554/eLife.07460.001 PMID:26219214
NASA Astrophysics Data System (ADS)
Diller, Christian; Karic, Sarah; Oberding, Sarah
2017-06-01
The topic of this article is the question of in which phases of the political planning process planners apply their methodological set of tools. To that end, the results of a research project are presented, which were gained from an examination of planning cases in learned journals. First, it is argued which model of the planning process is most suitable to reflect the cases under consideration and how it relates to models of the political process. Thereafter, it is analyzed which types of planning methods are applied in the several stages of the planning process. The central findings: although complex, many planning processes can be thoroughly depicted by a linear model with predominantly simple feedback loops. Even in times of the communicative turn, planners should, with regard to their set of tools, take care to apply not only communicative methods but also the classical analytical-rational methods. The latter are helpful especially for understanding the political process before and after the actual planning phase.
LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL
NASA Technical Reports Server (NTRS)
Duke, E. L.
1994-01-01
The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting both linearized engine effects, such as net thrust, torque, and gyroscopic effects and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case is input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of interest, or a full non-linear aerodynamic model as used in simulations. LINEAR is written in FORTRAN and has been implemented on a DEC VAX computer operating under VMS with a virtual memory requirement of approximately 296K of 8 bit bytes. Both an interactive and batch version are included. LINEAR was developed in 1988.
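The core numerical operation such a tool performs, extracting the state and control matrices of a linear model from a nonlinear model about an analysis point, can be sketched with central differences as below. The toy two-state dynamics and the step size are assumptions for illustration; LINEAR itself additionally handles trimming, engine effects, and observation equations.

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference linearization of dx/dt = f(x, u) about (x0, u0),
    returning the state matrix A and control matrix B -- the basic operation
    a tool like LINEAR performs (here without trimming, engine terms, or
    observation equations)."""
    n, m = x0.size, u0.size
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Toy longitudinal-like dynamics standing in for a full aircraft model.
def f(x, u):
    alpha, q = x
    return np.array([q - 0.8 * alpha + 0.1 * u[0],
                     -2.0 * alpha - 0.5 * q + 3.0 * u[0]])

A, B = linearize(f, np.zeros(2), np.zeros(1))
print(A)   # [[-0.8, 1.0], [-2.0, -0.5]]
print(B)   # [[0.1], [3.0]]
```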
DOT National Transportation Integrated Search
2016-08-01
This project developed a solid-state welding process based on linear friction welding (LFW) technology. While resistance flash welding or : thermite techniques are tried and true methods for joining rails and performing partial rail replacement repai...
Neural processing of gravity information
NASA Technical Reports Server (NTRS)
Schor, Robert H.
1992-01-01
The goal of this project was to use the linear acceleration capabilities of the NASA Vestibular Research Facility (VRF) at Ames Research Center to directly examine encoding of linear accelerations in the vestibular system of the cat. Most previous studies, including my own, have utilized tilt stimuli, which at very low frequencies (e.g., 'static tilt') can be considered a reasonably pure linear acceleration (e.g., 'down'); however, higher frequencies of tilt, necessary for understanding the dynamic processing of linear acceleration information, necessarily involves rotations which can stimulate the semicircular canals. The VRF, particularly the Long Linear Sled, has promise to provide controlled pure linear accelerations at a variety of stimulus frequencies, with no confounding angular motion.
Conjugate gradient based projection - A new explicit methodology for frictional contact
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Li, Maocheng; Sha, Desong
1993-01-01
With special attention towards the applicability to parallel computation or vectorization, a new and effective explicit approach for linear complementary formulations involving a conjugate gradient based projection methodology is proposed in this study for contact problems with Coulomb friction. The overall objectives are focussed towards providing an explicit methodology of computation for the complete contact problem with friction. In this regard, the primary idea for solving the linear complementary formulations stems from an established search direction which is projected to a feasible region determined by the non-negative constraint condition; this direction is then applied to the Fletcher-Reeves conjugate gradient method resulting in a powerful explicit methodology which possesses high accuracy, excellent convergence characteristics, fast computational speed and is relatively simple to implement for contact problems involving Coulomb friction.
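A much-simplified illustration of projecting a search direction onto the feasible region defined by the non-negativity constraint is the projected-gradient iteration below for a symmetric positive-definite linear complementarity problem. This is a plain gradient variant for clarity, not the Fletcher-Reeves conjugate gradient scheme of the paper, and the small matrix is an arbitrary example.

```python
import numpy as np

def projected_gradient_lcp(M, q, alpha=None, iters=5000):
    """Solve the LCP  w = M z + q,  z >= 0,  w >= 0,  z^T w = 0  for symmetric
    positive-definite M by projected gradient descent on the equivalent
    bound-constrained quadratic program (a simplified stand-in for the
    conjugate-gradient-based projection scheme of the paper)."""
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(M, 2)             # safe step size
    z = np.zeros(q.size)
    for _ in range(iters):
        z = np.maximum(z - alpha * (M @ z + q), 0.0)   # project onto z >= 0
    return z

M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
z = projected_gradient_lcp(M, q)
w = M @ z + q
print(z, w, z @ w)          # complementarity: z >= 0, w >= 0, z^T w ~ 0
```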
Spacecraft-borne long life cryogenic refrigeration: Status and trends
NASA Technical Reports Server (NTRS)
Johnson, A. L.
1983-01-01
The status of cryogenic refrigerator development intended for, or possibly applicable to, long life spacecraft-borne application is reviewed. Based on these efforts, the general development trends are identified. Using currently projected technology needs, the various trends are compared and evaluated. The linear drive, non-contacting bearing Stirling cycle refrigerator concept appears to be the best current approach that will meet the technology projection requirements for spacecraft-borne cryogenic refrigerators. However, a multiply redundant set of lightweight, moderate life, moderate reliability Stirling cycle cryogenic refrigerators using high-speed linear drive and sliding contact bearings may possibly suffice.
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Gottlieb, David; Carpenter, Mark H.
1994-01-01
It has been previously shown that the temporal integration of hyperbolic partial differential equations (PDE's) may, because of boundary conditions, lead to deterioration of accuracy of the solution. A procedure for removal of this error in the linear case has been established previously. In the present paper we consider hyperbolic (PDE's) (linear and non-linear) whose boundary treatment is done via the SAT-procedure. A methodology is present for recovery of the full order of accuracy, and has been applied to the case of a 4th order explicit finite difference scheme.
A multiscale filter for noise reduction of low-dose cone beam projections.
Yao, Weiguang; Farr, Jonathan B
2015-08-21
The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) fluence at a given detector element. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x²/(2σ_f²)), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression of σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of the residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From this analytical form, the optimal σ_f² is proved to be proportional to the noiseless fluence and modulated by the local structure strength, expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction, and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After applying the filter to the Catphan phantom projections scanned with a pulse time of 4 ms, the number of visible line pairs was similar to that of the 16 ms scan, and the contrast-to-noise ratio of the inserts was on average about 64% higher than that of the 16 ms scan. For the simulated head-and-neck patient projections with a pulse time of 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.
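The following Python sketch illustrates the general idea of a locally adaptive Gaussian filter whose scale grows with the estimated (noiseless) signal and shrinks where a local linear fit indicates structure; it is a simplified stand-in, not the authors' derived formula, and the constants k and sigma_max are illustrative.

```python
import numpy as np

def adaptive_gaussian_denoise(proj, k=2.0, sigma_max=3.0):
    """Smooth a noisy projection with a per-pixel Gaussian whose scale
    increases with the locally estimated signal and decreases with the
    local linear-fit error (a proxy for structure strength)."""
    pad = 3
    p = np.pad(proj.astype(float), pad, mode="reflect")
    out = np.empty_like(proj, dtype=float)
    H, W = proj.shape
    yy, xx = np.mgrid[-pad:pad + 1, -pad:pad + 1]
    for i in range(H):
        for j in range(W):
            win = p[i:i + 2 * pad + 1, j + pad]          # 1-D window through the pixel
            x = np.arange(win.size)
            a, b = np.polyfit(x, win, 1)                  # local linear fit
            fit_err = np.mean((win - (a * x + b)) ** 2)   # structure strength
            signal = max(win.mean(), 1e-6)                # crude noiseless estimate
            sigma = min(k * signal / (signal + fit_err), sigma_max)
            wgt = np.exp(-(xx**2 + yy**2) / (2 * sigma**2 + 1e-12))
            patch = p[i:i + 2 * pad + 1, j:j + 2 * pad + 1]
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out

noisy = np.random.poisson(200, size=(32, 32)).astype(float)
denoised = adaptive_gaussian_denoise(noisy)
```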
Radio Propagation Prediction Software for Complex Mixed Path Physical Channels
2006-08-14
4.4.6 Applied Linear Regression Analysis in the Frequency Range 1-50 MHz; 4.4.7 Projected Scaling to... From Section 4.4.6, Applied Linear Regression Analysis in the Frequency Range 1-50 MHz: In order to construct a comprehensive numerical algorithm capable of
Reaction-Infiltration Instabilities in Fractured and Porous Rocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ladd, Anthony
In this project we are developing a multiscale analysis of the evolution of fracture permeability, using numerical simulations and linear stability analysis. Our simulations include fully three-dimensional simulations of the fracture topography, fluid flow, and reactant transport, two-dimensional simulations based on aperture models, and linear stability analysis.
Mathematical Modelling in Engineering: An Alternative Way to Teach Linear Algebra
ERIC Educational Resources Information Center
Domínguez-García, S.; García-Planas, M. I.; Taberna, J.
2016-01-01
Technological advances require that basic science courses for engineering, including Linear Algebra, emphasize the development of mathematical strengths associated with modelling and interpretation of results, which are not limited only to calculus abilities. Based on this consideration, we have proposed a project-based learning, giving a dynamic…
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints, relative to the feasible directions algorithm. The second is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of the total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving in using the linear method or in using the KS function to replace the constraints.
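For reference, a minimal sketch of the Kreisselmeier-Steinhauser aggregation of several constraints into a single conservative constraint is shown below; the aggregation parameter rho and the sample constraint values are illustrative.

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of constraints g_i(x) <= 0:
    KS(g) = (1/rho) * ln(sum_i exp(rho * g_i)).
    KS >= max(g), and KS -> max(g) as rho -> infinity, so the single
    constraint KS(g) <= 0 conservatively replaces all g_i <= 0."""
    g = np.asarray(g, float)
    gmax = g.max()                        # shift for numerical stability
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

g = np.array([-0.3, -0.05, -0.6])         # three satisfied constraints
print(ks_aggregate(g), g.max())           # KS lies slightly above the max
```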
1976-06-01
References cited include: United States Naval Postgraduate School, Monterey, California, 1974; Anton, H., Elementary Linear Algebra, John Wiley & Sons, 1973; Parrat, L. G... Block-diagram labels (laser calibration and detection chain): log converter ln(laser & bias), pulse height analyzer, linear amplifier, sample trigger oscillator, scintillometers, background demodulator, laser calibration box, laser or cal voltage, log converter LN(laser or cal volt).
Non-destructive evaluation of containment walls in nuclear power plants
NASA Astrophysics Data System (ADS)
Garnier, V.; Payan, C.; Lott, M.; Ranaivomanana, N.; Balayssac, J. P.; Verdier, J.; Larose, E.; Zhang, Y.; Saliba, J.; Boniface, A.; Sbartai, Z. M.; Piwakowski, B.; Ciccarone, C.; Hafid, H.; Henault, J. M.; Buffet, F. Ouvrier
2017-02-01
Two functions are regularly tested on containment walls in order to anticipate a possible accident. The first is mechanical, to resist a possible internal over-pressure; the second is to prevent leakage. The AAPR reference accident is the rupture of a pipe in the primary circuit of a nuclear plant. In this case, the pressure and temperature can reach 5 bar and 180°C in 20 seconds. The national project `Non-destructive testing of the containment structures of nuclear plants' aims at studying the non-destructive techniques capable of evaluating the concrete properties and its damage and cracking. This 4-year project is segmented into two parts. The first consists in developing and selecting, in the laboratory, the most relevant NDE techniques to reach these goals. These evaluations are developed under conditions representative of the real stresses generated during the ten-yearly visits of the plants or those related to an accident. The second part consists in applying the selected techniques to two containment structures under pressure. The first structure is proposed by ONERA and the second is a mockup of a containment wall at 1/3 scale made by EDF within the VeRCoRs project. This communication focuses on the part of the project that concerns the characterization of the damage and cracking process by means of NDT. The tests are done in three- or four-point bending in order to study crack generation and propagation, as well as crack opening and closing. The main ultrasonic techniques developed concern linear or non-linear acoustics: acoustic emission [1], Locadiff [2], energy diffusion, surface wave velocity and attenuation, and DAET [3]. The recorded data contribute to providing maps of the investigated parameters, whether in volume, at the surface, or globally. Digital image correlation is an important additional asset to validate the coherence of the data. The spatial normalization of the data in the specimen space allows algorithms to be proposed for combining the experimental data. The test results are presented and show the capability and the limits of the evaluation from volume, surface, or global data. A data fusion procedure is associated with these results.
ERIC Educational Resources Information Center
Zandieh, Michelle; Ellis, Jessica; Rasmussen, Chris
2017-01-01
As part of a larger study of student understanding of concepts in linear algebra, we interviewed 10 university linear algebra students as to their conceptions of functions from high school algebra and linear transformation from their study of linear algebra. An overarching goal of this study was to examine how linear algebra students see linear…
Linear lichen planus in children - Case report*
Horowitz, Marcia Raquel; Vidal, Marcela de Lima; Resende, Manuela Oliveira; Teixeira, Márcia Almeida Galvão; Cavalcanti, Silvana Maria de Morais; de Alencar, Eliane Ruth Barbosa
2013-01-01
Lichen planus is an uncommon disease in children, and only 2 to 3% of affected patients are under twenty years of age. This dermatosis may appear in several clinical forms, which vary according to the morphology and distribution of lesions. In less than 0.2% of all lichen planus cases, the lesions are distributed along the lines of Blaschko, a variant called linear lichen planus. We report the case of a patient aged two years and eight months who presented with keratotic violaceous papules affecting the abdomen, buttocks and right thigh, distributed along the lines of Blaschko. Histopathological examination confirmed a diagnosis of linear lichen planus. PMID:24346902
NASA Astrophysics Data System (ADS)
Laura, Jason; Skinner, James A.; Hunter, Marc A.
2017-08-01
In this paper we present the Large Crater Clustering (LCC) tool set, an ArcGIS plugin that supports the quantitative approximation of a primary impact location from user-identified locations of possible secondary impact craters or the long axes of clustered secondary craters. The identification of primary impact craters directly supports planetary geologic mapping and topical science studies where the chronostratigraphic age of some geologic units may be known, but more distant features have questionable geologic ages. Previous works (e.g., McEwen et al., 2005; Dundas and McEwen, 2007) have shown that the source of secondary impact craters can be estimated from the secondary craters themselves. This work adapts those methods into a statistically robust tool set. We describe the four individual tools within the LCC tool set, which support: (1) processing individually digitized point observations (craters), (2) estimating the directional distribution of a clustered set of craters and back-projecting the potential flight paths (crater clusters or linearly approximated catenae or lineaments), (3) intersecting the projected paths, and (4) intersecting the back-projected trajectories to approximate the location of potential source primary craters. We present two case studies using secondary impact features mapped in two regions of Mars. We demonstrate that the tool is able to quantitatively identify primary impacts and supports improved qualitative interpretation of potential secondary crater flight trajectories.
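A minimal sketch of the intersection step is shown below: each back-projected trajectory is treated as a line in map coordinates, and the primary-crater location is approximated as the least-squares point closest to all lines. This illustrates the geometry only, not the LCC tool's code; the anchor points and directions are invented.

```python
import numpy as np

def intersect_rays(points, directions):
    """Least-squares point closest to a set of back-projected lines,
    each defined by an anchor point and a direction, in 2-D map
    coordinates."""
    S = np.zeros((2, 2))
    b = np.zeros(2)
    for a, d in zip(points, directions):
        d = np.asarray(d, float) / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)     # projector orthogonal to the line
        S += P
        b += P @ np.asarray(a, float)
    return np.linalg.solve(S, b)

# Three secondary-crater clusters whose long axes point back to ~(10, 5)
anchors    = [(20.0, 5.0), (10.0, 15.0), (18.0, 13.0)]
directions = [(1.0, 0.0), (0.0, 1.0), (0.8, 0.8)]
print(intersect_rays(anchors, directions))   # approximately [10, 5]
```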
Image reconstruction of x-ray tomography by using image J platform
NASA Astrophysics Data System (ADS)
Zain, R. M.; Razali, A. M.; Salleh, K. A. M.; Yahya, R.
2017-01-01
A tomogram is the technical term for a CT image. It is also called a slice because it corresponds to what the object being scanned would look like if it were sliced open along a plane. A CT slice corresponds to a certain thickness of the object being scanned. So, while a typical digital image is composed of pixels, a CT slice image is composed of voxels (volume elements). In the case of x-ray tomography, as in x-ray radiography, the quantity being imaged is the distribution of the attenuation coefficient μ(x) within the object of interest. The difference lies only in the technique used to produce the tomogram. A radiographic image can be produced directly after x-ray exposure, whereas a tomographic image is produced by combining radiographic images from every projection angle. Researchers have produced a number of image reconstruction methods that convert x-ray attenuation data into a tomographic image. In this work, the ramp filter in filtered back projection has been applied. The linear data acquired at each angular orientation are convolved with a specially designed filter and then back-projected across a pixel field at the same angle. This paper describes the steps for using the Image J software to produce an image reconstruction in x-ray tomography.
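A minimal Python sketch of ramp-filtered back projection is given below to illustrate the filter-then-back-project step described above; it mirrors the workflow conceptually rather than reproducing the ImageJ implementation, and the angle list in the usage comment is illustrative.

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    """Filtered back projection with a ramp filter and nearest-neighbour
    interpolation.  `sinogram` has shape (n_angles, n_detectors)."""
    n_ang, n_det = sinogram.shape
    freqs = np.fft.fftfreq(n_det)
    ramp = np.abs(freqs)                               # ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    recon = np.zeros((n_det, n_det))
    centre = n_det / 2.0
    y, x = np.mgrid[0:n_det, 0:n_det] - centre
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # detector coordinate of each image pixel for this view
        t = x * np.cos(theta) + y * np.sin(theta) + centre
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += proj[idx]                             # back-project across the pixel field
    return recon * np.pi / (2 * n_ang)

# Usage (illustrative): recon = fbp_reconstruct(sinogram, np.arange(0, 180, 1.0))
```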
A drop in uniaxial and biaxial nonlinear extensional flows
NASA Astrophysics Data System (ADS)
Favelukis, M.
2017-08-01
In this theoretical report, we explore small deformations of an initially spherical drop subjected to uniaxial or biaxial nonlinear extensional creeping flows. The problem is governed by the capillary number (Ca), the viscosity ratio (λ), and the nonlinear intensity of the flow (E). When the extensional flow is linear (E = 0), the familiar internal circulations are obtained and the same is true with E > 0, except that the external and internal flow rates increase with increasing E. If E < 0, the external flow consists of some unconnected regions leading to the same number of internal circulations (-3/7 < E < 0) or twice the number of internal circulations (E < -3/7), when compared to the linear case. The shape of the deformed drop is represented in terms of a modified Taylor deformation parameter, and the conditions for the breakup of the drop by a center pinching mechanism are also established. When the flow is linear (E = 0), the literature predicts prolate spheroidal drops for uniaxial flows (Ca > 0) and oblate spheroidal drops for biaxial flows (Ca < 0). For the same |Ca|, if E > 0, the drop is more elongated than the linear case, while E < 0 results in less elongated drops than the linear case. Compared to the linear case, for both uniaxial and biaxial extensional flows, E > 0 tends to facilitate drop breakup, while E < 0 makes drop breakup more difficult.
Investigating Students' Modes of Thinking in Linear Algebra: The Case of Linear Independence
ERIC Educational Resources Information Center
Çelik, Derya
2015-01-01
Linear algebra is one of the most challenging topics to learn and teach in many countries. To facilitate the teaching and learning of linear algebra, priority should be given to epistemologically analyze the concepts that the undergraduate students have difficulty in conceptualizing and to define their ways of reasoning in linear algebra. After…
NASA Astrophysics Data System (ADS)
Ge, Jun; Chan, Heang-Ping; Sahiner, Berkman; Zhang, Yiheng; Wei, Jun; Hadjiiski, Lubomir M.; Zhou, Chuan
2007-03-01
We are developing a computerized technique to reduce intra- and interplane ghosting artifacts caused by high-contrast objects such as dense microcalcifications (MCs) or metal markers on the reconstructed slices of digital tomosynthesis mammography (DTM). In this study, we designed a constrained iterative artifact reduction method based on a priori 3D information of individual MCs. We first segmented individual MCs on projection views (PVs) using an automated MC detection system. The centroid and the contrast profile of the individual MCs in the 3D breast volume were estimated from the backprojection of the segmented individual MCs on high-resolution (0.1 mm isotropic voxel size) reconstructed DTM slices. An isolated volume of interest (VOI) containing one or a few MCs is then modeled as a high-contrast object embedded in a local homogeneous background. A shift-variant 3D impulse response matrix (IRM) of the projection-reconstruction (PR) system for the extracted VOI was calculated using the DTM geometry and the reconstruction algorithm. The PR system for this VOI is characterized by a system of linear equations. A constrained iterative method was used to solve these equations for the effective linear attenuation coefficients (eLACs) within the isolated VOI. Spatial constraint and positivity constraint were used in this method. Finally, the intra- and interplane artifacts on the whole breast volume resulting from the MC were calculated using the corresponding impulse responses and subsequently subtracted from the original reconstructed slices. The performance of our artifact-reduction method was evaluated using a computer-simulated MC phantom, as well as phantom images and patient DTMs obtained with IRB approval. A GE prototype DTM system that acquires 21 PVs in 3º increments over a +/-30º range was used for image acquisition in this study. For the computer-simulated MC phantom, the eLACs can be estimated accurately, thus the interplane artifacts were effectively removed. For MCs in phantom and patient DTMs, our method reduced the artifacts but also created small over-corrected areas in some cases. Potential reasons for this may include: the simplified mathematical modeling of the forward projection process, and the amplified noise in the solution of the system of linear equations.
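As a simplified illustration of solving such a linear system under positivity and spatial-support constraints, the sketch below applies a projected Landweber iteration to a small synthetic problem; it is analogous in spirit to, but not identical with, the constrained iterative method described above, and the test matrices are random stand-ins.

```python
import numpy as np

def projected_landweber(A, b, iters=200, support=None):
    """Constrained iterative least-squares solution of A x = b with a
    positivity constraint and an optional spatial-support constraint."""
    x = np.zeros(A.shape[1])
    tau = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from the spectral norm
    for _ in range(iters):
        x = x + tau * A.T @ (b - A @ x)          # gradient step on ||Ax - b||^2
        x = np.maximum(x, 0.0)                   # positivity constraint
        if support is not None:
            x = np.where(support, x, 0.0)        # spatial constraint
    return x

# Small over-determined example with a known non-negative solution
rng = np.random.default_rng(0)
A = rng.random((40, 10))
x_true = np.abs(rng.normal(size=10))
b = A @ x_true
x_hat = projected_landweber(A, b)
```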
18 CFR 4.103 - General provisions for case-specific exemption.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., EXEMPTIONS, AND DETERMINATION OF PROJECT COSTS Exemption of Small Hydroelectric Power Projects of 5 Megawatts or Less § 4.103 General provisions for case-specific exemption. (a) Exemptible projects. Subject to... exempt on a case-specific basis any small hydroelectric power project from all or part of Part I of the...
18 CFR 4.103 - General provisions for case-specific exemption.
Code of Federal Regulations, 2012 CFR
2012-04-01
..., EXEMPTIONS, AND DETERMINATION OF PROJECT COSTS Exemption of Small Hydroelectric Power Projects of 5 Megawatts or Less § 4.103 General provisions for case-specific exemption. (a) Exemptible projects. Subject to... exempt on a case-specific basis any small hydroelectric power project from all or part of Part I of the...
18 CFR 4.103 - General provisions for case-specific exemption.
Code of Federal Regulations, 2011 CFR
2011-04-01
..., EXEMPTIONS, AND DETERMINATION OF PROJECT COSTS Exemption of Small Hydroelectric Power Projects of 5 Megawatts or Less § 4.103 General provisions for case-specific exemption. (a) Exemptible projects. Subject to... exempt on a case-specific basis any small hydroelectric power project from all or part of Part I of the...
18 CFR 4.103 - General provisions for case-specific exemption.
Code of Federal Regulations, 2014 CFR
2014-04-01
..., EXEMPTIONS, AND DETERMINATION OF PROJECT COSTS Exemption of Small Hydroelectric Power Projects of 5 Megawatts or Less § 4.103 General provisions for case-specific exemption. (a) Exemptible projects. Subject to... exempt on a case-specific basis any small hydroelectric power project from all or part of Part I of the...
18 CFR 4.103 - General provisions for case-specific exemption.
Code of Federal Regulations, 2013 CFR
2013-04-01
..., EXEMPTIONS, AND DETERMINATION OF PROJECT COSTS Exemption of Small Hydroelectric Power Projects of 5 Megawatts or Less § 4.103 General provisions for case-specific exemption. (a) Exemptible projects. Subject to... exempt on a case-specific basis any small hydroelectric power project from all or part of Part I of the...
Evaluation of an LED Retrofit Project at Princeton University’s Carl Icahn Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Robert G.; Murphy, Arthur L.; Perrin, Tess E.
The LED lighting retrofit at the Carl Icahn Laboratory of the Lewis-Sigler Institute for Integrative Genomics was the first building-wide interior LED project at Princeton University, following the University's experiences from several years of exterior and small-scale interior LED implementation projects. The project addressed three luminaire types – recessed 2x2 troffers, cove and other luminaires using linear T8 fluorescent lamps, and CFL downlights – which combined accounted for over 564,000 kWh of annual energy, over 90% of the lighting energy used in the facility. The Princeton Facilities Engineering staff used a thorough process of evaluating product alternatives before selecting an acceptable LED retrofit solution for each luminaire type. Overall, 815 2x2 luminaires, 550 linear fluorescent luminaires, and 240 downlights were converted to LED as part of this project. Based solely on the reductions in wattage in converting from the incumbent fluorescent lamps to LED retrofit kits, the annual energy savings from the project was over 190,000 kWh, a savings of 37%. An additional 125,000 kWh of energy savings is expected from the implementation of occupancy and task-tuning control solutions, which will bring the total savings for the project to 62%.
DOT National Transportation Integrated Search
2016-09-01
We consider the problem of solving mixed random linear equations with k components. This is the noiseless setting of mixed linear regression. The goal is to estimate multiple linear models from mixed samples in the case where the labels (which sample...
An evolutive real-time source inversion based on a linear inverse formulation
NASA Astrophysics Data System (ADS)
Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.
2016-12-01
Finite source inversion is a steppingstone to unveiling earthquake rupture. It is used in ground motion prediction, and its results shed light on the seismic cycle for better tectonic understanding. It is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made on approaches to earthquake imaging, thanks to new data acquisition and methodological advances. However, most of these techniques are posterior procedures applied once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source build-up while recording data. In order to move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, for which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station specifically. The selected unknowns are the spatio-temporal slip-rate distribution, which keeps the forward problem linear with respect to the unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build up the slip-rate when adding new data by assuming rupture causality. The formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used to stabilize the inversion, we avoid strategies based on parameter reduction, which would lead to an unwanted non-linear relationship between parameters and seismograms in our progressive build-up. Rise time, rupture velocity and other quantities can be extracted later as attributes from the slip-rate inversion we perform. Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al. 2011). A real case application is currently being explored. Our specific formulation, combined with simple prior information, as well as the numerical results obtained so far, yields interesting perspectives for a real-time implementation.
ERIC Educational Resources Information Center
Fawcett, Lee
2017-01-01
The CASE project (Case-based Approaches to Statistics Education; see www.mas.ncl.ac.uk/~nlf8/innovation) was established to investigate how the use of real-life, discipline-specific case study material in Statistics service courses could improve student engagement, motivation, and confidence. Ultimately, the project aims to promote deep learning…
A characterization of linearly repetitive cut and project sets
NASA Astrophysics Data System (ADS)
Haynes, Alan; Koivusalo, Henna; Walton, James
2018-02-01
For the development of a mathematical theory which can be used to rigorously investigate physical properties of quasicrystals, it is necessary to understand regularity of patterns in special classes of aperiodic point sets in Euclidean space. In one dimension, prototypical mathematical models for quasicrystals are provided by Sturmian sequences and by point sets generated by substitution rules. Regularity properties of such sets are well understood, thanks mostly to well known results by Morse and Hedlund, and physicists have used this understanding to study one dimensional random Schrödinger operators and lattice gas models. A key fact which plays an important role in these problems is the existence of a subadditive ergodic theorem, which is guaranteed when the corresponding point set is linearly repetitive. In this paper we extend the one-dimensional model to cut and project sets, which generalize Sturmian sequences in higher dimensions, and which are frequently used in mathematical and physical literature as models for higher dimensional quasicrystals. By using a combination of algebraic, geometric, and dynamical techniques, together with input from higher dimensional Diophantine approximation, we give a complete characterization of all linearly repetitive cut and project sets with cubical windows. We also prove that these are precisely the collection of such sets which satisfy subadditive ergodic theorems. The results are explicit enough to allow us to apply them to known classical models, and to construct linearly repetitive cut and project sets in all pairs of dimensions and codimensions in which they exist. Research supported by EPSRC grants EP/L001462, EP/J00149X, EP/M023540. HK also gratefully acknowledges the support of the Osk. Huttunen foundation.
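As a concrete one-dimensional illustration of the cut and project construction, the sketch below generates a Sturmian sequence as differences of a Beatty sequence; choosing the slope as the inverse golden ratio gives the Fibonacci word, a standard example of a linearly repetitive sequence.

```python
import numpy as np

def cut_and_project(alpha, n):
    """First n letters of the Sturmian sequence of slope alpha, the 1-D
    prototype of a cut and project set: s_k = floor((k+1)*alpha) - floor(k*alpha)."""
    k = np.arange(n)
    return (np.floor((k + 1) * alpha) - np.floor(k * alpha)).astype(int)

# Fibonacci word: alpha = 1/golden ratio gives a linearly repetitive sequence
alpha = 2.0 / (1.0 + np.sqrt(5.0))
print("".join(map(str, cut_and_project(alpha, 30))))
```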
Conjecture about the 2-Flavour QCD Phase Diagram
NASA Astrophysics Data System (ADS)
Nava Blanco, M. A.; Bietenholz, W.; Fernández Téllez, A.
2017-10-01
The QCD phase diagram, in particular its sector of high baryon density, is one of the most prominent outstanding mysteries within the Standard Model of particle physics. We sketch a project for arriving at a conjecture for the case of two massless quark flavours. The pattern of spontaneous chiral symmetry breaking is isomorphic to the spontaneous magnetisation in an O(4) non-linear σ-model, which can be employed as a low-energy effective theory to study the critical behaviour. We focus on the 3d O(4) model, where the configurations are divided into topological sectors, as in QCD. A topological winding with minimal Euclidean action is denoted as a skyrmion, and the topological charge corresponds to the QCD baryon number. This effective model can be simulated on a lattice with a powerful cluster algorithm, which should allow us to identify the features of the critical temperature as we proceed from low to high baryon density. In this sense, this projected numerical study has the potential to provide us with a conjecture about the phase diagram of QCD with two massless quark flavours.
Adler, N R; McLean, C A; Aung, A K; Goh, M S Y
2017-04-01
Linear IgA bullous dermatosis (LABD) is a subepidermal autoimmune bullous disease characterized by linear IgA deposition at the basement membrane zone, which is visualized by direct immunofluorescence. Patients with LABD typically present with widespread vesicles and bullae; however, this is not necessarily the case, as the clinical presentation of this disease is heterogeneous. LABD clinically presenting as Stevens-Johnson syndrome (SJS) or toxic epidermal necrolysis (TEN) is an infrequent, yet well-described phenomenon. Most cases of LABD are idiopathic, but some cases are drug-induced. Multiple drugs have been implicated in the development of LABD. We report a case of piperacillin-tazobactam-induced LABD presenting clinically as SJS/TEN overlap. This is the first reported case of a strong causal association between piperacillin-tazobactam and the development of LABD. © 2017 British Association of Dermatologists.
Electric energy savings from new technologies. Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrer, B.J.; Kellogg, M.A.; Lyke, A.J.
1986-09-01
The purpose of the report is to provide OCEP with information about the electricity-saving potential of new technologies that it can use in developing alternative long-term projections of US electricity consumption. Low-, base-, and high-case scenarios of the electricity savings for 10 technologies were prepared. The total projected annual savings for the year 2000 for all 10 technologies were 137 billion kilowatt hours (BkWh), 279 BkWh, and 470 BkWh, respectively, for the three cases. The magnitude of these savings projections can be gauged by comparing them to the Department's reference case projection for the 1985 National Energy Policy Plan. In the Department's reference case, total consumption in 2000 is projected to be 3319 BkWh. Because approximately 75% of the base-case estimate of savings is already incorporated into the reference projection, only 25% of the savings estimated here should be subtracted from the reference projection for analysis purposes.
Estimated incidence of pertussis in people aged <50 years in the United States
Chen, Chi-Chang; Balderston McGuiness, Catherine; Krishnarajah, Girishanthy; Blanchette, Christopher M.; Wang, Yuanyuan; Sun, Kainan; Buck, Philip O.
2016-01-01
ABSTRACT The introduction of pertussis vaccination in the United States (US) in the 1940s has greatly reduced its burden. However, the incidence of pertussis is difficult to quantify, as many cases are not laboratory-confirmed or reported, particularly in adults. This study estimated pertussis incidence in a commercially insured US population aged <50 years. Data were extracted from IMS' PharMetrics Plus claims database for patients with a diagnosis of pertussis or cough illness using International Classification of Diseases (ICD-9) codes, a commercial outpatient laboratory database for patients with a pertussis laboratory test, and the Centers for Disease Control influenza surveillance database. US national pertussis incidence was projected using 3 methods: (1) diagnosed pertussis, defined as a claim for pertussis (ICD-9 033.0, 033.9, 484.3) during 2008–2013; (2) based on proxy pertussis predictive logistic regression models; (3) using the fraction of cough illness (ICD-9 033.0, 033.9, 484.3, 786.2, 466.0, 466.1, 487.1) attributed to laboratory-confirmed pertussis, estimated by time series linear regression models. Method 1 gave a projected annual incidence of diagnosed pertussis of 9/100,000, which was highest in those aged <1 year. Method 2 gave an average annual projected incidence of 21/100,000. Method 3 gave an overall regression-estimated weighted annual incidence of pertussis of 649/100,000, approximately 58–93 times higher than method 1 depending on the year. These estimations, which are consistent with considerable underreporting of pertussis in people aged <50 years and provide further evidence that the majority of cases go undetected, especially with increasing age, may aid in the development of public health programs to reduce pertussis burden. PMID:27246119
Evidence from Social Service Enhancement Projects: Selected Cases from Norway's HUSK Project.
Johannessen, Asbjorn; Eide, Solveig Botnen
2015-01-01
Through this article the authors describe the social service context of the HUSK (The University Research Program to Support Selected Municipal Social Service Offices) projects and briefly describe 10 of the 50 projects funded throughout the country. The welfare state context for the cases and the criteria for case selection are also provided. The 10 cases are organized into three categories that feature the role of dialogue, educational innovation, and service innovation. These cases provide the foundation for the analysis and implications located in the subsequent articles of the special issue.
Impact analysis of government investment on water projects in the arid Gansu Province of China
NASA Astrophysics Data System (ADS)
Wang, Zhan; Deng, Xiangzheng; Li, Xiubin; Zhou, Qing; Yan, Haiming
In this paper, we introduce a three-level nested Constant Elasticity of Substitution (CES) production function into a static Computable General Equilibrium (CGE) model. Through four levels of factor productivity, we construct a three-level nested production function of land-use productivity within the conceptual modeling framework. The first level of factor productivity is generated by the basic value added of land. On the second level, factor productivity in each sector is generated by human activities, representing human intervention in the first level of factor productivity. On the third level, water allocation reshapes the non-linear structure of transactions among the first and second levels. From the perspective of resource utilization, we examine the economic efficiency of water allocation. The scenario-based empirical analysis shows that the three-level nested CES production function within the CGE model is well behaved in representing the economic system of the case study area. Firstly, water scarcity harms economic production, so government investment in water projects in Gansu thereby affects economic outcomes. Secondly, large governmental financing of water projects brings a depreciation of the present value of social welfare; moreover, water use for environmental adaptation puts pressure on water supply, and the theoretical water price can increase sharply because of the rising costs of factor inputs. Thirdly, water use efficiency can be improved by water projects, typically benefiting from the expansion of water-saving irrigation areas, even in the expanding dry areas of Gansu. Therefore, increasing governmental financing of water projects can depreciate the present value of social welfare but benefit economic efficiency for future generations.
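For orientation, one generic CES nest of the kind that can be composed into such a three-level structure is written below; the symbols (output Y, inputs X1 and X2, share parameter δ, substitution parameter ρ, efficiency A) follow standard CES notation and are not taken from the paper.

```latex
% One generic CES nest (standard notation, illustrative only):
% output Y from inputs X_1, X_2 with share delta and substitution parameter rho.
\[
  Y = A\left[\delta X_1^{\rho} + (1-\delta)\,X_2^{\rho}\right]^{1/\rho},
  \qquad \sigma = \frac{1}{1-\rho}
\]
% A nested structure applies such a nest at each level, e.g. land value-added,
% then sectoral human activity, then water allocation.
```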
Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Theodoridis, Sergios
2008-12-01
Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu Feipeng; Shi Hongjian; Bai Pengxiang
In fringe projection, the CCD camera and the projector are often placed at equal height. In this paper, we will study the calibration of an unequal arrangement of the CCD camera and the projector. The principle of fringe projection with two-dimensional digital image correlation to acquire the profile of object surface is described in detail. By formula derivation and experiment, the linear relationship between the out-of-plane calibration coefficient and the y coordinate is clearly found. To acquire the three-dimensional (3D) information of an object correctly, this paper presents an effective calibration method with linear least-squares fitting, which is very simple in principle and calibration. Experiments are implemented to validate the availability and reliability of the calibration method.
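A minimal sketch of the fitting step is shown below: calibration measurements of the out-of-plane coefficient at different image rows are fitted with a least-squares line c(y) = a*y + b. The variable names and synthetic data are illustrative, not the authors' calibration values.

```python
import numpy as np

def fit_out_of_plane_coefficient(y_coords, coeffs):
    """Least-squares fit of the linear relationship c(y) = a*y + b between
    the out-of-plane calibration coefficient and the image y coordinate."""
    A = np.column_stack([y_coords, np.ones_like(y_coords)])
    (a, b), *_ = np.linalg.lstsq(A, coeffs, rcond=None)
    return a, b

# Synthetic calibration data with a little measurement noise
y = np.linspace(0, 768, 20)
c = 0.002 * y + 1.5 + np.random.normal(0, 0.01, y.size)
a, b = fit_out_of_plane_coefficient(y, c)
```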
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tim Roney; Robert Seifert; Bob Pink
2011-09-01
The field-portable Digital Radiography and Computed Tomography (DRCT) x-ray inspection systems developed for the Project Manager for NonStockpile Chemical Materiel (PMNSCM) over the past 13 years have used linear diode detector arrays from two manufacturers: Thomson and Thales. These two manufacturers no longer produce this type of detector. In the interest of ensuring the long-term viability of the portable DRCT single-munitions inspection systems and to improve the imaging capabilities, this project has been investigating improved, commercially available detectors. During FY-10, detectors were evaluated and one in particular, manufactured by Detection Technologies (DT), Inc, was acquired for possible integration into the DRCT systems. The remainder of this report describes the work performed in FY-11 to complete evaluations and fully integrate the detector onto a representative DRCT platform.
On the equivalence of case-crossover and time series methods in environmental epidemiology.
Lu, Yun; Zeger, Scott L
2007-04-01
The case-crossover design was introduced in epidemiology 15 years ago as a method for studying the effects of a risk factor on a health event using only cases. The idea is to compare a case's exposure immediately prior to or during the case-defining event with that same person's exposure at otherwise similar "reference" times. An alternative approach to the analysis of daily exposure and case-only data is time series analysis. Here, log-linear regression models express the expected total number of events on each day as a function of the exposure level and potential confounding variables. In time series analyses of air pollution, smooth functions of time and weather are the main confounders. Time series and case-crossover methods are often viewed as competing methods. In this paper, we show that case-crossover using conditional logistic regression is a special case of time series analysis when there is a common exposure such as in air pollution studies. This equivalence provides computational convenience for case-crossover analyses and a better understanding of time series models. Time series log-linear regression accounts for overdispersion of the Poisson variance, while case-crossover analyses typically do not. This equivalence also permits model checking for case-crossover data using standard log-linear model diagnostics.
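To make the time-series side of this comparison concrete, the sketch below fits a log-linear Poisson regression of simulated daily counts on an exposure and a day-of-week term by iteratively reweighted least squares; it is a generic illustration only (no overdispersion adjustment and no conditional-logistic counterpart), and the simulated coefficients are invented.

```python
import numpy as np

def poisson_loglinear(X, y, iters=25):
    """Fit E[y_t] = exp(X_t beta) by iteratively reweighted least squares,
    a minimal stand-in for the time-series log-linear regression described
    above."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        W = mu                                 # Poisson working weights
        z = X @ beta + (y - mu) / mu           # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (W * z))
    return beta

# Daily counts driven by an exposure plus a weekly cycle (simulated)
rng = np.random.default_rng(1)
n = 365
pollution = rng.gamma(2.0, 1.0, n)
dow = np.sin(2 * np.pi * np.arange(n) / 7)
y = rng.poisson(np.exp(1.0 + 0.05 * pollution + 0.2 * dow))
X = np.column_stack([np.ones(n), pollution, dow])
print(poisson_loglinear(X, y))    # roughly [1.0, 0.05, 0.2]
```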
On Generalizations of Cochran’s Theorem and Projection Matrices.
1980-08-01
Cited technical reports and references include: Pukelsheim, F. and Styan, G. P. H. (May 1978), "Definiteness of the Estimated Dispersion Matrix in a Multivariate Linear Model"; "... with applications to the analysis of covariance," Proc. Cambridge Philos. Soc., 30, pp. 178-191; Graybill, F. A. and Marsaglia, G. (1957), "Idempotent matrices and quadratic forms in the general linear hypothesis," Ann. Math. Statist., 28, pp. 678-686; Greub, W. (1975), Linear Algebra (4th ed.).
Circulating 25-Hydroxyvitamin D and Risk of Kidney Cancer
Gallicchio, Lisa; Moore, Lee E.; Stevens, Victoria L.; Ahn, Jiyoung; Albanes, Demetrius; Hartmuller, Virginia; Setiawan, V. Wendy; Helzlsouer, Kathy J.; Yang, Gong; Xiang, Yong-Bing; Shu, Xiao-Ou; Snyder, Kirk; Weinstein, Stephanie J.; Yu, Kai; Zeleniuch-Jacquotte, Anne; Zheng, Wei; Cai, Qiuyin; Campbell, David S.; Chen, Yu; Chow, Wong-Ho; Horst, Ronald L.; Kolonel, Laurence N.; McCullough, Marjorie L.; Purdue, Mark P.; Koenig, Karen L.
2010-01-01
Although the kidney is a major organ for vitamin D metabolism, activity, and calcium-related homeostasis, little is known about whether this nutrient plays a role in the development or the inhibition of kidney cancer. To address this gap in knowledge, the authors examined the association between circulating 25-hydroxyvitamin D (25(OH)D) and kidney cancer within a large, nested case-control study developed as part of the Cohort Consortium Vitamin D Pooling Project of Rarer Cancers. Concentrations of 25(OH)D were measured from 775 kidney cancer cases and 775 age-, sex-, race-, and season-matched controls from 8 prospective cohort studies. Overall, neither low nor high concentrations of circulating 25(OH)D were significantly associated with kidney cancer risk. Although the data showed a statistically significant decreased risk for females (odds ratio = 0.31, 95% confidence interval: 0.12, 0.85) with 25(OH)D concentrations of ≥75 nmol/L, the linear trend was not statistically significant and the number of cases in this category was small (n = 14). The findings from this consortium-based study do not support the hypothesis that vitamin D is inversely associated with the risk of kidney cancer overall or with renal cell carcinoma specifically. PMID:20562187
Principal Component Analysis: Resources for an Essential Application of Linear Algebra
ERIC Educational Resources Information Center
Pankavich, Stephen; Swanson, Rebecca
2015-01-01
Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…
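As one example of the kind of computation such a project involves, a minimal PCA via the singular value decomposition is sketched below; the synthetic data and the choice of k are illustrative.

```python
import numpy as np

def pca(X, k=2):
    """PCA via the singular value decomposition: centre the data, take the
    top-k right singular vectors as principal directions, and project the
    data onto them."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                    # principal directions
    scores = Xc @ components.T             # coordinates in the new basis
    explained = s[:k]**2 / np.sum(s**2)    # fraction of variance captured
    return components, scores, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 1.0], [0.0, 0.5]])
comps, scores, var = pca(X, k=1)
```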
Re-Mediating Classroom Activity with a Non-Linear, Multi-Display Presentation Tool
ERIC Educational Resources Information Center
Bligh, Brett; Coyle, Do
2013-01-01
This paper uses an Activity Theory framework to evaluate the use of a novel, multi-screen, non-linear presentation tool. The Thunder tool allows presenters to manipulate and annotate multiple digital slides and to concurrently display a selection of juxtaposed resources across a wall-sized projection area. Conventional, single screen presentation…
Orthogonal Regression: A Teaching Perspective
ERIC Educational Resources Information Center
Carr, James R.
2012-01-01
A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…
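A minimal sketch of the major-axis (orthogonal regression) fit is given below: the fitted line passes through the centroid along the leading eigenvector of the sample covariance matrix, which minimizes the sum of squared orthogonal distances; the simulated data are illustrative.

```python
import numpy as np

def major_axis(x, y):
    """Orthogonal (major-axis) regression: line through the centroid along
    the leading eigenvector of the sample covariance matrix."""
    xm, ym = x.mean(), y.mean()
    cov = np.cov(x - xm, y - ym)
    eigval, eigvec = np.linalg.eigh(cov)
    d = eigvec[:, np.argmax(eigval)]       # direction of largest variance
    slope = d[1] / d[0]
    intercept = ym - slope * xm
    return slope, intercept

x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + np.random.normal(0, 0.5, x.size)
print(major_axis(x, y))                    # slope ~2, intercept ~1
```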
How is the weather? Forecasting inpatient glycemic control
Saulnier, George E; Castro, Janna C; Cook, Curtiss B; Thompson, Bithika M
2017-01-01
Aim: Apply methods of damped trend analysis to forecast inpatient glycemic control. Method: Observed and calculated point-of-care blood glucose data trends were determined over 62 weeks. Mean absolute percent error was used to calculate differences between observed and forecasted values. Comparisons were drawn between model results and linear regression forecasting. Results: The forecasted mean glucose trends observed during the first 24 and 48 weeks of projections compared favorably to the results provided by linear regression forecasting. However, in some scenarios, the damped trend method changed inferences compared with linear regression. In all scenarios, mean absolute percent error values remained below the 10% accepted by demand industries. Conclusion: Results indicate that forecasting methods historically applied within demand industries can project future inpatient glycemic control. Additional study is needed to determine if forecasting is useful in the analyses of other glucometric parameters and, if so, how to apply the techniques to quality improvement. PMID:29134125
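As an illustration of the kind of damped trend method referred to above, the sketch below implements additive damped-trend exponential smoothing together with the mean absolute percent error; the smoothing parameters and the short glucose series are illustrative rather than taken from the study.

```python
import numpy as np

def damped_trend_forecast(y, alpha=0.4, beta=0.2, phi=0.9, horizon=8):
    """Additive damped-trend exponential smoothing (Gardner-McKenzie style)
    with level l, trend b, and damping parameter phi."""
    l, b = y[0], y[1] - y[0]
    for t in range(1, len(y)):
        l_prev = l
        l = alpha * y[t] + (1 - alpha) * (l_prev + phi * b)
        b = beta * (l - l_prev) + (1 - beta) * phi * b
    steps = np.arange(1, horizon + 1)
    damp = np.cumsum(phi ** steps)          # phi + phi^2 + ... + phi^h
    return l + damp * b

def mape(actual, forecast):
    """Mean absolute percent error between observed and forecasted values."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

weekly_glucose = np.array([182, 180, 179, 177, 178, 175, 174, 172, 171, 170], float)
print(damped_trend_forecast(weekly_glucose, horizon=4))
```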
Dynamics of attitudes and genetic processes.
Guastello, Stephen J; Guastello, Denise D
2008-01-01
Relatively new discoveries of a genetic component to attitudes have challenged the traditional viewpoint that attitudes are primarily learned ideas and behaviors. Attitudes that are regarded by respondents as "more important" tend to have greater genetic components to them, and tend to be more closely associated with authoritarianism. Nonlinear theories, nonetheless, have also been introduced to study attitude change. The objective of this study was to determine whether change in authoritarian attitudes across two generations would be more aptly described by a linear or a nonlinear model. Participants were 372 college students, their mothers, and their fathers who completed an attitude questionnaire. Results indicated that the nonlinear model (R2 = .09) was slightly better than the linear model (R2 = .08), but the two models offered very different forecasts for future generations of US society. The linear model projected a gradual and continuing bifurcation between authoritarians and non-authoritarians. The nonlinear model projected a stabilization of authoritarian attitudes.
Simple estimation of linear 1+1 D tsunami run-up
NASA Astrophysics Data System (ADS)
Fuentes, M.; Campos, J. A.; Riquelme, S.
2016-12-01
An analytical expression is derived for the linear run-up of any given initial wave generated over a sloping bathymetry. Due to the simplicity of the linear formulation, complex transformations are unnecessary, because the shoreline motion is obtained directly in terms of the initial wave. This analytical result not only supports the invariance of the maximum run-up between linear and non-linear theories, but also provides the time evolution of the shoreline motion and velocity. The results exhibit good agreement with the non-linear theory. The present formulation also allows the shoreline motion to be computed numerically from a customised initial waveform, including non-smooth functions. This is useful for numerical tests, laboratory experiments or realistic cases in which the initial disturbance might be retrieved from seismic data rather than from a theoretical model. It is also shown that the real case studied is consistent with the field observations.
LINEAR POLYMER CHAIN AND BIOENGINEERED CHELATORS FOR METALS REMEDIATION
The 3-year GCHSRC grant of $150,000 leverages financial assistance from the University ($94,500 match) as well as collaborative assistance from LANL and TCEQ in the project. Similarly, a related project supported by the Welch Foundation will likely contribute to the k...
Laser Bioeffects Resulting from Non-Linear Interactions of Ultrashort Pulses with Biological Systems
2004-07-01
Personnel: Saher Maswadi, Ph.D. (Postdoctoral Fellow), 100% on project. Manuscripts submitted/published: Glickman RD, "Phototoxicity to the retina..." Working with Dr. Saher Maswadi, the AFOSR-supported postdoctoral fellow in my laboratory, we have implemented a non-invasive method for measuring absolute...
NASA Astrophysics Data System (ADS)
Oladyshkin, Sergey; Class, Holger; Helmig, Rainer; Nowak, Wolfgang
2010-05-01
CO2 storage in geological formations is currently being discussed intensively as a technology for mitigating CO2 emissions. However, any large-scale application requires a thorough analysis of the potential risks. Current numerical simulation models are too expensive for probabilistic risk analysis and for stochastic approaches based on brute-force repeated simulation. Even single deterministic simulations may require parallel high-performance computing. The multiphase flow processes involved are too non-linear for quasi-linear error propagation and other simplified stochastic tools. As an alternative approach, we propose a massive stochastic model reduction based on the probabilistic collocation method. The model response is projected onto an orthogonal basis of higher-order polynomials to approximate dependence on uncertain parameters (porosity, permeability etc.) and design parameters (injection rate, depth etc.). This allows for a non-linear propagation of model uncertainty affecting the predicted risk, ensures fast computation and provides a powerful tool for combining design variables and uncertain variables into one approach based on an integrative response surface. Thus, the design task of finding optimal injection regimes explicitly includes uncertainty, which leads to robust designs of the non-linear system that minimize failure probability and provide valuable support for risk-informed management decisions. We validate our proposed stochastic approach by Monte Carlo simulation using a common 3D benchmark problem (Class et al., Computational Geosciences 13, 2009). A reasonable compromise between computational effort and precision was already reached with second-order polynomials. In our case study, the proposed approach yields a significant computational speedup by a factor of 100 compared to Monte Carlo simulation. We demonstrate that, due to the non-linearity of the flow and transport processes during CO2 injection, including uncertainty in the analysis leads to a systematic and significant shift of predicted leakage rates towards higher values compared with deterministic simulations, affecting both risk estimates and the design of injection scenarios. This implies that neglecting uncertainty can be a strong simplification in modeling CO2 injection, and the consequences can be stronger than those of neglecting several physical phenomena (e.g. phase transition, convective mixing, capillary forces etc.). The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart. Keywords: polynomial chaos; CO2 storage; multiphase flow; porous media; risk assessment; uncertainty; integrative response surfaces
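To illustrate the probabilistic collocation idea in its simplest form, the sketch below fits a one-dimensional Hermite polynomial surrogate to a stand-in "simulator" evaluated at a few collocation points and then propagates uncertainty by Monte Carlo on the surrogate. The leakage function, polynomial degree and node count are illustrative; the real application involves many parameters and a full multiphase flow model.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def collocation_surrogate(model, degree=2, n_colloc=7, n_mc=100_000, seed=0):
    """1-D probabilistic collocation sketch: evaluate an expensive model at
    a few nodes of a standard-normal parameter, fit a Hermite polynomial
    surrogate, then propagate uncertainty cheaply through the surrogate."""
    rng = np.random.default_rng(seed)
    xi_colloc = He.hermegauss(n_colloc)[0]           # Gauss-Hermite(e) nodes
    y_colloc = np.array([model(x) for x in xi_colloc])
    coeffs = He.hermefit(xi_colloc, y_colloc, degree)
    xi_mc = rng.standard_normal(n_mc)
    y_mc = He.hermeval(xi_mc, coeffs)                # surrogate evaluations
    return y_mc.mean(), y_mc.std()

# Stand-in "simulator": a smooth nonlinear response to a normalized parameter
leakage = lambda xi: np.exp(0.3 * xi) / (1.0 + 0.1 * xi**2)
print(collocation_surrogate(leakage))
```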
Hierarchical Control and Trajectory Planning
NASA Technical Reports Server (NTRS)
Martin, Clyde F.; Horn, P. W.
1994-01-01
Most of the time on this project was spent on the trajectory planning problem. The construction is equivalent to the classical spline construction in the case that the system matrix is nilpotent. If the dimension of the system is n, then a spline of degree 2n-1 is constructed. This gives a new approach to the construction of splines that is more efficient than the usual construction and at the same time allows the construction of a much larger class of splines. All known classes of splines are reconstructed using the approach of linear control theory. As a numerical analysis tool, control theory provides a very good means of constructing splines. However, for the purposes of trajectory planning it is quite another story. Enclosed in this document are four reports done under this grant.
NASA Technical Reports Server (NTRS)
Haefner, L. E.
1975-01-01
Mathematical and philosophical approaches are presented for the evaluation and implementation of ground and air transportation systems. Basic decision processes used for cost analyses and planning (i.e., statistical decision theory, linear and dynamic programming, optimization, and game theory) are examined. The effects that a transportation system may have on the environment and the community are discussed and modelled. Algorithmic structures are examined, and selected bibliographic annotations are included. Dynamic transportation models were developed. Citizen participation in transportation projects (i.e., in Maryland and Massachusetts) is discussed. The relevance of the modelling and evaluation approaches to air transportation (i.e., airport planning) is examined in a case study in St. Louis, Missouri.
Responding to Mechanical Antigravity
NASA Technical Reports Server (NTRS)
Millis, Marc G.; Thomas, Nicholas E.
2006-01-01
Based on the experiences of the NASA Breakthrough Propulsion Physics Project, suggestions are offered for constructively responding to proposals that purport to achieve breakthrough propulsion using mechanical devices. Because of the relatively large number of unsolicited submissions received (about one per workday), and because many of these involve similar concepts, this report is offered to help would-be submitters make genuine progress as well as to help reviewers respond to such submissions. Devices that use oscillating masses or gyroscopes falsely appear to create net thrust through differential friction or by misinterpreting torques as linear forces. To cover both the possibility of an errant claim and that of a genuine discovery, reviews should require that submitters meet minimal thresholds of proof, such as achieving sustained deflection of a level-platform pendulum in the case of mechanical thrusters, before engaging in further correspondence.
Local facet approximation for image stitching
NASA Astrophysics Data System (ADS)
Li, Jing; Lai, Shiming; Liu, Yu; Wang, Zhengming; Zhang, Maojun
2018-01-01
Image stitching aims at eliminating multiview parallax and generating a seamless panorama given a set of input images. This paper proposes a local adaptive stitching method, which could achieve both accurate and robust image alignments across the whole panorama. A transformation estimation model is introduced by approximating the scene as a combination of neighboring facets. Then, the local adaptive stitching field is constructed using a series of linear systems of the facet parameters, which enables the parallax handling in three-dimensional space. We also provide a concise but effective global projectivity preserving technique that smoothly varies the transformations from local adaptive to global planar. The proposed model is capable of stitching both normal images and fisheye images. The efficiency of our method is quantitatively demonstrated in the comparative experiments on several challenging cases.
ERIC Educational Resources Information Center
Ryberg, Thomas; Koottatep, Suporn; Pengchai, Petch; Dirckinck-Holmfeld, Lone
2006-01-01
In this article we bring together experiences from two international research projects: the Kaleidoscope ERT research collaboration and the VO@NET project. We do this by using a shared framework identified for cross-case analyses within the Kaleidoscope ERT to analyse a particular case in the VO@NET project, a training course called "Green…
Project Career: A qualitative examination of five college students with traumatic brain injuries.
Nardone, Amanda; Sampson, Elaine; Stauffer, Callista; Leopold, Anne; Jacobs, Karen; Hendricks, Deborah J; Elias, Eileen; Chen, Hui; Rumrill, Phillip
2015-01-01
Project Career is an interprofessional five-year development project designed to improve the employment success of undergraduate college and university students with traumatic brain injury (TBI). The case study information was collected and synthesized by the project's Technology and Employment Coordinators (TECs) at each of the project's three university sites. The project's evaluation is occurring independently through JBS International, Inc. Five case studies are presented to provide an understanding of student participants' experiences within Project Career. Each case study includes background on the student, engagement with technology, vocational supports, and interactions with his/her respective TEC. A qualitative analysis from the student's case notes is provided within each case study, along with a discussion of the overall qualitative analysis. Across all five students, the theme Positive Outcomes was mentioned most often in the case notes. Of all the different types of challenges, Cognitive Challenges were mentioned most often during meetings with the TECs, followed by Psychological Challenges, Physical Challenges, Other Challenges, and Academic Challenges, respectively. Project Career is providing academic enrichment and career enhancement that may substantially improve the unsatisfactory employment outcomes that presently await students with TBI following graduation.
Ultrafast-based projection-reconstruction three-dimensional nuclear magnetic resonance spectroscopy.
Mishkovsky, Mor; Kupce, Eriks; Frydman, Lucio
2007-07-21
Recent years have witnessed increased efforts toward the accelerated acquisition of multidimensional nuclear magnetic resonance (nD NMR) spectra. Among the methods proposed to speed up these NMR experiments is "projection reconstruction," a scheme based on the acquisition of a reduced number of two-dimensional (2D) NMR data sets constituting cross sections of the nD time domain being sought. Another proposition involves "ultrafast" spectroscopy, capable of completing nD NMR acquisitions within a single scan. Potential limitations of these approaches include the need for a relatively slow 2D-type serial data collection procedure in the former case, and a need for at least n high-performance, linearly independent gradients and a sufficiently high sensitivity in the latter. The present study introduces a new scheme that addresses these limitations by combining the basic features of the projection reconstruction and the ultrafast approaches into a single, unified nD NMR experiment. In the resulting method, each member of the series of 2D cross sections required by projection reconstruction to deliver the sought nD NMR spectrum is acquired within a single scan with the aid of the 2D ultrafast protocol. Full nD NMR spectra can thus become available by backprojecting a small number of 2D sets, collected using a minimum number of scans. Principles, opportunities, and limitations of the resulting approach, together with demonstrations of its practical advantages, are discussed and illustrated here with a series of three-dimensional homo- and heteronuclear NMR correlation experiments.
Definitions Are Important: The Case of Linear Algebra
ERIC Educational Resources Information Center
Berman, Abraham; Shvartsman, Ludmila
2016-01-01
In this paper we describe an experiment in a linear algebra course. The aim of the experiment was to promote the students' understanding of the studied concepts focusing on their definitions. It seems to be a given that students should understand concepts' definitions before working substantially with them. Unfortunately, in many cases they do…
Sharmin, Sifat; Glass, Kathryn; Viennet, Elvina; Harley, David
2018-04-01
Determining the relation between climate and dengue incidence is challenging due to under-reporting of disease and consequent biased incidence estimates. Non-linear associations between climate and incidence compound this. Here, we introduce a modelling framework to estimate dengue incidence from passive surveillance data while incorporating non-linear climate effects. We estimated the true number of cases per month using a Bayesian generalised linear model, developed in stages to adjust for under-reporting. A semi-parametric thin-plate spline approach was used to quantify non-linear climate effects. The approach was applied to data collected from the national dengue surveillance system of Bangladesh. The model estimated that only 2.8% (95% credible interval 2.7-2.8) of all cases in the capital Dhaka were reported through passive case reporting. The optimal mean monthly temperature for dengue transmission is 29℃ and average monthly rainfall above 15 mm decreases transmission. Our approach provides an estimate of true incidence and an understanding of the effects of temperature and rainfall on dengue transmission in Dhaka, Bangladesh.
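As a rough illustration of the kind of model described above (and not the authors' actual staged Bayesian under-reporting model), the sketch below fits a negative-binomial GLM with spline terms for temperature and rainfall to synthetic monthly counts. The B-splines stand in for the paper's thin-plate splines, and all variable names and data are invented.

# Hedged sketch: a frequentist stand-in for the paper's Bayesian model.
# B-splines replace the thin-plate splines; the under-reporting stages are omitted.
# Column names (cases, temp, rain) are illustrative, not the authors' variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120  # e.g. 10 years of monthly counts
temp = rng.uniform(18, 34, n)          # mean monthly temperature (deg C)
rain = rng.uniform(0, 400, n)          # monthly rainfall (mm)
# Synthetic counts peaking near 29 deg C, suppressed by heavy rain
mu = np.exp(2 + 0.8 * np.exp(-0.5 * ((temp - 29) / 3) ** 2) - 0.002 * rain)
df = pd.DataFrame({"cases": rng.poisson(mu), "temp": temp, "rain": rain})

# Negative-binomial GLM with spline terms for the climate covariates
model = smf.glm("cases ~ bs(temp, df=4) + bs(rain, df=4)",
                data=df, family=sm.families.NegativeBinomial()).fit()
print(model.summary())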
Asymptotic Linearity of Optimal Control Modification Adaptive Law with Analytical Stability Margins
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
Optimal control modification has been developed to improve the robustness of model-reference adaptive control. For systems with linear matched uncertainty, the optimal control modification adaptive law can be shown by a singular perturbation argument to possess an outer solution that exhibits a linear asymptotic property. Analytical expressions of phase and time delay margins for the outer solution can be obtained. Using the gradient projection operator, a free design parameter of the adaptive law can be selected to satisfy stability margins.
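The gradient projection operator mentioned above keeps the adaptive parameters inside a prescribed bounded set. The sketch below shows a generic norm-ball projection applied to an adaptive-parameter update; it illustrates only the projection idea, not Nguyen's optimal control modification law, and the update vector is a placeholder.

# Hedged sketch: a generic norm-ball projection operator of the kind used to
# bound adaptive parameters; not the paper's exact adaptive law.
import numpy as np

def project_update(theta, theta_dot, bound, dt):
    """Integrate an adaptive-parameter update while keeping ||theta|| <= bound."""
    theta_new = theta + dt * theta_dot
    norm = np.linalg.norm(theta_new)
    if norm > bound:
        theta_new *= bound / norm   # project back onto the ball of radius `bound`
    return theta_new

theta = np.zeros(3)
for _ in range(100):
    raw_update = np.array([0.5, -0.2, 0.1])   # placeholder gradient-type update
    theta = project_update(theta, raw_update, bound=1.0, dt=0.01)
print(theta)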
A Learning Combination: Coaching with CLASS and the Project Approach
ERIC Educational Resources Information Center
Vartuli, Sue; Bolz, Carol; Wilson, Catherine
2014-01-01
The focus of this ongoing research is the effectiveness of coaching in improving the quality of teacher-child instructional interactions in Head Start classrooms. This study examines the relationship between two measures: Classroom Assessment Scoring System (CLASS) and a Project Approach Fidelity form developed by the authors. Linear regressions…
Predictors of Adolescent Breakfast Consumption: Longitudinal Findings from Project EAT
ERIC Educational Resources Information Center
Bruening, Meg; Larson, Nicole; Story, Mary; Neumark-Sztainer, Dianne; Hannan, Peter
2011-01-01
Objective: To identify predictors of breakfast consumption among adolescents. Methods: Five-year longitudinal study Project EAT (Eating Among Teens). Baseline surveys were completed in Minneapolis-St. Paul schools and by mail at follow-up by youth (n = 800) transitioning from middle to high school. Linear regression models examined associations…
DOT National Transportation Integrated Search
2016-06-01
The purpose of this project is to study the optimal scheduling of work zones so that they have minimum negative impact (e.g., travel delay, gas consumption, accidents, etc.) on transport service vehicle flows. In this project, a mixed integer linear ...
We project the change in ozone-related mortality burden attributable to changes in climate between a historical (1995-2005) and near-future (2025-2035) time period while incorporating a non-linear and synergistic effect of ozone and temperature on mortality. We simulate air quali...
A case study of autonomy and motivation in a student-led game development project
NASA Astrophysics Data System (ADS)
Prigmore, M.; Taylor, R.; De Luca, D.
2016-07-01
This paper presents the findings of an exploratory case study into the relationship between student autonomy and motivation in project based learning, using Self-Determination Theory (SDT) to frame the investigation. The case study explores how different forms of motivation affect the students' response to challenges and their intention to complete the project. Earlier studies have made little explicit use of theoretical perspectives on student autonomy and motivation, a weakness this study attempts to address. As an exploratory case study seeking to evaluate the suitability of a particular theoretical framework, we chose a small case: three students on a one-term computer games development project. Given the small scale, the approach is necessarily qualitative, drawing on project documentation and one-to-one interviews with the students. Our conclusion is that the concepts of SDT provide a useful framework for analysing students' motivations to undertake project work, and its predictions can offer useful guidance on how to initiate and supervise such projects.
Direct Retrieval of Exterior Orientation Parameters Using A 2-D Projective Transformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seedahmed, Gamal H.
2006-09-01
Direct solutions are very attractive because they obviate the need for initial approximations associated with non-linear solutions. The Direct Linear Transformation (DLT) establishes itself as a method of choice for direct solutions in photogrammetry and other fields. The use of the DLT with coplanar object space points leads to a rank deficient model. This rank deficient model leaves the DLT defined up to a 2-D projective transformation, which makes the direct retrieval of the exterior orientation parameters (EOPs) a non-trivial task. This paper presents a novel direct algorithm to retrieve the EOPs from the 2-D projective transformation. It is based on a direct relationship between the 2-D projective transformation and the collinearity model using homogeneous coordinates representation. This representation offers a direct matrix correspondence between the 2-D projective transformation parameters and the collinearity model parameters. This correspondence lends itself to a direct matrix factorization to retrieve the EOPs. An important step in the proposed algorithm is a normalization process that provides the actual link between the 2-D projective transformation and the collinearity model. This paper explains the theoretical basis of the proposed algorithm as well as the necessary steps for its practical implementation. In addition, numerical examples are provided to demonstrate its validity.
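For background on how exterior orientation can be recovered from a plane-induced 2-D projective transformation, the following sketch shows the standard textbook (Zhang-style) decomposition of a homography into rotation and translation given a known camera matrix. It is offered only as an illustration of the idea and is not the paper's specific normalization or matrix factorization.

# Hedged sketch: textbook decomposition of a plane-induced homography
# H = s * K [r1 r2 t] into rotation R and translation t, given K.
import numpy as np

def eop_from_homography(H, K):
    """Recover R (3x3) and t (3,) from a homography of a planar object."""
    A = np.linalg.inv(K) @ H
    s = 1.0 / np.linalg.norm(A[:, 0])        # scale from the first column
    r1, r2, t = s * A[:, 0], s * A[:, 1], s * A[:, 2]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    U, _, Vt = np.linalg.svd(R)              # re-orthogonalize R
    return U @ Vt, t

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])   # invented camera matrix
H = K @ np.column_stack([np.eye(3)[:, 0], np.eye(3)[:, 1], [0.1, -0.2, 2.0]])
R, t = eop_from_homography(H, K)
print(R, t)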
NASA Astrophysics Data System (ADS)
Iwata, Makoto; Orihara, Hiroshi; Ishibashi, Yoshihiro
1997-04-01
The phase diagrams in the Landau-type thermodynamic potential including the linear-quadratic coupling between order parameters p and q, i.e., qp^2, which is applicable to the phase transition in benzil, phospholipid bilayers, and the isotropic-nematic phase transition in liquid crystals, are studied. It was found that the phase diagram in the extreme case has one tricritical point c1, one critical end point e1, and two triple points t1 and t2. The linear and nonlinear dielectric constants in the potential are discussed for the case in which the order parameter p is the polarization.
The performance of projective standardization for digital subtraction radiography.
Mol, André; Dunn, Stanley M
2003-09-01
We sought to test the performance and robustness of projective standardization in preserving invariant properties of subtraction images in the presence of irreversible projection errors. Study design: Twenty bone chips (1-10 mg each) were placed on dentate dry mandibles. Follow-up images were obtained without the bone chips, and irreversible projection errors of up to 6 degrees were introduced. Digitized image intensities were normalized, and follow-up images were geometrically reconstructed by 2 operators using anatomical and fiduciary landmarks. Subtraction images were analyzed by 3 observers. Regression analysis revealed a linear relationship between radiographic estimates of mineral loss and actual mineral loss (R(2) = 0.99; P <.05). The effect of projection error was not significant (general linear model [GLM]: P >.05). There was no difference between the radiographic estimates from images standardized with anatomical landmarks and those standardized with fiduciary landmarks (Wilcoxon signed rank test: P >.05). Operator variability was low for image analysis alone (R(2) = 0.99; P <.05), as well as for the entire procedure (R(2) = 0.98; P <.05). The predicted detection limit was smaller than 1 mg. Subtraction images registered by projective standardization yield estimates of osseous change that are invariant to irreversible projection errors of up to 6 degrees. Within these limits, operator precision is high and anatomical landmarks can be used to establish correspondence.
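The sketch below illustrates the generic idea behind projective standardization: estimate a homography from landmark correspondences, warp the follow-up radiograph onto the baseline geometry, and form a subtraction image. It uses OpenCV on placeholder images with invented landmark coordinates, and omits the intensity normalization and analysis steps of the study.

# Hedged sketch: landmark-based projective registration followed by subtraction.
# Generic illustration only; not the authors' exact protocol.
import numpy as np
import cv2

baseline = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # placeholder images
followup = np.roll(baseline, (3, 2), axis=(0, 1))

# Landmark coordinates (x, y) in each image -- anatomical or fiduciary points
pts_follow = np.float32([[30, 40], [220, 35], [210, 230], [25, 215]])
pts_base   = np.float32([[32, 43], [222, 38], [212, 233], [27, 218]])

H, _ = cv2.findHomography(pts_follow, pts_base)                    # projective fit
registered = cv2.warpPerspective(followup, H, baseline.shape[::-1])
subtraction = baseline.astype(np.int16) - registered.astype(np.int16)
print(subtraction.min(), subtraction.max())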
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-08
... Section 1605 of ARRA. This action permits the purchase of the selected vertical linear motion mixers not...: Environmental Protection Agency (EPA). ACTION: Notice. SUMMARY: The Regional Administrator of EPA Region 6 is... purchase of ten (10) vertical linear motion mixers for the Clean Water State Revolving Fund (CWSRF) Hornsby...
DEGAS: Dynamic Exascale Global Address Space Programming Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demmel, James
The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. The Berkeley part of the project concentrated on communication-optimal code generation to optimize speed and energy efficiency by reducing data movement. Our work developed communication lower bounds, and/or communication avoiding algorithms (that either meet the lower bound, or do much less communication than their conventional counterparts) for a variety of algorithms, including linear algebra, machine learning and genomics.
Evaluating the completeness of the national ALS registry, United States.
Kaye, Wendy E; Wagner, Laurie; Wu, Ruoming; Mehta, Paul
2018-02-01
Our objective was to evaluate the completeness of the United States National ALS Registry (Registry). We compared persons with ALS who were passively identified by the Registry with those actively identified in the State and Metropolitan Area ALS Surveillance project. Cases in the two projects were matched using a combination of identifiers, including partial social security number, name, date of birth, and sex. The distributions of cases from the two projects that matched/did not match were compared, and Chi-square tests were conducted to determine statistical significance. There were 5883 ALS cases identified by the surveillance project. Of these, 1116 died before the Registry started, leaving 4767 cases. We matched 2720 cases from the surveillance project to those in the Registry. The cases identified by the surveillance project that did not match cases in the Registry were more likely to be non-white, Hispanic, less than 65 years of age, and from western states. The methods used by the Registry to identify ALS cases, i.e. national administrative data and self-registration, worked well but missed cases. These findings suggest that developing strategies to identify and promote the Registry to those who were more likely to be missing, e.g. non-white and Hispanic persons, could be beneficial to improving the completeness of the Registry.
A multiscale filter for noise reduction of low-dose cone beam projections
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Farr, Jonathan B.
2015-08-01
The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) fluence at a given detector. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x^2 / (2 σ_f^2)), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression of σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of the residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of the residual noise, the optimal σ_f^2 is shown to be proportional to the noiseless fluence and modulated by the local structure strength, expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction, and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After applying the filter to the Catphan phantom projections scanned with a pulse time of 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was on average about 64% higher than that scanned with 16 ms. For the simulated head-and-neck patient projections with a pulse time of 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.
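A toy version of such scale-adaptive Gaussian smoothing is sketched below: the filter scale grows where a crude local structure measure is weak relative to the signal. This only mimics the spirit of the multiscale filter; the paper's analytical optimal sigma is not implemented, and the structure measure, sigma ladder and thresholds are all arbitrary choices.

# Hedged sketch: simplified per-pixel scale selection for Gaussian smoothing
# of a noisy projection (not the paper's derived filter).
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def adaptive_smooth(img, sigmas=(0.5, 1.0, 2.0, 4.0)):
    structure = np.abs(laplace(gaussian_filter(img, 1.0)))   # crude structure strength
    signal = gaussian_filter(img, 2.0) + 1e-6
    ratio = structure / signal                                # strong edges -> small sigma
    # Map the ratio to an index into the sigma ladder (quartile thresholds)
    idx = np.digitize(ratio, np.quantile(ratio, [0.25, 0.5, 0.75]))
    out = np.zeros_like(img, dtype=float)
    blurred = [gaussian_filter(img.astype(float), s) for s in sigmas[::-1]]
    for k, b in enumerate(blurred):
        out[idx == k] = b[idx == k]
    return out

noisy = np.random.poisson(100, size=(64, 64)).astype(float)   # placeholder projection
print(adaptive_smooth(noisy).shape)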
ON THE PROBLEM OF PARTICLE GROUPINGS IN A TRAVELING WAVE LINEAR ACCELERATOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhileyko, G.I.
1957-01-01
A linear accelerator with traveling waves may be used for the production of especially short electron momenta, although in many cases the grouping capacity of the accelerator is not sufficient. Theoretically, the case is derived in which grouping of the electrons takes place in the accelerator itself. (With 3 illustrations and 1 Slavic reference.) (TCO)
Combining Adaptive Hypermedia with Project and Case-Based Learning
ERIC Educational Resources Information Center
Papanikolaou, Kyparisia; Grigoriadou, Maria
2009-01-01
In this article we investigate the design of educational hypermedia based on constructivist learning theories. According to the principles of project and case-based learning we present the design rational of an Adaptive Educational Hypermedia system prototype named MyProject; learners working with MyProject undertake a project and the system…
Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine
2002-01-01
The design of a linear parameter varying (LPV) controller for an aircraft under actuator failure is presented. The controller synthesis for actuator failure cases is formulated as linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of an LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block that represents estimated parameter uncertainties. The fault parameter is estimated using the two-stage Kalman filter. The simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.
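LPV synthesis of this kind reduces to solving LMIs. The minimal feasibility problem below (a Lyapunov inequality for a fixed, stable example matrix, solved with cvxpy) is included only to show what an LMI optimization looks like in code; the failure-parameter-dependent LMIs of the paper are far more elaborate, and the system matrix here is invented.

# Hedged sketch: a minimal Lyapunov-stability LMI, find P > 0 with A'P + PA < 0.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # a stable example system (eigenvalues -1, -2)
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
constraints = [P >> np.eye(n), A.T @ P + P @ A << -1e-3 * np.eye(n)]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print(prob.status, np.round(P.value, 3))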
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suhara, Tadahiro; Kanada-En'yo, Yoshiko
We investigate the linear-chain structures in highly excited states of ¹⁴C using a generalized molecular-orbital model, by which we incorporate an asymmetric configuration of three α clusters in the linear-chain states. By applying this model to the ¹⁴C system, we study the ¹⁰Be+α correlation in the linear-chain state of ¹⁴C. To clarify the origin of the ¹⁰Be+α correlation in the ¹⁴C linear-chain state, we analyze linear 3α and 3α + n systems in a similar way. We find that a linear 3α system prefers the asymmetric 2α + α configuration, whose origin is the many-body correlation incorporated by the parity projection. This configuration causes an asymmetric mean field for two valence neutrons, which induces the concentration of valence neutron wave functions around the correlating 2α. A linear-chain structure of ¹⁶C is also discussed.
NASA Astrophysics Data System (ADS)
Dounas-Frazer, Dimitri R.; Stanley, Jacob T.; Lewandowski, H. J.
2017-12-01
We investigate students' sense of ownership of multiweek final projects in an upper-division optics lab course. Using a multiple case study approach, we describe three student projects in detail. Within-case analyses focused on identifying key issues in each project, and constructing chronological descriptions of those events. Cross-case analysis focused on identifying emergent themes with respect to five dimensions of project ownership: student agency, instructor mentorship, peer collaboration, interest and value, and affective responses. Our within- and cross-case analyses yielded three major findings. First, coupling division of labor with collective brainstorming can help balance student agency, instructor mentorship, and peer collaboration. Second, students' interest in the project and perceptions of its value can increase over time; initial student interest in the project topic is not a necessary condition for student ownership of the project. Third, student ownership is characterized by a wide range of emotions that fluctuate as students alternate between extended periods of struggle and moments of success while working on their projects. These findings not only extend the literature on student ownership into a new educational domain—namely, upper-division physics labs—they also have concrete implications for the design of experimental physics projects in courses for which student ownership is a desired learning outcome. We describe the course and projects in sufficient detail that others can adapt our results to their particular contexts.
NASA Astrophysics Data System (ADS)
Krysa, Zbigniew; Pactwa, Katarzyna; Wozniak, Justyna; Dudek, Michal
2017-12-01
Geological variability is one of the main factors influencing the viability of mining investment projects and the technical risk of geological projects. To date, analyses of the economic viability of new extraction fields have been performed for the KGHM Polska Miedź S.A. underground copper mine at the Fore-Sudetic Monocline under the assumption of a constant, averaged content of useful elements. The research presented in this article aims to verify the value of production from copper and silver ore, for the same economic background, using variable cash flows resulting from the local variability of the useful elements. Furthermore, the economic model of the ore is examined for significant differences between the model value estimated using a linear correlation between useful-element content and mine-face height, and an approach in which the correlation of the model parameters is based on the copula best matching the information capacity criterion. The use of a copula allows the simulation to account for multivariable dependencies simultaneously, thereby reflecting the dependency structure better than a linear correlation does. Calculation results of the economic model used for deposit value estimation indicate that the copper-silver correlation estimated with a copula generates a higher variation of the possible project value than modelling the correlation with a linear correlation. The average deposit value remains unchanged.
The effect of dropout on the efficiency of D-optimal designs of linear mixed models.
Ortega-Azurduy, S A; Tan, F E S; Berger, M P F
2008-06-30
Dropout is often encountered in longitudinal data. Optimal designs will usually not remain optimal in the presence of dropout. In this paper, we study D-optimal designs for linear mixed models where dropout is encountered. Moreover, we estimate the efficiency loss in cases where a D-optimal design for complete data is chosen instead of that for data with dropout. Two types of monotonically decreasing response probability functions are investigated to describe dropout. Our results show that the location of D-optimal design points for the dropout case will shift with respect to that for the complete and uncorrelated data case. Owing to this shift, the information collected at the D-optimal design points for the complete data case does not correspond to the smallest variance. We show that the size of the displacement of the time points depends on the linear mixed model and that the efficiency loss is moderate.
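A rough numerical illustration of the D-criterion under monotone dropout is sketched below for a random intercept-and-slope model: each candidate set of time points is scored by the log-determinant of the expected fixed-effects information matrix, with visit patterns weighted by a simple retention probability. The covariance values, dropout rate and candidate grid are invented, and the code is not the authors' optimization procedure.

# Hedged sketch: scoring candidate time-point designs by a D-criterion with dropout.
import numpy as np
from itertools import combinations

D = np.array([[1.0, 0.1], [0.1, 0.2]])    # random intercept/slope covariance (invented)
sigma2 = 1.0                              # residual variance (invented)

def info(times, p_drop=0.15):
    """Expected fixed-effects information, weighting each monotone dropout pattern."""
    M = np.zeros((2, 2))
    for k in range(1, len(times) + 1):
        t = np.asarray(times[:k])
        X = np.column_stack([np.ones(k), t])            # intercept and slope
        V = X @ D @ X.T + sigma2 * np.eye(k)            # marginal covariance
        weight = (1 - p_drop) ** (k - 1) * (p_drop if k < len(times) else 1.0)
        M += weight * X.T @ np.linalg.inv(V) @ X
    return M

candidates = combinations(np.linspace(0, 1, 6), 3)      # choose 3 of 6 time points
best = max(candidates, key=lambda t: np.linalg.slogdet(info(list(t)))[1])
print("D-optimal 3-point design under dropout:", best)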
Prediction of optimum sorption isotherm: comparison of linear and non-linear method.
Kumar, K Vasanth; Sivanesan, S
2005-11-11
Equilibrium parameters for the sorption of Bismarck brown onto rice husk were estimated by linear least squares and by a trial-and-error non-linear method using the Freundlich, Langmuir and Redlich-Peterson isotherms. A comparison between the linear and non-linear methods of estimating the isotherm parameters is reported. The best-fitting isotherms were the Langmuir and Redlich-Peterson equations. The results show that the non-linear method can be a better way to obtain the parameters. The Redlich-Peterson isotherm is a special case of the Langmuir isotherm when the Redlich-Peterson constant g is unity.
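The methodological contrast the abstract draws can be reproduced on synthetic data as sketched below: the Langmuir parameters are estimated once from a linearized form by ordinary least squares and once by direct non-linear least squares. The data are simulated; the authors' Bismarck brown / rice husk measurements are not used.

# Hedged sketch: linearized vs. non-linear fitting of the Langmuir isotherm
# q = qm * K * C / (1 + K * C), on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qm, K):
    return qm * K * C / (1.0 + K * C)

rng = np.random.default_rng(1)
C = np.linspace(5, 200, 12)                                     # equilibrium concentration
q = langmuir(C, qm=25.0, K=0.05) * (1 + 0.03 * rng.standard_normal(C.size))

# Linearized form: C/q = C/qm + 1/(qm*K)  ->  ordinary least squares
slope, intercept = np.polyfit(C, C / q, 1)
qm_lin, K_lin = 1.0 / slope, slope / intercept

# Non-linear least squares on the original equation
(qm_nl, K_nl), _ = curve_fit(langmuir, C, q, p0=(20.0, 0.01))
print(f"linearized:  qm={qm_lin:.2f}, K={K_lin:.4f}")
print(f"non-linear:  qm={qm_nl:.2f}, K={K_nl:.4f}")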
Local bounds preserving stabilization for continuous Galerkin discretization of hyperbolic systems
NASA Astrophysics Data System (ADS)
Mabuza, Sibusiso; Shadid, John N.; Kuzmin, Dmitri
2018-05-01
The objective of this paper is to present a local bounds preserving stabilized finite element scheme for hyperbolic systems on unstructured meshes based on continuous Galerkin (CG) discretization in space. A CG semi-discrete scheme with low order artificial dissipation that satisfies the local extremum diminishing (LED) condition for systems is used to discretize a system of conservation equations in space. The low order artificial diffusion is based on approximate Riemann solvers for hyperbolic conservation laws. In this case we consider both Rusanov and Roe artificial diffusion operators. In the Rusanov case, two designs are considered, a nodal based diffusion operator and a local projection stabilization operator. The result is a discretization that is LED and has first order convergence behavior. To achieve high resolution, limited antidiffusion is added back to the semi-discrete form where the limiter is constructed from a linearity preserving local projection stabilization operator. The procedure follows the algebraic flux correction procedure usually used in flux corrected transport algorithms. To further deal with phase errors (or terracing) common in FCT type methods, high order background dissipation is added to the antidiffusive correction. The resulting stabilized semi-discrete scheme can be discretized in time using a wide variety of time integrators. Numerical examples involving nonlinear scalar Burgers equation, and several shock hydrodynamics simulations for the Euler system are considered to demonstrate the performance of the method. For time discretization, Crank-Nicolson scheme and backward Euler scheme are utilized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soederman, Tarja
The Environmental Impact Assessment (EIA) process concerning the route of a 400 kV power transmission line between Loviisa and Hikiae in southern Finland was reviewed in order to assess how biodiversity issues are treated and to provide suggestions on how to improve the effectiveness of treatment of biodiversity issues in impact assessment of linear development projects. The review covered the whole assessment process, including interviews of stakeholders, participation in the interest group meetings and review of all documents from the project. The baseline studies and assessment of direct impacts in the case study were detailed but the documentation, both the assessment programme and the assessment report, only gave a partial picture of the assessment process. All existing information, baseline survey and assessment methods should be addressed in the scoping phase in order to promote interaction between all stakeholders. In contrast to the assessment of the direct effects, which first emphasized impacts on the nationally important and protected flying squirrel but later expanded to deal with the assessment of impacts on ecologically important sites, the indirect and cumulative impacts of the power line were poorly addressed. The public was given the opportunity to become involved in the EIA process. However, they were more concerned with impacts on their properties and less so on biodiversity and species protection issues. This suggests that the public needs to become more informed about locally important features of biodiversity.
Hong, In-Seok; Kim, Yong-Hwan; Choi, Bong-Hyuk; Choi, Suk-Jin; Park, Bum-Sik; Jin, Hyun-Chang; Kim, Hye-Jin; Heo, Jeong-Il; Kim, Deok-Min; Jang, Ji-Ho
2016-02-01
The injector for the main driver linear accelerator of the Rare Isotope Science Project in Korea has been developed to allow heavy ions up to uranium to be delivered to the in-flight fragmentation system. The critical components of the injector are the superconducting electron cyclotron resonance (ECR) ion sources, the radio frequency quadrupole (RFQ), and matching systems for low- and medium-energy beams. We have built superconducting magnets for the ECR ion source, and a prototype with one segment of the RFQ structure, with the aim of developing a design that can satisfy our specifications, demonstrate stable operation, and provide results for comparison with the design simulations.
Why Teach? A Project-Ive Life-World Approach to Understanding What Teaching Means for Teachers
ERIC Educational Resources Information Center
Landrum, Brittany; Guilbeau, Catherine; Garza, Gilbert
2017-01-01
Previous literature has examined teachers' motivations to teach in terms of intrinsic and extrinsic motives, personality dimensions, and teacher burnout. These findings have been cast in the rubric of differences between teachers and non-teachers and the linear relations between these measures among teachers. Utilizing a phenomenological approach…
Need for Linear Revitalization - Gdynia Case
NASA Astrophysics Data System (ADS)
Sas-Bojarska, Aleksandra
2017-10-01
The aim of the article is to discuss the need for defining and implementing linear revitalization - a new approach to revitalization processes. The results of preliminary investigations indicate that this kind of revitalization can be an important tool supporting city management and planning, especially in cases of city fragmentation, which causes a lack of physical, social, economic and ecological cohesion. The problems that may occur in such situations could, in the author's opinion, be solved with the use of linear revitalization. Linear revitalization relates to various linear city structures that need renewal. The article presents the idea of this new approach, the character of specific actions related to degraded linear structures, a draft classification, as well as the potential benefits to the city structure that could be achieved through the implementation of linear revitalization. The theoretical deliberations are supplemented by the description and assessment of a case study from Gdynia in Poland. The Kwiatkowskiego Route in Gdynia, which plays an important role in city traffic as an external connection, creates a barrier in the city structure and causes many negative effects. The author presents specific problems related to this example, and ways to solve them and to connect the city structure. The main conclusion of the study is that the presented approach may, in the author's opinion, open the discussion on linear revitalization, which may become an important and effective tool of sustainable city development. It may help overcome physical barriers and minimise functional, economic, social, mental and environmental conflicts caused by city fragmentation.
Socio-demographic, ecological factors and dengue infection trends in Australia.
Akter, Rokeya; Naish, Suchithra; Hu, Wenbiao; Tong, Shilu
2017-01-01
Dengue has been a major public health concern in Australia. This study explored the spatio-temporal trends of dengue and potential socio-demographic and ecological determinants in Australia. Data on dengue cases, socio-demographics, climate and land use types for the period January 1999 to December 2010 were collected from the Australian National Notifiable Diseases Surveillance System, the Australian Bureau of Statistics, the Australian Bureau of Meteorology, and the Australian Bureau of Agricultural and Resource Economics and Sciences, respectively. Descriptive and linear regression analyses were performed to observe the spatio-temporal trends of dengue and the socio-demographic and ecological factors in Australia. A total of 5,853 dengue cases (both locally and overseas acquired) were recorded across Australia between January 1999 and December 2010. Most of the cases (53.0%) were reported from Queensland, followed by New South Wales (16.5%). The dengue outbreak was highest (54.2%) during 2008-2010. The highest percentages of overseas arrivals (29.9%), households with rainwater tanks (33.9%), Indigenous population (27.2%), separate houses (26.5%), terrace house types (26.9%) and economically advantaged people (42.8%) were also observed during 2008-2010. Regression analyses demonstrate an increasing trend in dengue incidence and in potential socio-ecological factors such as overseas arrivals, the number of households with rainwater tanks, housing types and land use types (e.g. intensive uses and production from dryland agriculture). Spatial variation of socio-demographic factors was also observed in this study. A significant increase in temperature is also projected across Australia in the near future. The projected temperature increase, together with the increasing socio-ecological trends, may pose a future threat to the local transmission of dengue in other parts of Australia if Aedes mosquitoes become established. Therefore, upgraded mosquito and disease surveillance at ports should be in place to reduce the chance of mosquitoes and dengue cases being imported anywhere in Australia.
ERIC Educational Resources Information Center
Hartley, William L.
This curriculum project was designed primarily to be incorporated into a larger world history unit on the Holocaust and World War II. The project can be adapted for a lesson on 'situational ethics' for use in a philosophy class. The lesson requires students to examine a historical case and to write and discuss that particular case. The project's…
18 CFR 4.106 - Standard terms and conditions of case-specific exemption from licensing.
Code of Federal Regulations, 2013 CFR
2013-04-01
... LICENSES, PERMITS, EXEMPTIONS, AND DETERMINATION OF PROJECT COSTS Exemption of Small Hydroelectric Power Projects of 5 Megawatts or Less § 4.106 Standard terms and conditions of case-specific exemption from licensing. Any case-specific exemption from licensing granted for a small hydroelectric power project is...
18 CFR 4.106 - Standard terms and conditions of case-specific exemption from licensing.
Code of Federal Regulations, 2012 CFR
2012-04-01
... LICENSES, PERMITS, EXEMPTIONS, AND DETERMINATION OF PROJECT COSTS Exemption of Small Hydroelectric Power Projects of 5 Megawatts or Less § 4.106 Standard terms and conditions of case-specific exemption from licensing. Any case-specific exemption from licensing granted for a small hydroelectric power project is...
18 CFR 4.106 - Standard terms and conditions of case-specific exemption from licensing.
Code of Federal Regulations, 2014 CFR
2014-04-01
... LICENSES, PERMITS, EXEMPTIONS, AND DETERMINATION OF PROJECT COSTS Exemption of Small Hydroelectric Power Projects of 5 Megawatts or Less § 4.106 Standard terms and conditions of case-specific exemption from licensing. Any case-specific exemption from licensing granted for a small hydroelectric power project is...
Teaching Case: Enterprise Architecture Specification Case Study
ERIC Educational Resources Information Center
Steenkamp, Annette Lerine; Alawdah, Amal; Almasri, Osama; Gai, Keke; Khattab, Nidal; Swaby, Carval; Abaas, Ramy
2013-01-01
A graduate course in enterprise architecture had a team project component in which a real-world business case, provided by an industry sponsor, formed the basis of the project charter and the architecture statement of work. The paper aims to share the team project experience on developing the architecture specifications based on the business case…
To Tip or Not to Tip: The Case of the Congo Basin Rainforest Realm
NASA Astrophysics Data System (ADS)
Pietsch, S.; Bednar, J. E.; Fath, B. D.; Winter, P. A.
2017-12-01
The future response of the Congo basin rainforest, the second largest tropical carbon reservoir, to climate change is still under debate. Different climate projections exist, indicating both increases and decreases in rainfall as well as different changes in rainfall patterns. In this study we assess the full range of climate change possibilities to define the climatic thresholds of Congo basin rainforest stability and to determine the limiting conditions for rainforest persistence. We use field data from 199 research plots in the western Congo basin to calibrate and validate a complex BioGeoChemistry model (BGC-MAN) and assess model performance against an array of possible future climates. Next, we analyze the reasons for the occurrence of tipping points and their spatial and temporal probability of occurrence, present the effects of hysteresis, and derive probabilistic spatial-temporal resilience landscapes for the region. Additionally, we analyze attractors of forest growth dynamics, assess common linear measures for early warning signals of sudden shifts in system dynamics for their robustness in the context of the Congo basin case, and introduce the correlation integral as a nonlinear measure of risk assessment.
NASA Technical Reports Server (NTRS)
Pfister, Leonhard; Chan, Kwoklong R.; Gary, Bruce; Singh, Hanwant B. (Technical Monitor)
1995-01-01
The advent of high altitude aircraft measurements in the stratosphere over tropical convective systems has made it possible to observe the mesoscale disturbances in the temperature field that these systems excite. Such measurements show that these disturbances have horizontal scales comparable to those of the underlying anvils (about 50-100 km), with peak-to-peak variations of theta surfaces of about 300-400 meters. Moreover, correlative wind measurements from the tropical phase of the Stratosphere-Troposphere Exchange Project (STEP) clearly show that these disturbances are gravity waves. We present two case studies of anvil-scale gravity waves over convective systems. Using steady and time-dependent linear models of gravity wave propagation in the stratosphere, we show: (1) that the underlying convective systems are indeed the source of the observed phenomena; and (2) that their generating mechanism can be crudely represented as flow over a time-dependent mountain. We then discuss the effects that gravity waves of the observed amplitudes have on the circulation of the middle atmosphere, particularly the quasi-biennial and semiannual oscillations.
Emergent constraints on climate-carbon cycle feedbacks in the CMIP5 Earth system models
NASA Astrophysics Data System (ADS)
Wenzel, Sabrina; Cox, Peter M.; Eyring, Veronika; Friedlingstein, Pierre
2014-05-01
An emergent linear relationship between the long-term sensitivity of tropical land carbon storage to climate warming (γLT) and the short-term sensitivity of atmospheric carbon dioxide (CO2) to interannual temperature variability (γIAV) has previously been identified by Cox et al. (2013) across an ensemble of Earth system models (ESMs) participating in the Coupled Climate-Carbon Cycle Model Intercomparison Project (C4MIP). Here we examine whether such a constraint also holds for a new set of eight ESMs participating in Phase 5 of the Coupled Model Intercomparison Project. A wide spread in tropical land carbon storage is found for the quadrupling of atmospheric CO2, which is of the order of 252 ± 112 GtC when carbon-climate feedbacks are enabled. Correspondingly, the spread in γLT is wide (-49 ± 40 GtC/K) and thus remains one of the key uncertainties in climate projections. A tight correlation is found between the long-term sensitivity of tropical land carbon and the short-term sensitivity of atmospheric CO2 (γLT versus γIAV), which enables the projections to be constrained with observations. The observed short-term sensitivity of CO2 (-4.4 ± 0.9 GtC/yr/K) sharpens the range of γLT to -44 ± 14 GtC/K, which overlaps with the probability density function derived from the C4MIP models (-53 ± 17 GtC/K) by Cox et al. (2013), even though the lines relating γLT and γIAV differ in the two cases. Emergent constraints of this type provide a means to focus ESM evaluation against observations on the metrics most relevant to projections of future climate change.
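The emergent-constraint calculation itself is simple: regress the long-term sensitivity on the short-term sensitivity across the model ensemble, then read off the constrained value (and its spread) at the observed short-term sensitivity. The sketch below uses invented ensemble values together with the observed constraint quoted in the abstract, purely to illustrate the procedure; it does not reproduce the CMIP5 numbers.

# Hedged sketch: generic emergent-constraint regression across an ensemble.
import numpy as np

# Invented model-ensemble sensitivities (not the CMIP5 values)
gamma_iav = np.array([-2.0, -3.5, -4.0, -5.0, -5.5, -6.5, -7.0, -8.0])  # GtC/yr/K
gamma_lt  = np.array([-15., -30., -35., -45., -50., -65., -75., -90.])  # GtC/K

slope, intercept = np.polyfit(gamma_iav, gamma_lt, 1)
obs_iav, obs_iav_sigma = -4.4, 0.9               # observed short-term sensitivity (abstract)
constrained_lt = slope * obs_iav + intercept
constrained_sigma = abs(slope) * obs_iav_sigma   # observational uncertainty propagated linearly
print(f"constrained gamma_LT ~ {constrained_lt:.1f} +/- {constrained_sigma:.1f} GtC/K "
      f"(unconstrained ensemble mean {gamma_lt.mean():.1f} GtC/K)")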
Ghost Dark Energy with Non-Linear Interaction Term
NASA Astrophysics Data System (ADS)
Ebrahimi, E.
2016-06-01
Here we investigate ghost dark energy (GDE) in the presence of a non-linear interaction term between dark matter and dark energy. To this end we take into account a general form for the interaction term. We then discuss different features of three choices of the non-linear interacting GDE. In all cases we obtain the equation of state parameter, w_D = p/ρ, the deceleration parameter, and the evolution equation of the dark energy density parameter (Ω_D). We find that in one case w_D crosses the phantom line (w_D < -1), whereas in the two other classes w_D cannot cross the phantom divide. The coincidence problem can be solved completely in these models, and there is good agreement between the models and the observational values of w_D and q. We study the squared sound speed v_s^2 and find that for one case of the non-linear interaction term v_s^2 can achieve positive values at late times of the evolution.
ERIC Educational Resources Information Center
Wawro, Megan; Rasmussen, Chris; Zandieh, Michelle; Sweeney, George Franklin; Larson, Christine
2012-01-01
In this paper we present an innovative instructional sequence for an introductory linear algebra course that supports students' reinvention of the concepts of span, linear dependence, and linear independence. Referred to as the Magic Carpet Ride sequence, the problems begin with an imaginary scenario that allows students to build rich imagery and…
State-Dependent Pseudo-Linear Filter for Spacecraft Attitude and Rate Estimation
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, Richard R.
2001-01-01
This paper presents the development and performance of a special algorithm for estimating the attitude and angular rate of a spacecraft. The algorithm is a pseudo-linear Kalman filter, which is an ordinary linear Kalman filter that operates on a linear model whose matrices are current state estimate dependent. The nonlinear rotational dynamics equation of the spacecraft is presented in the state space as a state-dependent linear system. Two types of measurements are considered. One type is a measurement of the quaternion of rotation, which is obtained from a newly introduced star tracker based apparatus. The other type of measurement is that of vectors, which permits the use of a variety of vector measuring sensors like sun sensors and magnetometers. While quaternion measurements are related linearly to the state vector, vector measurements constitute a nonlinear function of the state vector. Therefore, in this paper, a state-dependent linear measurement equation is developed for the vector measurement case. The state-dependent pseudo linear filter is applied to simulated spacecraft rotations and adequate estimates of the spacecraft attitude and rate are obtained for the case of quaternion measurements as well as of vector measurements.
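The following sketch shows a generic pseudo-linear Kalman filter of the kind described: an ordinary linear filter whose transition matrix is re-evaluated at the current state estimate at every step. The two-state toy model is invented and is not the spacecraft attitude/rate formulation of the paper.

# Hedged sketch: a pseudo-linear Kalman filter on an invented state-dependent model.
import numpy as np

def A_of(x, dt=0.1):
    # State-dependent transition matrix (mild nonlinearity in the coupling term)
    return np.array([[1.0, dt], [-dt * (1.0 + 0.1 * x[0] ** 2), 0.95]])

H = np.array([[1.0, 0.0]])                 # measure the first state only
Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])

x_true = np.array([1.0, 0.0])
x_hat, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(2)
for _ in range(200):
    x_true = A_of(x_true) @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]), 1)
    A = A_of(x_hat)                        # evaluate the model at the current estimate
    x_hat, P = A @ x_hat, A @ P @ A.T + Q  # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_hat = x_hat + (K @ (z - H @ x_hat)).ravel()
    P = (np.eye(2) - K @ H) @ P            # update
print(np.round(x_hat, 3), np.round(x_true, 3))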
High Precision Piezoelectric Linear Motors for Operations at Cryogenic Temperatures and Vacuum
NASA Technical Reports Server (NTRS)
Wong, D.; Carman, G.; Stam, M.; Bar-Cohen, Y.; Sen, A.; Henry, P.; Bearman, G.; Moacanin, J.
1995-01-01
The use of an electromechanical device for optically positioning a mirror system during the pre-project phase of the Pluto Fast Flyby mission was evaluated at JPL. The device under consideration was a piezoelectric driven linear motor functionally dependent upon a time varying electric field which induces displacements ranging from submicrons to millimeters with positioning accuracy within nanometers.
ERIC Educational Resources Information Center
Nordtveit, Bjorn Harald
2010-01-01
Development is often understood as a linear process of change towards Western modernity, a vision that is challenged by this paper, arguing that development efforts should rather be connected to the local stakeholders' sense of their own development. Further, the paper contends that Complexity Theory is more effective than a linear theory of…
ERIC Educational Resources Information Center
Findorff, Irene K.
This document summarizes the results of a project at Tulane University that was designed to adapt, test, and evaluate a computerized information and menu planning system utilizing linear programming techniques for use in school lunch food service operations. The objectives of the menu planning were to formulate menu items into a palatable,…
ERIC Educational Resources Information Center
McCluskey, James J.
1997-01-01
A study of 160 undergraduate journalism students trained to design projects (stacks) using HyperCard on Macintosh computers determined that right-brain dominant subjects outperformed left-brain and mixed-brain dominant subjects, whereas left-brain dominant subjects outperformed mixed-brain dominant subjects in several areas. Recommends future…
An Application of the Quadrature-Free Discontinuous Galerkin Method
NASA Technical Reports Server (NTRS)
Lockard, David P.; Atkins, Harold L.
2000-01-01
The process of generating a block-structured mesh with the smoothness required for high-accuracy schemes is still a time-consuming process often measured in weeks or months. Unstructured grids about complex geometries are more easily generated, and for this reason, methods using unstructured grids have gained favor for aerodynamic analyses. The discontinuous Galerkin (DG) method is a compact finite-element projection method that provides a practical framework for the development of a high-order method using unstructured grids. Higher-order accuracy is obtained by representing the solution as a high-degree polynomial whose time evolution is governed by a local Galerkin projection. The traditional implementation of the discontinuous Galerkin uses quadrature for the evaluation of the integral projections and is prohibitively expensive. Atkins and Shu introduced the quadrature-free formulation in which the integrals are evaluated a priori and exactly for a similarity element. The approach has been demonstrated to possess the accuracy required for acoustics even in cases where the grid is not smooth. Other issues such as boundary conditions and the treatment of non-linear fluxes have also been studied in earlier work. This paper describes the application of the quadrature-free discontinuous Galerkin method to a two-dimensional shear layer problem. First, a brief description of the method is given. Next, the problem is described and the solution is presented. Finally, the resources required to perform the calculations are given.
The Selection of Computed Tomography Scanning Schemes for Lengthy Symmetric Objects
NASA Astrophysics Data System (ADS)
Trinh, V. B.; Zhong, Y.; Osipov, S. P.
2017-04-01
The article describes the basic computed tomography scanning schemes for lengthy symmetric objects: continuous (discrete) rotation with discrete linear movement; continuous (discrete) rotation with discrete linear movement to acquire 2D projections; and continuous (discrete) linear movement with discrete rotation to acquire one-dimensional projections together with continuous (discrete) rotation to acquire 2D projections. A general method to calculate the scanning time is discussed in detail, from which a comparison principle for selecting a scanning scheme is derived. The comparison is possible because the input data are the same for all scanning schemes: the maximum energy of the X-ray radiation; the power of the X-ray source; the angle of the X-ray cone beam; the transverse dimension of a single detector; the specified resolution; and the maximum time needed to form one point of the original image, which determines the number of registered photons. The possibilities of the proposed method for comparing scanning schemes are demonstrated. The scanned object was a cylinder with a mass thickness of 4 g/cm², an effective atomic number of 15 and a length of 1300 mm. Scanning-time data are analyzed to draw conclusions about the efficiency of the scanning schemes; the productivity of all schemes is examined and the most effective one is selected.
Spinning Rocket Simulator Turntable Design
NASA Technical Reports Server (NTRS)
Miles, Robert W.
2001-01-01
Contained herein is the research and data acquired from the Turntable Design portion of the Spinning Rocket Simulator (SRS) project. The SRS Project studies and eliminates the effect of coning on thrust-propelled spacecraft. This design and construction of the turntable adds a structural support for the SRS model and two degrees of freedom. The two degrees of freedom, radial and circumferential, will help develop a simulated thrust force perpendicular to the plane of the spacecraft model while undergoing an unstable coning motion. The Turntable consists of a ten-foot linear track mounted to a sprocket and press-fit to a thrust bearing. A two-inch high column grounded by a Triangular Baseplate supports this bearing and houses the slip rings and pressurized, air-line swivel. The thrust bearing allows the entire system to rotate under the moment applied through the chain-driven sprocket producing a circumferential degree of freedom. The radial degree of freedom is given to the model through the helically threaded linear track. This track allows the Model Support and Counter Balance to simultaneously reposition according to the coning motion of the Model. Two design factors that hinder the linear track are bending and twist due to torsion. A Standard Aluminum "C" channel significantly reduces these two deflections. Safety considerations dictate the design of all the components involved in this project.
NASA Technical Reports Server (NTRS)
2006-01-01
[figure removed for brevity, see original site] Context image for PIA03667 Linear Clouds These clouds are located near the edge of the south polar region. The cloud tops are the puffy white features in the bottom half of the image. Image information: VIS instrument. Latitude -80.1N, Longitude 52.1E. 17 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Prieto-Barrios, M; Velasco-Tamariz, V; Tous-Romero, F; Burillo-Martinez, S; Zarco-Olivo, C; Rodriguez-Peralto, J L; Ortiz-Romero, P L
2018-03-01
A 65-year-old pluripathological woman attended our hospital with a cutaneous eruption of sudden appearance after vancomycin treatment. She presented targetoid lesions affecting approximately 25-30% of her body surface, large erosions with mucosal lesions and positive Nikolsky sign. Under the initial clinical suspicion of toxic epidermal necrolysis (TEN), and considering the recent literature of successful use of etanercept in these cases, she was treated with a single dose of this antitumour necrosis factor (anti-TNF) agent. Subsequently, the exanthema progression stopped and resolution of the lesions happened in a few days. Later on, histopathology revealed a subepidermal blister with dense neutrophilic infiltrate and linear deposits of immunoglobulin A (IgA) on the dermoepidermal junction, allowing us to establish the diagnosis of drug-induced linear IgA dermatosis mimicking TEN. Linear IgA dermatosis can have severe clinical manifestations, even mimicking TEN, and can have high mortality, especially in drug-induced cases. We have not found any other report of linear IgA dermatosis treated with etanercept in the English literature. Anti-TNF medications could represent useful therapeutic alternatives in this dermatosis. © 2017 British Association of Dermatologists.
Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...
2015-11-12
Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.
Lu, Hui; Yan, Fei; Wang, Wei; Wu, Laiwa; Ma, Weiping; Chen, Jing; Shen, Xin; Mei, Jian
2013-01-01
Tuberculosis (TB) in internal migrants is one of three threats for TB control in China. To address this threat, a project was launched in eight of the 19 districts of Shanghai in 2007 to provide transportation subsidies and living allowances for all migrant TB cases. This study aims to determine if this project contributed to improved TB control outcomes among migrants in urban Shanghai. This was a community intervention study. The data were derived from the TB Management Information System in three project districts and three non-project districts in Shanghai between 2006 and 2010. The impact of the project was estimated in a difference-in-difference (DID) analysis framework, and a multivariable binary logistic regression analysis. A total of 1872 pulmonary TB (PTB) cases in internal migrants were included in the study. The treatment success rate (TSR) for migrant smear-positive cases in project districts increased from 59.9% in 2006 to 87.6% in 2010 (P < 0.001). The crude DID improvement of TSR was 18.9%. There was an increased probability of TSR in the project group before and after the project intervention period (coefficient = 1.156, odds ratio = 3.178, 95% confidence interval: 1.305-7.736, P = 0.011). The study showed the project could improve treatment success in migrant PTB cases. This was a short-term programme using special financial subsidies for all migrant PTB cases. It is recommended that project funds be continuously invested by governments with particular focus on the more vulnerable PTB cases among migrants.
Linearization instability for generic gravity in AdS spacetime
NASA Astrophysics Data System (ADS)
Altas, Emel; Tekin, Bayram
2018-01-01
In general relativity, perturbation theory about a background solution fails if the background spacetime has a Killing symmetry and a compact spacelike Cauchy surface. This failure, dubbed linearization instability, shows itself as the non-integrability of the perturbative infinitesimal deformation to a finite deformation of the background. Namely, the linearized field equations have spurious solutions which cannot be obtained from the linearization of exact solutions. In practice, one can show the failure of the linear perturbation theory by showing that a certain quadratic (integral) constraint on the linearized solutions is not satisfied. For non-compact Cauchy surfaces the situation is different; for example, Minkowski space, having a non-compact Cauchy surface, is linearization stable. Here we study linearization instability in generic metric theories of gravity where Einstein's theory is modified with additional curvature terms. We show that, unlike the case of general relativity, for modified theories even in the non-compact Cauchy surface cases there are some theories which show linearization instability about their anti-de Sitter backgrounds. The recent D-dimensional critical and three-dimensional chiral gravity theories are two such examples. This observation sheds light on the paradoxical behavior of vanishing conserved charges (mass, angular momenta) for non-vacuum solutions, such as black holes, in these theories.
Implementing interorganizational cooperation in labour market reintegration: a case study.
Ståhl, Christian
2012-06-01
To bring people with complex medical, social and vocational needs back to the labour market, interorganizational cooperation is often needed. Yet, studies of processes and strategies for achieving sustainable interorganizational cooperation are sparse. The aim of this study was to analyse the implementation processes of Swedish legislation on financial coordination, with specific focus on different strategies for and perspectives on implementing interorganizational cooperation. A multiple-case study was used, where two local associations for financial coordination were studied in order to elucidate and compare the development of cooperative work in two settings. The material, collected during a 3-year period, consisted of documents, individual interviews with managers, and focus groups with officials. Two different implementation strategies were identified. In case 1, a linear strategy was used to implement cooperative projects, which led to difficulties in maintaining cooperative work forms due to a fragmented and time-limited implementation process. In case 2, an interactive strategy was used, where managers and politicians were continuously involved in developing a central cooperation team that became a central part of a developing structure for interorganizational cooperation. An interactive cooperation strategy with long-term joint financing was here shown to be successful in overcoming organizational barriers to cooperation. It is suggested that a strategy based on adaptation to local conditions, flexibility and constant evaluation is preferred for developing sustainable interorganizational cooperation when implementing policies or legislation affecting interorganizational relationships.
Systems Engineering Technical Leadership Development Program
2012-02-01
Topics included leading others in creative problem solving, complexity, and why projects fail. These topics were additionally supported by case studies (e.g., "Why Projects Fail" and "When Good Wasn't Good Enough"), a lecture on why systems fail, and a group project (AR2D2 RFP).
ILC TARGET WHEEL RIM FRAGMENT/GUARD PLATE IMPACT ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagler, L
2008-07-17
A positron source component is needed for the International Linear Collider Project. The leading design concept for this source is a rotating titanium alloy wheel whose spokes rotate through an intense localized magnetic field. The system is composed of an electric motor, flexible motor/drive-shaft coupling, stainless steel drive-shaft, two Plumber's Block tapered roller bearings, a titanium alloy target wheel, and electromagnet. Surrounding the target wheel and magnet is a steel frame with steel guarding plates intended to contain shrapnel in case of catastrophic wheel failure. Figure 1 is a layout of this system (guard plates not shown for clarity). This report documents the FEA analyses that were performed at LLNL to help determine, on a preliminary basis, the required guard plate thickness for three potential plate steels.
Oscillations of end loaded cantilever beams
NASA Astrophysics Data System (ADS)
Macho-Stadler, E.; Elejalde-García, M. J.; Llanos-Vázquez, R.
2015-09-01
This article presents several simple experiments based on changing transverse vibration frequencies in a cantilever beam, when acted on by an external attached mass load at the free end. By using a mechanical wave driver, available in introductory undergraduate laboratories, we provide various experimental results for end loaded cantilever beams that fit reasonably well into a linear equation. The behaviour of the cantilever beam’s weak-damping resonance response is studied for the case of metal resonance strips. As the mass load increases, a more pronounced decrease occurs in the fundamental frequency of beam vibration. It is important to note that cantilever construction is often used in architectural design and engineering construction projects but current analysis also predicts the influence of mass load on the sound generated by musical free reeds with boundary conditions similar to a cantilever beam.
Incidence of childhood linear scleroderma and systemic sclerosis in the UK and Ireland.
Herrick, Ariane L; Ennis, Holly; Bhushan, Monica; Silman, Alan J; Baildam, Eileen M
2010-02-01
Childhood scleroderma encompasses a rare, poorly understood spectrum of conditions. Our aim was to ascertain the incidence of childhood scleroderma in its different forms in the UK and Ireland, and to describe the age, sex, and ethnicity of the cases. The members of 5 specialist medical associations including pediatricians, dermatologists, and rheumatologists were asked to report all cases of abnormal skin thickening suspected to be localized (including linear) scleroderma or systemic sclerosis (SSc) in children <16 years of age first seen between July 2005 and July 2007. We received notification of 185 potential cases, and 94 valid cases were confirmed: 87 (93%) with localized scleroderma and 7 (7%) with SSc. This gave an incidence rate per million children per year of 3.4 (95% confidence interval [95% CI] 2.7-4.1) for localized scleroderma, including an incidence rate of 2.5 (95% CI 1.8-3.1) for linear scleroderma, and 0.27 (95% CI 0.1-0.5) for SSc. Of the 87 localized cases, 62 (71%) had linear disease. Of localized disease cases, 55 (63%) were female, 71 (82%) were classified as white British, and the patients' mean age when first seen in secondary care was 10.4 years. Of the 7 SSc cases, all were female, 6 (86%) were white British, and the mean age when first seen was 12.1 years. The median delay between onset and being first seen was 13.1 months for localized scleroderma and 7.2 months for SSc. These data provide additional estimates of the incidence of this rare disorder and its subforms.
Electrochemical growth of linear conducting crystals in microgravity
NASA Technical Reports Server (NTRS)
Cronise, Raymond J., IV
1988-01-01
Much attention has been given to the synthesis of linear conducting materials. These inorganic, organic, and polymeric materials have some very interesting electrical and optical properties, including low temperature superconductivity. Because of the anisotropic nature of these compounds, impurities and defects strongly influence the unique physical properties of such crystals. Investigations have demonstrated that electrochemical growth has provided the most reproducible and purest crystals. Space, specifically microgravity, eliminates phenomena such as buoyancy-driven convection, and could permit formation of crystals many times purer than the ones grown to date. Several different linear conductors were flown on Get Away Special G-007 on board the Space Shuttle Columbia, STS 61-C, the first of a series of Project Explorer payloads. These compounds were grown by electrochemical methods, and the growth was monitored by photographs taken throughout the mission. Due to some thermal problems, no crystals of appreciable size were grown. The experimental results will be incorporated into improvements for the next two missions of Project Explorer. The results and conclusions of the first mission are discussed.
NASA Astrophysics Data System (ADS)
Rewieński, M.; Lamecki, A.; Mrozowski, M.
2013-09-01
This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with multilevel preconditioner, for finding several eigenvalues for generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. Presented results of numerical experiments confirm the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions which are large with respect to wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance, as compared to methods which rely on exact linear solves, indicating tremendous potential of the 'inexact solve' concept. Finally, the scheme which generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
Learning quadratic receptive fields from neural responses to natural stimuli.
Rajan, Kanaka; Marre, Olivier; Tkačik, Gašper
2013-07-01
Models of neural responses to stimuli with complex spatiotemporal correlation structure often assume that neurons are selective for only a small number of linear projections of a potentially high-dimensional input. In this review, we explore recent modeling approaches where the neural response depends on the quadratic form of the input rather than on its linear projection, that is, the neuron is sensitive to the local covariance structure of the signal preceding the spike. To infer this quadratic dependence in the presence of arbitrary (e.g., naturalistic) stimulus distribution, we review several inference methods, focusing in particular on two information theory-based approaches (maximization of stimulus energy and of noise entropy) and two likelihood-based approaches (Bayesian spike-triggered covariance and extensions of generalized linear models). We analyze the formal relationship between the likelihood-based and information-based approaches to demonstrate how they lead to consistent inference. We demonstrate the practical feasibility of these procedures by using model neurons responding to a flickering variance stimulus.
Geometric mean for subspace selection.
Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J
2009-02-01
Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes which are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem compared with the linear dimensionality reduction step in FLDA and its several representative extensions.
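A minimal sketch of the kind of criterion being compared, assuming Gaussian classes with a shared covariance; the toy data, the candidate projection W, and the function name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Compare the arithmetic mean (FLDA-like) and geometric mean of pairwise KL
# divergences between Gaussian classes with a shared covariance, after
# projecting onto a candidate subspace W (all quantities synthetic).
def pairwise_kl(means, cov, W):
    """KL divergences between projected class Gaussians (identical covariance)."""
    P = W @ np.linalg.inv(W.T @ cov @ W) @ W.T  # acts as the inverse covariance in the subspace
    kls = []
    for i in range(len(means)):
        for j in range(len(means)):
            if i != j:
                d = means[i] - means[j]
                kls.append(0.5 * d @ P @ d)
    return np.array(kls)

rng = np.random.default_rng(0)
means = [rng.normal(size=5) for _ in range(4)]   # 4 toy classes in 5-D
cov = np.eye(5)
W = rng.normal(size=(5, 2))                      # candidate 2-D projection

kls = pairwise_kl(means, cov, W)
print("arithmetic mean KL:", kls.mean())
print("geometric mean KL :", np.exp(np.log(kls).mean()))  # criterion 1 in the abstract
```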
LINEAR COLLIDER PHYSICS RESOURCE BOOK FOR SNOWMASS 2001.
DOE Office of Scientific and Technical Information (OSTI.GOV)
ABE,T.; DAWSON,S.; HEINEMEYER,S.
The American particle physics community can look forward to a well-conceived and vital program of experimentation for the next ten years, using both colliders and fixed target beams to study a wide variety of pressing questions. Beyond 2010, these programs will be reaching the end of their expected lives. The CERN LHC will provide an experimental program of the first importance. But beyond the LHC, the American community needs a coherent plan. The Snowmass 2001 Workshop and the deliberations of the HEPAP subpanel offer a rare opportunity to engage the full community in planning our future for the next decade or more. A major accelerator project requires a decade from the beginning of an engineering design to the receipt of the first data. So it is now time to decide whether to begin a new accelerator project that will operate in the years soon after 2010. We believe that the world high-energy physics community needs such a project. With the great promise of discovery in physics at the next energy scale, and with the opportunity for the uncovering of profound insights, we cannot allow our field to contract to a single experimental program at a single laboratory in the world. We believe that an e+e- linear collider is an excellent choice for the next major project in high-energy physics. Applying experimental techniques very different from those used at hadron colliders, an e+e- linear collider will allow us to build on the discoveries made at the Tevatron and the LHC, and to add a level of precision and clarity that will be necessary to understand the physics of the next energy scale. It is not necessary to anticipate specific results from the hadron collider programs to argue for constructing an e+e- linear collider; in any scenario that is now discussed, physics will benefit from the new information that e+e- experiments can provide.
ERIC Educational Resources Information Center
Arantes do Amaral, Joao Alberto
2017-01-01
In this case study we discuss the dynamics that drive a free-of-charge project-based learning extension course. We discuss the lessons learned in the course, "Laboratory of Social Projects." The course aimed to teach project management skills to the participants. It was conducted from August to November of 2015, at Federal University of…
International energy outlook 1996
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-05-01
This International Energy Outlook presents historical data from 1970 to 1993 and EIA's projections of energy consumption and carbon emissions through 2015 for 6 country groups. Prospects for individual fuels are discussed. Summary tables of the IEO96 world energy consumption, oil production, and carbon emissions projections are provided in Appendix A. The reference case projections of total foreign energy consumption and of natural gas, coal, and renewable energy were prepared using EIA's World Energy Projection System (WEPS) model. Reference case projections of foreign oil production and consumption were prepared using the International Energy Module of the National Energy Modeling System (NEMS). Nuclear consumption projections were derived from the International Nuclear Model, PC Version (PC-INM). Alternatively, nuclear capacity projections were developed using two methods: the lower reference case projections were based on analysts' knowledge of the nuclear programs in different countries; the upper reference case was generated by the World Integrated Nuclear Evaluation System (WINES)--a demand-driven model. In addition, the NEMS Coal Export Submodule (CES) was used to derive flows in international coal trade. As noted above, foreign projections of electricity demand are now projected as part of the WEPS. 64 figs., 62 tabs.
Ikeda, Shigaku; Kawada, Juri; Yaguchi, Hitoshi; Ogawa, Hideoki
2003-01-01
Multiple hair follicle nevi are an extremely rare condition. In 1998, a case of unilateral multiple hair follicle nevi, ipsilateral alopecia and ipsilateral leptomeningeal angiomatosis of the brain was first reported from Japan. Very recently, a hair follicle nevus in a distribution following Blaschko's lines has also been reported. In this paper, we report a congenital case of unilateral, systematized linear hair follicle nevi associated with congenital, ipsilateral, multiple plaque lesions resembling epidermal nevi but lacking leptomeningeal angiomatosis of the brain. These cases suggest the possibility of a novel neurocutaneous syndrome. Additional cases should be sought in order to determine whether this condition is pathophysiologically distinct. Copyright 2003 S. Karger AG, Basel
NASA Astrophysics Data System (ADS)
Wang, Q.; Liu, Z. J.; Zheng, C. Y.; Xiao, C. Z.; Feng, Q. S.; Zhang, H. C.; He, X. T.
2018-01-01
The longitudinal relativistic effect on stimulated Raman backscattering (SRBS) is investigated by using one-dimensional (1D) Vlasov-Maxwell simulations. Using a short backscattered light seed pulse with a very small amplitude, the linear gain spectrum of SRBS in the strongly convective regime is presented by combining relativistic and non-relativistic 1D Vlasov-Maxwell simulations, and is in agreement with the steady-state linear theory. More interestingly, by considering the transition from convective to absolute instability due to electron trapping, we successfully predict the critical duration of the seed which can just trigger the kinetic inflation of the excited SRBS after the seed leaves the simulation box. The critical duration in the relativistic case is much shorter than that in the nonrelativistic case, which indicates that kinetic inflation occurs more easily in the relativistic case than in the nonrelativistic case. In the weakly convective regime, the transition from convective to absolute instability for SRBS can occur directly in the linear regime due to the longitudinal relativistic modification. For the same pump, our simulations demonstrate for the first time that the SRBS excited by a short and small seed pulse is a convective instability in the nonrelativistic case but becomes an absolute instability in the relativistic case, due to the decrease of the linear Landau damping from the longitudinal relativistic modification. In more detail, the growth rate of the backscattered light is also in excellent agreement with theoretical prediction.
ERIC Educational Resources Information Center
Bove, Liliana L.; Davies, W. Martin
2009-01-01
This case study outlines the use of client-sponsored research projects in a quantitative postgraduate marketing research subject conducted in a 12-week semester in a research-intensive Australian university. The case study attempts to address the dearth of recent literature on client-sponsored research projects in the discipline of marketing.…
Airpower Projection in the Anti-Access/Area Denial Environment: Dispersed Operations
2015-02-01
To project airpower in the anti-access/area denial environment, this paper breaks down a case study of the Rapid Raptor concept as an alternative option. The risks of executing a dispersed operations model are analyzed, along with mitigation measures, and the ability to defend forward operating locations is considered among the factors that will force leaders to look at alternative ways to project power.
Kumar, K Vasanth
2006-08-21
The experimental equilibrium data of malachite green adsorption onto activated carbon were fitted to the Freundlich, Langmuir and Redlich-Peterson isotherms by linear and non-linear methods. A comparison between the linear and non-linear methods of estimating the isotherm parameters was discussed. The four different linearized forms of the Langmuir isotherm were also discussed. The results confirmed that the non-linear method is a better way to obtain the isotherm parameters. The best-fitting isotherms were the Langmuir and Redlich-Peterson isotherms; Redlich-Peterson is a special case of Langmuir when the Redlich-Peterson isotherm constant g is unity.
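The contrast between linear and non-linear estimation can be sketched as follows (Python with SciPy; the equilibrium data, parameter values, and the particular linearized form shown are illustrative assumptions, not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch comparing linear and non-linear estimation of Langmuir parameters
# on synthetic equilibrium data (not the malachite green / activated carbon data).
def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])     # mg/L, synthetic
qe = langmuir(Ce, qmax=120.0, KL=0.05) * (1 + 0.03 * np.random.default_rng(1).normal(size=Ce.size))

# Non-linear fit of the original Langmuir equation.
(qmax_nl, KL_nl), _ = curve_fit(langmuir, Ce, qe, p0=(100.0, 0.1))

# One common linearized form, Ce/qe = Ce/qmax + 1/(qmax*KL), fitted by least squares.
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax_lin, KL_lin = 1.0 / slope, slope / intercept

print("non-linear estimates:", qmax_nl, KL_nl)
print("linearized estimates:", qmax_lin, KL_lin)
```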
NASA Astrophysics Data System (ADS)
O'Brien, Ricky T.; Cooper, Benjamin J.; Kipritidis, John; Shieh, Chun-Chien; Keall, Paul J.
2014-02-01
Four dimensional cone beam computed tomography (4DCBCT) images suffer from angular under sampling and bunching of projections due to a lack of feedback between the respiratory signal and the acquisition system. To address this problem, respiratory motion guided 4DCBCT (RMG-4DCBCT) regulates the gantry velocity and projection time interval, in response to the patient’s respiratory signal, with the aim of acquiring evenly spaced projections in a number of phase or displacement bins during the respiratory cycle. Our previous study of RMG-4DCBCT was limited to sinusoidal breathing traces. Here we expand on that work to provide a practical algorithm for the case of real patient breathing data. We give a complete description of RMG-4DCBCT including full details on how to implement the algorithms to determine when to move the gantry and when to acquire projections in response to the patient’s respiratory signal. We simulate a realistic working RMG-4DCBCT system using 112 breathing traces from 24 lung cancer patients. Acquisition used phase-based binning and parameter settings typically used on commercial 4DCBCT systems (4 min acquisition time, 1200 projections across 10 respiratory bins), with the acceleration and velocity constraints of current generation linear accelerators. We quantified streaking artefacts and image noise for conventional and RMG-4DCBCT methods by reconstructing projection data selected from an oversampled set of Catphan phantom projections. RMG-4DCBCT allows us to optimally trade-off image quality, acquisition time and image dose. For example, for the same image quality and acquisition time as conventional 4DCBCT approximately half the imaging dose is needed. Alternatively, for the same imaging dose, the image quality as measured by the signal to noise ratio, is improved by 63% on average. C-arm cone beam computed tomography systems, with an acceleration up to 200°/s2, a velocity up to 100°/s and the acquisition of 80 projections per second, allow the image acquisition time to be reduced to below 60 s. We have made considerable progress towards realizing a system to reduce projection clustering in conventional 4DCBCT imaging and hence reduce the imaging dose to the patient.
Project resource reallocation algorithm
NASA Technical Reports Server (NTRS)
Myers, J. E.
1981-01-01
A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.
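A minimal sketch of the proportional redistribution idea, assuming a simple per-period cost array rather than the PACE card-deck format (the function name and data are illustrative):

```python
import numpy as np

# Linearly stretch or compress a baseline time-phased cost distribution in
# proportion to a schedule change, preserving the total estimate.
def redistribute(baseline_costs, new_n_periods):
    """Map per-period baseline costs onto a new schedule length by linear interpolation."""
    old_t = np.linspace(0.0, 1.0, len(baseline_costs))
    new_t = np.linspace(0.0, 1.0, new_n_periods)
    rescaled = np.interp(new_t, old_t, baseline_costs)
    # Renormalize so the total cost estimate is unchanged by the stretch.
    return rescaled * (np.sum(baseline_costs) / np.sum(rescaled))

baseline = np.array([10.0, 30.0, 40.0, 20.0])   # cost per period on the baseline schedule
print(redistribute(baseline, 6))                # same total, spread over a 6-period schedule
```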
[Clinical application of biofragmentable anastomosis ring for intestinal anastomosis].
Ye, Feng; Lin, Jian-jiang
2006-11-01
To compare the efficacy of the biofragmentable anastomotic ring (BAR) with conventional hand-sutured and stapling techniques, and to evaluate the safety and applicability of the BAR in intestinal anastomosis. A total of 498 patients who underwent intestinal anastomosis from January 2000 to November 2005 were allocated to a BAR group (n=186), a hand-sutured group (n=177) and a linear cutter group (n=135). The operative time, postoperative convalescence and corresponding complications were recorded. Postoperative anastomotic inflammation and anastomotic stenosis were observed during half-year or one-year follow-up of 436 patients. The operative time was (102 +/- 16) min in the BAR group, (121 +/- 15) min in the hand-sutured group, and (105 +/- 18) min in the linear cutter group. The difference was statistically significant (P < 0.05); the operative time in the BAR and linear cutter groups was shorter than in the hand-sutured group. One case of anastomotic leakage was noted in the BAR group, one case in the hand-sutured group, and none in the linear cutter group; they were cured by conservative methods. One case of anastomotic obstruction occurred in the BAR group and one case in the hand-sutured group; both were cured by conservative methods. Two further cases of anastomotic obstruction occurred in the hand-sutured group, one of which required reoperation to remove the obstruction. In the BAR, hand-sutured and linear cutter groups, the postoperative first flatus time was (67.2 +/- 4.6) h, (70.2 +/- 5.8) h and (69.2 +/- 6.2) h, respectively; no significant differences were observed among the three groups (P > 0.05). The rate of postoperative anastomotic inflammation was 3.0% (5/164) in the BAR group, 47.8% (76/159) in the hand-sutured group and 7.1% (8/113) in the linear cutter group. The difference was statistically significant (P < 0.05); the rate in the BAR and linear cutter groups was lower than in the hand-sutured group. BAR is a rapid, safe and effective method for intestinal anastomosis. It causes less anastomotic inflammatory reaction than the hand-sutured technique, and should be considered equal to manual and stapler methods.
On the interaction of small-scale linear waves with nonlinear solitary waves
NASA Astrophysics Data System (ADS)
Xu, Chengzhu; Stastna, Marek
2017-04-01
In the study of environmental and geophysical fluid flows, linear wave theory is well developed and its application has been considered for phenomena of various length and time scales. However, due to the nonlinear nature of fluid flows, in many cases results predicted by linear theory do not agree with observations. One such case is internal wave dynamics. While small-amplitude wave motion may be approximated by linear theory, large amplitude waves tend to be solitary-like. In some cases, when the wave is highly nonlinear, even weakly nonlinear theories fail to predict the wave properties correctly. We study the interaction of small-scale linear waves with nonlinear solitary waves using highly accurate pseudospectral simulations that begin with a fully nonlinear solitary wave and a train of small-amplitude waves initialized from linear waves. The solitary wave then interacts with the linear waves through either an overtaking collision or a head-on collision. During the collision, there is a net energy transfer from the linear wave train to the solitary wave, resulting in an increase in the kinetic energy carried by the solitary wave and a phase shift of the solitary wave with respect to a freely propagating solitary wave. At the same time the linear waves are greatly reduced in amplitude. The percentage of energy transferred depends primarily on the wavelength of the linear waves. We found that after one full collision cycle, the longest waves may retain as much as 90% of the kinetic energy they had initially, while the shortest waves lose almost all of their initial energy. We also found that a head-on collision is more efficient in destroying the linear waves than an overtaking collision. On the other hand, the initial amplitude of the linear waves has very little impact on the percentage of energy that can be transferred to the solitary wave. Because of the nonlinearity of the solitary wave, these results provide some insight into wave-mean flow interaction in a fully nonlinear framework.
An Evaluation of Project iRead: A Program Created to Improve Sight Word Recognition
ERIC Educational Resources Information Center
Marshall, Theresa Meade
2014-01-01
This program evaluation was undertaken to examine the relationship between participation in Project iRead and student gains in word recognition, fluency, and comprehension as measured by the Phonological Awareness Literacy Screening (PALS) Test. Linear regressions compared the 2012-13 PALS results from 5,140 first and second grade students at…
Creating a Project on Difference Equations with Primary Sources: Challenges and Opportunities
ERIC Educational Resources Information Center
Ruch, David
2014-01-01
This article discusses the creation of a student project about linear difference equations using primary sources. Early 18th-century developments in the area are outlined, focusing on efforts by Abraham De Moivre (1667-1754) and Daniel Bernoulli (1700-1782). It is explained how primary sources from these authors can be used to cover material…
ERIC Educational Resources Information Center
Shiovitz-Ezra, Sharon; Leitsch, Sara A.
2010-01-01
The authors explore associations between objective and subjective social network characteristics and loneliness in later life, using data from the National Social Life, Health, and Aging Project, a nationally representative sample of individuals ages 57 to 85 in the United States. Hierarchical linear regression was used to examine the associations…
Alan K. Swanson; Solomon Z. Dobrowski; Andrew O. Finley; James H. Thorne; Michael K. Schwartz
2013-01-01
The uncertainty associated with species distribution model (SDM) projections is poorly characterized, despite its potential value to decision makers. Error estimates from most modelling techniques have been shown to be biased due to their failure to account for spatial autocorrelation (SAC) of residual error. Generalized linear mixed models (GLMM) have the ability to...
Case Study of 'Engineering Peer Meetings' in JPL's ST-6 Project
NASA Technical Reports Server (NTRS)
Chao, Lawrence P.; Tumer, Irem
2004-01-01
This design process error-proofing case study describes a design review practice implemented by a project manager at NASA Jet Propulsion Laboratory. There are many types of reviews at NASA: required and not, formalized and informal, programmatic and technical. Standing project formal reviews such as the Preliminary Design Review (PDR) and Critical Design Review (CDR) are a required part of every project and mission development. However, the engineering peer reviews that support teams' technical work on such projects are often informal, ad hoc, and inconsistent across the organization. This case study discusses issues and innovations identified by a project manager at JPL and implemented in 'engineering peer meetings' for his group.
Case Study of "Engineering Peer Meetings" in JPL's ST-6 Project
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Chao, Lawrence P.
2003-01-01
This design process error-proofing case study describes a design review practice implemented by a project manager at NASA Jet Propulsion Laboratory. There are many types of reviews at NASA: required and not, formalized and informal, programmatic and technical. Standing project formal reviews such as the Preliminary Design Review (PDR) and Critical Design Review (CDR) are a required part of every project and mission development. However, the engineering peer reviews that support teams technical work on such projects are often informal, ad hoc, and inconsistent across the organization. This case study discusses issues and innovations identified by a project manager at JPL and implemented in "engineering peer meetings" for his group.
NASA Astrophysics Data System (ADS)
Araneda, Bernardo
2018-04-01
We present weighted covariant derivatives and wave operators for perturbations of certain algebraically special Einstein spacetimes in arbitrary dimensions, under which the Teukolsky and related equations become weighted wave equations. We show that the higher dimensional generalization of the principal null directions are weighted conformal Killing vectors with respect to the modified covariant derivative. We also introduce a modified Laplace–de Rham-like operator acting on tensor-valued differential forms, and show that the wave-like equations are, at the linear level, appropriate projections off shell of this operator acting on the curvature tensor; the projection tensors being made out of weighted conformal Killing–Yano tensors. We give off shell operator identities that map the Einstein and Maxwell equations into weighted scalar equations, and using adjoint operators we construct solutions of the original field equations in a compact form from solutions of the wave-like equations. We study the extreme and zero boost weight cases; extreme boost corresponding to perturbations of Kundt spacetimes (which includes near horizon geometries of extreme black holes), and zero boost to static black holes in arbitrary dimensions. In 4D our results apply to Einstein spacetimes of Petrov type D and make use of weighted Killing spinors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naime, Andre, E-mail: andre.naime.ibama@gmail.com
The environmental regulation of hazardous projects with risk-based decision-making processes can lead to a deficient management of human exposure to technological hazards. Such an approach for regulation is criticized for simplifying the complexity of decisions involving the economic, social, and environmental aspects of the installation and operation of hazardous facilities in urban areas. Results of a Brazilian case study indicate that oil and gas transmission pipelines may represent a threat to diverse communities if the relationship between such linear projects and human populations is overlooked by regulatory bodies. Results also corroborate known challenges to the implementation of EIA processes and outline limitations to an effective environmental and risk management. Two preliminary topics are discussed to strengthen similar regulatory practices. Firstly, an effective integration between social impact assessment and risk assessment in EIA processes to have a more comprehensive understanding of the social fabric. Secondly, the advancement of traditional management practices for hazardous installations to pursue a strong transition from assessment and evaluation to management and control and to promote an effective interaction between land-use planning and environmental regulation.
Giant Perpendicular Magnetic Anisotropy of Graphene-Co Heterostructures
NASA Astrophysics Data System (ADS)
Yang, Hongxin; Hallal, Ali; Chshiev, Mairbek; Spintec theory Team
We report strongly enhanced perpendicular magnetic anisotropy (PMA) of Co films by graphene coating via ab-initio calculations. The results show that graphene coating can enhance the surface anisotropy of a Co film to up to twice that of the bare Co case and keep the effective anisotropy of the film out-of-plane up to a Co thickness of 25 Å, in agreement with experiments. Our layer-resolved analysis reveals that the PMA of Co (Co/Gr) films mainly originates from the three Co layers adjacent to the surface (interface) and can be strongly influenced by graphene. Furthermore, orbital hybridization analysis uncovers the origin of the PMA enhancement, which is due to graphene-Co bonding causing an inversion of the Co 3d(z²) and 3d(x²-y²) Bloch states close to the Fermi level. Finally, we propose to design Co-graphene heterostructures which possess a linearly increasing surface anisotropy and a constant effective anisotropy. These findings point towards possible engineering of graphene-Co junctions with giant anisotropy, which stands as a hallmark for future spintronic information processing. This work was supported by the European Graphene Flagship, the European Union-funded STREP project CONCEPT-GRAPHENE, and the French ANR projects NANOSIM-GRAPHENE and NMGEM
NASA Astrophysics Data System (ADS)
Majumder, Subir; Biswas, Tushar; Bhadra, Shaymal K.
2016-10-01
Existence of out-of-plane conical dispersion for a triangular photonic crystal lattice is reported. It is observed that conical dispersion is maintained for a number of out-of-plane wave vectors (k_z). We study a case, called a Dwarf Dirac cone (DDC), in which Dirac-like linear dispersion exists but the photonic density of states does not vanish and localized modes are not supported. We demonstrate the trapping of such modes by introducing defects in the crystal. Interestingly, we find by k-point sampling, as well as by tuning the trapped frequency, that such a conical dispersion has an inherent light-confining property which is governed by neither of the known wave-confining mechanisms such as total internal reflection or band-gap guidance. Our study reveals that such a conical dispersion with a non-vanishing photonic density of states induces unexpectedly intense trapping of light compared with that at other points in the continuum. Such studies motivate the fabrication of new devices with exciting properties and new functionalities. Project supported by the Director, CSIR-CGCRI, the DST, Government of India, and the CSIR 12th Plan Project (GLASSFIB), India.
Is cepstrum averaging applicable to circularly polarized electric-field data?
NASA Astrophysics Data System (ADS)
Tunnell, T.
1990-04-01
In FY 1988 a cepstrum averaging technique was developed to eliminate the ground reflections from charged particle beam (CPB) electromagnetic pulse (EMP) data. The work was done for the Los Alamos National Laboratory Project DEWPOINT at SST-7. The technique averages the cepstra of horizontally and vertically polarized (i.e., linearly polarized) electric field data. This cepstrum averaging technique was programmed into the FORTRAN codes CEP and CEPSIM. Steve Knox, the principal investigator for Project DEWPOINT, asked the authors to determine whether the cepstrum averaging technique could be applied to circularly polarized electric field data. The answer is yes, but some modifications may be necessary. There are two aspects to this answer that need to be addressed, namely the "yes" and the modifications. First, regarding the "yes": the technique is applicable to elliptically polarized electric field data in general, and circular polarization is a special case of elliptical polarization. Second, regarding the modifications: greater care may be required in computing the phase in the calculation of the complex logarithm. The calculation of the complex logarithm is the most critical step in cepstrum-based analysis. This memorandum documents these findings.
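For illustration only, a small Python sketch of cepstrum averaging over two polarization channels; it uses the real (power) cepstrum rather than the complex cepstrum with careful phase unwrapping used by CEP/CEPSIM, and the pulse shape, echo delay, and channel data are synthetic assumptions:

```python
import numpy as np

def real_cepstrum(x):
    """Real (power) cepstrum: inverse FFT of the log magnitude spectrum."""
    return np.fft.ifft(np.log(np.abs(np.fft.fft(x)) + 1e-30)).real

n = 1024
t = np.arange(n) / n
direct = np.exp(-2.0e5 * (t - 0.1) ** 2)            # short, broadband synthetic pulse
echo_delay = 200                                     # samples (synthetic ground reflection)
e_h = direct + 0.5 * np.roll(direct, echo_delay)     # "horizontally polarized" channel
e_v = direct + 0.5 * np.roll(direct, echo_delay)     # "vertically polarized" channel

avg_cep = 0.5 * (real_cepstrum(e_h) + real_cepstrum(e_v))
peak = np.argmax(np.abs(avg_cep[10:n // 2])) + 10
print("largest cepstral peak away from the origin at quefrency sample", peak)  # ~ echo_delay
```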
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne
2011-11-01
We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for consequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, a vortex pair merging, a double shear layer, decaying turbulence and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine resolution vorticity field.
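A minimal sketch of the two grid-transfer operations named above, full-weighting restriction and bilinear prolongation, on a periodic 2D field (Python/NumPy; the Poisson and advection-diffusion solvers are treated as black boxes and omitted, and the implementation details are assumptions rather than the authors' code):

```python
import numpy as np

def restrict_full_weighting(f):
    """Coarsen a periodic 2-D field by a factor of two with full weighting."""
    n = f.shape[0]
    c = np.zeros((n // 2, n // 2))
    for i in range(n // 2):
        for j in range(n // 2):
            I, J = 2 * i, 2 * j
            c[i, j] = (4 * f[I, J]
                       + 2 * (f[I - 1, J] + f[(I + 1) % n, J]
                              + f[I, J - 1] + f[I, (J + 1) % n])
                       + (f[I - 1, J - 1] + f[I - 1, (J + 1) % n]
                          + f[(I + 1) % n, J - 1] + f[(I + 1) % n, (J + 1) % n])) / 16.0
    return c

def prolong_bilinear(c):
    """Bilinearly interpolate a periodic coarse field back onto the fine grid."""
    n = 2 * c.shape[0]
    f = np.zeros((n, n))
    f[::2, ::2] = c
    f[1::2, ::2] = 0.5 * (c + np.roll(c, -1, axis=0))
    f[::2, 1::2] = 0.5 * (c + np.roll(c, -1, axis=1))
    f[1::2, 1::2] = 0.25 * (c + np.roll(c, -1, axis=0) + np.roll(c, -1, axis=1)
                            + np.roll(np.roll(c, -1, axis=0), -1, axis=1))
    return f

fine = np.random.default_rng(0).normal(size=(32, 32))
print(prolong_bilinear(restrict_full_weighting(fine)).shape)  # back to the fine grid: (32, 32)
```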
ERIC Educational Resources Information Center
Marschall, Gosia; Andrews, Paul
2015-01-01
In this article we present an exploratory case study of six Polish teachers' perspectives on the teaching of linear equations to grade six students. Data, which derived from semi-structured interviews, were analysed against an extant framework and yielded a number of commonly held beliefs about what teachers aimed to achieve and how they would…
A system for aerodynamic design and analysis of supersonic aircraft. Part 4: Test cases
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.
1980-01-01
An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Representative test cases and associated program output are presented.
Towards bridging the gap between climate change projections and maize producers in South Africa
NASA Astrophysics Data System (ADS)
Landman, Willem A.; Engelbrecht, Francois; Hewitson, Bruce; Malherbe, Johan; van der Merwe, Jacobus
2018-05-01
Multi-decadal regional projections of future climate change are introduced into a linear statistical model in order to produce an ensemble of austral mid-summer maximum temperature simulations for southern Africa. The statistical model uses atmospheric thickness fields from a high-resolution (0.5° × 0.5°) reanalysis-forced simulation as predictors in order to develop a linear recalibration model which represents the relationship between atmospheric thickness fields and gridded maximum temperatures across the region. The regional climate model, the conformal-cubic atmospheric model (CCAM), projects maximum temperature increases over southern Africa of the order of 4 °C, or even higher, under low mitigation towards the end of the century. The statistical recalibration model is able to replicate these increasing temperatures, and the atmospheric thickness-maximum temperature relationship is shown to be stable under future climate conditions. Since dry-land crop yields are not explicitly simulated by climate models but are sensitive to maximum temperature extremes, the effect of projected maximum temperature change on dry-land crops of the Witbank maize production district of South Africa, assuming other factors remain unchanged, is then assessed by employing a statistical approach similar to the one used for the maximum temperature projections.
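A toy sketch of the recalibration idea, assuming a single synthetic thickness predictor and a simple least-squares fit in place of the gridded statistical model used in the study:

```python
import numpy as np

# Fit a linear model mapping an atmospheric-thickness predictor onto maximum
# temperature, then apply it to a synthetic "future" thickness series.
# All numbers below are illustrative; real inputs would be gridded fields.
rng = np.random.default_rng(7)
thickness_hist = rng.normal(5600.0, 30.0, size=300)           # geopotential thickness (m), synthetic
tmax_hist = 0.05 * (thickness_hist - 5600.0) + 30.0 + rng.normal(0.0, 0.5, size=300)

A = np.column_stack([thickness_hist, np.ones_like(thickness_hist)])
coef, intercept = np.linalg.lstsq(A, tmax_hist, rcond=None)[0]

thickness_future = thickness_hist + 80.0                       # a warmer, thicker atmosphere
print("projected mean Tmax change (degC):",
      np.mean(coef * thickness_future + intercept) - np.mean(tmax_hist))
```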
Variations in the Solution of Linear First-Order Differential Equations. Classroom Notes
ERIC Educational Resources Information Center
Seaman, Brian; Osler, Thomas J.
2004-01-01
A special project which can be given to students of ordinary differential equations is described in detail. Students create new differential equations by changing the dependent variable in the familiar linear first-order equation (dv/dx)+p(x)v=q(x) by means of a substitution v=f(y). The student then creates a table of the new equations and…
Annual Energy Outlook 2016 With Projections to 2040
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
The Annual Energy Outlook 2016 (AEO2016), prepared by the U.S. Energy Information Administration (EIA), presents long-term projections of energy supply, demand, and prices through 2040. The projections, focused on U.S. energy markets, are based on results from EIA's National Energy Modeling System (NEMS). NEMS enables EIA to make projections under alternative, internally consistent sets of assumptions. The analysis in AEO2016 focuses on the Reference case and 17 alternative cases. EIA published an Early Release version of the AEO2016 Reference case (including the U.S. Environmental Protection Agency's (EPA) Clean Power Plan (CPP)) and a No CPP case (excluding the CPP) in May 2016.
IMNN: Information Maximizing Neural Networks
NASA Astrophysics Data System (ADS)
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets vastly simplifies both frequentist and Bayesian inference, but important information may be inadvertently missed in doing so. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of a Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.
Costs of venous thromboembolism associated with hospitalization for medical illness.
Cohoon, Kevin P; Leibson, Cynthia L; Ransom, Jeanine E; Ashrani, Aneel A; Petterson, Tanya M; Long, Kirsten Hall; Bailey, Kent R; Heit, John A
2015-04-01
To determine population-based estimates of medical costs attributable to venous thromboembolism (VTE) among patients currently or recently hospitalized for acute medical illness. Population-based cohort study conducted in Olmsted County, Minnesota. Using Rochester Epidemiology Project (REP) resources, we identified all Olmsted County residents with objectively diagnosed incident VTE during or within 92 days of hospitalization for acute medical illness over the 18-year period of 1988 to 2005 (n=286). One Olmsted County resident hospitalized for medical illness without VTE was matched to each case for event date (±1 year), duration of prior medical history, and active cancer status. Subjects were followed forward in REP provider-linked billing data for standardized, inflation-adjusted direct medical costs (excluding outpatient pharmaceutical costs) from 1 year before their respective event or index date to the earliest of death, emigration from Olmsted County, or December 31, 2011 (study end date). We censored follow-up such that each case and matched control had similar periods of observation. We used generalized linear modeling (controlling for age, sex, preexisting conditions, and costs 1 year before index) to predict costs for cases and controls. Adjusted mean predicted costs were 2.5-fold higher for cases ($62,838) than for controls ($24,464) (P<.001) from index to up to 5 years post index. Cost differences between cases and controls were greatest within the first 3 months after the event date (mean difference=$16,897) but costs remained significantly higher for cases compared with controls for up to 3 years. VTE during or after recent hospitalization for medical illness contributes a substantial economic burden.
Directed electromagnetic wave propagation in 1D metamaterial: Projecting operators method
NASA Astrophysics Data System (ADS)
Ampilogov, Dmitrii; Leble, Sergey
2016-07-01
We consider a boundary problem for 1D electrodynamics modeling of pulse propagation in a metamaterial medium. We build and apply projecting operators to a Maxwell system in the time domain, which allows us to split the linear propagation problem into directed waves for material relations with general dispersion. Matrix elements of the projectors act as convolution integral operators. For a weak nonlinearity we generalize the linear results, still for arbitrary dispersion, and derive the system of interacting right/left waves with combined (hybrid) amplitudes. The result is specified for the popular metamaterial model with the Drude formula for both the permittivity and permeability coefficients. We also discuss and investigate stationary solutions of the system related to some boundary regimes.
Blockwise conjugate gradient methods for image reconstruction in volumetric CT.
Qiu, W; Titley-Peloquin, D; Soleimani, M
2012-11-01
Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. Algebraic reconstruction discretizes the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ||b - Ax||_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but they have not been widely used in tomography problems (e.g. CBCT reconstruction). Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A is usually very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without further enhancing computer hardware. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
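A hedged sketch of the blockwise idea using SciPy's LinearOperator and lsqr; the toy sparse row blocks stand in for the CBCT system matrix, which in practice would be read from disk or regenerated block by block:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import LinearOperator, lsqr

# The system matrix A is kept as a list of row blocks; LSQR only ever sees A
# through block-by-block matrix-vector products. All data here are synthetic.
rng = np.random.default_rng(0)
n_cols, block_rows, n_blocks = 500, 200, 5
blocks = [sparse_random(block_rows, n_cols, density=0.01, random_state=i, format="csr")
          for i in range(n_blocks)]

def matvec(x):
    # Forward projection, one row block at a time.
    return np.concatenate([B @ x for B in blocks])

def rmatvec(y):
    # Back projection, accumulating contributions block by block.
    out = np.zeros(n_cols)
    for k, B in enumerate(blocks):
        out += B.T @ y[k * block_rows:(k + 1) * block_rows]
    return out

A = LinearOperator((n_blocks * block_rows, n_cols),
                   matvec=matvec, rmatvec=rmatvec, dtype=np.float64)
x_true = rng.normal(size=n_cols)
b = matvec(x_true)
x_rec = lsqr(A, b, damp=0.01, iter_lim=200)[0]   # damp adds Tikhonov-style regularization
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```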
Non-linear vibrations of sandwich viscoelastic shells
NASA Astrophysics Data System (ADS)
Benchouaf, Lahcen; Boutyour, El Hassan; Daya, El Mostafa; Potier-Ferry, Michel
2018-04-01
This paper deals with the non-linear vibration of sandwich viscoelastic shell structures. Coupling a harmonic balance method with Galerkin's procedure, one obtains an amplitude equation depending on two complex coefficients. The latter are determined by solving a classical eigenvalue problem and two linear ones. This permits one to obtain the non-linear frequency and the non-linear loss factor as functions of the displacement amplitude. To validate our approach, these relationships are illustrated in the case of a circular sandwich ring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, that method does not gain the full benefits of DECT for beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown that the proposed method improves image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
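A 1D toy sketch of the penalized weighted least-squares step described above, with the inverse of an assumed (diagonal) variance as the data-fidelity weight and a first-difference smoothness penalty; the signal, variance values, and regularization strength are illustrative assumptions, not the paper's method in detail:

```python
import numpy as np

def pwls_denoise(x0, variance, lam):
    """Penalized weighted least squares: argmin (x-x0)' W (x-x0) + lam*||D x||^2."""
    n = len(x0)
    W = np.diag(1.0 / variance)             # inverse-variance data-fidelity weight
    D = np.diff(np.eye(n), axis=0)          # first-difference (smoothness) operator
    return np.linalg.solve(W + lam * D.T @ D, W @ x0)

rng = np.random.default_rng(3)
true = np.concatenate([np.zeros(40), np.ones(40)])   # toy "material" profile
variance = np.full(true.size, 0.04)                  # assumed known noise variance
noisy = true + rng.normal(scale=np.sqrt(variance))

denoised = pwls_denoise(noisy, variance, lam=5.0)
print("rms error before:", np.std(noisy - true), "after:", np.std(denoised - true))
```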
NASA Astrophysics Data System (ADS)
Puķīte, Jānis; Wagner, Thomas
2016-05-01
We address the application of differential optical absorption spectroscopy (DOAS) to scattered light observations in the presence of strong absorbers (in particular ozone), for which the absorption optical depth is a non-linear function of the trace gas concentration. This is the case because the Beer-Lambert law generally does not hold for scattered light measurements, since many light paths contribute to the measurement. While in many cases a linear approximation can be made, for scenarios with strong absorptions non-linear effects cannot always be neglected. This is especially the case for observation geometries in which the light contributing to the measurement crosses the atmosphere along spatially well-separated paths differing strongly in length and location, as in limb geometry. In these cases, full retrieval algorithms are often applied to address the non-linearities, requiring iterative forward modelling of absorption spectra and time-consuming wavelength-by-wavelength radiative transfer modelling. In this study, we propose to describe the non-linear effects by additional sensitivity parameters that can be used, e.g., to build up a lookup table. Together with the widely used box air mass factors (effective light paths) describing the linear response to an increase in the trace gas amount, the higher-order sensitivity parameters eliminate the need for repeating the radiative transfer modelling when the absorption scenario is modified, even in the presence of a strong absorption background. While the higher-order absorption structures can be described as separate fit parameters in the spectral analysis (the so-called DOAS fit), in practice their quantitative evaluation requires good measurement quality (typically better than that available from current measurements). Therefore, we introduce an iterative retrieval algorithm correcting for the higher-order absorption structures not yet considered in the DOAS fit, as well as for the absorption dependence on temperature and scattering processes.
[Morphea or juvenile localised scleroderma: Case report].
Strickler, Alexis; Gallo, Silvanna; Jaramillo, Pedro; de Toro, Gonzalo
2016-01-01
Morphea or juvenile localised scleroderma (JLS) is an autoimmune, inflammatory, chronic, slowly progressive connective tissue disease of unknown cause that preferentially affects the skin and underlying tissues. We report a case of juvenile localised scleroderma in an 8-year-old girl, to encourage early diagnosis and treatment. The patient presented with indurated hypopigmented plaques of linear distribution on the right upper extremity, of two years' onset, together with hyperpigmented indurated plaques of papery texture with whitish areas of thinned skin on the right lower extremity, and leg and ankle swelling. The clinical features and diagnostic tests, including histology, were compatible with linear and pansclerotic JLS. She was started on immunosuppressive therapy, physiotherapy, and occupational therapy. We report a case of the linear and pansclerotic JLS type in which there was a two-year delay in diagnosis; nevertheless, the response to treatment was as positive as expected. Copyright © 2016 Sociedad Chilena de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.
Response statistics of rotating shaft with non-linear elastic restoring forces by path integration
NASA Astrophysics Data System (ADS)
Gaidai, Oleg; Naess, Arvid; Dimentberg, Michael
2017-07-01
Extreme statistics of random vibrations are studied for a Jeffcott rotor under uniaxial white noise excitation. The restoring force is modelled as non-linear elastic; a comparison is made with a linearized restoring force to see the effect of the force non-linearity on the response statistics. While analytical solutions and stability conditions are available for the linear model, this is generally not the case for the non-linear system, except in some special cases. The statistics of the non-linear case are studied by applying the path integration (PI) method, which is based on the Markov property of the coupled dynamic system. The Jeffcott rotor response statistics can be obtained by solving the Fokker-Planck (FP) equation of the 4D dynamic system. An efficient implementation of the PI algorithm is applied; namely, the fast Fourier transform (FFT) is used to simulate the additive noise of the dynamic system, which significantly reduces the computational time compared with classical PI. The excitation is modelled as Gaussian white noise, although white noise with any distribution can be implemented with the same PI technique. Multidirectional Markov noise can also be modelled with PI in the same way as unidirectional noise. PI is accelerated by using a Monte Carlo (MC) estimate of the joint probability density function (PDF) as the initial input. The symmetry of the dynamic system was exploited to afford higher mesh resolution. Both internal (rotating) and external damping are included in the mechanical model of the rotor. The main advantage of using PI rather than MC is that PI offers high accuracy in the tail of the probability distribution, which is of critical importance for, e.g., extreme value statistics, system reliability, and first passage probability.
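The abstract's idea of seeding the path-integration solver with a Monte Carlo estimate of the response PDF can be illustrated with a much simpler system than the 4-D Jeffcott rotor; the sketch below uses a single-degree-of-freedom Duffing-type oscillator under Gaussian white noise, with all parameter values chosen arbitrarily for illustration.

```python
import numpy as np

# Euler-Maruyama Monte Carlo estimate of the stationary joint PDF of
#   x'' + 2*zeta*x' + x + eps*x**3 = sqrt(2*D) * xi(t),  xi = unit white noise.
# The resulting histogram could serve as the initial PDF handed to a
# path-integration solver (the paper treats the full 4-D Jeffcott rotor).
rng = np.random.default_rng(0)
zeta, eps, D = 0.05, 0.5, 0.02          # damping, non-linearity, noise intensity
dt, n_steps, n_paths = 2e-3, 50_000, 2_000

x = np.zeros(n_paths)
v = np.zeros(n_paths)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    a = -2.0 * zeta * v - x - eps * x**3
    x = x + v * dt
    v = v + a * dt + np.sqrt(2.0 * D) * dw

# the (x, v) histogram approximates the stationary joint PDF and can be
# interpolated onto the path-integration mesh as an initial condition
pdf, x_edges, v_edges = np.histogram2d(x, v, bins=64, density=True)
```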
NASA Astrophysics Data System (ADS)
Khorrami, Mohammad; Shariati, Ahmad; Aghamohammadi, Amir; Fatollahi, Amir H.
2012-01-01
It is shown that, provided the linear diffusion equation satisfies both time- and space-translational invariance, the time dependence of a moment of degree α is a polynomial of degree at most α, while all connected moments are at most linear functions of time. As a special case, the variance is an at most linear function of time.
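As a concrete textbook instance of this statement (our notation, not the paper's), the variance under the one-dimensional diffusion equation grows exactly linearly in time:

```latex
% One-dimensional diffusion, \partial_t \rho = D\,\partial_x^2 \rho, with \rho
% and its derivatives vanishing at infinity:
\begin{aligned}
\frac{d}{dt}\langle x \rangle   &= D \int x\,\partial_x^2 \rho \, dx = 0, \\
\frac{d}{dt}\langle x^2 \rangle &= D \int x^2\,\partial_x^2 \rho \, dx = 2D, \\
\Rightarrow \quad \operatorname{Var}[x](t) &= \operatorname{Var}[x](0) + 2Dt .
\end{aligned}
```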
When linear stability does not exclude nonlinear instability
Kevrekidis, P. G.; Pelinovsky, D. E.; Saxena, A.
2015-05-29
We describe a mechanism that results in the nonlinear instability of stationary states even in the case where the stationary states are linearly stable. In this study, this instability is due to the nonlinearity-induced coupling of the linearization's internal modes of negative energy with the continuous spectrum. In a broad class of nonlinear Schrödinger equations considered, the presence of such internal modes guarantees the nonlinear instability of the stationary states in the evolution dynamics. To corroborate this idea, we explore three prototypical case examples: (a) an antisymmetric soliton in a double-well potential, (b) a twisted localized mode in a one-dimensional lattice with cubic nonlinearity, and (c) a discrete vortex in a two-dimensional saturable lattice. In all cases, we observe a weak nonlinear instability, despite the linear stability of the respective states.
Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li
2009-02-01
Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The widely accepted implicit assumption is that the intrinsic sources identified by sICA are spatially statistically independent, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. Such interdependency can arise, for instance, in fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on synthetic data and on real resting-state fMRI data. Both the simulated and the real two-task fMRI experiments demonstrated that sICA combined with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating the activation induced by each task as well as by both tasks.
NASA Astrophysics Data System (ADS)
Koparan, Timur; Güven, Bülent
2015-07-01
The aim of this study is to determine the effect of a project-based learning approach on 8th-grade secondary-school students' statistical literacy levels for data representation. To achieve this goal, a test consisting of 12 open-ended questions was developed in accordance with the views of experts. Seventy 8th-grade secondary-school students, 35 in the experimental group and 35 in the control group, took this test twice, once before and once after the intervention. All raw scores were converted into linear measures using the Winsteps 3.72 Rasch modelling program, and t-tests and an ANCOVA were carried out on the linear measures. Based on the findings, it was concluded that the project-based learning approach increases students' level of statistical literacy for data representation. Students' levels of statistical literacy before and after the intervention are shown through the obtained person-item maps.
Kinetic Rate Kernels via Hierarchical Liouville-Space Projection Operator Approach.
Zhang, Hou-Dao; Yan, YiJing
2016-05-19
Kinetic rate kernels in general multisite systems are formulated on the basis of a nonperturbative quantum dissipation theory, the hierarchical equations of motion (HEOM) formalism, together with the Nakajima-Zwanzig projection operator technique. The present approach exploits the HEOM-space linear algebra. The quantum non-Markovian site-to-site transfer rate can be faithfully evaluated via projected HEOM dynamics. The developed method is exact, as evidenced by comparison with direct HEOM evaluations of the population evolution.
SLAC modulator system improvements and reliability results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donaldson, A.R.
1998-06-01
In 1995, an improvement project was completed on the 244 klystron modulators in the linear accelerator. The modulator system has been previously described. This article offers project details and their resulting effect on modulator and component reliability. Prior to the project, the authors had collected four operating cycles (1991 through 1995) of MTTF data. In this discussion, the '91 data will be excluded since the modulators operated at 60 Hz. The five periods following the '91 run were reviewed due to the common repetition rate at 120 Hz.
NASA Technical Reports Server (NTRS)
Jamison, J. W.
1994-01-01
CFORM was developed by the Kennedy Space Center Robotics Lab to assist in linear control system design and analysis using closed form and transient response mechanisms. The program computes the closed form solution and transient response of a linear (constant coefficient) differential equation. CFORM allows a choice of three input functions: the Unit Step (a unit change in displacement); the Ramp function (step velocity); and the Parabolic function (step acceleration). It is only accurate in cases where the differential equation has distinct roots, and does not handle the case for roots at the origin (s=0). Initial conditions must be zero. Differential equations may be input to CFORM in two forms - polynomial and product of factors. In some linear control analyses, it may be more appropriate to use a related program, Linear Control System Design and Analysis (KSC-11376), which uses root locus and frequency response methods. CFORM was written in VAX FORTRAN for a VAX 11/780 under VAX VMS 4.7. It has a central memory requirement of 30K. CFORM was developed in 1987.
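The kind of computation CFORM performs can be reproduced today with SciPy, as sketched below; this is only an analogue of the program's function (closed-form/transient response of a constant-coefficient linear ODE to step, ramp, and parabolic inputs), not its VAX FORTRAN implementation, and the example transfer function is made up.

```python
import numpy as np
from scipy import signal

# Example constant-coefficient equation (made up):  y'' + 3 y' + 2 y = u(t),
# i.e. transfer function 1 / (s^2 + 3 s + 2) with distinct roots s = -1, -2.
sys = signal.lti([1.0], [1.0, 3.0, 2.0])

t = np.linspace(0.0, 10.0, 1000)
_, y_step = signal.step(sys, T=t)                   # unit step (displacement)
_, y_ramp, _ = signal.lsim(sys, U=t, T=t)           # ramp input (step velocity)
_, y_par, _ = signal.lsim(sys, U=0.5 * t**2, T=t)   # parabolic (step acceleration)

# For distinct roots the closed-form step response is
#   y(t) = 1/2 - exp(-t) + exp(-2 t)/2, which the numerical result reproduces.
print(np.allclose(y_step, 0.5 - np.exp(-t) + 0.5 * np.exp(-2.0 * t), atol=1e-3))
```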
Projection of two biphoton qutrits onto a maximally entangled state.
Halevy, A; Megidish, E; Shacham, T; Dovrat, L; Eisenberg, H S
2011-04-01
Bell state measurements, in which two quantum bits are projected onto a maximally entangled state, are an essential component of quantum information science. We propose and experimentally demonstrate the projection of two quantum systems with three states (qutrits) onto a generalized maximally entangled state. Each qutrit is represented by the polarization of a pair of indistinguishable photons (a biphoton). The projection is a joint measurement on both biphotons using standard linear optics elements. This demonstration enables the realization of quantum information protocols with qutrits, such as teleportation and entanglement swapping. © 2011 American Physical Society
ERIC Educational Resources Information Center
Gold, Eva; Pickron-Davis, Marcine; Brown, Chris
This report describes Philadelphia, Pennsylvania's, Alliance Organizing Project (AOP), which organized parents and families of Philadelphia's public school students to become full partners in Philadelphia school reform. It is one of five case studies in the Indicators Project on Education Organizing, which identified eight indicators of the impact…
LYNX community advocacy & service engagement (CASE) project final report.
DOT National Transportation Integrated Search
2009-05-14
This report is a final assessment of the Community Advocacy & Service Engagement (CASE) project, a LYNX-FTA research project designed : to study transit education and public engagement methods in Central Florida. In the Orlando area, as in other part...
The Rescue911 Emergency Response Information System (ERIS): A Systems Development Project Case
ERIC Educational Resources Information Center
Cohen, Jason F.; Thiel, Franz H.
2010-01-01
This teaching case presents a systems development project useful for courses in object-oriented analysis and design. The case has a strong focus on the business, methodology, modeling and implementation aspects of systems development. The case is centered on a fictitious ambulance and emergency services company (Rescue911). The case describes that…
Fixing convergence of Gaussian belief propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jason K; Bickson, Danny; Dolev, Danny
Gaussian belief propagation (GaBP) is an iterative message-passing algorithm for inference in Gaussian graphical models. It is known that when GaBP converges it converges to the correct MAP estimate of the Gaussian random vector, and simple sufficient conditions for its convergence have been established. In this paper we develop a double-loop algorithm for forcing convergence of GaBP. Our method computes the correct MAP estimate even in cases where standard GaBP would not have converged. We further extend this construction to compute least-squares solutions of over-constrained linear systems. We believe that our construction has numerous applications, since the GaBP algorithm is linked to the solution of linear systems of equations, which is a fundamental problem in computer science and engineering. As a case study, we discuss the linear detection problem. We show that, using our new construction, we are able to force convergence of Montanari's linear detection algorithm in cases where it would originally fail. As a consequence, we are able to significantly increase the number of users that can transmit concurrently.
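For background, a minimal single-loop GaBP solver for A x = b is sketched below; this is the plain algorithm whose convergence the paper's double-loop construction enforces, not the construction itself, and it is written for small dense symmetric matrices purely for illustration.

```python
import numpy as np

def gabp_solve(A, b, max_iter=200, tol=1e-9):
    """Plain (single-loop) Gaussian belief propagation for A x = b, A symmetric.
    Convergence is guaranteed only under conditions such as diagonal dominance
    or walk-summability; the paper's double-loop method removes that caveat."""
    n = A.shape[0]
    P = np.zeros((n, n))    # P[i, j]: precision of the message from i to j
    mu = np.zeros((n, n))   # mu[i, j]: mean of the message from i to j
    for _ in range(max_iter):
        P_new, mu_new = np.zeros_like(P), np.zeros_like(mu)
        for i in range(n):
            for j in range(n):
                if i == j or A[i, j] == 0.0:
                    continue
                # aggregate all messages into i except the one coming from j
                p_i = A[i, i] + P[:, i].sum() - P[j, i]
                h_i = b[i] + (P[:, i] * mu[:, i]).sum() - P[j, i] * mu[j, i]
                P_new[i, j] = -A[i, j] ** 2 / p_i
                mu_new[i, j] = h_i / A[i, j]
        if np.abs(P_new - P).max() + np.abs(mu_new - mu).max() < tol:
            P, mu = P_new, mu_new
            break
        P, mu = P_new, mu_new
    # node marginals: their means solve A x = b when GaBP has converged
    p_diag = np.diag(A) + P.sum(axis=0)
    h_diag = b + (P * mu).sum(axis=0)
    return h_diag / p_diag

# quick check on a diagonally dominant system
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gabp_solve(A, b), np.linalg.solve(A, b))
```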
García-Diego, Fernando-Juan; Sánchez-Quinche, Angel; Merello, Paloma; Beltrán, Pedro; Peris, Cristófol
2013-01-01
In this study we propose an electronic system for linear positioning of a magnet independently of its modulus, which can vary because of aging, differences in the fabrication process, etc. The system comprises a linear array of 24 Hall-effect sensors with proportional response. The data from all sensors are subject to a pretreatment (normalization) by row (position), making them independent of temporal variations in the magnet's field strength. We analyze the particular case of measuring individual flow during the milking of goats. A multiple regression analysis allowed us to calibrate the electronic system with a percentage of explanation of R2 = 99.96%. In our case, the uncertainty in the linear position of the magnet is 0.51 mm, which represents 0.019 L of goat milk. The on-farm test compared the results obtained by direct reading of the volume with those obtained by the proposed calibrated electronic system, achieving a percentage of explanation of 99.05%. PMID:23793020
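The calibration idea (per-sample normalization to remove the dependence on magnet strength, followed by multiple linear regression onto position) can be illustrated with the hedged NumPy sketch below; the synthetic sensor model, spacing, and all numbers are assumptions, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_sensors = 500, 24
sensor_pos = np.arange(n_sensors) * 5.0                 # assumed 5 mm pitch
true_pos = rng.uniform(10.0, 105.0, n_samples)          # magnet position (mm)
strength = rng.uniform(0.5, 1.5, n_samples)             # varying magnet modulus

# assumed bell-shaped response of each Hall sensor to the magnet, plus noise
readings = strength[:, None] * np.exp(-((sensor_pos - true_pos[:, None]) / 12.0) ** 2)
readings += rng.normal(0.0, 0.01, readings.shape)

# per-row normalization removes the dependence on the magnet's field strength
X = readings / readings.sum(axis=1, keepdims=True)
X = np.column_stack([X, np.ones(n_samples)])            # add intercept column

coef, *_ = np.linalg.lstsq(X, true_pos, rcond=None)     # multiple regression
pred = X @ coef
r2 = 1.0 - ((true_pos - pred) ** 2).sum() / ((true_pos - true_pos.mean()) ** 2).sum()
print(f"R^2 of the calibration fit: {r2:.4f}")
```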
Contextual Multi-armed Bandits under Feature Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yun, Seyoung; Nam, Jun Hyun; Mo, Sangwoo
We study contextual multi-armed bandit problems under linear realizability of rewards and uncertainty (or noise) on features. For the case of identical noise on features across actions, we propose an algorithm, coined NLinRel, having an O(T^(7/8)(log(dT) + K√d)) regret bound for T rounds, K actions, and d-dimensional feature vectors. Next, for the case of non-identical noise, we observe that popular linear hypotheses, including NLinRel, cannot achieve such sub-linear regret. Instead, under the assumption of Gaussian feature vectors, we prove that a greedy algorithm has an O(T^(2/3)√(log d)) regret bound with respect to the optimal linear hypothesis. Utilizing our theoretical understanding of the Gaussian case, we also design a practical variant of NLinRel, coined Universal-NLinRel, for arbitrary feature distributions. It first runs NLinRel to find the 'true' coefficient vector using feature uncertainties and then adjusts it to minimize its regret using the statistical feature information. We justify the performance of Universal-NLinRel on both synthetic and real-world datasets.
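The abstract does not spell out NLinRel or Universal-NLinRel; as background for the linear-hypothesis setting they build on, here is a minimal sketch of a standard LinUCB-style contextual linear bandit, with every name and parameter chosen purely for illustration.

```python
import numpy as np

class LinUCB:
    """Standard linear contextual bandit with UCB exploration (noiseless
    features); shown only as the baseline that noise-robust variants extend."""
    def __init__(self, d, alpha=1.0, reg=1.0):
        self.A = reg * np.eye(d)        # regularized Gram matrix
        self.b = np.zeros(d)            # accumulated reward-weighted features
        self.alpha = alpha              # exploration width

    def choose(self, contexts):         # contexts: (K, d), one row per action
        theta = np.linalg.solve(self.A, self.b)
        A_inv = np.linalg.inv(self.A)
        width = np.sqrt(np.einsum('ki,ij,kj->k', contexts, A_inv, contexts))
        return int(np.argmax(contexts @ theta + self.alpha * width))

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

# toy run against a hidden coefficient vector
rng = np.random.default_rng(0)
d, K = 5, 10
theta_star = rng.normal(size=d)
bandit = LinUCB(d)
for t in range(2000):
    ctx = rng.normal(size=(K, d))
    a = bandit.choose(ctx)
    bandit.update(ctx[a], ctx[a] @ theta_star + 0.1 * rng.normal())
print(np.round(np.linalg.solve(bandit.A, bandit.b), 2), np.round(theta_star, 2))
```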
Are non-linearity effects of absorption important for MAX-DOAS observations?
NASA Astrophysics Data System (ADS)
Pukite, Janis; Wang, Yang; Wagner, Thomas
2017-04-01
For scattered light observations the absorption optical depth depends non-linearly on the trace gas concentrations if their absorption is strong. This is the case because the Beer-Lambert law is generally not applicable to scattered light measurements, since many (i.e. more than one) light paths contribute to the measurement. While in many cases a linear approximation can be made, for scenarios with strong absorption non-linear effects cannot always be neglected. This is especially the case for observation geometries with spatially extended and diffuse light paths, above all in satellite limb geometry but also for nadir measurements. Fortunately, these non-linear effects can be quantified by expanding the radiative transfer equation in a Taylor series with respect to the trace gas absorption coefficients. In this way, if necessary, (1) the higher-order absorption structures can be described as separate fit parameters in the DOAS fit, and (2) the algorithm constraints of VCD and profile retrievals can be improved by considering higher-order sensitivity parameters. In this study we investigate the contribution of the higher-order absorption structures for the MAX-DOAS observation geometry under different atmospheric and ground properties (cloud and aerosol effects, trace gas amount, albedo) and viewing geometries (different Sun and viewing angles).
NASA Astrophysics Data System (ADS)
Hawken, A. J.; Granett, B. R.; Iovino, A.; Guzzo, L.; Peacock, J. A.; de la Torre, S.; Garilli, B.; Bolzonella, M.; Scodeggio, M.; Abbas, U.; Adami, C.; Bottini, D.; Cappi, A.; Cucciati, O.; Davidzon, I.; Fritz, A.; Franzetti, P.; Krywult, J.; Le Brun, V.; Le Fèvre, O.; Maccagni, D.; Małek, K.; Marulli, F.; Polletta, M.; Pollo, A.; Tasca, L. A. M.; Tojeiro, R.; Vergani, D.; Zanichelli, A.; Arnouts, S.; Bel, J.; Branchini, E.; De Lucia, G.; Ilbert, O.; Moscardini, L.; Percival, W. J.
2017-11-01
We aim to develop a novel methodology for measuring the growth rate of structure around cosmic voids. We identified voids in the completed VIMOS Public Extragalactic Redshift Survey (VIPERS), using an algorithm based on searching for empty spheres. We measured the cross-correlation between the centres of voids and the complete galaxy catalogue. The cross-correlation function exhibits a clear anisotropy in both VIPERS fields (W1 and W4), which is characteristic of linear redshift space distortions. By measuring the projected cross-correlation and then de-projecting it we are able to estimate the un-distorted cross-correlation function. We propose that, given a sufficiently well-measured cross-correlation function, one should be able to measure the linear growth rate of structure by applying a simple linear Gaussian streaming model for the redshift space distortions (RSD). Our study of voids in 306 mock galaxy catalogues mimicking the VIPERS fields suggests that VIPERS is capable of measuring β, the ratio of the linear growth rate to the bias, with an error of around 25%. Applying our method to the VIPERS data, we find a value for the redshift space distortion parameter of β = 0.423 (+0.104, -0.108) which, given the bias of the galaxy population we use, gives a linear growth rate of fσ8 = 0.296 (+0.075, -0.078) at z = 0.727. These results are consistent with values observed in parallel VIPERS analyses that use standard techniques. Based on observations collected at the European Southern Observatory, Cerro Paranal, Chile, using the Very Large Telescope under programs 182.A-0886 and partly 070.A-9007. Also based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.
Score-moment combined linear discrimination analysis (SMC-LDA) as an improved discrimination method.
Han, Jintae; Chung, Hoeil; Han, Sung-Hwan; Yoon, Moon-Young
2007-01-01
A new discrimination method, called score-moment combined linear discrimination analysis (SMC-LDA), has been developed and its performance evaluated using three practical spectroscopic datasets. The key concept of SMC-LDA is to use not only the scores from principal component analysis (PCA) but also the moments of the spectrum as inputs for LDA, in order to improve discrimination. Along with the conventional scores, moments are used in spectroscopic fields as an effective alternative representation of spectral features. Three different approaches were considered. Initially, the scores generated from PCA were projected onto a two-dimensional feature space by maximizing Fisher's criterion function (conventional PCA-LDA). Next, the same procedure was performed using only moments. Finally, both scores and moments were utilized simultaneously for LDA. To evaluate discrimination performance, three different spectroscopic datasets were employed: (1) infrared (IR) spectra of normal and malignant stomach tissue, (2) near-infrared (NIR) spectra of diesel and light gas oil (LGO) and (3) Raman spectra of Chinese and Korean ginseng. For each case, the best discrimination results were achieved when both scores and moments were used for LDA (SMC-LDA). Since the spectral representation character of the moments differs from that of the scores, including both for LDA provided more diversified and descriptive information.
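A hedged sketch of the combined score-plus-moment input to LDA is given below with scikit-learn and synthetic two-class spectra; the moment definition and all data are assumptions for illustration, since the abstract does not give the exact formulas.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def spectral_moments(spectra, axis_values, orders=(2, 3, 4)):
    """Mean plus central moments of each (intensity-normalized) spectrum;
    an assumed moment definition used only for illustration."""
    w = spectra / spectra.sum(axis=1, keepdims=True)
    mean = w @ axis_values
    feats = [mean] + [(((axis_values - mean[:, None]) ** n) * w).sum(axis=1)
                      for n in orders]
    return np.column_stack(feats)

def smc_features(spectra, axis_values, n_scores=5):
    scores = PCA(n_components=n_scores).fit_transform(spectra)   # PCA scores
    return np.hstack([scores, spectral_moments(spectra, axis_values)])

# synthetic two-class spectra (made up) just to exercise the pipeline
rng = np.random.default_rng(0)
wn = np.linspace(1000.0, 1800.0, 400)
class0 = 0.1 + np.exp(-((wn - 1400.0) / 60.0) ** 2) + 0.02 * rng.normal(size=(40, 400))
class1 = 0.1 + np.exp(-((wn - 1450.0) / 80.0) ** 2) + 0.02 * rng.normal(size=(40, 400))
X = np.vstack([class0, class1])
y = np.r_[np.zeros(40), np.ones(40)]
acc = cross_val_score(LinearDiscriminantAnalysis(), smc_features(X, wn), y, cv=5)
print("cross-validated accuracy:", acc.mean())
```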
NASA Astrophysics Data System (ADS)
Kodama, Hajime; Watanabe, Manabu; Sato, Eiichi; Oda, Yasuyuki; Hagiwara, Osahiko; Matsukiyo, Hiroshi; Osawa, Akihiro; Enomoto, Toshiyuki; Kusachi, Shinya; Sato, Shigehiro; Ogawa, Akira
2013-07-01
X-ray photons are directly detected using a 100 MHz ready-made silicon P-intrinsic-N X-ray diode (Si-PIN-XD). The Si-PIN-XD is shielded using an aluminum case with a 25-µm-thick aluminum window and a BNC connector. The photocurrent from the Si-PIN-XD is amplified by charge sensitive and shaping amplifiers, and the event pulses are sent to a multichannel analyzer (MCA) to measure X-ray spectra. At a tube voltage of 90 kV, we observe K-series characteristic X-rays of tungsten. Photon-counting computed tomography (PC-CT) is accomplished by repeated linear scans and rotations of an object, and projection curves of the object are obtained by linear scanning at a tube current of 2.0 mA. The exposure time for obtaining a tomogram is 10 min with scan steps of 0.5 mm and rotation steps of 1.0°. At a tube voltage of 90 kV, the maximum count rate is 150 kcps. We carry out PC-CT using gadolinium media and confirm the energy-dispersive effect with changes in the lower level voltage of the event pulse using a comparator.
Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl
2016-05-25
For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equation from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. As a result, the reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.
Space Radiation Cancer Risk Projections and Uncertainties - 2010
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Kim, Myung-Hee Y.; Chappell, Lori J.
2011-01-01
Uncertainties in estimating health risks from galactic cosmic rays greatly limit space mission lengths and potential risk mitigation evaluations. NASA limits astronaut exposures to a 3% risk of exposure-induced death and protects against uncertainties using an assessment of 95% confidence intervals in the projection model. Revisions to this model for lifetime cancer risks from space radiation and new estimates of model uncertainties are described here. We review models of space environments and transport code predictions of organ exposures, and characterize uncertainties in these descriptions. We summarize recent analysis of low linear energy transfer radio-epidemiology data, including revision to Japanese A-bomb survivor dosimetry, longer follow-up of exposed cohorts, and reassessments of dose and dose-rate reduction effectiveness factors. We compare these projections and uncertainties with earlier estimates. Current understanding of radiation quality effects and recent data on factors of relative biological effectiveness and particle track structure are reviewed. Recent radiobiology experiment results provide new information on solid cancer and leukemia risks from heavy ions. We also consider deviations from the paradigm of linearity at low doses of heavy ions motivated by non-targeted effects models. New findings and knowledge are used to revise the NASA risk projection model for space radiation cancer risks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jan Hesthaven
2012-02-06
Final report for DOE Contract DE-FG02-98ER25346 entitled Parallel High Order Accuracy Methods Applied to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences. Principal Investigator Jan S. Hesthaven, Division of Applied Mathematics, Brown University, Box F, Providence, RI 02912, Jan.Hesthaven@Brown.edu, February 6, 2012. Note: This grant was originally awarded to Professor David Gottlieb and the majority of the work envisioned reflects his original ideas. However, when Prof. Gottlieb passed away in December 2008, Professor Hesthaven took over as PI to ensure proper mentoring of students and postdoctoral researchers already involved in the project. This unusual circumstance has naturally impacted the project and its timeline. However, as the report reflects, the planned work has been accomplished and some activities beyond the original scope have been pursued with success. Project overview and main results: The effort in this project focuses on the development of high order accurate computational methods for the solution of hyperbolic equations with application to problems with strong shocks. While the methods are general, emphasis is on applications to gas dynamics with strong shocks.
Clustered Multi-Task Learning for Automatic Radar Target Recognition
Li, Cong; Bao, Weimin; Xu, Luping; Zhang, Hua
2017-01-01
Model training is a key technique for radar target recognition. Traditional model training algorithms in the framework of single-task learning ignore the relationships among multiple tasks, which degrades recognition performance. In this paper, we propose a clustered multi-task learning method, which can reveal and share the multi-task relationships for radar target recognition. To make full use of these relationships, the latent multi-task relationships in the projection space are also taken into consideration. Specifically, a constraint term in the projection space is proposed, the main idea of which is that multiple tasks within a close cluster should be close to each other in the projection space. In the proposed method, the cluster structures and multi-task relationships can be autonomously learned and utilized in both the original and the projected space. In view of the non-linear characteristics of radar targets, the proposed method is extended to a non-linear kernel version and the corresponding non-linear multi-task solving method is proposed. Comprehensive experimental studies on a simulated high-resolution range profile dataset and the MSTAR SAR public database verify the superiority of the proposed method over some related algorithms. PMID:28953267
Soft drink consumption and gestational diabetes risk in the SUN project.
Donazar-Ezcurra, Mikel; Lopez-Del Burgo, Cristina; Martinez-Gonzalez, Miguel A; Basterra-Gortari, Francisco J; de Irala, Jokin; Bes-Rastrollo, Maira
2018-04-01
Gestational diabetes mellitus (GDM) prevalence is increasing worldwide. To the best of our knowledge, the specific evaluation of soft drink consumption as a risk factor for developing GDM has only been conducted in the Nurses' Health Study II. We investigated the incidence of GDM according to soft drink consumption in the SUN project. The "Seguimiento Universidad de Navarra" (SUN) project is a prospective and dynamic cohort which included data from 3396 women who reported at least one pregnancy between December 1999 and March 2012. A validated 136-item semi-quantitative food frequency questionnaire was used to assess soft drink consumption. Four categories of sugar-sweetened soft drink (SSSD) and diet soft drink (DSD) consumption (servings) were established: rarely or never (<1/month), low (1-3/month), intermediate (>3/month and ≤1/week) and high (≥2/week). Potential confounders were adjusted for through non-conditional logistic regression models. During the follow-up, we identified 172 incident cases of GDM. After adjusting for age, baseline body mass index, family history of diabetes, smoking, total energy intake, physical activity, parity, fast-food consumption, adherence to the Mediterranean dietary pattern, alcohol intake, multiple pregnancy, cardiovascular disease/hypertension at baseline, fiber intake, following a special diet and snacking, SSSD consumption was significantly associated with an increased risk of incident GDM, with multivariable-adjusted odds ratios (OR) of 2.03 (95% confidence interval [CI]: 1.25-3.31) and 1.67 (95% CI: 1.01-2.77) for the highest and intermediate categories, respectively, versus the lowest category (p for linear trend: 0.006). Conversely, DSD consumption was not associated with GDM incidence (adjusted OR: 0.82; 95% CI: 0.52-1.31) for the highest versus the lowest category (p for linear trend: 0.258). Additional sensitivity analyses did not change the results. Higher consumption of SSSDs before pregnancy was an independent risk factor for GDM; however, no association was observed between DSD consumption and GDM risk. Copyright © 2017 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
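The adjusted odds ratios reported above come from non-conditional logistic regression; a minimal, generic sketch of how such an OR and its confidence interval are obtained (with made-up variable names and synthetic data, not the SUN cohort) is shown below.

```python
import numpy as np
import statsmodels.api as sm

# synthetic example: binary outcome (GDM) vs. an exposure indicator plus one
# continuous confounder; all data are made up for illustration only.
rng = np.random.default_rng(0)
n = 3000
exposure = rng.integers(0, 2, n)                 # e.g. high SSSD consumption
age = rng.normal(30.0, 4.0, n)                   # confounder
logit = -3.0 + 0.7 * exposure + 0.05 * (age - 30.0)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([exposure, age]))
fit = sm.Logit(y, X).fit(disp=False)             # non-conditional logistic model
or_point = np.exp(fit.params[1])                 # adjusted OR for the exposure
or_ci = np.exp(fit.conf_int()[1])                # 95% confidence interval
print(f"adjusted OR = {or_point:.2f}, 95% CI = ({or_ci[0]:.2f}, {or_ci[1]:.2f})")
```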
Linear Optimization and Image Reconstruction
1994-06-01
final example is again a novel one. We formulate the problem of computer assisted tomographic (CAT) image reconstruction as a linear optimization...possibility that a patient, Fred, suffers from a brain tumor. Further, the physician opts to make use of the CAT (Computer Aided Tomography) scan device...and examine the inside of Fred's head without exploratory surgery. The CAT scan machine works by projecting a finite number of X-rays of known
Stochastic Dynamic Mixed-Integer Programming (SD-MIP)
2015-05-05
stochastic linear programming (SLP) problems. By using a combination of ideas from cutting plane theory of deterministic MIP (especially disjunctive...developed to date. b) As part of this project, we have also developed tools for very large scale Stochastic Linear Programming (SLP). There are...several reasons for this. First, SLP models continue to challenge many of the fastest computers to date, and many applications within the DoD (e.g
ERIC Educational Resources Information Center
Keller, Edward L.
This unit, which looks at applications of linear algebra to population studies, is designed to help pupils: (1) understand an application of matrix algebra to the study of populations; (2) see how knowledge of eigenvalues and eigenvectors is useful in studying powers of matrices; and (3) be briefly exposed to some difficult but interesting…
Development of Driver/Vehicle Steering Interaction Models for Dynamic Analysis
1988-12-01
Figure 5-10: The Linearized Single-Unit Vehicle Model. Figure 5-11: Interpretation of the Single-Unit Model...The starting point for the driver modelling research conducted under this project was a linear preview control model originally proposed by MacAdam 1...regardless of its origin, can pass at least the elementary validation test of exhibiting "cross-over model"-like behavior in the vicinity of its
ERIC Educational Resources Information Center
Hook, Colin; Ethridge, James
As part of Project IMPACT's efforts to identify and develop procedures for complying with the impact requirements of Public Law 94-482, a case study was made of Illinois Projects in Horticulture. Fourteen horticulture projects in high schools and junior colleges were discovered through a previous study, personal interviews with two University of…
A refinement of the combination equations for evaporation
Milly, P.C.D.
1991-01-01
Most combination equations for evaporation rely on a linear expansion of the saturation vapor-pressure curve around the air temperature. Because the temperature at the surface may differ from this temperature by several degrees, and because the saturation vapor-pressure curve is nonlinear, this approximation leads to a certain degree of error in those evaporation equations. It is possible, however, to introduce higher-order polynomial approximations for the saturation vapor-pressure curve and to derive a family of explicit equations for evaporation, having any desired degree of accuracy. Under the linear approximation, the new family of equations for evaporation reduces, in particular cases, to the combination equations of H. L. Penman (Natural evaporation from open water, bare soil and grass, Proc. R. Soc. London, Ser. A193, 120-145, 1948) and of subsequent workers. Comparison of the linear and quadratic approximations leads to a simple approximate expression for the error associated with the linear case. Equations based on the conventional linear approximation consistently underestimate evaporation, sometimes by a substantial amount. © 1991 Kluwer Academic Publishers.
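Written out, the expansion in question is a Taylor series of the saturation vapour pressure about the air temperature (our notation, not the paper's); Penman-type combination equations keep only the linear term, while the family of equations above retains higher-order terms:

```latex
% Saturation vapour pressure expanded about the air temperature T_a:
e_s(T_s) \;=\; e_s(T_a) \;+\; \Delta\,(T_s - T_a)
          \;+\; \tfrac{1}{2}\,\frac{d^{2}e_s}{dT^{2}}\Big|_{T_a}\,(T_s - T_a)^{2}
          \;+\; \cdots ,
\qquad
\Delta \equiv \frac{d e_s}{dT}\Big|_{T_a}.
```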
Limit cycles in planar piecewise linear differential systems with nonregular separation line
NASA Astrophysics Data System (ADS)
Cardin, Pedro Toniol; Torregrosa, Joan
2016-12-01
In this paper we deal with planar piecewise linear differential systems defined in two zones. We consider the case when the two linear zones are angular sectors of angles α and 2π - α, respectively, for α ∈ (0, π). We study the problem of determining lower bounds for the number of isolated periodic orbits in such systems using Melnikov functions. These limit cycles appear studying higher order piecewise linear perturbations of a linear center. It is proved that the maximum number of limit cycles that can appear up to a sixth order perturbation is five. Moreover, for these values of α, we prove the existence of systems with four limit cycles up to fifth order and, for α = π/2, we provide an explicit example with five up to sixth order. In general, the nonregular separation line increases the number of periodic orbits in comparison with the case where the two zones are separated by a straight line.
Shi, Hongli; Yang, Zhi; Luo, Shuqian
2017-01-01
The beam hardening artifact is one of the most important types of metal artifact in polychromatic X-ray computed tomography (CT) and can seriously impair image quality. An iterative approach is proposed to reduce the beam hardening artifact caused by metallic components in polychromatic X-ray CT. According to the Lambert-Beer law, the (detected) projections can be expressed as monotonic nonlinear functions of the element geometry projections, which are the theoretical projections produced only by the pixel intensities (image grayscale) of a certain element (component). With the help of prior knowledge of the spectrum distribution of the X-ray source and the energy-dependent attenuation coefficients, these functions have explicit expressions. The Newton-Raphson algorithm is employed to solve the functions. The solutions are named the synthetical geometry projections, which are nearly linear weighted sums of the element geometry projections with respect to the mean of each attenuation coefficient. In this process, the attenuation coefficients are modified so that the Newton-Raphson iteration functions satisfy the convergence conditions of fixed-point iteration (FPI), so that the solutions approach the true synthetical geometry projections stably. The underlying images are obtained from the projections by general reconstruction algorithms such as filtered back-projection (FBP). The image gray values are adjusted according to the attenuation coefficient means to obtain proper CT numbers. Several examples demonstrate that the proposed approach is efficient in reducing beam hardening artifacts and has satisfactory performance in terms of some general criteria. In a simulation example, the normalized root mean square difference (NRMSD) is reduced by 17.52% compared with a recent algorithm. Since the element geometry projections are free from the effect of beam hardening, their nearly linear weighted sums, the synthetical geometry projections, are almost free from the effect of beam hardening. By computing the synthetical geometry projections, the proposed approach becomes quite efficient in reducing beam hardening artifacts.
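The inversion step (solving the monotone nonlinear projection function by Newton-Raphson) can be illustrated for the simplified single-material case below; the two-bin spectrum, attenuation values, and function names are assumptions, not the paper's calibration.

```python
import numpy as np

def poly_projection(g, weights, mu):
    """Polychromatic projection for a single material of path length g:
    p(g) = -ln( sum_k w_k exp(-mu_k g) ), with normalized spectrum weights w_k
    and energy-dependent attenuation coefficients mu_k (single-material
    simplification, used only for illustration)."""
    return -np.log(np.sum(weights * np.exp(-mu * g)))

def invert_projection(p_meas, weights, mu, g0=1.0, n_iter=20, tol=1e-10):
    """Newton-Raphson inversion of the monotone projection function, i.e. the
    step that recovers a beam-hardening-free geometry projection."""
    g = g0
    for _ in range(n_iter):
        e = np.exp(-mu * g)
        f = -np.log(np.sum(weights * e)) - p_meas
        dfdg = np.sum(weights * mu * e) / np.sum(weights * e)   # dp/dg > 0
        g_next = g - f / dfdg
        if abs(g_next - g) < tol:
            return g_next
        g = g_next
    return g

# toy check with an assumed two-bin spectrum and a known path length
weights = np.array([0.6, 0.4])     # normalized spectrum weights (assumed)
mu = np.array([0.30, 0.20])        # attenuation coefficients in 1/cm (assumed)
p = poly_projection(4.0, weights, mu)
print(invert_projection(p, weights, mu))   # recovers ~4.0
```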
Li, Yixue; Li, Guoxing; Zeng, Qiang; Liang, Fengchao; Pan, Xiaochuan
2018-02-01
Temperature has been associated with population health, but few studies have projected the future temperature-related years of life lost attributable to climate change. To project the future temperature-related disease burden in Tianjin, we selected years of life lost (YLL) as the dependent variable to explore the YLL attributable to climate change. A generalized linear model (GLM) and a distributed lag non-linear model were combined to assess the non-linear and delayed effects of temperature on the YLL of non-accidental mortality. We then calculated the YLL changes attributable to future climate scenarios in 2055 and 2090. The relationships of daily mean temperature with the YLL of non-accidental mortality were basically U-shaped. Both the rise of daily mean temperature on high-temperature days and its drop on low-temperature days caused an increase in YLL and non-accidental deaths. The temperature-related YLL will worsen if future climate change exceeds 2 °C. In addition, the adverse effects of extreme temperature on YLL occurred more quickly than those of the overall temperature. The impact of low temperature was greater than that of high temperature. Men were more vulnerable to high temperature than women. This analysis highlights that the government should formulate environmental policies to reach the Paris Agreement goal. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amai, W.; Espinoza, J. Jr.; Fletcher, D.R.
1997-06-01
This Software Requirements Specification (SRS) describes the features to be provided by the software for the GIS-T/ISTEA Pooled Fund Study Phase C Linear Referencing Engine project. This document conforms to the recommendations of IEEE Standard 830-1984, IEEE Guide to Software Requirements Specification (Institute of Electrical and Electronics Engineers, Inc., 1984). The software specified in this SRS is a proof-of-concept implementation of the Linear Referencing Engine as described in the GIS-T/ISTEA pooled Fund Study Phase B Summary, specifically Sheet 13 of the Phase B object model. The software allows an operator to convert between two linear referencing methods and a datummore » network.« less
Hunt, Daniel; Knuchel-Takano, André; Jaccard, Abbygail; Bhimjiyani, Arti; Retat, Lise; Selvarajah, Chit; Brown, Katrina; Webber, Laura L; Brown, Martin
2018-03-01
Smoking is still the most preventable cause of cancer, and a leading cause of premature mortality and health inequalities in the UK. This study modelled the health and economic impacts of achieving a 'tobacco-free' ambition (TFA) where, by 2035, less than 5% of the population smoke tobacco across all socioeconomic groups. A non-linear multivariate regression model was fitted to cross-sectional smoking data to create projections to 2035. These projections were used to predict the future incidence and costs of 17 smoking-related diseases using a microsimulation approach. The health and economic impacts of achieving a TFA were evaluated against a predicted baseline scenario, where current smoking trends continue. If trends continue, the prevalence of smoking in the UK was projected to be 10% by 2035, well above a TFA. If this ambition were achieved by 2035, it could mean 97 300 ± 5 300 new cases of smoking-related diseases are avoided by 2035 (tobacco-related cancers: 35 900 ± 4 100; chronic obstructive pulmonary disease: 29 000 ± 2 700; stroke: 24 900 ± 2 700; coronary heart disease: 7 600 ± 2 700), including around 12 350 diseases avoided in 2035 alone. The consequence of this health improvement is predicted to avoid £67 ± 8 million in direct National Health Service and social care costs, and £548 million in non-health costs, in 2035 alone. These findings strengthen the case to set bold targets on long-term declines in smoking prevalence to achieve a tobacco 'endgame'. Results demonstrate the health and economic benefits that meeting a TFA can achieve over just 20 years. Effective ambitions and policy interventions are needed to reduce the disease and economic burden of smoking. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Monitoring hydrofrac-induced seismicity by surface arrays - the DHM-Project Basel case study
NASA Astrophysics Data System (ADS)
Blascheck, P.; Häge, M.; Joswig, M.
2012-04-01
The method "nanoseismic monitoring" was applied during the hydraulic stimulation at the Deep-Heat-Mining-Project (DHM-Project) Basel. Two small arrays in a distance of 2.1 km and 4.8 km to the borehole recorded continuously for two days. During this time more than 2500 seismic events were detected. The method of the surface monitoring of induced seismicity was compared to the reference which the hydrofrac monitoring presented. The latter was conducted by a network of borehole seismometers by Geothermal Explorers Limited. Array processing provides a outlier resistant, graphical jack-knifing localization method which resulted in a average deviation towards the reference of 850 m. Additionally, by applying the relative localization master-event method, the NNW-SSE strike direction of the reference was confirmed. It was shown that, in order to successfully estimate the magnitude of completeness as well as the b-value at the event rate and detection sensibility present, 3 h segments of data are sufficient. This is supported by two segment out of over 13 h of evaluated data. These segments were chosen so that they represent a time during the high seismic noise during normal working hours in daytime as well as the minimum anthropogenic noise at night. The low signal-to-noise ratio was compensated by the application of a sonogram event detection as well as a coincidence analysis within each array. Sonograms allow by autoadaptive, non-linear filtering to enhance signals whose amplitudes are just above noise level. For these events the magnitude was determined by the master-event method, allowing to compute the magnitude of completeness by the entire-magnitude-range method provided by the ZMAP toolbox. Additionally, the b-values were determined and compared to the reference values. An introduction to the method of "nanoseismic monitoring" will be given as well as the comparison to reference data in the Basel case study.
Creswell, Jacob; Sahu, Suvanand; Blok, Lucie; Bakker, Mirjam I; Stevens, Robert; Ditiu, Lucica
2014-01-01
Globally, TB notifications have stagnated since 2007, and sputum smear-positive notifications have been declining despite policies to improve case detection. We evaluate the results of 28 interventions focused on improving TB case detection. We measured additional sputum smear-positive cases treated, defined as the intervention area's increase in case notification during the project compared with the previous year. Projects were encouraged to select control areas and collect historical notification data. We used time-series negative binomial regression for over-dispersed cross-sectional data, accounting for fixed and random effects, to test the individual projects' effects on TB notification while controlling for trend and control populations. Twenty-eight projects, 19 with control populations, completed at least four quarters of case-finding activities, covering a population of 89.2 million. Among all projects, sputum smear-positive (SS+) TB notifications increased by 24.9% and annualized notification rates increased from 69.1 to 86.2/100,000 (p = 0.0209) during the interventions. Among the 19 projects with control populations, SS+ TB case notifications increased by 36.9%, while a 3.6% decrease was observed in the control populations. Fourteen (74%) of the 19 projects' SS+ TB notification rates in intervention areas increased from the baseline to the intervention period when controlling for historical trends and notifications in control areas. The interventions were associated with large increases in TB notifications across many settings, using an array of interventions. Many people with TB are not reached using current approaches. Different methods and interventions tailored to local realities are urgently needed.
NASA Astrophysics Data System (ADS)
Shin, Sungkyun; Müller, Detlef; Kim, Y. J.; Tatarov, Boyan; Shin, Dongho; Seifert, Patric; Noh, Young Min
2013-01-01
Linear particle depolarization ratios were retrieved from observations with a multiwavelength Raman lidar at the Gwangju Institute of Science and Technology (GIST), Korea (35.11°N, 126.54°E). The measurements were carried out in spring (March to May) 2011. Transmission ratio measurements were performed to address the problem of depolarization-dependent transmission at the lidar receiver and were applied, for the first time in Korea, to correct the retrieved depolarization ratio of Asian dust. The analyzed data from the GIST multiwavelength Raman lidar were classified into three categories according to the linear particle depolarization ratios: pure Asian dust on 21 March, an intermediate case of Asian dust mixed with urban pollution on 13 May, and haze on 10 April. The measured transmission ratios were applied to each of these cases. We found that the transmission ratio is needed to retrieve an accurate depolarization ratio for Asian dust and is also useful for distinguishing the mixed dust particles of the intermediate case from haze. The particle depolarization ratio at 532 nm was approximately 0.25 for pure Asian dust and 0.14 for the intermediate case. The linear particle depolarization ratios of pure Asian dust observed with the GIST multiwavelength Raman lidar were compared with the linear particle depolarization ratios of Saharan dust observed in Morocco and of Asian dust observed in Japan and China.
Worker Dislocation. Case Studies of Causes and Cures.
ERIC Educational Resources Information Center
Cook, Robert F., Ed.
Case studies were made of the following dislocated worker programs: Cummins Engine Company Dislocated Worker Project; GM-UAW Metropolitan Pontiac Retraining and Employment Program; Minnesota Iron Range Dislocated Worker Project; Missouri Dislocated Worker Program Job Search Assistance, Inc.; Hillsborough, North Carolina, Dislocated Worker Project;…
The Baade-Wesselink projection factor of the δ-Scuti stars AI Vel and β Cas
NASA Astrophysics Data System (ADS)
Guiglion, G.; Nardetto, N.; Domiciano de Souza, A.; Mathias, P.; Mourard, D.; Poretti, E.
2012-12-01
The Baade-Wesselink method of distance determination is based on the oscillations of pulsating stars. After determining the angular diameter and the linear radius variations, the distance is derived by a simple ratio. The linear radius variation is measured by integrating the pulsation velocity (hereafter V_{puls}) over one pulsation cycle. However, from observations we only have access to the radial velocity (V_{rad}), because of the projection along the line of sight. The projection factor, used to convert the radial velocity into the pulsation velocity, is defined by p = V_{puls} / V_{rad}. We aim to derive the projection factor for two δ-Scuti stars, the high-amplitude pulsator AI Vel and the fast rotator β Cas. The geometric component of the projection factor is derived using a limb-darkening model of the intensity distribution for AI Vel, and a fast-rotator model for β Cas. Then, by comparing the radial velocity curves of several spectral lines forming at different levels in the atmosphere, we directly derive the velocity gradient (in a part of the atmosphere of the star) using SOPHIE/OHP data for β Cas and HARPS/ESO data for AI Vel, which is used to derive a dynamical projection factor for both stars. We find p = 1.44 ± 0.05 for AI Vel and p = 1.41 ± 0.25 for β Cas. By comparing Cepheids and δ-Scuti stars, these results bring valuable insights into the dynamical structure of pulsating star atmospheres.
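For context, the "simple ratio" underlying the Baade-Wesselink method can be written as follows (standard textbook form, with sign conventions omitted; not taken from this paper):

```latex
% Linear radius variation from the projection-corrected radial velocity, and
% the distance from the ratio of linear to angular size variations:
\Delta R(t) = \int_{0}^{t} p\, V_{\mathrm{rad}}(t')\, dt', \qquad
\theta(t) = \frac{2R(t)}{d} \;\;\Longrightarrow\;\;
d = \frac{2\,\Delta R}{\Delta \theta}, \qquad p = \frac{V_{\mathrm{puls}}}{V_{\mathrm{rad}}}.
```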
The Case Study as Research Heuristic: Lessons from the R&D Value Mapping Project.
ERIC Educational Resources Information Center
Bozeman, Barry; Klein, Hans K.
1999-01-01
Examines the role of prototype case studies as the foundation for later evaluation through two studies from the "R&D Value Mapping Project," a study that will involve more than 30 cases. Explores the usefulness of case studies in defining and assessing subsequent research efforts. (SLD)
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase-linear filter for soliton suppression takes the form of a laddered series of stages of non-commensurate low-pass filters, each low-pass filter having a series-coupled inductance (L) and, to ground, a reverse-biased, voltage-dependent varactor diode which acts as a variable capacitance (C). The L and C values are set to levels which correspond to a linear or conventional phase-linear filter. The inductance is mapped directly from that of an equivalent nonlinear transmission line, and the capacitance is mapped from the linear case using a large-signal equivalent of a nonlinear transmission line.
Axisymmetric problem of fretting wear for a foundation with a nonuniform coating and rough punch
NASA Astrophysics Data System (ADS)
Manzhirov, A. V.; Kazakov, K. E.
2018-05-01
The axisymmetric contact problem with fretting wear for an elastic foundation with a longitudinally nonuniform (surface-nonuniform) coating and a rough rigid punch has been solved for the first time. The case of linear wear is considered. The nonuniformity of the coating and the punch roughness are described by different rapidly changing functions. This strong nonuniformity arises when coatings are deposited using modern additive manufacturing technologies. The problem is reduced to the solution of an integral equation with two different integral operators: a compact self-adjoint positive-definite operator with respect to the coordinate and a non-self-adjoint Volterra integral operator with respect to time. The solution is obtained in series form using the authors' projection method. The efficiency of the proposed approach for constructing a high-accuracy approximate solution of the problem (with only a few expansion terms retained) is demonstrated.
Accretion-driven turbulence in filaments - I. Non-gravitational accretion
NASA Astrophysics Data System (ADS)
Heigl, S.; Burkert, A.; Gritschneder, M.
2018-03-01
We study accretion-driven turbulence for different inflow velocities in star-forming filaments using the code RAMSES. Filaments are rarely isolated objects, and their gravitational potential will lead to radially dominated accretion. In the non-gravitational case, accretion by itself can already provoke non-isotropic, radially dominated turbulent motions responsible for the complex structure and non-thermal line widths observed in filaments. We find that there is a direct linear relation between the absolute value of the total density-weighted velocity dispersion and the infall velocity. The turbulent velocity dispersion in the filaments is independent of the sound speed or any net flow along the filament. We show that the density-weighted velocity dispersion acts as an additional pressure term, supporting the filament in hydrostatic equilibrium. Comparing with observations, we find that the projected non-thermal line width variation is generally subsonic, independent of inflow velocity.
Chaos and Forecasting - Proceedings of the Royal Society Discussion Meeting
NASA Astrophysics Data System (ADS)
Tong, Howell
1995-04-01
The Table of Contents for the full book PDF is as follows:
* Preface
* Orthogonal Projection, Embedding Dimension and Sample Size in Chaotic Time Series from a Statistical Perspective
* A Theory of Correlation Dimension for Stationary Time Series
* On Prediction and Chaos in Stochastic Systems
* Locally Optimized Prediction of Nonlinear Systems: Stochastic and Deterministic
* A Poisson Distribution for the BDS Test Statistic for Independence in a Time Series
* Chaos and Nonlinear Forecastability in Economics and Finance
* Paradigm Change in Prediction
* Predicting Nonuniform Chaotic Attractors in an Enzyme Reaction
* Chaos in Geophysical Fluids
* Chaotic Modulation of the Solar Cycle
* Fractal Nature in Earthquake Phenomena and its Simple Models
* Singular Vectors and the Predictability of Weather and Climate
* Prediction as a Criterion for Classifying Natural Time Series
* Measuring and Characterising Spatial Patterns, Dynamics and Chaos in Spatially-Extended Dynamical Systems and Ecologies
* Non-Linear Forecasting and Chaos in Ecology and Epidemiology: Measles as a Case Study
Cluster synchronization of community network with distributed time delays via impulsive control
NASA Astrophysics Data System (ADS)
Leng, Hui; Wu, Zhao-Yan
2016-11-01
Cluster synchronization is an important dynamical behavior in community networks and deserves further investigation. A community network with distributed time delays is investigated in this paper. To achieve cluster synchronization, an impulsive control scheme is introduced to design proper controllers, and an adaptive strategy is adopted to make the impulsive controllers unified for different networks. By taking advantage of the linear matrix inequality technique and constructing Lyapunov functions, synchronization criteria with respect to the impulsive gains, instants, and system parameters are obtained for the non-adaptive case and then generalized to the adaptive case. Finally, numerical examples are presented to demonstrate the effectiveness of the theoretical results. Project supported by the National Natural Science Foundation of China (Grant No. 61463022), the Natural Science Foundation of Jiangxi Province, China (Grant No. 20161BAB201021), and the Natural Science Foundation of Jiangxi Educational Committee, China (Grant No. GJJ14273).
Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho
2014-01-01
The primary goal of this project is to implement an iterative statistical image reconstruction algorithm, in this case maximum likelihood expectation maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark that handles graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge of computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting. PMID:27081299
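For reference, the MLEM update that is being ported to Spark/GraphX has a compact serial form; the NumPy sketch below shows that update on a small dense toy system and is not the distributed graph implementation described in the paper.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Plain MLEM iteration for emission tomography:
        x_{k+1} = x_k / (A^T 1) * A^T ( y / (A x_k) ),
    with A the system matrix, y the measured counts, and x the image."""
    x = np.ones(A.shape[1])                    # uniform initial image
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                           # forward projection
        ratio = y / np.maximum(proj, 1e-12)    # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# toy check with a random system matrix and noiseless data
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, (200, 50))
x_true = rng.uniform(0.5, 2.0, 50)
x_rec = mlem(A, A @ x_true, n_iter=500)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```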
NASA Astrophysics Data System (ADS)
Wang, Lanning; Chen, Weimin; Li, Lizhen
2017-06-01
This paper is concerned with the problems of dissipative stability analysis and control of the two-dimensional (2-D) Fornasini-Marchesini local state-space (FM LSS) model. Based on the characteristics of the system model, a novel definition of 2-D FM LSS (Q, S, R)-α-dissipativity is given first, and then a sufficient condition in terms of linear matrix inequality (LMI) is proposed to guarantee the asymptotical stability and 2-D (Q, S, R)-α-dissipativity of the systems. As its special cases, 2-D passivity performance and 2-D H∞ performance are also discussed. Furthermore, by use of this dissipative stability condition and projection lemma technique, 2-D (Q, S, R)-α-dissipative state-feedback control problem is solved as well. Finally, a numerical example is given to illustrate the effectiveness of the proposed method.
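LMI-based conditions of this kind are checked numerically with semidefinite programming. The sketch below is a minimal, generic illustration using CVXPY on a standard (one-dimensional, hypothetical) Lyapunov LMI; it is not the paper's 2-D FM LSS dissipativity condition.

```python
import cvxpy as cp
import numpy as np

# Hypothetical stable system matrix (not from the paper)
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
n = A.shape[0]

# Feasibility of the Lyapunov LMI  A^T P + P A < 0,  P > 0
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)   # 'optimal' indicates the LMI is feasible, i.e. A is stable
```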
Liu, Qingshan; Guo, Zhishan; Wang, Jun
2012-02-01
In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
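For readers unfamiliar with neurodynamic optimization, the sketch below integrates a generic projection-type network, dx/dt = -x + P_Ω(x − α∇f(x)), for a simple bound-constrained convex problem; the paper's one-layer model for pseudoconvex objectives has its own dynamics and parameter bounds, so treat this purely as an illustration.

```python
import numpy as np

# Minimize f(x) = 0.5*||x - c||^2 subject to box constraints l <= x <= u,
# using the projection dynamics  dx/dt = -x + P_box(x - alpha * grad f(x)).
c = np.array([2.0, -3.0, 0.5])
l, u = np.zeros(3), np.ones(3)
alpha, dt = 0.5, 0.01

def grad_f(x):
    return x - c

def project_box(z):
    return np.clip(z, l, u)

x = np.full(3, 0.5)                       # initial state
for _ in range(5000):                     # forward-Euler integration of the dynamics
    x = x + dt * (-x + project_box(x - alpha * grad_f(x)))
print(x)   # converges to the projection of c onto the box: [1, 0, 0.5]
```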
Log-linear human chorionic gonadotropin elimination in cases of retained placenta percreta.
Stitely, Michael L; Gerard Jackson, M; Holls, William H
2014-02-01
To describe the human chorionic gonadotropin (hCG) elimination rate in patients with intentionally retained placenta percreta. Medical records for cases of placenta percreta with intentional retention of the placenta were reviewed. The natural log of the hCG levels was plotted versus time, and the elimination rate equations were then derived. The hCG elimination rate equations were log-linear in the three cases individually (R² = 0.96-0.99) and in aggregate (R² = 0.92). The mean half-life of hCG elimination was 146.3 h (6.1 days). The elimination of hCG in patients with intentionally retained placenta percreta is consistent with a two-compartment elimination model. The hCG elimination in retained placenta percreta is predictable in a log-linear manner that is similar to other reports of retained abnormally adherent placentae treated with or without methotrexate.
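The log-linear fit and half-life computation can be reproduced with a least-squares fit of ln(hCG) against time, as in the sketch below; the measurements shown are hypothetical, not the study data.

```python
import numpy as np

# Hypothetical hCG measurements (mIU/mL) at times in hours after delivery
t = np.array([0, 72, 168, 336, 504], dtype=float)
hcg = np.array([45000, 32000, 20000, 9500, 4200], dtype=float)

slope, intercept = np.polyfit(t, np.log(hcg), 1)   # log-linear elimination fit
half_life_h = np.log(2) / abs(slope)               # t_1/2 = ln 2 / |slope|
print(f"half-life ≈ {half_life_h:.0f} h ({half_life_h/24:.1f} days)")
```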
Cushing disease in a toddler: not all obese children are just fat.
Moriarty, Megan; Hoe, Francis
2009-08-01
Cushing disease is exceedingly rare in children, especially in those under the age of 2 years. This case report describes an 18-month-old female child who presented with morbid obesity, decreased linear growth, and reversal of developmental milestones. Her diagnosis was delayed; however, she was successfully treated by surgical excision of the microadenoma. This was followed by resolution of signs and symptoms of Cushing syndrome. Although the patient's hypertension resolved, linear growth improved and development began to progress, she is still developmentally delayed and now has hypopituitarism. Review of this case, as well as a handful of other cases of infantile Cushing disease in the literature, suggests that features such as hypertension and slowed linear growth, which are rare in nutritional causes of obesity in infants, can help identify this rare, but life-threatening, illness among an increasing number of overweight infants.
Generalized massive optimal data compression
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin
2018-05-01
In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
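A minimal sketch of the score compression for the Gaussian case with a parameter-dependent mean (covariance held fixed) is given below; the mean model, covariance and fiducial parameters are hypothetical.

```python
import numpy as np

# Gaussian data d ~ N(mu(theta), C); the score at a fiducial theta gives
# n = dim(theta) compressed statistics  t = (dmu/dtheta)^T C^{-1} (d - mu).
def mu(theta, x):
    a, b = theta
    return a + b * x                      # hypothetical linear mean model

x = np.linspace(0, 1, 50)
C = 0.1 * np.eye(50)
theta_fid = np.array([1.0, 2.0])

dmu = np.stack([np.ones_like(x), x], axis=1)       # d mu / d theta  (N x 2)
Cinv = np.linalg.inv(C)

rng = np.random.default_rng(1)
d = mu(theta_fid, x) + rng.multivariate_normal(np.zeros(50), C)

t = dmu.T @ Cinv @ (d - mu(theta_fid, x))          # two compressed statistics
F = dmu.T @ Cinv @ dmu                             # Fisher matrix they preserve
print(t, F)
```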
Regions of attraction and ultimate boundedness for linear quadratic regulators with nonlinearities
NASA Technical Reports Server (NTRS)
Joshi, S. M.
1984-01-01
The closed-loop stability of multivariable linear time-invariant systems controlled by optimal linear quadratic (LQ) regulators is investigated for the case when the feedback loops have nonlinearities N(σ) that violate the standard stability condition σN(σ) ≥ 0.5σ². The violations of the condition are assumed to occur either (1) for values of σ away from the origin (σ = 0) or (2) for values of σ in a neighborhood of the origin. It is proved that there exists a region of attraction for case (1) and a region of ultimate boundedness for case (2), and estimates are obtained for these regions. The results provide methods for selecting the performance function parameters to design LQ regulators with better tolerance to nonlinearities. The results are demonstrated by application to the problem of attitude and vibration control of a large, flexible space antenna in the presence of actuator nonlinearities.
Advanced EVA Suit Camera System Development Project
NASA Technical Reports Server (NTRS)
Mock, Kyla
2016-01-01
The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Design to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was also spent creating a case for the original interface board that is already being used. This design is being done by use of Creo 2. Due to time constraints, I may not be able to complete the 3-D printing portion of this design, but I was able to use my knowledge of the interface board and Altium Design to help in the task. As a side project, I assisted another intern in selecting and programming a microprocessor to control linear actuators. These linear actuators will be used to move various increments of polyethylene for controlled radiation testing. For this, we began the software portion of the project using the Arduino's coding environment to control an Arduino Due and H-Bridge components. Along with the obvious learning of computer programs such as Altium Design and Creo 2, I also acquired more skills with networking and collaborating with others, being able to multi-task because of responsibilities to work on various projects, and how to set realistic goals in the work place. Like many internship projects, this project will be continued and improved, so I also had the chance to improve my organization and communication skills as I documented all of my meetings and research. As a result of my internship at JSC, I desire to continue a career with NASA, whether that be through another internship or possibly a co-op. I am excited to return to my university and continue my education in electrical engineering because of all of my experiences at JSC.
Multiscale Poly-(ϵ-caprolactone) Scaffold Mimicking Nonlinearity in Tendon Tissue Mechanics
Banik, Brittany L.; Lewis, Gregory S.; Brown, Justin L.
2016-01-01
Regenerative medicine plays a critical role in the future of medicine. However, challenges remain to balance stem cells, biomaterial scaffolds, and biochemical factors to create successful and effective scaffold designs. This project analyzes scaffold architecture with respect to mechanical capability and preliminary mesenchymal stem cell response for tendon regeneration. An electrospun fiber scaffold with tailorable properties based on a “Chinese-fingertrap” design is presented. The unique criss-crossed fiber structures demonstrate non-linear mechanical response similar to that observed in native tendon. Mechanical testing revealed that optimizing the fiber orientation resulted in the characteristic “S”-shaped curve, demonstrating a toe region and linear elastic region. This project has promising research potential across various disciplines: vascular engineering, nerve regeneration, and ligament and tendon tissue engineering. PMID:27141530
Identification of critical factors affecting flexibility in hospital construction projects.
Olsson, Nils E O; Hansen, Geir K
2010-01-01
This paper analyzes the dynamics relating to flexibility in a hospital project context. Three research questions are addressed: (1) When is flexibility used in the life cycle of a project? (2) What are the stakeholders' perspectives on project flexibility? And (3) What is the nature of the interaction between flexibility in the process of a project and flexibility in terms of the characteristics of a building? Flexibility is discussed from both a project management point of view and from a hospital architecture perspective. Flexibility in project life cycle and from a stakeholder perspective is examined, and the interaction between flexibility in scope lock-in and building flexibility is investigated. The results are based on case studies of four Norwegian hospital projects. Information relating to the projects has been obtained from evaluation reports, other relevant documents, and interviews. Observations were codified and analyzed based on selected parameters that represent different aspects of flexibility. One of the cases illustrates how late changes can have a significant negative impact on the project itself, contributing to delays and cost overruns. Another case illustrates that late scope lock-in on a limited part of the project, in this case related to medical equipment, can be done in a controlled manner. Project owners and users appear to have given flexibility high priority. Project management teams are less likely to embrace changes and late scope lock-in. Architects and consultants are important for translating program requirements into physical design. A highly flexible building did not stop some stakeholders from pushing for significant changes and extensions during construction.
Case studies of transportation investment to identify the impacts on the local and state economy.
DOT National Transportation Integrated Search
2013-01-01
This project provides case studies of the impact of transportation investments on local economies. We use multiple approaches to measure impacts, since the effects of transportation projects can vary according to the size of a project and the size...
24 CFR 241.1010 - Feasibility letter.
Code of Federal Regulations, 2010 CFR
2010-04-01
... SUPPLEMENTARY FINANCING FOR INSURED PROJECT MORTGAGES Insurance for Equity Loans and Acquisition Loans... Commissioner's estimate of the supportable loan amount, based upon the project's equity in the case of an equity loan and based on the project's purchase price in the case of an acquisition loan, but such...
Bergler-Czop, Beata; Lis-Święty, Anna; Brzezińska-Wcisło, Ligia
2009-01-01
Background Hemifacial atrophy (Parry-Romberg syndrome) is a relatively rare disease. The etiology of the disease is not clear. Some authors postulate its relation to limited linear scleroderma. Linear scleroderma "en coup de sabre" is characterized clinically by a linear lesion that is most commonly one-sided. In a number of patients, neurological involvement accompanies the disease. The treatment of both scleroderma varieties is similar to the treatment of limited systemic sclerosis. Case presentation We present two cases of the disease: a 49-year-old woman with a typical picture of hemifacial atrophy without any changes of the nervous system, and a 33-year-old patient with "en coup de sabre" scleroderma and a CNS tumor. Conclusion We described typical cases of two rare diseases, hemifacial atrophy and "en coup de sabre" scleroderma. In the patient diagnosed with Parry-Romberg syndrome, with Borrelia burgdorferi infection and minor neurological symptoms, there was a lack of proper diagnosis and treatment despite a four-year case history. In the second patient, only skin changes without any neurological symptoms were observed, and only a detailed neurological work-up revealed the presence of a CNS tumor. PMID:19635150
The PAC-MAN model: Benchmark case for linear acoustics in computational physics
NASA Astrophysics Data System (ADS)
Ziegelwanger, Harald; Reiter, Paul
2017-10-01
Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well known example for such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
ERIC Educational Resources Information Center
Haas, Adrian, Ed.
This conference report provides summaries of presentations of country case studies from a project to investigate factors that impinged upon the status of technical and vocational education (TVE) in Asian and Pacific countries. The report includes the case study project terms of reference, a list of delegates, and agenda. Summaries follow of the…
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
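The two rules can be applied by a simple grid search over candidate sample sizes, as in the sketch below; the cost function is hypothetical, with a fixed start-up component and per-subject costs that grow for large studies, chosen only so that both minima are finite.

```python
import numpy as np

def total_cost(n):
    # Hypothetical cost model: fixed start-up cost, base per-subject cost,
    # and extra recruitment cost that grows for larger samples.
    return 50_000 + 300 * n + 0.05 * n**2

n = np.arange(10, 2001)
cost = total_cost(n)

n_avg = n[np.argmin(cost / n)]            # rule 1: minimize average cost per subject
n_sqrt = n[np.argmin(cost / np.sqrt(n))]  # rule 2: minimize total cost / sqrt(n)
print(n_avg, n_sqrt)
```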
ERIC Educational Resources Information Center
Exotech Systems, Inc., Falls Church, VA.
Volume IV of the evaluation report consists of case studies from 10 migrant education projects in 8 of the sample States. These projects were visited in July through September 1973. The case studies give noteworthy or innovative aspects of the projects, detailed descriptions, and the functions. The projects are: (1) Harnett County Summer Migrant…
1975-09-30
systems a linear model results in an object f being mapped into an image g by a point spread function matrix H. Thus, with noise, g = Hf + n (1). The simplest... linear models for imaging systems are given by space-invariant point spread functions (SIPSF), in which case H is block circulant. If the linear model is... I1, ..., Ik-1 is a set of two-dimensional indices, each distinct and prior to k. Modeling Procedure: To derive the linear predictor (block LP of figure
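A minimal sketch of the space-invariant linear imaging model g = Hf + n is given below, with the block-circulant H applied as a circular convolution via the FFT; the object and point spread function are hypothetical.

```python
import numpy as np

# Space-invariant model: g = H f + n, with block-circulant H equivalent to
# circular convolution of the object f with the point spread function h.
rng = np.random.default_rng(0)
f = rng.random((64, 64))                     # hypothetical object

h = np.zeros((64, 64))
h[:5, :5] = 1.0 / 25.0                       # simple 5x5 blur as the PSF

H_f = np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(f)))   # H f via the FFT
g = H_f + 0.01 * rng.standard_normal(f.shape)                   # add noise n
```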
NASA Astrophysics Data System (ADS)
Hashemi, Sayed Masoud; Lee, Young; Eriksson, Markus; Nordström, Håkan; Mainprize, James; Grouza, Vladimir; Huynh, Christopher; Sahgal, Arjun; Song, William Y.; Ruschin, Mark
2017-03-01
A Contrast and Attenuation-map (CT-number) Linearity Improvement (CALI) framework is proposed for cone-beam CT (CBCT) images used for brain stereotactic radiosurgery (SRS). The proposed framework is used together with our high spatial resolution iterative reconstruction algorithm and is tailored for the Leksell Gamma Knife ICON (Elekta, Stockholm, Sweden). The incorporated CBCT system in ICON facilitates frameless SRS planning and treatment delivery. The ICON employs a half-cone geometry to accommodate the existing treatment couch. This geometry increases the amount of artifacts and, together with other physical imperfections, causes image inhomogeneity and contrast reduction. Our proposed framework includes a preprocessing step, involving a shading and beam-hardening artifact correction, and a post-processing step to correct the dome/capping artifact caused by the spatial variations in x-ray energy generated by the bowtie filter. Our shading correction algorithm relies solely on the acquired projection images (i.e. no prior information required) and utilizes filtered-back-projection (FBP) reconstructed images to generate a segmented bone and soft-tissue map. Ideal projections are estimated from the segmented images and a smoothed version of the difference between the ideal and measured projections is used in correction. The proposed beam-hardening and dome artifact corrections are segmentation free. The CALI was tested on CatPhan, as well as patient images acquired on the ICON system. The resulting clinical brain images show substantial improvements in soft-tissue contrast visibility, revealing structures such as ventricles and lesions which were otherwise undetectable in FBP-reconstructed images. The linearity of the reconstructed attenuation map was also improved, resulting in more accurate CT numbers.
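The shading-correction step quoted above (subtracting a smoothed version of the difference between ideal and measured projections) can be sketched as below; how the ideal projections are obtained, and the smoothing width, are placeholders rather than the CALI implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shading_correct(measured, ideal, sigma=20.0):
    """Subtract a low-frequency estimate of the shading error.
    `ideal` would come from reprojecting the segmented bone/soft-tissue map;
    here it is simply assumed to be available. sigma is a placeholder."""
    error = measured - ideal
    low_freq_error = gaussian_filter(error, sigma=sigma)   # keep only slow variations
    return measured - low_freq_error

# Hypothetical projection images (rows = detector v, cols = detector u)
rng = np.random.default_rng(0)
ideal = rng.random((256, 256))
measured = ideal + 0.2 * np.outer(np.linspace(0, 1, 256), np.ones(256))  # synthetic shading
corrected = shading_correct(measured, ideal)
```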
49 CFR 173.59 - Description of terms for explosives.
Code of Federal Regulations, 2011 CFR
2011-10-01
... deflagration produce inflation, linear or rotary motion; activate diaphragms, valves or switches, or project.... Articles whose functioning depends on the physico-chemical reaction of their contents with water. Cord...
Improving tuberculosis control through public-private collaboration in India: literature review.
Dewan, Puneet K; Lal, S S; Lonnroth, Knut; Wares, Fraser; Uplekar, Mukund; Sahu, Suvanand; Granich, Reuben; Chauhan, Lakhbir Singh
2006-03-11
To review the characteristics of public-private mix projects in India and their effect on case notification and treatment outcomes for tuberculosis. Literature review. Review of surveillance records from Indian tuberculosis programme project, evaluation reports, and medical literature for public-private mix projects in India. Project characteristics, tuberculosis case notification of new patients with sputum smear results positive for acid fast bacilli, and treatment outcome. Of 24 identified public-private mix projects, data were available from 14 (58%), involving private practitioners, corporations, and non-governmental organisations. In all reviewed projects, the public sector tuberculosis programme provided training and supervision of private providers. Among the five projects with available data on historical controls, case notification rates were higher after implementation of a public-private mix project. Among seven projects involving private practitioners, 2796 of 12 147 (23%) new patients positive for acid fast bacilli were attributed to private providers. Corporate based and non-governmental organisations served as the main source for tuberculosis programme services in seven project areas, detecting 9967 new patients positive for acid fast bacilli. In nine of 12 projects with data on treatment outcomes, private providers exceeded the programme target of 85% treatment success for new patients positive for acid fast bacilli. Public-private mix activities were associated with increased case notification, while maintaining acceptable treatment outcomes. Collaborations between public and private providers of health care hold considerable potential to improve tuberculosis control in India.
Neuroimaging and clinical findings in a case of linear scleroderma en coup de sabre.
Duman, Ikram E; Ekinci, Gazanfer
2018-06-01
Linear scleroderma "en coup de sabre" is a subset of localized scleroderma with band-like sclerotic lesions typically involving the frontoparietal regions of the scalp. En coup de sabre and Parry-Romberg syndrome are variants of linear morphea on the head and neck that can be associated with neurologic manifestations. On imaging, patients may have lesions in the cerebrum ipsilateral to the scalp abnormality. We present a case of an 8-year-old girl with a left frontoparietal "en coup de sabre" scalp lesion and describe the neuroimaging findings of frontoparietal white matter lesion discovered incidentally on routine magnetic resonance imaging. The patient had no neurologic symptoms given the lesion identified.
A Centered Projective Algorithm for Linear Programming
1988-02-01
... Karmarkar's algorithm iterates this procedure. An alternative method, the so-called affine variant (first proposed by Dikin [6] in 1967)... trajectories, II. Legendre transform coordinates, central trajectories," manuscripts, to appear in Transactions of the American... [6] I.I. Dikin, "Iterative solution of problems of linear and quadratic programming," Soviet Mathematics Doklady 8 (1967), 674-675. [7] I.I. Dikin, "On the speed of an...
R&D status of linear collider technology at KEK
NASA Astrophysics Data System (ADS)
Urakawa, Junji
1992-02-01
This paper gives an outline of the Japan Linear Collider (JLC) project, especially JLC-I. The status of the various R&D works is particularly presented for the following topics: (1) electron and positron sources, (2) S-band injector linacs, (3) damping rings, (4) high power klystrons and accelerating structures, (5) the final focus system. Finally, the status of the construction and design studies for the Accelerator Test Facility (ATF) is summarized.
ERIC Educational Resources Information Center
Grimaldi, Ralph P.
This material was developed to provide an application of matrix mathematics in chemistry, and to show the concepts of linear independence and dependence in vector spaces of dimensions greater than three in a concrete setting. The techniques presented are not intended to be considered as replacements for such chemical methods as oxidation-reduction…
Project-Based Learning in Electronic Technology: A Case Study
ERIC Educational Resources Information Center
Li, Li
2015-01-01
A case study of project-based learning (PBL) implemented in Tianjin University of Technology and Education is presented. This multidiscipline project is innovated to meet the novel requirements of industry while keeping its traditional effectiveness in driving students to apply knowledge to practice and problem-solving. The implementation of PBL…
Jones, Stephen G; Coulter, Steven; Conner, William
2013-01-01
To determine what, if any, opportunity exists in using administrative medical claims data for supplemental reporting to the state infectious disease registry system. Cases of five tick-borne (Lyme disease (LD), babesiosis, ehrlichiosis, Rocky Mountain spotted fever (RMSF), tularemia) and two mosquito-borne diseases (West Nile virus, La Crosse viral encephalitis) reported to the Tennessee Department of Health during 2000-2009 were selected for study. Similarly, medically diagnosed cases from a Tennessee-based managed care organization (MCO) claims data warehouse were extracted for the same time period. MCO and Tennessee Department of Health incidence rates were compared using a complete randomized block design within a general linear mixed model to measure potential supplemental reporting opportunity. MCO LD incidence was 7.7 times higher (p<0.001) than that reported to the state, possibly indicating significant under-reporting (∼196 unreported cases per year). MCO data also suggest about 33 cases of RMSF go unreported each year in Tennessee (p<0.001). Three cases of babesiosis were discovered using claims data, a significant finding as this disease was only recently confirmed in Tennessee. Data sharing between MCOs and health departments for vaccine information already exists (eg, the Vaccine Safety Datalink Rapid Cycle Analysis project). There may be a significant opportunity in Tennessee to supplement the current passive infectious disease reporting system with administrative claims data, particularly for LD and RMSF. There are limitations with administrative claims data, but health plans may help bridge data gaps and support the federal administration's vision of combining public and private data into one source.
ERIC Educational Resources Information Center
Irwin, Gretchen; Wessel, Lark; Blackman, Harvey
2012-01-01
This case describes a database redesign project for the United States Department of Agriculture's National Animal Germplasm Program (NAGP). The case provides a valuable context for teaching and practicing database analysis, design, and implementation skills, and can be used as the basis for a semester-long team project. The case demonstrates the…
Integrated Approaches to Testing and Assessment Under ...
Presentation on an IATA case study for phenols, which is a joint project with Health Canada and NCCT, presented at the OECD IATA Case Study Meeting No. 2 in Paris, France.
Re-entry vehicle shape for enhanced performance
NASA Technical Reports Server (NTRS)
Brown, James L. (Inventor); Garcia, Joseph A. (Inventor); Prabhu, Dinesh K. (Inventor)
2008-01-01
A convex shell structure for enhanced aerodynamic performance and/or reduced heat transfer requirements for a space vehicle that re-enters an atmosphere. The structure has a fore-body, an aft-body, a longitudinal axis and a transverse cross sectional shape, projected on a plane containing the longitudinal axis, that includes: first and second linear segments, smoothly joined at a first end of each the first and second linear segments to an end of a third linear segment by respective first and second curvilinear segments; and a fourth linear segment, joined to a second end of each of the first and second segments by curvilinear segments, including first and second ellipses having unequal ellipse parameters. The cross sectional shape is non-symmetric about the longitudinal axis. The fourth linear segment can be replaced by a sum of one or more polynomials, trigonometric functions or other functions satisfying certain constraints.
NASA Technical Reports Server (NTRS)
Pitman, C. L.; Erb, D. M.; Izygon, M. E.; Fridge, E. M., III; Roush, G. B.; Braley, D. M.; Savely, R. T.
1992-01-01
The United States' big space projects of the next decades, such as Space Station and the Human Exploration Initiative, will need the development of many millions of lines of mission-critical software. NASA-Johnson (JSC) is identifying and developing some of the Computer Aided Software Engineering (CASE) technology that NASA will need to build these future software systems. The goal is to improve the quality and the productivity of large software development projects. New trends in CASE technology are outlined, and how the Software Technology Branch (STB) at JSC is endeavoring to provide some of these CASE solutions for NASA is described. Key software technology components include knowledge-based systems, software reusability, user interface technology, reengineering environments, management systems for the software development process, software cost models, repository technology, and open, integrated CASE environment frameworks. The paper presents the status and long-term expectations for CASE products. The STB's Reengineering Application Project (REAP), Advanced Software Development Workstation (ASDW) project, and software development cost model (COSTMODL) project are then discussed. Some of the general difficulties of technology transfer are introduced, and a process developed by STB for CASE technology insertion is described.
Polynomial elimination theory and non-linear stability analysis for the Euler equations
NASA Technical Reports Server (NTRS)
Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.
1986-01-01
Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.
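As a toy illustration of polynomial elimination, the sketch below uses a lexicographic Gröbner basis to eliminate one unknown from a small polynomial system; this is classical elimination theory in miniature, not the paper's Euler discretization.

```python
import sympy as sp

# Toy polynomial system (hypothetical), standing in for a discretized flux balance:
#   u1**2 - u2 - 1 = 0
#   u1 + u2**2 - 3 = 0
u1, u2 = sp.symbols('u1 u2')
system = [u1**2 - u2 - 1, u1 + u2**2 - 3]

# A lexicographic Groebner basis with u1 > u2 contains a polynomial in u2 alone,
# i.e. u1 has been eliminated from the system.
gb = sp.groebner(system, u1, u2, order='lex')
elim = [p for p in gb.exprs if u1 not in p.free_symbols][0]
print(sp.expand(elim))                      # univariate polynomial in u2
print(sp.Poly(elim, u2).nroots())           # its roots, found numerically
```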
Managing a big ground-based astronomy project: the Thirty Meter Telescope (TMT) project
NASA Astrophysics Data System (ADS)
Sanders, Gary H.
2008-07-01
TMT is a big science project and its scale is greater than previous ground-based optical/infrared telescope projects. This paper will describe the ideal "linear" project and how the TMT project departs from that ideal. The paper will describe the needed adaptations to successfully manage real world complexities. The progression from science requirements to a reference design, the development of a product-oriented Work Breakdown Structure (WBS) and an organization that parallels the WBS, the implementation of system engineering, requirements definition and the progression through Conceptual Design to Preliminary Design will be summarized. The development of a detailed cost estimate structured by the WBS, and the methodology of risk analysis to estimate contingency fund requirements will be summarized. Designing the project schedule defines the construction plan and, together with the cost model, provides the basis for executing the project guided by an earned value performance measurement system.
Can air temperature be used to project influences of climate change on stream temperature?
Arismendi, Ivan; Safeeq, Mohammad; Dunham, Jason B.; Johnson, Sherri L.
2014-01-01
Worldwide, lack of data on stream temperature has motivated the use of regression-based statistical models to predict stream temperatures based on more widely available data on air temperatures. Such models have been widely applied to project responses of stream temperatures under climate change, but the performance of these models has not been fully evaluated. To address this knowledge gap, we examined the performance of two widely used linear and nonlinear regression models that predict stream temperatures based on air temperatures. We evaluated model performance and temporal stability of model parameters in a suite of regulated and unregulated streams with 11–44 years of stream temperature data. Although such models may have validity when predicting stream temperatures within the span of time that corresponds to the data used to develop them, model predictions did not transfer well to other time periods. Validation of model predictions of most recent stream temperatures, based on air temperature–stream temperature relationships from previous time periods often showed poor performance when compared with observed stream temperatures. Overall, model predictions were less robust in regulated streams and they frequently failed in detecting the coldest and warmest temperatures within all sites. In many cases, the magnitude of errors in these predictions falls within a range that equals or exceeds the magnitude of future projections of climate-related changes in stream temperatures reported for the region we studied (between 0.5 and 3.0 °C by 2080). The limited ability of regression-based statistical models to accurately project stream temperatures over time likely stems from the fact that underlying processes at play, namely the heat budgets of air and water, are distinctive in each medium and vary among localities and through time.
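A sketch of fitting both model types is given below; the S-shaped logistic form is a commonly used nonlinear choice for air-to-stream temperature regression (assumed here, not taken from the paper), and the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(Ta, alpha, beta, gamma):
    """S-shaped stream-temperature response to air temperature Ta
    (asymptote alpha, inflection beta, steepness gamma); a common nonlinear form."""
    return alpha / (1.0 + np.exp(gamma * (beta - Ta)))

# Synthetic weekly air (Ta) and stream (Ts) temperatures
rng = np.random.default_rng(2)
Ta = np.linspace(-5, 30, 200)
Ts = logistic(Ta, 20.0, 12.0, 0.25) + rng.normal(0, 0.5, Ta.size)

lin = np.polyfit(Ta, Ts, 1)                                  # linear model
nonlin, _ = curve_fit(logistic, Ta, Ts, p0=[20, 10, 0.2])    # nonlinear model
print("linear slope/intercept:", lin)
print("logistic parameters:", nonlin)
```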
Kutywayo, Dumisani; Chemura, Abel; Kusena, Winmore; Chidoko, Pardon; Mahoya, Caleb
2013-01-01
The production of agricultural commodities faces increased risk of pests, diseases and other stresses due to climate change and variability. This study assesses the potential distribution of agricultural pests under projected climatic scenarios using evidence from the African coffee white stem borer (CWB), Monochamus leuconotus (Pascoe) (Coleoptera: Cerambycidae), an important pest of coffee in Zimbabwe. A species distribution modeling approach utilising Boosted Regression Trees (BRT) and Generalized Linear Models (GLM) was applied on current and projected climate data obtained from the WorldClim database and occurrence data (presence and absence) collected through on-farm biological surveys in Chipinge, Chimanimani, Mutare and Mutasa districts in Zimbabwe. Results from both the BRT and GLM indicate that precipitation-related variables are more important in determining species range for the CWB than temperature related variables. The CWB has extensive potential habitats in all coffee areas with Mutasa district having the largest model average area suitable for CWB under current and projected climatic conditions. Habitat ranges for CWB will increase under future climate scenarios for Chipinge, Chimanimani and Mutare districts while it will decrease in Mutasa district. The highest percentage change in area suitable for the CWB was for Chimanimani district with a model average of 49.1% (3 906 ha) increase in CWB range by 2080. The BRT and GLM predictions gave similar predicted ranges for Chipinge, Chimanimani and Mutasa districts compared to the high variation in current and projected habitat area for CWB in Mutare district. The study concludes that suitable area for CWB will increase significantly in Zimbabwe due to climate change and there is need to develop adaptation mechanisms. PMID:24014222
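The GLM component of such a species distribution model amounts to a binomial (logistic) regression of presence/absence on climate covariates; the sketch below shows this with statsmodels on synthetic data (the BRT component is not reproduced).

```python
import numpy as np
import statsmodels.api as sm

# Synthetic occurrence data: presence/absence vs. two climate covariates
rng = np.random.default_rng(3)
n = 300
annual_precip = rng.normal(1100, 200, n)      # mm
mean_temp = rng.normal(19, 2, n)              # deg C

logit = 0.004 * (annual_precip - 1100) - 0.3 * (mean_temp - 19)
presence = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([annual_precip, mean_temp]))
glm = sm.GLM(presence, X, family=sm.families.Binomial()).fit()
print(glm.summary())

# Predicted habitat suitability under a hypothetical future climate scenario
X_future = sm.add_constant(np.column_stack([annual_precip * 1.05, mean_temp + 2.0]))
suitability = glm.predict(X_future)
print(suitability[:5])
```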
Narrowing the range of water availability projections in China using the Budyko framework
NASA Astrophysics Data System (ADS)
Osborne, Joe; Lambert, Hugo
2017-04-01
There is a growing demand for reliable 21st-century projections of water availability at the regional scale. Used alone, global climate models (GCMs) are unsuitable for generating such projections at catchment scales in the presence of simulated aridity biases. This is because the Budyko framework dictates that the partitioning of precipitation into runoff and evapotranspiration scales as a non-linear function of aridity. Therefore, GCMs are typically used in tandem with global hydrological models (GHMs), but this process is computationally expensive. Here, considering a Chinese case study, we utilise the Budyko framework to make use of plentiful GCM output, without the need for GHMs. We first apply the framework to 20th-century observations to show that the significant declines in Yellow river discharge between 1951 and 2000 cannot be accounted for by modelled climate change alone, with human activities playing a larger but poorly quantified role. We further show that the Budyko framework can be used to narrow the range of water availability projections in the Yangtze and Yellow river catchments by 33% and 72%, respectively, in the 21st-century RCP8.5 business-as-usual emission scenario. In the Yellow catchment the best-guess end-of-21st-century change in runoff decreases from an increase of 0.09 mm/d in raw multi-model mean output to an increase of 0.04 mm/d in Budyko-corrected multi-model mean output. While this is a valuable finding, we stress that these changes could be dwarfed by changes due to human activity in the 21st century, unless strict water management policies are implemented.
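The nonlinearity referred to above can be illustrated with a Budyko-type curve; the sketch below uses Fu's one-parameter formulation, chosen here only as a common example since the abstract does not specify the exact curve, and the catchment numbers are hypothetical.

```python
import numpy as np

def fu_curve(aridity, omega=2.6):
    """Fu's Budyko-type curve: E/P = 1 + phi - (1 + phi**omega)**(1/omega),
    with phi = PET/P. omega = 2.6 is a commonly quoted default, used here
    purely for illustration."""
    return 1.0 + aridity - (1.0 + aridity**omega) ** (1.0 / omega)

# Hypothetical catchment climate (mm/yr)
P, PET = 850.0, 1100.0
phi = PET / P                                   # aridity index
E = fu_curve(phi) * P                           # actual evapotranspiration
Q = P - E                                       # long-term runoff
print(f"E = {E:.0f} mm/yr, Q = {Q:.0f} mm/yr")

# The nonlinearity in aridity is why GCM aridity biases distort projected runoff changes.
dP = 50.0                                       # a hypothetical precipitation increase
Q_future = (P + dP) - fu_curve(PET / (P + dP)) * (P + dP)
print(f"runoff change ≈ {Q_future - Q:.0f} mm/yr")
```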
Spacecraft platform cost estimating relationships
NASA Technical Reports Server (NTRS)
Gruhl, W. M.
1972-01-01
The three main cost areas of unmanned satellite development are discussed. The areas are identified as: (1) the spacecraft platform (SCP), (2) the payload or experiments, and (3) the postlaunch ground equipment and operations. The SCP normally accounts for over half of the total project cost and accurate estimates of SCP costs are required early in project planning as a basis for determining total project budget requirements. The development of single formula SCP cost estimating relationships (CER) from readily available data by statistical linear regression analysis is described. The advantages of single formula CER are presented.
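A single-formula CER of the power-law form cost = a·mass^b can be derived by linear regression on log-transformed data, as sketched below with invented figures (not those of the report).

```python
import numpy as np

# Hypothetical historical spacecraft platforms: dry mass (kg) and cost ($M, normalized)
mass = np.array([250, 400, 620, 900, 1300, 2100], dtype=float)
cost = np.array([60, 85, 120, 160, 215, 310], dtype=float)

# Fit log(cost) = log(a) + b*log(mass)  ->  CER: cost = a * mass**b
b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
a = np.exp(log_a)
print(f"CER: cost ≈ {a:.2f} * mass^{b:.2f}")

# Early-phase budget estimate for a hypothetical 1500 kg platform
print(f"estimate: {a * 1500**b:.0f} $M")
```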
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
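The two recommended weighting schemes can be written down in a few lines; the sketch below uses hypothetical per-study summaries, and the conversion of linear-model effects onto the log-odds scale is assumed to have been done already.

```python
import numpy as np

# Hypothetical per-study summary statistics for one variant
z = np.array([2.1, -0.4, 1.7])                 # signed Z-scores
n_cases = np.array([1200.0, 800.0, 3000.0])
n_controls = np.array([4800.0, 9000.0, 3500.0])
beta_log_odds = np.array([0.08, -0.02, 0.06])  # effects on the log-odds scale
se = np.array([0.04, 0.05, 0.035])

# (i) effective-sample-size weighted Z-score meta-analysis
n_eff = 4.0 / (1.0 / n_cases + 1.0 / n_controls)
w = np.sqrt(n_eff)
z_meta = np.sum(w * z) / np.sqrt(np.sum(w**2))

# (ii) inverse-variance weighted fixed-effects meta-analysis of log-odds effects
w_iv = 1.0 / se**2
beta_meta = np.sum(w_iv * beta_log_odds) / np.sum(w_iv)
se_meta = np.sqrt(1.0 / np.sum(w_iv))
print(z_meta, beta_meta, se_meta)
```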
The Vertical Linear Fractional Initialization Problem
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Hartley, Tom T.
1999-01-01
This paper presents a solution to the initialization problem for a system of linear fractional-order differential equations. The scalar problem is considered first, and solutions are obtained both generally and for a specific initialization. Next the vector fractional order differential equation is considered. In this case, the solution is obtained in the form of matrix F-functions. Some control implications of the vector case are discussed. The suggested method of problem solution is shown via an example.
Flight control application of new stability robustness bounds for linear uncertain systems
NASA Technical Reports Server (NTRS)
Yedavalli, Rama K.
1993-01-01
This paper addresses the issue of obtaining bounds on the real parameter perturbations of a linear state-space model for robust stability. Based on Kronecker algebra, new, easily computable sufficient bounds are derived that are much less conservative than the existing bounds, since the technique is meant for only real parameter perturbations (in contrast to specializing the complex-variation case to the real-parameter case). The proposed theory is illustrated with application to several flight control examples.
Optimization algorithms for large-scale multireservoir hydropower systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiew, K.L.
Five optimization algorithms were vigorously evaluated based on applications on a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT) and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria which include accuracy of results, rate of convergence, smoothness of resulting storage and release trajectories, computer time and memory requirements, robustness and other pertinent secondary considerations. Results have shown that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% difference from one another. The highest objective value is obtained by IDP, followed closely by OCT. Computer time required by these algorithms, however, differs by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to the case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.
Integration of system identification and finite element modelling of nonlinear vibrating structures
NASA Astrophysics Data System (ADS)
Cooper, Samson B.; DiMaio, Dario; Ewins, David J.
2018-03-01
The Finite Element Method (FEM), Experimental modal analysis (EMA) and other linear analysis techniques have been established as reliable tools for the dynamic analysis of engineering structures. They are often used to provide solutions to small and large structures and other variety of cases in structural dynamics, even those exhibiting a certain degree of nonlinearity. Unfortunately, when the nonlinear effects are substantial or the accuracy of the predicted response is of vital importance, a linear finite element model will generally prove to be unsatisfactory. As a result, the validated linear FE model requires further enhancement so that it can represent and predict the nonlinear behaviour exhibited by the structure. In this paper, a pragmatic approach to integrating test-based system identification and FE modelling of a nonlinear structure is presented. This integration is based on three different phases: the first phase involves the derivation of an Underlying Linear Model (ULM) of the structure, the second phase includes experiment-based nonlinear identification using measured time series and the third phase covers augmenting the linear FE model and experimental validation of the nonlinear FE model. The proposed case study is demonstrated on a twin cantilever beam assembly coupled with a flexible arch shaped beam. In this case, polynomial-type nonlinearities are identified and validated with force-controlled stepped-sine test data at several excitation levels.
The MONET code for the evaluation of the dose in hadrontherapy
NASA Astrophysics Data System (ADS)
Embriaco, A.
2018-01-01
MONET is a code for the computation of the 3D dose distribution for protons in water. For the lateral profile, MONET is based on the Molière theory of multiple Coulomb scattering. To take into account also the nuclear interactions, we add to this theory a Cauchy-Lorentz function, whose two parameters are obtained by a fit to a FLUKA simulation. We have implemented the Papoulis algorithm for the passage from the projected to a 2D lateral distribution. For the longitudinal profile, we have implemented a new calculation of the energy loss that is in good agreement with simulations. The inclusion of straggling is based on the convolution of the energy loss with a Gaussian function. To complete the longitudinal profile, the nuclear contributions are also included using a linear parametrization. The total dose profile is calculated on a 3D mesh by evaluating at each depth the 2D lateral distributions and scaling them to the value of the energy deposition. We have compared MONET with FLUKA in two cases: a single Gaussian beam and a lateral scan. In both cases, we have obtained good agreement for different energies of protons in water.
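The lateral-profile idea can be sketched as a Gaussian core (standing in for the Molière multiple-scattering part) plus a Cauchy-Lorentz term for the nuclear halo; all parameters below are placeholders, not MONET's FLUKA-fitted values.

```python
import numpy as np

def lateral_profile(r, sigma=0.4, gamma=1.5, w=0.05):
    """Normalized 1D lateral dose profile at one depth:
    (1 - w) * Gaussian core  +  w * Cauchy-Lorentz tail.
    sigma, gamma and w are placeholders, not MONET's fitted parameters."""
    gauss = np.exp(-0.5 * (r / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    lorentz = (gamma / np.pi) / (r**2 + gamma**2)
    return (1.0 - w) * gauss + w * lorentz

r = np.linspace(-5, 5, 501)        # cm
profile = lateral_profile(r)
print(np.trapz(profile, r))        # ≈ 1: both components are normalized densities
```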
Spin-dependent tunneling recombination in heterostructures with a magnetic layer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denisov, K. S., E-mail: denisokonstantin@gmail.com; Rozhansky, I. V.; Averkiev, N. S.
We propose a mechanism for the generation of spin polarization in semiconductor heterostructures with a quantum well and a magnetic impurity layer spatially separated from it. The spin polarization of carriers in a quantum well originates from spin-dependent tunneling recombination at impurity states in the magnetic layer, which is accompanied by a fast linear increase in the degree of circular polarization of photoluminescence from the quantum well. Two situations are theoretically considered. In the first case, resonant tunneling to the spin-split sublevels of the impurity center occurs and spin polarization is caused by different populations of resonance levels in the quantum well for opposite spin projections. In the second, nonresonant case, the spin-split impurity level lies above the occupied states of electrons in the quantum well and plays the role of an intermediate state in the two-stage coherent spin-dependent recombination of an electron from the quantum well and a hole in the impurity layer. The developed theory allows us to explain both qualitatively and quantitatively the kinetics of photoexcited electrons in experiments with photoluminescence with time resolution in Mn-doped InGaAs heterostructures.
NASA Technical Reports Server (NTRS)
Mulenburg, Gerald M.
2000-01-01
Study of the characteristics and relationships of project managers of complex projects in the National Aeronautics and Space Administration. The study is based on research design, data collection, interviews, case studies, and data analysis across varying disciplines such as biological research, space research, advanced aeronautical test facilities, aeronautic flight demonstrations, and projects at different NASA centers to ensure that findings were not endemic to one type of project management, or to one Center's management philosophies. Each project is treated as a separate case, with the primary data collected during semi-structured interviews with the project manager responsible for the overall project. Results of the various efforts show some definite similarities of characteristics and relationships among the project managers in the study. A model for how the project managers formulated and managed their projects is included.
Metric freeness and projectivity for classical and quantum normed modules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helemskii, A Ya
2013-07-31
In functional analysis, there are several diverse approaches to the notion of projective module. We show that a certain general categorical scheme contains all basic versions as special cases. In this scheme, the notion of free object comes to the foreground, and, in the best categories, projective objects are precisely retracts of free ones. We are especially interested in the so-called metric version of projectivity and characterize the metrically free classical and quantum (= operator) normed modules. Informally speaking, so-called extremal projectivity, which was known earlier, is interpreted as a kind of 'asymptotical metric projectivity'. In addition, we answer the following specific question in the geometry of normed spaces: what is the structure of metrically projective modules in the simplest case of normed spaces? We prove that metrically projective normed spaces are precisely the subspaces of l_1(M) (where M is a set) that are denoted by l_1^0(M) and consist of finitely supported functions. Thus, in this case, projectivity coincides with freeness. Bibliography: 28 titles.
Permafrost Hazards and Linear Infrastructure
NASA Astrophysics Data System (ADS)
Stanilovskaya, Julia; Sergeev, Dmitry
2014-05-01
The international experience of linear infrastructure planning, construction and exploitation in the permafrost zone is directly tied to permafrost hazard assessment. That procedure should also consider the factors of climate impact and infrastructure protection. The current global climate change hotspots are polar and mountain areas. Temperature rise, changes in precipitation and land ice conditions, and early springs occur more often. Big linear infrastructure objects cross territories with different permafrost conditions, which are sensitive to changes in air temperature, hydrology, and snow accumulation connected to climatic dynamics. Among the most extensive linear structures built on permafrost worldwide are the Trans-Alaska Pipeline (USA), the Alaska Highway (Canada), the Qinghai-Xizang Railway (China) and the Eastern Siberia - Pacific Ocean Oil Pipeline (Russia). Those are currently being influenced by regional climate change and permafrost impacts, which may act differently from place to place. Thermokarst is deemed to be the most dangerous process for linear engineering structures. Its formation and development depend on the linear structure type: road or pipeline, elevated or buried. Zonal climate and geocryological conditions are also of determining importance here. All the projects are of different ages and some of them were implemented under different climatic conditions. The effects of permafrost thawing have been recorded every year since construction. The exploration and transportation companies from different countries protect their linear infrastructure from permafrost degradation in different ways. The highways in Alaska are in good condition due to governmental expenses on annual reconstruction. The Chara-China Railroad in Russia is in substandard condition due to the intensive permafrost response. Standards for engineering and construction should be reviewed and updated to account for permafrost hazards caused by climate change. Extra maintenance activity is needed for existing infrastructure to stay operable. Engineers should run climate models under the most pessimistic scenarios when planning new infrastructure projects. That would allow reducing the potential shortcomings related to permafrost thawing.
The Programming Language Python In Earth System Simulations
NASA Astrophysics Data System (ADS)
Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.
2004-12-01
Mathematical models in earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and time scales vary from a planetary scale and millions of years for convection problems to 100 km and 10 years for fault system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicholson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM), the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in earth sciences. Its main objective is to provide a programming language where the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we will show the basic concepts of escript and will show how escript is used to implement a simulation code for interacting fault systems. We will show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by the Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, the Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.
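An indicative example of the escript concepts named above (a linearPDE object on a finley domain returning a Data object) is sketched below; the API calls are quoted from memory of the esys-escript package and may differ between versions, so treat the names and signatures as assumptions.

```python
# Indicative escript/finley usage (API names quoted from memory; treat as assumptions).
# Solves a simple Poisson-type problem -div(grad u) = 1 with u = 0 on the left edge.
from esys.escript import kronecker, whereZero
from esys.escript.linearPDEs import LinearPDE
from esys.finley import Rectangle

domain = Rectangle(l0=1., l1=1., n0=40, n1=40)   # finley finite element mesh
x = domain.getX()

pde = LinearPDE(domain)                          # linearPDE object from the abstract
pde.setValue(A=kronecker(domain),                # -div(A grad u) + ... = Y
             Y=1.,
             q=whereZero(x[0]),                  # Dirichlet mask: left boundary
             r=0.)                               # Dirichlet value
u = pde.getSolution()                            # Data object holding nodal values
```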
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled, and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds, thereby computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real time.
Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems
Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...
2012-01-01
Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.
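The direct-versus-iterative distinction the abstract draws can be illustrated outside Trilinos. The sketch below is not the Amesos2/Belos C++ API; it uses SciPy's sparse solvers purely to show the two workflows the two packages cover: factor-then-solve versus a preconditioned Krylov iteration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Build a sparse SPD test matrix (2-D Poisson stencil) and a right-hand side.
n = 50
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
b = np.ones(A.shape[0])

# Direct: factor once, then solve (the role Amesos2 plays in Trilinos).
lu = spla.splu(A)
x_direct = lu.solve(b)

# Iterative: Krylov method with an incomplete-LU preconditioner
# (the role Belos plays in Trilinos).
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)
x_iter, info = spla.cg(A, b, M=M)

# The two answers should agree to the iterative tolerance when info == 0.
print(info, np.linalg.norm(x_direct - x_iter))
```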
Dikshit, Rajesh P; Yeole, B B; Nagrani, Rajini; Dhillon, P; Badwe, R; Bray, Freddie
2012-08-01
Increasing trends in the incidence of breast cancer have been observed in India, including Mumbai. These have likely stemmed from an increasing adoption of lifestyle factors more akin to those commonly observed in westernized countries. Analyses of breast cancer trends and corresponding estimation of the future burden are necessary to better plan rational cancer control programmes within the country. We used data from the population-based Mumbai Cancer Registry to study time trends in breast cancer incidence rates for 1976-2005, stratified into younger (25-49) and older (50-74) age groups. Age-period-cohort models were fitted and the net drift was used as a measure of the estimated annual percentage change (EAPC). Age-period-cohort models and population projections were used to predict the age-adjusted rates and number of breast cancer cases circa 2025. Breast cancer incidence increased significantly among older women over the three decades (EAPC = 1.6%; 95% CI 1.1-2.0), while a smaller but still significant 1% increase in incidence among younger women was observed (EAPC = 1.0; 95% CI 0.2-1.8). Non-linear period and cohort effects were observed; a trends-based model predicted a close-to-doubling of incident cases by 2025, from a mean of 1300 cases per annum in 2001-2005 to over 2500 cases in 2021-2025. The incidence of breast cancer has increased in Mumbai during the last two to three decades, with greater increases among older women. The number of breast cancer cases is predicted to double to over 2500 cases, the vast majority affecting older women. Copyright © 2012 Elsevier Ltd. All rights reserved.
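The abstract's EAPC is obtained as the net drift of an age-period-cohort model. A much simpler and commonly used approximation, shown below with invented rates, fits a log-linear trend to age-adjusted rates and converts the slope to an annual percentage change; it is meant only to make the EAPC idea concrete, not to reproduce the registry analysis.

```python
import numpy as np

# Hypothetical age-adjusted incidence rates per 100,000 by calendar year.
years = np.arange(1976, 2006)
rates = 22.0 * 1.015 ** (years - years[0])     # made-up series growing ~1.5%/yr

# Log-linear fit: log(rate) = a + b*year; EAPC = 100*(exp(b) - 1).
b, a = np.polyfit(years, np.log(rates), 1)
eapc = 100.0 * (np.exp(b) - 1.0)
print(f"EAPC = {eapc:.2f}% per year")

# A crude projection to 2025 assuming a constant drift (the paper instead uses
# full age-period-cohort models together with population projections).
rate_2025 = np.exp(a + b * 2025)
print(f"Projected age-adjusted rate in 2025: {rate_2025:.1f} per 100,000")
```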
Quaco, Carrie
2017-09-14
Project Career is a five-year NIDILRR-funded interprofessional demonstration project aimed at improving the academic and career success of undergraduate students who have a traumatic brain injury (TBI). The information for this case study was collected and synthesized by an occupational therapy graduate student intern at one of the Project Career sites, in collaboration with the site's Technology and Employment Coordinator, the co-PI for Project Career, and the student participant. The case study is presented to provide an understanding of one Project Career participant's experience of using a telehealth service delivery approach for academic and career support. The participant's case notes, direct communication with the intern, and outcome assessments were used to perform a qualitative analysis. The participant reported that he believed Project Career was an effective support service for him. However, the participant's initial and 6-month outcome assessment scores are inconclusive regarding improvements in his academic abilities and satisfaction with academic and career attainment. Further research on the effectiveness of using a telehealth service delivery approach with undergraduate students with a TBI is needed.
Nonlinear Extraction of Independent Components of Natural Images Using Radial Gaussianization
Lyu, Siwei; Simoncelli, Eero P.
2011-01-01
We consider the problem of efficiently encoding a signal by transforming it to a new representation whose components are statistically independent. A widely studied linear solution, known as independent component analysis (ICA), exists for the case when the signal is generated as a linear transformation of independent nongaussian sources. Here, we examine a complementary case, in which the source is nongaussian and elliptically symmetric. In this case, no invertible linear transform suffices to decompose the signal into independent components, but we show that a simple nonlinear transformation, which we call radial gaussianization (RG), is able to remove all dependencies. We then examine this methodology in the context of natural image statistics. We first show that distributions of spatially proximal bandpass filter responses are better described as elliptical than as linearly transformed independent sources. Consistent with this, we demonstrate that the reduction in dependency achieved by applying RG to either nearby pairs or blocks of bandpass filter responses is significantly greater than that achieved by ICA. Finally, we show that the RG transformation may be closely approximated by divisive normalization, which has been used to model the nonlinear response properties of visual neurons. PMID:19191599
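A minimal sketch of the radial gaussianization idea described above, under the assumption that the data are elliptically symmetric after whitening: the empirical distribution of sample radii is remapped onto the chi distribution that the radii of a d-dimensional standard Gaussian would follow. This is an illustrative implementation, not the authors' code, and the Gaussian-scale-mixture test data are invented.

```python
import numpy as np
from scipy.stats import chi

def radial_gaussianize(X):
    """X: (n_samples, d) data assumed elliptically symmetric.
    Whitens the data, then nonlinearly remaps each sample's radius so that
    the radii follow a chi(d) distribution, as a standard Gaussian's would."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T    # whitening transform
    Z = Xc @ W
    r = np.linalg.norm(Z, axis=1)
    ranks = np.argsort(np.argsort(r))
    u = (ranks + 0.5) / n                           # empirical CDF of the radii
    r_new = chi.ppf(u, df=d)                        # target chi(d) radii
    return Z * (r_new / r)[:, None]

# Example: heavy-tailed elliptical data (a Gaussian scale mixture).
rng = np.random.default_rng(0)
g = rng.standard_normal((5000, 2))
scale = rng.gamma(2.0, 1.0, size=(5000, 1)) ** 0.5
X = g * scale
Y = radial_gaussianize(X)
```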
A general theory of linear cosmological perturbations: scalar-tensor and vector-tensor theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagos, Macarena; Baker, Tessa; Ferreira, Pedro G.
We present a method for parametrizing linear cosmological perturbations of theories of gravity, around homogeneous and isotropic backgrounds. The method is sufficiently general and systematic that it can be applied to theories with any degrees of freedom (DoFs) and arbitrary gauge symmetries. In this paper, we focus on scalar-tensor and vector-tensor theories, invariant under linear coordinate transformations. In the case of scalar-tensor theories, we use our framework to recover the simple parametrizations of linearized Horndeski and "Beyond Horndeski" theories, and also find higher-derivative corrections. In the case of vector-tensor theories, we first construct the most general quadratic action for perturbations that leads to second-order equations of motion, which propagates two scalar DoFs. Then we specialize to the case in which the vector field is time-like (à la Einstein-Aether gravity), where the theory only propagates one scalar DoF. As a result, we identify the complete forms of the quadratic actions for perturbations, and the number of free parameters that need to be defined, to cosmologically characterize these two broad classes of theories.
Making a georeferenced mosaic of historical map series using constrained polynomial fit
NASA Astrophysics Data System (ADS)
Molnár, G.
2009-04-01
Present day GIS software packages make it possible to handle several hundreds of rasterised map sheets. For proper usage of such datasets we usually have two requirements: first, the map sheets should be georeferenced; secondly, the georeferenced maps should fit together properly, without overlaps or gaps. Both requirements can be fulfilled easily if the geodetic background of the map series is accurate and the projection of the map series is known. In this case the individual map sheets should be georeferenced in the projected coordinate system of the map series, which means every individual map sheet is georeferenced using the overprinted coordinate grid or the projected coordinates of the image corners as ground control points (GCPs). If after this georeferencing procedure the map sheets do not fit together (for example because a different projection was used for every map sheet, as in the case of the Third Military Survey), a common projection can be chosen and all the georeferenced maps transformed to this common projection using a map-to-map transformation. If the geodetic background is not so strong, i.e. there are distortions inside the map sheets, a polynomial (linear, quadratic or cubic) fit can be used for georeferencing the map sheets. Finding identical surface objects (as GCPs) on the historical map and on a present-day cartographic map lets us determine a transformation between raw image coordinates (x,y) and the projected coordinates (Easting, Northing; E,N). This means that for each map sheet several GCPs should be found (for linear, quadratic or cubic transformations at least 3, 5 or 10 respectively), and every map sheet should be transformed to a present-day coordinate system individually using these GCPs. The disadvantage of this method is that, after the transformation, the individual transformed map sheets no longer necessarily fit together properly. Reversing the order of the procedure does not solve this problem either: if we make the mosaic first (e.g. graphically) and apply the polynomial fit to this mosaic afterwards, we still cannot reduce the error caused by the internal inaccuracy of the map sheets. We can overcome this problem by calculating the transformation parameters of the polynomial fit with constraints (Mikhail, 1976). The constraint is that the common edge of two neighbouring map sheets should be transformed identically, i.e. the right edge of the left image and the left edge of the right image should fit together after the transformation. This condition should be fulfilled for all the internal (not only the vertical, but also the horizontal) edges of the mosaic. Constraints are expressed as a relationship between parameters: the parameters of the polynomial transformation should fulfil not only the least squares adjustment criterion but also the constraint that the transformed coordinates are identical on the image edges. (With the example mentioned above, for image points of the rightmost column of the left image the transformed coordinates should be the same as for the image points of the leftmost column of the right image, and these transformed coordinates may depend on the line-number image coordinate of the raster point.) The normal equation system can be solved with Lagrange multipliers. The resulting set of parameters for all map sheets should then be applied in the transformation of the images. This parameter set cannot be applied directly in GIS software for the transformation.
The simplest solution for applying these parameters is to 'simulate' GCPs for every image and use these simulated GCPs for the georeferencing of the individual map sheets. This method has been applied to a set of map sheets of the First Military Survey of the Habsburg Empire with acceptable results. Reference: Mikhail, E. M.: Observations and Least Squares. IEP—A Dun-Donnelley Publisher, New York, 1976. 497 pp.
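A compact numerical sketch of the constrained adjustment described above, for two neighbouring sheets, first-order (affine) transforms and the Easting coordinate only. The shared-edge condition becomes linear equality constraints on the stacked parameter vector, and the bordered normal-equation (KKT) system with Lagrange multipliers is solved directly. Sheet sizes, GCPs and noise levels are invented for illustration.

```python
import numpy as np

# Two neighbouring sheets, each W pixels wide; an affine model per sheet:
#   E(x, y) = a0 + a1*x + a2*y   (the Northing component is handled the same way)
W = 1000.0

def design(xy):
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y])

# Hypothetical GCPs: raw pixel coordinates and target Easting for each sheet.
rng = np.random.default_rng(1)
xyL = rng.uniform([0, 0], [W, 800], size=(6, 2))
xyR = rng.uniform([0, 0], [W, 800], size=(6, 2))
EL = 500000 + 2.0 * xyL[:, 0] + 0.05 * xyL[:, 1] + rng.normal(0, 3, 6)
ER = 502000 + 2.0 * xyR[:, 0] + 0.05 * xyR[:, 1] + rng.normal(0, 3, 6)

# Stack both sheets into one parameter vector p = [aL0 aL1 aL2 aR0 aR1 aR2].
A = np.block([[design(xyL), np.zeros((6, 3))],
              [np.zeros((6, 3)), design(xyR)]])
b = np.concatenate([EL, ER])

# Edge constraint: E_L(W, y) == E_R(0, y) for every y, i.e.
#   aL0 + aL1*W - aR0 = 0   and   aL2 - aR2 = 0
C = np.array([[1, W, 0, -1, 0, 0],
              [0, 0, 1, 0, 0, -1]], dtype=float)

# Bordered (KKT) normal-equation system with Lagrange multipliers lam.
N = A.T @ A
K = np.block([[N, C.T], [C, np.zeros((2, 2))]])
rhs = np.concatenate([A.T @ b, np.zeros(2)])
sol = np.linalg.solve(K, rhs)
p, lam = sol[:6], sol[6:]
print("left-sheet parameters:", p[:3], " right-sheet parameters:", p[3:])
```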
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Bisimulation equivalence of differential-algebraic systems
NASA Astrophysics Data System (ADS)
Megawati, Noorma Yulia; Schaft, Arjan van der
2018-01-01
In this paper, the notion of bisimulation relation for linear input-state-output systems is extended to general linear differential-algebraic (DAE) systems. Geometric control theory is used to derive a linear-algebraic characterisation of bisimulation relations, and an algorithm for computing the maximal bisimulation relation between two linear DAE systems. The general definition is specialised to the case where the matrix pencil sE - A is regular. Furthermore, by developing a one-sided version of bisimulation, characterisations of simulation and abstraction are obtained.
NASA Astrophysics Data System (ADS)
2016-11-01
Officials at the International Linear Collider (ILC) - a proposed successor to the Large Hadron Collider at CERN - have turned to Hello Kitty to help promote the project, which is set to be built in Japan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J
Purpose: Metal objects create severe artifacts in kilo-voltage (kV) CT image reconstructions due to the high attenuation coefficients of high atomic number objects. Most of the techniques devised to reduce this artifact use a two-step approach, which does not reliably yield high-quality reconstructed images. Thus, for accuracy and simplicity, this work presents a one-step reconstruction method based on a modified penalized weighted least-squares (PWLS) technique. Methods: Existing techniques for metal artifact reduction mostly adopt a two-step approach, performing an additional reconstruction with projection data modified from the initial reconstruction. This procedure does not consistently perform well due to the uncertainties introduced by manipulating the metal-contaminated projection data through thresholding and linear interpolation. This study proposes a one-step reconstruction process using a new PWLS formulation with total-variation (TV) minimization that does not manipulate the projection data. PWLS for CT reconstruction has been investigated using a pre-defined weight based on the variance of the projection datum at each detector bin. It works well when reconstructing CT images from metal-free projection data, but it does not appropriately penalize metal-contaminated projection data. The proposed work defines the weight at each projection element under the assumption of a Poisson random variable. This small modification using element-wise penalization has a large impact in reducing metal artifacts. For evaluation, the proposed technique was assessed with two noisy, metal-contaminated digital phantoms, against the existing PWLS with TV minimization and the two-step approach. Results: Visually, the proposed PWLS with TV minimization greatly improved metal artifact reduction relative to the other techniques. Numerically, the new approach lowered the normalized root-mean-square error by about 30% and 60% for the two cases, respectively, compared to the two-step method. Conclusion: The new PWLS formulation shows promise for improving metal artifact reduction in CT imaging, as well as simplifying the reconstruction procedure.
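A hedged sketch of the generic PWLS-plus-TV formulation discussed above, in one dimension and with a random matrix standing in for a real CT system matrix. The element-wise weights follow the usual Poisson-based choice (weight proportional to the detected counts, so heavily attenuated, metal-crossing rays are down-weighted); the paper's specific modification of the weights is not reproduced here, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D attenuation profile with a dense, "metal"-like insert, and a random
# sparse matrix standing in for a real CT projection (system) matrix.
n, m = 128, 256
x_true = np.zeros(n)
x_true[40:80] = 1.0
x_true[55:60] = 4.0
A = 0.1 * rng.random((m, n)) * (rng.random((m, n)) < 0.1)

I0 = 1.0e4                                      # incident photons per ray
counts = rng.poisson(I0 * np.exp(-(A @ x_true)))
y = -np.log(np.maximum(counts, 1) / I0)         # noisy line integrals

# Element-wise weights from the Poisson model: var(y_i) is roughly 1/counts_i,
# so a weight proportional to counts_i down-weights heavily attenuated rays.
w = counts.astype(float)

def tv_grad(x, eps=1e-3):
    # Gradient of the smoothed total variation sum_i sqrt((x[i+1]-x[i])^2 + eps).
    d = np.diff(x)
    g = d / np.sqrt(d * d + eps)
    out = np.zeros_like(x)
    out[:-1] -= g
    out[1:] += g
    return out

beta = 10.0
step = 1.0 / (w.max() * np.linalg.norm(A, 2) ** 2)   # conservative step size
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (w * (A @ x - y)) + beta * tv_grad(x)
    x -= step * grad
```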
Governance of the International Linear Collider Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, B.; /Oxford U.; Barish, B.
Governance models for the International Linear Collider Project are examined in the light of experience from similar international projects around the world. Recommendations for one path which could be followed to realize the ILC successfully are outlined. The International Linear Collider (ILC) is a unique endeavour in particle physics; fully international from the outset, it has no 'host laboratory' to provide infrastructure and support. The realization of this project therefore presents unique challenges, in scientific, technical and political arenas. This document outlines the main questions that need to be answered if the ILC is to become a reality. It describes the methodology used to harness the wisdom displayed and lessons learned from current and previous large international projects. From this basis, it suggests both general principles and outlines a specific model to realize the ILC. It recognizes that there is no unique model for such a laboratory and that there are often several solutions to a particular problem. Nevertheless it proposes concrete solutions that the authors believe are currently the best choices in order to stimulate discussion and catalyze proposals as to how to bring the ILC project to fruition. The ILC Laboratory would be set up by international treaty and be governed by a strong Council to whom a Director General and an associated Directorate would report. Council would empower the Director General to give strong management to the project. It would take its decisions in a timely manner, giving appropriate weight to the financial contributions of the member states. The ILC Laboratory would be set up for a fixed term, capable of extension by agreement of all the partners. The construction of the machine would be based on a Work Breakdown Structure and value engineering and would have a common cash fund sufficiently large to allow the management flexibility to optimize the project's construction. Appropriate contingency, clearly apportioned at both a national and global level, is essential if the project is to be realised. Finally, models for running costs and decommissioning at the conclusion of the ILC project are proposed. This document represents an interim report of the bodies and individuals studying these questions inside the structure set up and supervised by the International Committee for Future Accelerators (ICFA). It represents a request for comment to the international community in all relevant disciplines, scientific, technical and most importantly, political. Many areas require further study and some, in particular the site selection process, have not yet progressed sufficiently to be addressed in detail in this document. Discussion raised by this document will be vital in framing the final proposals due to be published in 2012 in the Technical Design Report being prepared by the Global Design Effort of the ILC.
Interdisciplinary Student Teams Projects: A Case Study
ERIC Educational Resources Information Center
Kruck, S. E.; Teer, Faye P.
2009-01-01
In today's organizations team work has become an integral part of the day-to-day routine. For this reason, University professors are including group projects in many courses. In such group assessments, we advocate the use of interdisciplinary teams, where possible. As a case study, we report an interdisciplinary group technical project with…
Youth Leadership Development through School-Based Civic Engagement Activities: A Case Study
ERIC Educational Resources Information Center
Horstmeier, Robin Peiter; Ricketts, Kristina G.
2009-01-01
Leadership development through a civic engagement activity in a local FFA chapter is explored. Through a case study design, researchers illuminate a project that encouraged youth leadership development through the creation and execution of a civic engagement project in their own local community. Holistically, FFA members viewed the project as a…
NASA Astrophysics Data System (ADS)
Spirig, Christoph; Bhend, Jonas
2015-04-01
Climate information indices (CIIs) represent a way to communicate climate conditions to specific sectors and the public. As such, CIIs provide actionable information to stakeholders in an efficient way. Due to their non-linear nature, such CIIs can behave differently from the underlying variables, such as temperature. At the same time, CIIs do not involve impact models with their different sources of uncertainty. As part of the EU project EUPORIAS (EUropean Provision Of Regional Impact Assessment on a Seasonal-to-decadal timescale) we have developed examples of seasonal forecasts of CIIs. We present forecasts and analyses of the skill of seasonal forecasts for CIIs that are relevant to a variety of economic sectors and a range of stakeholders: heating and cooling degree days as proxies for energy demand, various precipitation and drought-related measures relevant to agriculture and hydrology, a wildfire index, a climate-driven mortality index and wind-related indices tailored to renewable energy producers. Common to all examples is the finding of limited forecast skill over Europe, highlighting the challenge of providing added-value services to stakeholders operating in Europe. The reasons for the lack of forecast skill vary: often we find little skill in the underlying variable(s) precisely in those areas that are relevant for the CII; in other cases the nature of the CII is particularly demanding for predictions, as seen in the case of counting measures such as frost days or cool nights. On the other hand, several results suggest there may be some predictability in sub-regions for certain indices. Several of the exemplary analyses show potential for skillful forecasts and prospects for improvement by investing in post-processing. Furthermore, in those cases for which CII forecasts showed skill similar to that of the underlying meteorological variables, the CII forecasts provide added value from a user perspective.
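As a concrete example of one of the simpler CIIs mentioned (heating and cooling degree days as energy-demand proxies), the sketch below accumulates degree days from daily mean temperatures using a conventional 18 °C base; the base temperature and the toy forecast ensemble are assumptions, not values from the EUPORIAS work.

```python
import numpy as np

def degree_days(t_daily_mean, base=18.0):
    """Heating and cooling degree days from daily mean temperatures (deg C)."""
    t = np.asarray(t_daily_mean, dtype=float)
    hdd = np.clip(base - t, 0.0, None).sum()
    cdd = np.clip(t - base, 0.0, None).sum()
    return hdd, cdd

# Hypothetical seasonal-forecast ensemble: 50 members of 90 daily temperatures.
rng = np.random.default_rng(0)
ensemble = rng.normal(loc=5.0, scale=6.0, size=(50, 90))
hdd_members = np.array([degree_days(m)[0] for m in ensemble])
print("ensemble-mean heating degree days:", hdd_members.mean())
```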
Search for subgrid scale parameterization by projection pursuit regression
NASA Technical Reports Server (NTRS)
Meneveau, C.; Lund, T. S.; Moin, Parviz
1992-01-01
The dependence of subgrid-scale stresses on variables of the resolved field is studied using direct numerical simulations of isotropic turbulence, homogeneous shear flow, and channel flow. The projection pursuit algorithm, a promising new regression tool for high-dimensional data, is used to systematically search through a large collection of resolved variables, such as components of the strain rate, vorticity, velocity gradients at neighboring grid points, etc. For the case of isotropic turbulence, the search algorithm recovers the linear dependence on the rate of strain (which is necessary to transfer energy to subgrid scales) but is unable to determine any other more complex relationship. For shear flows, however, new systematic relations beyond eddy viscosity are found. For the homogeneous shear flow, the results suggest that products of the mean rotation rate tensor with both the fluctuating strain rate and fluctuating rotation rate tensors are important quantities in parameterizing the subgrid-scale stresses. A model incorporating these terms is proposed. When evaluated with direct numerical simulation data, this model significantly increases the correlation between the modeled and exact stresses, as compared with the Smagorinsky model. In the case of channel flow, the stresses are found to correlate with products of the fluctuating strain and rotation rate tensors. The mean rates of rotation or strain do not appear to be important in this case, and the model determined for homogeneous shear flow does not perform well when tested with channel flow data. Many questions remain about the physical mechanisms underlying these findings, about possible Reynolds number dependence, and, given the low level of correlations, about their impact on modeling. Nevertheless, demonstration of the existence of causal relations between sgs stresses and large-scale characteristics of turbulent shear flows, in addition to those necessary for energy transfer, provides important insight into the relation between scales in turbulent flows.
Linear phase compressive filter
McEwan, T.E.
1995-06-06
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.
Generic project definitions for improvement of health care delivery: a case-based approach.
Niemeijer, Gerard C; Does, Ronald J M M; de Mast, Jeroen; Trip, Albert; van den Heuvel, Jaap
2011-01-01
The purpose of this article is to create actionable knowledge, making the definition of process improvement projects in health care delivery more effective. This study is a retrospective analysis of process improvement projects in hospitals, facilitating a case-based reasoning approach to project definition. Data sources were project documentation and hospital-performance statistics of 271 Lean Six Sigma health care projects from 2002 to 2009 in general, teaching, and academic hospitals in the Netherlands and Belgium. Objectives and operational definitions of the improvement projects in the sample were analyzed and structured in a uniform format and terminology. Reusable elements of earlier project definitions were extracted and are presented in the form of 9 templates called generic project definitions. These templates function as exemplars for future process improvement projects, making the selection, definition, and operationalization of similar projects more efficient. Each template includes an explicated rationale, an operationalization in the form of metrics, and a prototypical example. Thus, a process of incremental and sustained learning based on case-based reasoning is facilitated. The quality of project definitions is a crucial success factor in pursuits to improve health care delivery. We offer 9 tried and tested improvement themes related to patient safety, patient satisfaction, and the business-economic performance of hospitals.
NASA Astrophysics Data System (ADS)
Dang, Tong; Zhang, Binzheng; Wiltberge, Michael; Wang, Wenbin; Varney, Roger; Dou, Xiankang; Wan, Weixing; Lei, Jiuhou
2018-01-01
In this study, the correlations between the fluxes of precipitating soft electrons in the cusp region and solar wind coupling functions are investigated using Lyon-Fedder-Mobarry global magnetosphere model simulations. We conduct two simulation runs, covering the periods 20 March 2008 to 16 April 2008 and 15 to 24 December 2014, which are referred to as the "Equinox Case" and the "Solstice Case," respectively. The simulation results of the Equinox Case show that the plasma number density in the high-latitude cusp region scales well with the solar wind number density (ncusp/nsw = 0.78), which agrees well with the statistical results from the Polar spacecraft measurements. For the Solstice Case, the plasma number density of the high-latitude cusp in both hemispheres increases approximately linearly with the upstream solar wind number density, with prominent hemispheric asymmetry. Due to the dipole tilt effect, the average number density ratio ncusp/nsw in the Southern (summer) Hemisphere is nearly 3 times that in the Northern (winter) Hemisphere. In addition to the solar wind number density, 20 solar wind coupling functions are tested for linear correlation with the fluxes of precipitating cusp soft electrons. The statistical results indicate that the solar wind dynamic pressure exhibits the highest linear correlation with the cusp electron fluxes for both equinox and solstice conditions, with correlation coefficients greater than 0.75. The linear regression relations for the equinox and solstice cases may provide an empirical estimate of the fluxes of cusp soft electron precipitation from the upstream solar wind driving conditions.
Flawed Execution: A Case Study on Operational Contract Support
2016-06-01
Naval Postgraduate School, Monterey, California. Joint Applied Project, June 2016: Flawed Execution: A Case Study on Operational Contract Support. Authors: Scott F. Taggart, Captain, United States Marine Corps, and Jacob Ledford.
NASA Technical Reports Server (NTRS)
Sankaran, V.
1974-01-01
An iterative procedure is described for determining the constant gain matrix that will stabilize a linear, constant-coefficient multivariable system using output feedback. The use of this procedure avoids the transformation of variables which is required in other procedures. For the case in which the product of the output and input vector dimensions is greater than the number of states of the plant, a general solution is given. For the case in which the number of states exceeds the product of the input and output vector dimensions, a least-squares solution, which may not be stable in all cases, is presented. The results are illustrated with examples.
The effect of the observer vantage point on perceived distortions in linear perspective images.
Todorović, Dejan
2009-01-01
Some features of linear perspective images may look distorted. Such distortions appear in two drawings by Jan Vredeman de Vries involving perceived elliptical, instead of circular, pillars and tilted, instead of upright, columns. Distortions may be due to factors intrinsic to the images, such as violations of the so-called Perkins's laws, or factors extrinsic to them, such as observing the images from positions different from their center of projection. When the correct projection centers for the two drawings were reconstructed, it was found that they were very close to the images and, therefore, practically unattainable in normal observation. In two experiments, enlarged versions of images were used as stimuli, making the positions of the projection centers attainable for observers. When observed from the correct positions, the perceived distortions disappeared or were greatly diminished. Distortions perceived from other positions were smaller than would be predicted by geometrical analyses, possibly due to flatness cues in the images. The results are relevant for the practical purposes of creating faithful impressions of 3-D spaces using 2-D images.
Hessian Schatten-norm regularization for linear inverse problems.
Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael
2013-05-01
We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
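The vector-to-matrix link mentioned in the abstract can be sketched for the Schatten 1-norm (nuclear norm) case: project the singular values onto an l1 ball using the standard sorting-based algorithm, then rebuild the matrix. This is an illustrative implementation of that one special case, not the authors' general solver for arbitrary Schatten norms.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of a vector onto the l1 ball of given radius."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    # Largest k with u_k > (cumulative sum_k - radius) / k  (0-indexed here).
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    tau = (css[k] - radius) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def project_schatten1_ball(M, radius):
    """Projection onto {X : sum of singular values <= radius}, via the SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_proj = project_l1_ball(s, radius)
    return (U * s_proj) @ Vt

H = np.random.default_rng(0).standard_normal((5, 5))
Hp = project_schatten1_ball(H, radius=2.0)
print(np.linalg.svd(Hp, compute_uv=False).sum())   # <= 2.0
```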
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne
2016-11-01
Coarse grid projection (CGP) methodology is a novel multigrid method for systems involving decoupled nonlinear evolution equations and linear elliptic equations. The nonlinear equations are solved on a fine grid and the linear equations are solved on a corresponding coarsened grid. Mapping functions transfer data between the two grids. Here we propose a version of CGP for incompressible flow computations using incremental pressure correction methods, called IFEi-CGP (implicit-time-integration, finite-element, incremental coarse grid projection). Incremental pressure correction schemes solve Poisson's equation for an intermediate variable and not the pressure itself. This fact contributes to IFEi-CGP's efficiency in two ways. First, IFEi-CGP preserves the velocity field accuracy even for a high level of pressure field grid coarsening and thus significant speedup is achieved. Second, because incremental schemes reduce the errors that arise from boundaries with artificial homogeneous Neumann conditions, CGP generates undamped flows for simulations with velocity Dirichlet boundary conditions. Comparisons of the data accuracy and CPU times for the incremental-CGP versus non-incremental-CGP computations are presented.
NASA Astrophysics Data System (ADS)
Saputra, Y. A.; Setyaningtyas, V. E. D.; Latiffianti, E.; Wijaya, S. H.; Ladamay, O. S. A.
2018-04-01
Measuring the success level of a project is a challenging activity. This area of work has attracted many researchers to look deeper into methods of measurement, success-factor identification, risk management, and many other relevant topics. However, the project management scope is limited to the project handover stage. After a project handover, control of the project changes from the project management team to the project owner/commercialization team. From an investor's point of view, the success of project delivery needs to be followed by the success of the commercialization phase. This paper presents an approach for tracking and measuring the progress and success level of a project investment in the commercialization phase, an interesting topic which is often forgotten in practice. Our proposed concept modifies the Freeman and Beale concept by estimating the variance between the planned Net Present Value / Annual Worth (as stated in the feasibility study document) and the actual Net Present Value / Annual Worth (up to the point of evaluation). The gap leads to further analysis and provides important information, in particular whether the project investment is performing better than planned or underperforming. Some corrective actions can be suggested based on this information. Practical cases exercising the concept are also provided and discussed: one case concerns a property-sector project in the middle of its commercialization phase, and another a power plant investment approaching the end of its commercialization phase.
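A small numerical illustration of the proposed tracking idea: compare the net present value of the planned cash flows (from the feasibility study) with that of the realized cash flows up to the evaluation date. The cash flows and discount rate are invented, and the Freeman and Beale modification is reduced here to a plain NPV comparison.

```python
import numpy as np

def npv(rate, cashflows):
    """Net present value of cash flows indexed by year 0, 1, 2, ..."""
    t = np.arange(len(cashflows))
    return float(np.sum(np.asarray(cashflows, dtype=float) / (1.0 + rate) ** t))

rate = 0.10
planned = [-1000, 300, 320, 340, 360, 380]      # feasibility-study cash flows
actual = [-1100, 250, 280, 310]                 # realized so far (year 3 of 5)

# Compare on the same horizon: planned value up to the evaluation point.
npv_planned_to_date = npv(rate, planned[:len(actual)])
npv_actual_to_date = npv(rate, actual)
variance = npv_actual_to_date - npv_planned_to_date
print(f"NPV variance at evaluation date: {variance:,.1f} (negative = underperforming)")
```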
Ambient temperature and coronary heart disease mortality in Beijing, China: a time series study
2012-01-01
Background: Many studies have examined the association between ambient temperature and mortality. However, less evidence is available on temperature effects on coronary heart disease (CHD) mortality, especially in China. In this study, we examined the relationship between ambient temperature and CHD mortality in Beijing, China, during 2000 to 2011. In addition, we compared time series and time-stratified case-crossover models for the non-linear effects of temperature. Methods: We examined the effects of temperature on CHD mortality using both time series and time-stratified case-crossover models. We also assessed the effects of temperature on CHD mortality by subgroups: gender (female and male) and age (>=65 and <65 years). We used a distributed lag non-linear model to examine the non-linear effects of temperature on CHD mortality up to 15 lag days. We used the Akaike information criterion to assess the model fit for the two designs. Results: The time series models had a better model fit than the time-stratified case-crossover models. Both designs showed that the relationships between temperature and group-specific CHD mortality were non-linear. Extreme cold and hot temperatures significantly increased the risk of CHD mortality. Hot effects were acute and short-term, while cold effects were delayed by two days and lasted for five days. Older people and women were more sensitive to extreme cold and hot temperatures than younger people and men. Conclusions: This study suggests that time series models performed better than time-stratified case-crossover models according to the model fit, even though they produced similar non-linear effects of temperature on CHD mortality. In addition, our findings indicate that extreme cold and hot temperatures increase the risk of CHD mortality in Beijing, China, particularly for women and older people. PMID:22909034
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2015-04-05
The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
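For readers unfamiliar with the model, the sketch below evaluates the Still-form generalized Born solvation energy that GB variants such as OBC use. The effective Born radii, whose computation is where the OBC parameters enter, are simply given as inputs here, and the three-site system and its charges are purely illustrative.

```python
import numpy as np

def gb_energy(coords, charges, born_radii, eps_in=1.0, eps_out=78.5):
    """Still-form generalized Born solvation energy (kcal/mol) for point charges
    (elementary-charge units) at coords (Angstrom) with given effective Born radii."""
    ke = 332.06                     # Coulomb constant in kcal*Angstrom/(mol*e^2)
    d2 = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
    RiRj = np.outer(born_radii, born_radii)
    # Still's interpolation function; the i == j terms reduce to the Born self-energy.
    f = np.sqrt(d2 + RiRj * np.exp(-d2 / (4.0 * RiRj)))
    qq = np.outer(charges, charges)
    return -0.5 * ke * (1.0 / eps_in - 1.0 / eps_out) * np.sum(qq / f)

# Illustrative three-site system (not a real nucleic acid fragment).
coords = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
charges = np.array([-0.5, -0.5, 1.0])
radii = np.array([1.5, 1.5, 1.7])
print(gb_energy(coords, charges, radii))
```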
Linear GLP-algebras and their elementary theories
NASA Astrophysics Data System (ADS)
Pakhomov, F. N.
2016-12-01
The polymodal provability logic GLP was introduced by Japaridze in 1986. It is the provability logic of certain chains of provability predicates of increasing strength. Every polymodal logic corresponds to a variety of polymodal algebras. Beklemishev and Visser asked whether the elementary theory of the free GLP-algebra generated by the constants \mathbf{0}, \mathbf{1} is decidable [1]. For every positive integer n we solve the corresponding question for the logics GLP_n that are the fragments of GLP with n modalities. We prove that the elementary theory of the free GLP_n-algebra generated by the constants \mathbf{0}, \mathbf{1} is decidable for all n. We introduce the notion of a linear GLP_n-algebra and prove that all free GLP_n-algebras generated by the constants \mathbf{0}, \mathbf{1} are linear. We also consider the more general case of the logics GLP_α whose modalities are indexed by the elements of a linearly ordered set α: we define the notion of a linear algebra and prove the latter result in this case.
Library-ABE Projects. Case Studies.
ERIC Educational Resources Information Center
MacVicar, Phyllis
This document contains 41 case studies submitted to the Appalachian Adult Education Center by the staffs of four projects demonstrating library services to disadvantaged adults, in cooperation with adult basic education programs. Included in each case study is the coping skill area in which an individual need was recognized and met through the…
DOT National Transportation Integrated Search
2000-03-01
This case study is one of a series of case studies that examine procurement approaches used to deliver Intelligent Transportation System (ITS) projects. ITS projects are often complex and leverage the latest technology in telecommunications, computer...
Ramjet Tactical Missile Propulsion Status
2002-11-01
Figure captions: EM Linear Actuator; Figure 25 - MARC-R-282 Ramjet; Figure 26 - MARC-R-282 Ramjet Testing. HIGH SPEED ANTI-RADIATION MISSILE DEMONSTRATION (HSAD): The High Speed Anti-Radiation Demonstration (HSAD) Project is focused on maturing an advanced propulsion concept that is compatible with the guidance, navigation and control (GNC) section of the Air-Ground Missile-88E (AGM-88E), Advanced Anti-Radiation Guided Missile (AARGM) Program.
Construction and use of an inexpensive "anthropometer".
Clarke, G S
1996-08-01
An inexpensive anthropometer, suitable for use in undergraduate projects, was constructed from aluminum rod and components designed from laboratory retort stands. Only modest workshop skills and widely available machine tools were required to produce the device, which could be used to take accurate and reproducible measurements of linear dimensions and the angles of orientation of body segments. Its use in student projects indicates the value of the angular measurements in investigating posture.
NASA Astrophysics Data System (ADS)
Nayfeh, A. H.
1983-09-01
An analysis is presented of the response of multidegree-of-freedom systems with quadratic non-linearities to a harmonic parametric excitation in the presence of an internal resonance of the combination type ω3 ≈ ω2 + ω1, where the ωn are the linear natural frequencies of the systems. In the case of a fundamental resonance of the third mode (i.e., Ω ≈ ω3, where Ω is the frequency of the excitation), one can identify two critical values ζ1 and ζ2, where ζ2 ⩾ ζ1, of the amplitude F of the excitation. The value F = ζ2 corresponds to the transition from stable to unstable solutions. When F < ζ1, the motion decays to zero according to both linear and non-linear theories. When F > ζ2, the motion grows exponentially with time according to the linear theory but the non-linearity limits the motion to a finite-amplitude steady state. The amplitude of the third mode, which is directly excited, is independent of F, whereas the amplitudes of the first and second modes, which are indirectly excited through the internal resonance, are functions of F. When ζ1 ⩽ F ⩽ ζ2, the motion decays or achieves a finite-amplitude steady state depending on the initial conditions according to the non-linear theory, whereas it decays to zero according to the linear theory. This is an example of subcritical instability. In the case of a fundamental resonance of either the first or second mode, the trivial response is the only possible steady state. When F ⩽ ζ2, the motion decays to zero according to both linear and non-linear theories. When F > ζ2, the motion grows exponentially with time according to the linear theory but it is aperiodic according to the non-linear theory. Experiments are being planned to check these theoretical results.
Mariano-Goulart, D; Fourcade, M; Bernon, J L; Rossi, M; Zanca, M
2003-01-01
Thanks to an experimental study based on simulated and physical phantoms, the propagation of the stochastic noise in slices reconstructed using the conjugate gradient algorithm has been analysed versus iterations. After a first increase corresponding to the reconstruction of the signal, the noise stabilises before increasing linearly with iterations. The level of the plateau as well as the slope of the subsequent linear increase depends on the noise in the projection data.
Theoretical considerations of some nonlinear aspects of hypersonic panel flutter
NASA Technical Reports Server (NTRS)
Mcintosh, S. C., Jr.
1974-01-01
A research project to analyze the effects of hypersonic nonlinear aerodynamic loading on panel flutter is reported. The test equipment and procedures for conducting the tests are explained. The effects of aerodynamic nonlinearities on stability were evaluated by determining constant-initial-energy, amplitude-sensitive stability boundaries and comparing them with the corresponding linear stability boundaries. An attempt to develop an alternative method of analysis for systems in which amplitude-sensitive instability is possible is presented.