Browsing Mathematics and Statistics by Publication date
Now showing items 1-20 of 44

The linear algebra of interpolation with finite applications giving computational methods for multivariate polynomials
Linear representation and the duality of the biorthonormality relationship express the linear algebra of interpolation by way of the evaluation mapping. In the finite case the standard bases relate the maps to Gramian matrices. Five equivalent conditions on these objects are found which characterize the solution of the interpolation problem. This algebra succinctly describes the solution space of ordinary linear initial value problems. Multivariate polynomial spaces and multidimensional node sets are described by multi-index sets. Geometric considerations of normalization and dimensionality lead to cardinal bases for Lagrange interpolation on regular node sets. More general Hermite functional sets can also be solved by generalized Newton methods using geometry and multi-indices. Extended to countably infinite spaces, the method calls upon theorems of modern analysis.
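In the univariate case the objects above can be sketched in a few lines: the Gramian of the evaluation map against the monomial basis is a Vandermonde matrix, its invertibility characterizes solvability, and the columns of its inverse give the cardinal (Lagrange) basis. A hedged toy illustration, not the thesis code:

```python
import numpy as np

# Illustration: univariate Lagrange interpolation as inversion of the
# Gramian (here Vandermonde) matrix of the evaluation map.
nodes = np.array([0.0, 1.0, 2.0, 3.0])
f = lambda x: x**3 - 2*x + 1              # function to interpolate

# Evaluation map on the monomial basis: V[i, j] = nodes[i]**j
V = np.vander(nodes, increasing=True)

# The interpolation problem is solvable iff V is invertible (unisolvence).
coeffs = np.linalg.solve(V, f(nodes))

# Cardinal basis: column j of V^{-1} holds the monomial coefficients of the
# polynomial that is 1 at node j and 0 at the other nodes.
C = np.linalg.inv(V)
ell0 = np.polynomial.polynomial.polyval(nodes, C[:, 0])
print(np.allclose(ell0, [1, 0, 0, 0]))    # cardinal property at the nodes
```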

Non-Normality in Scalar Delay Differential Equations
Analysis of stability for delay differential equations (DDEs) is a tool in a variety of fields, such as nonlinear dynamics in physics, biology, chemistry, engineering, and pure mathematics. Stability analysis is based primarily on the eigenvalues of a discretized system. Situations exist in which practical and numerical results may not match the stability inferred from such approaches. The reasons and mechanisms for this behavior can be related to the eigenvectors associated with the eigenvalues. When the operator associated with a linear (or linearized) DDE is significantly non-normal, the stability analysis must be adapted, as demonstrated here. Example DDEs are shown to have solutions which exhibit transient growth not accounted for by eigenvalues alone. Pseudospectra are computed and related to transient growth.
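Transient growth from non-normality can already be seen in a 2x2 linear ODE system; a toy sketch (the matrix is illustrative, not taken from the thesis):

```python
import numpy as np

# A toy non-normal example: both eigenvalues are negative, so eigenvalue
# analysis says "stable", yet solutions of x' = A x grow transiently.
A = np.array([[-1.0, 100.0],
              [ 0.0,  -2.0]])            # eigenvalues -1 and -2

def expm(M):
    # matrix exponential via eigendecomposition (M here is diagonalizable)
    w, V = np.linalg.eig(M)
    return (V * np.exp(w)) @ np.linalg.inv(V)

norms = [np.linalg.norm(expm(t * A), 2) for t in np.linspace(0.0, 10.0, 201)]
# max(norms) is far above 1 (transient growth); norms[-1] is below 1 (decay)
```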

Control and Inverse Problems for One-Dimensional Systems
The thesis is devoted to control and inverse problems (dynamical and spectral) for systems on graphs and on the half line. In the first part we study boundary control problems for the wave, heat, and Schrödinger equations on a finite graph. We suppose that the graph is a tree (i.e., it does not contain cycles), and on each edge an equation is defined. The control acts through the Dirichlet condition applied to all, or all but one, boundary vertices. The exact controllability in L2-classes of controls is proved and sharp estimates of the time of controllability are obtained for the wave equation. The null controllability for the heat equation and exact controllability for the Schrödinger equation in an arbitrary time interval are obtained. In the second part we consider the in-plane motion of elastic strings on a tree-like network, observed from the 'leaves.' We investigate the inverse problem of recovering not only the physical properties, i.e. the 'optical lengths' of each string, but also the topology of the tree, which is represented by the edge degrees and the angles between branching edges. It is shown that under generic assumptions the inverse problem can be solved by applying measurements at all leaves, the root of the tree being fixed. In the third part of the thesis we consider inverse dynamical and spectral problems for the Schrödinger operator on the half line. Using the connection between the dynamical (Boundary Control method) and spectral approaches (due to Krein, Gelfand–Levitan, Simon, and Remling), we improve the result on the representation of the so-called A-amplitude and derive the "local" version of the classical Gelfand–Levitan equations.

Control Theoretic Approach to Sampling and Approximation Problems
We present applications of some methods of control theory to problems of signal processing and optimal quadrature problems. The following problems are considered: construction of sampling and interpolating sequences for multiband signals; spectral estimation of signals modeled by a finite sum of exponentials modulated by polynomials; and construction of optimal quadrature formulae for integrands determined by solutions of initial boundary value problems. A multiband signal is a function whose Fourier transform is supported on a finite union of intervals. The approach used in Chapter I is based on connections between the sampling and interpolation problem and the problem of the controllability of a dynamical system. We prove that there exist infinitely many sampling and interpolating sequences for signals whose spectra are supported on a union of two disjoint intervals, and provide an algorithm for construction of such sequences. There exist numerous methods for solving the spectral estimation problem. In Chapter II we introduce a new approach to this problem based on the Boundary Control method, which uses the connection between inverse problems of mathematical physics and control theory for partial differential equations. Using samples of the signal at integer moments of time, we construct a convolution operator regarded as an input-output map of a linear discrete dynamical system. This system can be identified, and the exponents and amplitudes of the signal can be found from the parameters of the system. We show that the coefficients of the signal can be recovered by solving a generalized eigenvalue problem, as in the Matrix Pencil method. Our method allows us to consider signals with polynomial amplitudes, and we obtain an exact formula for these amplitudes. In the third chapter we consider an optimal quadrature problem for solutions of initial boundary value problems. The problem of optimization of an error functional over the set of solutions and quadrature weights is a problem of optimal control of partial differential equations. We obtain estimates for the error in quadrature formulae and an optimality condition for quadrature weights.
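The recovery step described for Chapter II (poles from a generalized eigenvalue problem, amplitudes from a linear system) can be sketched on a noiseless toy signal. The pencil construction below is a standard Matrix Pencil illustration with assumed parameters, not the thesis implementation:

```python
import numpy as np

# Toy signal x[n] = a1*z1^n + a2*z2^n with assumed poles and amplitudes.
z_true = np.array([0.9 * np.exp(0.3j), 0.7 * np.exp(-0.5j)])
a_true = np.array([2.0, 1.0 - 1.0j])
n = np.arange(20)
x = (a_true[None, :] * z_true[None, :] ** n[:, None]).sum(axis=1)

# Shifted data blocks for model order 2 (columns of a Hankel matrix).
Y1 = np.column_stack([x[0:-2], x[1:-1]])
Y2 = np.column_stack([x[1:-1], x[2:]])

# Nonzero eigenvalues of the pencil (Y2 - z*Y1) are the signal poles z_k.
z_est = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)

# Amplitudes follow from a linear least-squares (Vandermonde) fit.
V = z_est[None, :] ** n[:, None]
a_est = np.linalg.lstsq(V, x, rcond=None)[0]
```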

An exposition on the Kronecker–Weber theorem
The Kronecker–Weber Theorem is a classification result from Algebraic Number Theory. Theorem (Kronecker–Weber). Every finite, abelian extension of Q is contained in a cyclotomic field. This result was originally proven by Leopold Kronecker in 1853. However, his proof had some gaps that were later filled by Heinrich Martin Weber in 1886 and David Hilbert in 1896. Hilbert's strategy for the proof eventually led to the creation of the field of mathematics called Class Field Theory, which is the study of finite, abelian extensions of arbitrary fields and is still an area of active research. Not only is the Kronecker–Weber Theorem surprising, its proof is truly amazing. The idea of the proof is that for a finite, Galois extension K of Q, there is a connection between the Galois group Gal(K/Q) and how primes of Z split in a certain subring R of K corresponding to Z in Q. When Gal(K/Q) is abelian, this connection is so stringent that the only possibility is that K is contained in a cyclotomic field. In this paper, we give an overview of field/Galois theory and what the Kronecker–Weber Theorem means. We also talk about the ring of integers R of K, how primes split in R, how the splitting of primes is related to the Galois group Gal(K/Q), and finally give a proof of the Kronecker–Weber Theorem using these ideas.
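The quadratic case of the theorem can be made concrete with a classical Gauss-sum computation (a standard fact, included here as an illustration rather than taken from the paper):

```latex
% Quadratic case of Kronecker–Weber via the Gauss sum (p an odd prime):
g = \sum_{a=1}^{p-1} \left(\frac{a}{p}\right) \zeta_p^{\,a},
\qquad
g^2 = \left(\frac{-1}{p}\right) p = (-1)^{(p-1)/2}\, p .
% Hence \sqrt{p} \in \mathbb{Q}(\zeta_p) when p \equiv 1 \pmod 4 and
% \sqrt{-p} \in \mathbb{Q}(\zeta_p) when p \equiv 3 \pmod 4, so every
% quadratic extension of \mathbb{Q} lies in a cyclotomic field.
```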

Exact and numerical solutions for Stokes flow in glaciers
We begin with an overview of the fluid mechanics governing ice flow. We review a 1985 result due to Balise and Raymond giving exact solutions for a glaciologically relevant Stokes problem. We extend this result by giving exact formulas for the pressure and for the basal stress. This leads to a theorem giving a necessary condition on the basal velocity of a gravity-induced flow in a rectangular geometry. We describe the finite element method for solving the same problem numerically. We present a concise implementation using FEniCS, a freely available software package, and discuss the convergence of the numerical method to the exact solution. We describe how to fix an error in a recently published model.

Minimal covers of the Archimedean tilings, part II: Appendices
These files contain full descriptions of the relations in the presentations of the monodromy groups for the (3.3.4.3.4), (3.3.3.4.4), (4.6.12), and (3.3.3.3.6) tilings. This material was prepared to provide supplementary material, and the possibility of verification of our work, for the interested reader of the associated article.

Tsunami runup in U- and V-shaped bays
Tsunami runup can be effectively modeled using the shallow water wave equations. In 1958, Carrier and Greenspan, in their work "Water waves of finite amplitude on a sloping beach", used this system to model tsunami runup on a uniformly sloping plane beach. They linearized this problem using a hodograph-type transformation and obtained the Klein–Gordon equation, which could be explicitly solved by using the Fourier–Bessel transform. In 2011, Efim Pelinovsky and Ira Didenkulova, in their work "Runup of Tsunami Waves in U-Shaped Bays", used a similar hodograph-type transformation and linearized the tsunami problem for a sloping bay with parabolic cross-section. They solved the linear system by using the d'Alembert formula. This method was generalized to sloping bays with cross-sections parameterized by power functions. However, an explicit solution was obtained only for the case of a bay with a quadratic cross-section. In this paper we show that the Klein–Gordon equation can be solved by a spectral method for any inclined bay whose cross-section is given by a power function with any positive power. The result can be used to estimate tsunami runup in such bays with minimal numerical computation. This is important because our model can often substitute for full-scale numerical models, which are computationally expensive and time consuming, and which are impractical for investigating tsunami behavior in the Alaskan coastal zone given the low population density of the area.

A temperature-only formulation for ice sheets
Temperature plays an important role in the dynamics of large flowing ice masses like glaciers and ice sheets. Because of this role, many ice sheet models represent temperature in some form. One type of model for polythermal glaciers (glaciers which contain ice both below and at the pressure-melting temperature) explicitly separates the ice into distinct cold and temperate regimes, and tracks the interface between them as a surface. Other models track the enthalpy (internal energy) across both domains, with temperature being a function of enthalpy. We present an alternative mathematical formulation for polythermal glaciers and ice sheets, in the form of a variational inequality for the temperature field only. Using the calculus of variations, we establish some sufficient conditions under which our formulation is well-posed. We then present some numerical approximations of solutions obtained via the Finite Element Method.

Phylogenetic trees and Euclidean embeddings
In this thesis we develop an intuitive process of encoding any phylogenetic tree and its associated tree-distance matrix as a collection of points in Euclidean space. Using this encoding, we find that information about the structure of the tree can easily be recovered by applying the inner product operation to vector combinations of the Euclidean points. By applying Classical Scaling to the tree-distance matrix, we are able to find the Euclidean points even when the phylogenetic tree is not known. We use the insight gained by encoding the tree as a collection of Euclidean points to modify the Neighbor Joining Algorithm, a method to recover an unknown phylogenetic tree from its tree-distance matrix, to be more resistant to tree-distance-proportional errors.
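The Classical Scaling step can be sketched as follows (a toy example with assumed points, not the thesis data): double-center the squared distance matrix to recover a Gram matrix, then read coordinates off its top eigenvectors.

```python
import numpy as np

# Hidden points (assumed for illustration) and their distance matrix.
X = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0], [3.0, 4.0]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
B = -0.5 * J @ (D ** 2) @ J                # Gram matrix of centered points

w, V = np.linalg.eigh(B)
w, V = w[::-1], V[:, ::-1]                 # eigenvalues in descending order
k = 2
Y = V[:, :k] * np.sqrt(np.maximum(w[:k], 0.0))   # recovered coordinates

# Y reproduces the original inter-point distances (up to a rigid motion).
D_rec = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
```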

The treatment of missing data on placement tools for predicting success in college algebra at the University of Alaska
This project investigated the statistical significance of baccalaureate student placement tools, such as test scores and completion of a developmental course, in predicting success in a college-level algebra course at the University of Alaska (UA). Students included in the study had attempted Math 107 at UA for the first time between fiscal years 2007 and 2012. The student placement information had a high percentage of missing data. A simulation study was conducted to choose the better missing data method, between complete case deletion and multiple imputation, for the student data. After the missing data methods were applied, a logistic regression was fitted with explanatory variables consisting of test scores, developmental course grade, age (category) of scores and grade, and interactions. The relevant tests were SAT math, ACT math, and AccuPlacer college-level math, and the relevant developmental course was Devm/Math 105. The response variable was success in passing Math 107 with a grade of C or above on the first attempt. The simulation study showed that, under a high percentage of missing data and correlation, multiple imputation implemented by the R package Multivariate Imputation by Chained Equations (MICE) produced the least biased estimators and better confidence interval coverage compared to complete case deletion when data are missing at random (MAR) and missing not at random (MNAR). Results from the multiple imputation method on the student data showed that the Devm/Math 105 grade was a significant predictor of passing Math 107. The age of Devm/Math 105, age of tests, and test scores were not significant predictors of student success in Math 107. Future studies may consider modeling with ALEKS scores and high school math course information.
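A toy illustration (plain NumPy, with a single regression imputation standing in for MICE) of why complete case deletion is biased under MAR while imputation that exploits the observed covariate is not:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)
y = 1.0 + 0.8 * x + rng.normal(scale=0.5, size=n)   # true mean of y is 1.0

# MAR mechanism: y is more likely to be missing when the observed x is large.
missing = rng.random(n) < 1 / (1 + np.exp(-2 * x))
y_obs = np.where(missing, np.nan, y)

cc_mean = np.nanmean(y_obs)                 # complete case estimate: biased low

# Single regression imputation from the observed pairs (a crude stand-in
# for MICE): predict the missing y from x, then average everything.
obs = ~missing
b, a = np.polyfit(x[obs], y_obs[obs], 1)    # slope, intercept on complete cases
y_fill = np.where(missing, a + b * x, y_obs)
imp_mean = y_fill.mean()                    # approximately unbiased under MAR
```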

The geometry in geometric algebra
We present an axiomatic development of geometric algebra. One may think of a geometric algebra as allowing one to add and multiply subspaces of a vector space. Properties of the geometric product are proven, and derived products called the wedge and contraction products are introduced. Linear-algebraic and geometric concepts such as linear independence and orthogonality may be expressed through these derived products. Some examples with geometric algebra are then given.
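As a concrete illustration (not taken from the paper), the geometric algebra of the Euclidean plane can be modeled with a small multiplication table over the basis (1, e1, e2, e12), with e1*e1 = e2*e2 = 1 and e1*e2 = -e2*e1 = e12:

```python
# basis order: [1, e1, e2, e12]
# TABLE[i][j] = (sign, index) of the geometric product basis_i * basis_j
TABLE = [
    [(1, 0), (1, 1), (1, 2), (1, 3)],
    [(1, 1), (1, 0), (1, 3), (1, 2)],
    [(1, 2), (-1, 3), (1, 0), (-1, 1)],
    [(1, 3), (-1, 2), (1, 1), (-1, 0)],
]

def gp(a, b):
    """Geometric product of two multivectors (length-4 coefficient lists)."""
    out = [0.0] * 4
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            s, k = TABLE[i][j]
            out[k] += s * ai * bj
    return out

def wedge(a, b):
    """Wedge product of two vectors as the antisymmetrized geometric product."""
    return [0.5 * (x - y) for x, y in zip(gp(a, b), gp(b, a))]

e1 = [0, 1, 0, 0]
e2 = [0, 0, 1, 0]
```

The antisymmetry e1*e2 = -e2*e1 encodes orthogonality of the two basis vectors, and the wedge of parallel vectors vanishes.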

Effect of filling methods on the forecasting of time series with missing values
The Gulf of Alaska Mooring (GAK1) monitoring data set is an irregular time series of temperature and salinity at various depths in the Gulf of Alaska. One approach to analyzing data from an irregular time series is to regularize the series by imputing or filling in missing values. In this project we investigated and compared four methods (denoted APPROX, SPLINE, LOCF, and OMIT) of doing this. Simulation was used to evaluate the performance of each filling method on parameter estimation and forecasting precision for an Autoregressive Integrated Moving Average (ARIMA) model. Simulations showed differences among the four methods in terms of forecast precision and parameter estimate bias. These differences depended on the true values of the model parameters as well as on the percentage of data missing. Among the four methods used in this project, OMIT performed the best and SPLINE performed the worst. We also illustrate the application of the four methods to forecasting the GAK1 monitoring time series, and discuss the results.
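Two of the filling methods are simple enough to sketch directly (illustrative NumPy; it is assumed here that APPROX means linear interpolation, as in R's approx, and LOCF means last observation carried forward):

```python
import numpy as np

t = np.arange(10, dtype=float)
y = np.array([2.0, np.nan, 3.0, 4.0, np.nan, np.nan, 7.0, 8.0, np.nan, 10.0])
mask = np.isnan(y)

# APPROX: linear interpolation between neighboring observed values
approx = y.copy()
approx[mask] = np.interp(t[mask], t[~mask], y[~mask])

# LOCF: carry the most recent observed value forward
idx = np.where(mask, 0, np.arange(len(y)))
idx = np.maximum.accumulate(idx)        # index of the last observed point
locf = y[idx]
```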

An investigation into the effectiveness of simulation-extrapolation for correcting measurement error-induced bias in multilevel models
This paper is an investigation into correcting the bias introduced by measurement error in multilevel models. The proposed method for this correction is simulation-extrapolation (SIMEX). The paper begins with a detailed discussion of measurement error and its effects on parameter estimation. We then describe the simulation-extrapolation method and how it corrects for the bias introduced by the measurement error. Multilevel models and their corresponding parameters are also defined before performing a simulation. The simulation involves estimating the multilevel model parameters using the true explanatory variables, the observed error-prone variables, and two different SIMEX techniques. The estimates obtained from the true explanatory variables were used as a baseline for comparing the effectiveness of the SIMEX method in correcting bias. From these results, we determined that SIMEX was very effective in correcting the bias in estimates of the fixed effects parameters, and it often provided estimates that were not significantly different from those derived using the true explanatory variables. The simulation also suggested that the SIMEX approach was effective in correcting bias for the random slope variance estimates, but not for the random intercept variance estimates. Using the simulation results as a guideline, we then applied the SIMEX approach to an orthodontics dataset to illustrate the application of SIMEX to real data.
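The SIMEX idea can be sketched in a toy simple-linear-regression setting (not the paper's multilevel models; the known error variance, the lambda grid, and the quadratic extrapolant are assumptions of the sketch): add extra measurement error at increasing levels, watch the estimate degrade, and extrapolate back to the error-free level lambda = -1.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, sigma_u = 5000, 1.0, 0.8
x = rng.normal(size=n)
y = beta * x + rng.normal(scale=0.5, size=n)
w = x + rng.normal(scale=sigma_u, size=n)      # error-prone observation of x

def slope(a, b):
    return np.polyfit(a, b, 1)[0]

naive = slope(w, y)                            # attenuated toward zero

# Simulation step: add extra noise at levels lambda, re-estimate each time.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = [np.mean([slope(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n), y)
                for _ in range(50)])
       for lam in lambdas]

# Extrapolation step: fit a quadratic in lambda, evaluate at lambda = -1.
coef = np.polyfit(lambdas, est, 2)
simex = np.polyval(coef, -1.0)
```

With these settings the naive slope is attenuated by roughly the reliability ratio, and the SIMEX estimate recovers most of that bias.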

Numerical realization of the generalized Carrier–Greenspan Transform for the shallow water wave equations
We study the development of two numerical algorithms for long nonlinear wave runup that utilize the generalized Carrier–Greenspan transform. The Carrier–Greenspan transform is a hodograph transform that allows the shallow water wave equations to be transformed into a linear second-order wave equation with non-constant coefficients. In both numerical algorithms the transform is numerically implemented, the resulting linear system is numerically solved, and then the inverse transformation is implemented. The first method we develop is based on an implicit finite difference method and is applicable to constantly sloping bays of arbitrary cross-section. The resulting scheme is extremely fast and shows promise as a fast tsunami runup solver for wave runup in coastal fjords and narrow inlets. For the second scheme, we develop an initial value boundary problem corresponding to an inclined bay with U- or V-shaped cross-sections that has a wall some distance from the shore. A spectral method is applied to the resulting linear equation in order to find a series solution. Both methods are verified against an analytical solution in an inclined parabolic bay with positive results, and the first scheme is compared to the 3D numerical solver FUNWAVE with positive results.

Gaussian process convolutions for Bayesian spatial classification
We compare three models for their ability to perform binary spatial classification. A geospatial data set consisting of observations that are either permafrost or not is used for this comparison. All three models use an underlying Gaussian process. The first model treats this process as the log-odds of a positive classification (i.e., as permafrost). The second model uses a cutoff: any locations where the process is positive are classified positively, while those where it is negative are classified negatively. A probability of misclassification then gives the likelihood. The third model depends on two separate processes: the first represents a positive classification, and the second a negative classification. Of these two, the process with the greater value at a location provides the classification. A probability of misclassification is also used to formulate the likelihood for this model. In all three cases, realizations of the underlying Gaussian processes were generated using a process convolution. A grid of knots (whose values were sampled using Markov chain Monte Carlo) was convolved using an anisotropic Gaussian kernel. All three models provided adequate classifications, but the single- and two-process models showed much tighter bounds on the border between the two states.
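A minimal sketch of the process-convolution construction (illustrative: an isotropic kernel and random knot values stand in for the paper's anisotropic kernel and MCMC-sampled knots): white noise on a coarse knot grid is smoothed by a Gaussian kernel to give a process realization anywhere in the domain.

```python
import numpy as np

rng = np.random.default_rng(42)

# Coarse knot grid on the unit square; z holds the knot values
# (in the Bayesian model these are the quantities sampled by MCMC).
knots = np.stack(np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 1, 6)),
                 axis=-1).reshape(-1, 2)
z = rng.normal(size=len(knots))

def gp_surface(points, knots, z, bandwidth=0.2):
    # convolve the knot values with a Gaussian kernel
    d2 = ((points[:, None, :] - knots[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / bandwidth**2)
    return K @ z

grid = np.stack(np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50)),
                axis=-1).reshape(-1, 2)
surface = gp_surface(grid, knots, z)
labels = surface > 0     # the cutoff model: classify by the sign of the process
```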

Expectation maximization and latent class models
Latent tree models are tree-structured graphical models where some random variables are observable while others are latent. These models are used to model data in many areas, such as bioinformatics, phylogenetics, and computer vision, among others. This work contains some background on latent tree models and algebraic geometry, with the goal of estimating the volume of the latent tree model known as the 3-leaf model M₂ (where the root is a hidden variable with 2 states and is the parent of three observable variables with 2 states) in the probability simplex Δ₇, and of estimating the volume of the latent tree model known as the 3-leaf model M₃ (where the root is a hidden variable with 3 states and is the parent of two observable variables with 3 states and one observable variable with 2 states) in the probability simplex Δ₁₇. For the model M₃, we estimate that the rough percentage of distributions that arise from stochastic parameters is 0.015%, the rough percentage of distributions that arise from real parameters is 64.742%, and the rough percentage of distributions that arise from complex parameters is 35.206%. We also discuss the algebraic boundary of these models and observe the behavior of the estimates of the Expectation Maximization algorithm (EM algorithm), an iterative method typically used to try to find a maximum likelihood estimator.
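For the 3-leaf model M₂, the EM algorithm amounts to EM for a two-component mixture of independent Bernoulli triples. A hedged sketch with simulated data (sample size, parameter values, and starting points are illustrative, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
h = (rng.random(n) < 0.6).astype(int)               # hidden binary root state
p_true = np.array([[0.9, 0.8, 0.7],                 # leaf probabilities, root = 0
                   [0.2, 0.3, 0.1]])                # leaf probabilities, root = 1
X = (rng.random((n, 3)) < p_true[h]).astype(float)  # three observable binary leaves

pi = np.array([0.5, 0.5])                # mixing weights over root states
p = np.array([[0.6, 0.6, 0.6],
              [0.4, 0.4, 0.4]])          # initial leaf probabilities
ll_trace = []
for _ in range(200):
    # E-step: responsibilities r[i, k] = P(root = k | leaves of sample i)
    like = np.prod(p[None, :, :] ** X[:, None, :]
                   * (1 - p[None, :, :]) ** (1 - X[:, None, :]), axis=2)
    joint = like * pi[None, :]
    ll_trace.append(np.log(joint.sum(axis=1)).sum())
    r = joint / joint.sum(axis=1, keepdims=True)
    # M-step: closed-form updates for the weights and leaf probabilities
    pi = r.mean(axis=0)
    p = (r.T @ X) / r.sum(axis=0)[:, None]
```

The log-likelihood trace is non-decreasing, the hallmark of EM; with well-separated components it converges near the generating parameters, up to relabeling of the root states.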