Browsing Mathematics and Statistics by Title
Now showing items 1-20 of 41

A temperature-only formulation for ice sheets

Temperature plays an important role in the dynamics of large flowing ice masses like glaciers and ice sheets. Because of this role, many models for ice sheets model temperature in some form. One type of model for polythermal glaciers (glaciers which contain ice both below and at the pressure-melting temperature) explicitly separates the ice into distinct cold and temperate regimes, and tracks the interface between them as a surface. Other models track the enthalpy (internal energy) across both domains, with temperature being a function of enthalpy. We present an alternative mathematical formulation for polythermal glaciers and ice sheets, in the form of a variational inequality for the temperature field only. Using the calculus of variations, we establish some sufficient conditions under which our formulation is well-posed. We then present some numerical approximations of solutions obtained via the finite element method.

Analyzing tree distribution and abundance in Yukon-Charley Rivers National Preserve: developing geostatistical Bayesian models with count data

Species distribution models (SDMs) describe the relationship between where a species occurs and underlying environmental conditions. For this project, I created SDMs for the five tree species that occur in Yukon-Charley Rivers National Preserve (YUCH) in order to gain insight into which environmental covariates are important for each species, and what effect each environmental condition has on that species' expected occurrence or abundance. I discuss some of the issues involved in creating SDMs, including whether or not to incorporate spatially explicit error terms, and if so, how to do so with generalized linear models (GLMs, which have discrete responses). I ran a total of 10 distinct geostatistical SDMs using Markov chain Monte Carlo (Bayesian methods), and discuss the results here. I also compare these results from YUCH with results from a similar analysis conducted in Denali National Park and Preserve (DNPP).

An application of an integrated population model: estimating population size of the Fortymile caribou herd using limited data

An integrated population model (IPM) was employed to estimate the population size of the Fortymile caribou herd (FCH), utilizing multiple types of biological data. Current population size estimates of the FCH are made by the Alaska Department of Fish and Game (ADF&G) using an aerial photo census technique. Taking aerial photos for the counts requires certain environmental conditions, such as the existence of swarms of mosquitoes that drive the majority of caribou to wide open spaces, as well as favorable weather conditions, which allow low-altitude flying in mid-June. These conditions have not been met in recent years, so there is no count estimate for those years. IPMs are considered as alternative methods for estimating population size. IPMs contain three components: a stochastic component that explains the relationship between biological information and population size; demographic models that derive parameters from independently conducted surveys; and a link between IPM estimates and observed-count estimates. In this paper, we combine census count data, parturition data, calf and adult female survival data, and sex composition data, all of which were collected by ADF&G between 1990 and 2016. During this time period, there were 13 years, including two five-consecutive-year periods, for which no photo census count estimates were available. We estimate the missing counts and the associated uncertainty using a Bayesian IPM. Our case study shows that IPMs are capable of estimating population size for years with missing count data when other biological data are available. We suggest that sensitivity analyses be done to learn the relationship between the amount of data and the accuracy of the estimates.

An application of Bayesian variable selection to international economic data

GDP plays an important role in people's lives. For example, when GDP increases, the unemployment rate will frequently decrease. In this project, we use four different Bayesian variable selection methods to verify economic theory regarding important predictors of GDP. The four methods are: g-prior variable selection with credible intervals, local empirical Bayes with credible intervals, variable selection by indicator function, and hyper-g prior variable selection. We also use four measures to compare the results of the various Bayesian variable selection methods: AIC, BIC, adjusted R-squared, and cross-validation.
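As one small illustration of the model-comparison measures mentioned above, the sketch below computes BIC for linear models fit by ordinary least squares. Everything here is hypothetical (synthetic "GDP" data, an informative `invest` predictor and an irrelevant `noise` one); it is not this project's analysis, only a minimal example of how BIC rewards a predictor that genuinely explains the response:

```python
import math, random

random.seed(3)

def ols_bic(y, xs):
    """BIC of a linear model y ~ intercept + columns of xs, fitted by
    solving the normal equations with a tiny Gaussian elimination."""
    n, k = len(y), len(xs) + 1
    X = [[1.0] + [col[i] for col in xs] for i in range(n)]
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
         for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    for c in range(k):                       # forward elimination with pivoting
        piv = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for cc in range(c, k):
                A[r][cc] -= f * A[c][cc]
            b[r] -= f * b[c]
    beta = [0.0] * k                         # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    rss = sum((y[i] - sum(beta[j] * X[i][j] for j in range(k))) ** 2
              for i in range(n))
    return n * math.log(rss / n) + k * math.log(n)

# Synthetic "GDP" driven by investment, plus an irrelevant noise series.
n = 200
invest = [random.gauss(0, 1) for _ in range(n)]
noise = [random.gauss(0, 1) for _ in range(n)]
gdp = [2.0 + 1.5 * invest[i] + random.gauss(0, 0.5) for i in range(n)]

bic_null = ols_bic(gdp, [])            # intercept only
bic_good = ols_bic(gdp, [invest])      # informative predictor
bic_full = ols_bic(gdp, [invest, noise])
```

Here BIC = n·ln(RSS/n) + k·ln(n); lower is better, and the ln(n) penalty per parameter means an irrelevant predictor must earn its keep to be selected.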

Assessing year-to-year variability of inertial oscillation in the Chukchi Sea using the wavelet transform

Three years of ocean drifter data from the Chukchi Sea were examined using the wavelet transform to investigate inertial oscillation. There was an increasing trend in the number, duration, and hence total proportion of time spent in inertial oscillation events. Additionally, the Chukchi Sea seems to facilitate inertial oscillation that is easier to discern using north-south velocity records rather than east-west velocity records. The data used in this analysis were transformed using wavelets, which are generally used as a qualitative statistical method. Because of this, in addition to measurement error and random ocean noise, there is an additional source of variability and correlation which makes concrete statistical results challenging to obtain. However, wavelets were an effective tool for isolating the specific period of inertial oscillation and examining how it changed over time.
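A minimal, pure-Python sketch of the idea of using wavelets to isolate a specific period: a single Morlet-style coefficient centred at the series midpoint, rather than a full wavelet transform. The drifter series and the 12.4 h target period here are synthetic, not this project's data:

```python
import math

def morlet(t, period, width=6.0):
    """Morlet-style wavelet (real and imaginary parts) at time t."""
    s = width * period / (2 * math.pi)      # envelope scale for this period
    g = math.exp(-0.5 * (t / s) ** 2)       # Gaussian envelope
    return (g * math.cos(2 * math.pi * t / period),
            g * math.sin(2 * math.pi * t / period))

def midpoint_power(signal, dt, period):
    """Squared magnitude of one wavelet coefficient, centred mid-series."""
    half = len(signal) // 2
    re = im = 0.0
    for k, x in enumerate(signal):
        t = (k - half) * dt
        wr, wi = morlet(t, period)
        re += x * wr
        im += x * wi
    return re * re + im * im

# Synthetic drifter velocity: a 12.4 h oscillation riding on a slow drift.
dt = 0.5                                    # hours between fixes
u = [math.sin(2 * math.pi * dt * k / 12.4) + 0.002 * dt * k
     for k in range(400)]

periods = [6.0, 9.0, 12.4, 18.0, 24.0]
powers = {p: midpoint_power(u, dt, p) for p in periods}
best = max(powers, key=powers.get)          # period carrying the most energy
```

Because the wavelet's Gaussian envelope is localized in both time and frequency, the coefficient at the matched period dominates even with the low-frequency drift present.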

Bayesian predictive process models for historical precipitation data of Alaska and southwestern Canada

In this paper we apply hierarchical Bayesian predictive process models to historical precipitation data using the spBayes R package. Classical and hierarchical Bayesian techniques for spatial analysis and modeling require large matrix inversions and decompositions, which can take prohibitive amounts of time to run (n observations take time on the order of n³). Bayesian predictive process models have the same spatial framework as hierarchical Bayesian models, but fit a subset of points (called knots) to the sample, which allows for large-scale dimension reduction and results in much smaller matrix inversions and faster computing times. These computationally less expensive models allow average desktop computers to analyze spatially related datasets in excess of 20,000 observations in an acceptable amount of time.

A comparison of discrete inverse methods for determining parameters of an economic model

We consider a time-dependent spatial economic model for capital in which the region's production function is a parameter. This forward model predicts the distribution of capital of a region based on that region's production function. We solve the inverse problem based on this model, i.e., given data describing the capital of a region, we wish to determine the production function through discretization. Inverse problems are generally ill-posed, which in this case means that if the data describing the capital are changed slightly, the solution of the inverse problem could change dramatically. The solution we seek is therefore a probability distribution of parameters. However, this probability distribution is complex, and at best we can describe some of its features. We describe the solution to this inverse problem using two different techniques, Markov chain Monte Carlo (the Metropolis algorithm) and least squares optimization, and compare summary statistics coming from each method.
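The Metropolis algorithm mentioned above can be sketched in a few lines. This is a generic random-walk Metropolis sampler for a one-dimensional parameter with a toy Gaussian likelihood, not the paper's actual forward model or data:

```python
import math, random

random.seed(1)

# Toy observations standing in for "data describing capital"; the real
# forward model (production function -> capital) is not reproduced here.
data = [2.1, 1.9, 2.3, 2.0, 1.8]

def log_posterior(theta):
    """Gaussian likelihood with known sigma = 0.5 and a flat prior."""
    return -sum((x - theta) ** 2 for x in data) / (2 * 0.5 ** 2)

def metropolis(n_samples, step=0.3, theta0=0.0):
    """Random-walk Metropolis sampler for a scalar parameter."""
    samples, theta = [], theta0
    lp = log_posterior(theta)
    for _ in range(n_samples):
        prop = theta + random.gauss(0, step)      # symmetric proposal
        lp_prop = log_posterior(prop)
        # Accept with probability min(1, posterior ratio).
        if math.log(random.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

draws = metropolis(5000)[1000:]                   # discard burn-in
post_mean = sum(draws) / len(draws)
```

The retained draws approximate the probability distribution the abstract describes; summary statistics (mean, quantiles) are then read off the sample rather than computed in closed form.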

Control And Inverse Problems For One-Dimensional Systems

The thesis is devoted to control and inverse problems (dynamical and spectral) for systems on graphs and on the half line. In the first part we study boundary control problems for the wave, heat, and Schrödinger equations on a finite graph. We suppose that the graph is a tree (i.e., it does not contain cycles), and on each edge an equation is defined. The control acts through the Dirichlet condition applied to all, or all but one, boundary vertices. The exact controllability in L²-classes of controls is proved, and sharp estimates of the time of controllability are obtained for the wave equation. The null controllability for the heat equation and exact controllability for the Schrödinger equation in an arbitrary time interval are obtained. In the second part we consider the in-plane motion of elastic strings on a tree-like network, observed from the 'leaves.' We investigate the inverse problem of recovering not only the physical properties, i.e. the 'optical lengths' of each string, but also the topology of the tree, which is represented by the edge degrees and the angles between branching edges. It is shown that under generic assumptions the inverse problem can be solved by applying measurements at all leaves, the root of the tree being fixed. In the third part of the thesis we consider inverse dynamical and spectral problems for the Schrödinger operator on the half line. Using the connection between the dynamical (Boundary Control method) and spectral approaches (due to Krein, Gelfand-Levitan, Simon, and Remling), we improve the result on the representation of the so-called A-amplitude and derive the 'local' version of the classical Gelfand-Levitan equations.

Control Theoretic Approach To Sampling And Approximation Problems

We present applications of some methods of control theory to problems of signal processing and optimal quadrature. The following problems are considered: construction of sampling and interpolating sequences for multiband signals; spectral estimation of signals modeled by a finite sum of exponentials modulated by polynomials; and construction of optimal quadrature formulae for integrands determined by solutions of initial boundary value problems. A multiband signal is a function whose Fourier transform is supported on a finite union of intervals. The approach used in Chapter I is based on connections between the sampling and interpolation problem and the problem of controllability of a dynamical system. We prove that there exist infinitely many sampling and interpolating sequences for signals whose spectra are supported on a union of two disjoint intervals, and provide an algorithm for the construction of such sequences. There exist numerous methods for solving the spectral estimation problem. In Chapter II we introduce a new approach to this problem based on the Boundary Control method, which uses the connection between inverse problems of mathematical physics and control theory for partial differential equations. Using samples of the signal at integer moments of time, we construct a convolution operator regarded as an input-output map of a linear discrete dynamical system. This system can be identified, and the exponents and amplitudes of the signal can be found from the parameters of the system. We show that the coefficients of the signal can be recovered by solving a generalized eigenvalue problem, as in the Matrix Pencil method. Our method allows us to consider signals with polynomial amplitudes, and we obtain an exact formula for these amplitudes. In the third chapter we consider an optimal quadrature problem for solutions of initial boundary value problems. The problem of optimizing an error functional over the set of solutions and quadrature weights is a problem of optimal control of partial differential equations. We obtain estimates for the error in quadrature formulae and an optimality condition for the quadrature weights.

Edge detection using Bayesian process convolutions

This project describes a method for edge detection in images. We develop a Bayesian approach for edge detection, using a process convolution model. Our method has some advantages over the classical Sobel edge detector. In particular, our Bayesian spatial detector works well for rich, but noisy, photos. We first demonstrate our approach with a small simulation study, then with a richer photograph. Finally, we show that the Bayesian edge detector gives considerable improvement over the Sobel operator for rich photos.
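For contrast with the Bayesian detector, the classical Sobel operator itself is simple to state: two 3×3 kernels estimate horizontal and vertical intensity gradients, and their magnitude marks edges. A minimal sketch on a synthetic image (not the photographs used in the project):

```python
def sobel_magnitude(img):
    """Gradient magnitude from the horizontal and vertical Sobel kernels."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to vertical edges
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # responds to horizontal edges
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):                   # skip the one-pixel border
        for j in range(1, w - 1):
            gx = sum(kx[a][b] * img[i + a - 1][j + b - 1]
                     for a in range(3) for b in range(3))
            gy = sum(ky[a][b] * img[i + a - 1][j + b - 1]
                     for a in range(3) for b in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out

# 5x6 image: dark left half, bright right half -> vertical edge at column 3.
img = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
mag = sobel_magnitude(img)
```

The magnitude is large only at the two columns straddling the intensity jump and zero in the flat regions, which is exactly the behavior that breaks down once noise is added and that motivates a model-based detector.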

Effect of filling methods on the forecasting of time series with missing values

The Gulf of Alaska Mooring (GAK1) monitoring data set is an irregular time series of temperature and salinity at various depths in the Gulf of Alaska. One approach to analyzing data from an irregular time series is to regularize the series by imputing or filling in missing values. In this project we investigated and compared four methods (denoted APPROX, SPLINE, LOCF, and OMIT) of doing this. Simulation was used to evaluate the performance of each filling method on parameter estimation and forecasting precision for an autoregressive integrated moving average (ARIMA) model. Simulations showed differences among the four methods in terms of forecast precision and parameter estimate bias. These differences depended on the true values of the model parameters as well as on the percentage of data missing. Among the four methods, OMIT performed the best and SPLINE performed the worst. We also illustrate the application of the four methods to forecasting the GAK1 monitoring time series, and discuss the results.
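Three of the four filling methods are straightforward to sketch (SPLINE needs a spline fitter and is omitted). The behaviors below follow common definitions of linear interpolation, last-observation-carried-forward, and omission; whether they match the project's exact implementations is an assumption:

```python
def fill_locf(xs):
    """LOCF: last observation carried forward; leading gaps stay missing."""
    out, last = [], None
    for x in xs:
        if x is not None:
            last = x
        out.append(last)
    return out

def fill_approx(xs):
    """APPROX-style linear interpolation between neighbouring observed
    values (interior gaps only; the ends are left unfilled here)."""
    out = list(xs)
    known = [i for i, x in enumerate(xs) if x is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            w = (i - a) / (b - a)
            out[i] = xs[a] * (1 - w) + xs[b] * w
    return out

def fill_omit(xs):
    """OMIT: drop missing values entirely (shortens the series)."""
    return [x for x in xs if x is not None]

# A short series with two gaps (None marks a missing observation).
series = [4.0, None, None, 7.0, 8.0, None, 6.0]
```

Note that OMIT changes the time spacing of the series, which is why its good performance in the simulations is somewhat surprising for an ARIMA model that assumes regular intervals.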

Estimating confidence intervals on accuracy in classification in machine learning

This paper explores various techniques for estimating a confidence interval on accuracy for machine learning algorithms. Confidence intervals on accuracy may be used to rank machine learning algorithms. We investigate bootstrapping, leave-one-out cross-validation, and conformal prediction. These techniques are applied to the following machine learning algorithms: support vector machines, bagging, AdaBoost, and random forests. Confidence intervals are produced on a total of nine datasets, three real and six simulated. We found that, in general, no technique was particularly successful at always capturing the accuracy; however, leave-one-out cross-validation was the most consistent of all techniques across all datasets.
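The percentile bootstrap for a confidence interval on accuracy can be sketched directly from 0/1 correctness flags. The flags below are hypothetical, and this generic recipe is not claimed to be the paper's exact procedure:

```python
import random

random.seed(0)

def bootstrap_accuracy_ci(correct, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap confidence interval for classification
    accuracy, given per-case 0/1 correctness flags."""
    n = len(correct)
    accs = []
    for _ in range(n_boot):
        # Resample the test cases with replacement and recompute accuracy.
        resample = [correct[random.randrange(n)] for _ in range(n)]
        accs.append(sum(resample) / n)
    accs.sort()
    lo = accs[int(n_boot * alpha / 2)]
    hi = accs[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical test-set results: 80 of 100 cases classified correctly.
flags = [1] * 80 + [0] * 20
lo, hi = bootstrap_accuracy_ci(flags)
```

Whether such an interval "captures the accuracy" in repeated experiments is precisely the coverage question the paper investigates.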

Exact and numerical solutions for Stokes flow in glaciers

We begin with an overview of the fluid mechanics governing ice flow. We review a 1985 result due to Balise and Raymond giving exact solutions for a glaciologically relevant Stokes problem. We extend this result by giving exact formulas for the pressure and for the basal stress. This leads to a theorem giving a necessary condition on the basal velocity of a gravity-induced flow in a rectangular geometry. We describe the finite element method for solving the same problem numerically. We present a concise implementation using FEniCS, a freely available software package, and discuss the convergence of the numerical method to the exact solution. We describe how to fix an error in a recently published model.

An existence theorem for solutions to a model problem with Yamabe-positive metric for conformal parameterizations of the Einstein constraint equations

We use the conformal method to investigate solutions of the vacuum Einstein constraint equations on a manifold with a Yamabe-positive metric. To do so, we develop a model problem with symmetric data on Sⁿ⁻¹ × S¹. We specialize the model problem to a two-parameter family of conformal data, and find that no solutions exist when the transverse-traceless tensor is identically zero. When the transverse-traceless tensor is nonzero, we obtain an existence theorem in both the near-constant mean curvature and far-from-constant mean curvature regimes.

Expectation maximization and latent class models

Latent tree models are tree-structured graphical models in which some random variables are observable while others are latent. These models are used to model data in many areas, such as bioinformatics, phylogenetics, and computer vision. This work contains some background on latent tree models and algebraic geometry, with the goal of estimating the volume of the latent tree model known as the 3-leaf model M₂ (where the root is a hidden variable with 2 states and is the parent of three observable variables with 2 states) in the probability simplex Δ₇, and of estimating the volume of the latent tree model known as the 3-leaf model M₃ (where the root is a hidden variable with 3 states and is the parent of two observable variables with 3 states and one observable variable with 2 states) in the probability simplex Δ₁₇. For the model M₃, we estimate that roughly 0.015% of distributions arise from stochastic parameters, roughly 64.742% arise from real parameters, and roughly 35.206% arise from complex parameters. We also discuss the algebraic boundary of these models and observe the behavior of the estimates of the expectation maximization (EM) algorithm, an iterative method typically used to try to find a maximum likelihood estimator.
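The EM algorithm for a latent class model of the 3-leaf kind described above can be sketched directly: alternate between computing each observation's responsibility under the hidden root state (E-step) and re-estimating the parameters (M-step). The model below is a binary-root analogue of M₂ with synthetic data, not the thesis's computation:

```python
import random

random.seed(2)

# 3-leaf latent class model: a hidden binary root Z and three binary
# leaves X1, X2, X3 that are conditionally independent given Z.

def em(data, n_iter=100):
    pi = 0.6                                 # P(Z = 1), starting guess
    p = [[0.3, 0.4, 0.5],                    # p[z][j] = P(X_j = 1 | Z = z)
         [0.7, 0.6, 0.5]]
    for _ in range(n_iter):
        # E-step: responsibility r = P(Z = 1 | x) for each observation.
        resp = []
        for x in data:
            like = [1.0 - pi, pi]
            for z in (0, 1):
                for j in (0, 1, 2):
                    like[z] *= p[z][j] if x[j] else 1.0 - p[z][j]
            resp.append(like[1] / (like[0] + like[1]))
        # M-step: re-estimate the mixing weight and leaf probabilities.
        n1 = sum(resp)
        pi = n1 / len(data)
        for j in (0, 1, 2):
            p[1][j] = sum(r * x[j] for r, x in zip(resp, data)) / n1
            p[0][j] = sum((1 - r) * x[j]
                          for r, x in zip(resp, data)) / (len(data) - n1)
    return pi, p

# Simulate from known, well-separated parameters and refit with EM.
true_pi = 0.5
true_p = [[0.9, 0.9, 0.9], [0.1, 0.1, 0.1]]
data = []
for _ in range(2000):
    z = 1 if random.random() < true_pi else 0
    data.append([1 if random.random() < true_p[z][j] else 0
                 for j in (0, 1, 2)])

pi_hat, p_hat = em(data)
```

Because the hidden labels are unidentifiable up to a swap, checks of the fit should use label-invariant quantities such as the implied marginal P(X₁ = 1).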

An exploration of two infinite families of snarks

In this paper, we generalize a single example of a snark that admits a drawing with even rotational symmetry into two infinite families, using a voltage graph construction technique derived from cyclic Pseudo-Loupekine snarks. We expose an enforced chirality in coloring the underlying 5-pole that generated the known example, and use this fact to show that the members of the infinite families are in fact snarks. We explore the construction of these families in terms of the blowup construction. We show that a graph in either family with rotational symmetry of order m has an automorphism group of order m·2ᵐ⁺¹. The oddness of graphs in both families is determined exactly, and is shown to increase linearly with the order of rotational symmetry.

Extending the Lattice-Based Smoother using a generalized additive model

The Lattice-Based Smoother was introduced by McIntyre and Barry (2017) to estimate a surface defined over an irregularly shaped region. In this paper we consider extending their method to allow for additional covariates and non-continuous responses. We describe our extension, which utilizes the framework of generalized additive models. A simulation study shows that our method is comparable to the soap film smoother of Wood et al. (2008) under a number of different conditions. Finally, we illustrate the method's practical use by applying it to a real data set.

Gaussian process convolutions for Bayesian spatial classification

We compare three models for their ability to perform binary spatial classification. A geospatial data set consisting of observations that are either permafrost or not is used for this comparison. All three models use an underlying Gaussian process. The first model treats this process as the log-odds of a positive classification (i.e., as permafrost). The second model uses a cutoff: any locations where the process is positive are classified positively, while those where it is negative are classified negatively, and a probability of misclassification then gives the likelihood. The third model depends on two separate processes, the first representing a positive classification and the second a negative classification; of these two, the process with the greater value at a location provides the classification, and a probability of misclassification is again used to formulate the likelihood. In all three cases, realizations of the underlying Gaussian processes were generated using a process convolution: a grid of knots (whose values were sampled using Markov chain Monte Carlo) was convolved with an anisotropic Gaussian kernel. All three models provided adequate classifications, but the single- and two-process models showed much tighter bounds on the border between the two states.
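The process convolution step can be sketched in one dimension: knot values are smoothed into a continuous surface by a Gaussian kernel. The knot locations and values below are fixed for illustration (in the model they would be MCMC draws), and the kernel is isotropic rather than the anisotropic one described above:

```python
import math

def process_convolution(knots, values, grid, bandwidth=1.0):
    """Evaluate a process convolution: a weighted sum of Gaussian
    kernels centred at the knot locations."""
    out = []
    for s in grid:
        total = 0.0
        for k, v in zip(knots, values):
            total += v * math.exp(-0.5 * ((s - k) / bandwidth) ** 2)
        out.append(total)
    return out

# Knots on a coarse grid; their values drive the whole surface, which is
# what makes the knot grid a dimension-reduction device.
knots = [0.0, 2.0, 4.0, 6.0]
values = [1.0, -0.5, 0.8, 0.2]
grid = [i * 0.5 for i in range(13)]        # evaluation points 0.0 .. 6.0
field = process_convolution(knots, values, grid)
```

Sampling only the knot values (here four numbers) rather than the process at every data location is what keeps the MCMC tractable.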

The geometry in geometric algebra

We present an axiomatic development of geometric algebra. One may think of a geometric algebra as allowing one to add and multiply subspaces of a vector space. Properties of the geometric product are proven, and derived products, called the wedge and contraction products, are introduced. Linear-algebraic and geometric concepts such as linear independence and orthogonality may be expressed through these derived products. Some examples with geometric algebra are then given.
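A tiny concrete instance of these products: in the geometric algebra of the plane, Cl(2,0), a multivector has a scalar, two vector, and one bivector component, and the geometric and wedge products can be tabulated directly. The representation below is a standard one, not drawn from this particular axiomatic development:

```python
def gp(a, b):
    """Geometric product in the plane algebra Cl(2,0).
    Multivectors are [scalar, e1, e2, e12] with e1*e1 = e2*e2 = 1."""
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return [a0*b0 + a1*b1 + a2*b2 - a12*b12,       # scalar part
            a0*b1 + a1*b0 - a2*b12 + a12*b2,       # e1 part
            a0*b2 + a2*b0 + a1*b12 - a12*b1,       # e2 part
            a0*b12 + a12*b0 + a1*b2 - a2*b1]       # e12 (bivector) part

def wedge(a, b):
    """Wedge (outer) product of two vectors: the antisymmetric part of
    the geometric product. (Valid as written for grade-1 inputs.)"""
    ab, ba = gp(a, b), gp(b, a)
    return [(x - y) / 2 for x, y in zip(ab, ba)]

e1, e2 = [0, 1, 0, 0], [0, 0, 1, 0]
square = gp(e1, e1)     # a vector times itself gives its squared length
area = wedge(e1, e2)    # the unit bivector e12: the oriented plane they span
```

The wedge of two parallel vectors vanishes, which is the algebraic expression of linear dependence mentioned in the abstract.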