• Analyzing tree distribution and abundance in Yukon-Charley Rivers National Preserve: developing geostatistical Bayesian models with count data

      Winder, Samantha; Short, Margaret; Roland, Carl; Goddard, Scott; McIntyre, Julie (2018-05)
      Species distribution models (SDMs) describe the relationship between where a species occurs and the underlying environmental conditions. For this project, I created SDMs for the five tree species that occur in Yukon-Charley Rivers National Preserve (YUCH) in order to gain insight into which environmental covariates are important for each species, and what effect each environmental condition has on that species' expected occurrence or abundance. I discuss some of the issues involved in creating SDMs, including whether to incorporate spatially explicit error terms, and if so, how to do so with generalized linear models (GLMs), whose responses here are discrete counts. I ran a total of 10 distinct geostatistical SDMs using Markov chain Monte Carlo (Bayesian methods), and discuss the results here. I also compare these results from YUCH with results from a similar analysis conducted in Denali National Park and Preserve (DNPP).
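      As a rough illustration of the kind of model involved (not the project's actual YUCH models), the sketch below fits a Poisson GLM with a spatially correlated random effect by random-walk Metropolis; the data, covariance choices, and tuning constants are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- invented data: n sites, two environmental covariates, Poisson counts ---
n = 60
coords = rng.uniform(0, 10, size=(n, 2))               # site locations
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([0.5, 0.8, -0.4])

# exponential spatial covariance for the random effect w (range fixed here)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
Sigma = 0.5 * np.exp(-d / 2.0)
w_true = rng.multivariate_normal(np.zeros(n), Sigma)
y = rng.poisson(np.exp(X @ beta_true + w_true))

Sigma_inv = np.linalg.inv(Sigma + 1e-8 * np.eye(n))

def log_post(beta, w):
    # Poisson log-likelihood + flat prior on beta + GP prior on w
    eta = X @ beta + w
    return np.sum(y * eta - np.exp(eta)) - 0.5 * w @ Sigma_inv @ w

# random-walk Metropolis over the stacked vector (beta, w)
theta = np.zeros(3 + n)
lp = log_post(theta[:3], theta[3:])
keep = []
for it in range(20000):
    prop = theta + rng.normal(scale=0.02, size=theta.size)
    lp_prop = log_post(prop[:3], prop[3:])
    if np.log(rng.uniform()) < lp_prop - lp:           # Metropolis accept step
        theta, lp = prop, lp_prop
    if it >= 10000 and it % 10 == 0:                   # thin after burn-in
        keep.append(theta[:3].copy())

print("posterior means for beta:", np.round(np.mean(keep, axis=0), 2))
```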
    • An application of an integrated population model: estimating population size of the Fortymile caribou herd using limited data

      Inokuma, Megumi; Short, Margaret; Barry, Ron; Goddard, Scott (2017-05)
      An integrated population model (IPM) was employed to estimate the population size of the Fortymile caribou herd (FCH), utilizing multiple types of biological data. Current population size estimates of the FCH are made by the Alaska Department of Fish and Game (ADF&G) using an aerial photo census technique. Taking aerial photos for the counts requires certain environmental conditions, such as swarms of mosquitoes that drive the majority of caribou into wide open spaces, as well as weather favorable enough to allow low-altitude flying in mid-June. These conditions have not been met in recent years, so there are no count estimates for those years. IPMs are an alternative method for estimating population size. IPMs contain three components: a stochastic component that explains the relationship between biological information and population size; demographic models that derive parameters from independently conducted surveys; and a link between IPM estimates and observed-count estimates. In this paper, we combine census count data, parturition data, calf and adult female survival data, and sex composition data, all of which were collected by ADF&G between 1990 and 2016. During this period there were 13 years, including two runs of five consecutive years, for which no photo census count estimates were available. We estimate the missing counts and the associated uncertainty using a Bayesian IPM. Our case study shows that IPMs are capable of estimating population size in years with missing count data when other biological data are available. We suggest that sensitivity analyses be done to learn the relationship between the amount of data and the accuracy of the estimates.
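      A minimal sketch of how an IPM's three components combine into one joint likelihood, with invented numbers standing in for the ADF&G-style survey data; a real analysis would sample the latent sizes N (including the missing years) and the demographic rates by MCMC rather than evaluate once.

```python
import numpy as np
from scipy.stats import binom, norm

# hypothetical stand-in inputs (herd size in thousands)
years = 10
counts = np.array([22.0, 24.5, np.nan, np.nan, 30.8, 33.0,
                   np.nan, 38.5, 41.0, np.nan])   # photo census; NaN = no count
surv_trials = np.full(years - 1, 80)              # collared adults monitored
surv_alive = np.array([70, 72, 68, 71, 73, 69, 74, 70, 72])

def log_lik(N, s_adult, recruit, sigma_proc, sigma_obs):
    """Joint log-likelihood of a minimal IPM: a demographic process model,
    an independent survival survey, and a link to the census counts."""
    ll = 0.0
    # (1) process model: next year's herd = survivors + recruits (+ noise)
    for t in range(years - 1):
        ll += norm.logpdf(N[t + 1], N[t] * (s_adult + recruit), sigma_proc)
    # (2) collar survival data inform s_adult on their own (binomial)
    ll += binom.logpmf(surv_alive, surv_trials, s_adult).sum()
    # (3) link latent sizes to photo census counts where a count exists
    obs = ~np.isnan(counts)
    ll += norm.logpdf(counts[obs], N[obs], sigma_obs).sum()
    return ll

N0 = np.linspace(22, 42, years)                   # one candidate trajectory
print(log_lik(N0, s_adult=0.88, recruit=0.15, sigma_proc=1.5, sigma_obs=1.0))
```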
    • An application of Bayesian variable selection to international economic data

      Tian, Xiang; Goddard, Scott; Barry, Ron; Short, Margaret; McIntyre, Julie (2017-06)
      GDP plays an important role in people's lives; when GDP increases, for example, the unemployment rate frequently decreases. In this project, we use four different Bayesian variable selection methods to verify economic theory regarding important predictors of GDP. The four methods are: g-prior variable selection with credible intervals, local empirical Bayes with credible intervals, variable selection by indicator function, and hyper-g prior variable selection. We then use four measures to compare the results of the various Bayesian variable selection methods: AIC, BIC, adjusted R-squared, and cross-validation.
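      For the g-prior method specifically, the Bayes factor of a candidate model against the intercept-only model has a closed form for fixed g (see, e.g., Liang et al. 2008), so selection reduces to enumerating subsets. A sketch on simulated data standing in for the macroeconomic series:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# toy stand-in for the data: response (GDP-like) vs. five candidate predictors
n, p = 50, 5
X = rng.normal(size=(n, p))
y = 1.0 + 0.9 * X[:, 0] - 0.6 * X[:, 2] + rng.normal(size=n)
g = float(n)                       # unit-information g-prior

def r_squared(cols):
    Z = np.column_stack([np.ones(n), X[:, list(cols)]])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

def log_bf(cols):
    """log Bayes factor of model `cols` vs. the intercept-only model
    under Zellner's g-prior with fixed g."""
    k, r2 = len(cols), r_squared(cols)
    return 0.5 * (n - 1 - k) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1 - r2))

models = [cols for r in range(p + 1) for cols in combinations(range(p), r)]
lbf = np.array([log_bf(c) for c in models])
post = np.exp(lbf - lbf.max()); post /= post.sum()   # uniform model prior
for i in np.argsort(post)[::-1][:3]:                 # top three models
    print(models[i], round(post[i], 3))
```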
    • Assessing year to year variability of inertial oscillation in the Chukchi Sea using the wavelet transform

      Leonard, David (2016-05)
      Three years of ocean drifter data from the Chukchi Sea were examined using the wavelet transform to investigate inertial oscillation. There was an increasing trend in the number and duration, and hence in the total proportion of time spent in, inertial oscillation events. Additionally, the Chukchi Sea seems to facilitate inertial oscillation that is easier to discern in north-south velocity records than in east-west velocity records. The data used in this analysis were transformed using wavelets, which are generally employed as a qualitative statistical method. Because of this, in addition to measurement error and random ocean noise, there is an extra source of variability and correlation that makes concrete statistical results challenging to obtain. Nevertheless, wavelets were an effective tool for isolating the specific period of inertial oscillation and examining how it changed over time.
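      A sketch of the basic computation, using PyWavelets on a synthetic hourly drifter record containing one inertial burst; the latitude, record length, and band width are all made up for the example.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)

# inertial period at ~71 N (Chukchi Sea): T = 2*pi / (2*Omega*sin(lat)) ~ 12.7 h
lat = np.deg2rad(71.0)
T_inertial_h = 2 * np.pi / (2 * 7.2921e-5 * np.sin(lat)) / 3600.0

# toy hourly north-south velocity: noise plus one inertial-frequency burst
dt_h = 1.0
t = np.arange(0, 24 * 30, dt_h)                    # 30 days, hourly
v = 0.2 * rng.normal(size=t.size)
burst = (t > 200) & (t < 500)
v[burst] += 0.3 * np.cos(2 * np.pi * t[burst] / T_inertial_h)

# continuous wavelet transform (Morlet); freqs come back in cycles/hour
scales = np.arange(4, 128)
coeffs, freqs = pywt.cwt(v, scales, "morl", sampling_period=dt_h)
power = np.abs(coeffs) ** 2

# compare power in a band around the inertial period, inside vs. outside the burst
band = np.abs(1.0 / freqs - T_inertial_h) < 1.0    # periods within 1 h
print("inertial-band power, burst vs. quiet:",
      power[band][:, burst].mean(), power[band][:, ~burst].mean())
```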
    • Bayesian predictive process models for historical precipitation data of Alaska and southwestern Canada

      Vanney, Peter; Short, Margaret; Goddard, Scott; Barry, Ronald (2016-05)
      In this paper we apply hierarchical Bayesian predictive process models to historical precipitation data using the spBayes R package. Classical and hierarchical Bayesian techniques for spatial analysis and modeling require large matrix inversions and decompositions, which can take prohibitive amounts of time to run (for n observations, on the order of n³ operations). Bayesian predictive process models share the spatial framework of hierarchical Bayesian models but fit the spatial process at a small set of locations (called knots), which allows for large-scale dimension reduction and results in much smaller matrix inversions and faster computing times. These computationally less expensive models allow average desktop computers to analyze spatially referenced datasets in excess of 20,000 observations in an acceptable amount of time.
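      The dimension-reduction trick is visible in a few lines: the parent process is never factorized at all n sites; only the m × m knot covariance is, and the process is then interpolated to the data sites. A toy sketch with an exponential covariance (not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)

def expcov(a, b, sigma2=1.0, phi=0.1):
    """Exponential covariance between two coordinate arrays."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sigma2 * np.exp(-phi * d)

n, m = 20000, 100                       # n data sites, m knots
sites = rng.uniform(0, 100, size=(n, 2))
knots = rng.uniform(0, 100, size=(m, 2))

# a full GP would need an n x n (20000 x 20000) covariance factorization;
# the predictive process only ever factorizes the m x m knot covariance
C_star = expcov(knots, knots) + 1e-10 * np.eye(m)   # m x m
c_cross = expcov(sites, knots)                      # n x m
w_star = np.linalg.cholesky(C_star) @ rng.normal(size=m)       # knot process
w_tilde = c_cross @ np.linalg.solve(C_star, w_star)            # E[w(s) | w*]

print(w_tilde.shape)   # (20000,): a spatial effect at every site, O(n m + m^3) work
```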
    • A comparison of discrete inverse methods for determining parameters of an economic model

      Jurkowski, Caleb; Maxwell, David; Short, Margaret; Bueler, Edward (2017-08)
      We consider a time-dependent spatial economic model for capital in which the region's production function is a parameter. This forward model predicts the distribution of capital in a region based on that region's production function. We solve the inverse problem based on this model: given data describing the capital of a region, we wish to determine the production function through discretization. Inverse problems are generally ill-posed, which in this case means that if the data describing the capital are changed slightly, the solution of the inverse problem could change dramatically. The solution we seek is therefore a probability distribution over parameters. However, this probability distribution is complex, and at best we can describe some of its features. We characterize the solution to this inverse problem using two different techniques, Markov chain Monte Carlo (the Metropolis algorithm) and least squares optimization, and compare the summary statistics coming from each method.
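      A sketch of the two approaches on a deliberately simple stand-in forward model (a Cobb-Douglas-style curve, not the paper's capital model): least squares returns a point estimate, while the Metropolis algorithm returns posterior draws whose spread quantifies the ill-posedness.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# hypothetical forward model: theta = (A, alpha) predicts capital-like data
x = np.linspace(0.5, 5, 40)
def forward(theta):
    A, alpha = theta
    return A * x ** alpha

theta_true = np.array([2.0, 0.6])
sigma = 0.1
data = forward(theta_true) + rng.normal(scale=sigma, size=x.size)

# --- approach 1: least squares point estimate ---
fit = least_squares(lambda th: forward(th) - data, x0=[1.0, 0.5])
print("least squares:", fit.x)

# --- approach 2: Metropolis algorithm; the posterior *is* the solution ---
def log_post(theta):
    if np.any(theta <= 0):
        return -np.inf                    # flat prior on positive parameters
    r = forward(theta) - data
    return -0.5 * np.sum(r ** 2) / sigma ** 2

theta, lp = np.array([1.0, 0.5]), log_post(np.array([1.0, 0.5]))
draws = []
for _ in range(30000):
    prop = theta + rng.normal(scale=0.02, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    draws.append(theta.copy())
draws = np.array(draws[5000:])            # discard burn-in
print("posterior mean:", draws.mean(axis=0), "posterior sd:", draws.std(axis=0))
```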
    • Edge detection using Bayesian process convolutions

      Lang, Yanda; Short, Margaret; Barry, Ron; Goddard, Scott; McIntyre, Julie (2017-05)
      This project describes a method for edge detection in images. We develop a Bayesian approach to edge detection using a process convolution model. Our method has some advantages over the classical Sobel edge detector; in particular, our Bayesian spatial detector works well for rich but noisy photos. We first demonstrate our approach with a small simulation study, then with a richer photograph. Finally, we show that for rich photos the Bayesian edge detector gives a considerable improvement in performance over the Sobel operator.
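      For orientation, here is the classical baseline plus a crude smoothed variant; the actual Bayesian detector replaces the fixed Gaussian smoothing below with a process convolution whose knot values are sampled by MCMC.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# toy image: bright square on a dark background, heavy noise
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
noisy = img + rng.normal(scale=0.4, size=img.shape)

# classical Sobel edge detector: gradient magnitude from two directional filters
gx = ndimage.sobel(noisy, axis=0)
gy = ndimage.sobel(noisy, axis=1)
sobel_edges = np.hypot(gx, gy)

# crude stand-in for the latent-surface smoothing the process convolution
# provides: smooth first, then take the gradient
smoothed = ndimage.gaussian_filter(noisy, sigma=2.0)
gx, gy = ndimage.sobel(smoothed, axis=0), ndimage.sobel(smoothed, axis=1)
smooth_edges = np.hypot(gx, gy)

# response in a flat (edge-free) corner: lower is better for noisy images
print("noise-floor edge response, raw vs. smoothed:",
      sobel_edges[1:10, 1:10].mean().round(2),
      smooth_edges[1:10, 1:10].mean().round(2))
```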
    • Effect of filling methods on the forecasting of time series with missing values

      Cheng, Mingyuan (2014-12)
      The Gulf of Alaska Mooring (GAK1) monitoring data set is an irregular time series of temperature and salinity at various depths in the Gulf of Alaska. One approach to analyzing data from an irregular time series is to regularize the series by imputing, or filling in, the missing values. In this project we investigated and compared four methods of doing this (denoted APPROX, SPLINE, LOCF, and OMIT). Simulation was used to evaluate the performance of each filling method on parameter estimation and forecasting precision for an autoregressive integrated moving average (ARIMA) model. Simulations showed differences among the four methods in terms of forecast precision and parameter estimate bias. These differences depended on the true values of the model parameters as well as on the percentage of data missing. Among the four methods used in this project, OMIT performed the best and SPLINE the worst. We also illustrate the application of the four methods to forecasting the GAK1 monitoring time series, and discuss the results.
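      The four fills are one-liners in pandas; a sketch on a simulated AR(1) series with values deleted at random (not the GAK1 data), refitting an ARIMA model after each fill:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# simulate an AR(1) series, then delete 20% of the values at random
n = 300
z = np.zeros(n)
for t in range(1, n):
    z[t] = 0.7 * z[t - 1] + rng.normal()
s = pd.Series(z)
s[rng.choice(n, size=n // 5, replace=False)] = np.nan

fills = {
    "APPROX": s.interpolate(method="linear"),           # linear interpolation
    "SPLINE": s.interpolate(method="spline", order=3),  # cubic spline
    "LOCF":   s.ffill(),                                # last obs. carried forward
    "OMIT":   s.dropna().reset_index(drop=True),        # drop gaps, concatenate
}

for name, filled in fills.items():
    res = ARIMA(filled.dropna().to_numpy(), order=(1, 0, 0)).fit()
    print(f"{name}: AR(1) estimate = {res.params[1]:.3f}")  # const, ar.L1, sigma2
```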
    • Extending the Lattice-Based Smoother using a generalized additive model

      Rakhmetova, Gulfaya; McIntyre, Julie; Short, Margaret; Goddard, Scott (2017-12)
      The lattice-based smoother was introduced by McIntyre and Barry (2017) to estimate a surface defined over an irregularly shaped region. In this paper we consider extending their method to allow for additional covariates and non-continuous responses. We describe our extension, which utilizes the framework of generalized additive models. A simulation study shows that our method is comparable to the soap film smoother of Wood et al. (2008) under a number of different conditions. Finally, we illustrate the method's practical use by applying it to a real data set.
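      As a generic illustration of the GAM framework being invoked (not the lattice-based extension itself, which would swap the spline smooths for lattice-based ones), statsmodels can fit spline smooths over coordinates alongside a parametric covariate and a count response:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(0)

# toy surface over (x1, x2) plus one extra covariate z, Poisson counts
n = 500
df = pd.DataFrame({"x1": rng.uniform(size=n), "x2": rng.uniform(size=n),
                   "z": rng.normal(size=n)})
eta = np.sin(3 * df["x1"]) + (df["x2"] - 0.5) ** 2 + 0.4 * df["z"]
df["y"] = rng.poisson(np.exp(eta))

# spline smooths over the coordinates; z enters parametrically
smoother = BSplines(df[["x1", "x2"]], df=[8, 8], degree=[3, 3])
model = GLMGam.from_formula("y ~ z", data=df, smoother=smoother,
                            family=sm.families.Poisson())
res = model.fit()
print(res.params["z"])   # parametric effect estimated alongside the smooth surface
```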
    • Gaussian process convolutions for Bayesian spatial classification

      Best, John K.; Short, Margaret; Goddard, Scott; Barry, Ron; McIntyre, Julie (2016-05)
      We compare three models for their ability to perform binary spatial classification, using a geospatial data set of observations that are either permafrost or not. All three models use an underlying Gaussian process. The first model treats the process as the log-odds of a positive classification (i.e., as permafrost). The second model uses a cutoff: locations where the process is positive are classified positively, those where it is negative are classified negatively, and a probability of misclassification then gives the likelihood. The third model depends on two separate processes, the first representing a positive classification and the second a negative classification; whichever process has the greater value at a location provides the classification, and a probability of misclassification is again used to formulate the likelihood. In all three cases, realizations of the underlying Gaussian processes were generated using a process convolution: a grid of knots (whose values were sampled using Markov chain Monte Carlo) was convolved with an anisotropic Gaussian kernel. All three models provided adequate classifications, but the single-process cutoff and two-process models showed much tighter bounds on the border between the two states.
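      The shared building block, a process convolution with an anisotropic Gaussian kernel, is compact enough to sketch; knot values are drawn once here, whereas the three models sample them by MCMC.

```python
import numpy as np

rng = np.random.default_rng(0)

# knot grid over the study region; in the real models the knot values are
# MCMC samples, here they are drawn once to show the construction
kx, ky = np.meshgrid(np.linspace(0, 10, 8), np.linspace(0, 10, 8))
knots = np.column_stack([kx.ravel(), ky.ravel()])
knot_vals = rng.normal(size=knots.shape[0])

# anisotropic Gaussian kernel: different ranges in the x and y directions
Linv = np.diag([1 / 2.0, 1 / 0.8])

def gp_surface(locs):
    """Process convolution: kernel-weighted sum of knot values."""
    diff = (locs[:, None, :] - knots[None, :, :]) @ Linv.T
    K = np.exp(-0.5 * np.sum(diff ** 2, axis=-1))
    return K @ knot_vals

# model 1 of the three: the surface is the log-odds of permafrost
locs = rng.uniform(0, 10, size=(5, 2))
z = gp_surface(locs)
print(np.round(1 / (1 + np.exp(-z)), 2))   # P(permafrost) at five sites

# model 2 would classify by sign(z) with a misclassification probability;
# model 3 uses two such surfaces and classifies by whichever is larger.
```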
    • An investigation into the effectiveness of simulation-extrapolation for correcting measurement error-induced bias in multilevel models

      Custer, Christopher (2015-04)
      This paper investigates correcting the bias introduced into multilevel models by measurement error. The proposed method for this correction is simulation-extrapolation (SIMEX). The paper begins with a detailed discussion of measurement error and its effects on parameter estimation. We then describe the simulation-extrapolation method and how it corrects for the bias introduced by measurement error. Multilevel models and their corresponding parameters are also defined before performing a simulation. The simulation involves estimating the multilevel model parameters using the true explanatory variables, the error-prone observed variables, and two different SIMEX techniques. The estimates obtained from the true explanatory values were used as a baseline for judging the effectiveness of the SIMEX method in correcting bias. From these results, we determined that SIMEX was very effective in correcting the bias in estimates of the fixed effects parameters, often providing estimates not significantly different from those derived using the true explanatory variables. The simulation also suggested that the SIMEX approach was effective in correcting bias for the random slope variance estimates, but not for the random intercept variance estimates. Using the simulation results as a guideline, we then applied the SIMEX approach to an orthodontics dataset to illustrate its application to real data.
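      A minimal SIMEX sketch on a single-level regression (the multilevel case follows the same recipe with a mixed-model fit in place of the slope): add extra measurement error at increasing levels, track the bias, and extrapolate the trend back to the no-error level λ = -1.

```python
import numpy as np

rng = np.random.default_rng(0)

# true model y = b0 + b1*x, but we only observe w = x + u (measurement error)
n, b0, b1 = 2000, 1.0, 2.0
x = rng.normal(size=n)
sigma_u2 = 0.5                             # measurement error variance (known)
w = x + rng.normal(scale=np.sqrt(sigma_u2), size=n)
y = b0 + b1 * x + rng.normal(scale=0.5, size=n)

def slope(xvar):
    return np.polyfit(xvar, y, 1)[0]       # naive OLS slope

# SIMEX: add *extra* error at levels lambda, average the (biased) estimates,
# then extrapolate the trend back to lambda = -1 (zero measurement error)
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = []
for lam in lambdas:
    sims = [slope(w + rng.normal(scale=np.sqrt(lam * sigma_u2), size=n))
            for _ in range(200)]
    est.append(np.mean(sims))

quad = np.polyfit(lambdas, est, 2)         # quadratic extrapolant
simex = np.polyval(quad, -1.0)
print(f"naive: {slope(w):.3f}  SIMEX: {simex:.3f}  true: {b1}")
```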
    • Moose abundance estimation using finite population block kriging on Togiak National Wildlife Refuge, Alaska

      Frye, Graham G. (2016-12)
      Monitoring the size and demographic characteristics of animal populations is fundamental to the fields of wildlife ecology and wildlife management. A diverse suite of population monitoring methods has been developed over the past century, but challenges in obtaining rigorous population estimates remain. I used simulation to address survey design issues for monitoring a moose population at Togiak National Wildlife Refuge in southwestern Alaska using finite population block kriging. In the first chapter, I compared the bias in the Geospatial Population Estimator (GSPE; which uses finite population block kriging to estimate animal abundance) between two survey unit configurations. After finding that substantial bias was induced by the historical survey unit configuration, I concluded that the "standard" unit configuration was preferable because it allowed unbiased estimation. In the second chapter, I examined the effect of sampling intensity on the performance of the GSPE. I concluded that bias and confidence interval coverage were unaffected by sampling intensity, whereas the coefficient of variation (CV) and root mean squared error (RMSE) decreased with increasing sampling intensity. In the final chapter, I examined the effect of spatial clustering by moose on model performance. Highly clustered moose distributions induced a small amount of positive bias, confidence interval coverage below the nominal rate, higher CV, and higher RMSE. Some of these issues were ameliorated by increasing sampling intensity, but if highly clustered distributions of moose are expected, then substantially greater sampling intensities than those examined here may be required.
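      A stripped-down version of the kriging step: sample a fraction of the survey units, predict the unsampled units by ordinary kriging with a fixed covariance, and sum. The grid, covariance, and sampling fraction are invented; the GSPE additionally estimates covariance parameters and a prediction variance for the total.

```python
import numpy as np

rng = np.random.default_rng(0)

# study area gridded into units; moose counts with spatial correlation
g = 15
xy = np.array([(i, j) for i in range(g) for j in range(g)], dtype=float)
d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
C = 4.0 * np.exp(-d / 3.0)                       # unit-level covariance
counts = np.clip(np.round(5.0 + np.linalg.cholesky(C + 1e-8 * np.eye(g * g))
                          @ rng.normal(size=g * g)), 0, None)

# sample 40% of units; predict the rest by ordinary kriging, then sum
s = rng.choice(g * g, size=int(0.4 * g * g), replace=False)
u = np.setdiff1d(np.arange(g * g), s)

# ordinary kriging system: [C_ss 1; 1' 0][lam; m] = [C_su; 1]
A = np.block([[C[np.ix_(s, s)], np.ones((s.size, 1))],
              [np.ones((1, s.size)), np.zeros((1, 1))]])
rhs = np.vstack([C[np.ix_(s, u)], np.ones((1, u.size))])
lam = np.linalg.solve(A, rhs)[:-1]               # weights per unsampled unit
pred = counts[s] @ lam

total_hat = counts[s].sum() + pred.sum()
print(f"estimated total: {total_hat:.0f}  true total: {counts.sum():.0f}")
```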
    • Reliability analysis of reconstructing phylogenies under long branch attraction conditions

      Dissanayake, Ranjan; Allman, Elizabeth; McIntyre, Julie; Short, Margaret; Goddard, Scott (2018-05)
      In this simulation study we examined the reliability of three phylogenetic reconstruction techniques under long branch attraction (LBA) conditions: Maximum Parsimony (MP), Neighbor Joining (NJ), and Maximum Likelihood (ML). Data were simulated under five DNA substitution models (JC, K2P, F81, HKY, and GTR) on four-taxon trees. Two branch-length parameters of the four-taxon trees, ranging from 0.05 to 0.75 in increments of 0.02, were used to simulate DNA data under each model. For each model we simulated DNA sequences of 100, 250, 500, and 1000 sites, with 100 replicates each. Given enough data, maximum likelihood is the most reliable of the three methods examined in this study for reconstructing phylogenies under LBA conditions. We also find that MP is the most sensitive to LBA conditions and that NJ performs comparatively well under them.
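      The LBA phenomenon is easy to reproduce: simulate under JC on a quartet with two long branches in a Felsenstein-zone arrangement and score the three topologies by parsimony (Fitch). With the invented branch lengths below, the wrong grouping of the two long branches typically attains the best (lowest) score.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(state, t):
    """Jukes-Cantor: mutate each site of `state` along a branch of length t."""
    p_change = 0.75 * (1 - np.exp(-4.0 * t / 3.0))
    out = state.copy()
    hit = rng.uniform(size=state.shape) < p_change
    out[hit] = (state[hit] + rng.integers(1, 4, size=hit.sum())) % 4
    return out

# true tree ((A,B),(C,D)) with long branches to A and C (Felsenstein zone)
nsites, long_b, short_b = 1000, 0.7, 0.05
root = rng.integers(0, 4, size=nsites)     # ancestor of A and B
A = evolve(root, long_b)
B = evolve(root, short_b)
Y = evolve(root, short_b)                  # other internal node
C = evolve(Y, long_b)
D = evolve(Y, short_b)

def fitch_quartet(p, q, r, s):
    """Parsimony score of topology ((p,q),(r,s)) summed over sites (Fitch)."""
    score = 0
    for a, b, c, d in zip(p, q, r, s):
        sa, sb = {a} & {b} or {a, b}, {c} & {d} or {c, d}
        score += ({a} & {b} == set()) + ({c} & {d} == set()) + (sa & sb == set())
    return score

print("((A,B),(C,D)) true :", fitch_quartet(A, B, C, D))
print("((A,C),(B,D)) LBA  :", fitch_quartet(A, C, B, D))   # usually wins (lowest)
print("((A,D),(B,C))      :", fitch_quartet(A, D, B, C))
```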
    • Statistical analysis of species tree inference

      Dajles, Andres; Rhodes, John; Allman, Elizabeth; Goddard, Scott; Short, Margaret; Barry, Ron (2016-05)
      It is known that the STAR and USTAR algorithms are statistically consistent techniques for inferring species tree topologies from a large set of gene trees. However, if the set of gene trees is small, the accuracy of STAR and USTAR in determining species tree topologies is unknown. Furthermore, it is unknown how introducing roots on the gene trees affects the performance of STAR and USTAR. Therefore, we show that when given sets of 1, 3, 6, or 10 gene trees, the STAR and USTAR algorithms with Neighbor Joining perform relatively well in two different cases: one where the gene trees are rooted at the outgroup and the STAR-inferred species tree is also rooted at the outgroup, and the other where the gene trees are not rooted at the outgroup but the USTAR-inferred species tree is rooted at the outgroup.
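      Neighbor Joining is the clustering engine inside both STAR and USTAR (applied to averaged internode-distance matrices computed from the gene trees); a self-contained NJ sketch on a toy additive distance matrix:

```python
import numpy as np

def neighbor_joining(D, names):
    """Plain Neighbor Joining on a distance matrix; returns a nested-tuple
    tree topology. STAR/USTAR feed this averaged internode distances."""
    D = np.asarray(D, dtype=float)
    nodes = list(names)
    while len(nodes) > 2:
        n = len(nodes)
        r = D.sum(axis=1)
        # Q-criterion: join the pair minimizing (n-2)*D_ij - r_i - r_j
        Q = (n - 2) * D - r[:, None] - r[None, :]
        np.fill_diagonal(Q, np.inf)
        i, j = np.unravel_index(np.argmin(Q), Q.shape)
        # distances from the new internal node to all remaining nodes
        d_new = 0.5 * (D[i] + D[j] - D[i, j])
        keep = [k for k in range(n) if k not in (i, j)]
        D = np.vstack([np.column_stack([D[np.ix_(keep, keep)], d_new[keep]]),
                       np.append(d_new[keep], 0.0)])
        nodes = [nodes[k] for k in keep] + [(nodes[i], nodes[j])]
    return (nodes[0], nodes[1])

# toy additive distances that should recover the quartet ((A,B),(C,D))
D = [[0, 2, 6, 6],
     [2, 0, 6, 6],
     [6, 6, 0, 2],
     [6, 6, 2, 0]]
print(neighbor_joining(D, ["A", "B", "C", "D"]))
```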
    • Testing multispecies coalescent simulators with summary statistics

      Baños Cervantes, Hector Daniel; Allman, Elizabeth; Rhodes, John; Goddard, Scott; McIntyre, Julie; Barry, Ron (2018-12)
      The multispecies coalescent (MSC) model is increasingly used in phylogenetics to describe the formation of gene trees (depicting the direct ancestral relationships of sampled lineages) within species trees (depicting the branching of species from their common ancestor). A number of MSC simulators have been implemented, and these are often used to test inference methods built on the model. However, it is not clear from the literature that these simulators are always adequately tested. In this project, we formulate tools for testing these simulators and use them to show that of four well-known coalescent simulators, Mesquite, Hybrid-Lambda, SimPhy, and Phybase, only SimPhy performs correctly according to these tests.
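      The flavor of such a test: for a rooted three-taxon species tree, the MSC gives closed-form gene tree topology probabilities, so simulator output can be checked with a chi-square goodness-of-fit test. The sketch below tests a hand-rolled simulator rather than any of the four packages.

```python
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(0)

# species tree ((A,B),C) with internal branch T in coalescent units.
# Under the MSC the gene tree matches ((A,B),C) with prob 1 - (2/3)e^{-T};
# each of the two alternative topologies has prob (1/3)e^{-T}.
T = 1.0
p_match = 1 - (2 / 3) * np.exp(-T)
expected = np.array([p_match, (1 - p_match) / 2, (1 - p_match) / 2])

def simulate_topology():
    """Direct MSC simulation for 3 taxa: do A and B coalesce within the
    internal branch? If not, the first pair to coalesce above the root
    is uniform over the three possible pairs."""
    if rng.exponential(1.0) < T:
        return 0                       # matches the species tree
    return rng.integers(0, 3)          # deep coalescence: all three equally likely

ngenes = 20000
counts = np.bincount([simulate_topology() for _ in range(ngenes)], minlength=3)
stat, pval = chisquare(counts, expected * ngenes)
print("topology frequencies:", counts / ngenes, " chi-square p-value:", round(pval, 3))
```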
    • Toward an optimal solver for the obstacle problem

      Heldman, Max; Bueler, Ed; Maxwell, David; Rhodes, John (2018-04)
      An optimal algorithm for solving a problem with m degrees of freedom is one that computes a solution in O(m) time. In this paper, we discuss a class of optimal algorithms for the numerical solution of PDEs called multigrid methods. We go on to examine numerical solvers for the obstacle problem, a constrained PDE, with the goal of demonstrating optimality. We discuss two known algorithms, the so-called reduced space method (RSP) [BM03] and the multigrid-based projected full-approximation scheme (PFAS) [BC83]. We compare the performance of PFAS and RSP on a few example problems, finding numerical evidence of optimality or near-optimality for PFAS.
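      The smoother inside PFAS is easy to exhibit on its own: projected Gauss-Seidel for a 1D obstacle problem, where each unconstrained Gauss-Seidel update is projected back above the obstacle. Run as a single-level iteration, as in this sketch, the sweep count grows with m; the multigrid hierarchy is what restores O(m) behavior.

```python
import numpy as np

# obstacle problem on (0,1): -u'' = f where u > psi, u >= psi, u(0) = u(1) = 0
m = 200
h = 1.0 / (m + 1)
x = np.linspace(h, 1 - h, m)
f = np.full(m, -8.0)                              # downward load on the membrane
psi = 0.3 - 2.0 * (x - 0.5) ** 2                  # parabolic obstacle underneath

u = np.maximum(psi, 0.0)
for sweep in range(5000):
    for i in range(m):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < m - 1 else 0.0
        gs = 0.5 * (left + right + h * h * f[i])  # unconstrained GS update
        u[i] = max(gs, psi[i])                    # project onto the constraint

active = u <= psi + 1e-10
print(f"{active.sum()} of {m} nodes in contact with the obstacle")
```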
    • The treatment of missing data on placement tools for predicting success in college algebra at the University of Alaska

      Crawford, Alyssa (2014-05)
      This project investigated the statistical significance of baccalaureate student placement tools, such as test scores and completion of a developmental course, for predicting success in a college-level algebra course at the University of Alaska (UA). Students included in the study had attempted Math 107 at UA for the first time between fiscal years 2007 and 2012. The student placement information had a high percentage of missing data. A simulation study was conducted to choose the better missing data method, between complete case deletion and multiple imputation, for the student data. After the missing data methods were applied, a logistic regression was fitted with explanatory variables consisting of test scores, developmental course grade, age (category) of scores and grade, and interactions. The relevant tests were SAT math, ACT math, and AccuPlacer college-level math, and the relevant developmental course was Devm/Math 105. The response variable was success in passing Math 107 with a grade of C or above on the first attempt. The simulation study showed that under a high percentage of missing data and high correlation, multiple imputation implemented by the R package Multivariate Imputation by Chained Equations (MICE) produced the least biased estimators and better confidence interval coverage compared to complete case deletion when data are missing at random (MAR) or missing not at random (MNAR). Results from the multiple imputation method on the student data showed that Devm/Math 105 grade was a significant predictor of passing Math 107. The age of Devm/Math 105, age of tests, and test scores were not significant predictors of student success in Math 107. Future studies may consider modeling with ALEKS scores and high school math course information.
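      A sketch of the comparison using scikit-learn's chained-equations imputer (an analogue of the MICE R package; a full analysis would create M completed data sets and pool by Rubin's rules) against complete-case deletion, on invented placement-style data with MAR missingness:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)

# toy stand-in for the placement data: two correlated test scores and a
# developmental-course grade predicting pass/fail in Math 107
n = 1000
scores = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=n)
grade = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 1.2 * grade + 0.3 * scores[:, 0]))))
df = pd.DataFrame({"sat": scores[:, 0], "accuplacer": scores[:, 1],
                   "devm": grade, "passed": y})

# MAR missingness: low devm grades are more likely to be missing a SAT score
p_miss = np.where(df["devm"] < 0, 0.5, 0.15)
df.loc[rng.uniform(size=n) < p_miss, "sat"] = np.nan

X_cols = ["sat", "accuplacer", "devm"]
imputed = df.copy()
imputed[X_cols] = IterativeImputer(random_state=0).fit_transform(df[X_cols])

fit = sm.Logit(imputed["passed"], sm.add_constant(imputed[X_cols])).fit(disp=0)
cc = df.dropna()
fit_cc = sm.Logit(cc["passed"], sm.add_constant(cc[X_cols])).fit(disp=0)
print("imputed devm coef:", round(fit.params["devm"], 3),
      " complete-case:", round(fit_cc.params["devm"], 3))
```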
    • Vertex arboricity of triangle-free graphs

      Warren, Samantha; Gimbel, John; Faudree, Jill; Allman, Elizabeth (2016-05)
      The vertex arboricity of a graph is the minimum number of colors needed to color the vertices so that the subgraph induced by each color class is a forest. In other words, the vertex arboricity of a graph is the fewest colors required to color the graph so that every cycle receives at least two colors. Although not standard, we will refer to vertex arboricity simply as arboricity. In this paper, we discuss properties of the chromatic number and the k-defective chromatic number and how those properties relate to the arboricity of triangle-free graphs. In particular, we find bounds on the minimum order of a graph having arboricity three. Equivalently, we consider the largest possible vertex arboricity of triangle-free graphs of fixed order.
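      Both equivalent definitions translate directly into a brute-force check for small graphs (using networkx; the search is exponential in the number of vertices, so this is for illustration only):

```python
from itertools import product
import networkx as nx

def vertex_arboricity(G):
    """Smallest k such that V(G) splits into k classes, each inducing a
    forest (equivalently: every cycle sees at least two colors)."""
    nodes = list(G.nodes)
    for k in range(1, len(nodes) + 1):
        for coloring in product(range(k), repeat=len(nodes)):
            classes = [[v for v, c in zip(nodes, coloring) if c == i]
                       for i in range(k)]
            if all(nx.is_forest(G.subgraph(cls)) for cls in classes if cls):
                return k
    return None

# K5 needs ceil(5/2) = 3 classes (any class of 3 vertices induces a triangle);
# the triangle-free cycle C5 needs 2 (one class alone would contain the cycle)
print(vertex_arboricity(nx.complete_graph(5)))   # 3
print(vertex_arboricity(nx.cycle_graph(5)))      # 2
```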