next up previous
Next: Mutual Information and Redundancy Up: Low Dimensional Behavior in Previous: Introduction

Analysis of a Run

In this article, we investigate the state of the system by applying various nonlinear time series analysis tools to one or more of the quantities stored from a simulation of the system. The tools include spectral estimates, time delay embeddings, singular-value embeddings, embeddings using different variables, phase portraits, false nearest neighbor analysis, global and local singular-value decompositions, and various dimension estimates.
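One of the listed tools, false nearest neighbor analysis, can be sketched in a few lines. The following is a simplified version of the standard Kennel et al. criterion, applied to a synthetic sine series; the series, delay, and threshold here are illustrative choices, not those of the model run:

```python
import numpy as np

def fnn_fraction(x, dim, tau, rtol=15.0):
    """Fraction of false nearest neighbors at embedding dimension `dim`.

    A neighbor found in dimension `dim` is counted 'false' if adding the
    (dim+1)-th delay coordinate stretches the pair's distance by more
    than `rtol` times its distance in dimension `dim` (a simplified
    form of the Kennel et al. test)."""
    n = len(x) - dim * tau                  # rows usable in dim and dim+1
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
    extra = x[dim * tau: dim * tau + n]     # the (dim+1)-th coordinate
    false = 0
    for i in range(n):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                       # exclude the point itself
        j = np.argmin(d)                    # nearest neighbor in dim
        if abs(extra[i] - extra[j]) > rtol * d[j]:
            false += 1
    return false / n

# A sine curve embeds cleanly in two dimensions: the false-neighbor
# fraction should drop sharply going from dim = 1 to dim = 2.
x = np.sin(0.1 * np.arange(500))
print(fnn_fraction(x, dim=1, tau=16), fnn_fraction(x, dim=2, tau=16))
```

When the fraction of false neighbors falls to near zero as `dim` is raised, that dimension is a candidate for the minimum embedding dimension discussed below.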

Figures 1a and 1b show the behavior of the domain-integrated potential energy p(t) over the first and last 100 years of a 1010 year run. The long Rossby waves have a period of about 90 days, while the slower oscillation has an average period of about 2.5 years. The data is sampled every 5 days. Thus, if we assume that the long Rossby waves are significant in the overall dynamical behavior, the data is oversampled by a factor of about 3. Further, the system spends more time near the high energy state than near the low energy state, and the long Rossby waves are clearest near the high energy state.
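The kind of spectral estimate used to identify such periods can be sketched with a plain periodogram. The series below is synthetic, built from the two periods and the 5-day sampling interval quoted above, purely to illustrate the procedure:

```python
import numpy as np

# Synthetic stand-in for p(t): two oscillations with the periods seen in
# the run (about 90 days and about 2.5 years = 912.5 days), sampled
# every 5 days.
dt = 5.0                                   # sampling interval [days]
n = 8192
t = np.arange(n) * dt
x = np.sin(2 * np.pi * t / 90.0) + 2.0 * np.sin(2 * np.pi * t / 912.5)

# Periodogram: power at each resolvable frequency.
x = x - x.mean()
freqs = np.fft.rfftfreq(n, d=dt)           # cycles per day
power = np.abs(np.fft.rfft(x)) ** 2

# Dominant period (skip the zero-frequency bin).
k = 1 + np.argmax(power[1:])
print(1.0 / freqs[k])                      # near 2.5 years, to within
                                           # the frequency resolution
```

The secondary spectral peak near 90 days would correspond to the long Rossby waves.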

We consider the time delay reconstruction [Packard et al., Takens] of the phase space of the double-gyre model, presently using the domain-integrated potential energy time series p(t), to unfold the attractor of the model. Such a reconstruction requires a choice of a time delay τ, so that each delay vector yields as much new information as possible, and a choice of an embedding dimension d_e, to qualitatively preserve desired aspects of the attractor. For finite precision data, too small a value of τ implies strongly correlated delay vectors, while too large a value makes the delay vectors almost uncorrelated; in both cases it is difficult to estimate the properties of the attractor. While most aspects of the attractor can be preserved by choosing the embedding dimension greater than twice the dimension of the attractor [Takens], an embedding dimension just greater than the dimension of the attractor has been proven sufficient to preserve certain measures of the attractor (Hausdorff dimension [Hunt], correlation dimension [Ding]). While an embedding dimension larger than that minimum might reveal other aspects of the attractor, it is computationally more expensive and adds parameters to a simple model of the low dimensional behavior. We therefore need a good estimate of the minimum embedding dimension necessary for the time series to preserve desired aspects of the attractor.
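The delay reconstruction itself is a minimal operation. The sketch below is generic NumPy code, not the analysis tools used for the model run, and the delay and dimension in the example are illustrative choices:

```python
import numpy as np

def delay_embed(x, d_e, tau):
    """Map a scalar series x(t) to delay vectors
    (x(t), x(t + tau), ..., x(t + (d_e - 1) * tau))."""
    n = len(x) - (d_e - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (d_e, tau)")
    return np.column_stack([x[i * tau: i * tau + n] for i in range(d_e)])

# Illustration: a pure sine embedded with tau near a quarter period
# unfolds into a circle (a one-dimensional attractor) in d_e = 2.
x = np.sin(0.1 * np.arange(1000))
vecs = delay_embed(x, d_e=2, tau=16)       # quarter period ~ 16 samples
print(vecs.shape)                          # (984, 2)
```

With τ near a quarter period the two delay coordinates are nearly in quadrature, so the delay vectors trace out a circle of nearly constant radius; a poor τ would instead collapse them toward the diagonal.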


  
Figure 1: First and last 100 years
Balasubramany (Balu) Nadiga
1/8/1998