Animating Movement with ArcGIS Pro
This blog describes the nuts and bolts of how this bird movement data was animated using ArcGIS Pro. It won't necessarily be a perfect step-by-step guide, but it will provide some background on this specific project that you may find useful when animating your own movement data.
Let's get started with some context. I think that birds are such an interesting subject to map. Specifically, their migratory movements are such a fascinating spatio-temporal phenomenon. Where they travel, when they travel, and where they stop along the way can become a richly detailed narrative when laid out upon a map. I love these types of maps for the depth of the story that can be woven from a single species or even just a single bird. In this case, I wanted to create a unique and eye-catching animation to serve as the header for my story.
This ArcGIS Insights migration story was the perfect excuse to try animating data in ArcGIS Pro and attempt to illustrate migratory movements. To make this animation visually interesting, I wanted each GPS track within the data to become a comet-like datapoint flying across the map. Assembling the components of this comet (the faint trail, the comet tail, and the comet head itself) would require both some data processing and some symbology wizardry.
With just a rough idea and enthusiasm at my disposal, I dove right in, and now I'm taking you along as I stumble back through my methodology. Buckle up, this is where things get weird.
The workflow could be used to animate any type of movement data, but I was inspired to turn to Movebank as my data source.
I searched through studies where data was available. The map view shows tracks, and by inspecting them you can gather some helpful attributes and get a sense of the number of birds, the length of the study, the location of the tracks, and the species involved.
I knew I wanted to craft my story around a handful of species, preferably ones that occupied different parts of the continent, to showcase the variety of migratory routes. I explored Movebank, noting several datasets, and ultimately selected a handful of studies that seemed well suited to this experiment and downloaded the data as CSV. The data in this format contained time-stamped coordinates for each GPS point recorded by the sensors, along with a variety of other attributes identifying the individual bird.
With the data downloaded, I began the preparation and processing. I needed to assemble those points into a series of lines. I used FME for this task purely out of personal familiarity, but I bet there are dozens of other ways you might accomplish the same.
I started by reading the dataset and performing some very basic schema preparation. I dropped fields that I wouldn’t be using, renamed fields for clarity and extracted a few date components from the date/time fields. This step helped with clarity and eased processing moving forward.
Next up, I organized the data into unique combinations of bird and GPS tag IDs. Initially, I had more than a few misadventures and iterations until I realized that GPS units were often recovered in the field over the course of a study. Recovered sensors were subsequently redeployed to other birds. The unique ID I created here would be the attribute I used within any Group By operation and would keep the individual bird tracks separate.
Sorting and grouping the data by this new ID and timestamp meant I could order the points and create logical line features from them. I specifically wanted line segments for the purposes of animation so that I could show small portions of the tracks one frame at a time (think stop-frame animation).
To create these line segments, I assigned each GPS point its own coordinates and those of its sequential neighbour. In FME, this meant using Adjacent Feature Attributes within an AttributeManager to append to each point the coordinates of its next adjacent point. I then used these coordinate pairs as vertices and constructed a line. Effectively, this provided me with a chain of two-point line segments spanning the GPS points along each bird's track.
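The post uses FME's Adjacent Feature Attributes for this step; as a rough standard-library Python sketch of the same pairing logic (the field names bird_id, timestamp, x, and y are illustrative, not from the original data):

```python
from itertools import groupby
from operator import itemgetter

def build_segments(points):
    """Pair each GPS point with its next neighbour to form two-point
    line segments, grouped per bird track. Each point is a dict with
    illustrative keys: bird_id, timestamp, x, y."""
    points = sorted(points, key=itemgetter("bird_id", "timestamp"))
    segments = []
    for bird_id, track in groupby(points, key=itemgetter("bird_id")):
        track = list(track)
        # zip the track against itself offset by one point, so each
        # point is matched with its sequential neighbour
        for start, end in zip(track, track[1:]):
            segments.append({
                "bird_id": bird_id,
                "timestamp": start["timestamp"],
                "coords": [(start["x"], start["y"]), (end["x"], end["y"])],
            })
    return segments
```

A track with n points yields n-1 segments; a bird with a single fix produces no segment at all, which matches the chain-of-segments behaviour described above.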
Still with me? Well, that was the straightforward bit, and it yielded some simple time-stamped line segments. At this point, I gave everything a trial run with a test animation.
Not bad, right? As a proof of concept it was fine, but it wasn't quite flashy enough for me. It needed more pizzazz! Fuelled by this initial success and unchecked ambitions, I forged ahead and introduced a few more requirements:
The temporal density of data points varied between studies. In some studies the devices produced many points per day; in others the data was very sparse. To deal with this, I'd need to aggregate the data into common temporal 'bins'.
I also wanted to show several species at once. However, all the date ranges on the studies for my species of interest didn’t overlap. In order for the animation to make any sense, I’d need to synchronize them like watches in a bank heist movie.
Finally, I really wanted to have a ‘comet effect’ for each bird. This subtle comet and breadcrumb trail would serve to communicate movement and indicate where the bird was and where it had already travelled. I hoped I might even be able to illustrate common migratory routes by accumulating these lines (more on that later). Cartographic effects, symbology, and layering would help me achieve this effect, but it would require some additional data manipulation as a foundation.
First setting my sights on smoothing the temporal aspects of the data, I decided that a single day would be the contents of each frame in my animation. Therefore, I would need to aggregate the data into frames of one day each.
This task was fairly straightforward. I sorted the track segments by their timestamp and aggregated them together; lines recorded on the same day would be appended to each other. In FME, this used the LineCombiner. The unique bird tag ID and the date elements extracted earlier helped delineate where lines would be split. The output was one continuous line, segmented by day, spanning all of the GPS tracks recorded for each bird (a distinct line for each combination of bird and day).
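For readers without FME, a minimal sketch of what the LineCombiner step does here, grouping segment coordinates into one line per bird per day (keys are illustrative):

```python
from collections import defaultdict

def combine_by_day(segments):
    """Append segments recorded on the same day (per bird) into one
    continuous line: a rough equivalent of FME's LineCombiner grouped
    by bird ID and date. Segment dicts use illustrative keys."""
    daily = defaultdict(list)
    ordered = sorted(segments,
                     key=lambda s: (s["bird_id"], s["date"], s["timestamp"]))
    for seg in ordered:
        key = (seg["bird_id"], seg["date"])
        for coord in seg["coords"]:
            # skip the duplicated join vertex shared by adjacent segments
            if not daily[key] or daily[key][-1] != coord:
                daily[key].append(coord)
    return [{"bird_id": b, "date": d, "coords": coords}
            for (b, d), coords in daily.items()]
```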
Synchronizing the data
As I added datasets to my animation, another tricky temporal problem arose. The studies producing the data occurred over many different dates and often didn’t overlap. I needed to synchronize the data so that tracks not only animated together but were also synchronized seasonally so migratory movements were in context for the viewer.
I solved this issue with a quick data hack. I calculated the relative year for every data point belonging to each sensor, based on when it was activated. I could then add this relative year value to the minimum year recorded across all sensors. This zeroed year attribute would allow me to synchronize all the sensor data.
Now the data all effectively started within the same year but retained the month and seasonality. When played back, it would appear as though each bird was tagged and released at the same time (at least in the same year), even though their actual study and tag dates varied greatly.
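The rebasing described above can be sketched in a few lines of Python; the sensor_id, year, and synced_year keys are illustrative stand-ins for the real attributes:

```python
def synchronize_years(records):
    """Shift every record onto a common timeline: keep month/day
    (seasonality) but rebase each sensor's years onto the minimum
    year seen across the whole dataset."""
    # earliest year seen per sensor, used as that sensor's year zero
    first_year = {}
    for r in records:
        sid = r["sensor_id"]
        first_year[sid] = min(first_year.get(sid, r["year"]), r["year"])
    global_min = min(first_year.values())
    for r in records:
        # relative year within this sensor's deployment, added to the
        # dataset-wide minimum year
        relative_year = r["year"] - first_year[r["sensor_id"]]
        r["synced_year"] = global_min + relative_year
    return records
```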
Animating the synchronized line segment tracks showed great promise and already did a good job of illustrating the movement of the tracked birds. However, due to the style of animation, a track segment would only appear in a frame if the bird had actually moved that day. Alas, yet another hurdle!
Fantastic non-stop migratory flights are exceptional but are far from the norm. Most birds aren’t always flying and often stop to rest or feed over the course of their migrations. This was reflected in the data. In initial tests, I noticed gaps in the animations where tracks were often absent from several frames before reappearing. This made it very difficult to keep track of bird locations and was an ineffective visual to watch. I wanted a better experience for viewers.
I decided that a ‘comet head’ could be used to always denote the location of the bird, even if it hadn’t moved that day. To create a point when movement wasn’t recorded meant I’d have to identify temporal gaps between movement recordings in the data and fill them.
The datetime attributes I created earlier became useful again! By comparing each point to its neighbour, again using Adjacent Feature Attributes, I was able to identify where there was more than a day between recorded GPS points. I then created a clone of the latest GPS location (prior to the temporal gap) for each day within the void of missing data. In FME, I used the number of blank days as the Number of Copies attribute of the Cloner transformer and incremented a new date value for each clone.
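The Cloner step can be mimicked in plain Python along these lines; the date, x, and y keys are illustrative, and the input is assumed to be one bird's track sorted by date:

```python
from datetime import date, timedelta

def fill_gaps(track):
    """Insert a stationary clone of the last known position for each
    missing day, mirroring the FME Cloner approach described above.
    `track` is a date-sorted list of dicts with illustrative keys."""
    filled = []
    for prev, nxt in zip(track, track[1:]):
        filled.append(prev)
        gap_days = (nxt["date"] - prev["date"]).days - 1
        for i in range(gap_days):
            clone = dict(prev)  # copy the attributes of the last fix
            clone["date"] = prev["date"] + timedelta(days=i + 1)
            clone["cloned"] = True  # flag the synthetic stationary days
            filled.append(clone)
    filled.append(track[-1])
    return filled
```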
This filled my data gaps, made the animation visually much smoother to watch, and significantly improved the viewer's ability to understand the movements and progression of the birds' migrations.
Now most of the technical data wrangling was complete. Hang in there, I promise this will be worth it. To recap, I now had:
A line dataset parsed into segments aggregating movements for each day
A point dataset to act as the head of the 'comet'.
I had one final dataset to create. This additional line dataset would tie everything together. As mentioned before, I wanted to visualize the cumulative movement as a lengthening ‘comet trail’ behind the animated movements.
This comet trail required yet more data manipulation. This layer would need to contain a feature for each frame of the animation, and each feature would have to contain an aggregation of all the movements animated so far. In data terms, every feature would need to be an accumulation of all the feature geometry from previous frames (days). Think of this dataset as a 'snake' that continuously consumes the geometry from the preceding frame and grows ever longer.
Admittedly, this one had me stumped. I had worked out the logic in my head, but when it came down to implementation I was at a loss. I reached out to the twitterverse, and some FME pros came to the rescue. Their suggestions and code examples opened my eyes to using a PythonCaller within a workbench and ultimately got me on the right track.
import fme
import fmeobjects

class LineMotionBuilder(object):
    def __init__(self):
        # Initialize with Null Line
        self.line = fmeobjects.FMELine()
        self.feature_list = []

    def input(self, feature):
        # append feature to list
        self.feature_list.append(feature)

    def close(self):
        pass

    def process_group(self):
        for feature in self.feature_list:
            # append current point to existing line
            self.line.appendLine(feature.getGeometry())
            # set feature's geometry to current line
            feature.setGeometry(self.line)
        # output the last feature for the unique playback day
        self.pyoutput(self.feature_list[-1])
        self.feature_list = []
        self.line = fmeobjects.FMELine()
The final step was to convert all of the date/time information I'd been using into a simple 'playback_key'. I sorted the data by day and by the synchronized relative year from earlier, and assigned each animation frame a sequential numeric attribute. This value would determine when features appeared as the data was animated.
At this point, I also created a series of additional playback attributes (playback_1, playback_2, etc.), each containing the playback value of a following sequential frame (n+1, n+2, and so on). These values would be used later when stacking symbology and allowed me to persist a feature across multiple frames.
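A minimal sketch of this keying step, assuming the synchronized year and a day-of-year field from earlier (the field names and the number of extra playback fields are illustrative):

```python
def assign_playback_keys(features, n_extra=3):
    """Replace date/time information with a sequential frame number,
    then add playback_1..playback_n fields holding the following
    frames so a feature can persist across several animation frames."""
    ordered = sorted(features,
                     key=lambda f: (f["synced_year"], f["day_of_year"]))
    # one frame number per unique (year, day) combination
    frames = {}
    for f in ordered:
        key = (f["synced_year"], f["day_of_year"])
        frames.setdefault(key, len(frames) + 1)
        f["playback_key"] = frames[key]
        # extra fields referencing the next sequential frames
        for i in range(1, n_extra + 1):
            f[f"playback_{i}"] = frames[key] + i
    return ordered
```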
Now the fun part! I'd take all the hours of data work that happened behind the scenes and assemble it into a magnificent ten-second GIF.
Using the various layers and attributes I was able to easily configure everything I needed in ArcGIS Pro. Here’s a quick rundown on how everything was stacked.
The comet head created from the point dataset was the straightforward starting point. This layer was configured with a range using the first playback field in the dataset, ensuring that it would always appear at the head of the line and indicate the current position of the bird. The points were styled with a small solid dot and colour-coded by species.
Next, I loaded in the daily line segments dataset. This layer was configured with the next sequential playback field after the comet so it would appear as the immediate tail. I then duplicated this layer and incremented the playback field used for the range by one, so that each duplicate represented the segment from the previous animation frame, and so on. I also modified the symbology of each duplicated layer, increasing the transparency. The hope was to create the appearance of a transparency gradient along the entirety of the bird tracks.
I repeated this duplication until I had just one playback field remaining. This final attribute was reserved for the cumulative lines I had created. As with the previous line layers, I applied the playback field as the range attribute and made the symbology very faint. I wanted this layer to subtly hint at the cumulative journey of each bird and provide some context for the ranges of the various species.
At this point, I found that my DIY gradient effect was OK, but it fell apart when birds travelled long distances in one day; those lines would be rendered with one consistent transparency, and I thought this could be improved. I wanted a proper gradient along each line, and it just so happened that my friend Tommy was working on a very similar problem.
He was in the midst of recreating Sarah Bell's amazing bird migration map in ArcGIS Pro (seriously, go check it out!), and he had implemented some Arcade expressions and attribute-driven symbology to help him orient a gradient along a line feature.
var geom =