Module 6 Lab 3.1
Scale and Spatial Data Aggregation
For this last module, we were tasked with performing multiple forms of data aggregation and analysis.
The first part of the lab had us focus on vector data in the form of polyline feature classes and polygon feature classes. We were instructed to determine the differences in line length, perimeter, and area of the data at different scales.
At a large scale (zoomed in closely), you are able to see most if not all of the vertices of a shape's perimeter or a line, which by nature makes the measured perimeter or length much more complete in terms of accuracy.
At a small scale (zoomed out far away), the total number of vertices a map can display drops sharply, because multiple vertices often fall within a single pixel; sometimes an entire body of water can fall within a single pixel. Once you zoom out far enough, you won't see nearly as much as you would if you were zoomed in closely on a study area.
The data sets were recorded at different scales: 1:1,200 (large scale), 1:20,000 (medium/large scale), and 1:100,000 (medium scale).
As the logic follows, you would expect the total length of all of the lines in a data set to be much longer when recorded at a large scale vs. a small scale.
The table we created bore this out: at larger scales, the overall line lengths and areas were more complete than at small scales.
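The scale effect above can be sketched in plain Python (a hypothetical illustration, not the lab data): the same curve captured with many vertices measures longer than a generalized version with only a few vertices, just as large-scale data preserves more of a line's true length than small-scale data.

```python
import math

def polyline_length(points):
    """Sum of straight-line segment lengths between consecutive vertices."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def sampled_semicircle(n_vertices):
    """Approximate a unit semicircle (true length = pi) with n_vertices points."""
    return [(math.cos(math.pi * i / (n_vertices - 1)),
             math.sin(math.pi * i / (n_vertices - 1)))
            for i in range(n_vertices)]

# Dense capture stands in for large-scale data; coarse capture for small-scale data
dense = polyline_length(sampled_semicircle(200))
coarse = polyline_length(sampled_semicircle(5))
# dense is very close to pi, while coarse undershoots it noticeably
```

The denser polyline always measures at least as long, which is why the large-scale feature classes in the lab reported greater total line lengths.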
The next part of the lab had us take lidar data in the form of a single DEM and create multiple slope rasters with different cell sizes. I chose to go with cell sizes 1, 2, 5, 10, 30, and 90.
From this point we created a small table in Word showing the average slope for each of these rasters. In Word we then created a scatter plot based on the slope / cell size data.
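The slope-versus-cell-size trend can be sketched with NumPy (a minimal sketch with a synthetic DEM standing in for the lab's lidar data; the function names are my own). Coarser cells average away local relief, so the mean slope tends to fall as cell size grows, which is the pattern the lab's scatter plot illustrates.

```python
import numpy as np

def average_slope_deg(dem, cell_size):
    """Mean slope in degrees from a DEM grid via finite differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return slope.mean()

def block_resample(dem, factor):
    """Coarsen a DEM by averaging square blocks of factor x factor cells."""
    rows = (dem.shape[0] // factor) * factor
    cols = (dem.shape[1] // factor) * factor
    trimmed = dem[:rows, :cols]
    return trimmed.reshape(rows // factor, factor,
                           cols // factor, factor).mean(axis=(1, 3))

# Synthetic rough terrain (random-walk surface) in place of the real DEM
rng = np.random.default_rng(42)
dem = np.cumsum(rng.normal(0.0, 0.5, (180, 180)), axis=0)

avg_slopes = {}
for factor in (1, 2, 5, 10, 30, 90):
    coarse = block_resample(dem, factor)
    avg_slopes[factor] = average_slope_deg(coarse, cell_size=float(factor))
```

In ArcGIS Pro the equivalent workflow would be the Slope tool on resampled rasters; this sketch only shows why the averages drift downward as cells get larger.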
The next part of the lab gave me the most trouble, as it was very data heavy and some of the required tools did not have parameters that matched the lab instructions, so I had to go somewhat off script. I won't spend too much time on this part, but essentially we took provided data that outlined many population attributes in Florida, such as age, ethnicity, relation to the poverty line, and crime rate. We then added x,y coordinates to an exported feature class to create further scatter plots.
The last step was working from a single vector feature class representing the US congressional districts. The focus was to determine how some congressional districts have been affected by gerrymandering. I'll jump to the last step of this part of the lab, as it was asking for specific information about Polsby-Popper ratings.
For the final step, determining the compactness of the congressional districts, I performed the following steps. Starting with my "Congressional_Districts_Valid" feature class (excluding the previously mentioned states such as Alaska and Hawaii), I used the Project tool to reproject the feature class to the USA Contiguous Albers Equal Area Conic coordinate system. Since I was working with such a large area, I wanted to minimize distortion.
Next, I created three new fields in the projected feature class:

AreaKM2 = for polygon area in square kilometers
PerimKM = for the total perimeter in kilometers
Polsby_Score = for the final score of each district in the data set
I needed to create these fields before performing the calculation to ensure the data was not in conflict. I then used the Calculate Geometry tool to get the area and perimeter of each polygon in square kilometers and kilometers.
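With area and perimeter in hand, the Polsby-Popper score is 4πA / P², which equals 1 for a perfect circle and approaches 0 for long, contorted (potentially gerrymandered) shapes. A quick sketch of the formula, with made-up example shapes rather than the lab's district values:

```python
import math

def polsby_popper(area_km2, perim_km):
    """Polsby-Popper compactness: 4*pi*A / P^2 (1 = circle, near 0 = contorted)."""
    return 4.0 * math.pi * area_km2 / perim_km ** 2

# A circle of radius 10 km scores exactly 1.0
circle = polsby_popper(math.pi * 10 ** 2, 2 * math.pi * 10)

# A thin 100 km x 1 km strip (a crude stand-in for a stretched district) scores far lower
strip = polsby_popper(100 * 1, 2 * (100 + 1))
```

In ArcGIS Pro this would typically be applied through Calculate Field on the Polsby_Score field, using the AreaKM2 and PerimKM fields populated above; the formula itself is the same either way.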