For this week's lab, we focused on crime hotspots and how they can be created and used practically by police forces to predict crime and plan patrol routes. To compare the three different methods of mapping and data visualization, I started by creating a grid-based thematic map and performing a spatial join between the grid cells and the 2017 homicides. For the spatial join, the target feature was the grid and the join feature was the 2017 homicides layer. I then used an SQL query to select the grid cells containing homicides, sorted those entries by homicide count, selected the top quintile to make a separate layer, and exported it.

The next method, kernel density, started with the Kernel Density tool. I input the 2017 homicides layer with a cell size of 1000 and a search radius of 2,630, with area units in square miles. The output was then classified into two breaks, above and below three times the mean, which I viewed in the Statistics tab of the Symbology window. I used the Reclassify tool to apply these break values and then created a polygon from the raster output.

For the Local Moran's I method, I performed a spatial join between the census tracts and the 2017 homicide data, the same way I did for the grid overlay method. I then ran the Local Moran's I tool, analyzed the output, and merged the significant areas into a single polygon with the Dissolve tool.
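The top-quintile step above can be sketched in plain Python. This is only an illustration of the ranking logic, not the actual SQL query or attribute table from the lab; the cell IDs and homicide counts below are made-up toy data.

```python
def top_quintile(cells):
    """Keep only grid cells with at least one homicide, then return the
    top 20% of those cells ranked by homicide count (highest first)."""
    with_crimes = [c for c in cells if c[1] > 0]
    with_crimes.sort(key=lambda c: c[1], reverse=True)
    cutoff = max(1, round(len(with_crimes) * 0.20))
    return with_crimes[:cutoff]

# Hypothetical (cell_id, homicide_count) pairs from the spatial join.
cells = [("A", 0), ("B", 3), ("C", 1), ("D", 7), ("E", 2),
         ("F", 0), ("G", 5), ("H", 1), ("I", 4), ("J", 2)]
print(top_quintile(cells))  # → [('D', 7), ('G', 5)]
```

Eight of the ten cells contain homicides, so the top quintile is the two highest-count cells, mirroring the "select, sort, take the top 20%, export" workflow.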
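The kernel density and reclassification steps can also be sketched from first principles. The snippet below uses a quartic kernel, which I believe is what the ArcGIS Kernel Density tool uses, and flags cells above three times the mean density, as in the two-break classification described above. The point coordinates, grid spacing, and search radius are illustrative toy values, not the lab's real parameters.

```python
import math

def quartic_density(points, cell, radius):
    """Quartic-kernel density at one cell center: points inside the
    search radius contribute, weighted down with distance."""
    total = 0.0
    for px, py in points:
        d = math.hypot(cell[0] - px, cell[1] - py)
        if d < radius:
            total += (3 / (math.pi * radius ** 2)) * (1 - (d / radius) ** 2) ** 2
    return total

# Toy incident points clustered near the origin, evaluated on a 5x5 grid.
points = [(0, 0), (1, 0), (0, 1), (1, 1)]
cells = [(x, y) for x in range(0, 10, 2) for y in range(0, 10, 2)]
dens = [quartic_density(points, c, radius=3.0) for c in cells]

# Two-break classification: hotspot if density > 3x the mean, else not,
# mirroring the Reclassify step in the lab.
mean = sum(dens) / len(dens)
hotspots = [d > 3 * mean for d in dens]
```

Only the few cells near the point cluster exceed three times the mean, which is exactly the behavior that makes this threshold useful for isolating hotspots before converting the raster to a polygon.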
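The statistic behind the Local Moran's I tool can be sketched directly. For each unit i, Local Moran's I is (z_i / m2) times the spatially lagged deviation of its neighbors; positive values indicate clustering (high-high or low-low). This from-scratch version with row-standardized contiguity weights is a simplification of what the ArcGIS tool computes (it omits the permutation-based significance test), and the four tracts and their homicide counts are invented for illustration.

```python
def local_morans_i(values, neighbors):
    """Anselin Local Moran's I for each unit, using row-standardized
    binary weights given as a neighbor list."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]          # deviations from the mean
    m2 = sum(d * d for d in dev) / n          # second moment
    result = []
    for i in range(n):
        # Average deviation of i's neighbors (the spatial lag).
        lag = sum(dev[j] for j in neighbors[i]) / len(neighbors[i])
        result.append((dev[i] / m2) * lag)
    return result

# Four hypothetical tracts in a row: two high-homicide, two low-homicide.
counts = [10, 8, 1, 0]
neighbors = [[1], [0, 2], [1, 3], [2]]
I = local_morans_i(counts, neighbors)
```

All four values come out positive here because high counts sit next to high counts and low next to low — the high-high clusters are the ones that would be dissolved into the final hotspot polygon.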